Hello all. People were very kind when I originally posted the start of this series. I've refrained from spamming you with every part but I thought I'd post to say the very final installment is done.
I got a bit weird with it this time, as I felt like I had an infinite amount to say, all of which only barely got at the underlying point I was trying to make. So much of what I wrote I also cut; it's ridiculous.
Anyway now the series is done I'm going to move on to smaller discrete pieces as I work on my book about Tech Culture's propensity to far-right politics. I'll be dropping interesting stuff I find, examples of Right Libertarians saying ridiculous things, so follow along if that's your jam.
Gonna give particular praise to the opening, because this really caught my eye:
Tech culture often denigrates humans through its assumptions that human skills, knowledge and functions can be improved through their replacement by technological replacements, and through transhumanist narratives that rely on a framing of human consciousness as fundamentally computational.
As for the whole "replacement by technological replacements" part... well, we've all seen the AI art slop-nami; it's crystal fucking clear what you're referring to.
Forgot to say: yes, AI-generated slop is one key example, but I'm often also thinking of other tasks that are presumed to be basic because humans can be trained to perform them with barely any conscious effort. Things like self-driving vehicles, production line work, call center work, etc. Like the fact that "full self-driving" still requires supervision, what often happens with tech automation is that it creates things that de-skill the role, or perhaps speed it up, but still require humans in the middle to do things that are simple for us yet difficult to replicate computationally. Humans become the glue, slotted into all the points of friction and technical inadequacy, to keep the whole process running smoothly.
Unfortunately this usually leads to downward pressure on the humans' wages, and the expectation that they match the theoretical speed of the automation rather than recognition that the human is the actual pace setter, because without them the pace would be zero.
Funnily enough that was the bit I wrote last just before hitting post on Substack. A kind of "what am I actually trying to say here?" moment. Sometimes I have to switch off the academic bit of my brain and just let myself say what I think to get to clarity. Glad it hit home.
Thanks for the link. I'm going to read that piece and have a look through the ensuing discussion.
It seems to me like when you say "human minds are computational things" you can mean this in several ways that can be roughly categorized by what your ideas of "minds" and of "computational things" are.
You can use "computational things" as an extremely expansive category, capable of containing vast complexity but potentially completely impractical to fully recreate on a drawing board. In this use, the speaker would often agree with the statement, but it wouldn't belittle the phenomenon that is the human mind.
Or you can use "human minds" in a way that sees them as something relatively simple: kinda like a souped-up 80486 computer, maybe. Nothing all too irreplaceable or special, in any case. Maybe an Athlon can be sentient and sapient! Most people hearing the statement in this sense would probably disagree with it, because it small-mindedly minimizes people.
Then there's the tech take version, which somehow does both: "Computation is everything and everything is computation, but also I have no appreciation for complexity nor a conceptualization of all I don't see about the human mind." Within the huge canvas of what can be conceived if you think in computational terms, they opt for tiny crayon scribbles.
Shorter: "Minds are computers" can imply views of (1) minds as simpler than they are, (2) computers as potentially very complex and general, or (3) both.