The ideologues of Silicon Valley are in model collapse.
Like an AI trained on its own output, they’re growing increasingly divorced from reality, reinforcing their own worst habits of thought.
To train an AI model, you need to give it a ton of data, and the quality of output from the model depends upon whether that data is any good. A risk AI models face, especially as AI-generated output makes up a larger share of what’s published online, is “model collapse”: the rapid degradation that results from AI models being trained on the output of AI models. Essentially, the AI is primarily talking to, and learning from, itself, and this creates a self-reinforcing cascade of bad thinking.
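The feedback loop described above can be sketched with a deliberately simplified toy model (a Gaussian standing in for an AI system, not an actual neural network; all names and parameters here are illustrative). Each "generation" is trained only on samples drawn from the previous generation's model, never on fresh real-world data, and the fitted spread of the distribution steadily collapses toward zero:

```python
import random
import statistics

def collapse_demo(generations=300, sample_size=50, seed=0):
    """Toy illustration of model collapse: the 'model' is just a Gaussian
    (mean, stddev). Every generation is fit only to samples produced by
    the previous generation's model. Estimation error compounds, and the
    fitted stddev drifts toward zero -- diversity vanishes."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # generation 0: the 'real data' distribution
    history = [sigma]
    for _ in range(generations):
        # train on the prior model's output, never on fresh real data
        samples = [rng.gauss(mu, sigma) for _ in range(sample_size)]
        mu = statistics.fmean(samples)
        sigma = statistics.stdev(samples)
        history.append(sigma)
    return history

history = collapse_demo()
print(f"stddev: generation 0 = {history[0]:.3f}, "
      f"final generation = {history[-1]:.3f}")
```

Run it and the standard deviation shrinks dramatically over the generations: the model ends up certain, narrow, and wrong, which is the dynamic the rest of this piece maps onto people.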
We’ve been watching something similar happen, in real time, with the Elon Musks, Marc Andreessens, Peter Thiels, and other chronically online Silicon Valley representatives of far-right ideology. It’s not just that they have bad values that are leading to bad politics. They also seem to be talking themselves into believing nonsense at an increasing rate. The world they seem to believe exists, and which they’re reacting to and warning against, bears less and less resemblance to the actual world, and instead represents an imagined lore they’ve gotten themselves lost in.
You can give it a trendy new tech label, but it is the age-old yes-manism I have watched consume the tech executives and CEOs I have worked with.
This is worse. They (Musk et al.) are seemingly exposed only to information algorithmically tailored to views they already subscribe to, and thus see a patently false picture of the world. That in turn leads them to adopt another layer of dogma, producing yet more insular views that further feed the cycle.