One of the earliest researchers to analyze the prospect of powerful Artificial Intelligence warns of a bleak scenario:
Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.
Frame nothing as a conflict between national interests, have it clear that anyone talking of arms races is a fool. That we all live or die as one, in this, is not a policy but a fact of nature. Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.
Yudkowsky is a decision theorist from the U.S. and leads research at the Machine Intelligence Research Institute.
He founded that institute; how can someone be considered an expert just because they "lead research" at their own institute? It used to be called the Singularity Institute for Artificial Intelligence, too, which just tells you how unserious they are.
Yudkowsky is hilarious because he has zero formal education or professional experience. Like, he didn’t attend high school or college, has never had anything like a job, and has never attempted to produce anything. He could have been a normal nepo baby and gotten into tech investing or entertainment, or just fucked off and lived quietly, but for some reason his deepest desire was to dedicate his life to fan-fiction blogging about AI in the style of academic writing. A truly unique individual, one who could only be produced by the stagnant cesspit that is the Silicon Valley ecosystem.
I remember seeing a remarkably stupid quote by him and looking into who the hell he was; the fact that anyone listens to him is baffling. He just yammers on about stuff he knows nothing about, and apparently there are enough oblivious people who believe him to keep the whole thing going in some perpetual shit-eating-machine type situation.
Baffling is the right word. I don’t really know where he fits in the ecosystem. If he’s a grifter, he could be leveraging his position much better. If he’s a true believer, I don’t get what purpose he serves for his funders. Shit-eating machine, indeed.