
Time published an op-ed in which Eliezer Yudkowsky advocates preemptive airstrikes on datacenters, even at the risk of nuclear exchange, to prevent countries from training powerful AI

time.com: The Open Letter on AI Doesn't Go Far Enough

One of the earliest researchers to analyze the prospect of powerful Artificial Intelligence warns of a bleak scenario

Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.

Frame nothing as a conflict between national interests, have it clear that anyone talking of arms races is a fool. That we all live or die as one, in this, is not a policy but a fact of nature. Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.

35 comments
  • I think you forgot to include the part where he thinks this needs to be done so that we can, essentially, kill all of the dumb people who would get tricked by a rising superintelligent AI.

    There are so many cranks in "AI safety" that it is legitimately difficult to talk about what should be done without the conversation being obviously slanted toward some industry's benefit. You've got people like this, and you've also got people like Gladstone, who are LITERALLY EX-PENTAGON PEOPLE SPONSORED BY LOCKHEED MARTIN (who I am sure are very concerned about AI safety -- the only way I could be more convinced is if it were Boeing) and who make suspicious demands that the publication of open-source models be made illegal (probably out of concerns about China, as if half of the papers I read on new developments weren't already from them or from the Noah's Ark Lab in Moscow). There is no well that is unpoisoned here.
