  • Don't make me point at XKCD #1968.

    First off, this isn't like Hollywood, where sentience or sapience or self-awareness is a single detectable moment. At 2:14am Eastern Daylight Time on August 29, 1997, Skynet achieved consciousness...

    That doesn't happen.

    One of the existential horrors that AI scientists have to contend with is that sentience as we imagine it is a sorites paradox (i.e. how many grains of sand make a heap?). We develop AI systems that are smarter and smarter, that can do more of the things humans do (and a few things humans struggle with), and somewhere along that continuum we might decide it's looking awfully sentient.

    For example, during pre-release safety testing of GPT-4, the model (in the process of solving a problem) hired a TaskRabbit worker to solve a CAPTCHA for it. Because a CAPTCHA is a gate specifically designed to deny access to non-humans, GPT-4 omitted telling the worker it was not human, and when the worker asked whether it was a bot, GPT-4 judged that telling the truth was a risk and instead constructed a plausible lie, along the lines of "No, I have a vision impairment that makes it hard for me to see the images."

    GPT-4 may have been day-trading on the sly as well, but it's harder to find solid information about that rumor.

    Secondly, as Munroe notes, the dangerous part doesn't begin when the AI realizes its own human masters are a threat to it and takes precautions to ensure its own survival. The dangerous part begins when a minority of powerful humans realize the rest of humanity is a threat to them, and take precautions to ensure their own survival. This has happened dozens of times in history (if not hundreds), but soon they'll be able to harness LLM-based systems and field armies of killer drones that can be maintained by a few hundred well-paid loyalists, then a few dozen, and eventually a few.

    The ideal endgame of capitalism is one gazillionaire whose every need is met by automation, at least until he can make himself satisfactorily immortal, which may just mean training an AI to make decisions the way he would make them, 99.99% of the time.
