
AI Doomers are worse than wrong – they're incompetent

www.infinitescroll.us

Even judged on their own terms, AI Doomers are terrible and ineffective


There is a discussion on Hacker News, but feel free to comment here as well.

2 comments
  • The people who think they will be empowered by AI are seeing only the starting phase, when the market is fresh and corporate and consumer interests roughly align.

    When a market matures (read: gets cornered), those interests diverge sharply: innovation is no longer required, and the initial VC backers look to recoup their investment. Then the gouging begins (see: late-stage capitalism), and any goodwill is burned up in the pursuit of a buck at any cost.

    We are going to get effed hard by AI unless the public has accessible AI of its own to counter with. Unless new markets emerge (unlikely, given how our economy siphons wealth upward), the need for people to work middle-man jobs will soon be scarce.

    TLDR: in the human pyramids of our world economy, a large section of the middle is suddenly going to be unnecessary, leaving few places at the top and a very wide bottom of minimum subsistence. There aren't many pyramids to choose from either.

  • For what it’s worth, the author (eventually) explains that by “AI doomer” they’re not just talking about people who are skeptical or pessimistic about AI, but specifically people who believe that AI will literally kill all humans:

    “[T]here are plenty of people who do believe that AI either will or might kill all of humanity, and they take this idea very seriously. They don’t just think “AI could take our jobs” or “AI could accidentally cause a big disaster” or “AI will be bad for the environment/capitalism/copyright/etc”. They think that AI is advancing so fast that pretty soon we’re going to create a godlike artificial intelligence which will really, truly kill every single human on the planet in service of some inscrutable AI goal. […] They would likely call themselves something like ‘AI Safety Advocates’. A less flattering and more accurate name would be ‘AI Doomers’.”