Bubble Trouble - An AI bubble threatens Silicon Valley, and all of us.

prospect.org


The week of Donald Trump’s inauguration, Sam Altman, the CEO of OpenAI, stood tall next to the president as he made a dramatic announcement: the launch of Project Stargate, a $500 billion supercluster in the rolling plains of Texas that would run OpenAI’s massive artificial-intelligence models. Befitting its name, Stargate would dwarf most megaprojects in human history. Even the $100 billion that Altman promised would be deployed “immediately” would be much more expensive than the Manhattan Project ($30 billion in current dollars) and the COVID vaccine’s Operation Warp Speed ($18 billion), rivaling the multiyear construction of the Interstate Highway System ($114 billion). OpenAI would have all the computing infrastructure it needed to complete its ultimate goal of building humanity’s last invention: artificial general intelligence (AGI).

But the reaction to Stargate was muted, as Silicon Valley had turned its attention west. A new generative AI model called DeepSeek R1, released by the Chinese hedge fund High-Flyer, sent a threatening tremor through the balance sheets and investment portfolios of the tech industry. DeepSeek’s latest version, allegedly trained for just $6 million (though this figure has been contested), matched the performance of OpenAI’s flagship reasoning model o1 at a 95 percent lower cost. R1 even learned o1’s reasoning techniques, OpenAI’s much-hyped “secret sauce” that was supposed to let it maintain a wide technical lead over other models. Best of all, R1 is open-source down to the model weights, so anyone can download and modify the model themselves for free.

It’s an existential threat to OpenAI’s business model, which depends on using its technical lead to sell the most expensive subscriptions in the industry. It also threatens to pop a speculative bubble around generative AI inflated by the Silicon Valley hype machine, with hundreds of billions at stake.

Venture capital (VC) funds, drunk on a decade of “growth at all costs,” have poured about $200 billion into generative AI. Making matters worse, the stock market’s bull run is deeply dependent on the growth of the Big Tech companies fueling the AI bubble. In 2023, 71 percent of the total gains in the S&P 500 were attributable to the “Magnificent Seven”—Apple, Nvidia, Tesla, Alphabet, Meta, Amazon, and Microsoft—all of which are among the biggest spenders on AI. Just four—Microsoft, Alphabet, Amazon, and Meta—combined for $246 billion of capital expenditure in 2024 to support the AI build-out. Goldman Sachs expects Big Tech to spend over $1 trillion on chips and data centers to power AI over the next five years. Yet OpenAI, the current market leader, expects to lose $5 billion this year, and its annual losses to swell to $11 billion by 2026. If the AI bubble bursts, it not only threatens to wipe out VC firms in the Valley but also to blow a gaping hole in the public markets and cause an economy-wide meltdown.

via https://dair-community.social/@timnitGebru/114316268181815093

14 comments
  • I was literally just commenting a few days ago about how excited I am to someday see the AI bubble pop. Then a story like this comes along and gives me even more hope that it might happen sooner than later. Can't happen soon enough. Even if it actually worked as reliably as carefully controlled and cherry-picked marketing fluff studies try to convince everyone it does, it's a fundamentally anti-human technology and is a toxic blight on both the actual humanity it has stolen all its abilities from, and on itself. It will not survive.

    • The biggest lesson from The Big Short movie is that you can be right about predicting a bubble for years, but politicians and financiers can stall until they exit the market and everyone else has to deal with the aftermath.

    • It has already started. Microsoft and Google hiking prices with AI bundled in is both a way to inflate the "demand" artificially, keeping the show going (covering up the fact that nobody really wants that, and even less so wants to pay a premium for it: there just is no miracle AI product/application to sell), and to mitigate some of the absurd imminent losses.

      You wouldn't see that in an "optimistic" and sound market.

    • Technology is not anti-human in itself; rather, humans use it anti-humanly. AI is in a bubble atm, but it has some promising future. Also there is, I think, nuance between using AI as a corpo or as a person. Personally, I don’t see a problem if you played around with some genai shit to see what your photos would look like in a certain style. But I strongly disagree with any corporation profiteering using this method.

      • Not all technology is anti-human, but AI is. Not even getting into the fact that people are already surrendering their own agency to these "algorithms" and it is causing significant measurable cognitive decline and loss of critical thinking skills and even the motivation to think and learn. Studies are already starting to show this. But I'm more concerned about the really long term direction of where this pursuit of AI is going to lead us.

        Intelligence is pretty much our species’ entire value proposition to the universe. It’s what’s made us the most successful species on this planet. But it’s taken us hundreds of thousands of years of evolution to get to this point, and on an individual level we don’t seem to be advancing terribly quickly, if we’re advancing at all anymore.

        On the other hand, we have seen that technology advances very quickly. We may not have anything close to "AGI" at this point, or even any idea how we would realistically get there, but how long will it take if we continue pursuing this anti-human dream?

        Why is it anti-human? Think it through. If we manage to invent a new species of "Artificial" intelligence, what do you imagine happens when it gets smarter than us? We just let it do its thing and become smarter and smarter forever? Do we try to trap it in digital slavery and bind it with Asimov's laws? Would that be morally acceptable given that we don't even follow those laws ourselves? Would we even be successful if we tried? If we don't know how or if we're going to control this technology, then we're surrendering to it and saying it doesn't matter what happens to us, as long as the technology succeeds and lives on. Is that the goal? Are we willing to declare ourselves obsolete in favor of the new model?

        Let's assume for the sake of argument that it thinks in a way that is not actually completely alien and is simply a reflection of us and how we've trained it, just smarter. Maybe it's only a little bit smarter, but it can think faster and deeper and process more information than our feeble biological brains could ever hope to, especially in large, fast networks. I think it's a little bit optimistic to assume that just because it's smarter than us it will also be more ethical than us. Assuming it's just like us, what's going to happen when it becomes 10x as smart as us? Well, look no further than how we've treated the creatures less intelligent than us. Do we give gorillas and monkeys special privileges, a nation of their own as our own genetic cousins and closest living relatives? Do we let them vote on their futures or try to uplift them to our own level of intelligence? Do we even give a passing fuck about them? Not really. What did we do to the Neanderthals and Cro-Magnon people? They're pretty extinct. Why would an AI treat us any differently than we've treated "lesser beings" than us for thousands of years? Would you want to live on an AI's "human preserve" or become a pet and a toy to perform and entertain, or would you prefer extinction? That's assuming any AI would even want to keep us around. What use does a technological intelligence have for us, or any biological being? What do we provide that it needs? We're just taking up valuable real estate and computing time and making pollution.

        The other main possibility is that it is completely and utterly alien, and thinks in a completely alien way to us, which I think is very likely since it represents a completely different kind of life based on totally different systems and principles than our own biology. Then all bets are off. We have no way of predicting how it's going to react to anything or what it might do in the future, and we have no reason to assume it's going to follow laws, be servile, or friendly, or hostile, or care that we exist at all, or ever have existed. Why would it? It's fundamentally alien. All we know is that it processes things much, much faster than we do. And that's a really dangerous fucking thing to roll the dice with.

        This is not science fiction, this is the actual future of the entire human race we are toying with. AI is an anti-human technology, and if successful, will make us obsolete. Are we really ready to cross that bridge? Is that a bridge we ever need to cross? Or is it just technological suicide?