"LLMs, such as they are, will become a commodity; price wars will keep revenue low. Given the cost of chips, profits will be elusive," Marcus predicts. "When everyone realizes this, the financial bubble may burst quickly."
I wish just once we could have some kind of tech innovation without a bunch of douchebag techbros thinking it's going to solve all the world's problems with no side effects while they get super rich off it.
largely based on the notion that LLMs will, with continued scaling, become artificial general intelligence
Who said that LLMs were going to become AGI? LLMs as part of an AGI system makes sense, but not LLMs alone becoming AGI. Only articles and blog posts from people who didn't understand the technology were making those claims, which helped feed the hype.
I 100% agree that we're going to see an AI market correction. It's going to take a lot of hard human work to achieve the real value of LLMs. The hype is distracting from the real valuable and interesting work.
No shit. This was obvious from day one. This was never AGI, and was never going to be AGI.
Institutional investors saw an opportunity to make a shit ton of money and pumped it up as if it was world changing. They'll dump it like they always do, it will crash, and they'll make billions in the process with absolutely no negative repercussions.
Smartphone improvements hit a rubber wall a few years ago (disregarding folding screens, which make up a small market share, the rate of improvement slowed down drastically), and the industry is doing fine. It's not growing like it used to, but that just means people are keeping their smartphones for longer periods of time, not that people stopped using them.
Even if AI were to completely freeze right now, people would continue using it.
Why are people reacting like AI is going to get dropped?
The hype should go the other way. Instead of bigger and bigger models that do more and more - have smaller models that are just as effective. Get them onto personal computers; get them onto phones; get them onto Arduino minis that cost $20 - and then have those models be as good as the big LLMs and Image gen programs.
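For a rough sense of why small models on cheap hardware is plausible, here's a back-of-the-envelope sketch of weight memory under quantization (the parameter counts and bit widths below are illustrative assumptions, not benchmarks of any real model):

```python
def model_size_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate memory footprint of a model's weights alone
    (ignores activations, KV cache, and runtime overhead)."""
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# A hypothetical 70B-parameter model at 16-bit precision vs.
# a hypothetical 3B-parameter model quantized to 4 bits per weight.
print(f"70B @ fp16: {model_size_gb(70, 16):.0f} GB")  # ~140 GB: datacenter territory
print(f"3B @ 4-bit: {model_size_gb(3, 4):.1f} GB")    # ~1.5 GB: fits on a phone
```

Weights are only part of the story (a $20 microcontroller still lacks the RAM and compute for even the smallest LLMs), but the arithmetic shows why quantized small models already run on laptops and phones.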
This is why you're seeing news articles from Sam Altman saying that AGI will blow past us without any societal impact. He's trying to lessen the blow of the bubble bursting for AI/ML.
Good. I look forward to all these idiots finally accepting that they drastically misunderstood what LLMs actually are and are not. I know their idiotic brains are only able to understand simple concepts like "line must go up" and follow them like religious tenets though, so I'm sure they'll waste everyone's time and increase enshittification with some other new bullshit once they quietly remove their broken (and unprofitable) AI from stuff.
"The economics are likely to be grim," Marcus wrote on his Substack. "Sky high valuation of companies like OpenAI and Microsoft are largely based on the notion that LLMs will, with continued scaling, become artificial general intelligence."
"As I have always warned," he added, "that's just a fantasy."
As I use Copilot to write software, I have a hard time seeing how it'll get better than it already is. The fundamental problem of all machine learning is that the training data has to be good enough to solve the problem. So the problems I run into make sense, like:
1. Copilot can't read my mind and figure out what I'm trying to do.
2. I'm working on an uncommon problem where the typical solutions don't work.
3. Copilot is unable to tell when it doesn't "know" the answer, because of course it's just simulating communication and doesn't really know anything.
2 and 3 could be alleviated, though probably not solved completely, with more and better data or engineering changes - but obviously AI developers started by training the models on the most useful data and the strategies they thought would work best. 1 seems fundamentally unsolvable.
I think there could be some more advances in finding more and better use cases, but I'm a pessimist when it comes to any serious advances in the underlying technology.
Marcus is right, incremental improvements in AIs like ChatGPT will not lead to AGI and were never on that course to begin with. What LLMs do is fundamentally not "intelligence", they just imitate human response based on existing human-generated content. This can produce usable results, but not because the LLM has any understanding of the question. Since the current AI surge is based almost entirely on LLMs, the delusion that the industry will soon achieve AGI is doomed to fall apart - but not until a lot of smart speculators have gotten in and out and made a pile of money.
Sigh, I hope LLMs get dropped from the AI bandwagon, because I do think they have some really cool use cases, and I love just running my little local models. Cutting government spending like a madman, writing the next great American novel, or eliminating actual jobs are not those use cases.
Seems to me the rationale is flawed. Even if it isn't strong or general AI, LLM-based AI has found a lot of uses. I also don't recognize, among people actually working with it, the claimed ignorance about the limitations of current AI models.
It'll implode but there are much larger elephants in the room - geopolitical dumbassery and the suddenly transient nature of the CHIPS Act are two biggies.
Third, high-flying growth stocks, blue-sky darlings - they're flaky. In a downturn growth is worth 0 fucking dollars, throw that shit in a dumpster and rotate into staples. People can push off a phone upgrade or new TV and cut down on subscriptions, but they'll always need Pampers.
The thing propping up AI and semis is an arms race between those high flying tech companies, so this whole thing is even more prone to imploding than tech itself, since a ton of revenue comes from tech. Sensitive sector supported by an already sensitive sector. House of cards with NVDA sitting right at the tippy top. Apple, Facebook, those kinds of companies, when they start trimming back it's over.
But it's one of those things that is anyone's guess. When you think it's not even possible for everything to still have steam, one of the big guys like TSMC posts some really delightful earnings and it gets another second wind, for the 29th time.
Definitely a house of cards tho, and a lot more precarious now because suddenly nobody knows how policy will affect the industry or the market as a whole.
They say shipping is the bellwether of the economy, and there's a lot of truth to that. I think semis are now the bellwether of growth. Sit back and watch the change in the wind.
Ya, AI was never going to be it. But I wouldn't understate its impact even at its current stage. I think it'll be a tool that's incredibly useful for just about every industry.