I don't think this "LLM in everything" trend is going to last very long. It's far too expensive to put in literally every consumer product. I can imagine it finding some success in B2B applications, but who is going to pay Logitech to pay OpenAI $30 per million tokens? (For comparison, Lambda is $0.20 per 1M requests at the public rate.)
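To make the gap concrete, here's a back-of-the-envelope sketch using the two rates quoted above. The tokens-per-interaction figure is a made-up assumption for illustration, not a vendor number:

```python
# Rough cost comparison using the rates quoted above.
# TOKENS_PER_INTERACTION is a hypothetical assumption, not a measured value.

LLM_PRICE_PER_TOKEN = 30 / 1_000_000          # $30 per 1M tokens (quoted rate)
LAMBDA_PRICE_PER_REQUEST = 0.20 / 1_000_000   # $0.20 per 1M requests (public rate)

TOKENS_PER_INTERACTION = 500  # assume a short prompt + short response

llm_cost = TOKENS_PER_INTERACTION * LLM_PRICE_PER_TOKEN
ratio = llm_cost / LAMBDA_PRICE_PER_REQUEST

print(f"LLM cost per interaction: ${llm_cost:.4f}")
print(f"Lambda cost per request:  ${LAMBDA_PRICE_PER_REQUEST:.8f}")
print(f"LLM is roughly {ratio:,.0f}x more expensive per call")
```

Even with a modest 500-token interaction, a single LLM call costs on the order of a cent and a half, versus a tiny fraction of a cent for a plain Lambda invocation, which is the core of the "too expensive for all consumer things" argument.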
What’s wild to me is that there are continuing mass layoffs in tech in the middle of a huge AI bubble…when it finally bursts it’s going to be utterly brutal.
The crypto bubble lasted a long time, and unlike it, AI actually does something (not anything useful, or terribly well, but something), so I expect the bubble will last a while yet.
I disagree. I think what will happen is that these companies won’t use “AI” hosted in the cloud, but will instead ship some minimally functional model that runs on users’ GPUs (and later NPUs, as those become common), and engage in screen recording and data collection about you and everything your mouse clicks on.
Disabling AI/data collection will disable any mouse technology or feature implemented after 1999, because AI or something.
At this point, I think AI stands for “absolute intrusion” when it comes to consumer products.
I don't really see why they need AI for that, but yes, I imagine companies will want to deploy AI on user hardware. Those models aren't going to be nearly as sophisticated or useful as what can run in the cloud, though.
That’s sort of the point. It’s not really that the AI is useful, it’s that it’s the next big unregulated and misunderstood thing.
Companies are using the idea of “training models” to harvest user data well beyond any reasonable scope, so they can sell it.
The breadth of information being openly collected under the guise of ‘AI’ would have been unconscionable 10 years ago, and even 5 years ago folks would have been freaked out. Now businesses pretend it’s just a run-of-the-mill requirement to make their software work.