Pretty much everything AI is a scam. I mean, it has its uses, but it isn't exactly as claimed yet. And pretty much every non-phone AI gadget I've seen so far is definitely a scam.
If you think that "pretty much everything AI is a scam", then you're either setting your expectations way too high, or you're only looking at startups trying to get the attention of investors.
There are plenty of AI models out there today that are open source and can be used for a number of purposes: generating images (Stable Diffusion), transcribing audio (Whisper), audio generation, object detection, upscaling, downscaling, etc.
Part of the problem might be how you define AI... It's a much broader term than what I think you're trying to convey.
There are folks who think it's as amazing as all the tech firms tell us:
And we're all gonna die
Or
And life will be amazing
Then there are folks who think AI is hype whack bananas
And think it's a scam.
And lastly,
The folks who see that we've already changed life as we know it with AI. That there's still massive potential, but that folks in categories 1 and 2 (and 3) are all kinda nuts.
This is because dedicated consumer AI hardware is a dumb idea. If it's powerful enough to run a model locally, you should be able to use it for other things (like, say, as a phone or PC) and if it's sending all its API requests to the cloud, then it has no business being anything but a smartphone app or website.
I can’t agree with that. ASICs can specialize to do one thing at lightning speeds, and fail to do even the most basic of anything else. It’s like claiming your GPU is super powerful so it should be able to run your PC without a CPU.
Just go all out, and gamble that in 5 years the technology will be there to actually make it all function like you dreamt it would. And by then you're the de facto name within that space and can take advantage of that.
I use ChatGPT occasionally. It's not a scam; it's useful for what I need it to do. I'm just not fooled by the notion that these LLMs know factual data or can do much more than generate text. If you accept that, LLMs are pretty darn useful.
Investments in AI are in the billions. With that kind of money flying around, it's going to attract a lot of snake oil salesmen. It didn't help that for the general public and investors, any sufficiently advanced technology is indistinguishable from magic, and LLMs reached that point for many.
Just keep the hype cycle in mind. It'll all go downhill after the peak of inflated expectations. With AI, it always does.
They clearly don't want you to know that, given that they conveniently renamed their company and announced they don't want anything to do with crypto right before the Rabbit announcement went live.
BUT THE LAM! People reported on the "large action model" like it was real. It always sounded like bullshit in this case, even if they were selling ideas they feel are obvious and inevitable.
I dunno. It sounds like a somewhat feasible thing that could be kinda useful if done right. It just doesn't actually exist, which is the problem here. It doesn't sound too crazy, which is why people bought this thing. The part I struggle with conceptually is that a LAM would essentially weaponize bots, the same thing all these stupid captchas are meant to stop. It would also drive users away from websites and therefore away from ads. This would be all-out war, and the money (i.e., websites with ad revenue) would ultimately win, unfortunately.