AI hallucinations are getting worse – and they're here to stay

cross-posted from: https://slrpnk.net/post/21967633
"AI design is inherently defective and will never work correctly."
Guess we need to jam it into more things!
As much as I agree with the sentiment, and as much as I despise the current state of tech and LLMs: software and tech in general are brittle, riddled with problems and human mistakes ("bug" is just a made-up word that lets us displace responsibility).
Just rambling, I don't really have a useful point.
Photocopy of a photocopy.
It was evident that this was inevitable since they poisoned the Internet, the very thing they train their crap on, with their own slop.
They temporarily mitigated it by pirating every existing form of media... but that would only work as long as they didn't train their models on anything published after they poisoned the well, which would make them even more useless for most use cases. So they kept using their own slop for training, maybe believing their own lies that they'll be able to fix it, maybe planning to sell and run just before the bubble bursts.
Last time we fed something its own shit and corpses we got mad cow disease.
Guess mad "AI" ¹ is on the menu for the foreseeable future. 🤷‍♂️
¹ (It's not even real AI, just fancy applied statistics to make a marginally better — but, thanks to model poisoning, progressively worse — autocomplete.)
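The "fancy applied statistics to make autocomplete" jab can be made concrete: the simplest statistical autocomplete just counts which word follows which in a corpus and predicts the most frequent follower. A minimal sketch (the corpus and function names here are invented for illustration, and real LLMs are vastly more sophisticated than this bigram counter):

```python
from collections import Counter, defaultdict

# Toy "autocomplete as applied statistics": count word-follower pairs,
# then predict the most frequent follower of a given word.
corpus = "the cat sat on the mat the cat ate the fish".split()

followers = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    followers[cur][nxt] += 1

def predict_next(word):
    # most common word seen after `word` in the corpus, if any
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" (seen twice, vs "mat"/"fish" once each)
```

"Model poisoning" in this picture would be nothing more than skewed counts: garbage pairs in the corpus shift which follower wins.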
We are in the age of Simulacra and Simulation. The Matrix has you...
“Here to stay”? No.
No, I don’t think so.
I think yes. Look at how long they've been trying to cram voice assistants down our throats. There's no point at which they'll say "no, I don't think these are ready yet, let's pull them back".
It is clear to all that a bubble has been growing.
If you're insisting the bubble will never burst, then there has to eventually be an actual use case for this that makes back the billions they're investing no? What's that use case? A Copilot subscription?
I mean - I still don’t use them so - ? And knowing they’re infected with AI, I wouldn’t use them for anything other than the simplest, statistically-improbable-to-get-wrong tasks.
Oh you got a solution for ai hallucinations?
Yep. No AI.
Yeah.
But this is expected.
These anomalies occur because of how the models learn. They don't have them when newly released, because they've been trained on "clean" data.
As the resolution of vectoring increases, the speed at which the data becomes corrupted increases.
Most "hallucinations" are not really hallucinations. What happens is that people feed in multiple prompts that change definitions put forward by the model, so the original data gets downranked; when a question is then asked, the model repeats the false data the user put in. Then they post a screenshot of just the end, not showing all the garbage they put in.
Now remember, models usually discard this information for a new session, since any new information has to go through a model approval process comparing it with the clean database originally mined.
Yeah, people don't really realize that the reason these models are "free" is that we're part of the learning process, which is treated as a critical step towards AGI. I don't think the current generation of neural networks, not just GPTs but neural networks in general, would be capable of such a feat, especially since current artificial neurons are just very simplified models of real ones that can be represented with a simple matrix multiplication.
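The "simple matrix multiplication" claim can be shown directly: a fully connected layer of artificial neurons is just a weighted sum per neuron (a matrix-vector product) plus a bias and a nonlinearity. A minimal pure-Python sketch, with all weights and shapes made up for illustration:

```python
def relu(v):
    # common nonlinearity: clamp negatives to zero
    return [max(0.0, a) for a in v]

def layer(x, W, b):
    # each row of W holds one neuron's weights; this is W @ x + b, then ReLU
    return relu([sum(w * xi for w, xi in zip(row, x)) + bj
                 for row, bj in zip(W, b)])

x = [1.0, -2.0, 0.5]            # 3 inputs
W = [[0.2, -0.1, 0.4],          # neuron 1
     [-0.3, 0.8, 0.1],          # neuron 2
     [0.05, 0.05, 0.05],        # neuron 3
     [1.0, 0.0, 0.0]]           # neuron 4
b = [0.0, 0.1, -0.2, 0.0]

y = layer(x, W, b)
print(len(y))  # 4 outputs, one per neuron
```

Stacking many such layers (with learned weights) is the whole trick; a biological neuron, by contrast, is a far messier electrochemical system than one row of a matrix.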
AI is Habsburg-jawing itself.
Paywall
Not a paywall, a sign-up wall. But I get it, it's annoying af.