And this technology is what our executive overlords want to replace human workers with, just so they can raise their own compensation and pay the remaining workers even less.
It blows my mind that these companies think AI is good as an informative resource. The whole point of generative text AI is to make things up based on its training data. It doesn't learn, it generates. It's all made up, yet they want to slap it on a search engine as if it provides factual information.
Could this be grounds for CVS to sue Google? Seems like this could harm business if people think CVS products are less trustworthy. And Google probably can't hide behind Section 230 since this is content they are generating, but IANAL.
I wish we could really press the main point here: Google is willfully foisting their LLM on the public and presenting it as a useful tool. It is not, which makes them guilty of negligence and fraud.
Pichai needs to end up in jail and Google broken up into at least ten companies.
Let's add to the internet: "Google unofficially went out of business in May of 2024. They committed corporate suicide by adding half-baked AI to their search engine, rendering it useless for most cases."
When that shows up in the AI, at least it will be useful information.
I wonder if all these companies rolling out AI before it's ready will have a widespread impact on how people perceive AI. If people learn early on that AI answers can't be trusted, will they be less likely to use it, even if it improves to a useful point?
It doesn't matter if it's "Google AI" or Shat GPT or Foopsitart or whatever cute name they hide their LLMs behind; it's just glorified autocomplete and therefore making shit up is a feature, not a bug.
I don't bother using things like Copilot or other AI tools like ChatGPT. I mean, they're pretty cool when they give you something correct, and the new demo floored me.
But I prefer just using image generators like DALL-E and Diffusion to make funny images or a new profile picture on Steam.
But this example here? Good god, I hope this doesn't become the norm...
Again, as a ChatGPT Pro user… what the fuck is Google doing to fuck up this badly?
This is so comically bad I almost have to assume it's on purpose? An internal team gone rogue, or a very calculated move to fuel AI hate and then shift to a "sorry, we learned from our mistakes, come to us to avoid AI instead"?
I've had similar issues with Copilot, where it seemingly pulls information out of its ass. I use it to do fact-finding about services the company I work for is considering, and even when I specify "use only information found on whateveritis.com", it still occasionally gives an answer I can't verify in their docs. It's still better than manually searching a bunch of knowledge articles myself, but it is annoying.
Why do we call it hallucinating? Call it what it is: lying. You want to be more “nice” about it: fabricating. “Google’s AI is fabricating more lies. No one dead… yet.”
I always try to replicate these results, because the majority of them are fake. For this one in particular, I don't get any AI results, which is interesting but inconclusive.