and the former kings of tuning their algorithms to bubble useful info up past the seo garbage have thrown it all out in the name of impressing everyone with how space-age and sci-fi their tech is. it's not about advancing science or even shipping a useful product. it's strictly a tool for scams. is it a surprise that scammers are gaming the google scam better than anyone else? not really. they've always had a step up on the average internet denizen thanks to practice. this is why i get so frustrated when people dismiss ai skepticism as the product of luddites.
you're getting scammed to think ai will benefit you
systems built by scammers will always benefit scammers
the luddites were right. scientific advancements should benefit the workers, not the rich
I mean, this isn't specifically an AI issue; this is scammers updating the info in Google business listings because the airlines don't actually care to maintain those pages (and Google doesn't want actual humans doing any work to make sure their shit is accurate). This has been going on since before AI. AI is just following the garbage-in, garbage-out model that everyone said would be the result of this push.
Your historical information is accurate, but I disagree with your framing. This particular scam is so powerful because the information is organized, parsed, and delivered in a fashion that looks professional and believable.
Google and the other AI companies have put themselves in a bind. They know that their system is encouraging this type of scam, but they don't dare put giant disclaimers at the top of every AI-generated paragraph, because they're trying to pretend that their s*** is good, except when it's not, and then it's not their fault. In other words, it's basic dishonesty.
This is why "AI" should be avoided at all cost. It's all bullshit. Any tool that "hallucinates" - i.e. is error-strewn - is not fit for purpose. Gaming the AI is just the latest example of the crap being spewed by these systems.
The underlying technology has its uses, but they're niche, focused applications, nowhere near as capable or as ready as the hype suggests.
We don't use Wikipedia as a primary source because it has to be fact-checked. AI isn't anywhere near as accurate as Wikipedia, so why use it?
Sometimes BS is exactly what I need! Like, hallucinated brainstorm suggestions can work for some workflows and be safe when one is careful to discard or correct them. Copying a comment I made a week ago:
I don’t love it for summarization. If I read a summary, my takeaway may be inaccurate.
Brainstorming is incredible. And revision suggestions. And drafting tedious responses, reformatting, parsing.
In all cases, nothing gets attributed to me unless I read every word and am in a position to verify the output. And I internalize nothing directly, besides philosophy or something. Sure can be an amazing starting point especially compared to a blank page.
Gotta tell you, you made a fairly extreme pronouncement against a very general term / idea with this:
"AI" should be avoided at all cost
Do you realize how ridiculous this sounds? It sounds, to me, like this - "Vague idea I poorly understand ('AI') should be 'avoided' (???) with disregard for any negative consequences, without considering them at all"
Cool take you've got?
Edit to add: whoops! Just realized the community I'm in. Carry on, didn't mean to come to the precise wrong place to make this argument lol.
Listen, I know that the term "AI" has historically been used to describe so many things that it has nearly lost all meaning, but given the context, I think it's pretty obvious which AI they are referring to.
Ironically, that is possibly one of the few legit uses.
Doctors can't learn about every obscure condition and illness. This means they can miss the symptoms of them for a long time. An AI that can check for potential matches to the symptoms involved could be extremely useful.
The proviso is that it is NOT a replacement for a doctor. It's a supplement that they can be trained to make efficient use of.
Yet again tech companies are here to ruin the day. LLMs are such a neat little language-processing tool. They're amazing for reverse-looking-up definitions (where you know the concept but can't remember some dumb name), for finding starting points, or for processing your ideas and getting additional things to look at, but most definitely not a finished product of any kind. Fuck tech companies for selling it as a search engine replacement!
Wait, are you advocating people blindly trust unreliable sources and then get angry at the unreliable source when it turns out to be unreliable rather than learn from shit like this to avoid becoming a victim?
Remember when 4chan got people to microwave their phones by convincing them it would charge the battery?
If calling those people stupid is victim blaming then so be it. I’m blaming the victim.
This case isn’t as clear as that but even before the AI mania the instant answer at the top of Google results was frequently incorrect. Being able to discern BS from real results has always been necessary and AI doesn’t change that.
I’ve been using Kagi this year and it keeps LLM results out of the way unless I want them. When you open their AI assistant it says
Assistant can make mistakes. Think for yourself when using it.
Would that make Google liable? I mean, that wouldn't be a case of users posting information; that would be a case of Google posting information, wouldn't it? So it seems to me they'd be legally liable at that point.
I think there's a disclaimer with all AI summaries. Although, I just tried googling United airlines and there is no longer an AI summary, only a message directly from their website.
They gave up working search, built on algorithms that are easier to reason about and correct for, for a messy neural network that is broken in so many ways and basically impossible to correct in general while retaining its core characteristics. A change with this many regressions should never have been pushed to production.
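The "easier to reason about" point can be shown with a toy example. This is a minimal sketch, nothing like Google's actual ranking pipeline (the pages, signals, and weights are all made up for illustration), but it shows why hand-written scoring rules are auditable: a bad result traces back to a specific rule that can be patched.

```python
# Toy search scorer: every signal is explicit, so when a scam page
# outranks the real one, you can see exactly which rule to fix.
def score(page, query):
    q = query.lower()
    s = 0.0
    if q in page["title"].lower():
        s += 3.0                          # title match is a strong signal
    s += page["body"].lower().count(q)    # crude term-frequency signal
    if page["official_domain"]:
        s += 5.0                          # boost verified/official sites
    return s

pages = [
    {"title": "United Airlines - Official Site",
     "body": "contact united airlines support",
     "official_domain": True},
    {"title": "Cheap flights!!!",
     "body": "united airlines united airlines united airlines phone number",
     "official_domain": False},
]

ranked = sorted(pages, key=lambda p: score(p, "united airlines"), reverse=True)
print(ranked[0]["title"])  # the official site wins despite keyword stuffing
```

A neural ranker offers no equivalent of "the official-domain boost is too weak, raise it"; its behavior is smeared across millions of weights.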
Honestly I wanted to write a smug comment about "But it even says AI can sometimes make mistakes!", but after clicking through multiple links and disclaimers I can't find Google actually admitting that.
Google has been sponsoring scammers as first search results since its creation. Google has caused hundreds of millions of dollars in losses to people, and needs to be sued for it.
I feel like Jason ought to have considered it. Spammers have been using this kind of tactic for decades. Of course they're going to change to whatever medium is popular.
This has been an issue since long before LLMs. Before the AI summary box, scammers used targeted ads to place ahead of the actual company you were searching for.
Yeah, but that was easy to spot, both by people and by Google. There were at least some guardrails: imperfect, not ideal, but they existed. With LLMs there are basically none.
Having worked with AI and AI products in my last job before I was let go I can say this:
Out of the box, AI is very good at the following:
1. Mundane, very simple binary/boolean tasks. Is this a yes/no? Can I find a piece of information that I was told is here, based on your statement? Etc.
2. Condensing very complex processes into very simplistic things. NOTE: you will lose a lot of information in this step unless you refine the statement.
3. Making overarching summaries. Kinda similar to 2 but also its own thing; think creating a summary of a book.
Programmed AI (read: machine learning, because you are still telling it how to interpret things) can be good at the following, depending on how good you are at telling it what it should do:
1. Interpreting meaning in a statement.
2. Understanding if-then constructs.
3. Deducing plausible outcomes.
ALL AI struggles at:
1. Interpreting real vs. fake (that's why you literally teach it what a spotlight is with your captcha).
2. Understanding complexity in speech and tonal differences. "I am SO happy to be here /s"
3. Thinking on its own: using collected data to make an inference that it was not directly programmed to make.
The big craze over AI is totally misunderstood. AI is best thought of as Automated Intelligence; the word "Artificial", in its current state, is a complete misnomer.
This is just one example of people having been misled by the name into not fully understanding what is up with AI.
I get paranoid enough about making sure I'm clicking the correct search result and not some scam. I hope I would avoid any AI answers but yeah, to many people it could be confusing.
At the top, but that isn't what this post is saying. This is saying that Google's AI gave the scammer answer. Not that they provided a link you could click on, but that Google itself said this is the number.
It's not an AI, it's just word prediction, which also just follows stupid algorithms, just like those who determine search results. Both can be tricked / manipulated if you understand how they work. It's still the same principle for both cases.
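The "just word prediction" point can be made concrete with a toy bigram model. This is purely illustrative (real LLMs are neural networks over subword tokens, and this tiny corpus is made up), but the core move is the same: emit the statistically likely next word, with zero notion of whether it's true.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": predict the next word from raw counts.
corpus = "call the airline call the bank call the airline".split()

# Count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    # Most frequent follower wins: pure statistics, no understanding.
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("call"))  # "the" follows "call" every time
print(predict_next("the"))   # "airline" (seen twice) beats "bank" (once)
```

Nothing in that loop checks whether the predicted continuation is real or a scammer's planted listing; it only reflects whatever text it was fed, which is why poisoned input yields confidently wrong output.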
All I'm saying is that an LLM is only decent at generating text. Everything else it sucks at. So it isn't logical to use it as a search engine. Google is also to blame for that. They are slapping AI to whatever they have running. Like most tech companies these days.
Compare that to Perplexity, which is a search engine first and only uses an LLM to summarize websites that are riddled with SEO. It then adds a link so you can check for yourself whether the LLM is hallucinating or not.
Don't be like a Techbro conflating LLMs and General AI. They are different things. And the sooner all users understand this the better.
this is always the question. and the answer is if i can't get there in time by not flying, then so sorry, i won't be able to attend. i don't fly, because fuck literally everything involved with flying. which apparently now also includes bogus customer service phone numbers