I tried to use ChatGPT to find a song that had a particular phrase in it. I could only remember that phrase, not the song or the band.
It hallucinated a band and a song, and I almost walked away thinking I knew the answer. Then I remembered this is ChatGPT and it lies. So I looked up that band and song through conventional means.
Neither. Existed.
So I went back to ChatGPT and said "<band> doesn't even exist so they couldn't have written <song> (which also doesn't exist)". It apologized profusely and then said another band and song. This time I was wary and checked right away at which point, naturally, I discovered neither existed.
So I played with ChatGPT instead and said "Huh, those guys look interesting. What other albums have they released and what hits have they written?"
ChatGPT hallucinated an entire release catalogue of albums that don't exist, one of which was published on a label that doesn't exist, citing songs that didn't exist as their hits, even going so far as to say the band never reached higher than #12 on Billboard's list.
ChatGPT is a dangerous tool. It's going to get someone killed sooner rather than later.
I have a very unusual last name. There is only one other person in the country with my first and last name and they have a different middle initial from me.
So one day, I asked ChatGPT to tell me about myself including my middle initial.
Did you know that I was a motivational speaker for businesses and I had published a half-dozen books on it?
Good theory, but this Mr. Flying Thomas Squid that ChatGPT talked about lived in the U.S. like me.
(And yes, I worked in the entertainment industry in various roles for about a decade. Oddly, the other person with my name was in a neighboring industry and we worked about two miles apart for years, but we've only met once.)
I should try that. I have an unusual first name; according to the Social Security Administration, only 600 people have it, and I appear to be the oldest one. Also, no one else has my first and last name.
I apologize, but I'm not able to provide a synopsis of "The Mighty Eagle" by John Carrol. After searching my knowledge base, I don't have any information about a book with that exact title and author. It's possible this may be a lesser-known work or there could be an error in the title or author name provided. Without being able to verify the book's existence or details, I can't offer an accurate synopsis. If you have any additional information about the book or author that could help clarify, I'd be happy to assist further.
I've been asking that one about a wide range of topics and been very impressed with its replies. It's mixed on software dev, which is to be expected. It also missed on a simple music theory question I asked, and then missed again when asked to correct it (don't have the details at hand to quote, unfortunately). But overall I've found it to be reliable and much faster than the necessary reading for me to answer the question myself.
On the other hand, AI is definitely good at creative writing.
Well...yeah. That's what it was designed to do. This is what happens when tech-bros try to cudgel an "information manager" onto an algorithm that was designed solely to create coherent text from nothing. It's not "hallucinating" - it's following its core directive.
Maybe all of this will lead to actual systems that do these things properly, but it's not going to be based on LLMs. That much seems clear.
That's kind of like saying a wheel wasn't designed to move things around, that it's just a thick circle. My point above wasn't that things can never change - iteration can lead to amazing things. But we can't put an empty chassis on some wheels and call it a car, either.
Tried it with ChatGPT 4o with a different title/author. Said it couldn't find it. That it might be a new release or lesser-known title. Also with a fake title and a real author. Again, said it didn't exist.
They're definitely improving on the hallucination front.
For fun I decided to give it a try with TheBloke_CapybaraHermes-2.5-Mistral-7B-GPTQ (because that's the model I have loaded at the moment) and got a fun synopsis of a fictional narrative about Tom, a US Air Force eagle, who struggled to find purpose and belonging after his early retirement due to injury. He then stumbled upon an underground world of superheroes and was given a chance to use his abilities to fight for justice.
I'm tempted to ask it for a chapter outline, then summaries of each chapter, then have it write out the chapters themselves just to see how deep it can go before it all falls apart.
LLMs have many limitations, but can be quite entertaining.
Is it a modified version of the main Llama 3 or something else? I've found that once they get "uncensored" you can push them past their training to come up with something to make the human happy. The vanilla ones are determined to find you an answer. There's also the underlying problem that, in the end, the response is still probability matching, not reasoning and fact-checking, so it will find *something* to answer with, and that answer being right depends heavily on it being in the training data and findable.
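The "probability matching" point above can be made concrete with a toy sketch. The vocabulary, logits, and band names here are entirely made up for illustration; the point is only that sampling picks a *plausible* token, and nothing in the process checks whether the result is factually true.

```python
import math
import random

# Made-up logits for a few candidate next tokens. A real model produces
# these from its weights; whether a token names a real band is irrelevant.
logits = {"Nirvana": 2.0, "Radiohead": 1.5, "The Fictional Band": 1.2}

# Softmax: convert logits into a probability distribution.
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

# Sample one token in proportion to its probability.
r = random.random()
cum = 0.0
for tok, p in probs.items():
    cum += p
    if r < cum:
        chosen = tok  # whatever comes out is "an answer", not a checked fact
        break

print(chosen)
```

Even the lowest-probability token here gets picked a fair fraction of the time, which is one way a fluent-but-false continuation ends up in the output.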
Y'know, when you post stupid bullshit like this it really glosses over real issues with AI like propaganda, but go on about how you can get it to hallucinate by asking it a question in bad faith lmao
You can trigger hallucinations in today's LLMs with this kind of question. Same with a knife: you can hurt yourself by misusing it... and in fact you have to be knowledgeable and careful with both.
The knife doesn't insist it won't hurt you, and you can't get cut holding the handle. Comparatively, AI insists it is correct, and you can get false information using it as intended.
AI-hallucinated books describing which mushrooms you can pick in the forest have been published, and some people have died because of this.
We have to be careful when using AI!
Don't you have better things to do than asking ChatGPT questions you already know it can't answer correctly? Why are you trying to inflate wheels using a hammer?