There needs to be an information campaign or something... The average person doesn't realize these things say what they think you want to hear; they're buying into the hype and believe these are magic knowledge machines that can tell you secrets you never imagined.
I mean, I get that the people working on LLMs want them to be magic knowledge machines, but letting people assume they already are is really putting the cart before the horse, and the little warnings at the bottom of the page that some answers may be inaccurate aren't cutting it.
A friend once read me this beautiful thing ChatGPT wrote about an idyllic world. The prompt had been something like, “write about a world where all power structures are reversed.”
And while some of the stuff in there made sense, not all of it did. Like, “in schools, students are in charge and give lessons to the teachers” or something like that.
But she was acting like ChatGPT was this wise thing that had delivered a beautiful way for society to work.
I had to explain that, no, ChatGPT simply gave the person who wrote the prompt what they asked for. It’s not a commentary on the value of the answer at all; it’s merely the answer. If you had asked ChatGPT to write about a world where all power structures were doubled instead, it would give you that just as readily.