ChatGPT doesn't leak passwords. Chat history is leaking, and one of those histories happens to contain a plaintext password. What's up with the current trend of saying AI did this and that when the AI really didn't?
They weren't there when I used ChatGPT just last night (I'm a pretty heavy user). No queries were made—they just appeared in my history, and most certainly aren't from me (and I don't think they're from the same user either).
This sounds more like a huge fuckup with the site, not the AI itself.
Edit: A depressing amount of people commenting here obviously didn't read the article...
It also literally says not to input sensitive data...
This is one of the first things I flagged regarding LLMs, and later on they added the warning. But if people don't care and are still gonna feed the machine everything regardless, then that's a human problem.
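Since people are going to paste things in anyway, the pragmatic middle ground is to scrub the obvious secrets first. Here's a minimal sketch of that idea — the patterns (`PATTERNS`, `scrub`) are my own illustrative examples, not anything OpenAI provides, and a few regexes are nowhere near real secret detection:

```python
import re

# Hypothetical pre-paste scrubber: mask obvious secrets before feeding
# text to any chatbot. Illustrative patterns only -- not exhaustive.
PATTERNS = [
    # email addresses
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),
    # "password: hunter2" / "pwd=hunter2" style assignments
    (re.compile(r"(?i)(password|passwd|pwd)\s*[:=]\s*\S+"), r"\1: <REDACTED>"),
    # token-shaped strings with common prefixes (sk-, ghp-, xoxb-)
    (re.compile(r"\b(?:sk|ghp|xoxb)-[A-Za-z0-9_-]{10,}\b"), "<API_KEY>"),
]

def scrub(text: str) -> str:
    """Replace anything matching a known secret pattern before sending."""
    for pattern, replacement in PATTERNS:
        text = pattern.sub(replacement, text)
    return text

print(scrub("login is bob@example.com, password: hunter2"))
```

Doesn't fix the underlying "human problem," but it at least keeps the low-hanging fruit out of your chat history.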
And Google is bringing AI to private text messages. It will read all of your previous messages. On iOS? Better hope nothing important was said to anyone with an Android phone (not that I trust Apple either).
The implications are terrifying.
Nudes, private conversations, passwords, identifying information like your home address, etc. There's a lot of scary scenarios. I also predict that Bard becomes a closet racist real fast.
We need strict data privacy laws with teeth. Otherwise corporations will just keep rolling out poorly tested, unsecured software without a second thought.
AI can do some cool stuff, but the leaks, misinformation, fraud, etc., scare the shit out of me. With a Congress averaging ~60 years old, I'm not counting on them to regulate or even understand any of this.
Not directly related, but you can disable chat history per-device in ChatGPT settings - that will also stop OpenAI from training on your inputs, at least that's what they say.