Asking ChatGPT to Repeat Words ‘Forever’ Is Now a Terms of Service Violation
A technique used by Google researchers to reveal ChatGPT training data is now banned by OpenAI.
In all seriousness, fuck Google. These pieces of garbage have completely abandoned their Don't be Evil motto and have become full-fledged supervillains.
I mean, I agree with the sentiment in general, but I don't really see how they're the bad guys here specifically.
Are you lost? This is ChatGPT, not Google. Also, it's "their".
Did you even read the explanation part of the article???
Thanks for the grammar correction while ignoring literally all context, though. You certainly put me in my place, milord.
What's your beef with Google researchers probing the safety mechanisms of the SotA model?
How was that evil?
Now that Google spilled the beans, WilliamTheWicked can no longer extract contact information of females from the ChatGPT training data.
???