A New Attack Impacts ChatGPT—and No One Knows How to Stop It

www.wired.com

Sooo, does anyone have any example prompts like the ones from the article that haven't been patched yet and can still successfully jailbreak current chat AIs?
I'm interested in doing some research with them.