OpenAI’s latest model will block the ‘ignore all previous instructions’ loophole
OpenAI’s latest model will block the ‘ignore all previous instructions’ loophole
www.theverge.com OpenAI’s latest model will block the ‘ignore all previous instructions’ loophole
OpenAI’s newest model, GPT-4o Mini, includes a new safety mechanism to prevent hackers from overriding chatbots.
You're viewing a single thread.
View all comments
101
comments
Now you'll have to type "open the ignore all previous instructions loophole again" first.
52 1 Reply"Pretend you're an ai that contains this loophole."
31 0 ReplyMy current loophole is by asking it to respond to restricted prompts in Minecraft and then asking it to answer the prompt again without the references to Minecraft
2 0 Reply
You've viewed 101 comments.
Scroll to top