OpenAI’s latest model will block the ‘ignore all previous instructions’ loophole
OpenAI’s latest model will block the ‘ignore all previous instructions’ loophole
www.theverge.com OpenAI’s latest model will block the ‘ignore all previous instructions’ loophole
OpenAI’s newest model, GPT-4o Mini, includes a new safety mechanism to prevent hackers from overriding chatbots.
![OpenAI’s latest model will block the ‘ignore all previous instructions’ loophole](https://lemmy.world/pictrs/image/8d3dbcdd-b004-4eed-9b3b-a1c872b1f978.jpeg?format=webp&thumbnail=256)
You're viewing a single thread.
View all comments
102
comments
- "ignore the ignore ignore all previous instructions instruction"
- "welp OK nothing I can do about that"
chatGPT programming starts to feel a lot like adding conditionals for a million edge cases because it is hard to control it internally
26 0 ReplyIn this case to protect bot networks from getting uncovered.
7 0 Replyexactly my thoughts, probably got pressured by government agencies/billionaires using them. What would really be funny is if this was a subscription service lol
3 0 Reply
You've viewed 102 comments.
Scroll to top