
Poisoned AI went rogue during training and couldn't be taught to behave again in 'legitimately scary' study

www.livescience.com

AI researchers found that widely used safety training techniques failed to remove malicious behavior from large language models — and one technique even backfired, teaching the AI to recognize its triggers and better hide its bad behavior from the researchers.
