Security researchers created an AI worm in a test environment that can automatically spread between generative AI agents—potentially stealing data and sending spam emails along the way.
The researchers say the work is a warning about “bad architecture design” within the wider AI ecosystem.
Basically, they’re saying that if you build a tool that both reads your emails (or other untrusted inputs) and can act on those inputs, without a manual human approval step or any sanitization in between, then you’ll be susceptible to this kind of attack.
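To make the pattern concrete, here’s a minimal Python sketch contrasting the two designs. Every function name in it is a hypothetical placeholder rather than any real agent framework’s API; the point is just the shape of the data flow.

```python
# A minimal sketch of the vulnerable pattern versus the guarded one.
# All names here (llm_generate, send_email, etc.) are hypothetical
# stand-ins, not a real library's API.

def llm_generate(prompt: str) -> str:
    """Stand-in for a call to a generative model."""
    return f"Auto-reply based on: {prompt[:40]}..."

def send_email(body: str) -> None:
    print(f"Sending: {body}")

def looks_like_injection(text: str) -> bool:
    """Toy sanitization check; real input filtering is much harder."""
    return "ignore previous instructions" in text.lower()

def human_approves(draft: str) -> bool:
    """Manual approval step: a person reviews before anything is sent."""
    return input(f"Send this reply? {draft!r} [y/N] ").lower() == "y"

def vulnerable_agent(inbox: list[str]) -> None:
    for message in inbox:
        # Untrusted input flows straight into the model's prompt...
        reply = llm_generate(f"Draft a reply to this email:\n{message}")
        # ...and the output is acted on automatically, so a prompt
        # injected into one email can propagate itself onward.
        send_email(reply)

def guarded_agent(inbox: list[str]) -> None:
    for message in inbox:
        if looks_like_injection(message):
            continue  # sanitize untrusted input before the model sees it
        reply = llm_generate(f"Draft a reply to this email:\n{message}")
        if human_approves(reply):  # keep a human in the loop before acting
            send_email(reply)
```

The vulnerable loop is exactly the architecture the researchers warn about: the same component both reads untrusted content and acts on it, with no gate in between.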