MIT's 'PhotoGuard' protects your images from malicious AI edits | Engadget

A new digital watermarking technique from MIT CSAIL seeks to prevent unauthorized image edits by malicious AI.

11 comments
  • Lol what a load of shit. Just let me set the whole picture to blur 1-3% to blend in those pixels.. Aaannnddd.. Now your face is superimposed onto a naked lady.
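
    (A minimal Python sketch of the blur counterattack this comment describes, assuming Pillow; the filenames and blur radius are placeholder choices:)

        from PIL import Image, ImageFilter

        # Lightly blur the whole picture so an adversarial pixel pattern
        # gets smeared into its neighbours before the image reaches an editor.
        img = Image.open("immunized_photo.png")   # placeholder input file
        purified = img.filter(ImageFilter.GaussianBlur(radius=1))
        purified.save("purified_photo.png")       # placeholder output file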

  • Really wanna see how it handles the standard Photoshop touch-ups. It's not like the news media has never altered photos to elicit a skewed perception.

    • The big problem is that AI will (eventually) "see" things as a human does. So even if these MIT researchers manage to insert nearly invisible artifacts that fool AI into thinking the edges are different than they actually are, a sufficiently large training set will let the AI learn that the color borders matter more than the artifact borders...

      Which will allow AI to bypass this type of watermarking.

      • I really hate the label AI. They're data models, not intelligence - artificial or otherwise. It's PAI: Pseudo Artificial Intelligence, which we've had since the '80s.

        The thing is that these data models are, in the end, fed to algorithms to produce output. That being the case, it's a mathematical certainty that the process can be reversed and thus shown to come from such an algorithm. Watermark or not, if an algorithm produces a result, then you can deduce the algorithm from a given set of its results.

        It wouldn't be able to meaningfully distinguish 4'33" from silence though. Nor could it determine a flat white image wasn't made by an algorithm.

        I think what we're really demonstrating in all this is just how algorithmically human beings already think, something psychology has been pointing out for even longer.

  • Am I missing something here? The immunized images just look like they've been 'glazed', which has been a thing for a few months.

    Edit: ah, I see. The original Glaze could only protect against training, while this supposedly protects an image against img2img diffusion, which explains why its visual impact is more pronounced.
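
    (For anyone curious what "protecting against img2img diffusion" looks like mechanically, here is a rough PyTorch/diffusers sketch of the encoder-attack idea: nudge the image so a latent-diffusion VAE encodes it to a useless target latent. The checkpoint name, epsilon, and step counts are illustrative assumptions, not PhotoGuard's actual settings:)

        import torch
        from diffusers import AutoencoderKL

        # Load a public Stable Diffusion VAE (assumed checkpoint) and freeze
        # it; only the perturbation is optimized, not the model weights.
        vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").eval()
        vae.requires_grad_(False)

        def immunize(x, eps=4 / 255, steps=40, lr=1 / 255):
            """x: image tensor scaled to [-1, 1], shape (1, 3, H, W)."""
            # Target latent: what the encoder produces for a flat gray image.
            target = vae.encode(torch.zeros_like(x)).latent_dist.mean
            delta = torch.zeros_like(x, requires_grad=True)
            for _ in range(steps):
                z = vae.encode((x + delta).clamp(-1, 1)).latent_dist.mean
                loss = torch.nn.functional.mse_loss(z, target)
                loss.backward()
                with torch.no_grad():
                    delta -= lr * delta.grad.sign()  # PGD step toward the target latent
                    delta.clamp_(-eps, eps)          # keep the change near-invisible
                    delta.grad.zero_()
            return (x + delta).clamp(-1, 1).detach()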
