
Lemmy Safety now supports cleaning local pict-rs storage from CSAM

I posted the other day that you can clean up your object storage from CSAM using my AI-based tool. Many people expressed the wish to use it on their local file storage-based pict-rs. So I've just extended its functionality to allow exactly that.

The new lemmy_safety_local_storage.py will walk through your pict-rs volume on the filesystem, scan each image for CSAM, and delete any matches. The requirements are:

  • A Linux account with read-write access to the volume files
  • Private-key authentication for that account

As my main instance uses object storage, my testing is limited to my dev instance, where it all looks OK to me. But do run it with --dry_run first if you're worried. You can then delete lemmy_safety.db and rerun to enforce the deletions (a method to utilize the --dry_run results directly is coming soon).
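To make the dry-run behaviour concrete, here is a minimal sketch of what a scan-and-delete loop over a local pict-rs volume looks like. This is an illustration, not the tool's actual internals: `is_csam` stands in for the AI classifier, and the extension list and function names are my own assumptions.

```python
import os

# Extensions to consider; an assumption, not pict-rs's real layout rules.
IMAGE_EXTENSIONS = {".jpg", ".jpeg", ".png", ".gif", ".webp"}

def scan_volume(root: str, is_csam, dry_run: bool = True) -> list[str]:
    """Walk the volume, return flagged paths; delete them only when dry_run is False."""
    flagged = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if os.path.splitext(name)[1].lower() not in IMAGE_EXTENSIONS:
                continue
            path = os.path.join(dirpath, name)
            if is_csam(path):  # placeholder for the AI check
                flagged.append(path)
                if not dry_run:
                    os.remove(path)  # permanent: keep dry_run=True until reviewed
    return flagged
```

The point of `dry_run=True` as the safer default is that the flagged list can be inspected before any file is actually removed.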

PS: if you were using the object storage cleanup, that script has been renamed to lemmy_safety_object_storage.py

48 comments
  • Thanks for adding this. I guess I now have my weekend planned for moving Pict-rs to a server with a fast enough GPU and trying this out 🤔

    • You don't need to move pict-rs to a GPU server. In fact, that would be prohibitively expensive long-term. I suggest you just use your PC to run this against your current pict-rs server, or rent a GPU server just for this one-off task.

      • I have a server with a smaller Nvidia GPU available. I hope its 3 GB of VRAM will be sufficient.

  • Something that might be useful long term is trying to train an AI and release weights to identify CSAM that admins can use to check images. The main problem is finding a way to do this without storing those kinds of images or video :/

    My understanding is that right now, the main mechanisms involved use several central databases of perceptual hashes of known CSAM material. The problem is that this ends up being a whack-a-mole solution, and at least in theory governments could use these databases to censor copyrighted or more generally "unapproved" content, though I imagine such a db would lose trust quickly, and I'm not aware of this being an issue in practice.

    One potential solution is "opportunistic training" where, when new CSAM material gets identified and submitted to the FBI or these databases by various server admins, a small amount of training is done on the AI weights before the image or video is deleted and only a perceptual hash remains. Furthermore, if a picture is reported as "known CSAM" by these dbs, then you do the same thing with that image before it gets deleted.

    To avoid false positives, you also train the AI on general non-CSAM content.

    Ideally this process would be fully automated so no-one has to look at that shit - over time, you'd theoretically get a neural net capable of identifying CSAM reliably with few or no false positives or false negatives. Admins could also try for some kind of distributed training, where each contributes weight deltas from local training, or each builds up LoRA-style improvement modules and people combine them to reduce the bandwidth needed for sharing modifications.
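The perceptual-hash mechanism the comment above describes can be sketched in a few lines: known material is stored only as short hashes, and an image "matches" if its hash is within a small Hamming distance of any database entry. The 64-bit hash size and the threshold value here are illustrative, not taken from any real database.

```python
def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def matches_known(image_hash: int, known_hashes: set[int], threshold: int = 5) -> bool:
    """True if the hash is close enough to any known-material hash."""
    return any(hamming(image_hash, h) <= threshold for h in known_hashes)
```

The tolerance is what lets the scheme catch re-encoded or slightly cropped copies, and also what makes it a whack-a-mole game: anything altered beyond the threshold slips through until it is hashed and added.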

  • I think there’s potential here for a centralised database of metadata (hashes, time-stamps, usernames) so it’s easier to stamp this out as a connected community. As has been noted, as long as you’re working in good faith, most government authorities won’t prosecute people running platforms like a Lemmy instance. But joining together to show solidarity might make it easier to prove that good faith, especially for smaller instances.

    • The tool has a lot of false positives, as it casts a wide net to catch the actual positives. There's no human review involved, so you can't use these hits to mark usernames and time-stamps unless someone goes through the hits and finds actual CSAM.

      • That’s reasonable, I wouldn’t want anyone to have to do those manual checks. However, wouldn’t false positives be relatively rare for a single username? Could correlational data of repeated flags within a certain timeframe be used to single out bad actors?
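The correlation idea in the comment above is straightforward to sketch: rather than acting on single (noisy) detector hits, flag a username only when it accumulates several hits within a time window. The window length and threshold below are made up for illustration; they are not values from the tool.

```python
from collections import defaultdict

def suspicious_users(flags, window: float = 86400.0, min_hits: int = 3) -> set[str]:
    """flags: iterable of (username, unix_timestamp) detector hits.

    Returns users with at least min_hits hits inside any window-length span.
    """
    by_user = defaultdict(list)
    for user, ts in flags:
        by_user[user].append(ts)
    out = set()
    for user, times in by_user.items():
        times.sort()
        for i in range(len(times)):
            # count hits falling in [times[i], times[i] + window]
            j = i
            while j < len(times) and times[j] - times[i] <= window:
                j += 1
            if j - i >= min_hits:
                out.add(user)
                break
    return out
```

With a high enough threshold, isolated false positives never surface a name, while a burst of hits from one account does.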
