
I just developed and published a script to clear potential CSAM from your pict-rs object storage.

GitHub - Haidra-Org/lemmy-safety: A script that goes through a lemmy pict-rs object storage and tries to prevent illegal or unethical content

I noticed a bit of panic around here lately, and as I have had to continuously fight against pedos for the past year, I have developed tools to help me detect and prevent this content.

As luck would have it, we recently published one of our anti-CSAM checker tools as a Python library that anyone can use. So I thought I could use this to help lemmy admins feel a bit safer.

The tool can either go through all the images in your object storage and delete any CSAM it finds, or it can run continuously and scan and delete new images as they arrive. The suggested approach is to run it once with --all, and then leave it running as a daemon; a sketch of that workflow follows.
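A minimal sketch of that workflow, assuming the entrypoint is called lemmy_safety.py and that a daemon flag exists; check the repo README for the actual CLI:

  # one-time pass over everything already in the object storage
  python3 lemmy_safety.py --all

  # hypothetical flag: keep running and scan new uploads as they land
  python3 lemmy_safety.py --daemon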

A better option would be to retrieve the exact images uploaded via the lemmy/pict-rs API, but we're not quite there yet.

Let me know if you have any issues or suggested improvements.

EDIT: Just to clarify, you should run this on your desktop PC with a GPU, not on your lemmy server!



106 comments
  • Any thoughts about using this as a middleware between nginx and Lemmy for all image uploads?

    Edit: I guess that wouldn't work for external images, unless it also ran for all outgoing requests from pict-rs. I think the easiest way to integrate this with pict-rs would be through some upstream changes that would allow pict-rs itself to call this code on every image.

    • Exactly. If the pict-rs dev allowed us to run an executable on each image before accepting it, it would make things much easier. A sketch of what such a hook could look like follows.
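      A hypothetical sketch of that hook contract, purely illustrative since no such pict-rs interface exists yet; check_image.py is a made-up placeholder:

      #!/bin/sh
      # hypothetical pre-accept hook: pict-rs would invoke this with the
      # path of the candidate image as $1 before committing the upload.
      # Exit 0 to accept the image, non-zero to reject it.
      python3 check_image.py "$1" || exit 1
      exit 0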

    • You might, however, be able to integrate with my AI Horde endpoint for NSFW checking between nginx and Lemmy.

      https://aihorde.net/api/v2/interrogate/async

      This might allow you to detect NSFW images before they are hosted.

      Just send a payload like this:

      curl -X 'POST' \
        'https://aihorde.net/api/v2/interrogate/async' \
        -H 'accept: application/json' \
        -H 'apikey: 0000000000' \
        -H 'Client-Agent: unknown:0:unknown' \
        -H 'Content-Type: application/json' \
        -d '{
        "forms": [
          {
            "name": "nsfw"
          }
        ],
        "source_image": "https://lemmy.dbzer0.com/pictrs/image/46c177f0-a7f8-43a3-a67b-7d2e4d696ced.jpeg?format=webp&thumbnail=256"
      }'
      

      Then retrieve the results asynchronously. The POST reply contains the id of the interrogation job, which you poll until it finishes.
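      A sketch of the polling call; I'm assuming the interrogate status route mirrors the generate one (/api/v2/interrogate/status/{id}), so verify the exact path against the API docs:

      curl -X 'GET' \
        'https://aihorde.net/api/v2/interrogate/status/<job-id>' \
        -H 'accept: application/json' \
        -H 'Client-Agent: unknown:0:unknown'

      Once the job is done, the response looks like this: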

      {
        "state": "done",
        "forms": [
          {
            "form": "nsfw",
            "state": "done",
            "result": {
              "nsfw": false
            }
          }
        ]
      }
      

      Or you could just run the NSFW model locally if you don't have that many uploads.

      If you know of a way to pre-process uploads before nginx sends them to Lemmy, it might be useful.
