
No One Knows How to Deal With 'Student-on-Student' AI CSAM


cross-posted from: https://rss.ponder.cat/post/193608

Schools, parents, police, and existing laws are not prepared to deal with the growing problem of students and minors using generative AI tools to create child sexual abuse material of their peers, according to a new report from researchers at the Stanford Cyber Policy Center.

The report, which is based on public records and interviews with NGOs, internet platform staff, law enforcement, government employees, legislators, victims, parents, and groups that offer online training to schools, found that despite the harm that nonconsensual imagery causes, the practice has been normalized by mainstream online platforms and certain online communities.

“Respondents told us there is a sense of normalization or legitimacy among those who create and share AI CSAM,” the report said. “This perception is fueled by open discussions in clear web forums, a sense of community through the sharing of tips, the accessibility of nudify apps, and the presence of community members in countries where AI CSAM is legal.”

The report says that while children may recognize that AI-generating nonconsensual content is wrong, they can assume “it’s legal, believing that if it were truly illegal, there wouldn’t be an app for it.” The report, which cites several 404 Media stories about this issue, notes that this normalization is in part a result of many “nudify” apps being available on the Google and Apple app stores, and that their ability to AI-generate nonconsensual nudity is openly advertised to students on Google and social media platforms like Instagram and TikTok. One NGO employee told the authors of the report that “there are hundreds of nudify apps” that lack basic built-in safety features to prevent the creation of CSAM, and that even as an expert in the field he regularly encounters AI tools he’s never heard of, but that on certain social media platforms “everyone is talking about them.”

The report notes that while 38 U.S. states now have laws about AI CSAM and the newly signed federal Take It Down Act will further penalize AI CSAM, states “failed to anticipate that student-on-student cases would be a common fact pattern. As a result, that wave of legislation did not account for child offenders. Only now are legislators beginning to respond, with measures such as bills defining student-on-student use of nudify apps as a form of cyberbullying.”

One law enforcement officer told the researchers how accessible these apps are. “You can download an app in one minute, take a picture in 30 seconds, and that child will be impacted for the rest of their life,” they said.

One student victim interviewed for the report said that she struggled to believe that someone actually AI-generated nude images of her when she first learned about them. She knew other students used AI for writing papers, but was not aware people could use AI to create nude images. “People will start rumors about anything for no reason,” she said. “It took a few days to believe that this actually happened.”

Another victim and her mother interviewed for the report described the shock of seeing the images for the first time. “Remember Photoshop?” the mother asked. “I thought it would be like that. But it’s not. It looks just like her. You could see that someone might believe that was really her naked.”

One victim, whose original photo was taken from a non-social media site, said that someone took it and “ruined it by making it creepy [...] he turned it into a curvy boob monster, you feel so out of control.”

In an email to school staff, one victim said, “I was unable to concentrate or feel safe at school. I felt very vulnerable and deeply troubled. The investigation, media coverage, meetings with administrators, no-contact order [against the perpetrator], and the gossip swirl distracted me from school and class work. This is a terrible way to start high school.”

One mother of a victim the researchers interviewed for the report feared that the images could crop up in the future, potentially affecting her daughter’s college applications, job opportunities, or relationships. “She also expressed a loss of trust in teachers, worrying that they might be unwilling to write a positive college recommendation letter for her daughter due to how events unfolded after the images were revealed,” the report said.

💡Has AI-generated content been a problem in your school? I would love to hear from you. Using a non-work device, you can message me securely on Signal at emanuel.404. Otherwise, send me an email at emanuel@404media.co.

In 2024, Jason and I wrote a story about how one school in Washington state struggled to deal with its students using a nudify app on other students. The story showed how teachers and school administrators weren’t familiar with the technology, and initially failed to report the incident to the police even though it legally qualified as “sexual abuse” and school administrators are “mandatory reporters.”

According to the Stanford report, many teachers lack training on how to respond to a nudify incident at their school. A Center for Democracy and Technology report found that 62 percent of teachers say their school has not provided guidance on policies for handling incidents involving authentic or AI nonconsensual intimate imagery. A 2024 survey of teachers and principals found that 56 percent did not get any training on “AI deepfakes.” One provider told the authors of the report that while many schools have crisis management plans for “active shooter situations, they had never heard of a school having a crisis management plan for a nudify incident, or even for a real nude image of a student being circulated.”

The report makes several recommendations to schools, like providing victims with third-party counseling services and academic accommodations, drafting language to communicate with the school community when an incident occurs, ensuring that students are not discouraged or punished for reporting incidents, and contacting the school’s legal counsel to assess the school’s legal obligations, including its responsibility as a “mandatory reporter.”

The authors also emphasized the importance of anonymous tip lines that allow students to report incidents safely. The report cites two incidents that were initially discovered this way: one in Pennsylvania, where a student used the state’s Safe2Say Something tipline to report that students were AI-generating nude images of their peers, and another at a school in Washington, which first learned about a nudify incident through a submission to the school’s online harassment, intimidation, and bullying tipline.

One provider of training to schools emphasized the importance of such reporting tools, saying, “Anonymous reporting tools are one of the most important things we can have in our school systems,” because many students lack a trusted adult they can turn to.

Notably, the report does not take a position on whether schools should educate students about nudify apps because “there are legitimate concerns that this instruction could inadvertently educate students about the existence of these apps.”


From 404 Media via this RSS feed

35 comments
  • Ah yes. Generative AI has revolutionized:

    • Fraud
    • Spam
    • Revenge porn
    • Cheating
    • Schizophrenia

    And now:

    • Child porn

    But hey, at least it also made many existing services noticeably worse, is causing significant environmental damage and will cause millions to lose their jobs! That makes it all worth it!

    • It makes the line go up, and at the end of the day, isn't that what really matters?

      • It doesn't even do that

        OpenAI is bleeding money and none of the other big tech corporations investing in AI are seeing any returns either. The only people making money off of this are the hardware providers like Nvidia. Generative AI is just not very useful for anything that requires any minimum standard of quality; the only thing it's good for is generating unimaginable amounts of useless slop.

        GenAI is a big tech Hail Mary like the blockchain and the metaverse before it. Google, Microsoft, and Meta desperately need there to be another growth market; they need the line to go up, but all the current business models have been exhausted. They are betting on GenAI being the next growth market, the next big thing, merely hoping that it will make the line go up in the foreseeable future.

    • Buh…buh…..it’s technowogie! What’s the matter luddite? You have something against Da FOOTUR?!?

  • I seriously didn't think they had nudify apps on the app stores. Firstly, these apps are blatantly unethical in virtually all use cases; secondly, I naively thought that app stores were typical American prudes, hostile to anything involving sex or nudity. And here they are peddling revenge porn apps.

    • only real ethical use case is using it for yourself, like on your own picture for entertainment purposes, but of course people like us don't need that because hexbear users are already a fucking snacc

  • Aren't these AI programs paid for? Wouldn't they register customer info? How are kids even signing up for these things without their parents cards or whatever? I don't see how there aren't ways to figure this out. It doesn't seem that easy for kids to produce this, there must be ways to make it more difficult.

    I don't even use Google, much less use AI, so I genuinely don't even know how this works, and I normally don't like surveillance in any way, but seems kinda weird that they're just immediately throwing their hands up in the air about CSAM of all things.

    • Many of the ai models can be run locally. And many of the ones that are paid give free trials.

      I mean, I could set this shit up in a single afternoon on my laptop, fully locally and with FOSS. And that was 2 years ago when I wanted to experiment with image generation for a while. I imagine going the local route is much easier now.

      That's kind of why "solving" AI under capitalism is close to impossible. Everything is a temporary workaround or a bandaid. And by capitalism I don't mean "rule by bourgeois classes" but straight up commodity production. I would be very surprised if this shit didn't start appearing in AES systems as well to some extent.

      • Well, that leads me to the other thing I was thinking: maybe it's not even other kids as much as adults making this CSAM.

        I wouldn't even know how to locally set up some AI like that. I know kids are usually on the cusp of whatever is being released and know more than older folks like me, but I still think it sounds like a bit much for a kid who just wants to see some nudes. These kids who have the CSAM made of them usually have some social media, and who knows who's taking their picture and processing this stuff. That will be much more difficult to resolve, if not impossible to a certain degree like you said. But I don't think it sounds impossible to prevent kids from doing this stuff to other kids. They make it sound like this is some naughty kids up to no good when in reality these are tools being utilized by peds, in my opinion.

  • The reason laws are falling behind is because this type of AI was intended for CSAM and revenge porn from the start. It was one of the first things people pointed out would happen: if you can create pictures of people naked, it will be used without people's consent. The only conclusion I can come to is the creators of these apps and programs are doing this deliberately. They intended for this to be legal non-consensual material from the very get go. The purpose of the system is whatever the system does.

  • Combine this technology with trans-misogyny and you will have the perfect recipe for hell on earth for anyone trying to transition. I look forward to having my life in the hands of whatever asshole decides to humiliate me for petty revenge at the click of a few buttons

    .

    <<< Me realising I ain't built for this
