One, yes, some models were trained on CSAM. In AI you'll have checkpoints of a model: as a model learns new things, you get a new checkpoint. SD1.5 was the base model used here. SD1.5 itself was not trained on any CSAM, but people have given SD1.5 additional training to create new checkpoints that have CSAM baked in. Likely, that is what this person was using.
Two, yes, you can get something out of a model that was never in the model to begin with. It's complicated, but one way to think about it: a program draws raw pixels to the screen, and your GPU applies some math to smooth that out. That math adds information the program never explicitly pushed to your screen.
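As a rough illustration of how plain math can produce values the source never contained, here's a minimal sketch (made-up pixel values, plain NumPy) of linear smoothing creating in-between samples that were never drawn:

```python
import numpy as np

# A tiny 1-D "image": just two pixel values the program actually drew.
original = np.array([0.0, 1.0])

# Linear interpolation to 5 samples -- the kind of smoothing a GPU filter does.
positions = np.linspace(0, 1, 5)
smoothed = np.interp(positions, [0, 1], original)

print(smoothed)  # [0.   0.25 0.5  0.75 1.  ]
# 0.25, 0.5, and 0.75 were never in the original data; the math created them.
```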
Models have tensors, which, long story short, are a way to express an average of where pixels should land to arrive at some object. This is why you see six-fingered people in AI art. No six-fingered person was fed into the model; what you are seeing is the averaging of weights pushing pixels between two different learned relationships for the word "hand". That averaging adds new information in the form of an extra finger.
I won't deep dive into the math of it, but there are ways to coax new ways of averaging weights to arrive at new outcomes. The training part is what tells the model that the relationship between A and C is B'. But if we wanted D' as the outcome, we could retrain the model to average C and E, OR we could use things called LoRAs (low-rank adaptations) to nudge B' toward D'. This doesn't require us to retrain the model; we are just providing guidance on how to average things the model has already seen. Retraining on C and E to get D' is the route old models and checkpoints had to go, and that requires a lot of images. Taking the outcome B' and putting a thumb on the scale to push it to D' is an easier route: it just requires a generalized teaching of how to skew the weights, which is much easier.
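To make the "thumb on the scale" idea concrete, here's a minimal sketch (toy NumPy, made-up shapes, not any real model's weights) of the low-rank trick a LoRA uses: instead of relearning a big weight matrix, you learn two small matrices whose product nudges the existing one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend this is one frozen weight matrix inside the base model.
d = 512
W = rng.normal(size=(d, d))

# A LoRA learns two small matrices of rank r << d instead of a whole new W.
r = 8
A = rng.normal(size=(r, d)) * 0.01   # trained to provide the new guidance
B = rng.normal(size=(d, r)) * 0.01   # trained to provide the new guidance
alpha = 1.0                          # how hard the thumb presses on the scale

# At inference, the adjusted weight is the original plus the low-rank nudge.
W_adapted = W + alpha * (B @ A)

print(W.size, "base parameters vs", A.size + B.size, "LoRA parameters")
```

The point of the sketch is the parameter count: the nudge is tiny compared with the base weights, which is why this route needs far less data and compute than retraining.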
I know this is massively summarizing things, and yeah, I get it, it's a bit hard to conceptualize how we can go from something like MSAA to generating CSAM. And yeah, I'm skipping over a lot of steps here. But at the end of the day, those tensors are just numbers that tell the program how to push pixels around given a word. You can do math on those numbers to get results the numbers weren't originally arranged to produce in the first place. AI models are not databases; they aren't recalling pixel-for-pixel images they've seen before, they're averaging out averages of averages.
I think this case will be a slam dunk, because it's highly likely this person's model was an SD1.5 checkpoint that was trained on very bad things. But now that you can change how the averaging itself works, rather than the source tensors in the model, you can teach a model new ways to average its weights to obtain results it didn't originally have, without any kind of source material to train it on. It's like the difference between spatial antialiasing and MSAA.
In the eyes of the law, intent does matter, as well as how it's responded to.
For CSAM, you have to knowingly possess it or have sought to possess it.
The AI companies use a project (Common Crawl) that indexes everything on the Internet, like Google does, but with its output publicly available for free.
They use this data via another project, https://laion.ai/ , which uses the data to find images with descriptions attached, does some checks to validate that the descriptions make sense, and then publishes a list of "location of the image, description of the image" pairs.
The AI companies use that list to grab the images and train an AI on them in conjunction with the descriptions.
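As a rough sketch of what such a list looks like (hypothetical rows and simplified field names; real LAION releases have more columns and billions of rows), the dataset is basically rows of image location plus caption, and the training pipeline downloads each image and pairs it with its text:

```python
import csv, io

# Hypothetical rows in the spirit of a LAION-style release: the dataset stores
# where an image lives plus its caption, not the image pixels themselves.
raw = io.StringIO(
    "url,caption\n"
    "https://example.com/img/001.jpg,a pony standing in a field\n"
    "https://example.com/img/002.jpg,a hang glider over the coast\n"
)

for row in csv.DictReader(raw):
    # A training pipeline would download row["url"] and use row["caption"]
    # as the text side of that training example.
    print(row["url"], "->", row["caption"])
```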
So, people at Stanford were doing research on the LAION dataset when they found the instances of CSAM.
The LAION project pulled its datasets offline while things were checked and new safeguards were put in place.
The AI companies also pulled their models (if public) while the images were removed from the dataset and new safeguards were implemented.
Most of the CSAM images in the dataset were already gone by the time the AI companies would have attempted to access them, but some were not.
A very obvious lack of intent to acquire the material (in fact, a lack of awareness that the material was possessed at all), transparency in the response, taking steps to prevent further distribution, and taking action to prevent it from happening again both provide a defense against accusations and make anyone interested less likely to want to bring them.
On the other hand, the people who generated the images were knowingly doing so, which is a no-no.
They wouldn't be able to generate it had there been none in the training data, so I assume the labelling and verification systems you talk about aren't very good.
That's not accurate. The systems are designed to generate previously unseen concepts or images by combining known concepts.
It's why it can give you an image of a pony using a hang glider, despite never having seen that. It knows what ponies look like, and it knows what hang gliding looks like, so it can find a way to put both into the image. Where it doesn't know, it will make stuff up from what it does know, often requiring a very detailed user prompt to describe how a horse would fit in a hang glider, or to say that it shouldn't have a little person sticking out of its back.
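As a rough sketch of what "combining known concepts" looks like in practice (using the Hugging Face diffusers library; the model ID and settings are illustrative, any SD1.5-family checkpoint exposes the same interface), the prompt simply names two concepts the model learned separately and it blends them:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a public SD1.5 base checkpoint (illustrative model ID).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# No training image needs to have shown these two things together;
# the model combines the concepts it learned separately.
image = pipe("a pony flying a hang glider over a valley").images[0]
image.save("pony_hang_glider.png")
```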
Could you hypothetically describe csam without describing an adult with a child's head, or specifying that it's a naked child?
That's what a person trying to generate csam would need to do, because it doesn't have those concepts.
If you just asked it directly, like the "horse flying a hang glider" example I gave before, you would get what you describe, because it's using the only "naked" it knows.
You would need to specifically ask it to de-emphasize adult characteristics and emphasize child characteristics.
That doesn't mean that it was trained on that content.
For context from the article:
The DOJ alleged that evidence from his laptop showed that Anderegg "used extremely specific and explicit prompts to create these images," including "specific 'negative' prompts—that is, prompts that direct the GenAI model on what not to include in generated content—to avoid creating images that depict adults."
Also: Pretending like I was attacking you for knowing how the technology works is a bullshit move.
I'm complaining about your defence of them, not about your explanation of the technology... But that just shows how willing you are to "spin" things in their defence. Little unpaid footman.
You made an incorrect statement about how the technology worked and I corrected you. You doubled down and I made a more detailed explanation.
You called me a "creep" for this, and again just now call me a "little unpaid footman".
If anything's bullshit it's your making it aggressive when it doesn't need to be.
I never said their system was perfect, or that they made no mistakes. I said the system does not need csam to generate csam. I explained why their actions weren't illegal.
You need to work on your reading comprehension if you can't see how those are different from being a bootlicker.
I was like, how do they know what they have - and you were like "another AI has labelled it all, and every now and then a human checks its work"....
It's AIs all the way down with you.
"Open AI investigated it's self and confirmed it didn't have CSAM in the training data"
They couldn't find out if they wanted to; the training data is too large, and the labelling AI isn't designed to recognise or label CSAM.
....and yeah, sitting around and using your time to defend tech-bro billionaires IS creepy. They're not about to thank you my guy.
"I just understand the technology"
Yeah, and you're not acknowledging that what I'm saying is accurate. The "labelling AI" can't recognise and report CSAM, and the Tech Bros don't have an accurate idea of what they have stored in their training data.
So yeah, you're being creepy when you do all these mental gymnastics to defend them...
.... it's just like claiming the NSA doesn't listen to phone conversations, only it's been revealed they do have human operators hearing bits of conversations.
You're a narc and an apologist, and it's creepy because it's misinformation. It's spin, and you're volunteering your time to defend them.
Why are you talking about OpenAI? They're not even involved in this.
You asked why they weren't in legal trouble. I told you.
You asserted that any safeguards they put in place ("they" in this case being an open source project, and a startup that provides their models for free, not the billionaires you think you're mad at) couldn't be functional because the tool requires csam to generate csam. I told you that was incorrect, because the whole point is to generate things it hasn't seen before.
You exploded into a set of insult-laden rants because, as far as I can tell, you don't want to say "oh, I misunderstood. I still think they're grossly irresponsible for not including safeguards in the first place, and how can we actually trust the safeguards they have now?". You know, like a reasonable person would have.
Instead you assumed that the only reason someone could disagree with your factually incorrect assumptions about how something works is that they're a "creepy misinformation-spreading narc" (... Narc? That one doesn't even make sense)
Do you even know what misinformation means? Do you think that "ability to magic csam into existence from nothing" (which is what it can do) is something that I think is somehow better than it only being able to make it from known examples?