If an artist trains an image generation AI on their own digital art catalog, should they be able to get a copyright on what it produces? When does a tool change from a tool to a non-human creator?
Under US copyright law, only works created by humans can be copyrighted. Courts have (imho rightly) denied copyrights to AI-generated images.
My question is: when do you think AI image tools cross from the realm of a “tool” (one that, for example, generates and fills in a background so an item can be removed from a photo) into the realm of “a human didn’t make this”?
What if an artist trains an AI so specialized it only makes their style of art? At what point do you think the images they create with it begin to count as their “work product”?
Are you talking about an artist exclusively training an AI model on their own images until it can generate images that look like they were created by the artist and carry some semblance of intent?
In order to get anything that looks remotely like what people want, I'm pretty sure they would have to upload millions of pictures of their own creation first. So most people just layer their own images on top of the giant mash of ethically sketchy data that's already there.
LoRA models still have the underlying fully trained base model underneath; a LoRA is not a complete replacement or complete modification of the base model's weights, just a small adjustment layered on top of them.
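To illustrate that point, here's a minimal sketch (toy NumPy code, not any real library's API; all names and sizes are made up for illustration) of the idea behind LoRA: the frozen base weight matrix stays intact, and the adapter only contributes a small low-rank correction on top of it.

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 8, 2                      # layer width, LoRA rank (r << d)
W = rng.standard_normal((d, d))  # frozen base weights (trained on the big dataset)
A = rng.standard_normal((r, d)) * 0.01  # small trainable adapter matrices
B = rng.standard_normal((d, r)) * 0.01
alpha = 1.0                      # LoRA scaling factor

def forward(x):
    # Effective weights = base + low-rank correction; W itself never changes.
    return x @ (W + alpha * (B @ A)).T

x = rng.standard_normal(d)
y = forward(x)

# The adapter stores far fewer parameters than the base layer it modifies:
print(W.size, A.size + B.size)  # 64 base parameters vs 32 adapter parameters here
```

So even a LoRA trained exclusively on one artist's images still rides on top of whatever the base model learned from its original training data.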
I based my answer on the assumption that you would not be using somebody else's art to train an AI. LoRA models are already brimming with other people's art.
Ahh, interesting. I almost admitted to not knowing that when I posted haha - I'm not hugely familiar with how image AI works, and wasn't sure it would be possible for one person to produce enough training material under current tech.
Interesting to think about it as a hypothetical, though.