I pay OpenAI for a chat and image generation service. If I make Mario or something random, I pay them the same amount. If I go sell those pictures of Mario that I made with the service then I am liable for infringement, not OpenAI. OpenAI is not charging me more for making Mario or anything else.
Same as if I draw Mario to keep privately or draw him and sell the images. Adobe is never considered liable, even though I used its software to infringe and paid Adobe for the ability to do so.
Please tell me how it’s different. Don’t tell me scale because they don’t care if it’s one or 1 million Marios. If someone was making money on a million Marios they would be sued independently, whether or not they used AI.
The short version is that it's a licensing issue. All art is free to view, but the moment you try to integrate it into a commercial product/service you'll owe someone money unless the artist is given fair compensation in some other form.
For example, artists agree to provide a usage license to popular art sites to host and display their works. That license does not transfer to the guy/company scraping portfolios to fuel their AI. Unfortunately, as we can see from the article, AI may be able to generate but it still lacks imagination and inspiration; traits fundamental to creating truly derivative works. When money changes hands, the artist is denied compensation, because the work was never licensed and they are excluded from their portion of the sale.
Another example: I am a photographer uploading my images to a stock image site. As part of the ToS I agree to provide a license to host, display, and relicense to buyers on my behalf. The stock site now offers an AI that creates new images based on its portfolio. The catch is that all attributed works result in a monetary payment to the artists. When buyers license AI-generated works based on my images, I get a percentage of the sale. The stock site is legally compliant because it has a license to use my work, and I receive fair compensation when the images are used. The cycle is complete.
It gets trickier in practice, but licensing and compensation are the crux of the matter.
But it’s not reposting copyrighted images. It is analyzing them, possibly a long time ago, then using complex math and statistics to learn how to make new images when requested, on the fly. It’s an automated version of the way humans learn how to make art or take pictures. If it happens to produce Mario very closely it is because it learned very well.
That is why this isn’t cut and dried. And why it might be good to think of it as derivative works. I don’t think you will be able to nail down this idea of imagination and inspiration. It’s just not that straightforward.
Edit: Also, the generator is not pumping out copyrighted images intentionally. It is waiting for a prompt from a user, who will then go and post the result somewhere. If it is too close to Mario, it is that human user who has violated copyright. They only used the generator as a tool. I feel like that is very relevant.
I think the problem is that the user often doesn't realize that they're infringing on copyright because the prompts don't specifically mention the copyrighted work.
In other words, there is a need for a way to tell the AI that only public domain work can be used in the generation.
Very possible people don’t realize. And you know what? We shouldn’t care. But if someone generates a Mario and puts it on their website or makes a fanfic comic, it doesn’t matter how they made it… go after that person. Just like you always have, Nintendo…
But I worry for the future of any tool if they win this. Add a feature to a computer art tool that feels too “generatey” and you’d better watch out… I worry about human artists having to prove that the sources they learned from were not copyrighted works when they lean into a style that feels like Nintendo’s…
I'm not an infringement lawyer... but Disney and Nintendo, and the NY Times and a whole lot of artists, seem to think they have a case.
I suspect it is the same as using a sample from a Beyoncé song in something you are selling: you may have a problem with Beyoncé's lawyers.
Does it matter? I can infringe on copyright without AI. I'm infringing copyright right now in my imagination. I'm hoping this can set boundaries on copyright enforcement or begin neutering it altogether.
Your imagination is not making money off of it. The idea that OpenAI is making money because they use the work of others is unsettling. What you wish for is an OpenAI monopoly the size of Google, with control over creativity... sounds like a dystopia.
I do make money off of it. I just understand that I need to blend enough of other people's ideas and change names until it's considered a "unique idea" which, of course, no idea ever is.
Why is that unsettling? People make money off of other people's ideas all the time. The boundary of when this is allowed and when it isn't is pretty arbitrary.
The only reason the AIs know what SpongeBob looks like is because they are using tons of copyrighted images in their database as part of their commercial product.
The problem with copyright law is you need, well, copies. AI systems don't have a database of images that they reference. They learn like we do. When you picture SpongeBob in your mind, you're not pulling up a reference image from a database. You just "learned" what he looks like.

That's how AI models work. They are giant strings of math loosely modeled on the structure of the human brain. You train them by showing them a bunch of images: this is SpongeBob, this is a horse, this is a cowboy hat. The model learns what these things are, but doesn't literally copy the images. Then when you ask for "SpongeBob on a horse wearing a cowboy hat" the model uses the patterns it learned to produce the image you asked for. When you're doing the training, presumably you made copies of images for that (which is arguably fair use), but the model itself has no copies.

I don't know how all of this shakes out, as I'm not an expert in copyright law, but I do know an essential element is the existence of copies, which AI models do not contain. That's why these lawsuits haven't gone anywhere yet, and why AI companies and their lawyers were comfortable enough to invest billions doing this in the first place. I mostly just want to clear up the "database" misconception, since it's pretty common.
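To make that concrete, here's a toy sketch (this is obviously not OpenAI's actual code; the "dataset" and model are made-up stand-ins) showing that what survives training is a bag of learned weight tensors, not the images that were trained on:

```python
# Minimal illustration: train a tiny model, then look at what actually gets saved.
import torch
import torch.nn as nn

# Hypothetical stand-in for a real dataset: 1,000 random 32x32 RGB "images"
# with labels ("this is SpongeBob", "this is a horse", ...).
images = torch.rand(1000, 3 * 32 * 32)
labels = torch.randint(0, 10, (1000,))

model = nn.Sequential(nn.Linear(3 * 32 * 32, 128), nn.ReLU(), nn.Linear(128, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(5):                 # tiny training loop
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()                # nudge the weights toward the learned patterns
    optimizer.step()

# The shipped artifact is only the weight tensors; the training images are not in it.
torch.save(model.state_dict(), "model.pt")
print({name: tuple(w.shape) for name, w in model.state_dict().items()})
```

Whether making those training-time copies was fair use is the part the courts will argue about; the point here is just that the trained model file itself contains weights, not a library of pictures.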
Copyright does protect fictional characters, however usually these characters are also registered trademarks, in which case it's a very obvious violation to reproduce the likeness of a character.
Is there anything more relaxing than watching multinational corporations get ready for a slap fight?
Edit: “relaxing” isn’t quite the word I’m looking for. I’m trying to express how satisfying it is to see corporations suffer the consequences of their own legal shenanigans. It’s also relieving to know that I have zero stake in this situation, and won’t be affected by the outcome in a meaningful way. I don’t have to care, or feel guilty for not caring.
Yeah, that’s why I chose the words “in a meaningful way”. It’s relatively new technology, so you got along without it before. You can do it again.
I don’t think that’ll happen, though. There’s too much interest, potential, and money in the concept to kill it completely. Plus, we’re all acting as free beta testers, which is incredibly valuable. There’ll be a lot of motivation to find a compromise and keep it going.
These companies will take FOSS AI models from the cold, dead, torrenting hands of the free internet :p
fr though, both of these corp groups push against FOSS AI - media corps in the name of """intellectual property""" and closed AI companies in the name of """safety""" (really monopoly and control). But the resilience of distributed coordination and hosting makes it basically impossible to kill, just like how the war on piracy is nearly futile.