The wording of the title is a bit odd, which reminds me that legal case outcomes are usually worded like "weaker party succeeds/fails to change the status quo". So the artists lost against the companies in this case?
Anyways, important bits here:
Orrick spends the rest of his ruling explaining why he found the artists’ complaint defective, which includes various issues, but the big one being that two of the artists — McKernan and Ortiz, did not actually file copyrights on their art with the U.S. Copyright Office.
Also, Anderson copyrighted only 16 of the hundreds of works cited in the artists’ complaint. The artists had asserted that some of their images were included in the Large-scale Artificial Intelligence Open Network (LAION) open source database of billions of images created by computer scientist/machine learning (ML) researcher Christoph Schuhmann and collaborators, which all three AI art generator programs used to train.
And then
Even if that clarity is provided and even if plaintiffs narrow their allegations to limit them to Output Images that draw upon Training Images based upon copyrighted images, I am not convinced that copyright claims based a derivative theory can survive absent ‘substantial similarity’ type allegations. The cases plaintiffs rely on appear to recognize that the alleged infringer’s derivative work must still bear some similarity to the original work or contain the protected elements of the original work.
Which eh, I'm not sure I agree with. This is a new aspect of technology that isn't properly covered by existing copyright laws. Our current laws were developed to address a state of the world that no longer exists, and using those old definitions (which I think covered issues around parodies and derivative work) doesn't make sense in this case.
This isn't some individual artist drawing something similar to someone else. This is an AI that can take in all work in existence and produce new content from it without providing any compensation. This judge seems to be saying that's an okay thing to do.
did not actually file copyrights on their art with the U.S. Copyright Office.
The way they've worded this isn't really a sufficient explanation of how this works. An artist is automatically granted copyright upon the creation of a work, so it's not that they don't have the right to protect their work. It's just that, without registration, you cannot file a lawsuit to protect your work.
Copyright exists from the moment the work is created. You will have to register, however, if you wish to bring a lawsuit for infringement of a U.S. work.
I tend to agree with the judge's assessment. He must make a decision based on existing law and the plaintiffs' claim/argument. You're right that existing law doesn't cover this aspect of technology, which is why there need to be new laws enacted by Congress. And the courts are put in a no-win situation here because we've failed to establish new rules and regulations for this new technology.
The plaintiffs' claim of derivative work doesn't fit here because of what has long been established about what a derivative work looks like. AI-generated images aren't really derivative works.
I think the court has, rightfully, told them to try again, which is OK.
This is why it is bad that this is happening in the US.
You don't have the concept of the living tree doctrine in your body of law, or if you do, it's not particularly well developed. It's all about the writers' intent down there.
The writers' intent is sometimes enforced and sometimes not. Amendments 4-8 are all about criminal rights, so it's very clear the founders were deeply concerned about people being accused/convicted of crimes, yet today you can't be searched without a warrant unless the cop doesn't like you and can come up with a lie saying he's sure you were doing something illegal.
Generating arbitrary new images is extremely transformative, and reducing a zillion images to a few bytes each is pretty minimal. It is really fucking difficult to believe "draw Abbey Road as a Beeple piece" would get a commissioned human artist bankrupted, if they openly referenced that artist's entire catalog, but didn't exactly reproduce any portion of it.
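To put a rough number on the "a few bytes each" point, here's a back-of-the-envelope sketch. The figures are my own assumptions (roughly a billion fp16 weights for a Stable Diffusion-class model, roughly 2.3 billion LAION training images), not anything taken from the article or the ruling:

```python
# Back-of-the-envelope arithmetic for "a few bytes per training image".
# Assumed figures: ~1e9 model parameters stored as 2-byte (fp16) weights,
# ~2.3e9 image-text pairs in the LAION training set.
params = 1_000_000_000            # assumed parameter count
bytes_per_param = 2               # fp16 weights
training_images = 2_300_000_000   # assumed number of LAION training images

model_bytes = params * bytes_per_param
bytes_per_image = model_bytes / training_images

print(f"Model size: ~{model_bytes / 1e9:.1f} GB")
print(f"Weights per training image: ~{bytes_per_image:.2f} bytes")
```

Even if you double or quadruple those assumed numbers, you still land at a handful of bytes of weights per training image, which is the sense in which the "reduction" of the training set is minimal.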
For language models, it's even sillier. 'The network learned English by reading books!' Uh. Yeah. As opposed to what? If it's in the library, anyone can read it. That's what it's for.
AI keeps getting cited as the next big thing that will shape the world. I think this is an appropriate time to use the legal system to make sure those changes happen in a way that won't screw everything up.
The progress will happen whether we like it or not, taking a moment to clarify rules is a good thing
I think this is an appropriate time to use the legal system to make sure those changes happen in a way that won’t screw everything up.
Tell me which rules would definitely do that without screwing it up worse, for this obscenely complicated technology that's only meaningfully existed for about a year. I could use a laugh.