I’m very interested in this case and am curious to see where the courts draw the line here.
Beware of an incoming hot take - I don’t see the concept of training an AI on published works as much different from a human learning from published works, as long as both go on to make their own original works. I have definitely seen AIs straight-up plagiarize before, but that seems like an entirely different issue from producing similar works. I think plagiarism is a problem with how the training is constrained rather than a fundamental problem with the entire concept of AI training.
One standard I could see being applied, and one that I think has some precedent: if the work the output is allegedly similar to appears anywhere in the training set, then it's a copyright violation. One valid defense against a copyright claim in court is that the defendant could reasonably have been unaware of the original work, and checking the training set seems like a reasonable equivalent for AI.
But humans make works that are similar to other works all the time. I just hope we hold AI to the same copyright standards we hold humans to. There is a big difference between a derivative work and one that actually violates copyright.
> Beware of an incoming hot take - I don’t see the concept of training an AI on published works as much different from a human learning from published works, as long as both go on to make their own original works.
The fact that this is considered a "hot take" is depressing.
It’s much less of a hot take for people in the tech community, but it is for many artists and creatives who feel threatened by AI’s potential to devalue what they’ve dedicated their lives to.