‘Impossible’ to create AI tools like ChatGPT without copyrighted material, OpenAI says::Pressure grows on artificial intelligence firms over the content used to train their products
The main difference between the two in your analogy, and the one with great bearing on this particular problem, is that the machine learning model is a product intended to be monetized.
I don't think it is. We keep awarding all these non-human things more rights than we ourselves have. You can't put a corporation in jail, but you can put me in jail. I don't have freedom from religion, but a corporation does.
Corporations are not people, and should not be treated as such.
If a company does something illegal, the penalty should be spread to the board. It’d make them think twice about breaking the law.
We should not be awarding human rights to non-human, non-sentient creations. LLMs and generative AI of any kind are not human and should never be treated as such.
Honestly, yes. I’m ok with that. People are not entitled to be able to do anything they want with someone else’s IP. 90 years is almost reasonable. Cut it in half and I’d also consider it fairly reasonable.
I’m all for expanding copyright for individuals and small companies (small media companies, photographers who are incorporated, artists who make money based on commissions, etc) and reducing it for mega corps, but there’s an extremely fine line around that.
I really don't understand this whole "learning" thing that everybody claims these models are doing.
A Markov chain algorithm that takes text as input and outputs the next predicted word isn't colloquially called "learning", yet it's fundamentally the same process, just less sophisticated.
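For concreteness, here's a toy sketch of the kind of word-level Markov chain I mean (purely illustrative; the corpus and function names are made up): all it "learns" is a table of which word follows which, and it predicts by sampling from those counts.

```python
# Toy word-level Markov chain (illustrative sketch, not any real library):
# "training" is just counting which word follows which in the input text.
import random
from collections import defaultdict

def train(text):
    # Tally, for each word, how often each other word follows it.
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def next_word(counts, current):
    # Predict by sampling in proportion to the observed follow-up counts.
    followers = counts.get(current)
    if not followers:
        return None
    candidates, weights = zip(*followers.items())
    return random.choices(candidates, weights=weights)[0]

corpus = "the cat sat on the mat and the cat slept on the mat"
model = train(corpus)
print(next_word(model, "the"))  # e.g. "cat" or "mat", weighted by frequency
```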
They take input, apply a statistical model to it, and generate output derived from the input. Humans have creativity, lateral thinking and the ability to understand context and meaning. Most importantly, with art and creative writing, they're trying to express something.
"AI" has none of these things, just a probability for which token comes next given the tokens that are already there.
I don't think "learning" is a word reserved only for high-minded creativity. Mere rote memorization and repetition is sometimes called learning. And there are many intermediate states between the two.
I think the best counter to this is to consider the zero learning state. A language model or art model without any training data at all will output static, basically. Random noise.
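As a rough illustration of that zero-learning state (a toy sketch with made-up numbers, not any real model), sampling from random, untrained weights gives you exactly that kind of static:

```python
# Hypothetical "untrained model": random weights, no data ever seen.
import numpy as np

rng = np.random.default_rng(0)
vocab = list("abcdefghijklmnopqrstuvwxyz ")

logits = rng.normal(size=len(vocab))            # arbitrary, unshaped by training
probs = np.exp(logits) / np.exp(logits).sum()   # softmax over the vocabulary

# Sampling from this distribution yields characters with no learned structure.
print("".join(rng.choice(vocab, size=60, p=probs)))  # gibberish / noise
```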
A group of humans socially isolated from the rest of the world will independently create art and music. It has happened an uncountable number of times. It seems to be a fairly automatic emergent property of human societies.
With that being the case, we can safely say that however creativity works, it's not merely compositing things we've seen or heard before.
I disagree with this analysis. Socially isolated humans aren't truly isolated; they still have nature to imitate. There's no such thing as a human with no training data. We gather training data our whole lives, possibly from the womb. Even in an isolated group, we still have others of the group to imitate, who in turn have ancestors, and, again, animals and natural phenomena. I would argue that all creativity is precisely compositing things we've seen or heard before.
Out of curiosity, how far do you extend this logic?
Let's say I'm an artist who does fractal art, and I do a line of images where I take jpegs of copyright-protected art and use the data as a seed to my fractal generation function.
Have I then, in that instance, taken a copyrighted work and simply applied some static algorithm to it and passed it off as my own work, or have I done something truly transformative?
The final image I'm displaying as my own art has no meaningful visual cues to the original image, as it's just lines and colors generated using the image as a seed, but I've also not applied any "human artistry" to it, as I've just run it through an algorithm.
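To be concrete, one way I might do that (the file name and the seed-to-parameter mapping here are just made up for illustration) is to hash the JPEG's bytes into a number and use that number for nothing except choosing the parameters of a Julia set:

```python
# Hypothetical sketch: derive a fractal's parameters from a copyrighted JPEG.
# Nothing visual from the source survives -- only a hash-derived seed value.
import hashlib

def seed_from_image(path):
    with open(path, "rb") as f:
        return int(hashlib.sha256(f.read()).hexdigest(), 16)

def julia_params(seed):
    # Map the seed onto a point c in the complex plane that defines the fractal.
    re = (seed % 10_000) / 10_000 * 2 - 1
    im = ((seed // 10_000) % 10_000) / 10_000 * 2 - 1
    return complex(re, im)

def escape_time(z, c, max_iter=100):
    # Standard Julia-set iteration: how quickly z escapes under z -> z*z + c.
    for i in range(max_iter):
        if abs(z) > 2:
            return i
        z = z * z + c
    return max_iter

# c = julia_params(seed_from_image("protected_art.jpg"))   # hypothetical file
# Coloring escape_time(complex(x, y), c) per pixel (x, y) produces the image.
```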
Should I have to pay the original copyright holder?
If so, what makes that fundamentally different from me looking at the copyrighted image and drawing something that it inspired me to draw?
If not, what makes that fundamentally different from AI images?
I feel like you latched on to one sentence in my post and didn't engage with the rest of it at all.
That sentence, in your defense, was my most poorly articulated, but I feel like you responded devoid of any context.
Am I to take it, from your response, that you think that a fractal image that uses a copyrighted image as a seed to its random number generator would be copyright infringement?
If so, how much do I, as the creator, have to "transform" that base binary string to make it "fair use" in your mind? Are random bit flips sufficient?
If so, how is me doing that different than having the machine do that as a tool?
If not, how is that different than me editing the bits using a graphical tool?
That's only because I thought your last sentence was the biggest difference -- everything else is all stuff you did (or theoretically would do), which is the clincher.
(And besides, on Lemmy, comments with effort are sometimes disincentivized 😉)
Art can include buying a toilet and turning it on its side and calling it a fountain. And I imagine, in your scenario, that you could process an entire comic book by flipping just one pixel on each page, print it out, arrange it in a massive mural, and get it featured in the Louvre with the title "is this fair use?" But if you started printing out comic books en masse with the intent to simply resell them in their slightly changed form, you might get in trouble, and probably rightly so. But that's a question of fair use, isn't it?
Fair on all counts. I guess my counter then would be, what is AI art other than running a bunch of pieces of other art through a computer system, then adding some "stuff you did" (to use your phrase) via a prompt, and then submitting the output as your own art?
That's nearly identical to my fractal example, which I think you're saying would actually be fair use?
As far as I know, courts have basically decided that things need to be created by a person first and foremost, not by, say, a monkey (and yes there was an attempt to copyright a monkey selfie). In the flipped pixel example I personally classified as art, there was a lot more transformation than simply flipping a pixel, to the point where it hopefully transformed the original into having a new and unique intent.
You could theoretically make a piece of art using generative AI in a similar way, but it's the human element of composition that would make it art (or, at the very least, something novel and not just regurgitated). In theory, you could pull all of the works of a single comic artist, feed them into a generative AI, and do the exact same thing, making a mural of This Is Not Wally Wood or something.
But hopping onto a generative AI that's been trained on the works of countless artists (and not on the output of other AIs, because AI degenerates when it trains on itself) and simply typing in a phrase... Well, at that point it's closer to pushing a button on a machine that flicks paint onto a canvas, and you didn't make the machine, and it's used by thousands of other people every day. Only so much paint flicking can be done before it's not particularly interesting or unique.
The problem is that a human doesn't absorb exact copies of what they learn from, and fair use doesn't include taking entire works, shoving them in a box, and shaking it until something you want comes out.
Except they literally don't. Human memory doesn't retain an exact copy of things. Very good recall isn't the same as exact recall. And human beings can't grab everything they see and instantly use it.