
AI learned from their work. Now they want compensation.

wapo.st/3OgpcBL

A rising movement of artists and authors is suing tech companies for training AI on their work without credit or payment


46 comments
  • Permanently Deleted

    • However, they should, without question, have to pay for the art they used, or stop using it if no sale can be completed. Any other outcome is absolutely going to lead to an economic collapse.

      This is the part that drives me crazy: the conclusion doesn't follow from the premise. Just because machine learning poses an economic threat to artists/writers/etc. does not mean the way the models are trained is unethical. It undoubtedly does pose a threat, but not because they're "stealing" work; it's because our economic system is fucked. I would challenge anyone to apply the same copyright argument to the more common training sets built from Reddit, Twitter, and other online forum text, which doesn't have the same copyright protections and isn't identifiable in the ML outputs. Machine learning applications have the potential to liberate humanity from millions of hours of meaningless work, so why should we be whining about it just because "they took our jobs!"?

      Just like with the Napster trials, I think our economic system and industry ought to adapt to the new technology, not cling to legal precedent to protect themselves from change. Employment should not be a prerequisite for a standard of living, full stop. If some new technology comes along and replaces the labor of a couple million people, our reaction shouldn't be to stop the progress, but to ensure the people put out of work can still afford to live without creating more meaningless work to fill their time.

      • Most models are trained unethically. The defense relies on weird claims that humans learn the same way (looking at a few references when drawing a specific thing; you need to know how something looks to draw it, lol), while the large models more or less average and weight billions of images scraped from the internet with no regard for the licenses.

        • I don't think I said "humans learn the same way", but I do think it helps to understand how ML algorithms work in comparison with existing examples of copyright infringement (i.e. photocopies, duplicated files on a hard drive, word-for-word or pixel-for-pixel duplications, etc.). ML models don't duplicate or photocopy training data; they "weight" (or, to use your word choice, "average") the data against a node structure. Other, more subjective copyright infringements are decided on a case-by-case basis, where an artist or entity has produced an "original" work that leans too heavily on a copyrighted one. It is clear that ML models aren't a straightforward duplication. If you asked one to reproduce an existing image, it couldn't recreate it exactly, because that data isn't stored in the model, only the approximate instructions for reproducing it. It might get close, especially if the example is well represented in the data set, but the image would be fundamentally "new" in the sense that it has not been copied pixel by pixel from an original, only recreated through averaging. (The toy sketch at the end of this comment makes that concrete.)

          If our concern is that AI could literally reproduce existing creative work and pass it off as original, then we should pursue legal action against those uses. But to claim that the model itself is an illegal duplication of copyrighted work is ridiculous. If our true concern is (as I think it is) that the use of ML systems may supplant the need for paid artists and writers, then I would suggest we rethink how we structure compensation for labor rather than simply placing barriers in front of AI deployment. Even if we reached some compensation agreement for the use of copyrighted material in training data, it wouldn't prevent the elimination of artistic labor; it would only solidify AI as an elite, expensive tool owned by the handful of companies that can afford the cost. It would consolidate our economy further, not democratize it.

          In my opinion, copyright law is already just a band-aid over a broader issue of labor relations, and the issue of AI training data is just a drastic widening of that same wound.
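
          To make that concrete, here's a toy sketch in Python. It uses plain PCA instead of a real generative model, so it only illustrates the weights-vs-copies arithmetic, not how diffusion or language models are actually built, and every name and size in it is made up for the example:

          ```python
          # Toy illustration: "train" by fitting a small weight matrix to a
          # data set, then try to get a training item back out of the weights.
          import numpy as np

          rng = np.random.default_rng(0)
          n_images, pixels = 1000, 64 * 64      # stand-in for 64x64 scraped images
          data = rng.random((n_images, pixels))

          # "Training": keep only the top k principal components as the model.
          k = 32
          mean = data.mean(axis=0)
          _, _, components = np.linalg.svd(data - mean, full_matrices=False)
          weights = components[:k]              # k x pixels -- the whole "model"

          # The weights hold only a small fraction of the numbers in the data set...
          print(weights.size / data.size)       # ~0.03 here

          # ...so reconstructing a training image from them is approximate,
          # never a pixel-for-pixel copy.
          original = data[0]
          reconstruction = mean + (original - mean) @ weights.T @ weights
          print(np.abs(original - reconstruction).max())  # clearly nonzero
          ```

          Real models are vastly bigger, but the same ratio holds: the weights are orders of magnitude smaller than the scraped training data, so they can't contain literal copies of all of it. Heavily duplicated examples can still be memorized closely enough to matter, which is exactly the "pursue those uses" case above.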

          • My concern is that billions of works are being used for training with no consent and no regard for the license, and "the model learns" is not an excuse. If someone saved some of my content for personal use, sure, I don't mind that at all. But a huge-scale, for-profit scraping operation downloading all the content it physically can? Fuck off. I just blocked all the crawlers from ever accessing my websites, roughly as sketched below (well, Google and Bing literally refuse to index my stuff properly anyway, so fuck them too; none of them even managed to read the sitemap properly, and it was definitely valid).
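
            For anyone who wants to do the same, it's roughly the robots.txt below. GPTBot (OpenAI), CCBot (Common Crawl), and Google-Extended (Google's AI-training crawler) are published user-agent tokens; the catch-all at the end is what actually blocks "all the crawlers", search engines included. Keep in mind robots.txt is purely advisory, so it only stops crawlers that choose to honor it:

            ```
            # Block some known AI-training crawlers by name.
            User-agent: GPTBot
            Disallow: /

            User-agent: CCBot
            Disallow: /

            User-agent: Google-Extended
            Disallow: /

            # Block everything else too (this also drops the site from
            # ordinary search indexing).
            User-agent: *
            Disallow: /
            ```

            Crawlers that ignore robots.txt have to be refused server-side instead, e.g. by matching their user-agent strings or IP ranges in the web server config.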

      • With a universal basic income, artists would be free to make art for fun instead of for survival. Given enough job destruction in transport, a UBI-like solution may become mandatory, as there simply won't be enough jobs for humans generally.
