The AI genie is here. What we're deciding now is whether we all have access to it, or whether it's a privilege afforded only to rich people, corporations, and governments.
I know a lot of people want to interpret copyright law so that allowing a machine to learn concepts from a copyrighted work counts as infringement. But consider what that would actually accomplish: it would keep AI out of the hands of regular people and place it squarely in the hands of the people and organizations wealthy and powerful enough to train it for their own use.
If this isn't actually what you want, then what's your game plan for placing copyright restrictions on AI training that will actually work? Have you considered how it's likely to play out? Are you going to be able to stop Elon Musk, Mark Zuckerberg, and the NSA from training an AI on whatever they want and using it to push propaganda on the public? As far as I can tell, all that copyright restrictions will accomplish is to concentrate the power of AI (which we're only beginning to explore) in the hands of the sorts of people who are the least likely to want to do anything good with it.
I know I'm posting this in a hostile space, and I'm sure a lot of people here disagree with my opinion on how copyright should (and should not) apply to AI training, and that's fine (the jury is literally still out on that). What I'm interested in is what your end game is. How do you expect things to actually work out if you get the laws that you want? I would personally argue that an outcome where Mark Zuckerberg gets AI and the rest of us don't is the absolute worst possibility.
First of all, the physical process of human inspiration is that a human looks at something, their optic nerves fire, those impulses activate other neurons in the brain, and an idea forms. That's exactly how an AI takes "inspiration" from images. This stuff about free will and consciousness is metaphysics. There's no meaningful difference in the actual process.
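To make that concrete, here's a minimal sketch of what a simulated neuron actually does (the layer sizes and random weights are purely illustrative, not taken from any real model): an input signal comes in, gets multiplied by connection weights, and downstream "neurons" activate, just like the optic nerve example.

```python
import numpy as np

# A simulated neuron layer: a weighted sum of inputs passed through an
# activation function -- the software analogue of neurons firing.
def layer(inputs, weights, biases):
    return np.maximum(0, inputs @ weights + biases)  # ReLU activation

rng = np.random.default_rng(0)
pixels = rng.random(16)  # stand-in for signals coming off the "optic nerve"
hidden = layer(pixels, rng.standard_normal((16, 8)), np.zeros(8))
output = layer(hidden, rng.standard_normal((8, 4)), np.zeros(4))
print(output)  # downstream activations: the network's internal response to the input
```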
Secondly, let's look at this:
SAG-AFTRA just got a contract offer that says background performers would get their likeness scanned and have it belong to the studio FOREVER so that they can simply generate these performers through AI.
This is what is happening RIGHT NOW. And you want to compare the output of an AI to a human's blood, sweat, and tears, and argue that copyright protections would HURT people rather than help them avoid exploitation.
I'll say right off that I don't appreciate the "you're a bad person" schtick. Switching to personal attacks stinks of desperation. Plus, your personal attack on me isn't even correct, because I don't approve of the situation you described any more than you do. The reason they're trying to slip that into those people's contracts is that those people already own their likenesses under existing copyright law. That is, you don't have to come up with a funny interpretation of copyright law where concepts can be copyrighted, but only if a machine learns them. They need a license to use those people's likenesses regardless of whether they use an AI or Photoshop or just have a painter do it. Using AI doesn't get them out of that -- if it did, they wouldn't need to try to put it into the contract.
In other words, they aren't using an AI to attack anyone; they're using a powerful bargaining position to try to get people to sign away an established right they already have according to copyright law. That has absolutely nothing to do with anything I'm talking about here, except that you want to attach it to what I'm talking about so you can have something to rage about.
And here's the thing. None of you people ever gave a shit when anybody else's job was automated away. Cashiers have had their work automated away recently and all I hear is "ThAt'S oKaY bEcAuSe tHeIr jOb sUcKs!!!!!!111" Artists have been actually violating the real copyright of other artists (NOT JUST LEARNING CONCEPTS) with fanart (which is a DERIVATIVE WORK OF A COPYRIGHTED CHARACTER) for god only knows how long and there's certainly never been a big outcry about that.
It sucks to be the ones looking down the business end of automation. I know that because, as a computer programmer, I am too. On the other hand, I can see past the end of my own nose, and I know how amazing it would be if lots of regular people suddenly had the ability to do the things that I do, so I'm not going to sit here and creatively interpret copyright law in an attempt to prevent that from happening. If you're worried about the effects of automation, you need to start thinking about things like universal healthcare and universal income, not just ESTABLISH SPECIAL PROTECTIONS FOR A TINY SUBSET OF PEOPLE WHOM YOU HAPPEN TO LIKE. It just seems a bit convenient, and (dare I say) selfish, that the point in history where we need to start smashing the machines happens to be right now. Why not the printing press, or the cotton gin, or the machines that build railroads, or looms, or robots in factories, or grocery store kiosks? The transition sucked for all those people as well. It's going to suck for artists, and it'll suck for me, but in the end we can pull through and be better off for it, rather than killing the technology in its infancy and calling everyone a monster who doesn't believe that you and you alone ought to have special privileges.
We need to be using the political clout we have to push us toward a workable post-scarcity economy, as opposed to trying to preserve a single, tiny bit of scarcity so a small group of people can continue to do something while everybody else is automated away and we all end up ruled by a bunch of rent-seeking corporations. Your gatekeeping of the ability of people to do art isn't going to prevent any of that.
P.S. We seem to be at the very beginning of a major climate disaster these last couple weeks, so we're probably all equally fucked anyway.
Dude, I'm not calling you a bad person. I am calling you out of touch with a very real problem.
Look, you asked what the endgame was for people who hoped that copyright would get applied to AI. I TOLD you. We want to slow down the deployment of AI by large companies and establish legal protections for creatives and others whose work is being used to train it.
You responded by comparing the AI to those human creatives, which honestly is a trap I fell into, because it derails us from the point: those creatives need legal protection. The legal system will see AI as a tool, no matter HOW similar or dissimilar it is to a human being, until an AGI comes along that is granted legal personhood. Then those legal restrictions won't apply to that AGI; it will instead fall under the legal restrictions applied to people.
Because the intended use of art is communication between PEOPLE. And the person involved with AI right now is the person who feeds it the art and makes a machine to create what they desire. That is not the intended use case. Art is not intended to create machines; it is intended to inspire people.
So unless your AI is LEGALLY classified as a person, applying copyright restrictions to it will not also restrict a human reader who is inspired.
I DEFINITELY want a legal distinction between using my writing to make a machine and reading my writing.
Because using the work of creatives to make an AI is exploitation. And I don't think we should preserve the right of a corporation to exploit creatives just so that the average person can ALSO exploit creatives.
But if it makes you happy, how about we get a copyright à la Creative Commons that allows an individual to create an AI using the copyrighted work for non-profit reasons, but restricts corporations from doing so with an AI used for profit, and considers any work created by this AI to be non-copyrighted.
Honestly, I think keeping the output of AI non-copyrighted is probably the best of both worlds, because it allows individuals to use AI as an expressive tool (you keep separating "creatives" from "average people", which I take issue with) while making it impractical for large media companies to use.
At any rate, the reason copyright restrictions would just kill open source AI is that it strikes me as incredibly unlikely that you're going to be able to stop corporations from training AI on media that they own outright. Disney has a massive library of media that they can use as training data, and no amount of stopping open source AI users from training AI on copyrighted works is going to prevent Disney from doing that (same goes for Warner Bros, etc). Disney, which is known for exploiting its own workers, will almost certainly use that AI to replace their animators completely, and they'll be within their legal rights to do so since they own all the copyrights on it.
Now consider companies like Adobe, ArtStation, and just about any other website that you can upload art to. When you sign up for those sites, you agree to their user agreement, which has standard boilerplate language that gives them a sublicensable right to use your work however they see fit (or "for business purposes", which means the same thing). In other words, if you've ever uploaded your work anywhere, you've already given someone else the legal right to train an AI on your work (even under a creative interpretation of copyright law that allows concepts and styles to be copyrighted), which means they're just going to build their own AI and then sell it back to you for a monthly fee.
But artists and writers should be compensated every time someone uses an AI trained on their work, right? Well, let's look at ChatGPT for a moment. I have open source code out there on GitHub, which was almost certainly included in ChatGPT's training data. Therefore, when someone uses ChatGPT for anything (since the training data doesn't go into a database; it just makes tiny little changes to neuron connection weights), they're using my copyrighted work, and thus they owe me a royalty. Who better to handle that royalty check than OpenAI? So now you get on there and use ChatGPT, making use of my work, and some of the "royalty fee" they're now charging goes to me. Similarly, ChatGPT has been trained on some of whatever text you've added to the internet (comments, writing, whatever, it doesn't matter), so when I use it, you get royalties. So far so good. Now OpenAI charges us both, keeps a big commission, and we both pay them $50/month for the privilege of access to all that knowledge, and we both make $20/month because people are using it, for a net -$30/month. Who wins? OpenAI. With a compensation scheme, the big corporations win every time and the rest of us lose, because it costs money to run it, and open source can't do it at all. Better to skip the middleman and say: here's an AI that we all contributed to and we all have access to.
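Just to spell out the arithmetic of that scenario (these are the made-up numbers from my example above, not real pricing):

```python
# Hypothetical numbers from the scenario above -- not real pricing.
monthly_fee = 50       # what each of us pays the middleman for access
monthly_royalty = 20   # what each of us earns back from other people's usage

net_per_user = monthly_royalty - monthly_fee
middleman_take = monthly_fee - monthly_royalty

print(f"each contributor nets ${net_per_user}/month")       # -$30/month
print(f"the middleman keeps ${middleman_take}/user/month")  # +$30/month, per user
```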
So again, what specifically is your plan to slow down deployment? Because onerous copyright restrictions aren't going to stop any of the people who need to be stopped, but they will absolutely stop the people competing with those people.
@IncognitoErgoSum Honestly? Arguing against AI to anyone I can find and supporting any legal action to regulate the industry. That includes my boss when he considers purchasing an AI service.
If I find that something of mine has been used to train an AI, I am willing to join a class action suit. The next work contract renegotiation I have will take into account the possibility of my writing being used for training, and it'll be a no. I'm supporting the SAG-AFTRA and WGA strikes because those contracts will set important precedents on how AI can be used, in creative industries at least, and will likely spread to other industries.
And I think if enough people don't buy into the hype, and stay skeptical, and public opinion remains against it, then AI is less likely to be used in industries that need strict safety standards before we get a regulatory agency for it.
It's more about the utilitarian goal of convincing people of something because it's convenient for you if the public believes it, in order to protect yourself and your immediate peers from automation, as opposed to actually seeking the truth and going with established legal precedent.
Legally, your class action lawsuit doesn't really have a leg to stand on, but you might manage to win anyway if you can depend on the ignorance of the judge and the jury about how AI actually works, and prejudice them against it. If you can get people to think of computer scientists and AI researchers as "tech bros" instead of scientists with PhDs, you might be able to get them to dismiss what they say as "hype" and "fairy tales".
I still say you're wrong about how the AI actually works, man. You're looking at it with rose-colored goggles, head filled with sci-fi catch phrases. But it's just a math machine.
I'm looking at it with a computer science degree and experience with AI programming libraries.
And yes, it's a machine that simulates neurons using math. We simulate physics with math all the way down to the quantum foam. I don't know what your point is. Whether it's simulated neurons or real neurons, it learns concepts, and concepts cannot be copyrighted.
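If "it's just a math machine" is the objection, here's the math in question, in miniature: a single training step that nudges connection weights slightly toward a target (toy numbers, purely illustrative). Note that the training example itself is never stored anywhere; only the weight adjustments remain.

```python
import numpy as np

rng = np.random.default_rng(1)
weights = rng.standard_normal((4, 2)) * 0.1  # connection strengths between simulated neurons
example = rng.random(4)                      # one training example (e.g. image features)
target = np.array([1.0, 0.0])                # desired output for this example

prediction = example @ weights               # forward pass through the network
error = prediction - target
weights -= 0.01 * np.outer(example, error)   # gradient step: a tiny nudge to the weights

print(example @ weights)  # slightly closer to the target; the example itself isn't kept
```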
I have a sneaking suspicion, since you switched tactics from googling the wrong flowchart to accusing me of not caring about workers over a contract dispute that's completely unrelated to any of the copyright stuff I'm talking about, that you at least suspect I know what I'm talking about.
Anyway, since you're arguing based on personal convenience and not fact, I can't really trust anything that you say anyway, because we're on entirely different wavelengths. You've already pretty much indicated that even if I were to convince you I'm right, you'd still go on doing exactly what you're doing, because you're on a crusade to save a small group of your peers from automation, and damn the rest of us.
Yeah, we're on different wavelengths. But I do have over twenty years in cyber transport and electronics. I know the first four layers in and out, including that physical layer it seems just about all programmers forget about completely.
It's not learning. It's not reading. It's not COMPREHENDING. It is processing. It is not like a person.
I admit, I'm firing from any direction I can get an angle at, because this idea that these programs are actual AGI and are comparable to humanity is, well... dangerous. There are people with power and influence who want to put these things in areas that WILL get people hurt. There are people who are dying to put them to work doing every bit of writing from scripts to NOTAMs, and they are horrifically unreliable because they have no way of verifying the ACCURACY of what they write. They do not have the ability to make a judgement, which is a key component of human thinking. They can only favor the set result coming through the logic gate. If A and B enter, B comes out. If A and A enter, A comes out. It has no way to evaluate whether A or B is the actual answer.
You call it a small group of my peers, but everyone is in trouble because people with money are SEVERELY overestimating the capabilities of these programs. The danger is not that AI will take over the world, but that idiots will hand AI the world and AI will tank it because AI does not come with the capabilities needed to make actual decisions.
So yeah, I bring up the WGA/SAG-AFTRA strike. Because that happens to be the best known example of the harm being done not by the AI, but by the people who have too much faith in the AI and are ready to replace messy humans of all stripes with it.
And I argue with you because you have too much faith in the AI. I'm not impressed by your degree, to be perfectly honest, because in my years in the trade I have known too many people with that degree who think they know way more than they do, and who end up having to rely on people like me to keep them grounded in what can actually be accomplished.
If you think I'm wrong about the future potential of AI, that's just a guess. AGI could be 100 years away (or financially impossible) as easily as it could be 5 years away. AGI is still in the future, and nobody is really qualified to guess when it'll come to fruition.
If you think I'm wrong about the present potential of AI, I've already seen individuals with no budget use it to express themselves in ways that would have required an entire team and lots of money, and that's where I believe its real potential lies right now. That is, it opens up the possibility for regular people to express themselves in ways that were impossible for them before. If Disney starts replacing animators with AI, I'll be right there with you boycotting them. AI should be for everyone, not just for large corporations that can already afford to express themselves however they want.
If you think I'm wrong that AIs like ChatGPT and Stable Diffusion do their computing with simulated neurons, let me know and I'll try to find some literature about it from the source. I've had a lot of AI haters confidently tell me that it doesn't (including in this thread), and I don't know if you're in that camp or not.
So what does that mean? Do you not believe that AIs like ChatGPT and Stable Diffusion have neural networks made up of simulated neurons? Or are you saying that we haven't simulated an actual human brain? Because the former is factually incorrect, and I never claimed the latter. Please explain exactly what "hype" you believe I'm buying into, because I don't think you have any clue what it is you think I'm wrong about. You just really don't want me to be right.