AI-Generated George Carlin Drops Comedy Special That Daughter Speaks Out Against: ‘No Machine Will Ever Replace His Genius’
Stand-up comedian George Carlin has been brought back to life in an artificial intelligence-generated special called 'I'm Glad I'm Dead.'
I listened to the whole special, and I can agree with much of what the Carlinbot had to say. I think that's fun.
I know there's overwhelming hatred towards the idea of AI doing stuff like this, but I'm curious as to why exactly that is. I hate this about as much as I hate impressionists, which is a somewhat apt comparison. That is to say, I think it's pretty neat and I'm curious what all went into making it happen, so I can't say I hate it. Could someone break down why this is awful? Is it a "let the dead lie" kind of thing, keeping the dead sacred? Do we want the AI to be completely original, despite it being derivative in nature? Do we simply want AI not to exist at all? Is it just in poor taste? If so, who do we let define what constitutes good or poor taste?
I see AI as a philosophical issue, as it's a technology seeking to cross the uncanny valley and simulate consciousness as we understand it, which has serious implications regarding the nature of consciousness, the concept of the self, how we define life and understanding, how much control we grant this artificial life, what rights artificial life should have, and plenty of other conundrums along the way. I honestly don't think it's as simple as "Carlin wouldn't like this", as this video is ultimately an unsatisfactory impression of a man that only goes on for one hour. There are worse things in the world we could be lambasting (as the Carlinbot points out mid-video), but there are clearly some implications involved that people are very upset by. So, where do we go from here?
I see AI as a philosophical issue, as it's a technology seeking to cross the uncanny valley and simulate consciousness as we understand it...
These things called AI are not conscious, nor are they supposed to be. They're large language models that do text-based prediction. They aren't aware of what they're saying, what it means, or the context they exist within. They just recreate patterns they've seen before. Artificial general intelligence is a totally different thing, and it would have the implications you describe. These do not.
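If it helps to see the "recreating patterns" point concretely, here's a toy sketch in Python. It's nothing like a real LLM under the hood (those use huge learned neural networks, not lookup tables, and the corpus below is just placeholder text I made up), but it shows the same basic move: pick a plausible next word from patterns seen before, with zero awareness of what any of it means.

```python
import random
from collections import defaultdict

# Toy "predict the next word from patterns seen before" model.
# Real LLMs use large learned neural networks, not lookup tables,
# but the basic move is the same: continue the text plausibly,
# with no awareness of what the words actually mean.

def train(text):
    """Record which words have followed which in the training text."""
    model = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model, seed, length=10):
    """Emit words by repeatedly sampling a continuation seen during training."""
    out = [seed]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

# Placeholder corpus, purely for illustration.
corpus = "the robot repeats patterns the robot has seen the robot does not understand"
model = train(corpus)
print(generate(model, "the"))
```

Scale that idea up by billions of parameters and the output gets fluent, but the "does it understand?" question doesn't change just because it sounds better.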
With that being said, yes a large part of the issue is that it isn't original. It's trained to create content that you'd expect from Carlin, so really what it's doing is just repeating things. Just listen to the actual Carlin.
Another thing to consider is how this consolidates wealth. Who's getting the money from this? It's just a way to take other people's creative works and capitalize on them without having to pay them. It's purely exploitative as well as in bad taste.
The way I explain it in simplified terms: libraries helped us find books, search engines helped us find documents, LLMs help us find words. I expect LLMs to provide a similar order-of-magnitude improvement in knowledge retrieval to the one we saw from those, which is a huge deal, but they are not on the path to AI consciousness.
That said, they may be an important processing component for assisting a consciousness, just like how in our brains we have different cortexes that primarily assist in processing information subconsciously.
I don't see this stand-up special as proper AI at all. However, I do see the writing on the wall: we are definitely trying to build towards what I referenced, a proper simulation of consciousness. All the AI projects coming out now feel like stepping stones towards that end.
I thoroughly agree that this happening under a capitalist system is a recipe for shit, though. However, we have no way of removing capital from the equation at this time, and like it or not, people are going to keep making projects exactly like this. And they'll make money from it, because that is literally the only way anything ever gets done when capital is the beginning and end of the discussion. That's more an issue with capitalism than with AI personas, IMO. This is how things are going to happen, and I feel like we're better off trying to inject morality into the situation than pretending it won't happen or that we can stop it. Otherwise, all we're doing is standing around with angry expressions on our faces, doing fuck all while corporations steal our likenesses for profit.
From the perspective of his daughter who knew George Carlin personally, I can see how this would be disturbing. It's as if someone strung up a dead relative like a puppet and put on a show.
In more abstract terms, from the perspective of someone who has only seen his stand-up, I think it's a fun novelty as long as they're not profiting from it or misrepresenting him.
That's basically saying "she should be happy about plagiarism." Sure, they like Carlin, or at least think he'll be profitable. That doesn't mean accepting them ripping him off is the right thing.
It's actually not saying that at all. I specifically said that I expected her to dislike it.
The rest of us, less emotionally invested in him as a person and more in him as a deceased performer, will have differing opinions.
The creators of this, and the articles around it, keep referring to it as an impression, which speaks to their motives a little. That's why I used that common colloquialism. I suspect the timing of this may be motivated by recent news surrounding the actors' strikes.
Demonstrating how this could be used to convincingly create content from an actor without any intentional input on their part (evidenced by him being dead) should make people question these capabilities more, just as they did when convincing deepfakes first started appearing.
For me it's a "let the dead lie" kind of thing. This kind of stuff just means that once a comedian or celebrity dies, a company can keep squeezing their likeness to death in a money grab. Sure, if they could stop at half-assed AI shows as a "can it really be done" exercise, it wouldn't be so terrible. But each time, they're going to do their damnedest to improve it.
Eventually we'll reach a dystopia where 1) being dead doesn't mean it's over, and 2) why find new talent and new blood when there's dead blood left to squeeze? Next up, stars signing a contract will have to make sure their families get a share of the profits from AI content for literal decades after their death. Otherwise the big companies get to keep making money off them with no protection for their families.
With impressionists you can clearly see that they're huge fans paying homage to someone they respect. With AI it feels like lazily cashing in on someone's death. You just swap out the name for another popular comedian and boom, more views and money. It's the commoditization of culture, like the equivalent of replacing hand crafted wood furniture with flat pack particle board.
Assuming we kill capitalism and form a more utopian society (which the Carlinbot amusingly is all for, but also cynically doesn't think is possible), would AI recreations of living or deceased people still be an issue to you? I'm morbidly curious what an AI "me" would be like, and I can imagine others being against something like that ever coming to be.
To me, AI is a helpful reference and information-transformation tool. I have no issue with the technology any more than I do with a word processor. It's all about how it's used, so it's 100% about context for me, and in this context, it's clearly a cynical and lazy cash grab by a washed-up tech-bro comedian.
People are using his likeness to gain attention and probably money. They could go and record those jokes and rants themselves, but nobody would be interested in listening.