AI has only been capable of half-imitating those things for like a year and a half. And most uses are non-commercial, and there is no established case law for personal usage of generative artificial intelligence. It would be hard to sue someone for something they aren't profiting from, or that they are not using as a form of slander, libel, or harassment.
The EU, the UK, and the US are all currently developing new laws surrounding the usage of AI. But this is all incredibly new and therefore in progress.
Yup, we're in the wild west of it now where anything goes, but that's only because there are no laws currently. Actors have already been fighting this in the movie industry to determine how to work out "face rights" and "voice rights" after they die (so they can do things like Leia in Rogue One), and now that it's for anyone I'm guessing we're going to see laws start to take form very soon.
Yes, and personal use exists so that people who are not profiting in any way off of someone else's copyright are not chargeable under copyright law. Hence why I can download a jpg of an artist's painting and that action alone is not breaking any law.
If my AI is not being used to make money, and it is also not being used to slander or harass someone, then what law exactly would I be breaking? How would I be harming someone else by having an AI that I use solely personally and am in no way benefiting from? Can you sue someone for taking a DVD, ripping it, and then rearranging the scenes in a new video file? Have they broken a law solely by creating a derivative work using the original material, even if that work is not making them money and they are committing no other crime using that work?
Yes, but training is not infringement under current law.
If you learn to draw by tracing Mickey Mouse, but then professionally draw original works, you haven't infringed copyright.
If you subsequently draw Mickey Mouse, you'll hear from Disney's lawyers.
So yes, AI producing IP-protected material when prompted results in infringement, just as any production would.
The things people seem to be up in arms about are things like copying style (not protected) or using works for training (not infringing).
If anything, all of this discussion over the past year around AI has revealed just how little people understand about IP laws. They complain that there needs to be laws for things already protected and prohibited, and they complain that companies are infringing for things that are not protected nor prohibited.
For example, in relation to the OP question, in the US there is no federal protection around IP rights for voices, and there is case law to the contrary.
This isn't just a matter of law, but of technology. Part of the point of these large language models is the massive corpus of raw data. It's not supposed to mimic a specific person or work, but rather imitate ALL of them. Ideally, you wouldn't even be able to pinpoint anyone or anything in particular.
(If you're asking about a different type of AI, then disregard)
Likeness is just one aspect of copyright. Another side entirely involves the protection of the innovative and production parts of the creation of the original training material. I.e. ChatGPT wouldn't be possible without the work of hundreds of millions of writers writing things.
Not so different from paying a course provider for their learning materials when you learn a new language.
Because this stuff was already extremely complicated before AI came along.
For example, the thing you are actually dealing with isn't copyright or trademark here, but "right of publicity" which relates to the right to one's likeness for commercial purposes.
Which isn't protected federally in the US and comes down to a state-by-state basis.
There were instances of human impersonators mimicking the voice or likeness of others for years before AI. Or even of taking old materials and re-editing them, like Isaac Hayes' voice for Chef after his falling out with the South Park creators over their Scientology episode, after which they had his character fully voiced as he joined a pedophile club.
Basically, governments and laws can be fairly slow to adapt. Consider that it wasn't that long ago that you could absolutely tell that something was made by AI, whether from stilted, unnatural speech or mangled fingers in fake pictures of people. In a few short months we've gone from dreamlike hallucinations of real things to almost passable renditions. AI is just advancing too fast for most governments to make a decision and formulate a coherent and mostly future-proof set of laws regarding it.