Curated by THEOUTPOST
On Tue, 19 Nov, 12:04 AM UTC
3 Sources
[1]
AI cloning of celebrity voices outpacing the law, experts warn
David Attenborough among famous people whose voices have been exploited by fraudsters

It's the new badge of celebrity status that nobody wants. Jennifer Aniston, Oprah Winfrey and Kylie Jenner have all had their voices cloned by fraudsters. Online blaggers used artificial intelligence to fake the Tiggerish tones of Martin Lewis, the TV financial adviser. And this weekend David Attenborough described himself as "profoundly disturbed" to have discovered that his cloned voice had been used to deliver partisan US news bulletins.

Now experts have warned that voice cloning is outpacing the law as technologists hone previously clunky voice generators into models capable of emulating the subtlest pauses and breathing of human intonation.

Dr Dominic Lees, an expert in AI in film and television who is advising a UK parliamentary committee, told the Guardian on Monday: "Our privacy and copyright laws aren't up to date with what this new technology presents, so there's very little that David Attenborough can do."

Lees is advising the House of Commons culture, media and sport select committee in an inquiry into the ethical use of AI in film-making. He also convenes the Synthetic Media Research Network, whose members include the firm making an AI version of the late chatshow interviewer Michael Parkinson, which will result in an eight-part unscripted series, Virtually Parkinson, with new guests. That voice-cloning project is being done with the consent of Parkinson's family and estate.

"The government definitely needs to look at [voice cloning], because it's a major issue for fraud," Lees said. "It needs the stick of government regulation in order to deter [misuse] ... we can't allow it to be a free-for-all."

AI voice-cloning scams were up 30% in the UK in the past year, according to research by NatWest bank this month. Another lender, Starling Bank, found that 28% of people had been targeted by an AI voice-cloning scam at least once in the past year.
Voice cloning is also reportedly being used by fraudsters to perpetrate a version of the "hi mum" text scam, in which they pose as children needing their parent to send funds urgently. On already fuzzy telephone lines, detecting that a pleading child is a scammer's clone can be hard. Consumers are advised to check by hanging up and calling back on a trusted number.

People whose voices are cloned without their consent find it more than a nuisance. Attenborough told the BBC on Sunday: "Having spent a lifetime trying to speak what I believe to be the truth, I am profoundly disturbed to find that these days my identity is being stolen by others and greatly object to them using it to say what they wish."

When a new voice option on OpenAI's latest AI model, ChatGPT-4o, featured tones that were very close to those of the actor Scarlett Johansson, she said she was shocked and angered because the voice "sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference".

The rise of cloned voices raises the question of what they miss about real human tones. Lees said: "The big problem is that AI doesn't understand emotion and how that changes how a word or a phrase might have emotional impact, and how you vary the voice to represent that."

The voiceover industry, which provides voices for adverts, animations and instructional training, is having to respond quickly to technological advances. Joe Lewis, the head of audio at the Voiceover Gallery in London, which has provided real human voices for adverts for Specsavers and National Express, said it had already cloned the voices of some of its artists. He said AI seemed to work best with English male voices, perhaps because that reflected the bias in the type of recordings used to train the algorithm, but he cautioned that in general "there's something about the way it is generated that makes you less attentive".
"When the AI [voice] breathes, it is a very repetitive breath," he said. "The breaths are in the right place, but they don't feel natural ... [But] can it get to the point when it's really perfect? I don't see why not, but to get to the full emotional spectrum is a long way off."
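Lewis's observation that synthetic breaths land "in the right place" but repeat too evenly suggests one crude way such a tell could be quantified: measure how regular the gaps between breaths are. The sketch below is purely illustrative, not a detector anyone in the article describes; the breath timestamps are invented, and extracting real ones from audio is a separate problem.

```python
import statistics

def breath_regularity(breath_times):
    """Coefficient of variation of the intervals between breaths.

    Natural speech tends to show irregular gaps between breaths; a
    synthetic voice that breathes on a fixed schedule yields a value
    near zero. The inputs and any threshold you might apply are
    illustrative assumptions, not validated detector settings.
    """
    intervals = [b - a for a, b in zip(breath_times, breath_times[1:])]
    if len(intervals) < 2:
        raise ValueError("need at least three breath timestamps")
    return statistics.stdev(intervals) / statistics.mean(intervals)

# Perfectly periodic breaths, every 3.0 seconds (hypothetical clone)
synthetic = [0.0, 3.0, 6.0, 9.0, 12.0]
# Irregular, human-like gaps (hypothetical recording)
human = [0.0, 2.1, 5.8, 7.0, 11.3]

print(breath_regularity(synthetic))  # 0.0 for a perfectly regular breather
print(breath_regularity(human))      # noticeably higher
```

A metronomic breather scores exactly zero, while the irregular pattern scores well above it; real detection would of course need far more than one statistic.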
[2]
I cloned my voice with AI and even my wife can't tell the difference
Listening to your own voice saying words you've never said before is an unsettling experience, but in the AI future we're living through right now in 2024, it's almost unsurprising. Of course AI can now clone your voice and make it sound just like you! It's almost expected, isn't it?

What is surprising, to me at least, is how easy it is to do. You can access an AI voice cloner for free online, clone your voice, and get it to say anything you want in just a few minutes. The training takes just 30 seconds, then you're good to go. There are no real security checks or restrictions on what you can do with that voice once you've trained it, either. You could make it swear, or threaten somebody. There seem to be hardly any guardrails.

If you type 'AI voice cloner' into a Google search bar you'll be spoiled for choice. A lot of the voice cloners require you to sign up for a monthly fee before they will clone your voice, but quite a few of them have a free option. I tried a few of the free choices and some of them, despite promising unparalleled accuracy, produced a robotic version of my voice that was going to fool nobody. No, I had a higher goal in mind: I wanted to produce a clone of my voice that would fool my wife.

I eventually settled on Speechify to clone my voice, since it combined ease of use, full access to the voice cloner, and a 30-second training time. Once you've made a free account on Speechify, you simply talk into your microphone for 30 seconds or longer to train your AI voice. Once you've done that, you can type in some text and hit the Generate button to hear the words spoken back to you in your own voice.

If you're concerned about security, Speechify has a pretty detailed privacy statement: it says it will never sell your information and is committed to protecting the privacy of your data. So your uploaded voice should be for only you to use.
I thought what I created was pretty convincing, but I needed to see what my wife thought. I crept up behind her and played a sample clip of 'me' and... well, OK, she laughed, because she could tell it was coming out of my MacBook's speakers, but she was impressed. "Actually," she said, "I think it sounds like you, but better."

And that is the benefit of cloning your voice: it doesn't make mistakes when it talks. There are no 'ums' and 'ahs', and it gets everything right the first time. If I think about how many times I've had to record and re-record the intros to my podcasts because I couldn't get them quite right, I can see an obvious application for an AI voice cloner. But that is also the danger of AI voice cloning, because you can get the fake voice to say just about anything.

While scams that involve stealing your voice are one level of concern, the security implications go even beyond the grave. Recently a new podcast featuring the legendary late British talk show host Michael Parkinson, called Virtually Parkinson, surprised everybody: thanks to the miracles of AI, his voice would be interviewing people in real time once again. In Parkinson's case his estate is fully behind the podcast, but what if permission has not been given? David Attenborough, the grandfather of the BBC's natural history programming, recently expressed unease at an AI version of his voice, describing it as "disturbing".

We live in an age where AI can create podcasts without any human interaction, and even AI sports presenters are starting to appear. So, in a way, we shouldn't be surprised that it's so easy for AI to clone our voices, but the implications could be profound. With AI giving celebrities (or rather, their estates) the option to continue working long after they have shuffled off this mortal coil, the future for both celebrities and individuals suddenly seems very uncertain.
[3]
If you can't trust the voice of David Attenborough, what can you trust?
The world's best-loved naturalist has had his voice cloned - and misused - by AI. Soon, we won't believe anything we hear unless we are in the same room as the speaker

It sounds too fanciful and too outrageous to be true, but nothing is too outrageous for the world the tech bros have bequeathed us. The BBC has revealed that various websites and YouTube channels are using AI to clone the voice of David Attenborough and get him to say things - about Russia, about the US election - that surely he would never say.

It's not the first time this has happened to a celebrity. Scarlett Johansson refused to license her voice to ChatGPT and accused its developer of creating it anyway, in a character called Sky. That developer, OpenAI, said Sky was "a different professional actress using her own natural speaking voice", but it pulled the voice "out of respect for Ms Johansson". Elsewhere, lawyers continue to tussle, using precedents that pre-date the existence of AI by several decades - which is to say, with one hand tied behind their back.

The hoax Attenborough is a different category of sinister, however. Johansson is a lot of things, all of them excellent, but she is not the global authority on important things that are true. Attenborough may not be the last true embodiment of trust in a compromised world, but I row back from that assertion only because I fear it is UK-centric. I stand by this: if you can't hear his voice and believe it, then you can't hear or believe anything.

Red flags went up around AI voice generation over the potential for scamming - the risk that soon it would be able to scrape your kid's voice off their TikTok output, create a believable hostage tape and have you emptying your bank account into the ether before you realised said kid was in his bedroom the whole time.
It seemed plausible in the sense that people will do any number of stupid things if they hear a loved one in peril, but it was implausible - to me, at least - because, as credible as the voice might be, I think AI would fall down on the believability of the text. It does a terrible job of sounding human even when it tries simply to describe a sunset. The idea that it could mimic the tone of one of my children seems absurd; I would smell a rat at the words "Hi, Mum" (way too much preamble) and laugh out loud if one of them said "please", even supposedly with a gun to their head.

And yet everything, everywhere, knows you better than you think. My Instagram "for you" feed is entirely buff guys trying to do home workouts while pit bulls impede them. I never even talk about any of those interests! Apparently, though, it wasn't the work of Hercules to guess, just as it wouldn't be that hard to guess that a teenager is sarcastic and has no manners.

Even revivified, that hoax-kidnap anxiety is still nothing compared with the faking of the naturalist. Technology untethered from values destroys those values, but with or without values it can also destroy itself. Forensic DNA is a case study. Detection has become so good that, if you shake hands with someone, then pick up a coffee cup and drive 200 miles to a service station at which a robbery-murder takes place, the person you shook hands with could be placed at the scene. The accuracy, rather than making DNA more useful, has negated it completely; we are back in a world where only an alibi will do, AKA the 19th century. (I got that nugget from a true crime podcast, rather than a police force, so it may not yet be true everywhere. I don't want anyone to panic and start a minute-by-minute diary of their whereabouts.)
But if AI voice generation becomes good enough to destroy trust in the most trusted, it quickly destroys trust in everything but a voice you can see coming out of a human, hurtling us back many centuries, to a world where we believed a limited number of family members, a few vetted associates and - on a really good day, if we had met him before - a messenger. In fake Attenborough, the scam of all scams, we have been casually mugged of modern communication.
AI-powered voice cloning technology is advancing rapidly, raising concerns about fraud, privacy, and legal implications. Celebrities like David Attenborough and Scarlett Johansson have been targeted, prompting calls for updated regulations.
The rapid advancement of AI-powered voice cloning technology has sparked concerns among celebrities, experts, and consumers alike. Recent incidents involving high-profile figures have highlighted the potential for misuse and fraud, outpacing current legal frameworks [1][2][3].
As the technology progresses, society faces the challenge of maintaining trust in vocal communication while harnessing the potential benefits of AI voice cloning in various industries.