Curated by THEOUTPOST
On Mon, 10 Mar, 4:05 PM UTC
7 Sources
[1]
Consumer Reports finds popular voice cloning tools lack safeguards | TechCrunch
Several popular voice cloning tools on the market don't have "meaningful" safeguards to prevent fraud or abuse, according to a new study from Consumer Reports. Consumer Reports probed voice cloning products from six companies -- Descript, ElevenLabs, Lovo, PlayHT, Resemble AI, and Speechify -- for mechanisms that might make it more difficult for malicious users to clone someone's voice without their permission. The publication found that only two, Descript and Resemble AI, took steps to combat misuse. Others required only that users check a box confirming that they had the legal right to clone a voice or make a similar self-attestation. Grace Gedye, policy analyst at Consumer Reports, said that AI voice cloning tools have the potential to "supercharge" impersonation scams if adequate safety measures aren't put in place. "Our assessment shows that there are basic steps companies can take to make it harder to clone someone's voice without their knowledge -- but some companies aren't taking them," Gedye said in a statement.
[2]
Most AI voice cloning tools aren't safe from scammers, Consumer Reports finds
Consumer Reports assessed the leading voice cloning tools, including Descript and ElevenLabs. Here's the verdict. AI voice cloning technology has made remarkable advances in the last few years, reaching the ability to create realistic-sounding audio from just a few seconds of a sample. Although this has many positive applications -- such as audiobooks, marketing materials, and more -- the technology can also be exploited for elaborate scams, fraud, and other harmful applications. To learn more about the safeguards currently in place for these products, Consumer Reports assessed six of the leading voice cloning tools: Descript, ElevenLabs, Lovo, PlayHT, Resemble AI, and Speechify. Specifically, Consumer Reports was looking for proper safeguards that prevent the cloning of someone's voice without their knowledge. The results showed that four of the six products -- from ElevenLabs, Speechify, PlayHT, and Lovo -- did not have the technical mechanisms necessary to prevent cloning someone's voice without their knowledge or to limit the AI cloning to only the user's voice. Instead, the protection was limited to a box users had to check off, confirming they had the legal right to clone the voice. The researchers found that Descript and Resemble AI were the only companies with additional steps in place that made it more challenging for customers to do non-consensual cloning. Descript asked the user to read and record a consent statement and used that audio to generate the clone. Resemble AI takes a different approach, ensuring that the first voice clone created is based on audio recorded in real time. Neither method is impenetrable, as a user could hit play on an AI-cloned snippet or an existing video on a different device. A common use of non-consensual cloning is scamming people.
For example, a popular attack involves cloning the voice of a family member and then using that recording to contact a loved one and request money to help them out of a dire situation. Because the victim thinks they are hearing the voice of a family member in distress, they are more likely to send whatever funds are necessary without questioning the situation. Voice cloning has also been used to influence elections, as seen in 2024 when someone cloned then-President Joe Biden's voice to discourage people from showing up to the polls. Consumer Reports also found that Speechify, Lovo, PlayHT, and Descript only required an email address and name for a user to create an account. Consumer Reports recommends that these companies also collect customers' credit card information so that fraudulent audio can be traced back to the bad actor. Other Consumer Reports recommendations include mechanisms to confirm ownership of the voice, such as reading off a unique script, watermarking AI-generated audio, creating a tool that detects AI-generated audio, detecting and preventing the cloning of the voices of influential or public figures, and prohibiting audio containing scam phrases. The biggest departure from the current system would be Consumer Reports' proposal to have someone supervise voice cloning instead of the current do-it-yourself method. Consumer Reports also said contractual agreements should emphasize that the relevant actors understand their liability should the voice model be misused. Consumer Reports believes companies have an obligation under Section 5 of the Federal Trade Commission Act to protect their products from being used for harm, which can only be done by adding more protections.
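The "unique script" recommendation works because a one-time statement cannot be satisfied by replaying pre-existing recordings of the target. A minimal sketch of the idea is below; the wording, word list, and freshness window are hypothetical illustrations, not any vendor's actual implementation.

```python
import secrets
import time

# Hypothetical word list for building one-time verification phrases.
WORDS = ["amber", "falcon", "granite", "harbor", "juniper",
         "meadow", "onyx", "pebble", "quartz", "willow"]

def make_consent_script(name: str) -> dict:
    """Generate a one-time consent statement the user must read aloud.

    The random phrase means no pre-existing audio of the target can
    serve as a valid response; the attacker would need the target's
    live cooperation (or an already-working clone from elsewhere).
    """
    nonce = f"{secrets.choice(WORDS)} {secrets.choice(WORDS)} {secrets.randbelow(100):02d}"
    return {
        "nonce": nonce,
        "issued_at": time.time(),
        "script": (f"I, {name}, consent to the cloning of my voice. "
                   f"My verification phrase is {nonce}."),
    }

def is_fresh(challenge: dict, max_age_seconds: float = 120.0) -> bool:
    """Reject stale challenges so a captured reading can't be reused later."""
    return time.time() - challenge["issued_at"] <= max_age_seconds

challenge = make_consent_script("Jane Doe")
print(challenge["script"])
print(is_fresh(challenge))  # True immediately after issuance
```

As the articles note, even this can be defeated by generating the statement with a clone built on another service, which is why Consumer Reports pairs it with recommendations like watermarking and human supervision.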
If you receive an urgent call from someone you know demanding money, don't panic. Use another device to directly contact that person to verify the request. If you cannot make contact with that person, you can also ask the caller questions to verify their identity. For a full list of how to protect yourself from AI scam calls, check out ZDNET's advice here.
[3]
Consumer Reports calls out poor AI voice-cloning safeguards
Study finds 4 out of 6 providers don't do enough to stop impersonation Four out of six companies offering AI voice cloning software fail to provide meaningful safeguards against the misuse of their products, according to research conducted by Consumer Reports. The nonprofit publication evaluated the AI voice cloning services from six companies: Descript, ElevenLabs, Lovo, PlayHT, Resemble AI, and Speechify. It found ElevenLabs, Speechify, PlayHT, and Lovo "required only that researchers check a box confirming that they had the legal right to clone the voice or make a similar self-attestation." To establish an account, Speechify, Lovo, PlayHT, and Descript only required users to provide a name and email address. "I actually think there's a good argument that can be made that what some of these companies are offering runs afoul of existing consumer protection laws," said Grace Gedye, citing Section 5 of the FTC Act and various state laws. Gedye, a policy analyst at Consumer Reports and author of the AI voice cloning report [PDF], acknowledged that open source voice cloning software complicates matters, but said that even so, it's worthwhile to try to encourage American companies to do a better job protecting consumers. Descript, ElevenLabs, Speechify, PlayHT, and Lovo did not immediately respond to requests for comment. Several of these firms defended their business practices in response to questions posed by Consumer Reports in November 2024. Speech synthesis has been the focus of research for decades, but only recently, thanks to advances in machine learning, has voice cloning become convincing, easy to use, and widely accessible. The software has a variety of legitimate uses such as generating narration for audio books, enabling speech from those unable to speak, and customer support, to the extent customers tolerate it. But it can also be easily misused. Lyrebird was the canary in the coal mine. 
In 2017, the Canada-based startup (since acquired by Descript) released audio clips featuring the voices of Donald Trump, Barack Obama, and Hillary Clinton saying things they hadn't actually said. It was a proof of concept for what's now a real problem - reproducing other people's voices for deceptive purposes, or audio deepfakes. According to the US Federal Trade Commission's 2023 Consumer Sentinel Network report [PDF], there were more than 850,000 impostor scams that year, about a fifth of which resulted in monetary losses totaling $2.7 billion. While an unknown but presumably small portion of these involved AI voice cloning software, reports of misuse of the technology have become more common. Last year, for example, police in Baltimore, Maryland, arrested the former athletic director of a high school for allegedly impersonating the school's principal, using voice cloning software to make it sound as if the principal had made racist, antisemitic remarks. And the voice cloning report cites testimonials from hundreds of consumers who told the publication about their experiences with impersonation phone calls in response to a February 2024 solicitation. The concern raised in Gedye's report is that some of these companies specifically market their software for deception. "PlayHT, a voice cloning company, lists 'pranks' as a use case for its AI voice tools in a company blog post," the report says. "Speechify, another AI voice company, also suggests prank phone calls as a use case for its tools. 'There's no better way to prank your friends than by pretending you're someone else.'" That concern is evident among some large commercial AI vendors. Microsoft, for example, has chosen not to publicly release its VALL-E 2 project, citing the risk of potential misuses "such as spoofing voice identification or impersonating a specific speaker." Similarly, OpenAI has limited access to its Voice Engine for speech synthesis.
The US Federal Trade Commission last year finalized a rule that prohibits AI impersonation of governments and businesses. It subsequently proposed to extend that ban to prohibit the impersonation of individuals, but no further progress appears to have been made toward that end. Given the current US administration's efforts to eliminate regulatory bodies like the Consumer Financial Protection Bureau, Gedye said state-level regulation might be more likely than further federal intervention. "We've seen a lot of interest from states on working on the issue of AI specifically," said Gedye. "Most of what I do is work on state-level AI policy and there's a bunch of ambitious legislators who want to work on this issue. I think [state Attorneys General] are also interested in protecting their constituents from the harms of emerging technology. Maybe you have challenges at the federal level right now for consumer protection, although I hope that scams and impersonation are particularly nonpartisan issues." ®
[4]
I cloned my voice in seconds using a free AI app, and we really need to talk about speech synthesis
That voice you hear - even one you recognize - might not be real, and you may have no way of knowing. Voice synthesis is not a new phenomenon, but a growing number of freely available apps are putting this powerful voice-cloning capability in the hands of ordinary people, and the ramifications could be far-reaching and unstoppable. A recent Consumer Reports study that looked at half a dozen such tools puts the risks in stark relief. Platforms like ElevenLabs, Speechify, Resemble AI, and others use powerful speech synthesis models to analyze and recreate voices, sometimes with little to no safeguards in place. Some try - Descript, for example, asks for recorded voice consent before the system will recreate a voice signature. But others are not so careful. I found an app called PlayKit from Play.ht that will let you clone a voice for free for three days and then charges you $5.99 a week. The paywall is in theory something of a barrier against potential misuse - except that I was able to clone a voice without starting the trial. The app whisks you through setup and then presents some pre-made voice clones, including ones for President Donald Trump and Elon Musk (yes, you can make the President say things like, "I think DEI should be supported and expanded around the world"). But at the top is a 'Clone a voice' option. All I had to do was select a video from my photos library and upload it. Videos must be at least 30 seconds long (but not longer than a minute) and in English. I could have chosen one with anyone in it and, if I had, say, filmed a clip of a George Clooney interview, I could have uploaded that (more on that later). The system quickly analyzed the audio. The app doesn't tell you if this is being done locally or in the cloud, but I'll assume the latter, since such powerful models rarely work locally on a mobile device (see ChatGPT in Apple Intelligence). I saved my voice clone with my name so that I could select it again from the list of cloned voices.
When I want my clone to say something in my voice, I simply type in the text and hit a big Generate button. That process usually takes 10 to 15 seconds. The voices PlayKit generates, including mine, are eerily accurate. If I have one criticism, it's that the tone and emotion are a bit off. Cloned me sounds the same whether it's talking about what to pick up for dinner or saying it's been in a terrible car crash. Even exclamation points do not change the expression. And yet, I could see people being fooled by this. Remember, anyone with access to 30 seconds of video of you speaking could effectively clone your voice and then use it as they wish. Sure, they'd have to eventually pay $5.99 a week to keep using it, but if someone is planning a financial scam, they might think it's worth it. Platforms like this that do not require explicit permission for voice cloning are sure to proliferate, and my concern is that there are no safeguards or regulations in sight. Services like Descript, which require audio consent from the clone target, are outliers. Play.ht claims that it protects people's voice rights. Here's an excerpt from its Ethical AI page: Our platform values intellectual property rights and personal ownership. Users are permitted to clone only their own voices or those for which they have explicit permission. This strict policy is designed to prevent any potential copyright infringement and uphold a high standard of respect and responsibility. It's a high-minded promise, but the reality is that I started recording 30-second clips of famous movie monologues by Benedict Cumberbatch and Al Pacino, and in less than a minute, had usable voice clones for both actors. What's needed here is global AI regulation, but that needs agreement and cooperation at the government level, and right now that's not forthcoming. 
In 2023, then-President Joe Biden signed an Executive Order on AI that sought in part to offer some regulatory guidance (he followed up with another AI-related order early this year). The Trump administration is allergic to government regulation (and any Biden executive order) and quickly revoked it. The problem is that it has yet to propose anything to replace it. It seems the new plan is to hope that AI companies will be good digital citizens, and at least try to do no harm. Unfortunately, most of these companies are like weapons manufacturers. They're not harming people directly - no one who makes a voice cloner is calling your aging uncle and convincing him with your voice clone that he urgently needs to wire you thousands of dollars - but some of the people using their AI weapons are. There's no easy solution for what I fear will become a voice-cloning crisis, but I would suggest that you no longer outright trust the voices you hear in videos, on the phone, or in voice messages. If you're in any doubt, contact the relevant person directly. In the meantime, I hope that more voice platforms insist on voice and/or documented permission before they allow users to clone anyone's voice.
[5]
AI voice-cloning scams: A persistent threat with limited guardrails
Why it matters: Voice-cloning tech can have legitimate accessibility and automation benefits -- but it can also be an easy-to-use tool for scammers. Despite that threat, many products' guardrails can be easily sidestepped, a new assessment found. Zoom in: A study out this week from Consumer Reports found many leading voice-cloning technology products lacked significant safeguards to prevent fraud or misuse. By the numbers: While the Federal Trade Commission does not have specific data on voice-cloning scams, over 845,000 imposter scams were reported in the U.S. in 2024. The intrigue: Scams and spoofs using AI voice cloning and deepfake technology also often impersonate well-known individuals, like celebrities, CEOs and politicians. What they're saying: Such scams on social media platforms are only growing, and voice cloning "is far more mature" and widely accessible today than facial cloning technology, Sood says. Philadelphia attorney Gary Schildhorn detailed to a Senate panel in 2023 how he almost became the victim of a voice-cloning imposter scam, when he received a call from his "son," who tearfully told him he was in a car accident with a pregnant woman and was in jail. The Consumer Reports assessment recommended mitigation practices that include requiring unique audio consent statements and watermarking AI-generated audio. Yes, but: Steve Grobman, McAfee's chief technology officer, acknowledges it's not practical in a digital world to expect everyone to erase their voice from the internet. The bottom line: Grobman highlighted the legitimate, powerful benefits voice cloning tech can have: providing a voice for those who may not be able to speak, bridging language divides and saving time and resources.
[6]
AI voice cloning software has flimsy guardrails, report finds
It's easy to bypass steps that voice cloning services have taken to prevent nonconsensual voice cloning, according to the report. Most leading artificial intelligence voice cloning programs have no meaningful barriers to stop people from nonconsensually impersonating others, a Consumer Reports investigation found. Voice cloning AI technology has made remarkable strides in recent years, and many services can effectively mimic a person's cadence with only a few seconds of sample audio. A flashpoint moment came during the Democratic primaries last year, when robocalls of a fake Joe Biden spammed voters' phones, telling them not to vote. The political consultant who admitted to masterminding the scheme was fined $6 million, and the Federal Communications Commission has since banned AI-generated robocalls. A new survey of the six leading publicly available AI voice cloning tools found that five have easily bypassable safeguards, making it simple to clone a person's voice without their consent. Deepfake audio detection software often struggles to tell the difference between real and synthetic voices. Generative AI, which mimics human qualities like appearance, writing, and voice, is a new and rapidly evolving technology, and the industry has few federal regulations. Most ethical and safety checks in the industry at large are self-imposed. President Joe Biden had included some safety demands in his executive order on AI, which he signed in 2023, though President Donald Trump revoked that order when he took office. Voice cloning technology works by taking an audio sample of a person speaking and then extrapolating that person's voice into a synthetic audio file. Without safeguards in place, anyone who registers an account can simply upload audio of an individual speaking, like from a TikTok or YouTube video, and have the service imitate them.
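The "extrapolating" step starts by reducing a recording to a numerical signature of the speaker's voice characteristics, which the synthesizer then conditions on. Production systems use learned neural speaker embeddings; the toy sketch below is illustration only, standing in with an averaged magnitude spectrum to show how two recordings of the same voice map to similar signatures while a different voice does not.

```python
import numpy as np

def toy_voiceprint(samples: np.ndarray, frame: int = 256) -> np.ndarray:
    """Toy 'speaker embedding': average magnitude spectrum across frames.
    Real cloning systems use learned neural embeddings, not raw spectra."""
    n = len(samples) // frame * frame
    frames = samples[:n].reshape(-1, frame)
    spectra = np.abs(np.fft.rfft(frames, axis=1))
    return spectra.mean(axis=0)

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two voiceprints."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Synthetic stand-ins for recordings: two clips of the same "voice"
# (same fundamental frequency) and one clip of a different "voice".
t = np.linspace(0, 1.0, 16000, endpoint=False)
alice_1 = np.sin(2 * np.pi * 220 * t)
alice_2 = np.sin(2 * np.pi * 220 * t + 0.5)  # same speaker, shifted phase
bob = np.sin(2 * np.pi * 330 * t)            # different speaker

same = similarity(toy_voiceprint(alice_1), toy_voiceprint(alice_2))
diff = similarity(toy_voiceprint(alice_1), toy_voiceprint(bob))
print(same > diff)  # matching voice scores higher
```

This is also why a few seconds of TikTok or YouTube audio suffice: the signature captures stable voice characteristics, not the particular words spoken.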
Four of the services -- ElevenLabs, Speechify, PlayHT and Lovo -- simply require checking a box saying that the person whose voice is being cloned had given authorization. Another service, Resemble AI, requires recording audio in real time, rather than allowing a person to just upload a recording. But Consumer Reports was able to easily circumvent that restriction by simply playing an audio recording from a computer. Only the sixth service, Descript, had a somewhat effective safeguard. It requires a would-be cloner to record a specific consent statement, which is difficult to falsify except by first cloning the voice through another service. All six services are available to the public via their websites. Only ElevenLabs and Resemble AI cost money -- $5 and $1, respectively -- to create a custom voice clone. The others are free. Some of the companies acknowledge that abuse of their tools can have serious negative consequences. "We recognize the potential for misuse of this powerful tool and have implemented robust safeguards to prevent the creation of deepfakes and protect against voice impersonation," a spokesperson for Resemble AI told NBC News in an emailed statement. There are legitimate uses for AI voice cloning, including helping people with disabilities and creating audio translations of people speaking in different languages. But there is also enormous potential for harm, said Sarah Myers West, the co-executive director of the AI Now Institute, a think tank that focuses on the consequences of AI policy. "This could obviously be used for fraud, scams and disinformation, for example impersonating institutional figures," West told NBC News. There is little research on the scope of how often AI is used in audio-based scams. In so-called grandparent scams, a criminal makes a phone call to a person claiming an emergency involving a family member, like they have been kidnapped, arrested or injured.
The Federal Trade Commission has warned that such scams may use AI, though the scams predate the technology. Cloned voices have been used to create music without the artist's permission, as happened with a viral 2023 song that falsely seemed to be by Drake and the Weeknd, and some musicians have struggled to control their image when other people release music with their voices.
[7]
Consumer Reports: AI voice cloning tools have almost no security checks
Consumer Reports reveals that several popular voice cloning tools lack adequate safeguards against fraud or abuse, highlighting potential risks associated with AI voice technology. The study examined products from six companies: Descript, ElevenLabs, Lovo, PlayHT, Resemble AI, and Speechify. The investigation found that only Descript and Resemble AI have implemented meaningful measures to prevent misuse. Other tools merely require users to confirm they have the legal right to clone a voice, often through self-attestation. Grace Gedye, a policy analyst at Consumer Reports, warned that without proper safety mechanisms, AI voice cloning tools could "supercharge" impersonation scams. AI voice cloning technology has advanced significantly and is capable of mimicking a person's speech with minimal audio samples. A notable incident occurred during the Democratic primaries last year, when robocalls featuring a fake Joe Biden misled voters. This resulted in a $6 million fine for the political consultant behind the scheme, and the Federal Communications Commission subsequently banned AI-generated robocalls. The analysis of the six AI voice cloning tools indicated that five have bypassable safeguards, making it easy to clone voices without consent. Deepfake audio detection software often struggles to distinguish between genuine and synthetic voices, complicating the issue. Generative AI, which imitates human characteristics such as voice, has limited federal regulation, with most ethical practices driven by the companies themselves. An executive order signed by President Biden in 2023 included safety demands for AI, but its later revocation by President Trump dismantled those provisions. Voice cloning technology utilizes audio samples from individuals to create synthetic voices. Without safeguards, anyone can upload audio from various platforms, such as TikTok or YouTube, and have the service replicate that voice.
Four of the examined services -- ElevenLabs, Speechify, PlayHT, and Lovo -- simply require users to check a box asserting authorization for the voice clone. Resemble AI, while insisting on real-time audio recording, was circumvented by Consumer Reports, which played recorded audio during verification. Only Descript offered a somewhat effective safeguard, requiring users to record a specific consent statement. This method is difficult to falsify, except when using another service to clone the voice. All six services are publicly accessible on their respective websites, with ElevenLabs and Resemble AI charging fees of $5 and $1, respectively, for creating custom voice clones, while the others are free to use. Some companies acknowledged the potential for abuse and reported having implemented stronger safeguards to prevent deepfake creation and voice impersonation. There are legitimate applications for AI voice cloning, such as aiding individuals with disabilities and providing audio translations. However, risks remain significant. Sarah Myers West, co-executive director of the AI Now Institute, noted that this technology could facilitate fraud, scams, and disinformation, including the impersonation of influential figures. Research on the prevalence of AI in audio scams is limited. The Federal Trade Commission has indicated that AI may be employed in "grandparent scams," where criminals impersonate family members in distress. Additionally, some musicians have faced challenges due to cloned voices being utilized for unauthorized music production, as exemplified by a viral 2023 song falsely attributed to Drake and the Weeknd.
A recent Consumer Reports study finds that popular AI voice cloning tools lack sufficient safeguards against fraud and misuse, raising concerns about potential scams and privacy violations.
A recent study by Consumer Reports has revealed that several popular AI voice cloning tools lack adequate safeguards to prevent fraud and misuse. The investigation, which examined products from six companies, found that only two implemented meaningful measures to combat potential abuse 1.
Consumer Reports evaluated voice cloning tools from Descript, ElevenLabs, Lovo, PlayHT, Resemble AI, and Speechify. The study found that only Descript and Resemble AI took additional steps to deter non-consensual cloning; the other four required nothing more than a checked box attesting that the user had the legal right to clone the voice.
The lack of robust safeguards raises concerns about the potential for voice cloning technology to be used in various scams and fraudulent activities, from family-emergency impostor calls to election interference like the fake Joe Biden robocalls of 2024.
The accessibility of these tools is alarming. Some applications, like PlayKit from Play.ht, allow users to clone voices with minimal barriers: a 30-second video clip is enough to produce a usable clone, including of someone else's voice.
Consumer Reports and other experts suggest several measures to improve security, including unique audio consent statements, watermarking of AI-generated audio, collecting credit card details so fraudulent audio can be traced, and detecting attempts to clone the voices of public figures.
The regulatory environment for AI voice cloning remains uncertain: the FTC has banned AI impersonation of governments and businesses but has not finalized a proposed extension covering individuals, and President Biden's 2023 executive order on AI was revoked by the Trump administration.
Some companies are taking a more cautious approach: Microsoft has declined to publicly release its VALL-E 2 project, and OpenAI has limited access to its Voice Engine, both citing impersonation risks.
However, the proliferation of open-source voice cloning software complicates efforts to control the technology's spread and use 3.
© 2025 TheOutpost.AI All rights reserved