Consumer Reports Study Reveals Inadequate Safeguards in AI Voice Cloning Tools

A recent Consumer Reports study finds that popular AI voice cloning tools lack sufficient safeguards against fraud and misuse, raising concerns about potential scams and privacy violations.

Consumer Reports Uncovers Lack of Safeguards in AI Voice Cloning Tools

A recent study by Consumer Reports has revealed that several popular AI voice cloning tools lack adequate safeguards to prevent fraud and misuse. The investigation, which examined products from six companies, found that only two implemented meaningful measures to combat potential abuse [1].

Key Findings of the Study

Consumer Reports evaluated voice cloning tools from Descript, ElevenLabs, Lovo, PlayHT, Resemble AI, and Speechify. The study found that:

  1. Four out of six companies (ElevenLabs, Speechify, PlayHT, and Lovo) relied solely on users checking a box to confirm their legal right to clone a voice [2].
  2. Only Descript and Resemble AI implemented additional steps to make non-consensual voice cloning more difficult [3].
  3. Some companies, such as PlayHT and Speechify, even suggested using their tools for pranks, potentially encouraging misuse [3].

Potential Risks and Scams

The lack of robust safeguards raises concerns that voice cloning technology could be used for a range of scams and fraudulent activities:

  1. Impersonation scams, in which criminals clone a family member's voice to make an urgent request for money [2].
  2. Election interference, as demonstrated by the 2024 robocall that used a cloned voice of then-President Joe Biden to discourage voting in the New Hampshire primary [2].
  3. Defamation, exemplified by a case in which a school principal's voice was allegedly cloned to make racist remarks [3].

Ease of Access and Use

The accessibility of these tools is alarming. Some applications, like PlayKit from Play.ht, allow users to clone voices with minimal barriers:

  1. Users can create voice clones from as little as 30 seconds of audio [4].
  2. Some platforms require only an email address and a name to create an account [2].
  3. Voice clones can be generated quickly, often in 10 to 15 seconds [4].

Recommendations and Potential Solutions

Consumer Reports and other experts suggest several measures to improve security:

  1. Implementing mechanisms to verify voice ownership, such as requiring users to read unique scripts [2].
  2. Watermarking AI-generated audio [5] (a simplified sketch of the idea follows this list).
  3. Creating tools to detect AI-generated audio [2].
  4. Prohibiting the cloning of influential or public figures' voices [2].
  5. Collecting credit card information to trace potential misuse [2].
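To make the watermarking recommendation concrete, here is a minimal, illustrative sketch of spread-spectrum audio watermarking, a classic approach to marking audio inaudibly. This is not any vendor's actual scheme: the sample rate, watermark strength, secret seed, and the embed_watermark/detect_watermark functions are all assumptions chosen for the demonstration. The embedder adds a keyed pseudorandom pattern at low amplitude; the detector correlates suspect audio against the same keyed pattern.

```python
import numpy as np

# Illustrative parameters -- assumptions for this sketch, not real-world values.
RATE = 16_000        # assumed sample rate in Hz
STRENGTH = 0.02      # watermark amplitude relative to full scale
SECRET_SEED = 42     # secret key shared by embedder and detector


def embed_watermark(audio: np.ndarray, seed: int = SECRET_SEED) -> np.ndarray:
    """Add a keyed, low-amplitude pseudorandom +/-1 pattern to the signal."""
    rng = np.random.default_rng(seed)
    pattern = rng.choice([-1.0, 1.0], size=audio.shape)
    return np.clip(audio + STRENGTH * pattern, -1.0, 1.0)


def detect_watermark(audio: np.ndarray, seed: int = SECRET_SEED) -> bool:
    """Correlate the signal with the keyed pattern; marked audio scores high."""
    rng = np.random.default_rng(seed)
    pattern = rng.choice([-1.0, 1.0], size=audio.shape)
    # Average correlation is ~STRENGTH for marked audio and ~0 otherwise.
    score = float(np.dot(audio, pattern)) / audio.size
    return score > STRENGTH / 2


if __name__ == "__main__":
    # One second of stand-in "speech" (random noise, for demonstration only).
    speech = np.random.default_rng(0).uniform(-0.5, 0.5, RATE)
    marked = embed_watermark(speech)
    print(detect_watermark(marked))   # True: the keyed pattern is present
    print(detect_watermark(speech))   # False: unmarked audio correlates near 0
```

Production watermarking schemes are considerably more sophisticated, since the mark must survive compression, resampling, and re-recording, but the embed-then-detect division of labor is the same.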

Regulatory Landscape

The regulatory environment for AI voice cloning remains uncertain:

  1. The US Federal Trade Commission has finalized a rule prohibiting AI impersonation of governments and businesses [3].
  2. A proposal to extend the ban to the impersonation of individuals is under consideration [3].
  3. State-level regulation may be more likely in the current political climate [3].

Industry Response and Ethical Considerations

Some companies are taking a more cautious approach:

  1. Microsoft has chosen not to release its VALL-E 2 project publicly, citing potential misuse concerns [3].
  2. OpenAI has limited access to its Voice Engine speech synthesis tool [3].

However, the proliferation of open-source voice cloning software complicates efforts to control the technology's spread and use [3].
