ElevenLabs' AI Voice Tech Implicated in Russian Disinformation Campaign


A report by Recorded Future suggests ElevenLabs' AI voice generation technology was likely used in a Russian influence operation targeting European support for Ukraine, highlighting the double-edged nature of advanced AI tools.


AI Voice Technology Misused in Russian Disinformation Campaign

A recent report by Recorded Future, a Massachusetts-based threat intelligence company, has revealed that AI voice generation technology, likely including that of the startup ElevenLabs, was used in a Russian influence operation dubbed "Operation Undercut" [1]. The campaign aimed to undermine European support for Ukraine by creating misleading "news" videos with AI-generated voiceovers in multiple European languages.

Campaign Details and AI Involvement

The disinformation campaign targeted European audiences with videos attacking Ukrainian politicians and questioning the effectiveness of military aid to Ukraine. One example claimed that "even jammers can't save American Abrams tanks," suggesting the futility of sending advanced armor to Ukraine [1]. Recorded Future's researchers used ElevenLabs' own AI Speech Classifier to confirm the likely use of the company's technology in creating these voiceovers.

Sophistication of AI-Generated Content

The AI-generated voices spoke convincingly in English, French, German, and Polish without discernible foreign accents, showcasing the technology's capability to produce native-sounding speech [2]. This contrasted with some videos that used human voiceovers with noticeable Russian accents, inadvertently highlighting the effectiveness of the AI-generated content.

Attribution and Impact

Recorded Future attributed the campaign to the Social Design Agency, a Russia-based organization sanctioned by the U.S. government in March 2024 for running a network of websites impersonating legitimate European news outlets [1]. Despite the sophisticated use of AI, the report concluded that the campaign's overall impact on European public opinion was minimal.

ElevenLabs' Response and Previous Incidents

ElevenLabs did not respond to requests for comment on this specific incident. However, the company has faced similar controversies before, including the alleged use of its technology in a robocall impersonating President Joe Biden during a primary election in January 2024 [1]. In response to such incidents, ElevenLabs has implemented new safety features, including automatically blocking the voices of politicians.

Ethical Concerns and Company Policies

The incident raises significant ethical concerns about the potential misuse of AI voice technology. ElevenLabs explicitly bans "unauthorized, harmful, or deceptive impersonation" and employs various tools for enforcement, including automated and human moderation [1]. However, the effectiveness of these measures in preventing misuse remains a subject of debate.

Broader Implications for AI Voice Technology

While this incident highlights the potential dangers of AI voice generation, it also underscores the technology's power and sophistication. The ability to quickly produce convincing voiceovers in multiple languages demonstrates the tool's potential for legitimate uses in content creation and localization [2]. As AI voice technology continues to advance, balancing its benefits with necessary safeguards against misuse will remain a critical challenge for developers, policymakers, and users alike.

TheOutpost.ai


© 2025 Triveous Technologies Private Limited