ElevenLabs' AI Voice Tech Implicated in Russian Disinformation Campaign

Curated by THEOUTPOST

On Thu, 12 Dec, 12:02 AM UTC

2 Sources

A report by Recorded Future suggests ElevenLabs' AI voice generation technology was likely used in a Russian influence operation targeting European support for Ukraine, highlighting the double-edged nature of advanced AI tools.

AI Voice Technology Misused in Russian Disinformation Campaign

A recent report by Recorded Future, a Massachusetts-based threat intelligence company, has revealed that AI voice generation technology, likely including that of startup ElevenLabs, was used in a Russian influence operation dubbed "Operation Undercut" [1]. The campaign aimed to undermine European support for Ukraine by creating misleading "news" videos with AI-generated voiceovers in multiple European languages.

Campaign Details and AI Involvement

The disinformation campaign targeted European audiences with videos attacking Ukrainian politicians and questioning the effectiveness of military aid to Ukraine. One example claimed that "even jammers can't save American Abrams tanks," suggesting the futility of sending advanced armor to Ukraine [1]. Recorded Future's researchers used ElevenLabs' own AI Speech Classifier to confirm the likely use of its technology in creating these voiceovers.

Sophistication of AI-Generated Content

The AI-generated voices spoke convincingly in English, French, German, and Polish without discernible foreign accents, showcasing the technology's capability to produce native-sounding speech [2]. This contrasted with some videos that used human voiceovers with noticeable Russian accents, inadvertently highlighting the effectiveness of the AI-generated content.

Attribution and Impact

Recorded Future attributed the campaign to the Social Design Agency, a Russia-based organization sanctioned by the U.S. government in March for running a network of websites impersonating legitimate European news outlets [1]. Despite the sophisticated use of AI, the report concluded that the overall impact on European public opinion was minimal.

ElevenLabs' Response and Previous Incidents

ElevenLabs did not respond to requests for comment on this specific incident. However, the company has faced similar controversies before, including the alleged use of its technology in a robocall impersonating President Joe Biden during a primary election in January 2024 [1]. In response to such incidents, ElevenLabs has implemented new safety features, including automatically blocking voices of politicians.

Ethical Concerns and Company Policies

The incident raises significant ethical concerns about the potential misuse of AI voice technology. ElevenLabs explicitly bans "unauthorized, harmful, or deceptive impersonation" and employs various tools for enforcement, including automated and human moderation [1]. However, the effectiveness of these measures in preventing misuse remains a subject of debate.

Broader Implications for AI Voice Technology

While this incident highlights the potential dangers of AI voice generation, it also underscores the technology's power and sophistication. The ability to quickly produce convincing voiceovers in multiple languages demonstrates the tool's potential for legitimate uses in content creation and localization [2]. As AI voice technology continues to advance, balancing its benefits with necessary safeguards against misuse will remain a critical challenge for developers, policymakers, and users alike.

