Curated by THEOUTPOST
On Thu, 12 Dec, 12:02 AM UTC
2 Sources
[1]
ElevenLabs' AI voice generation 'very likely' used in a Russian influence operation
One recent campaign was "very likely" helped by commercial AI voice generation products, including tech publicly released by the hot startup ElevenLabs, according to a recent report from Massachusetts-based threat intelligence company Recorded Future. The report describes a Russian-tied campaign designed to undermine Europe's support for Ukraine, dubbed "Operation Undercut," that prominently used AI-generated voiceovers on fake or misleading "news" videos. The videos, which targeted European audiences, attacked Ukrainian politicians as corrupt or questioned the usefulness of military aid to Ukraine, among other themes. For example, one video touted that "even jammers can't save American Abrams tanks," referring to devices used by US tanks to deflect incoming missiles - reinforcing the point that sending high-tech armor to Ukraine is pointless. The report states that the video creators "very likely" used voice-generated AI, including ElevenLabs tech, to make their content appear more legitimate. To verify this, Recorded Future's researchers submitted the clips to ElevenLabs' own AI Speech Classifier, which provides the ability for anyone to "detect whether an audio clip was created using ElevenLabs," and got a match. ElevenLabs did not respond to requests for comment. Although Recorded Future noted the likely use of several commercial AI voice generation tools, it did not name any others besides ElevenLabs. The usefulness of AI voice generation was inadvertently showcased by the influence campaign's own orchestrators, who - rather sloppily - released some videos with real human voiceovers that had "a discernible Russian accent." In contrast, the AI-generated voiceovers spoke in multiple European languages like English, French, German, and Polish, with no foreign-soundings accents. According to Recorded Future, AI also allowed for the misleading clips to be quickly released in multiple languages spoken in Europe like English, German, French, Polish, and Turkish (incidentally, all languages supported by ElevenLabs.) Recorded Future attributed the activity to the Social Design Agency, a Russia-based organization that the U.S. government sanctioned this March for running " a network of over 60 websites that impersonated genuine news organizations in Europe, then used bogus social media accounts to amplify the misleading content of the spoofed websites." All this was done "on behalf of the Government of the Russian Federation," the U.S. State Department said at the time. The overall impact of the campaign on public opinion in Europe was minimal, Recorded Future concluded. This isn't the first time ElevenLabs' products have been singled out for alleged misuse. The company's tech was behind a robocall impersonating President Joe Biden that urged voters not to go out and vote during a primary election in January 2024, a voice fraud detection company concluded, according to Bloomberg. In response, ElevenLabs said it released new safety features like automatically blocking voices of politicians. ElevenLabs bans "unauthorized, harmful, or deceptive impersonation" and says it uses various tools to enforce this, such as both automated and human moderation. ElevenLabs has experienced explosive growth since its founding in 2022. It recently grew ARR to $80 million from $25 million less than a year earlier, and may soon be valued at $3 billion, TechCrunch previously reported. Its investors include Andreessen Horowitz and former Github CEO Nat Friedman.
[2]
The Latest AI Voice Developments Show the Perils and Power of New Tech
According to a report from a Massachusetts-based threat intelligence company called Recorded Future, which helps alert organizations to cyber threats and "see them first so they can prioritize, pinpoint, and act to prevent attacks," a recent digital misinformation campaign "very likely" used voices generated by ElevenLabs' systems.

News site TechCrunch said the campaign was dubbed "Operation Undercut" and aimed at harming European support for Ukraine. The Russian propaganda initiative used AI-made speech played on top of fake news videos that tried to portray Ukrainian politicians as corrupt, and even cast doubt on the survivability of the "American Abrams tanks" that have been used by forces on the defensive front line.

Recorded Future actually used a tool made by ElevenLabs to check whether a recording of a spoken voice contained AI-generated material, and verified it was likely made by the company's AI. Though other AI voice systems may have been used, the report only mentions ElevenLabs, TechCrunch notes.

The bad actors used ElevenLabs tech to generate very convincing content that sounded like native English, French, German, and Polish speakers, TechCrunch notes, bizarrely highlighting the power of the generative AI systems in question.

The issue here is simple: ElevenLabs is a buzzy company because its tech is so impressive, and so useful. Though its terms and conditions specifically forbid "unauthorized, harmful, or deceptive impersonation" and it has security protocols in place to try to prevent this sort of use, anyone who pays for a voice on the site could devise a way to use the resulting content maliciously. It's not the company's fault that it's being used this way, of course.

As controversial and complex as digital voice tech can be, another company has just highlighted that for some content creators and business users, this kind of AI can be an amazing boon. YouTube turned on automatic foreign-language dubbing features for creators who make and share videos "focused on knowledge and information."
A report by Recorded Future suggests ElevenLabs' AI voice generation technology was likely used in a Russian influence operation targeting European support for Ukraine, highlighting the double-edged nature of advanced AI tools.
A recent report by Recorded Future, a Massachusetts-based threat intelligence company, found that AI voice generation technology, very likely including that of the startup ElevenLabs, was used in a Russian influence operation dubbed "Operation Undercut" [1]. The campaign aimed to undermine European support for Ukraine by creating misleading "news" videos with AI-generated voiceovers in multiple European languages.
The disinformation campaign targeted European audiences with videos attacking Ukrainian politicians and questioning the effectiveness of military aid to Ukraine. One example claimed that "even jammers can't save American Abrams tanks," suggesting the futility of sending advanced armor to Ukraine [1]. Recorded Future's researchers used ElevenLabs' own AI Speech Classifier to confirm the likely use of the company's technology in creating these voiceovers.
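Recorded Future's verification relied on ElevenLabs' public AI Speech Classifier tool. For readers curious what such a check might look like programmatically, below is a minimal sketch of submitting an audio clip to an AI-speech detection endpoint over HTTP; the endpoint URL, authentication header, and response fields are assumptions for illustration and are not documented ElevenLabs API details.

```python
# Hypothetical sketch only: the endpoint URL, auth header, and response shape
# below are assumptions for illustration, not documented ElevenLabs API details.
# Recorded Future's researchers used ElevenLabs' public AI Speech Classifier tool.
import requests

CLASSIFIER_URL = "https://api.example.com/v1/ai-speech-classifier"  # assumed endpoint
API_KEY = "YOUR_API_KEY"  # assumed authentication scheme


def classify_clip(path: str) -> dict:
    """Upload an audio file and return the detection service's verdict as JSON."""
    with open(path, "rb") as audio:
        response = requests.post(
            CLASSIFIER_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"file": audio},
            timeout=60,
        )
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    # Assumed response shape, e.g. {"label": "ai_generated", "probability": 0.97}
    print(classify_clip("suspect_voiceover.mp3"))
```

In Recorded Future's case, a positive match from the classifier is what supported the "very likely" attribution of the voiceovers to ElevenLabs' tools.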
The AI-generated voices spoke convincingly in English, French, German, and Polish without discernible foreign accents, showcasing the technology's capability to produce native-sounding speech [2]. This contrasted with some videos that used human voiceovers with noticeable Russian accents, inadvertently highlighting the effectiveness of the AI-generated content.
Recorded Future attributed the campaign to the Social Design Agency, a Russia-based organization sanctioned by the U.S. government in March 2024 for running a network of websites impersonating legitimate European news outlets [1]. Despite the sophisticated use of AI, the report concluded that the overall impact on European public opinion was minimal.
ElevenLabs did not respond to requests for comment on this specific incident. However, the company has faced similar controversies before, including the alleged use of its technology in a robocall impersonating President Joe Biden during a primary election in January 2024 [1]. In response to such incidents, ElevenLabs has implemented new safety features, including automatically blocking voices of politicians.
The incident raises significant ethical concerns about the potential misuse of AI voice technology. ElevenLabs explicitly bans "unauthorized, harmful, or deceptive impersonation" and employs various tools for enforcement, including automated and human moderation [1]. However, the effectiveness of these measures in preventing misuse remains a subject of debate.
While this incident highlights the potential dangers of AI voice generation, it also underscores the technology's power and sophistication. The ability to quickly produce convincing voiceovers in multiple languages demonstrates the tool's potential for legitimate uses in content creation and localization [2]. As AI voice technology continues to advance, balancing its benefits with necessary safeguards against misuse will remain a critical challenge for developers, policymakers, and users alike.