Extremist groups use AI voice cloning and deepfakes to supercharge propaganda and recruitment

Reviewed by Nidhi Govil


Militant organizations from neo-Nazis to the Islamic State are deploying AI voice cloning and generative AI tools to recreate historical speeches and translate propaganda into multiple languages. National security experts warn these technologies enable even poorly resourced groups to produce sophisticated content at scale, helping them recruit new members and expand their reach across social media platforms.

AI Voice Cloning Transforms How Extremist Groups Spread Ideology

Extremist groups ranging from neo-Nazis to the Islamic State are leveraging AI voice cloning and generative AI tools to amplify their propaganda efforts, according to national security experts and terrorism researchers. The technology enables these organizations to recreate historical speeches, translate content across languages, and produce multimedia narratives at a scale previously unimaginable [1].

"The adoption of AI-enabled translation by terrorists and extremists marks a significant evolution in digital propaganda strategies," said Lucas Webber, a senior threat intelligence analyst at Tech Against Terrorism and research fellow at the Soufan Center [1]. Earlier methods relied on human translators or basic machine translation, but advanced tools now produce seamless, contextually accurate translations that preserve tone, emotion, and ideological intensity across multiple languages.

Neo-Nazi groups have proven particularly prolific in adopting this technology. Several English-language versions of Adolf Hitler's speeches created using AI voice cloning have garnered tens of millions of streams across X, Instagram, TikTok, and other platforms [1]. According to the Global Network on Extremism and Technology (GNET), extremist content creators feed archival speeches from the Third Reich era into voice cloning services, specifically ElevenLabs, which then process them to mimic Hitler speaking in English.

Militant Groups Deploy Deepfake Images and Audio for Recruitment

The Islamic State has actively embraced AI to create deepfake audio recordings of its own leaders reciting scripture and to rapidly translate messages into multiple languages, according to researchers at SITE Intelligence Group, which tracks extremist activities [2]. Pro-Islamic State media outlets on encrypted networks are "using AI to create text-to-speech renditions of ideological content from official publications," transforming text-based propaganda into engaging multimedia narratives [1].

Source: Euronews

A user posting on a pro-Islamic State website last month urged supporters to integrate AI into their operations, writing: "One of the best things about AI is how easy it is to use. Some intelligence agencies worry that AI will contribute to recruiting. So make their nightmares into reality" [2].

Militant groups began using AI as soon as programs like ChatGPT became widely accessible, increasingly deploying generative AI programs to create realistic-looking photos and video [3]. When amplified by social media algorithms, this fake content helps recruit new believers, confuses adversaries, and spreads disinformation at unprecedented scale. Two years ago, such groups spread fake images of the Israel-Hamas war depicting bloodied, abandoned babies in bombed-out buildings, spurring outrage and polarization while obscuring the war's actual horrors [5].

Neo-Nazi Accelerationists Create AI-Generated Audiobooks

Neo-Nazi accelerationists who plot acts of terrorism to provoke societal collapse have turned to these tools to spread updated versions of their hyper-violent messaging. In late November, a prominent neo-Nazi influencer with a heavy presence on X and Telegram created an AI-generated audiobook of Siege, an insurgency manual written by American neo-Nazi James Mason that became required reading for terrorist organizations like the Base and Atomwaffen Division [1].

"Using a custom voice model of Mason, I re-created every newsletter and most of the attached newspaper clippings as in the original published newsletters," the influencer stated [1]. Joshua Fisher-Birch, a terrorism analyst at the Counter Extremism Project, noted that Siege has "cultlike status among some in the online extreme right" and promotes lone actor violence.

Cybersecurity Experts Warn of Escalating AI Risks

"For any adversary, AI really makes it much easier to do things," said John Laliberte, a former vulnerability researcher at the National Security Agency who now serves as CEO of cybersecurity firm ClearVector. "With AI, even a small group that doesn't have a lot of money is still able to make an impact" [2].

While such groups lag behind China, Russia, or Iran and still view more sophisticated uses of AI as "aspirational," according to Marcus Fowler, a former CIA agent who is now CEO of Darktrace Federal, the risks remain too high to ignore [4]. Hackers already use synthetic audio and video for phishing campaigns, impersonating senior business or government leaders to gain access to sensitive networks. They can also use AI to write malicious code or automate aspects of their cyberattacks.

More concerning is the possibility that militant groups may attempt to use AI to help produce biological or chemical weapons, compensating for a lack of technical expertise. That risk was included in the Department of Homeland Security's updated Homeland Threat Assessment released earlier this year [5].

Lawmakers Push for Action on Malicious Application of AI

Sen. Mark Warner of Virginia, the top Democrat on the Senate Intelligence Committee, emphasized that the U.S. must make it easier for AI developers to share information about how their products are being used by bad actors, whether extremists, criminal hackers, or foreign spies [4]. "It has been obvious since late 2022, with the public release of ChatGPT, that the same fascination and experimentation with generative AI the public has had would also apply to a range of malign actors," Warner stated.

During a recent hearing on extremist threats, House lawmakers learned that IS and al-Qaida have held training workshops to help supporters learn to use AI [3]. Legislation that passed the U.S. House last month would require homeland security officials to assess the AI risks posed by such groups each year, reflecting growing concern among lawmakers about counterterrorism challenges in the age of accessible artificial intelligence [2].

TheOutpost.ai
© 2026 Triveous Technologies Private Limited