Extremist groups use AI to generate deepfakes and recruit members as national security threat grows

Reviewed by Nidhi Govil



Militant groups, including the Islamic State, are experimenting with AI to produce deepfake images, spread propaganda, and recruit new members at scale. National security experts warn that even poorly resourced extremist groups can now leverage accessible AI tools like ChatGPT to create realistic fake content, automate cyberattacks, and translate messages across languages, raising urgent concerns about the malicious use of AI.

Extremist Groups Leverage AI to Scale Operations

Extremist groups are actively experimenting with AI, transforming how militant organizations recruit members and spread propaganda. A post on a pro-Islamic State group website last month explicitly urged supporters to integrate AI into their operations, stating that "one of the best things about AI is how easy it is to use" and encouraging followers to "make their nightmares into reality" by using AI for recruitment [1]. National security experts and intelligence agencies have warned that the AI risks posed by these groups are escalating as the technology becomes more accessible and powerful.

Source: Euronews

For loose-knit, poorly resourced extremist groups, AI offers a force-multiplier effect. John Laliberte, a former vulnerability researcher at the National Security Agency and now CEO of the cybersecurity firm ClearVector, explains that "with AI, even a small group that doesn't have a lot of money is still able to make an impact" [2]. This democratization of sophisticated technology means that militant groups using AI can now compete with better-funded adversaries in the information warfare space.

Islamic State AI Experimentation Produces Deepfake Images and Videos

Militant groups began using AI as soon as programs like ChatGPT became widely accessible in late 2022. Since then, they have increasingly deployed generative AI programs to create realistic-looking photos and videos. The Islamic State has created deepfake audio recordings of its own leaders reciting scripture and used AI to quickly translate messages into multiple languages, according to researchers at SITE Intelligence Group, a firm that tracks extremist activity [3].

When amplified by social media algorithms, this AI-generated propaganda can help recruit new believers, confuse or frighten adversaries, and spread disinformation at a scale unimaginable just a few years ago. Two years ago, such groups spread fake images of the Israel-Hamas war depicting bloodied, abandoned babies in bombed-out buildings, spurring outrage and polarization while obscuring the war's actual horrors [1]. Violent groups in the Middle East used these deepfake images and videos for recruitment, as did antisemitic hate groups in the U.S. and elsewhere.

After an attack claimed by an IS affiliate killed nearly 140 people at a concert venue in Russia last year, AI-crafted propaganda videos circulated widely on discussion boards and social media, seeking new recruits [2]. This pattern demonstrates how extremist groups now deploy AI for recruitment immediately after attacks to capitalize on heightened media attention.

AI-Enhanced Cyberattacks and Emerging Threats

Beyond propaganda and recruitment, hackers are already using synthetic audio and video for phishing campaigns, attempting to impersonate senior business or government leaders to gain access to sensitive networks. Bad actors can also use AI to write malicious code or automate aspects of cyberattacks [3]. Marcus Fowler, a former CIA agent who is now CEO of Darktrace Federal, notes that while such groups lag behind China, Russia, or Iran and still view more sophisticated uses of AI as "aspirational," the risks are too high to ignore as cheap, powerful AI expands [2].

More concerning is the possibility that militant groups may attempt to use AI to help produce biological or chemical weapons, compensating for a lack of technical expertise. This national security threat was included in the Department of Homeland Security's updated Homeland Threat Assessment released earlier this year [1]. Fowler observed that "ISIS got on Twitter early and found ways to use social media to their advantage. They are always looking for the next thing to add to their arsenal."

Legislative Response to Counter Malicious Use of AI

Lawmakers are responding to this national security threat with several proposals. Sen. Mark Warner of Virginia, the top Democrat on the Senate Intelligence Committee, emphasized that the U.S. must make it easier for AI developers to share information about how their products are being used by bad actors, whether they are extremists, criminal hackers, or foreign spies [2]. Warner stated that "it has been obvious since late 2022, with the public release of ChatGPT, that the same fascination and experimentation with generative AI the public has had would also apply to a range of malign actors."

During a recent hearing on extremist threats, House lawmakers learned that IS and al-Qaida have held training workshops to help supporters learn to use AI [1]. Legislation that passed the U.S. House last month would require homeland security officials to assess the AI risks posed by such groups each year [2]. These measures signal growing recognition that the malicious use of AI by extremist groups demands coordinated policy responses and enhanced collaboration between government agencies and AI developers to mitigate emerging threats.
