6 Sources
[1]
Extremists are using AI voice cloning to supercharge propaganda. Experts say it's helping them grow
Researchers warn generative tools are helping militant groups from neo-Nazis to the Islamic State spread ideology While the artificial intelligence boom is upending sections of the music industry, voice generating bots are also becoming a boon to another unlikely corner of the internet: extremist movements that are using them to recreate the voices and speeches of major figures in their milieu, and experts say it is helping them grow. "The adoption of AI-enabled translation by terrorists and extremists marks a significant evolution in digital propaganda strategies," said Lucas Webber, a senior threat intelligence analyst at Tech Against Terrorism and a research fellow at the Soufan Center. Webber specializes in monitoring the online tools of terrorist groups and extremists around the world. "Earlier methods relied on human translators or rudimentary machine translation, often limited by language fidelity and stylistic nuance," he said. "Now, with the rise of advanced generative AI tools, these groups are able to produce seamless, contextually accurate translations that preserve tone, emotion, and ideological intensity across multiple languages." On the neo-Nazi far-right, adoption of AI-voice cloning software has already become particularly prolific, with several English-language versions of Adolf Hitler's speeches garnering tens of millions of streams across X, Instagram, TikTok, and other apps. According to a recent research post by the Global Network on Extremism and Technology (GNet), extremist content creators have turned to voice cloning services, specifically ElevenLabs, and feed them archival speeches from the era of the Third Reich, which are then processed into mimicking Hitler in English. Neo-Nazi accelerationists, the kinds who plot acts of terrorism against western governments to provoke a societal collapse, have also turned to these tools to spread more updated versions of their hyper-violent messaging. For example, Siege, an insurgency manual written by American neo-Nazi and proscribed terrorist James Mason that became the veritable bible to organizations like the Base and the now-defunct Atomwaffen Division, was transformed into an audiobook in late November. "For the last several months I have been involved in making an audiobook of Siege by James Mason," said a prominent neo-Nazi influencer with a heavy presence on X and Telegram, who stitched together the audiobook with the help of AI tools. "Using a custom voice model of Mason, I re-created every newsletter and most of the attached newspaper clippings as in the original published newsletters." The influencer lauded the power of having Mason's writing from "pre-internet America" and turning it into a modern-day voice. "But to hear the startling accuracy of predictions made through the early eighties really puts a milestone on the road and it changed my view of our shared cause on a fundamental level," he said. At its height in 2020, the Base held a book club on Siege, which was an instrumental influence on several members who discussed its benefits in a hypothetical war against the US government. A nationwide FBI counterterrorism probe eventually swept up over a dozen of its members on various terrorism related charges in the same year. 
"The creator of the audiobook has previously released similar AI content; however, Siege has a more notorious history," said Joshua Fisher-Birch, a terrorism analyst at the Counter Extremism Project, "due to its cultlike status among some in the online extreme right, promotion of lone actor violence, and being required reading by several neo-Nazi groups that openly endorse terrorism and whose members have committed violent criminal acts". Webber says pro-Islamic State media outlets on encrypted networks are currently and actively "using AI to create text-to-speech renditions of ideological content from official publications", to supercharge the spread of their messaging by transforming "text-based propaganda into engaging multimedia narratives". Jihadist terrorist groups have found utility in AI for translations of extremist teachings from Arabic into easily digestible, multilingual content. In the past, American-imam turned al-Qaeda operative Anwar al-Awlaki, would personally have to voice English lectures for recruitment propaganda in the anglosphere. The CIA and FBI have repeatedly cited the influence of al-Awlaki's voice as a key contagion in the spread of al-Qaeda's message. On Rocket.Chat - the preferred communications platform of the Islamic State, which it uses to communicate with its followers and recruits - a user posted a video clip in October with slick graphics and Japanese subtitling, remarking on the difficulties of doing that without the advent of AI. "Japanese would be an extremely hard language to translate from its original state to English while keeping its eloquence," said the pro-Islamic State user. "It should be known that I do not use artificial intelligence for any related media, with some exceptions regarding audio." So far, not just the Islamic State, but groups across the ideological spectrum, have begun using free AI applications, namely OpenAI's chatbot, ChatGPT, to amplify their overall activities. The Base and adjacent groups have used it for the creation of imagery, while also acknowledging, as far back as 2023, the use of these tools to streamline planning and researching. Counterterrorism authorities have always viewed the internet and technological advancements as a persistent game of catch-up when it comes to keeping pace with the terror groups who exploit them. Already the Base, the Islamic State and other extremists have leveraged emergent technologies like crypto to anonymously fundraise and share files for 3D printed firearms.
[2]
Making nightmares into reality: AI finds fans in the Islamic State and other militant and terrorist groups worldwide | Fortune
As the rest of the world rushes to harness the power of artificial intelligence, militant groups also are experimenting with the technology, even if they aren't sure exactly what to do with it. For extremist organizations, AI could be a powerful tool for recruiting new members, churning out realistic deepfake images and refining their cyberattacks, national security experts and spy agencies have warned. Someone posting on a pro-Islamic State group website last month urged other IS supporters to make AI part of their operations. "One of the best things about AI is how easy it is to use," the user wrote in English. "Some intelligence agencies worry that AI will contribute (to) recruiting," the user continued. "So make their nightmares into reality." IS, which had seized territory in Iraq and Syria years ago but is now a decentralized alliance of militant groups that share a violent ideology, realized years ago that social media could be a potent tool for recruitment and disinformation, so it's not surprising that the group is testing out AI, national security experts say. For loose-knit, poorly resourced extremist groups -- or even an individual bad actor with a web connection -- AI can be used to pump out propaganda or deepfakes at scale, widening their reach and expanding their influence. "For any adversary, AI really makes it much easier to do things," said John Laliberte, a former vulnerability researcher at the National Security Agency who is now CEO of cybersecurity firm ClearVector. "With AI, even a small group that doesn't have a lot of money is still able to make an impact." Militant groups began using AI as soon as programs like ChatGPT became widely accessible. In the years since, they have increasingly used generative AI programs to create realistic-looking photos and video. When strapped to social media algorithms, this fake content can help recruit new believers, confuse or frighten enemies and spread propaganda at a scale unimaginable just a few years ago. Such groups spread fake images two years ago of the Israel-Hamas war depicting bloodied, abandoned babies in bombed-out buildings. The images spurred outrage and polarization while obscuring the war's actual horrors. Violent groups in the Middle East used the photos to recruit new members, as did antisemitic hate groups in the U.S. and elsewhere. Something similar happened last year after an attack claimed by an IS affiliate killed nearly 140 people at a concert venue in Russia. In the days after the shooting, AI-crafted propaganda videos circulated widely on discussion boards and social media, seeking new recruits. IS also has created deepfake audio recordings of its own leaders reciting scripture and used AI to quickly translate messages into multiple languages, according to researchers at SITE Intelligence Group, a firm that tracks extremist activities and has investigated IS' evolving use of AI. Such groups lag behind China, Russia or Iran and still view the more sophisticated uses of AI as "aspirational," according to Marcus Fowler, a former CIA agent who is now CEO at Darktrace Federal, a cybersecurity firm that works with the federal government. But the risks are too high to ignore and are likely to grow as the use of cheap, powerful AI expands, he said. Hackers are already using synthetic audio and video for phishing campaigns, in which they try to impersonate a senior business or government leader to gain access to sensitive networks. 
They also can use AI to write malicious code or automate some aspects of cyberattacks. More concerning is the possibility that militant groups may try to use AI to help produce biological or chemical weapons, making up for a lack of technical expertise. That risk was included in the Department of Homeland Security's updated Homeland Threat Assessment, released earlier this year. "ISIS got on Twitter early and found ways to use social media to their advantage," Fowler said. "They are always looking for the next thing to add to their arsenal." Lawmakers have floated several proposals, saying there's an urgent need to act. Sen. Mark Warner of Virginia, the top Democrat on the Senate Intelligence Committee, said, for instance, that the U.S. must make it easier for AI developers to share information about how their products are being used by bad actors, whether they are extremists, criminal hackers or foreign spies. "It has been obvious since late 2022, with the public release of ChatGPT, that the same fascination and experimentation with generative AI the public has had would also apply to a range of malign actors," Warner said. During a recent hearing on extremist threats, House lawmakers learned that IS and al-Qaida have held training workshops to help supporters learn to use AI. Legislation that passed the U.S. House last month would require homeland security officials to assess the AI risks posed by such groups each year. Guarding against the malicious use of AI is no different from preparing for more conventional attacks, said Rep. August Pfluger, R-Texas, the bill's sponsor. "Our policies and capabilities must keep pace with the threats of tomorrow," he said.
[3]
Extremist groups use AI to produce fake images and recruit new members
As the rest of the world rushes to harness the power of artificial intelligence (AI), militant groups are also experimenting with the technology, even if they are unsure exactly what to do with it. For extremist organisations, AI could be a powerful tool for recruiting new members, churning out realistic deepfake images and refining their cyberattacks, national security experts and spy agencies have warned. Someone posting on a pro-Islamic State group website last month urged other IS supporters to make AI part of their operations. "One of the best things about AI is how easy it is to use," the user wrote in English. "Some intelligence agencies worry that AI will contribute [to] recruiting," the user continued. "So make their nightmares into reality." IS, which had seized territory in Iraq and Syria years ago but is now a decentralised alliance of militant groups that share a violent ideology, realised years ago that social media could be a potent tool for recruitment and disinformation, so it's not surprising that the group is testing out AI, national security experts say. For loose-knit, poorly resourced extremist groups -- or even an individual bad actor with a web connection -- AI can be used to pump out propaganda or deepfakes at scale, widening their reach and expanding their influence. "For any adversary, AI really makes it much easier to do things," said John Laliberte, a former vulnerability researcher at the National Security Agency who is now CEO of cybersecurity firm ClearVector. "With AI, even a small group that doesn't have a lot of money is still able to make an impact." How extremist groups are experimenting Militant groups began using AI as soon as programs like ChatGPT became widely accessible. In the years since, they have increasingly used generative AI programs to create realistic-looking photos and videos. When manipulated by social media algorithms, this fake content can help recruit new believers, confuse or frighten adversaries, and spread propaganda on a scale unimaginable just a few years ago. Such groups spread fake images two years ago of the Israel-Hamas war, depicting bloodied, abandoned babies in bombed-out buildings. The images spurred outrage and polarisation while obscuring the war's actual horrors. Violent groups in the Middle East used the photos to recruit new members, as did antisemitic hate groups in the U.S. and elsewhere. Something similar happened last year after an attack claimed by an IS affiliate killed nearly 140 people at a concert venue in Russia. In the days after the shooting, AI-crafted propaganda videos circulated widely on discussion boards and social media, seeking new recruits. IS has also created deepfake audio recordings of its own leaders reciting scripture and used AI to quickly translate messages into multiple languages, according to researchers at SITE Intelligence Group, a firm that tracks extremist activities and has investigated IS's evolving use of AI. 'Aspirational' -- for now Such groups lag behind China, Russia or Iran and still view the more sophisticated uses of AI as "aspirational," according to Marcus Fowler, a former CIA agent who is now CEO at Darktrace Federal, a cybersecurity firm that works with the federal government. But the risks are too high to ignore and are likely to grow as the use of cheap, powerful AI expands, he said. Hackers are already using synthetic audio and video for phishing campaigns, in which they try to impersonate a senior business or government leader to gain access to sensitive networks. 
They can also use AI to write malicious code or automate some aspects of cyberattacks. More concerning is the possibility that militant groups may try to use AI to help produce biological or chemical weapons, making up for a lack of technical expertise. That risk was included in the Department of Homeland Security's updated Homeland Threat Assessment, released earlier this year. "ISIS got on Twitter early and found ways to use social media to their advantage," Fowler said. "They are always looking for the next thing to add to their arsenal." Countering a growing threat Lawmakers have floated several proposals, saying there's an urgent need to act. Mark Warner, senator of Virginia, the top Democrat on the Senate Intelligence Committee, said, for instance, that the US must make it easier for AI developers to share information about how their products are being used by bad actors, whether they are extremists, criminal hackers or foreign spies. "It has been obvious since late 2022, with the public release of ChatGPT, that the same fascination and experimentation with generative AI the public has had would also apply to a range of malign actors," Warner said. During a recent hearing on extremist threats, House lawmakers learned that IS and al-Qaida have held training workshops to help supporters learn to use AI. Legislation that passed the US House last month would require homeland security officials to assess the AI risks posed by such groups each year. Guarding against the malicious use of AI is no different from preparing for more conventional attacks, said August Pfluger, the representative of Texas and the bill's sponsor. "Our policies and capabilities must keep pace with the threats of tomorrow," he said.
[4]
National security experts warn extremist groups are experimenting with AI. Here's how
How extremist groups are experimenting Militant groups began using AI as soon as programs like ChatGPT became widely accessible. In the years since, they have increasingly used generative AI programs to create realistic-looking photos and video. When strapped to social media algorithms, this fake content can help recruit new believers, confuse or frighten enemies and spread propaganda at a scale unimaginable just a few years ago. Such groups spread fake images two years ago of the Israel-Hamas war depicting bloodied, abandoned babies in bombed-out buildings. The images spurred outrage and polarization while obscuring the war's actual horrors. Violent groups in the Middle East used the photos to recruit new members, as did antisemitic hate groups in the U.S. and elsewhere. Something similar happened last year after an attack claimed by an IS affiliate killed nearly 140 people at a concert venue in Russia. In the days after the shooting, AI-crafted propaganda videos circulated widely on discussion boards and social media, seeking new recruits. IS also has created deepfake audio recordings of its own leaders reciting scripture and used AI to quickly translate messages into multiple languages, according to researchers at SITE Intelligence Group, a firm that tracks extremist activities and has investigated IS' evolving use of AI. 'Aspirational' -- for now Such groups lag behind China, Russia or Iran and still view the more sophisticated uses of AI as "aspirational," according to Marcus Fowler, a former CIA agent who is now CEO at Darktrace Federal, a cybersecurity firm that works with the federal government. But the risks are too high to ignore and are likely to grow as the use of cheap, powerful AI expands, he said. Hackers are already using synthetic audio and video for phishing campaigns, in which they try to impersonate a senior business or government leader to gain access to sensitive networks. They also can use AI to write malicious code or automate some aspects of cyberattacks. More concerning is the possibility that militant groups may try to use AI to help produce biological or chemical weapons, making up for a lack of technical expertise. That risk was included in the Department of Homeland Security's updated Homeland Threat Assessment, released earlier this year. "ISIS got on Twitter early and found ways to use social media to their advantage," Fowler said. "They are always looking for the next thing to add to their arsenal." Countering a growing threat Lawmakers have floated several proposals, saying there's an urgent need to act. Sen. Mark Warner of Virginia, the top Democrat on the Senate Intelligence Committee, said, for instance, that the U.S. must make it easier for AI developers to share information about how their products are being used by bad actors, whether they are extremists, criminal hackers or foreign spies. "It has been obvious since late 2022, with the public release of ChatGPT, that the same fascination and experimentation with generative AI the public has had would also apply to a range of malign actors," Warner said. During a recent hearing on extremist threats, House lawmakers learned that IS and al-Qaida have held training workshops to help supporters learn to use AI. Legislation that passed the U.S. House last month would require homeland security officials to assess the AI risks posed by such groups each year. Guarding against the malicious use of AI is no different from preparing for more conventional attacks, said Rep. 
August Pfluger, R-Texas, the bill's sponsor. "Our policies and capabilities must keep pace with the threats of tomorrow," he said.
[5]
Militant groups are experimenting with AI, and the risks are expected to grow
WASHINGTON -- As the rest of the world rushes to harness the power of artificial intelligence, militant groups also are experimenting with the technology, even if they aren't sure exactly what to do with it. For extremist organizations, AI could be a powerful tool for recruiting new members, churning out realistic deepfake images and refining their cyberattacks, national security experts and spy agencies have warned. Someone posting on a pro-Islamic State group website last month urged other IS supporters to make AI part of their operations. "One of the best things about AI is how easy it is to use," the user wrote in English. "Some intelligence agencies worry that AI will contribute (to) recruiting," the user continued. "So make their nightmares into reality." IS, which had seized territory in Iraq and Syria years ago but is now a decentralized alliance of militant groups that share a violent ideology, realized years ago that social media could be a potent tool for recruitment and disinformation, so it's not surprising that the group is testing out AI, national security experts say. For loose-knit, poorly resourced extremist groups -- or even an individual bad actor with a web connection -- AI can be used to pump out propaganda or deepfakes at scale, widening their reach and expanding their influence. "For any adversary, AI really makes it much easier to do things," said John Laliberte, a former vulnerability researcher at the National Security Agency who is now CEO of cybersecurity firm ClearVector. "With AI, even a small group that doesn't have a lot of money is still able to make an impact." Militant groups began using AI as soon as programs like ChatGPT became widely accessible. In the years since, they have increasingly used generative AI programs to create realistic-looking photos and video. When strapped to social media algorithms, this fake content can help recruit new believers, confuse or frighten enemies and spread propaganda at a scale unimaginable just a few years ago. Such groups spread fake images two years ago of the Israel-Hamas war depicting bloodied, abandoned babies in bombed-out buildings. The images spurred outrage and polarization while obscuring the war's actual horrors. Violent groups in the Middle East used the photos to recruit new members, as did antisemitic hate groups in the U.S. and elsewhere. Something similar happened last year after an attack claimed by an IS affiliate killed nearly 140 people at a concert venue in Russia. In the days after the shooting, AI-crafted propaganda videos circulated widely on discussion boards and social media, seeking new recruits. IS also has created deepfake audio recordings of its own leaders reciting scripture and used AI to quickly translate messages into multiple languages, according to researchers at SITE Intelligence Group, a firm that tracks extremist activities and has investigated IS' evolving use of AI. Such groups lag behind China, Russia or Iran and still view the more sophisticated uses of AI as "aspirational," according to Marcus Fowler, a former CIA agent who is now CEO at Darktrace Federal, a cybersecurity firm that works with the federal government. But the risks are too high to ignore and are likely to grow as the use of cheap, powerful AI expands, he said. Hackers are already using synthetic audio and video for phishing campaigns, in which they try to impersonate a senior business or government leader to gain access to sensitive networks. 
They also can use AI to write malicious code or automate some aspects of cyberattacks. More concerning is the possibility that militant groups may try to use AI to help produce biological or chemical weapons, making up for a lack of technical expertise. That risk was included in the Department of Homeland Security's updated Homeland Threat Assessment, released earlier this year. "ISIS got on Twitter early and found ways to use social media to their advantage," Fowler said. "They are always looking for the next thing to add to their arsenal." Lawmakers have floated several proposals, saying there's an urgent need to act. Sen. Mark Warner of Virginia, the top Democrat on the Senate Intelligence Committee, said, for instance, that the U.S. must make it easier for AI developers to share information about how their products are being used by bad actors, whether they are extremists, criminal hackers or foreign spies. "It has been obvious since late 2022, with the public release of ChatGPT, that the same fascination and experimentation with generative AI the public has had would also apply to a range of malign actors," Warner said. During a recent hearing on extremist threats, House lawmakers learned that IS and al-Qaida have held training workshops to help supporters learn to use AI. Legislation that passed the U.S. House last month would require homeland security officials to assess the AI risks posed by such groups each year. Guarding against the malicious use of AI is no different from preparing for more conventional attacks, said Rep. August Pfluger, R-Texas, the bill's sponsor. "Our policies and capabilities must keep pace with the threats of tomorrow," he said.
[6]
Militant Groups Are Experimenting With AI, and the Risks Are Expected to Grow
WASHINGTON (AP) -- As the rest of the world rushes to harness the power of artificial intelligence, militant groups also are experimenting with the technology, even if they aren't sure exactly what to do with it. For extremist organizations, AI could be a powerful tool for recruiting new members, churning out realistic deepfake images and refining their cyberattacks, national security experts and spy agencies have warned. Someone posting on a pro-Islamic State group website last month urged other IS supporters to make AI part of their operations. "One of the best things about AI is how easy it is to use," the user wrote in English. "Some intelligence agencies worry that AI will contribute (to) recruiting," the user continued. "So make their nightmares into reality." IS, which had seized territory in Iraq and Syria years ago but is now a decentralized alliance of militant groups that share a violent ideology, realized years ago that social media could be a potent tool for recruitment and disinformation, so it's not surprising that the group is testing out AI, national security experts say. For loose-knit, poorly resourced extremist groups -- or even an individual bad actor with a web connection -- AI can be used to pump out propaganda or deepfakes at scale, widening their reach and expanding their influence. "For any adversary, AI really makes it much easier to do things," said John Laliberte, a former vulnerability researcher at the National Security Agency who is now CEO of cybersecurity firm ClearVector. "With AI, even a small group that doesn't have a lot of money is still able to make an impact." How extremist groups are experimenting Militant groups began using AI as soon as programs like ChatGPT became widely accessible. In the years since, they have increasingly used generative AI programs to create realistic-looking photos and video. When strapped to social media algorithms, this fake content can help recruit new believers, confuse or frighten enemies and spread propaganda at a scale unimaginable just a few years ago. Such groups spread fake images two years ago of the Israel-Hamas war depicting bloodied, abandoned babies in bombed-out buildings. The images spurred outrage and polarization while obscuring the war's actual horrors. Violent groups in the Middle East used the photos to recruit new members, as did antisemitic hate groups in the U.S. and elsewhere. Something similar happened last year after an attack claimed by an IS affiliate killed nearly 140 people at a concert venue in Russia. In the days after the shooting, AI-crafted propaganda videos circulated widely on discussion boards and social media, seeking new recruits. IS also has created deepfake audio recordings of its own leaders reciting scripture and used AI to quickly translate messages into multiple languages, according to researchers at SITE Intelligence Group, a firm that tracks extremist activities and has investigated IS' evolving use of AI. 'Aspirational' -- for now Such groups lag behind China, Russia or Iran and still view the more sophisticated uses of AI as "aspirational," according to Marcus Fowler, a former CIA agent who is now CEO at Darktrace Federal, a cybersecurity firm that works with the federal government. But the risks are too high to ignore and are likely to grow as the use of cheap, powerful AI expands, he said. 
Hackers are already using synthetic audio and video for phishing campaigns, in which they try to impersonate a senior business or government leader to gain access to sensitive networks. They also can use AI to write malicious code or automate some aspects of cyberattacks. More concerning is the possibility that militant groups may try to use AI to help produce biological or chemical weapons, making up for a lack of technical expertise. That risk was included in the Department of Homeland Security's updated Homeland Threat Assessment, released earlier this year. "ISIS got on Twitter early and found ways to use social media to their advantage," Fowler said. "They are always looking for the next thing to add to their arsenal." Countering a growing threat Lawmakers have floated several proposals, saying there's an urgent need to act. Sen. Mark Warner of Virginia, the top Democrat on the Senate Intelligence Committee, said, for instance, that the U.S. must make it easier for AI developers to share information about how their products are being used by bad actors, whether they are extremists, criminal hackers or foreign spies. "It has been obvious since late 2022, with the public release of ChatGPT, that the same fascination and experimentation with generative AI the public has had would also apply to a range of malign actors," Warner said. During a recent hearing on extremist threats, House lawmakers learned that IS and al-Qaida have held training workshops to help supporters learn to use AI. Legislation that passed the U.S. House last month would require homeland security officials to assess the AI risks posed by such groups each year. Guarding against the malicious use of AI is no different from preparing for more conventional attacks, said Rep. August Pfluger, R-Texas, the bill's sponsor. "Our policies and capabilities must keep pace with the threats of tomorrow," he said.
Militant organizations from neo-Nazis to the Islamic State are deploying AI voice cloning and generative AI tools to recreate historical speeches and translate propaganda into multiple languages. National security experts warn these technologies enable even poorly resourced groups to produce sophisticated content at scale, helping them recruit new members and expand their reach across social media platforms.
Extremist groups ranging from neo-Nazis to the Islamic State are leveraging AI voice cloning and generative AI tools to amplify their propaganda efforts, according to national security experts and terrorism researchers. The technology enables these organizations to recreate historical speeches, translate content across languages, and produce multimedia narratives at a scale previously unimaginable [1].

"The adoption of AI-enabled translation by terrorists and extremists marks a significant evolution in digital propaganda strategies," said Lucas Webber, a senior threat intelligence analyst at Tech Against Terrorism and research fellow at the Soufan Center [1]. Earlier methods relied on human translators or basic machine translation, but advanced tools now produce seamless, contextually accurate translations that preserve tone, emotion, and ideological intensity across multiple languages.

Neo-Nazi groups have proven particularly prolific in adopting this technology. Several English-language versions of Adolf Hitler's speeches created using AI voice cloning have garnered tens of millions of streams across X, Instagram, TikTok, and other platforms [1]. According to the Global Network on Extremism and Technology (GNet), extremist content creators feed archival speeches from the Third Reich era into voice cloning services, specifically ElevenLabs, which then process them to mimic Hitler speaking in English.

The Islamic State has actively embraced AI to create deepfake audio recordings of its own leaders reciting scripture and to rapidly translate messages into multiple languages, according to researchers at SITE Intelligence Group, which tracks extremist activities [2]. Pro-Islamic State media outlets on encrypted networks are "using AI to create text-to-speech renditions of ideological content from official publications," transforming text-based propaganda into engaging multimedia narratives [1].
A user posting on a pro-Islamic State website last month urged supporters to integrate AI into their operations, writing: "One of the best things about AI is how easy it is to use. Some intelligence agencies worry that AI will contribute to recruiting. So make their nightmares into reality" [2].

Militant groups began using AI as soon as programs like ChatGPT became widely accessible, increasingly deploying generative AI programs to create realistic-looking photos and video [3]. When manipulated by social media algorithms, this fake content helps recruit new believers, confuses adversaries, and spreads disinformation at unprecedented scale. Two years ago, such groups spread fake images of the Israel-Hamas war depicting bloodied, abandoned babies in bombed-out buildings, spurring outrage and polarization while obscuring the war's actual horrors [5].
Neo-Nazi accelerationists who plot acts of terrorism to provoke societal collapse have turned to these tools to spread updated versions of their hyper-violent messaging. In late November, a prominent neo-Nazi influencer with a heavy presence on X and Telegram created an AI-generated audiobook of Siege, an insurgency manual written by American neo-Nazi James Mason that became required reading for terrorist organizations like the Base and Atomwaffen Division [1].

"Using a custom voice model of Mason, I re-created every newsletter and most of the attached newspaper clippings as in the original published newsletters," the influencer stated [1]. Joshua Fisher-Birch, a terrorism analyst at the Counter Extremism Project, noted that Siege has "cultlike status among some in the online extreme right" and promotes lone actor violence.
"For any adversary, AI really makes it much easier to do things," said John Laliberte, a former vulnerability researcher at the National Security Agency who now serves as CEO of cybersecurity firm ClearVector. "With AI, even a small group that doesn't have a lot of money is still able to make an impact"
2
.While such groups lag behind China, Russia or Iran and still view more sophisticated uses of AI as "aspirational," according to Marcus Fowler, a former CIA agent now CEO at Darktrace Federal, the risks remain too high to ignore
4
. Hackers already use synthetic audio and video for phishing campaigns, impersonating senior business or government leaders to gain access to sensitive networks. They can also use AI to write malicious code or automate aspects of enhancing cyberattacks.More concerning is the possibility that militant groups may attempt to use AI to help produce biological or chemical weapons, compensating for lack of technical expertise. That risk was included in the Department of Homeland Security's updated Homeland Threat Assessment released earlier this year
5
Sen. Mark Warner of Virginia, the top Democrat on the Senate Intelligence Committee, emphasized that the U.S. must make it easier for AI developers to share information about how their products are being used by bad actors, whether extremists, criminal hackers or foreign spies [4]. "It has been obvious since late 2022, with the public release of ChatGPT, that the same fascination and experimentation with generative AI the public has had would also apply to a range of malign actors," Warner stated.

During a recent hearing on extremist threats, House lawmakers learned that IS and al-Qaida have held training workshops to help supporters learn to use AI [3]. Legislation that passed the U.S. House last month would require homeland security officials to assess the AI risks posed by such groups each year, reflecting growing concern among lawmakers about counterterrorism challenges in the age of accessible artificial intelligence [2].