3 Sources
[1]
Extremist groups use AI to produce fake images and recruit new members
As the rest of the world rushes to harness the power of artificial intelligence (AI), militant groups are also experimenting with the technology, even if they are unsure exactly what to do with it. For extremist organisations, AI could be a powerful tool for recruiting new members, churning out realistic deepfake images and refining their cyberattacks, national security experts and spy agencies have warned.

Someone posting on a pro-Islamic State group website last month urged other IS supporters to make AI part of their operations. "One of the best things about AI is how easy it is to use," the user wrote in English. "Some intelligence agencies worry that AI will contribute [to] recruiting," the user continued. "So make their nightmares into reality."

IS, which had seized territory in Iraq and Syria years ago but is now a decentralised alliance of militant groups that share a violent ideology, realised years ago that social media could be a potent tool for recruitment and disinformation, so it's not surprising that the group is testing out AI, national security experts say. For loose-knit, poorly resourced extremist groups -- or even an individual bad actor with a web connection -- AI can be used to pump out propaganda or deepfakes at scale, widening their reach and expanding their influence.

"For any adversary, AI really makes it much easier to do things," said John Laliberte, a former vulnerability researcher at the National Security Agency who is now CEO of cybersecurity firm ClearVector. "With AI, even a small group that doesn't have a lot of money is still able to make an impact."

How extremist groups are experimenting

Militant groups began using AI as soon as programs like ChatGPT became widely accessible. In the years since, they have increasingly used generative AI programs to create realistic-looking photos and videos. When manipulated by social media algorithms, this fake content can help recruit new believers, confuse or frighten adversaries, and spread propaganda on a scale unimaginable just a few years ago.

Such groups spread fake images two years ago of the Israel-Hamas war, depicting bloodied, abandoned babies in bombed-out buildings. The images spurred outrage and polarisation while obscuring the war's actual horrors. Violent groups in the Middle East used the photos to recruit new members, as did antisemitic hate groups in the U.S. and elsewhere.

Something similar happened last year after an attack claimed by an IS affiliate killed nearly 140 people at a concert venue in Russia. In the days after the shooting, AI-crafted propaganda videos circulated widely on discussion boards and social media, seeking new recruits. IS has also created deepfake audio recordings of its own leaders reciting scripture and used AI to quickly translate messages into multiple languages, according to researchers at SITE Intelligence Group, a firm that tracks extremist activities and has investigated IS's evolving use of AI.

'Aspirational' -- for now

Such groups lag behind China, Russia or Iran and still view the more sophisticated uses of AI as "aspirational," according to Marcus Fowler, a former CIA agent who is now CEO at Darktrace Federal, a cybersecurity firm that works with the federal government. But the risks are too high to ignore and are likely to grow as the use of cheap, powerful AI expands, he said. Hackers are already using synthetic audio and video for phishing campaigns, in which they try to impersonate a senior business or government leader to gain access to sensitive networks.
They can also use AI to write malicious code or automate some aspects of cyberattacks. More concerning is the possibility that militant groups may try to use AI to help produce biological or chemical weapons, making up for a lack of technical expertise. That risk was included in the Department of Homeland Security's updated Homeland Threat Assessment, released earlier this year.

"ISIS got on Twitter early and found ways to use social media to their advantage," Fowler said. "They are always looking for the next thing to add to their arsenal."

Countering a growing threat

Lawmakers have floated several proposals, saying there's an urgent need to act. Senator Mark Warner of Virginia, the top Democrat on the Senate Intelligence Committee, said, for instance, that the US must make it easier for AI developers to share information about how their products are being used by bad actors, whether they are extremists, criminal hackers or foreign spies. "It has been obvious since late 2022, with the public release of ChatGPT, that the same fascination and experimentation with generative AI the public has had would also apply to a range of malign actors," Warner said.

During a recent hearing on extremist threats, House lawmakers learned that IS and al-Qaida have held training workshops to help supporters learn to use AI. Legislation that passed the US House last month would require homeland security officials to assess the AI risks posed by such groups each year. Guarding against the malicious use of AI is no different from preparing for more conventional attacks, said Representative August Pfluger of Texas, the bill's sponsor. "Our policies and capabilities must keep pace with the threats of tomorrow," he said.
[2]
Militant groups are experimenting with AI, and the risks are expected to grow
WASHINGTON -- As the rest of the world rushes to harness the power of artificial intelligence, militant groups also are experimenting with the technology, even if they aren't sure exactly what to do with it. For extremist organizations, AI could be a powerful tool for recruiting new members, churning out realistic deepfake images and refining their cyberattacks, national security experts and spy agencies have warned.

Someone posting on a pro-Islamic State group website last month urged other IS supporters to make AI part of their operations. "One of the best things about AI is how easy it is to use," the user wrote in English. "Some intelligence agencies worry that AI will contribute (to) recruiting," the user continued. "So make their nightmares into reality."

IS, which had seized territory in Iraq and Syria years ago but is now a decentralized alliance of militant groups that share a violent ideology, realized years ago that social media could be a potent tool for recruitment and disinformation, so it's not surprising that the group is testing out AI, national security experts say. For loose-knit, poorly resourced extremist groups -- or even an individual bad actor with a web connection -- AI can be used to pump out propaganda or deepfakes at scale, widening their reach and expanding their influence.

"For any adversary, AI really makes it much easier to do things," said John Laliberte, a former vulnerability researcher at the National Security Agency who is now CEO of cybersecurity firm ClearVector. "With AI, even a small group that doesn't have a lot of money is still able to make an impact."

Militant groups began using AI as soon as programs like ChatGPT became widely accessible. In the years since, they have increasingly used generative AI programs to create realistic-looking photos and video. When strapped to social media algorithms, this fake content can help recruit new believers, confuse or frighten enemies and spread propaganda at a scale unimaginable just a few years ago.

Such groups spread fake images two years ago of the Israel-Hamas war depicting bloodied, abandoned babies in bombed-out buildings. The images spurred outrage and polarization while obscuring the war's actual horrors. Violent groups in the Middle East used the photos to recruit new members, as did antisemitic hate groups in the U.S. and elsewhere. Something similar happened last year after an attack claimed by an IS affiliate killed nearly 140 people at a concert venue in Russia. In the days after the shooting, AI-crafted propaganda videos circulated widely on discussion boards and social media, seeking new recruits.

IS also has created deepfake audio recordings of its own leaders reciting scripture and used AI to quickly translate messages into multiple languages, according to researchers at SITE Intelligence Group, a firm that tracks extremist activities and has investigated IS' evolving use of AI.

Such groups lag behind China, Russia or Iran and still view the more sophisticated uses of AI as "aspirational," according to Marcus Fowler, a former CIA agent who is now CEO at Darktrace Federal, a cybersecurity firm that works with the federal government. But the risks are too high to ignore and are likely to grow as the use of cheap, powerful AI expands, he said. Hackers are already using synthetic audio and video for phishing campaigns, in which they try to impersonate a senior business or government leader to gain access to sensitive networks.
They also can use AI to write malicious code or automate some aspects of cyberattacks. More concerning is the possibility that militant groups may try to use AI to help produce biological or chemical weapons, making up for a lack of technical expertise. That risk was included in the Department of Homeland Security's updated Homeland Threat Assessment, released earlier this year.

"ISIS got on Twitter early and found ways to use social media to their advantage," Fowler said. "They are always looking for the next thing to add to their arsenal."

Lawmakers have floated several proposals, saying there's an urgent need to act. Sen. Mark Warner of Virginia, the top Democrat on the Senate Intelligence Committee, said, for instance, that the U.S. must make it easier for AI developers to share information about how their products are being used by bad actors, whether they are extremists, criminal hackers or foreign spies. "It has been obvious since late 2022, with the public release of ChatGPT, that the same fascination and experimentation with generative AI the public has had would also apply to a range of malign actors," Warner said.

During a recent hearing on extremist threats, House lawmakers learned that IS and al-Qaida have held training workshops to help supporters learn to use AI. Legislation that passed the U.S. House last month would require homeland security officials to assess the AI risks posed by such groups each year. Guarding against the malicious use of AI is no different from preparing for more conventional attacks, said Rep. August Pfluger, R-Texas, the bill's sponsor. "Our policies and capabilities must keep pace with the threats of tomorrow," he said.
[3]
Militant Groups Are Experimenting With AI, and the Risks Are Expected to Grow
WASHINGTON (AP) -- As the rest of the world rushes to harness the power of artificial intelligence, militant groups also are experimenting with the technology, even if they aren't sure exactly what to do with it. For extremist organizations, AI could be a powerful tool for recruiting new members, churning out realistic deepfake images and refining their cyberattacks, national security experts and spy agencies have warned.

Someone posting on a pro-Islamic State group website last month urged other IS supporters to make AI part of their operations. "One of the best things about AI is how easy it is to use," the user wrote in English. "Some intelligence agencies worry that AI will contribute (to) recruiting," the user continued. "So make their nightmares into reality."

IS, which had seized territory in Iraq and Syria years ago but is now a decentralized alliance of militant groups that share a violent ideology, realized years ago that social media could be a potent tool for recruitment and disinformation, so it's not surprising that the group is testing out AI, national security experts say. For loose-knit, poorly resourced extremist groups -- or even an individual bad actor with a web connection -- AI can be used to pump out propaganda or deepfakes at scale, widening their reach and expanding their influence.

"For any adversary, AI really makes it much easier to do things," said John Laliberte, a former vulnerability researcher at the National Security Agency who is now CEO of cybersecurity firm ClearVector. "With AI, even a small group that doesn't have a lot of money is still able to make an impact."

How extremist groups are experimenting

Militant groups began using AI as soon as programs like ChatGPT became widely accessible. In the years since, they have increasingly used generative AI programs to create realistic-looking photos and video. When strapped to social media algorithms, this fake content can help recruit new believers, confuse or frighten enemies and spread propaganda at a scale unimaginable just a few years ago.

Such groups spread fake images two years ago of the Israel-Hamas war depicting bloodied, abandoned babies in bombed-out buildings. The images spurred outrage and polarization while obscuring the war's actual horrors. Violent groups in the Middle East used the photos to recruit new members, as did antisemitic hate groups in the U.S. and elsewhere.

Something similar happened last year after an attack claimed by an IS affiliate killed nearly 140 people at a concert venue in Russia. In the days after the shooting, AI-crafted propaganda videos circulated widely on discussion boards and social media, seeking new recruits. IS also has created deepfake audio recordings of its own leaders reciting scripture and used AI to quickly translate messages into multiple languages, according to researchers at SITE Intelligence Group, a firm that tracks extremist activities and has investigated IS' evolving use of AI.

'Aspirational' -- for now

Such groups lag behind China, Russia or Iran and still view the more sophisticated uses of AI as "aspirational," according to Marcus Fowler, a former CIA agent who is now CEO at Darktrace Federal, a cybersecurity firm that works with the federal government. But the risks are too high to ignore and are likely to grow as the use of cheap, powerful AI expands, he said.
Hackers are already using synthetic audio and video for phishing campaigns, in which they try to impersonate a senior business or government leader to gain access to sensitive networks. They also can use AI to write malicious code or automate some aspects of cyberattacks. More concerning is the possibility that militant groups may try to use AI to help produce biological or chemical weapons, making up for a lack of technical expertise. That risk was included in the Department of Homeland Security's updated Homeland Threat Assessment, released earlier this year.

"ISIS got on Twitter early and found ways to use social media to their advantage," Fowler said. "They are always looking for the next thing to add to their arsenal."

Countering a growing threat

Lawmakers have floated several proposals, saying there's an urgent need to act. Sen. Mark Warner of Virginia, the top Democrat on the Senate Intelligence Committee, said, for instance, that the U.S. must make it easier for AI developers to share information about how their products are being used by bad actors, whether they are extremists, criminal hackers or foreign spies. "It has been obvious since late 2022, with the public release of ChatGPT, that the same fascination and experimentation with generative AI the public has had would also apply to a range of malign actors," Warner said.

During a recent hearing on extremist threats, House lawmakers learned that IS and al-Qaida have held training workshops to help supporters learn to use AI. Legislation that passed the U.S. House last month would require homeland security officials to assess the AI risks posed by such groups each year. Guarding against the malicious use of AI is no different from preparing for more conventional attacks, said Rep. August Pfluger, R-Texas, the bill's sponsor. "Our policies and capabilities must keep pace with the threats of tomorrow," he said.
Militant groups including Islamic State are experimenting with AI to produce deepfake images, spread propaganda, and recruit new members at scale. National security experts warn that even poorly resourced extremist groups can now leverage accessible AI tools like ChatGPT to create realistic fake content, automate cyberattacks, and translate messages across languages, raising urgent concerns about the malicious use of AI.
Extremist groups are actively experimenting with AI, transforming how militant organizations recruit members and spread propaganda. A post on a pro-Islamic State group website last month explicitly urged supporters to integrate AI into their operations, stating that "one of the best things about AI is how easy it is to use" and encouraging followers to "make their nightmares into reality" by using AI for recruitment [1]. National security experts and intelligence agencies have warned that the AI risks posed by these groups are escalating as the technology becomes more accessible and powerful.
For loose-knit, poorly resourced extremist groups, AI offers a force-multiplier effect. John Laliberte, a former vulnerability researcher at the National Security Agency and now CEO of cybersecurity firm ClearVector, explains that "with AI, even a small group that doesn't have a lot of money is still able to make an impact" [2]. This democratization of sophisticated technology means that militant groups using AI can now compete with better-funded adversaries in the information warfare space.

Militant groups began using AI as soon as programs like ChatGPT became widely accessible in late 2022. Since then, they have increasingly deployed generative AI programs to create realistic-looking photos and videos. The Islamic State has created deepfake audio recordings of its own leaders reciting scripture and used AI to quickly translate messages into multiple languages, according to researchers at SITE Intelligence Group, a firm that tracks extremist activities [3].

When manipulated by social media algorithms, this AI-generated propaganda can help recruit new believers, confuse or frighten adversaries, and spread disinformation at a scale unimaginable just a few years ago. Two years ago, such groups spread fake images of the Israel-Hamas war depicting bloodied, abandoned babies in bombed-out buildings, spurring outrage and polarization while obscuring the war's actual horrors [1]. Violent groups in the Middle East used these deepfake images and videos for recruitment, as did antisemitic hate groups in the U.S. and elsewhere.

After an attack claimed by an IS affiliate killed nearly 140 people at a concert venue in Russia last year, AI-crafted propaganda videos circulated widely on discussion boards and social media, seeking new recruits [2]. This pattern demonstrates how extremist groups now deploy AI for recruitment immediately following attacks to capitalize on heightened media attention.

Beyond propaganda and recruitment, hackers are already using synthetic audio and video for phishing campaigns, attempting to impersonate senior business or government leaders to gain access to sensitive networks. Bad actors can also use AI to write malicious code or automate aspects of cyberattacks [3]. Marcus Fowler, a former CIA agent who is now CEO at Darktrace Federal, notes that while such groups lag behind China, Russia, or Iran and still view more sophisticated uses of AI as "aspirational," the risks are too high to ignore as cheap, powerful AI expands [2].
More concerning is the possibility that militant groups may attempt to use AI to help produce biological or chemical weapons, compensating for a lack of technical expertise. This threat was included in the Department of Homeland Security's updated Homeland Threat Assessment released earlier this year [1]. Fowler observed that "ISIS got on Twitter early and found ways to use social media to their advantage. They are always looking for the next thing to add to their arsenal."

Lawmakers are responding with several proposals. Sen. Mark Warner of Virginia, the top Democrat on the Senate Intelligence Committee, emphasized that the U.S. must make it easier for AI developers to share information about how their products are being used by bad actors, whether they are extremists, criminal hackers, or foreign spies [2]. Warner stated that "it has been obvious since late 2022, with the public release of ChatGPT, that the same fascination and experimentation with generative AI the public has had would also apply to a range of malign actors."

During a recent hearing on extremist threats, House lawmakers learned that IS and al-Qaida have held training workshops to help supporters learn to use AI [1]. Legislation that passed the U.S. House last month would require homeland security officials to assess the AI risks posed by such groups each year [2]. These measures signal growing recognition that the malicious use of AI by extremist groups demands coordinated policy responses and enhanced collaboration between government agencies and AI developers to mitigate emerging threats.

Summarized by Navi