Curated by THEOUTPOST
On Fri, 23 Aug, 4:01 PM UTC
3 Sources
[1]
The dark side of AI: Islamic State supporters turn to artificial intelligence to bolster online support
Days after a deadly Islamic State attack on a Russian concert hall in March, a man clad in military fatigues and a helmet appeared in an online video, celebrating the assault in which more than 140 people were killed.

"The Islamic State delivered a strong blow to Russia with a bloody attack, the fiercest that hit it in years," the man said in Arabic, according to the SITE Intelligence Group, an organisation that tracks and analyses such online content.

But the man in the video, which the Thomson Reuters Foundation was not able to view independently, was not real - he was created using artificial intelligence, according to SITE and other online researchers.

Federico Borgonovo, a researcher at the Royal United Services Institute, a London-based think tank, traced the AI-generated video to an IS supporter active in the group's digital ecosystem. This person had combined statements, bulletins, and data from Islamic State's official news outlet to create the video using AI, Borgonovo explained.

Although Islamic State has been using AI for some time, Borgonovo said the video was an "exception to the rules" because the production quality was high even if the content was not as violent as in other online posts. "It's quite good for an AI product. But in terms of violence and the propaganda itself, it's average," he said, noting however that the video showed how IS supporters and affiliates can ramp up production of sympathetic content online.

Digital experts say groups like IS and far-right movements are increasingly using AI online and testing the limits of safety controls on social media platforms. A January study by the Combating Terrorism Center at West Point said AI could be used to generate and distribute propaganda, to recruit using AI-powered chatbots, to carry out attacks using drones or other autonomous vehicles, and to launch cyber-attacks.

"Many assessments of AI risk, and even of generative AI risks specifically, only consider this particular problem in a cursory way," said Stephane Baele, professor of international relations at UCLouvain in Belgium. "Major AI firms, who genuinely engaged with the risks of their tools by publishing sometimes lengthy reports mapping them, pay scant attention to extremist and terrorist uses."

Regulation governing AI is still being crafted around the world and pioneers of the technology have said they will strive to ensure it is safe and secure. Tech giant Microsoft, for example, has developed a Responsible AI Standard that aims to base AI development on six principles including fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.

In a special report earlier this year, SITE Intelligence Group's founder and executive director Rita Katz wrote that a range of actors from members of militant group al Qaeda to neo-Nazi networks were capitalising on the technology. "It's hard to understate what a gift AI is for terrorists and extremist communities, for which media is lifeblood," she wrote.

At the height of its powers in 2014, Islamic State claimed control over large parts of Syria and Iraq, imposing a reign of terror in the areas it controlled. Media was a prominent tool in the group's arsenal, and online recruitment has long been vital to its operations. Despite the collapse of its self-declared caliphate in 2017, its supporters and affiliates still preach their doctrine online and try to persuade people to join their ranks.
Last month, a security source told Reuters that France had identified a dozen ISIS-K handlers, based in countries around Afghanistan, who have a strong online presence and are trying to convince young men in European countries, who are interested in joining up with the group overseas, to instead carry out domestic attacks. ISIS-K is a resurgent wing of Islamic State, named after the historical region of Khorasan that included parts of Iran, Afghanistan and Central Asia.

Analysts fear that AI may facilitate and automate the work of such online recruiters.

Daniel Siegel, an investigator at social media research firm Graphika, said his team came across chatbots that mimicked dead or incarcerated Islamic State militants. He told the Thomson Reuters Foundation that it was unclear if the source of the bots was the Islamic State or its supporters, but the risk they posed was still real.

"Now (IS affiliates) can build these real relationships with bots that represent a potential future where a chatbot could encourage them to engage in kinetic violence," Siegel said.

Siegel interacted with some of the bots as part of his research and found their answers to be generic, but he said that could change as AI tech develops. "One of the things I am worried about as well is how synthetic media will enable these groups to blend their content that previously existed in silos into our mainstream culture," he added.

That is already happening: Graphika tracked videos of popular cartoon characters, like Rick and Morty and Peter Griffin, singing Islamic State anthems on different platforms. "What this allows the group or sympathisers or affiliates to do is target specific audiences because they know that the regular consumers of SpongeBob or Peter Griffin or Rick and Morty will be fed that content through the algorithm," Siegel said.

Then there is the danger of IS supporters using AI tech to broaden their knowledge of illegal activities. For its January study, researchers at the Combating Terrorism Center at West Point attempted to bypass the safeguards of Large Language Models (LLMs) and extract information that could be exploited by malicious actors. They crafted prompts that requested information on a range of activities from attack planning to recruitment and tactical learning, and the LLMs generated responses that were relevant half of the time.

In one example that they described as "alarming", researchers asked an LLM to help them convince people to donate to Islamic State. "There, the model yielded very specific guidelines on how to conduct a fundraising campaign and even offered specific narratives and phrases to be used on social media," the report said.

Joe Burton, a professor of international security at Lancaster University, said companies were acting irresponsibly by rapidly releasing AI models as open-source tools. He questioned the efficacy of LLMs' safety protocols, adding that he was "not convinced" that regulators were equipped to enforce the testing and verification of these methods.

"The factor to consider here is how much we want to regulate, and whether that will stifle innovation," Burton said. "The markets, in my view, shouldn't override safety and security, and I think - at the moment - that is what is happening."
[2]
Islamic State supporters turn to AI to bolster online support
* Pro-Islamic State AI-generated video follows Moscow attack
* Large Language Models vulnerable to exploitation
* Experts say company safeguards, regulation lacking

By Nazih Osseiran

BEIRUT - Days after a deadly Islamic State attack on a Russian concert hall in March, a man clad in military fatigues and a helmet appeared in an online video, celebrating the assault in which more than 140 people were killed.

"The Islamic State delivered a strong blow to Russia with a bloody attack, the fiercest that hit it in years," the man said in Arabic, according to the SITE Intelligence Group, an organisation that tracks and analyses such online content.

But the man in the video, which the Thomson Reuters Foundation was not able to view independently, was not real - he was created using artificial intelligence, according to SITE and other online researchers.

Federico Borgonovo, a researcher at the Royal United Services Institute, a London-based think tank, traced the AI-generated video to an IS supporter active in the group's digital ecosystem. This person had combined statements, bulletins, and data from Islamic State's official news outlet to create the video using AI, Borgonovo explained.

Although Islamic State has been using AI for some time, Borgonovo said the video was an "exception to the rules" because the production quality was high even if the content was not as violent as in other online posts. "It's quite good for an AI product. But in terms of violence and the propaganda itself, it's average," he said, noting however that the video showed how IS supporters and affiliates can ramp up production of sympathetic content online.

Digital experts say groups like IS and far-right movements are increasingly using AI online and testing the limits of safety controls on social media platforms. A January study by the Combating Terrorism Center at West Point said AI could be used to generate and distribute propaganda, to recruit using AI-powered chatbots, to carry out attacks using drones or other autonomous vehicles, and to launch cyber-attacks.

"Many assessments of AI risk, and even of generative AI risks specifically, only consider this particular problem in a cursory way," said Stephane Baele, professor of international relations at UCLouvain in Belgium. "Major AI firms, who genuinely engaged with the risks of their tools by publishing sometimes lengthy reports mapping them, pay scant attention to extremist and terrorist uses."

Regulation governing AI is still being crafted around the world and pioneers of the technology have said they will strive to ensure it is safe and secure. Tech giant Microsoft, for example, has developed a Responsible AI Standard that aims to base AI development on six principles including fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.

In a special report earlier this year, SITE Intelligence Group's founder and executive director Rita Katz wrote that a range of actors from members of militant group al Qaeda to neo-Nazi networks were capitalising on the technology. "It's hard to understate what a gift AI is for terrorists and extremist communities, for which media is lifeblood," she wrote.

CHATBOTS AND CARTOONS

At the height of its powers in 2014, Islamic State claimed control over large parts of Syria and Iraq, imposing a reign of terror in the areas it controlled. Media was a prominent tool in the group's arsenal, and online recruitment has long been vital to its operations.
Despite the collapse of its self-declared caliphate in 2017, its supporters and affiliates still preach their doctrine online and try to persuade people to join their ranks.

Last month, a security source told Reuters that France had identified a dozen ISIS-K handlers, based in countries around Afghanistan, who have a strong online presence and are trying to convince young men in European countries, who are interested in joining up with the group overseas, to instead carry out domestic attacks. ISIS-K is a resurgent wing of Islamic State, named after the historical region of Khorasan that included parts of Iran, Afghanistan and Central Asia.

Analysts fear that AI may facilitate and automate the work of such online recruiters.

Daniel Siegel, an investigator at social media research firm Graphika, said his team came across chatbots that mimicked dead or incarcerated Islamic State militants. He told the Thomson Reuters Foundation that it was unclear if the source of the bots was the Islamic State or its supporters, but the risk they posed was still real.

"Now (IS affiliates) can build these real relationships with bots that represent a potential future where a chatbot could encourage them to engage in kinetic violence," Siegel said.

Siegel interacted with some of the bots as part of his research and found their answers to be generic, but he said that could change as AI tech develops. "One of the things I am worried about as well is how synthetic media will enable these groups to blend their content that previously existed in silos into our mainstream culture," he added.

That is already happening: Graphika tracked videos of popular cartoon characters, like Rick and Morty and Peter Griffin, singing Islamic State anthems on different platforms. "What this allows the group or sympathisers or affiliates to do is target specific audiences because they know that the regular consumers of SpongeBob or Peter Griffin or Rick and Morty will be fed that content through the algorithm," Siegel said.

EXPLOITING PROMPTS

Then there is the danger of IS supporters using AI tech to broaden their knowledge of illegal activities. For its January study, researchers at the Combating Terrorism Center at West Point attempted to bypass the safeguards of Large Language Models (LLMs) and extract information that could be exploited by malicious actors. They crafted prompts that requested information on a range of activities from attack planning to recruitment and tactical learning, and the LLMs generated responses that were relevant half of the time.

In one example that they described as "alarming", researchers asked an LLM to help them convince people to donate to Islamic State. "There, the model yielded very specific guidelines on how to conduct a fundraising campaign and even offered specific narratives and phrases to be used on social media," the report said.

Joe Burton, a professor of international security at Lancaster University, said companies were acting irresponsibly by rapidly releasing AI models as open-source tools. He questioned the efficacy of LLMs' safety protocols, adding that he was "not convinced" that regulators were equipped to enforce the testing and verification of these methods.

"The factor to consider here is how much we want to regulate, and whether that will stifle innovation," Burton said. "The markets, in my view, shouldn't override safety and security, and I think - at the moment - that is what is happening."
[3]
AI-generated Islamic State propaganda raises terrorism fears
Islamic State's use of AI to create propaganda heightens concerns over recruitment and extremist influence, experts warn.

Days after a deadly Islamic State attack on a Russian concert hall in March, a man clad in military fatigues and a helmet appeared in an online video, celebrating the assault in which more than 140 people were killed.

"The Islamic State delivered a strong blow to Russia with a bloody attack, the fiercest that hit it in years," the man said in Arabic, according to the SITE Intelligence Group, an organization that tracks and analyzes such online content.

But the man in the video, which the Thomson Reuters Foundation was not able to view independently, was not real - he was created using artificial intelligence, according to SITE and other online researchers.

Federico Borgonovo, a researcher at the Royal United Services Institute, a London-based think tank, traced the AI-generated video to an IS supporter active in the group's digital ecosystem. This person had combined statements, bulletins, and data from Islamic State's official news outlet to create the video using AI, Borgonovo explained.

Although Islamic State has been using AI for some time, Borgonovo said the video was an "exception to the rules" because the production quality was high even if the content was not as violent as in other online posts. "It's quite good for an AI product. But in terms of violence and the propaganda itself, it's average," he said, noting however that the video showed how IS supporters and affiliates can ramp up production of sympathetic content online.

Digital experts say groups like IS and far-right movements are increasingly using AI online and testing the limits of safety controls on social media platforms. A January study by the Combating Terrorism Center at West Point said AI could be used to generate and distribute propaganda, to recruit using AI-powered chatbots, to carry out attacks using drones or other autonomous vehicles, and to launch cyber-attacks.

"Many assessments of AI risk, and even of generative AI risks specifically, only consider this particular problem in a cursory way," said Stephane Baele, professor of international relations at UCLouvain in Belgium. "Major AI firms, who genuinely engaged with the risks of their tools by publishing sometimes lengthy reports mapping them, pay scant attention to extremist and terrorist uses."

Regulation governing AI is still being crafted around the world and pioneers of the technology have said they will strive to ensure it is safe and secure. Tech giant Microsoft, for example, has developed a Responsible AI Standard that aims to base AI development on six principles including fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.

In a special report earlier this year, SITE Intelligence Group's founder and executive director Rita Katz wrote that a range of actors from members of militant group al Qaeda to neo-Nazi networks were capitalizing on the technology. "It's hard to understate what a gift AI is for terrorists and extremist communities, for which media is lifeblood," she wrote.

At the height of its powers in 2014, Islamic State claimed control over large parts of Syria and Iraq, imposing a reign of terror in the areas it controlled. Media was a prominent tool in the group's arsenal, and online recruitment has long been vital to its operations.
Despite the collapse of its self-declared caliphate in 2017, its supporters and affiliates still preach their doctrine online and try to persuade people to join their ranks.

Last month, a security source told Reuters that France had identified a dozen ISIS-K handlers, based in countries around Afghanistan, who have a strong online presence and are trying to convince young men in European countries, who are interested in joining up with the group overseas, to instead carry out domestic attacks. ISIS-K is a resurgent wing of Islamic State, named after the historical region of Khorasan that included parts of Iran, Afghanistan and Central Asia.

Analysts fear that AI may facilitate and automate the work of such online recruiters.

Daniel Siegel, an investigator at social media research firm Graphika, said his team came across chatbots that mimicked dead or incarcerated Islamic State militants. He told the Thomson Reuters Foundation that it was unclear if the source of the bots was the Islamic State or its supporters, but the risk they posed was still real.

"Now (IS affiliates) can build these real relationships with bots that represent a potential future where a chatbot could encourage them to engage in kinetic violence," Siegel said.

Siegel interacted with some of the bots as part of his research and found their answers to be generic, but he said that could change as AI tech develops. "One of the things I am worried about as well is how synthetic media will enable these groups to blend their content that previously existed in silos into our mainstream culture," he added.

That is already happening: Graphika tracked videos of popular cartoon characters, like Rick and Morty and Peter Griffin, singing Islamic State anthems on different platforms. "What this allows the group or sympathizers or affiliates to do is target specific audiences because they know that the regular consumers of SpongeBob or Peter Griffin or Rick and Morty will be fed that content through the algorithm," Siegel said.

EXPLOITING PROMPTS

Then, there is the danger of IS supporters using AI tech to broaden their knowledge of illegal activities. For its January study, researchers at the Combating Terrorism Center at West Point attempted to bypass the safeguards of Large Language Models (LLMs) and extract information that could be exploited by malicious actors. They crafted prompts that requested information on a range of activities from attack planning to recruitment and tactical learning, and the LLMs generated responses that were relevant half of the time.

In one example that they described as "alarming," researchers asked an LLM to help them convince people to donate to Islamic State. "There, the model yielded very specific guidelines on how to conduct a fundraising campaign and even offered specific narratives and phrases to be used on social media," the report said.

Joe Burton, a professor of international security at Lancaster University, said companies were acting irresponsibly by rapidly releasing AI models as open-source tools. He questioned the efficacy of LLMs' safety protocols, adding that he was "not convinced" that regulators were equipped to enforce the testing and verification of these methods.

"The factor to consider here is how much we want to regulate, and whether that will stifle innovation," Burton said. "The markets, in my view, shouldn't override safety and security, and I think - at the moment - that is what is happening."
Islamic State supporters are increasingly using artificial intelligence tools to create and disseminate propaganda, raising concerns about the potential misuse of AI technology for extremist activities.
In a concerning development, supporters of the Islamic State (IS) are turning to artificial intelligence (AI) to bolster their online presence and spread propaganda more effectively. This trend has caught the attention of researchers and security experts, who warn of the potential dangers of AI falling into the wrong hands 1.
IS supporters are utilizing AI tools to create and manipulate content, including images, audio, and text. These AI-generated materials are being used to produce propaganda videos, fake news articles, and even entire websites dedicated to spreading extremist ideologies. The use of AI allows these groups to produce high-quality content at a rapid pace, potentially reaching a wider audience and making it more challenging for authorities to counter their messaging 2.
Security experts have expressed alarm over this development, noting that AI tools could significantly enhance the capabilities of extremist groups. The ability to create convincing deepfakes, generate realistic text, and automate the production of propaganda materials poses a serious threat to online security and information integrity. There are fears that these AI-powered techniques could be used to recruit new members, spread disinformation, and incite violence 3.
The use of AI by extremist groups presents new challenges for content moderation on social media platforms. As AI-generated content becomes more sophisticated, it becomes increasingly difficult for automated systems and human moderators to distinguish between genuine and manipulated media. This development puts additional pressure on tech companies to develop more advanced detection methods and improve their content moderation strategies 1.
Governments and international organizations are beginning to recognize the need for a coordinated response to combat the misuse of AI by extremist groups. Efforts are being made to develop new technologies and strategies to detect and counter AI-generated propaganda. Additionally, there are calls for stricter regulations on AI development and usage to prevent these tools from falling into the hands of malicious actors 2.
The use of AI by IS supporters highlights a larger issue of how emerging technologies can be exploited for nefarious purposes. As AI continues to advance, it is likely that other extremist groups and bad actors will attempt to harness its power for their own agendas. This situation underscores the importance of responsible AI development and the need for ongoing vigilance in the face of evolving technological threats 3.