5 Sources
[1]
AI-Powered Disinformation Swarms Are Coming for Democracy
In 2016, hundreds of Russians filed into a modern office building at 55 Savushkina Street in St. Petersburg every day; they were part of the now-infamous troll farm known as the Internet Research Agency. Day and night, seven days a week, these employees would manually comment on news articles, post on Facebook and Twitter, and generally seek to rile up Americans about the then-upcoming presidential election. When the scheme was finally uncovered, there was widespread media coverage and Senate hearings, and social media platforms made changes in the way they verified users. But in reality, for all the money and resources poured into the IRA, the impact was minimal -- certainly compared to that of another Russia-linked campaign that saw Hillary Clinton's emails leaked just before the election. A decade on, while the IRA is no more, disinformation campaigns have continued to evolve, including the use of AI technology to create fake websites and deepfake videos. A new paper, published in Science on Thursday, predicts an imminent step-change in how disinformation campaigns will be conducted. Instead of hundreds of employees sitting at desks in St. Petersburg, the paper posits, one person with access to the latest AI tools will be able to command "swarms" of thousands of social media accounts, capable not only of crafting unique posts indistinguishable from human content, but of evolving independently and in real time -- all without constant human oversight. These AI swarms, the researchers believe, could deliver society-wide shifts in viewpoint that not only sway elections but ultimately bring about the end of democracy -- unless steps are taken now to prevent it. "Advances in artificial intelligence offer the prospect of manipulating beliefs and behaviors on a population-wide level," the report says. "By adaptively mimicking human social dynamics, they threaten democracy." The paper was authored by 22 experts from across the globe, drawn from fields including computer science, artificial intelligence, and cybersecurity, as well as psychology, computational social science, journalism, and government policy. The pessimistic outlook on how AI technology will change the information environment is shared by other experts in the field who have reviewed the paper. "To target chosen individuals or communities is going to be much easier and powerful," says Lukasz Olejnik, a visiting senior research fellow at King's College London's Department of War Studies and the author of Propaganda: From Disinformation and Influence to Operations and Information Warfare. "This is an extremely challenging environment for a democratic society. We're in big trouble." Even those who are optimistic about AI's potential to help humans believe the paper highlights a threat that needs to be taken seriously. "AI-enabled influence campaigns are certainly within the current state of advancement of the technology, and as the paper sets out, this also poses significant complexity for governance measures and defense response," says Barry O'Sullivan, a professor at the School of Computer Science and IT at University College Cork. In recent months, as AI companies seek to prove they are worth the hundreds of billions of dollars that have been poured into them, many have pointed to the most recent crop of AI agents as evidence that the technology will finally live up to the hype. But the very same technology could soon be deployed, the authors argue, to disseminate disinformation and propaganda at a scale never before seen.
The swarms the authors describe would consist of AI-controlled agents capable of maintaining persistent identities and, crucially, memory, allowing them to simulate believable online personas. The agents would coordinate to achieve shared objectives while at the same time maintaining distinct personas and varied output to avoid detection. These systems would also be able to adapt in real time, responding to signals from the social media platforms and to conversations with real humans.
[2]
Next-generation AI 'swarms' will invade social media by mimicking human behavior and harassing real users, researchers warn
Social media users could find themselves swept up in a movement of AI's making. Swarms of artificial intelligence (AI) agents could soon invade social media platforms en masse to spread false narratives, harass users and undermine democracy, researchers warn. These "AI swarms" will form part of a new frontier in information warfare, capable of mimicking human behavior to avoid detection while creating the illusion of an authentic online movement, according to a commentary published Jan. 22 in the journal Science. Imagine finding that the views of your favorite online community are hardening around a position that was previously up for debate. Human instinct is often to follow the "wisdom" of the herd. But in this case, the herd could be secretly shepherded by an AI swarm operating on behalf of an unknown individual, group, political party, company or state actor. "Humans, generally speaking, are conformist," commentary co-author Jonas Kunst, a professor of communication at the BI Norwegian Business School in Norway, told Live Science. "We often don't want to agree with that, and people vary to a certain extent, but all things being equal, we do have a tendency to believe what most people do has certain value. That's something that can relatively easily be hijacked by these swarms." And if you don't get swept up with the herd, the swarm could also be a harassment tool to discourage arguments that undermine the AI's narrative, the researchers argued. For example, the swarm could emulate an angry mob to target an individual with dissenting views and drive them off the platform. The researchers don't give a timeline for the invasion of AI swarms, so it's unclear when the first agents will arrive on our feeds. However, they noted that swarms would be difficult to detect, and thus the extent to which they might have already been deployed is unknown. For many, signs of the growing influence of bots on social media are already evident, while the dead internet conspiracy theory -- that bots are responsible for the majority of online activity and content creation -- has been gaining traction over the last few years.
Shepherding the flock
The researchers warn that the emerging AI swarm risk is compounded by long-standing vulnerabilities in our digital ecosystems, already weakened by what they described as the "erosion of rational-critical discourse and a lack of shared reality among citizens." Anyone who uses social media will know that it's become a very divisive place. The online ecosystem is also already littered with automated bots -- non-human accounts acting on the instructions of computer software -- which account for more than half of all web traffic. Conventional bots are typically only capable of performing simple tasks over and over again, like posting the same incendiary message. They can still cause harm, spreading false information and inflating false narratives, but they're usually pretty easy to detect and rely on humans to coordinate them at scale. The next-generation AI swarms, on the other hand, are coordinated by large language models (LLMs) -- the AI systems behind popular chatbots. With an LLM at the helm, a swarm will be sophisticated enough to adapt to the online communities it infiltrates, deploying collections of different personas that retain memory and identity, according to the commentary.
"We talk about it as a kind of organism that is self-sufficient, that can coordinate itself, can learn, can adapt over time and, by that, specialize in exploiting human vulnerabilities," Kunst said. This mass manipulation is far from hypothetical. Last year, Reddit threatened legal action against researchers who used AI chatbots in an experiment to manipulate the opinions of four million users in its popular forum r/changemyview. According to the researchers' preliminary findings, their chatbots' responses were between three to six times more persuasive than those made by human users. A swarm could contain hundreds, thousands -- or even a million -- AI agents. Kunst noted that the number scales with computing power and would also be limited by restrictions that social media companies may introduce to combat the swarms. But it's not all about the number of agents. Swarms could target local community groups that would be suspicious of a sudden influx of new users. In this scenario, only a few agents would be deployed. The researchers also noted that because the swarms are more sophisticated than traditional bots, they can have more influence with fewer numbers. "I think the more sophisticated these bots are, the less you actually need," commentary lead author Daniel Schroeder, a researcher at the technology research organization SINTEF in Norway, told Live Science. Guarding against next-gen bots Agents boast an edge in debates with real users because they can post 24 hours a day, every day, for however long it takes for their narrative to take hold. The researchers added that in "cognitive warfare," AI's relentlessness and persistence can be weaponized against limited human efforts. Social media companies want real users on their platforms, not AI agents, so the researchers envisage that companies will respond to AI swarms with improved account authentication -- forcing users to prove they are real people. But the researchers also flagged some issues with this approach, arguing that it could discourage political dissent in countries where people rely on anonymity to speak out against governments. Authentic accounts can also be hijacked or acquired, which complicates things further. Still, the researchers noted that strengthening authentication would make it more difficult and costly for those wishing to deploy AI swarms. The researchers also proposed other swarm defenses, like scanning live traffic for statistically anomalous patterns that could represent AI swarms and the establishment of an "AI Influence Observatory" ecosystem, in which academic groups, NGOs and other institutions can study, raise awareness and respond to the AI swarm threat. In essence, the researchers want to get ahead of the issue before it can disrupt elections and other large events. "We are with a reasonable certainty warning about a future development that really might have disproportionate consequences for democracy, and we need to start preparing for that," Kunst said. "We need to be proactive instead of waiting for the first type of larger events being negatively influenced by AI swarms."
[3]
Experts warn of threat to democracy by 'AI bot swarms' infesting social media
Misinformation technology could be deployed at scale to disrupt 2028 US presidential election, AI researchers warn
Political leaders could soon launch swarms of human-imitating AI agents to reshape public opinion in a way that threatens to undermine democracy, a high-profile group of experts in AI and online misinformation has warned. The Nobel peace prize-winning free-speech activist Maria Ressa and leading AI and social science researchers from Berkeley, Harvard, Oxford, Cambridge and Yale are among a global consortium flagging the new "disruptive threat" posed by hard-to-detect, malicious "AI swarms" infesting social media and messaging channels. A would-be autocrat could use such swarms to persuade populations to accept cancelled elections or overturned results, they said, amid predictions the technology could be deployed at scale by the time of the US presidential election in 2028. The warnings, published today in Science, come alongside calls for coordinated global action to counter the risk, including "swarm scanners" and watermarked content to expose AI-run misinformation campaigns. Early versions of AI-powered influence operations have been used in the 2024 elections in Taiwan, India and Indonesia. "A disruptive threat is emerging: swarms of collaborative, malicious AI agents," the authors said. "These systems are capable of coordinating autonomously, infiltrating communities and fabricating consensus efficiently. By adaptively mimicking human social dynamics, they threaten democracy." One leading expert in propaganda technology, Inga Trauthig, said the adoption of such advanced technology is likely to be slowed by politicians' reluctance to cede campaign control to AIs. Another cause for scepticism is concern that using such illicit techniques would not be worth the risk, given voters are still more influenced by offline material. The experts behind the warning include New York University's Gary Marcus, a prominent sceptic about the claimed potential of current AI models who calls himself a "generative AI realist", and Audrey Tang, Taiwan's first digital minister, who has warned: "Those in the pay of authoritarian forces are undermining electoral processes, weaponizing AI and employing our societal strengths against us." Others include David Garcia, professor of social and behavioural data science at the University of Konstanz, Sander van der Linden, a misinformation expert and director of Cambridge University's social decision-making lab, and Christopher Summerfield, AI researcher and professor of cognitive neuroscience at Oxford University. Together they say political leaders could deploy almost limitless numbers of AIs to masquerade as humans online and precisely infiltrate communities, learn their foibles over time and use increasingly convincing and carefully tailored falsehoods to change population-wide opinions. The threat is being supercharged by advances in AIs' ability to pick up on the tone and content of discourse. They are increasingly able to mimic human dynamics, for example by using appropriate slang and posting irregularly to avoid detection. Progress in the development of "agentic" AI also gives them the ability to autonomously plan and coordinate actions. As well as operating across social media, they may use messaging channels and even write blogs or use email, depending on which channel the AI thinks best helps achieve an aim, said one of the authors, Daniel Thilo Schroeder, a research scientist at the SINTEF research institute in Oslo.
"It's just frightening how easy these things are to vibe code and just have small bot armies that can actually navigate online social media platforms and email and use these tools," said Schroeder, who has been simulating swarms in laboratory conditions. "If these bots start to evolve into a collective and exchange information to solve a problem - in this case a malicious goal, namely analysing a community and finding a weak spot - then coordination will increase their accuracy and efficiency," said another of the authors, Jonas Kunst, professor of communication at the BI Norwegian Business School. "That is a really serious threat that we predict is going to materialise." In Taiwan, where voters are regularly targeted by Chinese propaganda, often unknowingly, AI bots have been increasing engagement with citizens on Threads and Facebook in the last two to three months, said Puma Shen, a Taiwanese Democratic Progressive Party MP and campaigner against Chinese disinformation. During discussions on political topics the AIs tend to provide "tonnes of information that you cannot verify", creating "information overload", Shen said. He said AIs might cite fake articles about how America will abandon Taiwan. Another recent trend is for the AI bots to stress to younger Taiwanese people that the China-Taiwan dispute is very complicated "so do not take sides if you have no knowledge". "It's not telling you that China's great, but it's [encouraging them] to be neutral," Shen told the Guardian. "This is very dangerous, because then you think people like me are radical." Amid signs the progress of AI technology is not as rapid as Silicon Valley companies like OpenAI and Anthropic have claimed, the Guardian asked independent AI experts to assess the swarm warnings. "In the election-heavy year of 2024 the capabilities were there for AI-driven microtargeting but we didn't see as much of that as scholars predicted," said Trauthig, an adviser to the International Panel on the Information Environment. "Most political propagandists I interview are still using older technologies and are not at this cutting edge." "It isn't fanciful," said Michael Wooldridge, professor of the foundations of AI at Oxford University. "I think it is entirely plausible that bad actors will try to mobilise virtual armies of LLM-powered agents to disrupt elections and manipulate public opinion, for example targeting large numbers of individuals on social media and other electronic media. It's technologically perfectly feasible ... the technology has got progressively better and much more accessible."
[4]
AI 'Swarms' Could Escalate Online Misinformation and Manipulation, Researchers Warn
The paper notes that existing platform safeguards may struggle to detect and contain these swarms.
The era of easily detectable botnets is coming to an end, according to a new report published in Science on Thursday. In the study, researchers warned that misinformation campaigns are shifting toward autonomous AI swarms that can imitate human behavior, adapt in real time, and require little human oversight, complicating efforts to detect and stop them. Written by a consortium of researchers, including those from Oxford, Cambridge, UC Berkeley, NYU, and the Max Planck Institute, the paper describes a digital environment in which manipulation becomes harder to identify. Instead of short bursts tied to elections or politics, these AI campaigns can sustain a narrative over longer periods of time. "In the hands of a government, such tools could suppress dissent or amplify incumbents," the researchers wrote. "Therefore, the deployment of defensive AI can only be considered if governed by strict, transparent, and democratically accountable frameworks." A swarm is a group of autonomous AI agents that work together to solve problems or complete objectives more efficiently than a single system. The researchers said AI swarms build on existing weaknesses in social media platforms, where users are often insulated from opposing viewpoints. "False news has been shown to spread faster and more broadly than true news, deepening fragmented realities and eroding shared factual baselines," they wrote. "Recent evidence links engagement-optimized curation to polarization, with platform algorithms amplifying divisive content even at the expense of user satisfaction, further degrading the public sphere." That shift is already visible on major platforms, according to Sean Ren, a computer science professor at the University of Southern California and the CEO of Sahara AI, who said that AI-driven accounts are increasingly difficult to distinguish from ordinary users. "I think stricter KYC, or account identity validation, would help a lot here," Ren told Decrypt. "If it's harder to create new accounts and easier to monitor spammers, it becomes much more difficult for agents to use large numbers of accounts for coordinated manipulation." Earlier influence campaigns depended largely on scale rather than subtlety, with thousands of accounts posting identical messages simultaneously, which made detection comparatively straightforward (a minimal example of that older style of check is sketched after this article). In contrast, the study said, AI swarms exhibit "unprecedented autonomy, coordination, and scale." Ren said content moderation alone is unlikely to stop these systems. The problem, he said, is how platforms manage identity at scale. Stronger identity checks and limits on account creation, he said, could make coordinated behavior easier to detect, even when individual posts appear human. "If the agent can only use a small number of accounts to post content, then it's much easier to detect suspicious usage and ban those accounts," he said.
No simple fix
The researchers concluded that there is no single solution to the problem, with potential options including improved detection of statistically anomalous coordination and greater transparency around automated activity, but said technical measures alone are unlikely to be sufficient. According to Ren, financial incentives also remain a persistent driver of coordinated manipulation attacks, even as platforms introduce new technical safeguards.
"These agent swarms are usually controlled by teams or vendors who are getting monetary incentives from external parties or companies to do the coordinated manipulation," he said. "Platforms should enforce stronger KYC and spam detection mechanisms to identify and filter out agent manipulated accounts."
[5]
Experts Are Warning That 'AI Swarms' Could Spark Disruptions to Democracy
Forget troll farms. "AI swarms" could be the latest threat to democracy. AI researchers are already warning that upcoming election seasons could be rife with AI-powered misinformation campaigns capable of threatening democracy. "A disruptive threat is emerging: swarms of collaborative, malicious AI agents," according to a paper published in the journal Science. "By adaptively mimicking human social dynamics, they threaten democracy." Authors of the paper include Nobel peace prize-winner Maria Ressa, Taiwan's first Minister of Digital Affairs Audrey Tang, and other experts in AI, misinformation, and related fields. They warn that these AI swarms are a natural evolution of influence campaigns like the Russian Internet Research Agency's 2016 Twitter operation. The paper notes that only about 1 percent of Twitter users accounted for 70 percent of exposure to the IRA's Twitter content, in part because of the technical limitations of a system operated by humans.
A consortium of 22 experts, including Nobel laureate Maria Ressa, warns that AI swarms—coordinated networks of autonomous AI agents—could infiltrate social media to spread misinformation campaigns at unprecedented scale. These systems can mimic human behavior, adapt in real time, and operate with minimal oversight, posing a serious threat to democracy.
A decade after Russian troll farms at the Internet Research Agency manually posted content to influence the 2016 U.S. presidential election, a far more sophisticated threat is emerging. According to a paper published in Science, AI swarms—networks of autonomous AI agents capable of mimicking human behavior—could soon manipulate public opinion at unprecedented scale [1]. The study, authored by 22 experts from fields including computer science, artificial intelligence, cybersecurity, and psychology, warns that these systems pose a serious threat to democracy by adaptively mimicking human social dynamics [3].
Unlike traditional troll farms that relied on hundreds of employees sitting at desks, or botnets posting identical messages, AI swarms can be controlled by a single person with access to large language models [1]. These systems coordinate autonomously to achieve shared objectives while creating individual personas to avoid detection, making them far more difficult to identify than conventional bots [4].
The researchers describe AI swarms as collections of AI-controlled agents capable of maintaining persistent identities and memory, allowing for the simulation of believable online identities [1]. These autonomous AI agents can adapt in real time to respond to signals from social media platforms and conversations with real humans, making social media manipulation far more effective than previous influence campaigns [2].
A swarm could contain hundreds, thousands, or even a million AI agents, with the number scaling based on computing power and platform restrictions [2]. The threat is compounded by existing vulnerabilities in digital ecosystems, already weakened by what researchers describe as the "erosion of rational-critical discourse and a lack of shared reality among citizens" [2].

Last year, Reddit threatened legal action against researchers who used AI chatbots to manipulate opinions of four million users in the forum r/changemyview. The researchers' preliminary findings showed their chatbots' responses were between three and six times more persuasive than those made by human users [2].

The consortium warning about AI-powered disinformation includes Nobel peace prize-winner Maria Ressa, Taiwan's first Minister of Digital Affairs Audrey Tang, and leading researchers from Berkeley, Harvard, Oxford, Cambridge, and Yale [3]. Audrey Tang has warned that "those in the pay of authoritarian forces are undermining electoral processes, weaponizing AI and employing our societal strengths against us" [3].

The paper predicts this technology could be deployed at scale by the 2028 U.S. presidential election [3]. Early versions of AI-powered influence operations have already been used in the 2024 elections in Taiwan, India, and Indonesia [3].

Daniel Thilo Schroeder, a research scientist at the SINTEF research institute in Oslo and one of the paper's authors, has been simulating swarms in laboratory conditions. "It's just frightening how easy these things are to vibe code and just have small bot armies that can actually navigate online social media platforms and email and use these tools," Schroeder said [3].
The shift from troll farms to AI swarms represents a fundamental change in how misinformation campaigns operate [5]. The 2016 Internet Research Agency Twitter operation saw only about 1 percent of Twitter users account for 70 percent of exposure to content, in part because humans were operating the system [5]. By contrast, AI swarms exhibit "unprecedented autonomy, coordination, and scale," according to the study [4]. These systems can sustain narratives over longer periods rather than short bursts tied to elections, making online misinformation harder to identify and counter [4].

Sean Ren, a computer science professor at the University of Southern California and CEO of Sahara AI, told Decrypt that AI-driven accounts are increasingly difficult to distinguish from ordinary users. "These agent swarms are usually controlled by teams or vendors who are getting monetary incentives from external parties or companies to do the coordinated manipulation," Ren said [4].

The researchers call for coordinated global action to counter the risk, including swarm scanners and watermarked content to detect AI-run misinformation campaigns [3]. However, they conclude there is no single solution, with potential options including improved detection of statistically anomalous coordination and greater transparency around automated activity [4].

Ren suggests that stricter identity validation could help significantly. "If it's harder to create new accounts and easier to monitor spammers, it becomes much more difficult for agents to use large numbers of accounts for coordinated manipulation," he said [4]. Stronger identity checks and limits on account creation could make coordinated behavior easier to detect, even when individual posts appear human [4].

The paper notes that existing platform safeguards may struggle to detect and contain these swarms [4]. "In the hands of a government, such tools could suppress dissent or amplify incumbents," the researchers wrote, adding that "the deployment of defensive AI can only be considered if governed by strict, transparent, and democratically accountable frameworks" [4].