4 Sources
[1]
AI-Powered Disinformation Swarms Are Coming for Democracy
In 2016, hundreds of Russians filed into a modern office building on 55 Savushkina Street in St. Petersburg every day; they were part of the now-infamous troll farm known as the Internet Research Agency. Day and night, seven days a week, these employees would manually comment on news articles, post on Facebook and Twitter, and generally seek to rile up Americans about the then-upcoming presidential election. When the scheme was finally uncovered, there was widespread media coverage and Senate hearings, and social media platforms made changes in the way they verified users. But in reality, for all the money and resources poured into the IRA, the impact was minimal -- certainly compared to that of another Russia-linked campaign that saw Hillary Clinton's emails leaked just before the election.

A decade on, while the IRA is no more, disinformation campaigns have continued to evolve, including the use of AI technology to create fake websites and deepfake videos. A new paper, published in Science on Thursday, predicts an imminent step-change in how disinformation campaigns will be conducted. Instead of hundreds of employees sitting at desks in St. Petersburg, the paper posits, one person with access to the latest AI tools will be able to command "swarms" of thousands of social media accounts, capable not only of crafting unique posts indistinguishable from human content, but of evolving independently and in real time -- all without constant human oversight. These AI swarms, the researchers believe, could deliver society-wide shifts in viewpoint that not only sway elections but ultimately bring about the end of democracy -- unless steps are taken now to prevent it.

"Advances in artificial intelligence offer the prospect of manipulating beliefs and behaviors on a population-wide level," the report says. "By adaptively mimicking human social dynamics, they threaten democracy." The paper was authored by 22 experts from across the globe, drawn from fields including computer science, artificial intelligence, and cybersecurity, as well as psychology, computational social science, journalism, and government policy.

The pessimistic outlook on how AI technology will change the information environment is shared by other experts in the field who have reviewed the paper. "To target chosen individuals or communities is going to be much easier and powerful," says Lukasz Olejnik, a visiting senior research fellow at King's College London's Department of War Studies and the author of Propaganda: From Disinformation and Influence to Operations and Information Warfare. "This is an extremely challenging environment for a democratic society. We're in big trouble."

Even those who are optimistic about AI's potential to help humans believe the paper highlights a threat that needs to be taken seriously. "AI-enabled influence campaigns are certainly within the current state of advancement of the technology, and as the paper sets out, this also poses significant complexity for governance measures and defense response," says Barry O'Sullivan, a professor at the School of Computer Science and IT at University College Cork.

In recent months, as AI companies seek to prove they are worth the hundreds of billions of dollars that have been poured into them, many have pointed to the most recent crop of AI agents as evidence that the technology will finally live up to the hype. But the very same technology could soon be deployed, the authors argue, to disseminate disinformation and propaganda at a scale never before seen.
The swarms the authors describe would consist of AI-controlled agents capable of maintaining persistent identities and, crucially, memory, allowing them to simulate believable online identities. The agents would coordinate to achieve shared objectives while crafting individual personas and output to avoid detection. These systems would also be able to adapt in real time, responding both to signals from the social media platforms and to conversations with real humans.
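The paper itself offers no implementation, but the capabilities described above can be made concrete with a small sketch. The following Python fragment is a hypothetical model of how a researcher might represent such an agent in a closed laboratory simulation of the kind described later in this digest; only the concepts (persistent identity, memory, coordination, real-time adaptation) come from the paper, while every class, field, and method name is invented for illustration.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: how a researcher might model the paper's swarm
# agents in a closed simulation. Concepts (persistent identity, memory,
# coordination, adaptation) are from the paper; all names are invented.

@dataclass
class PersonaAgent:
    handle: str                                 # persistent identity
    persona: dict                               # stable traits: interests, register, slang
    memory: list = field(default_factory=list)  # history of interactions and feedback

    def observe(self, signal: dict) -> None:
        """Store a platform signal or a reply from a (simulated) human."""
        self.memory.append(signal)

    def adapt(self) -> None:
        """Adjust behavior from recent feedback -- the 'real time' loop."""
        recent = [s.get("engagement", 0) for s in self.memory[-50:]]
        self.persona["assertiveness"] = sum(recent) / max(len(recent), 1)

@dataclass
class Swarm:
    agents: list       # many agents, each with a distinct persona
    objective: str     # the shared goal they coordinate on

    def share(self, finding: dict) -> None:
        """Broadcast one agent's finding to every member (coordination)."""
        for agent in self.agents:
            agent.observe(finding)
```

The point of the sketch is the separation the paper emphasizes: identity and memory persist per agent, while coordination happens at the swarm level, which is what makes the collective harder to detect than a farm of identical accounts.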
[2]
Experts warn of threat to democracy by 'AI bot swarms' infesting social media
Misinformation technology could be deployed at scale to disrupt the 2028 US presidential election, AI researchers warn

Political leaders could soon launch swarms of human-imitating AI agents to reshape public opinion in a way that threatens to undermine democracy, a high-profile group of experts in AI and online misinformation has warned.

The Nobel peace prize-winning free-speech activist Maria Ressa and leading AI and social science researchers from Berkeley, Harvard, Oxford, Cambridge and Yale are among a global consortium flagging the new "disruptive threat" posed by hard-to-detect, malicious "AI swarms" infesting social media and messaging channels. A would-be autocrat could use such swarms to persuade populations to accept cancelled elections or overturned results, they said, amid predictions the technology could be deployed at scale by the time of the US presidential election in 2028.

The warnings, published today in Science, come alongside calls for coordinated global action, including "swarm scanners" and watermarked content, to counter AI-run misinformation campaigns. Early versions of AI-powered influence operations have been used in the 2024 elections in Taiwan, India and Indonesia.

"A disruptive threat is emerging: swarms of collaborative, malicious AI agents," the authors said. "These systems are capable of coordinating autonomously, infiltrating communities and fabricating consensus efficiently. By adaptively mimicking human social dynamics, they threaten democracy."

One leading expert in propaganda technology, Inga Trauthig, said the adoption of such advanced technology is likely to be slowed by politicians' reluctance to cede campaign control to AIs. Another cause for skepticism is the concern that using such illicit techniques would not be worth the risk, given voters are still more influenced by offline material.

The experts behind the warning include New York University's Gary Marcus, a prominent sceptic about the claimed potential of current AI models who calls himself a "generative AI realist", and Audrey Tang, Taiwan's first digital minister, who has warned: "Those in the pay of authoritarian forces are undermining electoral processes, weaponizing AI and employing our societal strengths against us." Others include David Garcia, professor for social and behavioural data science at the University of Konstanz, Sander van der Linden, a misinformation expert and director of Cambridge University's social decision-making lab, and Christopher Summerfield, AI researcher and professor of cognitive neuroscience at Oxford University.

Together they say political leaders could deploy almost limitless numbers of AIs to masquerade as humans online and precisely infiltrate communities, learn their foibles over time and use increasingly convincing and carefully tailored falsehoods to change population-wide opinions. The threat is being supercharged by advances in AIs' ability to pick up on the tone and content of discourse. They are increasingly able to mimic human dynamics, for example by using appropriate slang and posting irregularly to avoid detection. Progress in the development of "agentic" AI also brings the ability to autonomously plan and coordinate action.

As well as operating across social media, swarms may use messaging channels and even write blogs or use email, depending on which channel the AI thinks best helps achieve an aim, said one of the authors, Daniel Thilo Schroeder, a research scientist at the SINTEF research institute in Oslo.
"It's just frightening how easy these things are to vibe code and just have small bot armies that can actually navigate online social media platforms and email and use these tools," said Schroeder, who has been simulating swarms in laboratory conditions. "If these bots start to evolve into a collective and exchange information to solve a problem - in this case a malicious goal, namely analysing a community and finding a weak spot - then coordination will increase their accuracy and efficiency," said another of the authors, Jonas Kunst, professor of communication at the BI Norwegian Business School. "That is a really serious threat that we predict is going to materialise." In Taiwan, where voters are regularly targeted by Chinese propaganda, often unknowingly, AI bots have been increasing engagement with citizens on Threads and Facebook in the last two to three months, said Puma Shen, a Taiwanese Democratic Progressive Party MP and campaigner against Chinese disinformation. During discussions on political topics the AIs tend to provide "tonnes of information that you cannot verify", creating "information overload", Shen said. He said AIs might cite fake articles about how America will abandon Taiwan. Another recent trend is for the AI bots to stress to younger Taiwanese people that the China-Taiwan dispute is very complicated "so do not take sides if you have no knowledge". "It's not telling you that China's great, but it's [encouraging them] to be neutral," Shen told the Guardian. "This is very dangerous, because then you think people like me are radical." Amid signs the progress of AI technology is not as rapid as Silicon Valley companies like OpenAI and Anthropic have claimed, the Guardian asked independent AI experts to assess the swarm warnings. "In the election-heavy year of 2024 the capabilities were there for AI-driven microtargeting but we didn't see as much of that as scholars predicted," said Trauthig, an adviser to the International Panel on the Information Environment. "Most political propagandists I interview are still using older technologies and are not at this cutting edge." "It isn't fanciful," said Michael Wooldridge, professor of the foundations of AI at Oxford University. "I think it is entirely plausible that bad actors will try to mobilise virtual armies of LLM-powered agents to disrupt elections and manipulate public opinion, for example targeting large numbers of individuals on social media and other electronic media. It's technologically perfectly feasible ... the technology has got progressively better and much more accessible."
[3]
AI 'Swarms' Could Escalate Online Misinformation and Manipulation, Researchers Warn
The paper notes that existing platform safeguards may struggle to detect and contain these swarms. The era of easily detectable botnets is coming to an end, according to a new report published in Science on Thursday. In the study, researchers warned that misinformation campaigns are shifting toward autonomous AI swarms that can imitate human behavior, adapt in real time, and require little human oversight, complicating efforts to detect and stop them.

Written by a consortium of researchers, including those from Oxford, Cambridge, UC Berkeley, NYU, and the Max Planck Institute, the paper describes a digital environment in which manipulation becomes harder to identify. Instead of short bursts tied to elections or politics, these AI campaigns can sustain a narrative over longer periods of time. "In the hands of a government, such tools could suppress dissent or amplify incumbents," the researchers wrote. "Therefore, the deployment of defensive AI can only be considered if governed by strict, transparent, and democratically accountable frameworks."

A swarm is a group of autonomous AI agents that work together to solve problems or complete objectives more efficiently than a single system. The researchers said AI swarms build on existing weaknesses in social media platforms, where users are often insulated from opposing viewpoints. "False news has been shown to spread faster and more broadly than true news, deepening fragmented realities and eroding shared factual baselines," they wrote. "Recent evidence links engagement-optimized curation to polarization, with platform algorithms amplifying divisive content even at the expense of user satisfaction, further degrading the public sphere."

That shift is already visible on major platforms, according to Sean Ren, a computer science professor at the University of Southern California and the CEO of Sahara AI, who said that AI-driven accounts are increasingly difficult to distinguish from ordinary users. "I think stricter KYC, or account identity validation, would help a lot here," Ren told Decrypt. "If it's harder to create new accounts and easier to monitor spammers, it becomes much more difficult for agents to use large numbers of accounts for coordinated manipulation."

Earlier influence campaigns depended largely on scale rather than subtlety, with thousands of accounts posting identical messages simultaneously, which made detection comparatively straightforward. In contrast, the study said, AI swarms exhibit "unprecedented autonomy, coordination, and scale." Ren said content moderation alone is unlikely to stop these systems. The problem, he said, is how platforms manage identity at scale. Stronger identity checks and limits on account creation, he said, could make coordinated behavior easier to detect, even when individual posts appear human. "If the agent can only use a small number of accounts to post content, then it's much easier to detect suspicious usage and ban those accounts," he said.

No simple fix

The researchers concluded that there is no single solution to the problem, with potential options including improved detection of statistically anomalous coordination and greater transparency around automated activity, but they say technical measures alone are unlikely to be sufficient. According to Ren, financial incentives also remain a persistent driver of coordinated manipulation attacks, even as platforms introduce new technical safeguards.
"These agent swarms are usually controlled by teams or vendors who are getting monetary incentives from external parties or companies to do the coordinated manipulation," he said. "Platforms should enforce stronger KYC and spam detection mechanisms to identify and filter out agent manipulated accounts."
[4]
Experts Are Warning That 'AI Swarms' Could Spark Disruptions to Democracy
Forget troll farms. "AI swarms" could be the latest threat to democracy. AI researchers are already warning that upcoming election seasons could be rife with AI-powered misinformation campaigns capable of threatening democracy. "A disruptive threat is emerging: swarms of collaborative, malicious AI agents," according to a paper published in the journal Science. "By adaptively mimicking human social dynamics, they threaten democracy."

Authors of the paper include Nobel peace prize-winner Maria Ressa, Taiwan's first Minister of Digital Affairs Audrey Tang, and various experts in AI, misinformation, and related fields. They warn that these AI swarms are a natural evolution of influence campaigns like the Russian Internet Research Agency's 2016 Twitter operation. The paper notes that only about 1 percent of Twitter users accounted for 70 percent of exposure to the IRA's Twitter content, a concentration it attributes to the operation's technical limitations and, in part, to the fact that humans were operating the system.
A consortium of 22 global experts, including Nobel laureate Maria Ressa, warns that AI swarms—autonomous networks of AI agents—could manipulate public opinion at unprecedented scale. Published in Science, the research predicts these systems could be deployed with minimal human oversight by the 2028 US presidential election, potentially enabling would-be autocrats to persuade populations to accept cancelled elections or overturned results.
The era of manually operated disinformation campaigns is giving way to something far more sophisticated. A new paper published in Science warns that AI swarms—networks of autonomous AI agents capable of mimicking human social dynamics—pose an imminent threat to democracy [1][2]. Unlike the 2016 Russian Internet Research Agency operation, which required hundreds of employees working from 55 Savushkina Street in St. Petersburg, a single person with access to AI tools could now command thousands of AI-controlled social media accounts with minimal human oversight [1].

Authored by 22 experts from fields spanning computer science, artificial intelligence, psychology, and government policy, the research includes Nobel peace prize-winner Maria Ressa and Taiwan's first Minister of Digital Affairs Audrey Tang [2][4]. The consortium warns that these AI-powered disinformation campaigns could be deployed at scale by the 2028 US presidential election, potentially enabling autocrats to persuade populations to accept cancelled elections or overturned results [2].
These systems exhibit capabilities that distinguish them from earlier botnets. AI swarms consist of AI agents that maintain persistent identities and memory, allowing for believable online personas that evolve independently and in real time [1]. They coordinate to achieve shared objectives while creating individual output to avoid detection, adapting their strategies based on signals from social media platforms and conversations with real humans [1].
Daniel Thilo Schroeder, a research scientist at the SINTEF research institute in Oslo who has been simulating swarms in laboratory conditions, describes the ease with which these systems can be created: "It's just frightening how easy these things are to vibe code and just have small bot armies that can actually navigate online social media platforms and email and use these tools" [2]. Jonas Kunst, professor of communication at the BI Norwegian Business School, explains that when these bots exchange information to solve problems—such as analyzing a community and finding weak spots—their coordination increases accuracy and efficiency [2].

Early versions of AI-powered influence operations have already appeared in the 2024 elections in Taiwan, India, and Indonesia [2]. In Taiwan, where voters face regular targeting by Chinese propaganda, AI bots have increased engagement with citizens on Threads and Facebook over the past two to three months, according to Puma Shen, a Taiwanese Democratic Progressive Party MP [2]. These AI agents provide "tonnes of information that you cannot verify," creating information overload and telling younger Taiwanese people that the China-Taiwan dispute is too complicated to take sides [2].

The researchers note that existing platform safeguards struggle to detect and contain coordinated manipulation because AI swarms build on existing weaknesses in social media platforms [3]. "False news has been shown to spread faster and more broadly than true news, deepening fragmented realities and eroding shared factual baselines," they wrote, adding that platform algorithms amplify divisive content even at the expense of user satisfaction [3].
The researchers warn that political leaders could deploy almost limitless numbers of AI agents to masquerade as humans online, precisely infiltrate communities, learn their characteristics over time, and use increasingly convincing, tailored falsehoods to manipulate public opinion at a population-wide level [2]. "In the hands of a government, such tools could suppress dissent or amplify incumbents," the researchers wrote [3].

Lukasz Olejnik, a visiting senior research fellow at King's College London's Department of War Studies and author of a book on propaganda, emphasizes the severity: "To target chosen individuals or communities is going to be much easier and powerful. This is an extremely challenging environment for a democratic society. We're in big trouble" [1].

Unlike earlier campaigns, where only 1 percent of Twitter users accounted for 70 percent of exposure to the Internet Research Agency's content due to technical limitations, AI swarms can sustain narratives over longer periods and operate across multiple channels, including social media, messaging apps, blogs, and email [3][4]. Sean Ren, a computer science professor at the University of Southern California and CEO of Sahara AI, notes that AI-driven accounts are increasingly difficult to distinguish from ordinary users [3].

The researchers call for coordinated global action, including swarm scanners and watermarked content, to counter these campaigns [2]. Ren advocates stricter identity validation (KYC) and account monitoring: "If it's harder to create new accounts and easier to monitor spammers, it becomes much more difficult for agents to use large numbers of accounts for coordinated manipulation" [3]. However, content moderation alone is unlikely to stop these systems, as the information environment continues to degrade and financial incentives drive coordinated manipulation attacks [3].
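The watermarking the consortium calls for is likewise left unspecified here. As a toy illustration of the general idea behind the statistical text watermarks studied in the research literature, the sketch below scores how often a text's token transitions fall in a pseudorandom "green" set; a cooperating generator biases itself toward green transitions, so a score well above the baseline suggests watermarked machine output. The function name, hashing rule, and baseline are simplified assumptions.

```python
import hashlib

def green_fraction(tokens: list[str], green_ratio: float = 0.5) -> float:
    """Toy watermark detector: each token is deterministically 'green'
    or 'red' given its predecessor (via a hash). A cooperating generator
    overuses green transitions, so a fraction well above green_ratio is
    evidence of watermarked output. Simplified illustration only."""
    if len(tokens) < 2:
        return 0.0
    green = 0
    for prev, tok in zip(tokens, tokens[1:]):
        digest = hashlib.sha256(f"{prev}|{tok}".encode()).digest()
        if digest[0] / 256 < green_ratio:   # ~green_ratio of pairs hash green
            green += 1
    return green / (len(tokens) - 1)
```

Ordinary human text should land near the 0.5 baseline, and the check only catches generators that actually embed a watermark, which is why the researchers treat watermarking as one layer among several rather than a standalone defense.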