Curated by THEOUTPOST
On Wed, 9 Oct, 12:03 AM UTC
3 Sources
[1]
How foreign operations are manipulating social media to influence your views
Indiana University provides funding as a member of The Conversation US.

Foreign influence campaigns, or information operations, have been widespread in the run-up to the 2024 U.S. presidential election. Influence campaigns are large-scale efforts to shift public opinion, push false narratives or change behaviors among a target population. Russia, China, Iran, Israel and other nations have run these campaigns by exploiting social bots, influencers, media companies and generative AI.

At the Indiana University Observatory on Social Media, my colleagues and I study influence campaigns and design technical solutions - algorithms - to detect and counter them. State-of-the-art methods developed in our center use several indicators of this type of online activity, which researchers call inauthentic coordinated behavior. We identify clusters of social media accounts that post in a synchronized fashion, amplify the same groups of users, share identical sets of links, images or hashtags, or perform suspiciously similar sequences of actions.

We have uncovered many examples of coordinated inauthentic behavior. For example, we found accounts that flood the network with tens or hundreds of thousands of posts in a single day. The same campaign can post a message with one account and then have other accounts that its organizers also control "like" and "unlike" it hundreds of times in a short time span. Once the campaign achieves its objective, all these messages can be deleted to evade detection.

Using these tricks, foreign governments and their agents can manipulate social media algorithms that determine what is trending and what is engaging to decide what users see in their feeds.

Generative AI

One technique increasingly being used is creating and managing armies of fake accounts with generative artificial intelligence. We analyzed 1,420 fake Twitter - now X - accounts that used AI-generated faces for their profile pictures. These accounts were used to spread scams, disseminate spam and amplify coordinated messages, among other activities. We estimate that at least 10,000 accounts like these were active daily on the platform, and that was before X CEO Elon Musk dramatically cut the platform's trust and safety teams.

We also identified a network of 1,140 bots that used ChatGPT to generate humanlike content to promote fake news websites and cryptocurrency scams. In addition to posting machine-generated content, harmful comments and stolen images, these bots engaged with each other and with humans through replies and retweets. Current state-of-the-art large language model content detectors are unable to distinguish between AI-enabled social bots and human accounts in the wild.

Model misbehavior

The consequences of such operations are difficult to evaluate due to the challenges posed by collecting data and carrying out ethical experiments that would influence online communities. Therefore it is unclear, for example, whether online influence campaigns can sway election outcomes. Yet, it is vital to understand society's vulnerability to different manipulation tactics.

In a recent paper, we introduced a social media model called SimSoM that simulates how information spreads through the social network. The model has the key ingredients of platforms such as Instagram, X, Threads, Bluesky and Mastodon: an empirical follower network, a feed algorithm, sharing and resharing mechanisms, and metrics for content quality, appeal and engagement.
SimSoM allows researchers to explore scenarios in which the network is manipulated by malicious agents who control inauthentic accounts. These bad actors aim to spread low-quality information, such as disinformation, conspiracy theories, malware or other harmful messages. We can estimate the effects of adversarial manipulation tactics by measuring the quality of information that targeted users are exposed to in the network.

We simulated scenarios to evaluate the effect of three manipulation tactics. First, infiltration: having fake accounts create believable interactions with human users in a target community, getting those users to follow them. Second, deception: having the fake accounts post engaging content, likely to be reshared by the target users. Bots can do this by, for example, leveraging emotional responses and political alignment. Third, flooding: posting high volumes of content.

Our model shows that infiltration is the most effective tactic, reducing the average quality of content in the system by more than 50%. Such harm can be further compounded by flooding the network with low-quality yet appealing content, thus reducing quality by 70%.

Curbing coordinated manipulation

We have observed all these tactics in the wild. Of particular concern is that generative AI models can make it much easier and cheaper for malicious agents to create and manage believable accounts. Further, they can use generative AI to interact nonstop with humans and create and post harmful but engaging content on a wide scale. All these capabilities are being used to infiltrate social media users' networks and flood their feeds with deceptive posts.

These insights suggest that social media platforms should engage in more - not less - content moderation to identify and hinder manipulation campaigns and thereby increase their users' resilience to the campaigns. The platforms can do this by making it more difficult for malicious agents to create fake accounts and to post automatically. They can also challenge accounts that post at very high rates to prove that they are human. They can add friction in combination with educational efforts, such as nudging users to reshare accurate information. And they can educate users about their vulnerability to deceptive AI-generated content.

Open-source AI models and data make it possible for malicious agents to build their own generative AI tools. Regulation should therefore target AI content dissemination via social media platforms rather than AI content generation. For instance, before a large number of people can be exposed to some content, a platform could require its creator to prove its accuracy or provenance.

These types of content moderation would protect, rather than censor, free speech in the modern public squares. The right of free speech is not a right of exposure, and since people's attention is limited, influence operations can be, in effect, a form of censorship by making authentic voices and opinions less visible.
[2]
How foreign operations are manipulating social media to influence people's views
[3]
Foreign operations manipulate social media to influence your views
Filippo Menczer is a professor of informatics and computer science at Indiana University. This article is republished from The Conversation under a Creative Commons license. Read the original article. The views and opinions expressed in this commentary are solely those of the author.
Research reveals how foreign actors are using advanced AI techniques to create fake accounts and spread disinformation on social media platforms, potentially influencing public opinion and election outcomes.
In the lead-up to the 2024 U.S. presidential election, foreign influence campaigns have become increasingly prevalent, utilizing advanced technologies to sway public opinion and spread disinformation. Researchers at the Indiana University Observatory on Social Media have been studying these operations and developing algorithms to detect and counter them [1][2][3].
The researchers have identified several indicators of what they term "inauthentic coordinated behavior." These include:
- Clusters of accounts that post in a synchronized fashion
- Accounts that amplify the same groups of users
- Accounts that share identical sets of links, images or hashtags
- Accounts that perform suspiciously similar sequences of actions
One striking example involves accounts flooding networks with tens or hundreds of thousands of posts in a single day. These campaigns can manipulate engagement metrics by having controlled accounts rapidly like and unlike posts, then delete the evidence to avoid detection [1][2][3].
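To make one of these coordination signals concrete, here is a minimal illustrative sketch, not the Observatory's actual detection code, of how accounts that share near-identical sets of links or hashtags might be flagged. The input format, the Jaccard similarity measure and the 0.8 threshold are all assumptions chosen for illustration.

```python
from itertools import combinations

# Toy data: account -> set of hashtags/links it shared (assumed input format).
shared_content = {
    "acct_a": {"#voteX", "site.example/1", "#rally"},
    "acct_b": {"#voteX", "site.example/1", "#rally"},
    "acct_c": {"#weather", "news.example/7"},
}

def jaccard(a, b):
    """Similarity of two sets: size of intersection over size of union."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Flag account pairs whose shared content is suspiciously similar.
SIMILARITY_THRESHOLD = 0.8  # assumed value; a real system would tune this
suspicious_pairs = [
    (u, v)
    for u, v in combinations(shared_content, 2)
    if jaccard(shared_content[u], shared_content[v]) >= SIMILARITY_THRESHOLD
]

print(suspicious_pairs)  # [('acct_a', 'acct_b')]
```

A production pipeline would combine many such signals, including posting times, amplification targets and action sequences, and would cluster whole groups of accounts rather than just comparing pairs.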
Generative AI has emerged as a powerful tool for creating and managing fake accounts:
- The researchers analyzed 1,420 fake X (formerly Twitter) accounts that used AI-generated faces as profile pictures to spread scams, disseminate spam and amplify coordinated messages
- They estimate that at least 10,000 such accounts were active on the platform daily, even before X cut its trust and safety teams
- A separate network of 1,140 bots used ChatGPT to generate humanlike content promoting fake news websites and cryptocurrency scams
Alarmingly, current large language model content detectors struggle to distinguish between AI-enabled social bots and genuine human accounts [1][2][3].
To better understand the impact of these operations, researchers developed SimSoM, a social media model that simulates information spread through networks. The model incorporates key elements of popular platforms like Instagram, X, Threads, and Mastodon [1][2][3].
SimSoM allowed researchers to evaluate three primary manipulation tactics:
- Infiltration: fake accounts build believable interactions with human users in a target community so that those users follow them
- Deception: fake accounts post engaging content that target users are likely to reshare, for example by exploiting emotional responses and political alignment
- Flooding: fake accounts post very high volumes of content
The model revealed that infiltration was the most effective tactic, reducing average content quality by over 50%. When combined with flooding, content quality dropped by 70% [1][2][3].
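To give a rough sense of the kind of measurement such a simulation enables, the toy sketch below, which is not the published SimSoM model, compares the average quality of a user's feed with and without infiltrating bot accounts that post zero-quality content. Every parameter value here is an assumption made for illustration.

```python
import random

def average_feed_quality(num_followees=50, bot_fraction=0.0,
                         posts_per_account=10, feed_size=30, trials=500):
    """Toy estimate of mean content quality in a user's feed.

    Humans post content with quality drawn uniformly from [0, 1];
    infiltrating bots post quality-0 (but 'appealing') content.
    All parameter values are illustrative assumptions.
    """
    totals = 0.0
    for _ in range(trials):
        num_bots = int(num_followees * bot_fraction)
        qualities = []
        for account in range(num_followees):
            is_bot = account < num_bots
            for _ in range(posts_per_account):
                qualities.append(0.0 if is_bot else random.random())
        feed = random.sample(qualities, feed_size)  # naive feed: random subset
        totals += sum(feed) / feed_size
    return totals / trials

baseline = average_feed_quality(bot_fraction=0.0)
infiltrated = average_feed_quality(bot_fraction=0.5)  # assumed bot share
print(f"baseline quality:  {baseline:.2f}")
print(f"with infiltration: {infiltrated:.2f}")  # roughly halved
```

The published model additionally uses an empirical follower network, an engagement-driven feed and resharing dynamics, so its reported 50% and 70% reductions are not directly comparable to this naive calculation.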
The research suggests that social media platforms should increase content moderation efforts to combat these threats. Recommended measures include:
- Making it more difficult for malicious agents to create fake accounts and to post automatically
- Challenging accounts that post at very high rates to prove that they are human (see the sketch below)
- Adding friction in combination with educational efforts, such as nudging users to reshare accurate information
- Educating users about their vulnerability to deceptive AI-generated content
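As a hedged illustration of the second measure, the sketch below flags accounts whose posting rate exceeds a plausible human pace so they can be routed to a verification challenge. The 500-posts-per-day threshold and the class interface are hypothetical, not any platform's real policy.

```python
from collections import deque
from time import time

# Hypothetical threshold (an assumption, not a real platform rule): more than
# 500 posts in a sliding 24-hour window is treated as implausible for a human.
MAX_POSTS_PER_DAY = 500
WINDOW_SECONDS = 24 * 60 * 60

class PostRateMonitor:
    """Tracks recent post timestamps per account and flags high-rate posters."""

    def __init__(self):
        self.history = {}  # account_id -> deque of post timestamps

    def record_post(self, account_id, now=None):
        """Record a post; return True if the account should be challenged."""
        now = time() if now is None else now
        timestamps = self.history.setdefault(account_id, deque())
        timestamps.append(now)
        # Drop timestamps that have fallen outside the sliding window.
        while timestamps and now - timestamps[0] > WINDOW_SECONDS:
            timestamps.popleft()
        return len(timestamps) > MAX_POSTS_PER_DAY

monitor = PostRateMonitor()
for second in range(600):  # 600 posts in 10 minutes
    needs_challenge = monitor.record_post("acct_x", now=float(second))
print(needs_challenge)  # True: route the account to a CAPTCHA or similar check
```

In practice a platform would tune the window and threshold and combine posting rate with other bot signals before challenging an account.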
The researchers argue that such moderation would protect free speech in modern public squares rather than censor it [1][2][3].
As open-source AI models become more accessible, the focus of regulation should shift to AI content dissemination on social platforms rather than content generation itself. This could involve requiring content creators to prove accuracy or provenance before reaching large audiences [2].