Foreign Influence Operations Exploit AI to Manipulate Social Media and Public Opinion

3 Sources

Research reveals how foreign actors are using advanced AI techniques to create fake accounts and spread disinformation on social media platforms, potentially influencing public opinion and election outcomes.

Foreign Influence Campaigns Exploit AI for Social Media Manipulation

In the lead-up to the 2024 U.S. presidential election, foreign influence campaigns have become increasingly prevalent, using advanced technologies to sway public opinion and spread disinformation. Researchers at the Indiana University Observatory on Social Media have been studying these operations and developing algorithms to detect and counter them [1][2][3].

Coordinated Inauthentic Behavior

The researchers have identified several indicators of what they term "coordinated inauthentic behavior." These include:

  1. Synchronized posting across multiple accounts
  2. Amplification of specific user groups
  3. Sharing identical links, images, or hashtags
  4. Performing suspiciously similar action sequences

One striking example involves accounts flooding networks with tens or hundreds of thousands of posts in a single day. These campaigns can manipulate engagement metrics by having controlled accounts rapidly like and unlike posts, then delete the evidence to avoid detection [1][2][3].
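
Signals like these lend themselves to simple pairwise analysis. As a minimal, hypothetical sketch (toy data and thresholds, not the Observatory's detection code), the snippet below flags pairs of accounts that repeatedly share the same link within seconds of each other:

```python
from collections import defaultdict
from itertools import combinations

# Toy post stream: (account, shared_link, unix_timestamp).
posts = [
    ("acct_a", "example.com/story1", 1000),
    ("acct_b", "example.com/story1", 1004),
    ("acct_c", "example.com/story1", 1009),
    ("acct_a", "example.com/story2", 2000),
    ("acct_b", "example.com/story2", 2003),
    ("acct_d", "example.com/other", 5000),
]

WINDOW = 30        # seconds: shares this close together count as "synchronized"
MIN_COSHARES = 2   # flag pairs that co-share at least this many distinct links

def coordinated_pairs(posts, window=WINDOW, min_coshares=MIN_COSHARES):
    """Group shares by link, then count, for every pair of accounts, how
    many distinct links they shared within `window` seconds of each other,
    one of the coordination signals described above."""
    by_link = defaultdict(list)
    for account, link, ts in posts:
        by_link[link].append((account, ts))

    pair_links = defaultdict(set)
    for link, shares in by_link.items():
        for (a1, t1), (a2, t2) in combinations(shares, 2):
            if a1 != a2 and abs(t1 - t2) <= window:
                pair_links[tuple(sorted((a1, a2)))].add(link)

    return {pair: links for pair, links in pair_links.items()
            if len(links) >= min_coshares}

for pair, links in coordinated_pairs(posts).items():
    print(pair, "co-shared", sorted(links))
# -> ('acct_a', 'acct_b') co-shared two links within seconds: suspicious
```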

Generative AI: A New Frontier in Disinformation

Generative AI has emerged as a powerful tool for creating and managing fake accounts:

  • Researchers analyzed 1,420 fake Twitter (now X) accounts using AI-generated profile pictures
  • An estimated 10,000 such accounts were active daily before recent staff cuts at X
  • A network of 1,140 bots used ChatGPT to generate human-like content for promoting fake news and cryptocurrency scams

Alarmingly, current large language model content detectors struggle to distinguish between AI-enabled social bots and genuine human accounts [1][2][3].
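
Part of the reason is that the obvious signals are brittle. Botnets like the one above have reportedly been caught mainly when they slipped and posted LLM refusal boilerplate; a crude filter for that is easy to write, and just as easy to evade (a hypothetical sketch, not a production detector):

```python
import re

# Refusal boilerplate that LLM-driven bots have been observed leaking
# into posts. A real detector would need far more than this list:
# this filter only catches careless bots and misses everything else.
TELLTALE_PATTERNS = [
    r"\bas an ai language model\b",
    r"\bi cannot fulfill (?:this|that) request\b",
    r"\bi(?:'m| am) sorry, but i cannot\b",
]
TELLTALE_RE = re.compile("|".join(TELLTALE_PATTERNS), re.IGNORECASE)

def looks_like_leaked_llm_output(text: str) -> bool:
    """Flag posts containing self-revealing LLM boilerplate."""
    return bool(TELLTALE_RE.search(text))

posts = [
    "As an AI language model, I cannot express opinions on elections.",
    "Big news out of the markets today!",  # indistinguishable from a human post
]
print([looks_like_leaked_llm_output(p) for p in posts])  # [True, False]
```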

Simulating Social Media Manipulation

To better understand the impact of these operations, researchers developed SimSoM, a social media model that simulates information spread through networks. The model incorporates key elements of popular platforms like Instagram, X, Threads, and Mastodon [1][2][3].
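
A minimal sketch of this kind of model appears below, assuming a random follower graph, bounded feeds, and a 50/50 mix of new posts and reshares; it illustrates the approach and is not the researchers' SimSoM implementation. The `n_bad`, `flood_rate`, and `infiltration` parameters are invented knobs for the bad-actor tactics discussed next:

```python
import random

def simulate(n_agents=200, n_followees=10, feed_size=15, steps=20_000,
             n_bad=0, flood_rate=1, infiltration=0.0, seed=1):
    """Toy information-spread model, loosely inspired by the SimSoM setup
    described above (not the authors' implementation). Each step, one
    agent either posts a new message or reshares one from its feed; bad
    actors post quality-0 messages `flood_rate` at a time, and
    `infiltration` is the chance a normal agent also follows a bad actor.
    Returns the average quality of messages sitting in all feeds."""
    rng = random.Random(seed)
    bad = set(range(n_bad))                    # agents 0..n_bad-1 are bad actors
    feeds = [[] for _ in range(n_agents)]      # each feed: list of (quality, id)

    # Random follow graph: followers[f] lists everyone who follows agent f.
    followers = [[] for _ in range(n_agents)]
    for a in range(n_agents):
        for f in rng.sample(range(n_agents), n_followees):
            followers[f].append(a)
        if bad and a not in bad and rng.random() < infiltration:
            followers[rng.choice(sorted(bad))].append(a)    # infiltration link

    msg_id = 0
    for _ in range(steps):
        a = rng.randrange(n_agents)
        for _ in range(flood_rate if a in bad else 1):
            if a not in bad and feeds[a] and rng.random() < 0.5:
                msg = rng.choice(feeds[a])                   # reshare from feed
            else:
                quality = 0.0 if a in bad else rng.random()  # new message
                msg, msg_id = (quality, msg_id), msg_id + 1
            for follower in followers[a]:                    # bounded feeds
                feeds[follower] = ([msg] + feeds[follower])[:feed_size]

    qualities = [q for feed in feeds for q, _ in feed]
    return sum(qualities) / len(qualities) if qualities else 0.0
```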

SimSoM allowed researchers to evaluate three primary manipulation tactics:

  1. Infiltration: Creating believable interactions to gain followers
  2. Deception: Posting engaging, shareable content
  3. Flooding: Overwhelming the network with high volumes of posts

The model revealed that infiltration was the most effective tactic, reducing average content quality by over 50%. When combined with flooding, content quality dropped by 70% [1][2][3].
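
Mapped onto the toy model above, these tactics are just parameter settings. A comparison run might look like the following; its outputs illustrate the direction of the effect in the toy model only and should not be read as reproducing the reported figures:

```python
# Using the simulate() sketch above: a clean network versus each tactic.
print("baseline:              ", round(simulate(), 3))
print("infiltration:          ", round(simulate(n_bad=10, infiltration=0.5), 3))
print("flooding:              ", round(simulate(n_bad=10, flood_rate=10), 3))
print("infiltration+flooding: ", round(simulate(n_bad=10, infiltration=0.5,
                                                flood_rate=10), 3))
```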

Countering Coordinated Manipulation

The research suggests that social media platforms should increase content moderation efforts to combat these threats. Recommended measures include:

  • Making fake account creation more difficult
  • Challenging high-volume posters to prove they are human (see the sketch after this list)
  • Adding friction to content sharing
  • Educating users about their vulnerability to AI-generated deception
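
As a sketch of the second measure, a platform could track per-account posting rates in a sliding window and require human verification once a threshold is exceeded (the thresholds and names here are hypothetical):

```python
from collections import defaultdict, deque

DAY = 86_400               # seconds in a sliding window
MAX_POSTS_PER_DAY = 500    # hypothetical platform threshold

class RateChallengeGate:
    """Let posts through until an account exceeds the rate threshold;
    after that, posting is blocked pending a human-verification step."""
    def __init__(self, max_posts=MAX_POSTS_PER_DAY, window=DAY):
        self.max_posts = max_posts
        self.window = window
        self.history = defaultdict(deque)   # account -> recent post timestamps

    def allow_post(self, account: str, now: float) -> bool:
        times = self.history[account]
        while times and now - times[0] > self.window:
            times.popleft()                 # drop timestamps outside the window
        if len(times) >= self.max_posts:
            return False                    # trigger CAPTCHA / verification
        times.append(now)
        return True

gate = RateChallengeGate(max_posts=3, window=60)   # tiny limits for the demo
print([gate.allow_post("bot_1", t) for t in (0, 5, 10, 15)])
# [True, True, True, False] -> fourth post within a minute gets challenged
```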

The researchers argue that such moderation would protect free speech in modern public squares rather than censor it [1][2][3].

As open-source AI models become more accessible, the focus of regulation should shift to the dissemination of AI content on social platforms rather than its generation. This could involve requiring content creators to prove the accuracy or provenance of their content before it reaches large audiences [2].
