Iran conflict unleashes AI-generated disinformation war as fake videos amass 21.9 million views


The U.S.-Israel military strikes against Iran have triggered a massive wave of AI-generated content and social media disinformation. Fabricated visuals portraying fake missile strikes and sinking warships have collectively garnered over 21.9 million views on X alone. The platform responded by updating its Creator Revenue Sharing program policies to suspend users posting undisclosed AI-generated war content for 90 days.

AI-Generated Content Floods Social Media During Iran Conflict

The recent U.S.-Israel military strikes against Iran have sparked what experts are calling a narrative war, with disinformation and AI-generated content flooding social media platforms at an unprecedented scale. Fabricated visuals portraying Iran as more menacing than ground evidence suggests have collectively garnered more than 21.9 million views on X alone, according to misinformation watchdog NewsGuard [2]. The Iran conflict has become a testing ground for AI-assisted disinformation tactics that blend digitally manipulated images, video game footage, and completely fabricated content to shape public perception.

Source: France 24

According to investigations by Wired and the BBC, hundreds of posts across social media platforms included misleading footage and photos, many posted in the immediate aftermath of missile strikes [1]. Clips from digital flight simulators were passed off as real-time operational footage, while out-of-context images of battleships and old videos of aerial missile attacks were repurposed. One post with more than 4 million views claimed to show ballistic missiles sailing over Dubai but actually depicted an Iranian attack on Tel Aviv from October 2024 [1]. Another viral claim featured an image of a sinking naval aircraft carrier, alleging it showed a recent attack on the USS Abraham Lincoln in the Arabian Sea. NewsGuard confirmed the image actually showed the intentional sinking of the USS Oriskany nearly 20 years ago, yet the post was viewed more than 6 million times [1].

Engagement Farming Drives Social Media Disinformation

The proliferation of false content stems from engagement farming accounts and bots, all competing to be the loudest, most clicked-on voice in the digital space. Some seek political and social influence, while others pursue financial gain through viral posts. Nearly all of the misleading posts documented were shared by premium subscriber accounts with blue checkmarks on X, including state-funded media outlets in Iran [1]. Sofia Rubinson, senior editor of NewsGuard's Reality Check newsletter, explained that undisclosed AI-generated content is "posted by anonymous accounts that tend to report on geopolitical conflicts" and is known for spreading exaggerated claims, usually from a pro-Iran perspective [1]. These super-spreaders then see their content picked up by accounts with larger followings, amplifying its reach exponentially.

The BBC documented completely AI-generated videos that had amassed nearly 100 million total views, shared by what the outlet identified as notorious super-spreaders of disinformation [1]. Moustafa Ayad from the Institute for Strategic Dialogue told AFP that "there is definitely a narrative war unfolding online," with goals ranging from rationalizing strikes across the Gulf to trumpeting Iranian military might [2]. Users prone to confirmation bias and reliant on digital news sources repeatedly fall victim to these tactics, making the information environment increasingly treacherous.

X Updates Creator Revenue Sharing Program Policies

The alarming scale of viral misinformation prompted X to update its policies on Tuesday. The platform announced it would suspend users from its Creator Revenue Sharing program for 90 days if they post AI-generated content depicting armed conflict without labeling it as such [1]. X's head of product Nikita Bier stated that "during times of war, it is critical that people have access to authentic information on the ground," adding that current AI technologies make it "trivial to create content that can mislead people" [2]. The new policy represents a notable shift for a platform whose approach to content moderation has faced heavy criticism since Elon Musk completed his $44 billion acquisition in October 2022.

The policy change targets what X described as a threat to information authenticity amid the ongoing conflict. However, questions remain about enforcement capabilities and whether Community Notes can keep pace with the speed and volume of false content. Ari Abelson, co-founder of OpenOrigins, a media authenticity company fighting deepfakes, warned that "the fog of war is quickly becoming the slop of war as AI synthetic content creates infinite noise in information ecosystems" [2].

Google's Reverse-Image Tool Shows Weaknesses

Adding to the challenges, a NewsGuard study revealed that Google's reverse-image tool has produced inaccurate AI-generated summaries of fabricated and misleading visuals tied to the Middle East conflict, exposing a "significant weakness in a widely used system for verifying the authenticity of images" [2]. This means even users attempting to verify information through traditional fact-checking methods may encounter AI-generated misinformation. AFP's fact-checkers have worked to debunk a series of claims by pro-Iranian accounts posting old videos to exaggerate damage from Tehran's missile strikes on Israel and Gulf states including the UAE and Saudi Arabia [2].

On the opposing side, Iranian opposition outlets have pushed false narratives on X and Telegram, while fake social media accounts have sprung up impersonating senior Iranian leadership, according to the Institute for Strategic Dialogue [2]. As the conflicts in Ukraine and Gaza have demonstrated, such disinformation tactics have become standard practice across global conflicts. What makes the current situation particularly concerning is what Ayad describes as "the speed and scale of these representations," which drive much of the online confusion about what has been targeted and about casualty counts [2]. With AI tools becoming more accessible and sophisticated, the challenge for social media platforms, fact-checkers, and users alike will only intensify as they navigate an information environment where authentic footage and fabricated content become increasingly difficult to distinguish.
