AI Fakes About Iran War Create Unprecedented Crisis of Reality on Social Media Platforms

Reviewed by Nidhi Govil


AI-generated fake content about the Iran war has flooded social media platforms, with over 110 unique AI fakes identified in just two weeks. X's Grok and Google's Gemini are providing false information about real footage while fake videos of non-existent attacks rack up millions of views. The crisis marks a troubling shift in which AI-powered disinformation makes it nearly impossible to distinguish real war footage from fabricated content.

AI Disinformation Overwhelms Iran War Coverage

Since U.S.-Israeli strikes against Iran began on February 28th, social media platforms have been engulfed by AI-generated fake content depicting scenes from the conflict that never occurred [1]. The New York Times identified over 110 unique AI fakes about the Iran war in just two weeks, collectively viewed millions of times across X, TikTok, and Facebook [3]. These AI-generated images and videos depict everything from screaming civilians amid explosions that never happened to decimated city streets that were never attacked.

Source: NYMag

Disinformation expert Tal Hagin warns that the proliferation of AI-based fake news is pushing society over the edge of a fact-based reality [1]. The scale represents a drastic increase in AI-generated fake imagery compared with previous conflicts, with nearly half of all viral falsehoods now involving generative AI tools [4]. Marc Owen Jones, an associate professor of media analytics at Northwestern University in Qatar, notes that the current conflict has produced far more AI-related content than the outbreak of the Ukraine war did [3].

Fake AI Content on X and Platform Failures

X has become a primary battleground for this information warfare, with Grok repeatedly providing false information that leaves the platform increasingly unhinged from reality [1]. When users asked Grok to verify authentic footage of the Minab school bombing, the AI assistant confidently confirmed the false claim that the video actually showed a 2021 attack in Kabul, citing The New York Times, the Guardian, Al Jazeera, and Wikipedia as sources even though those outlets' own images directly contradicted it. Grok was not simply wrong; it was confidently wrong, lending denialism machine authority through fabricated citations.

Google's Gemini has performed similarly poorly at fact-checking. When asked about an authentic photograph of freshly dug graves in Minab, prepared for the burial of more than 100 schoolgirls, Gemini falsely claimed the image depicted a mass burial site in Kahramanmaraş, Turkey, after the 2023 earthquake [4]. An international study in 2025 found that about half of all AI-generated summaries contained at least one significant sourcing or accuracy issue; for Gemini, the figure reached 76% [4].

Paid accounts bearing blue check marks have shared AI-generated fakes widely, including an image of a U.S. B-2 bomber supposedly shot down by Iran, viewed 1 million times before deletion, and an image of captured Delta Force members, seen 5 million times [1]. While X announced it would temporarily demonetize blue-check accounts that post unlabeled AI-generated videos of armed conflict, non-AI misinformation continues to flourish on the platform.

AI Tools Creating Lifelike Simulations and Alternate Reality

Sophisticated AI tools now enable nearly anyone to create lifelike simulations of war at little to no cost [3]. The fabricated videos often depict war like an over-the-top Hollywood action movie: enormous explosions erupting into mushroom clouds, sonic booms rippling across unnamed cities, and supposed hypersonic missiles leaving glowing streaks in the sky. One widely circulated fake, a shaky handheld clip ostensibly shot from a Tel Aviv apartment balcony as missiles pounded the skyline, was viewed millions of times across platforms despite being AI-generated [3].

Source: NYT

The AI-generated fake imagery has essentially created an alternate reality more suited to social media, where exaggerated footage finds larger audiences [3]. Real footage of missile strikes is typically shot from far away at night, with missiles visible as little more than bright lights in the distance and explosions appearing as plumes of smoke rather than fireballs. The contrast highlights how AI fakes are optimized for engagement rather than accuracy.

The AI Slop War and Eroding Fact-Based Reality

Experts have dubbed this phenomenon the "AI slop war," in which cheap, fast, and widely available AI video-generation tools have created a powerful flattening effect [5]. The majority of AI videos about the war push pro-Iranian views, often falsely demonstrating Iran's military superiority, according to a study by Cyabra, a social media intelligence company [3]. Iranian officials have shared AI-generated content themselves, including a fabricated video of high-rise buildings in Bahrain on fire [1]. The Iranian embassy in Austria even illustrated real deaths of children with a fabricated image of a child's pink backpack covered in blood and dust, an image that Google's SynthID watermarking tool confirmed was AI-generated.

The fog of AI does not need every piece of content to be fabricated; it needs the question "Is this real?" to become close to unanswerable [2]. The result is online chaos and confusion in which real photographs of real civilian casualties are called fake, and fake images are used to illustrate real deaths. The day before the strikes began, an AI image circulating on social media planted the notion that Iran hides military equipment in schools. The next day, when the Shajareh Tayyebeh elementary school in Minab was hit, killing at least 175 people, including many children, audiences were already primed by the propaganda.

Source: The Atlantic

Impact on Fact-Checking and Information Warfare

Shayan Sardarizadeh, a senior journalist at BBC Verify, reports that AI now accounts for a large portion of the misinformation the team debunks, with nearly half or more of all viral falsehoods involving generative AI [4]. This represents a massive shift from the early weeks of the Gaza and Ukraine wars, when most fake posts were old videos or repurposed video-game footage. X's reward program incentivizes sensational content, and today's AI tools make such content easy to produce, resulting in a complete breakdown of reality during breaking-news situations [1].

The crisis unfolds faster than any institution can process, whether newsroom, fact-checker, photo wire service, or platform. People increasingly rely on AI summaries for news: 65% report regularly seeing AI summaries of information, and the share using generative AI to get information has doubled in the past year [4]. Yet these tools routinely fail at basic verification tasks, wasting investigative time and risking that real atrocities will be denied. The combination of AI-powered disinformation, platform failures in verification, and commercial incentives for sensational content has created an environment in which distinguishing truth from false narratives is increasingly difficult for audiences trying to understand what is actually happening in the conflict.

[2] The Atlantic | The Fog of AI
