2 Sources
[1]
As the U.S. wages war with Iran, social media users face worsening disinformation
Disinformation engagement farming is getting worse with the help of AI. Credit: Jason Armond / Contributor / Los Angeles Times via Getty Images

Before the dust had settled on the ruins of the Shajareh Tayyebeh school -- a casualty of the recent U.S.-Israel military strikes against Iran, one that resulted in the deaths of up to 168 adults and children -- people were already engagement-farming online. Clips of digital flight simulators were passed off as real-time ops footage, while out-of-context images of battleships and old videos of aerial missile attacks were repurposed to sell users a tale of Iranian dominance. AI-edited content proliferated. According to experts, the posts had accumulated hundreds of millions of views in just a handful of days.

The growing number of viral posts -- and the potential for even more to pop up as users earned cash for the viral falsehoods -- was alarming enough to prompt X to edit its policies on misinformation. As of yesterday, X says it will suspend users from its Creator Revenue Sharing program if they post AI-generated content depicting armed conflict without labeling it as such. And not even Google searches are safe from misinformation these days.

The proliferation of digital misinformation is the product of a web of bots and engagement-farming accounts, all with the shared goal of being the loudest, most clicked-on account in the room. Some hope to win political and social influence; others just want the money. Meanwhile, users, prone to confirmation bias and reliant on digital news sources, repeatedly fall victim to their racket. Engagement farming, no longer just trading in the currency of memes and clickbait, has become a dangerous, politically fraught game.

Recent posts engaging in active disinformation about the conflict in Iran primarily involve exaggerating the scale and success of Iranian counterattacks, experts explain. A recent investigation by Wired documented hundreds of posts across Elon Musk's X that included misleading footage and photos -- including AI-manipulated content -- or promoted false claims about the scale of the attacks, many of which were posted in the immediate aftermath of missile strikes. A post with more than 4 million views claimed to show ballistic missiles sailing over Dubai, but actually depicted an Iranian attack on Tel Aviv in October 2024. Another, with more than 375,000 impressions, shows a fictitious before-and-after image of the shelled compound of assassinated Iranian leader Ali Hosseini Khamenei. According to Wired, nearly all of the posts were shared by premium subscriber accounts with blue checkmarks, including state-funded media outlets in Iran.

As in previous military conflicts, accounts have also attempted to pass off video game footage as verified news clips, including AI-manipulated images of downed F-35 fighter jets ripped from flight simulator games. The images have been shared across TikTok, some with links to Russian influence operations, the BBC reported. In addition to out-of-context footage and misleading content, the BBC also documented a handful of completely AI-generated videos that had amassed nearly 100 million total views, shared by what the outlet calls notorious "super-spreaders" of disinformation. A report from misinformation watchdog NewsGuard also chronicled a cadre of users sharing viral posts circulating false claims of targeted military strikes against U.S. and Israeli strongholds, predominantly using repurposed video footage and out-of-context or completely recontextualized images of destruction.

"[These videos] are posted by anonymous accounts that tend to report on geopolitical conflicts. These are accounts that are known to NewsGuard for spreading exaggerated claims, usually from a pro-Iran perspective," said Sofia Rubinson, senior editor of NewsGuard's Reality Check newsletter and co-author of the report. From there, Rubinson explains, other accounts with larger followings pick up and spread the false claims.

For example, hours after initial reports of the U.S.'s military strikes in Iran, users on X began reposting an image of a sinking naval aircraft carrier. Users claimed that it showed a recent attack on the aircraft carrier USS Abraham Lincoln in the Arabian Sea. The U.S. military's Central Command issued a statement refuting the claim that same day. NewsGuard confirmed the image actually showed the intentional sinking of the USS Oriskany nearly 20 years ago. The claim was shared by unverified "news" accounts and even Kenyan member of parliament Peter Salasya. Salasya's post has been viewed more than 6 million times.

Multiple accounts, including Salasya's, shared another video allegedly showing Israel's Dimona nuclear power plant under siege by air. The video racked up hundreds of thousands of impressions across anti-Israel and pro-Iran pages; an X Community Note now appears below the video on Salasya's page, clarifying that the images are of a March 2017 attack in Balaklia, Ukraine. NewsGuard found that such posts have already garnered at least 21.9 million views across X. Posts inducing fear of domestic retaliatory attacks have also circulated online, including an unverified list of U.S. cities alleged to be top targets for Iranian sleeper cells -- the list appears to have been written in Apple's Notes app.

The acceleration of advanced generative AI and relaxed moderation policies across social media platforms have exacerbated an online misinformation crisis, experts have warned. Particularly over recent months, including during the U.S.-led capture of Venezuelan leader Nicolas Maduro, NewsGuard researchers have noticed a pattern in online disinformation emerging during periods of breaking news. "People now have a shorter window for the lapse between an event occurring and authentic visuals coming out of the media," explained Rubinson. To put it more bluntly: users are losing their patience, accustomed to an online environment where information is usually right at their fingertips.

These brief periods, or voids, between breaking news reports and confirmed video or photos become fertile ground for disinformation bots and engagement farmers, Rubinson says. They also threaten to reinforce conspiratorial thinking -- that mainstream news outlets are keeping information from the public, for example -- and feed users' own confirmation biases. Political conflict is particularly ripe for the spread of such misinformation, which is in turn strengthened by active disinformation campaigns from both sides of an armed conflict. Researchers have found that a lack of proximity to events makes it easier to believe out-of-context or exaggerated information. "It's an attempt to fill this fog of war," said Rubinson. "It can be very overwhelming for people. They want to make sense of it, and visuals are a good way for us to process what is going on in war when we can't comprehend the scale of these conflicts."
This becomes a greater problem as individuals increasingly use social media platforms as their sole sources of news, and as previously reliable fact-checking tools, including straightforward Google searches, become more unreliable. AI chatbots and AI-powered search have become embedded in the very fiber of real-world crisis events, as users turn to them as real-time fact-checkers. Rubinson said that nearly every X post NewsGuard analyzed included the same reply: "@Grok is this true?"

But AI assistants and platform chatbots, including X's Grok, are notoriously unreliable at disseminating and verifying breaking news. They are also inconsistent at applying their own platforms' moderation policies. The BBC found that Grok erroneously verified recent AI-generated images depicting Iranian military movements, for example. According to a second report by NewsGuard published March 3, Google's AI-powered Search Summaries have repeated misleading claims about the U.S.-Iran conflict when prompted with reverse image searches. For example, NewsGuard researchers uploaded a frame from a video shared online claiming to show the destruction of a CIA outpost in Dubai. Google's AI summary verified the story, writing: "The image shows a fire at a high-rise residential building in Dubai, UAE, reportedly occurring on March 1, 2026, following regional tensions. ... Conflicting reports emerged regarding the cause, with some sources mentioning a drone strike and others referring to the building as a specific intelligence facility." The video actually depicts a 2015 residential fire in the city of Sharjah.

Security experts have sounded alarm bells over such "AI information threats," including AI tools used to generate and amplify misleading content. A report by the UK Centre for Emerging Technology and Security suggests the worsening information environment may pose existential threats to public safety, national security, and democracy without direct intervention. Meanwhile, civilians and journalists on the ground in Iran are fighting back against a near-total internet blackout, following a massive push by the Trump administration and its ally Elon Musk to get Starlink internet connections to those on the ground. Bad actors, on the other hand, are still finding their way through the block and back onto sites like X.
[2]
'Narrative war': disinformation surges as conflict roils Middle East
Washington (United States) (AFP) - Recycled images, video game footage passed off as missile strikes, and AI-generated combat visuals: the US-Israeli assault on Iran has unleashed a torrent of online disinformation that analysts are calling a war of narratives.

Since US and Israeli strikes over the weekend ignited a regional conflict, a parallel information war has erupted, with supporters on both sides flooding social media with falsehoods that often spread faster than the facts on the ground. AFP's fact-checkers have debunked a series of claims by pro-Iranian accounts posting old videos to exaggerate the damage from Tehran's missile strikes on Israel and Gulf states including the UAE and Saudi Arabia.

"There is definitely a narrative war unfolding online," Moustafa Ayad, from the Institute for Strategic Dialogue (ISD), told AFP. "Whether it was to rationalize the strikes across the Gulf, or to trumpet Iranian military might in the face of the Israeli and US strikes, the goals seem to be to wear down 'enemies.'"

On the other end of the divide, Iranian opposition outlets have pushed false narratives on X and Telegram blaming the Iranian government itself for a missile strike on an Iranian girls' school, researchers said. ISD also cautioned that fake social media accounts have sprung up impersonating senior Iranian leadership. Meanwhile, video game clips repurposed as Iranian missile strikes and AI-generated images of US warships being sunk, including the USS Abraham Lincoln, have garnered millions of views across major platforms. Similar disinformation tactics have also been reported in other global conflicts, including Ukraine and Gaza. "It is really the speed and scale of these representations that is astounding, driving much of the online confusion of what has been targeted, or casualty counts for instance," said Ayad. Such fabricated visuals -- portraying Iran as more menacing than evidence from the ground suggests -- have collectively garnered more than 21.9 million views on the Elon Musk-owned X alone, according to the disinformation watchdog NewsGuard.

'Fog of war'

X on Tuesday announced it would suspend creators from its revenue sharing program for 90 days if they post AI-generated videos of armed conflicts without disclosing they were artificially made. The policy change targets what the company described as a threat to information authenticity amid the ongoing war against Iran. "During times of war, it is critical that people have access to authentic information on the ground," X's head of product Nikita Bier said, adding that current AI technologies make it "trivial to create content that can mislead people."

The new AI disclosure policy represents a notable pivot for a platform whose approach to content moderation has been heavily criticized since Musk completed his $44 billion acquisition of the site in October 2022. "The fog of war is quickly becoming the slop of war as AI synthetic content creates infinite noise in information ecosystems," said Ari Abelson, co-founder of OpenOrigins, a media authenticity company that fights deepfakes. "As we witness yet another immensely impactful global conflict unfolding in Iran, it's important for us to all understand how our media ecosystem is shifting."

In what could further stoke online chaos, a NewsGuard study showed that Google's reverse-image tool has produced inaccurate AI-generated summaries of fabricated and misleading visuals tied to the Middle East conflict. This exposes a "significant weakness in a widely used system for verifying the authenticity of images," the watchdog said. There was no immediate comment from Google.

The United States and Israel launched the attack on Saturday and quickly killed Iran's supreme leader, Ayatollah Ali Khamenei, two days after US envoys had been speaking to Iran in Geneva on a nuclear accord. Since then, Iran has expanded its retaliatory missile and drone barrage across the Middle East, hitting a US consulate and base on Tuesday as the United States and Israel said they had pummeled key sites inside Tehran.
The U.S.-Israel military strikes against Iran have triggered a massive wave of AI-generated content and social media disinformation. Fabricated visuals portraying fake missile strikes and sinking warships have collectively garnered over 21.9 million views on X alone. The platform responded by updating its Creator Revenue Sharing program policies to suspend users posting undisclosed AI-generated war content for 90 days.
The recent U.S.-Israel military strikes against Iran have sparked what experts are calling a narrative war, with disinformation and AI-generated content flooding social media platforms at an unprecedented scale. Fabricated visuals portraying Iran as more menacing than ground evidence suggests have collectively garnered more than 21.9 million views on X alone, according to misinformation watchdog NewsGuard [2]. The Iran conflict has become a testing ground for AI-assisted disinformation tactics that blend digitally manipulated images, video game footage, and completely fabricated content to shape public perception.
According to investigations by Wired and the BBC, hundreds of posts across social media platforms included misleading footage and photos, many posted in the immediate aftermath of missile strikes [1]. Clips of digital flight simulators were passed off as real-time operations footage, while out-of-context images of battleships and old videos of aerial missile attacks were repurposed. One post with more than 4 million views claimed to show ballistic missiles sailing over Dubai but actually depicted an Iranian attack on Tel Aviv from October 2024 [1]. Another viral claim featured an image of a sinking naval aircraft carrier, alleging it showed a recent attack on the USS Abraham Lincoln in the Arabian Sea. NewsGuard confirmed the image actually showed the intentional sinking of the USS Oriskany nearly 20 years ago, yet the post was viewed more than 6 million times [1].

The proliferation of false content stems from engagement farming accounts and bots, all competing to be the loudest, most clicked-on voice in the digital space. Some seek political and social influence, while others pursue financial gain through viral posts. Nearly all of the misleading posts documented were shared by premium subscriber accounts with blue checkmarks on X, including state-funded media outlets in Iran [1]. Sofia Rubinson, senior editor of NewsGuard's Reality Check newsletter, explained that undisclosed AI-generated content is "posted by anonymous accounts that tend to report on geopolitical conflicts," accounts known for spreading exaggerated claims, usually from a pro-Iran perspective [1]. These super-spreaders then see their content picked up by accounts with larger followings, amplifying the reach exponentially.

The BBC documented completely AI-generated videos that had amassed nearly 100 million total views, shared by what the outlet identified as notorious super-spreaders of disinformation [1]. Moustafa Ayad from the Institute for Strategic Dialogue told AFP that "there is definitely a narrative war unfolding online," with goals ranging from rationalizing strikes across the Gulf to trumpeting Iranian military might [2]. Users prone to confirmation bias and reliant on digital news sources repeatedly fall victim to these tactics, making the information environment increasingly treacherous.

The alarming scale of viral misinformation prompted X to edit its policies on Tuesday. The platform announced it would suspend users from its Creator Revenue Sharing program for 90 days if they post AI-generated content depicting armed conflict without labeling it as such [1]. X's head of product Nikita Bier stated that "during times of war, it is critical that people have access to authentic information on the ground," adding that current AI technologies make it "trivial to create content that can mislead people" [2]. The new policy represents a notable shift for a platform whose approach to content moderation has faced heavy criticism since Elon Musk completed his $44 billion acquisition in October 2022.

The policy change targets what X described as a threat to information authenticity amid the ongoing conflict. However, questions remain about enforcement capabilities and whether Community Notes can keep pace with the speed and volume of false content. Ari Abelson, co-founder of OpenOrigins, a media authenticity company fighting deepfakes, warned that "the fog of war is quickly becoming the slop of war as AI synthetic content creates infinite noise in information ecosystems" [2].
Adding to the challenges, a NewsGuard study revealed that Google's reverse-image tool has produced inaccurate AI-generated summaries of fabricated and misleading visuals tied to the Middle East conflict, exposing a "significant weakness in a widely used system for verifying the authenticity of images" [2]. This means even users attempting to verify information through traditional fact-checking methods may encounter AI-generated misinformation. AFP's fact-checkers have worked to debunk a series of claims by pro-Iranian accounts posting old videos to exaggerate damage from Tehran's missile strikes on Israel and Gulf states including the UAE and Saudi Arabia [2].

On the opposing side, Iranian opposition outlets have pushed false narratives on X and Telegram, while fake social media accounts have sprung up impersonating senior Iranian leadership, according to the Institute for Strategic Dialogue [2]. As conflicts in Ukraine and Gaza have demonstrated, similar disinformation tactics have become standard practice across global conflicts. What makes the current situation particularly concerning is what Ayad describes as "the speed and scale of these representations," which drive much of the online confusion about what has been targeted and casualty counts [2]. With AI tools becoming more accessible and sophisticated, the challenge for social media platforms, fact-checkers, and users alike will only intensify as they navigate an information environment where authentic footage and fabricated content become increasingly difficult to distinguish.