2 Sources
[1]
The Fog of AI
The spread of fake imagery of the Iran war is helping to make the question Is this real? all but unanswerable.

On February 27, an AI-generated image appeared on Instagram purporting to show heavy military equipment stationed inside Karimian Elementary School in Isfahan, Iran. The post, shared by accounts including the Free Union of Iranian Workers, an independent labor union operating inside Iran whose leaders have been jailed by the regime, read: "This is not a military zone! It's Karimian Elementary." The image carried a visible Google Gemini watermark, indicating that it had been created by the software. The school posted a rebuttal, noting that the equipment could not physically fit on the premises. Iranian-diaspora fact-checkers confirmed that the image was fabricated.

The next day, Shajareh Tayyebeh, a girls' elementary school in the southern city of Minab, was hit in the first wave of strikes on Iran. Iranian authorities reported at least 175 people dead, many of them children. The exact death toll has not been independently confirmed, but a New York Times investigation verified that the school had been hit by a precision strike at the same time as attacks on an adjacent naval base, and a preliminary investigation by the American military concluded that U.S. forces were most likely responsible. The school sat on the grounds of the Iranian navy's Asef Brigade barracks, an active military base. The building had been converted from military use, and served children from military and civilian families.

In short: The day before the strikes began, an AI image on social media planted the notion that the regime hides military equipment in schools. The next day, a real school -- once part of a military compound but walled off from it since 2016, according to Human Rights Watch -- was destroyed.
The fake was wrong about Karimian, but by the time the Minab strike happened, audiences were primed to believe that a school was a legitimate military target, not the site of a civilian catastrophe. Layer by layer, an accumulation of AI imagery circulated on social media, making it difficult to establish what happened to these children.

This is the fog that AI has introduced to the war in Iran. This isn't a war where AI fakes fool everyone, nor one where detection tools catch everything. We live in a world where real photographs of real civilian deaths are called fake, and where fake images are used to illustrate real deaths. Where correct identification of one fake image is used to cast doubt on real images, where incorrect detection is treated as authoritative, and where all of it happens faster than any institution, newsroom, fact-checker, photo wire service, or platform can process. The fog of AI does not need every piece of content to be fabricated. It needs the question Is this real? to become close to unanswerable.

When video of the Minab devastation circulated, claims spread on X, Telegram, and Instagram that the footage was actually from Peshawar, Pakistan. Users on X, many of them diaspora accounts opposed to the regime, claimed that the footage depicted the May 2021 bombing of the Sayed ul-Shuhada school in Kabul. Another user asked Grok to verify the post, and Grok agreed with the false claim, citing The New York Times, the Guardian, Al Jazeera, and Wikipedia as sources even though they contained images directly contradicting it. Fact-checkers intervened, this time to defend the authenticity of real footage, having debunked the fake imagery about a different school the day before. Then open-source intelligence analysts geolocated the footage to coordinates matching the school. Grok was not simply wrong; it was confidently wrong. Asked to verify a real video, the AI confirmed a false claim that it supported with fabricated citations, giving denialism machine authority.
Meanwhile, Iran undermined the documentation of the tragedy. The Iranian embassy in Austria denounced the Minab strike and accused Europe of complicity in the "death of our collective soul." The post included a photograph of a child's pink backpack covered in blood and dust. SynthID, Google's watermarking tool, confirmed that the image had been generated by Google's AI. The regime illustrated the deaths of real children with a fabricated image. The identification of that fake photo now furnishes an alibi for those who want to deny the real bombing.

The Iranian regime has long dismissed evidence of its violence and crimes by calling the documentation fabricated, staged, and foreign-produced. Now a similar accusatory reflex has migrated to opposition media and diaspora accounts. Yet children were killed, even if there is false propaganda about their deaths. That the regime has an interest in publicizing these deaths does not mean that the deaths did not happen.

Mourners in Minab buried the schoolgirls and staff on March 3. Iran's foreign minister, Abbas Araghchi, posted a photograph on X of the burial site that was viewed 3 million times. Within hours, a diaspora account claimed that the image had been recycled from a Jakarta cemetery where COVID victims were buried in July 2021. The claim named the cemetery, the date, and the photographer, but none of that information was supported by reverse image search, metadata analysis, or other fact-checking. A verified account posted a "claim versus fact" graphic that said: "Iran releases AI altered photo of graves being dug for 160 girls." At the same time, an account calling for "transparent investigation to ensure accountability" illustrated the real tragedy with an AI-generated image of parents mourning over shrouded bodies, further contaminating the evidentiary record that the post was trying to defend.
The New York Times visual-investigations team geolocated the burial site to Minab's Hermud Cemetery. Satellite imagery showed that the graves were dug on Monday in a previously untouched section of ground, consistent with a Saturday bombing and a Tuesday funeral. A New York Times journalist noted on X that the image was not AI generated.

To learn that the regime staged an elaborate, televised funeral for children killed by foreign strikes produces in many Iranians, inside and outside the country, a rage that I understand. The protests that started on December 28, 2025, and peaked on January 8 and 9, were answered with what are thought to have been massacres of thousands of protesters, including children. Parents went to great pains to retrieve their children's bodies. When bodies were returned, families were sometimes asked to pay exorbitant fees, to agree to conditions denying burials or dignified funerals, or were forced to concede that the dead were members of the security forces and had been killed by "terrorists."

But the resentment at the regime's selective grief does not make the graves empty. It does not make the children un-real. And it does not justify dismissing evidence of their deaths with a two-letter accusation: "AI." Both formulations, the denials of the bombing and the uses of the bombing for propaganda, begin and end in the same place: Evidence has ceased to function as it should.

One hundred seventy-five people were reportedly buried in Minab, most of them children. Nearly every actor in this conflict, from every direction, has made it difficult to establish that these children lived, that they were killed, and that someone is responsible. In Minab, the fact of these children's deaths has been documented, verified, and geolocated. None of it has been enough to prevent the doubt from spreading faster than the evidence.
[2]
Cascade of A.I. Fakes About War With Iran Causes Chaos Online
A torrent of fake videos and images generated by artificial intelligence has overrun social networks during the first weeks of the war in Iran. The videos -- showing huge explosions that never happened, decimated city streets that were never attacked, or troops protesting the war who do not exist -- have added a chaotic and confusing layer to the conflict online.

The New York Times identified over 110 unique A.I.-generated images and videos from the past two weeks about the war in the Middle East. The fakes covered every aspect of the fighting: They falsely depicted screaming Israelis cowering as explosions ripped through Tel Aviv, Iranians mourning their dead and American military vessels bombarded with missiles and torpedoes. Collectively, they were seen millions of times online through networks like X, TikTok and Facebook, and countless more times within private messaging apps popular in the region and around the world.

The Times identified the A.I. content by checking for both obvious signs -- such as depictions of buildings that do not exist, garbled text and behaviors or movements that defy expectations -- and for invisible watermarks embedded within the files. The posts were also checked with multiple A.I. detector tools and compared with reports from news organizations.

A sophisticated new wave of A.I. tools makes the fakes possible, enabling nearly anyone to create lifelike simulations of war that can deceive the naked eye for little to no cost. Similar content has spread in other conflicts, including the war between Ukraine and Russia. But this war has multiple fronts, and that has led to a proliferation of fake content since the United States and Israel first attacked Iran, according to experts.

"Even compared to when the Ukraine war broke out, things now are very different," said Marc Owen Jones, an associate professor of media analytics at Northwestern University in Qatar.
"We're probably seeing far more A.I.-related content now than we ever have before."

Overall, the A.I. fakes included ... The content has become a potent informational weapon for Tehran as it seeks to shake the public's tolerance for war by depicting scenes of devastation and destruction across the region. The majority of A.I. videos about the war push pro-Iranian views, often to falsely demonstrate its military superiority and sophistication, according to a study of online activity by Cyabra, a social media intelligence company.

"The use of A.I. images of places in the Gulf -- being burnt or damaged -- becomes more important in Iran's playbook," Mr. Jones said, "because it allows them to give a sense that this war is more destructive and maybe more costly for America's allies than it might actually be."

In one of the most circulated fake videos found online, a shaky handheld scene seemingly shot from an apartment balcony in Tel Aviv shows the skyline pounded with missiles as an Israeli flag sits in the foreground. The video was viewed millions of times across platforms and was picked up by social media influencers and fringe news websites, according to a review of social media activity by The Times.

The Israeli flag in the foreground was one telltale sign that the video was A.I.-generated, experts said. To generate such videos, creators who use A.I. tools will typically write simple text instructions describing, for example, a shaky handheld video of a missile strike on Israel. The A.I. tools will then often include an Israeli flag or the Star of David to fulfill such a request. Several other A.I. videos included the flag.

There is ample genuine footage of the war being shared online, too, with cellphones and social platforms giving a real-time view of the conflict. Many of those images and videos are more subdued than the scenes made by A.I. tools.
Real footage of missile strikes was often shot from far away, typically at night, with missiles visible as little more than bright lights in the distance. Explosions in real videos are more often shown as plumes of smoke, not as fireballs, with bystanders rushing to film the scene only after the munitions meet their target. Some A.I. videos and images, by contrast, have falsely depicted war like an over-the-top Hollywood action movie, with enormous explosions resulting in mushroom clouds, sonic booms that ripple across unnamed cities and supposed hypersonic missiles that leave glowing streaks in the sky. Real footage is sometimes enhanced by A.I. tools to make explosions appear larger and more devastating, further blurring the line between what is real and fake. The A.I. footage has essentially created an alternate reality more suited to social media, experts said, where the exaggerated footage is more likely to find an audience.

In one instance, the A.I. fakes played an outsize role in the debate online and between governments over the fate of the U.S.S. Abraham Lincoln, an aircraft carrier deployed to the region. Iran's Islamic Revolutionary Guards Navy initially suggested on March 1 that it had successfully attacked the ship, possibly sinking it. That led to a deluge of A.I.-generated fakes depicting the ship or those like it on fire. Iranian users celebrated the footage online as evidence that their country's counteroffensive was rattling the U.S.-Israeli alliance. The United States later said that the attack was unsuccessful and that the ship was unharmed.

Dozens of other A.I. images and videos made no effort to hide that they were fake, acting instead as a new form of digital propaganda that brought to life the political arguments typically made by governments or their propaganda arms. Those included flattering depictions of world leaders as powerful men, or dehumanizing depictions of opposition leaders.
One collection of clearly fictional videos offered a view of the Shajareh Tayyebeh elementary school, which was destroyed by the United States in an apparent errant missile strike on Feb. 28, according to a preliminary inquiry. At least 175 people were killed, most of them children, according to Iranian officials. The A.I.-generated videos unfolded like short films, showing schoolgirls playing outside before an American fighter jet launches missiles.

Social media companies have done little to combat the scourge of A.I. videos that overwhelmed their platforms last year after OpenAI released Sora, a video-generating app that allowed anyone to create realistic fakes through a simple app. (The New York Times sued OpenAI and Microsoft in 2023, accusing them of copyright infringement of news content related to A.I. systems. The two companies have denied those claims.)

Though videos generated by many A.I. tools can include both visible and invisible watermarks labeling them as fake, those are easy to remove or obscure. Only a few of the videos identified by The Times contained such watermarks. Elon Musk's X, which has taken a broadly permissive approach to allowing misinformation on its platform, announced last week that it would suspend accounts from receiving revenue from the platform for 90 days if they posted A.I.-generated content of "armed conflict" without labeling it as such, in a bid to stop users from profiting off the falsehoods. But many of the Iranian-linked accounts identified by Cyabra appeared far more focused on spreading their messages than making money.

"This is a natural front for Iran to try and exploit and it feels like this is one of the reasons it is so voluminous," said Valerie Wirtschafter, a fellow at the Brookings Institution studying foreign policy and A.I. "It's actually a tool of war."
Over 110 AI-generated images and videos depicting fake Iran war scenes have spread across social media platforms, viewed millions of times. The cascade of A.I. fakes shows fabricated explosions, destroyed cities, and military strikes that never happened, making it nearly impossible to distinguish real footage from fabricated content and weaponizing confusion itself.
A flood of AI-generated fake imagery has transformed the information landscape surrounding the Iran war, with The New York Times identifying over 110 unique AI fakes circulated across social media in just two weeks [2]. These fabricated videos and images, viewed millions of times on platforms like X, TikTok, and Facebook, depict explosions that never occurred, decimated streets never attacked, and troops who don't exist. The misinformation crisis has created what experts describe as a fog of war where the fundamental question "Is this real?" has become nearly unanswerable [1].
The cascade of A.I. fakes covers every dimension of the conflict, falsely showing screaming Israelis as Tel Aviv explodes, Iranians mourning fabricated casualties, and American military vessels under attack. According to Marc Owen Jones, an associate professor of media analytics at Northwestern University in Qatar, "Even compared to when the Ukraine war broke out, things now are very different. We're probably seeing far more A.I.-related content now than we ever have before" [2].

The dangerous interplay between AI fakes and reality became devastatingly clear in late February. On February 27, an AI-generated image appeared on Instagram showing military equipment inside Karimian Elementary School in Isfahan, Iran, complete with a visible Google Gemini watermark [1]. Fact-checkers quickly confirmed the fabrication. Yet the next day, Shajareh Tayyebeh, a girls' elementary school in Minab, was destroyed in strikes that killed at least 175 people, many of them children. The school sat on the grounds of an Iranian naval base, having been converted from military use.
This sequence created a perverse priming effect: fake imagery casting a school as a military site circulated one day, then a real school was struck the next. The AI-generated image was wrong about Karimian, but audiences were already conditioned to view schools as legitimate military targets rather than sites of civilian catastrophe. When authentic video of the Minab devastation circulated, claims immediately spread that the footage actually showed locations in Pakistan or Afghanistan. Fact-checkers found themselves in the surreal position of defending the authenticity of real footage after having just debunked fake imagery about a different school.
The technological safeguards designed to identify AI-generated content have proven inadequate against the scale and sophistication of fake videos flooding social networks. When users asked Grok to verify footage from Minab, the AI confidently confirmed false claims that the video showed a 2021 Kabul bombing, citing The New York Times, the Guardian, Al Jazeera, and Wikipedia as sources, even though those sources contained images directly contradicting the claim [1]. Grok wasn't simply wrong; it provided fabricated citations with machine authority, demonstrating how AI systems asked to verify content can themselves amplify disinformation.
Even when watermarks work as intended, they create new problems. The Iranian embassy posted a photograph of a blood-covered child's backpack to document the Minab tragedy. Google's SynthID watermarking tool confirmed the image was AI-generated [1]. The regime illustrated real deaths with fabricated imagery, and the identification of that fake photo now provides ammunition for those denying the actual bombing occurred.
Tehran has deployed AI-generated content as a potent informational weapon to shape public perception of the conflict. According to Cyabra, a social media intelligence company, the majority of AI videos about the war push pro-Iranian views, often falsely demonstrating Iran's military superiority [2]. Jones notes that "The use of A.I. images of places in the Gulf -- being burnt or damaged -- becomes more important in Iran's playbook, because it allows them to give a sense that this war is more destructive and maybe more costly for America's allies than it might actually be."

One widely circulated fake video purportedly shot from a Tel Aviv balcony shows missiles pounding the skyline with an Israeli flag in the foreground. The video garnered millions of views across platforms and was amplified by social media influencers and fringe news websites [2]. The flag itself was a telltale sign of AI generation: creators using AI tools write simple text instructions, and the systems often include national symbols to fulfill such requests.

The AI-generated fake imagery creates an alternate reality tailored for social media virality, depicting war like an exaggerated Hollywood production. Fake videos show enormous explosions with mushroom clouds, sonic booms rippling across cities, and hypersonic missiles leaving glowing streaks -- scenes far more dramatic than genuine footage [2]. Real missile strikes, typically filmed from a distance at night, show munitions as bright lights, with explosions appearing as smoke plumes rather than fireballs. Some authentic footage has even been enhanced by AI tools to make explosions appear larger, further blurring the line between real and fabricated content.

This isn't a scenario where AI fakes fool everyone or where fact-checkers catch everything. The fog of AI operates through accumulation: layer upon layer of fabricated content that makes establishing truth nearly impossible. Correct identification of one fake image casts doubt on real images. Real photographs of civilian deaths are dismissed as fabrications. The speed of propagation outpaces the ability of every institution, newsroom, and platform to respond. The Iranian regime, which has long dismissed evidence of its violence as fabricated and foreign-produced, now finds this accusatory reflex adopted by opposition media and diaspora accounts [1]. Sophisticated AI tools enable nearly anyone to create lifelike war simulations for little to no cost, transforming propaganda from a state-controlled enterprise into a distributed operation where authenticity itself becomes the casualty.

Summarized by Navi