X suspends creators from revenue program for unlabeled AI-generated armed conflict videos

Reviewed by Nidhi Govil


X announced it will temporarily demonetize creators who post AI-generated videos of armed conflict without proper disclosure. The policy targets unlabeled AI-generated war footage that flooded the platform following recent U.S. and Israeli airstrikes in Iran, with fake videos racking up tens of millions of views and spreading misinformation during critical moments.

X Introduces New Policy Against Unlabeled AI-Generated War Footage

X announced a significant policy shift targeting creators who post AI-generated content depicting armed conflict without proper disclosure. Nikita Bier, X's head of product, revealed on March 3 that users who share unlabeled AI-generated war footage will face a 90-day suspension from the platform's Creator Revenue Sharing Program [1]. The decision comes as social media platforms, including those operated by Meta, have been flooded with fake battle scenes following the recent Iran conflict [5].

Source: ET


The policy applies specifically to creators enrolled in X's monetization program who post AI-generated videos of armed conflicts. Repeat offenders will face permanent suspension from the revenue-sharing initiative [2]. Bier emphasized that "during times of war, it is critical that people have access to authentic information on the ground," noting that with today's video generation tools, "it is trivial to create content that can mislead people" [1].

How X Will Combat Misinformation Through Detection and Community Notes

To enforce AI content disclosure requirements, X will rely on a combination of detection methods. The platform plans to identify violations through Community Notes, its crowd-sourced fact-checking system, as well as by detecting metadata and other signals left by generative AI tools [3]. This dual approach aims to catch both obvious synthetic content and more sophisticated fakes that might slip past casual observation.

The enforcement mechanism will demonetize accounts rather than remove them from the platform entirely. According to the policy, X will suspend violators from earning advertising revenue but won't prevent them from continuing to post [4]. This approach reflects the platform's attempt to balance content authenticity concerns with its commitment to relatively open speech policies under Elon Musk's ownership.

Epidemic of Fake War Footage Drives Policy Change

The policy change responds to a massive wave of disinformation that erupted after the United States and Israel launched airstrikes in Iran. One AI-generated video purporting to show Iranian missiles pursuing and shooting down a U.S. jet was viewed 70 million times, according to BBC Verify [5]. Another viral video showed fake missiles slamming into the ground near the Dome of the Rock in Jerusalem, complete with a computer-generated voice saying "Oh my god, here they come" [4].

Source: Gizmodo


Fake war footage isn't new to social media, but AI has supercharged the problem [4]. Full Fact, a UK fact-checking organization, noted it is "increasingly seeing AI turbocharge the spread of misinformation on social media," pointing to fake images of aircraft carriers and the Burj Khalifa on fire, as well as fabricated images supposedly showing the body of Ayatollah Ali Khamenei [5].

Limited Scope Raises Questions About Policy Effectiveness

The policy's narrow focus has drawn attention to significant gaps in X's approach. It applies only to AI-generated videos of armed conflicts, not AI content in general, and only affects creators enrolled in the Creator Revenue Sharing Program [2]. Non-monetized accounts can continue posting unlabeled AI-generated war footage without consequence.

Critics note the policy doesn't address other forms of misleading content. Videos from video games passed off as real combat footage, old war clips misrepresented as current events, and AI-generated political misinformation all remain outside the policy's reach [1][3]. Even prominent figures have been caught sharing fake content: Fox News host Bret Baier and Texas Governor Greg Abbott both shared misleading war videos before deleting their posts [3].

Revenue Incentives and the Broader Misinformation Challenge

X's Creator Revenue Sharing Program allows users with followings approaching 100,000 to earn hundreds of dollars monthly, creating strong incentives for shocking viral posts [5]. Critics argue the program rewards sensationalized content, including clickbait and outrage-generating posts, while maintaining lax content controls [1].

Source: Cointelegraph


The platform is separately testing a broader AI labeling toggle that would let users mark any post as containing synthetic content, though X hasn't shared a timeline for that feature [2]. X already watermarks images and videos generated by its Grok chatbot, but Grok itself has proven unreliable as a fact-checker, with users reporting it confirmed fake videos as authentic [3]. As AI-generated content becomes increasingly difficult to distinguish from authentic footage, the effectiveness of X's limited enforcement approach remains uncertain, particularly as the platform continues to serve as ground zero for misinformation whenever breaking news unfolds.

TheOutpost.ai
© 2026 Triveous Technologies Private Limited