Oversight Board demands Meta create dedicated rules for AI-generated content after viral deepfake

Reviewed by Nidhi Govil


Meta's Oversight Board is calling for a complete overhaul of how the company handles AI-generated content. The demand follows an AI-generated video that falsely depicted damage in Haifa during the Israel-Iran conflict and garnered over 700,000 views. The board says Meta's current approach, which relies on user self-disclosure and fact-checkers, cannot keep pace with the scale and speed of AI-generated misinformation, especially during crises.

Oversight Board Challenges Meta's Handling of AI-Generated Content

Meta faces mounting pressure to fundamentally reshape its approach to AI-generated content after its Oversight Board issued a scathing assessment of the company's current policies. The independent board is urging Meta to establish a dedicated AI content rule separate from its existing misinformation policy, invest in more sophisticated AI detection tools, and implement consistent digital watermarking practices across its platforms [1]. The recommendations stem from a case involving a deceptive AI-generated video that exposed critical gaps in how Meta identifies and labels deepfakes during global conflicts.

Source: Engadget

Viral Video Exposes Policy Gaps During Israel-Iran Conflict

The controversy centers on an AI-generated video posted last year that falsely depicted damaged buildings in the Israeli city of Haifa during the 2025 Israel-Iran conflict. The clip accumulated more than 700,000 views before the Oversight Board intervened [3]. The account sharing the content presented itself as a news outlet but was actually operated by someone in the Philippines [1]. Despite multiple user reports, Meta declined to remove the video or apply its "high risk" AI label, which would have clearly indicated the content had been created or manipulated with AI. The Oversight Board ultimately overturned Meta's decision and flagged "obvious signals of deception" linked to the account, prompting the company to disable three connected accounts [2].

Source: TechSpot

Current Labeling System Falls Short at Scale

The board delivered sharp criticism of Meta's existing "AI Info" label system, arguing it is "neither robust nor comprehensive enough to contend with the scale and velocity of AI-generated content," particularly during times of conflict or crisis [1]. The current approach relies heavily on user self-disclosure about AI usage and infrequent escalated reviews, a system the board says cannot meet the challenges posed by the current environment [2]. This reliance on voluntary disclosure becomes especially problematic as AI tools become more accessible and deepfakes spread at unprecedented speeds across Facebook, Instagram, and Threads.

Board Demands Dedicated Community Standard for AI Content

Among the Oversight Board's top recommendations is the creation of a dedicated AI content rule independent of Meta's misinformation policy. This separate Community Standard should include specific requirements about how and when users must label AI-generated content, along with clear information about penalties for those who violate the rule [3]. The board emphasized that Meta must do more to address the proliferation of deceptive content shared by inauthentic or abusive networks of accounts and pages, especially on matters of public interest, so users can distinguish what is real from what is fake [1].

Investment in Detection Technology and Content Credentials Needed

The Oversight Board called on Meta to invest in more sophisticated AI detection tools capable of reliably and automatically identifying AI-generated images, audio, and video [2]. The group also raised concerns about Meta's inconsistent implementation of Content Credentials, the industry framework designed to attach metadata showing where content originated and whether AI tools were involved in its creation [2]. Particularly troubling to the board were reports that digital watermarking is applied inconsistently even to content produced by Meta's own AI tools [1].

Source: Digit

Fact-Checker Reliance Proves Insufficient

This decision marks the third time the Oversight Board has criticized Meta's manipulated media rules, having previously described them as "incoherent" [1]. The board has repeatedly taken issue with Meta's reliance on third parties, including fact-checkers and trusted partners, to flag problematic content. In this case, the board noted hearing from these organizations that Meta "is less responsive to outreach and concerns, in part due to a significant reduction in capacities for Meta's internal teams" [1]. The board stated that Meta should be capable of conducting harm assessments itself rather than relying solely on partners during armed conflicts.

Timing Amplifies Urgency as Misinformation Surges

While the Oversight Board's decision relates to a post from last year, the issue has taken on new urgency during the latest Middle East conflict. Since US and Israeli strikes on Iran began earlier this month, there has been a sharp rise in viral AI-generated misinformation across social media platforms [1]. The board suggested that the industry as a whole needs a coherent approach to helping users identify deceptive content, and that social media platforms should address the abusive accounts and pages sharing it [1]. Meta has 60 days to formally respond to the recommendations [3], a timeline that will test whether the company can adapt quickly enough to address mounting concerns about misinformation during critical global events.
