3 Sources
[1]
The Oversight Board says Meta needs new rules for AI-generated content
The Oversight Board is once again urging Meta to overhaul its rules around AI-generated content. This time, the board says Meta should create a separate rule for AI content that's independent of its misinformation policy, invest in more reliable detection tools, and make better use of digital watermarks, among other changes. The group's recommendations stem from an AI-generated video shared last year that claimed to show damaged buildings in the Israeli city of Haifa during the Israel-Iran conflict in 2025. The clip, which racked up more than 700,000 views, was posted by an account that claimed to be a news outlet but was actually run by someone in the Philippines. After the video was reported to Meta, the company declined to remove it or add a "high risk" AI label that would have clearly indicated the content had been created or manipulated with AI. The board overturned Meta's decision not to add the "high risk" label and says the case shines a light on several areas where the company's current AI rules are falling short. "Meta must do more to address the proliferation of deceptive AI-generated content on its platforms, including by inauthentic or abusive networks of accounts and pages, particularly on matters of public interest, so that users can distinguish between what is real and fake," the board wrote in its decision. Meta eventually disabled three accounts linked to the page after the board flagged "obvious signals of deception." One of the board's top recommendations is that Meta create a dedicated rule for AI-generated content that's separate from its misinformation policy. The rule, according to the board, should include specifics about how and when users are required to label AI content as well as information about how Meta penalizes those who break the rule.
The board was also highly critical of how Meta uses its current "AI Info" labels, noting that the way they are applied is "neither robust nor comprehensive enough to contend with the scale and velocity of AI-generated content," especially in times of conflict or crisis. "A system overly dependent on self-disclosure of AI usage and escalated review (which occurs infrequently) to properly label this output cannot meet the challenges posed in the current environment." Meta, the board said, also needs to invest in more sophisticated detection technology that can reliably label AI media, including audio and video. The group added that it was "concerned" about reports that the company is "inconsistently implementing" digital watermarks on AI content created by its own AI tools. Meta didn't immediately respond to a request for comment on the Oversight Board's decision. The company has 60 days to formally respond to its recommendations. The decision isn't the first time the board has been critical of Meta's handling of AI content. The group has described the company's manipulated media rules as "incoherent" on two other occasions, and has criticized it for relying on third parties, including fact-checking organizations, to flag problematic content. Meta's reliance on fact-checkers and other "trusted partners" was again raised in this case, with the board saying that it had heard from these groups that Meta "is less responsive to outreach and concerns, in part due to a significant reduction in capacities for Meta's internal teams." Meta, the board writes, "should be capable of conducting such assessments of harm itself, rather than rely solely on partners reaching out to them during an armed conflict." While the Oversight Board's decision relates to a post from last year, the issue of AI-generated content during armed conflicts has taken on a new urgency during the latest conflict in the Middle East.
Since the start of the US and Israel's strikes on Iran earlier this month, there has been a sharp rise in viral AI-generated misinformation across social media. The board, which has previously hinted that it would like to work with generative AI companies, included a suggestion that would seem to apply not just to Meta. "The industry needs coherence in helping users distinguish deceptive AI-generated content and platforms should address abusive accounts and pages sharing such output," it wrote.
[2]
Oversight Board urges Meta to toughen rules on AI-generated content and deepfakes
In a nutshell: As deepfakes come under the spotlight following the increased abilities and availability of AI tools, Meta's Oversight Board says the company should make changes when it comes to regulating this type of content. The board says Meta's methods for identifying AI-generated content are lacking when it is created at scale, especially during times of conflict or crisis. The latest criticism centers on a fake AI-generated video shared on Meta platforms last year that purported to show damage to buildings in the Israeli city of Haifa. It was posted by a user in the Philippines posing as a news source. According to Meta's Oversight Board, several users reported the post, but it slipped through the cracks. Meta didn't review it, third-party fact-checkers didn't assess it, and the video remained online without a high-risk AI label until the board stepped in. The board, whose mission is to improve how Meta treats people and communities around the world, argues that the incident exposes a larger problem with the company's current system across Facebook, Instagram, and Threads. Right now, labeling often depends on users disclosing that AI was used or on content being escalated for special review. The Oversight Board says that approach simply isn't robust enough for the speed and scale at which AI-generated content now spreads, particularly during wars, disasters, elections, and other high-stakes events. Among its recommendations, the board wants Meta to create a dedicated Community Standard for AI-generated content instead of relying on a patchwork of misinformation rules. It also says Meta should apply high-risk AI labels more often, improve its automated detection systems for images, video, and audio, and clearly explain the penalties for people who fail to disclose digitally altered content.
The Oversight Board also says Meta should do more with Content Credentials, the industry framework designed to attach metadata showing where a piece of content came from and whether AI tools were involved in its creation. The group raised concerns that Meta has been inconsistent in applying those standards, including on content produced by its own AI tools. The board previously blasted Meta's manipulated media rules as confusing and too narrowly focused on whether AI was used, rather than whether content is deceptive. Meta responded to that earlier pressure by overhauling its labeling system and replacing the "Made with AI" tag with the broader "AI info" label, though that change also drew complaints that the notices were too vague or easy to miss. On the eve of Donald Trump's inauguration in January, Meta CEO Mark Zuckerberg announced that the company's third-party fact checkers had become too politically biased and destroyed more trust than they created. As such, they were being replaced by Community Notes.
[3]
Meta under fire as fake AI war video gains over 700K views
The board asked Meta to create clearer rules and better tools for AI content. Meta is under fresh scrutiny over how it handles AI-generated content. The company's independent Oversight Board has recently schooled the social media giant, stating that it should develop a dedicated policy for AI-related content. The recommendation came after a fake video made with AI went viral online, wrongly showing damaged buildings in Haifa during the 2025 Israel-Iran conflict. The clip gained more than 700,000 views before the board intervened. According to the decision, Meta failed to add a clear warning label or take stronger action even after the content was flagged. The board says the case shows gaps in Meta's current policies and highlights the growing challenge of misleading AI content spreading quickly on social media platforms worldwide. The AI-generated video was posted on Meta by an account that presented itself as a news outlet. However, investigation found that the page was actually run by an individual in the Philippines. Meta decided not to remove the video and declined to apply its 'high risk' AI label even after it was reported to the company. The 'high risk' label is meant to warn users when content has been created or altered using artificial intelligence. Meta's Oversight Board later overturned that decision and said the company should have clearly labelled the video. It also pointed to 'obvious signals of deception' linked to the account. After the board raised these concerns, Meta disabled three accounts connected to the page. In its latest recommendations, the Oversight Board urged Meta to create a separate rule specifically for AI-generated content instead of treating it under the broader misinformation policy.
According to the board, a dedicated rule should clearly explain when users must disclose that content is AI-generated and what penalties they may face if they fail to do so. Meta's current labelling system, widely known as 'AI Info', has also been criticised by the Oversight Board. The independent panel said the system relies too heavily on users voluntarily disclosing when they use AI tools. Because such disclosures are rare, the board warned that the approach is not strong enough to deal with the fast spread of AI media, especially during conflicts or crises. Beyond that, the board has also urged Meta to invest more in tools that can automatically detect AI-generated images, audio, and video. It further raised concerns that digital watermarks for content created with Meta's own AI tools are not applied consistently. Meta is yet to respond to the ruling; the company has 60 days to respond to the board's recommendations.
Meta's Oversight Board is calling for a complete overhaul of how the company handles AI-generated content. The demand follows a fake AI video showing alleged damage in Haifa during the Israel-Iran conflict that garnered over 700,000 views. The board says Meta's current approach—relying on user self-disclosure and fact-checkers—cannot keep pace with the scale and speed of AI-generated misinformation, especially during crises.
Meta faces mounting pressure to fundamentally reshape its approach to AI-generated content after its Oversight Board issued a scathing assessment of the company's current policies. The independent board is urging Meta to establish a dedicated AI content rule separate from its existing misinformation policy, invest in more sophisticated AI detection tools, and implement consistent digital watermarking practices across its platforms [1]. The recommendations stem from a case involving a deceptive AI-generated video that exposed critical gaps in how Meta identifies and labels deepfakes during global conflicts.
Source: Engadget
The controversy centers on an AI-generated video posted last year that falsely depicted damaged buildings in the Israeli city of Haifa during the 2025 Israel-Iran conflict. The clip accumulated more than 700,000 views before the Oversight Board intervened [3]. The account sharing the content presented itself as a news outlet but was actually operated by someone in the Philippines [1]. Despite multiple user reports, Meta declined to remove the video or apply its "high risk" AI label that would have clearly indicated the content had been created or manipulated with AI. The Oversight Board ultimately overturned Meta's decision and flagged "obvious signals of deception" linked to the account, prompting the company to disable three connected accounts [2].
Source: TechSpot
The board delivered sharp criticism of Meta's existing "AI Info" label system, arguing it is "neither robust nor comprehensive enough to contend with the scale and velocity of AI-generated content," particularly during times of conflict or crisis [1]. The current approach relies heavily on user self-disclosure about AI usage and infrequent escalated reviews, a system the board says cannot meet the challenges posed by the current environment [2]. This reliance on voluntary disclosure becomes especially problematic as AI tools become more accessible and deepfakes spread at unprecedented speeds across Facebook, Instagram, and Threads.

Among the Oversight Board's top recommendations is the creation of a dedicated AI content rule independent of Meta's misinformation policy. This separate Community Standard should include specific requirements about how and when users must label AI-generated content, along with clear information about penalties for those who violate the rule [3]. The board emphasized that Meta must do more to address the proliferation of deceptive content shared by inauthentic or abusive networks of accounts and pages, especially on matters of public interest, so users can distinguish between what is real and fake [1].

The Oversight Board called on Meta to invest in more sophisticated AI detection tools capable of reliably identifying AI-generated images, audio, and video automatically [2]. The group also raised concerns about Meta's inconsistent implementation of Content Credentials, the industry framework designed to attach metadata showing where content originated and whether AI tools were involved in its creation [2]. Particularly troubling to the board were reports that digital watermarking is applied inconsistently even on content produced by Meta's own AI tools [1].
Source: Digit
This decision marks the third time the Oversight Board has criticized Meta's manipulated media rules, previously describing them as "incoherent" [1]. The board has repeatedly taken issue with Meta's reliance on third parties, including fact-checkers and trusted partners, to flag problematic content. In this case, the board noted hearing from these organizations that Meta "is less responsive to outreach and concerns, in part due to a significant reduction in capacities for Meta's internal teams" [1]. The board stated Meta should be capable of conducting harm assessments itself rather than relying solely on partners during armed conflicts.

While the Oversight Board's decision relates to a post from last year, the issue has taken on new urgency during the latest Middle East conflict. Since US and Israeli strikes on Iran began earlier this month, there has been a sharp rise in viral AI-generated misinformation across social media platforms [1]. The board suggested the entire industry needs coherence in helping users distinguish deceptive content and that social media platforms should address abusive accounts and pages sharing such output [1]. Meta has 60 days to formally respond to the recommendations [3], a timeline that will test whether the company can adapt quickly enough to address mounting concerns about misinformation during critical global events.

Summarized by Navi