6 Sources
[1]
The Oversight Board says Meta needs new rules for AI-generated content
The Oversight Board is once again urging Meta to overhaul its rules around AI-generated content. This time, the board says Meta should create a separate rule for AI content that's independent of its misinformation policy, invest in more reliable detection tools and make better use of digital watermarks, among other changes.

The group's recommendations stem from an AI-generated video shared last year that claimed to show damaged buildings in the Israeli city of Haifa during the Israel-Iran conflict in 2025. The clip, which racked up more than 700,000 views, was posted by an account that claimed to be a news outlet but was actually run by someone in the Philippines. After the video was reported to Meta, the company declined to remove it or add a "high risk" AI label that would have clearly indicated the content had been created or manipulated with AI.

The board overturned Meta's decision not to add the "high risk" label and says the case shines a light on several areas where the company's current AI rules are falling short. "Meta must do more to address the proliferation of deceptive AI-generated content on its platforms, including by inauthentic or abusive networks of accounts and pages, particularly on matters of public interest, so that users can distinguish between what is real and fake," the board wrote in its decision. Meta eventually disabled three accounts linked to the page after the board flagged "obvious signals of deception."

One of the board's top recommendations is that Meta create a dedicated rule for AI-generated content that's separate from its misinformation policy. The rule, according to the board, should include specifics about how and when users are required to label AI content, as well as information about how Meta penalizes those who break the rule.
The board was also highly critical of how Meta uses its current "AI Info" labels, noting that the way they are applied is "neither robust nor comprehensive enough to contend with the scale and velocity of AI-generated content," especially in times of conflict or crisis. "A system overly dependent on self-disclosure of AI usage and escalated review (which occurs infrequently) to properly label this output cannot meet the challenges posed in the current environment."

Meta, the board said, also needs to invest in more sophisticated detection technology that can reliably label AI media, including audio and video. The group added that it was "concerned" about reports that the company is "inconsistently implementing" digital watermarks on AI content created by its own AI tools. Meta didn't immediately respond to a request for comment on the Oversight Board's decision. The company has 60 days to formally respond to its recommendations.

The decision isn't the first time the board has been critical of Meta's handling of AI content. The group has described the company's manipulated media rules as "incoherent" on two other occasions, and has criticized it for relying on third parties, including fact-checking organizations, to flag problematic content. Meta's reliance on fact checkers and other "trusted partners" was again raised in this case, with the board saying that it had heard from these groups that Meta "is less responsive to outreach and concerns, in part due to a significant reduction in capacities for Meta's internal teams." Meta, the board writes, "should be capable of conducting such assessments of harm itself, rather than rely solely on partners reaching out to them during an armed conflict."

While the Oversight Board's decision relates to a post from last year, the issue of AI-generated content during armed conflicts has taken on a new urgency during the latest conflict in the Middle East.
Since the start of the US and Israel's strikes on Iran earlier this month, there has been a sharp rise in viral AI-generated misinformation across social media. The board, which has previously hinted that it would like to work with generative AI companies, included a suggestion that would seem to apply not just to Meta. "The industry needs coherence in helping users distinguish deceptive AI-generated content and platforms should address abusive accounts and pages sharing such output," it wrote.
[2]
Meta urged to boost oversight of fake AI videos
Meta should do more to address the "proliferation" of fake content made with artificial intelligence (AI) tools on its platforms, the social media giant's own advisors have said. The 21-person Oversight Board raised the concerns as it rebuked the company for leaving up, without a label, an AI-generated video that claimed to show extensive damage in Haifa, Israel, caused by Iranian forces. It called on the company to overhaul its AI rules, warning that an increase in fake AI videos related to global military conflicts had "challenged the public's ability to distinguish fabrication from fact ... risking a general distrust of all information." Meta said it would label the video at issue within seven days.

Meta launched the oversight board in 2020 as a semi-independent group providing supervision of content moderation decisions across its platforms, which include Facebook, Instagram and WhatsApp. It frequently disagrees with Meta's rulings, but the company has nevertheless continued to loosen its approach to policing content, raising questions about how much power the board actually wields.

The board said the firm's handling of the Haifa video raised issues that it had flagged before about "inefficiencies in Meta's current approach during armed conflicts". Currently Meta relies largely on users to "self-disclose" when content they post is produced by an AI tool. Otherwise it waits for someone to complain to its content moderation team, which could then decide to affix a label to something. The board said the firm should be proactively labelling fake AI content "much more frequently". It said the firm's current methods were "neither robust nor comprehensive enough to contend with the scale and velocity of AI-generated content, particularly during a crisis or conflict where there is heightened engagement on the platform".

The board's review of the issue was sparked by a video posted last June by a Facebook account based in the Philippines describing itself as a news source.
It was one of a string of fake AI videos posted to social media after the conflict began, with content that was either pro-Israel or pro-Iran, which quickly collected at least 100 million views, according to a BBC analysis at the time. Despite the Facebook video being AI-generated and showing content that was not real, and Meta receiving several user complaints about it, the company did not label the video as AI-generated or remove it. It wasn't until a Facebook user appealed directly to the Oversight Board, and the board took up the issue, that Meta even responded to concerns, according to the board. The company then claimed the video, which garnered almost 1 million views, did not require any kind of label and did not need to be taken down because it did not "directly contribute to the risk of imminent physical harm."

That is too high a bar for labeling AI-generated content, particularly when the subject is armed conflict, the board said Tuesday, ruling that the video should have received a "high risk AI label." "Meta must do more to address the proliferation of deceptive AI-generated content on its platforms... so that users can distinguish between what is real and fake", it said. In its statement, Meta said that it would abide by the board's suggestions the next time it encounters "identical" content that is also "in the same context" as the video the board reviewed.
[3]
Oversight Board urges Meta to toughen rules on AI-generated content and deepfakes
In a nutshell: As deepfakes come under the spotlight following the increased abilities and availability of AI tools, Meta's Oversight Board says the company should make changes when it comes to regulating this type of content. The board says Meta's methods for identifying AI-generated content are lacking when it is created at scale, especially during times of conflict or crisis.

The latest criticism centers on a fake AI-generated video shared on Meta platforms last year that purported to show damage to buildings in the Israeli city of Haifa. It was posted by a user in the Philippines posing as a news source. According to Meta's Oversight Board, several users reported the post, but it slipped through the cracks. Meta didn't review it, third-party fact-checkers didn't assess it, and the video remained online without a high-risk AI label until the board stepped in.

The board, whose mission is to improve how Meta treats people and communities around the world, argues that the incident exposes a larger problem with the company's current system across Facebook, Instagram, and Threads. Right now, labeling often depends on users disclosing that AI was used or on content being escalated for special review. The Oversight Board says that approach simply isn't robust enough for the speed and scale at which AI-generated content now spreads, particularly during wars, disasters, elections, and other high-stakes events.

Among its recommendations, the board wants Meta to create a dedicated Community Standard for AI-generated content instead of relying on a patchwork of misinformation rules. It also says Meta should apply high-risk AI labels more often, improve its automated detection systems for images, video, and audio, and clearly explain the penalties for people who fail to disclose digitally altered content.
The Oversight Board also says Meta should do more with Content Credentials, the industry framework designed to attach metadata showing where a piece of content came from and whether AI tools were involved in its creation. The group raised concerns that Meta has been inconsistent in applying those standards, including on content produced by its own AI tools. The board previously blasted Meta's manipulated media rules as confusing and too narrowly focused on whether AI was used, rather than whether content is deceptive. Meta responded to that earlier pressure by overhauling its labeling system and replacing the "Made with AI" tag with the broader "AI info" label, though that change also drew complaints that the notices were too vague or easy to miss. On the eve of Donald Trump's inauguration in January, Meta CEO Mark Zuckerberg announced that the company's third-party fact checkers had become too politically biased and destroyed more trust than they created. As such, they were being replaced by Community Notes.
[4]
Oversight Board slams Meta's 'inadequate' deepfake rules -- calls for a total AI overhaul
A powerful oversight group is warning that AI-generated videos are spreading too easily on Meta's platforms -- and they're becoming increasingly difficult for users to recognize. While the more fun and innocent AI-generated cat videos flooding your feed are easy to spot, others aren't as easy to detect. In a new decision, the Oversight Board urged the company to strengthen its policies and detection tools for AI-generated content, particularly realistic deepfake videos that can spread misinformation during major global events.

The Oversight Board is affiliated with Meta but was set up as a "body of experts from around the world that exercises independent judgment and makes binding decisions on what content should be allowed on Facebook and Instagram." The recommendation follows a review of a fake AI-generated video depicting destruction in Israel, which circulated online and highlighted gaps in Meta's current moderation systems. According to the board, Meta's existing approach relies too heavily on users labeling AI-generated content themselves -- a system that can easily fail when misleading videos go viral before moderators catch them.

Why deepfakes are getting harder to detect

Advances in generative AI have made it dramatically easier to create realistic video, audio and images that appear authentic at first glance. Modern AI video tools can now generate footage with convincing lighting, motion and voice synchronization. As these tools improve, the line between authentic and synthetic media is becoming increasingly blurred. That's part of what worries experts. When highly realistic AI videos spread during wars, disasters or elections, they can quickly shape public perception before fact-checkers have time to respond.

What the Oversight Board wants Meta to do

The board has warned that current policies were largely designed for traditional misinformation -- not the new wave of highly convincing generative AI media.
For that reason, it's recommending several changes designed to slow the spread of deceptive AI media:

* Create clearer rules specifically for AI-generated content
* Improve automated detection tools for deepfakes
* Label AI-generated media more clearly
* Adopt industry standards like Content Credentials, which can show whether media was created or modified using AI

Bottom line

Meta isn't the only platform grappling with the issue. Because AI video generation tools such as Gemini, Grok and Sora are available for free, anyone with a device can now generate realistic videos in minutes. For users, the challenge may only grow harder: the next viral video you see online could look real -- even if it isn't. If you're unsure, try asking a chatbot like Gemini, which can help detect AI generation, or dig deeper with Deep Research to trace the source of the video. Staying vigilant and doing your own due diligence is one of the best ways to avoid getting tricked by a deepfake.
[5]
Meta told by Oversight Board better moderation is needed for AI-generated deepfakes - SiliconANGLE
Meta Platforms Inc. has been warned by its Oversight Board that it needs to do a lot more about the "proliferation" of deepfake videos shared on its platforms made by artificial intelligence tools. The 21-person board told the company its current policies around misinformation aren't enough. It said Meta should invest in better detection tools that can easily flag deepfake content and introduce digital watermarks for content that has been created by machines.

"As the quantity and quality of AI-generated content increase, its impact on people and societies will be profound," the board wrote. "The risks are heightened when deepfake output designed to deceive, manipulate or increase engagement is shared during conflicts and crises, such as in Iran and Venezuela in 2026, and spreads rapidly on different companies' platforms."

The board called the 2025 Iran-Israel War in June an "inflection point" where deceptive AI-generated content is concerned. An AI-generated video was shared during the conflict, which was watched around 700,000 times. The clip, showing damaged buildings in the Israeli city of Haifa, was a fake supposedly posted by a news outlet that turned out to be a group in the Philippines. The video was reported to Meta, but it was neither removed nor labeled as high-risk even though it was clearly AI-generated. The board overturned the decision, and the video was eventually correctly labeled, but the board warns that during times of crises, these processes are too slow.

Meta does have "AI Info" labels for such content, but the board believes the process is "neither robust nor comprehensive enough to contend with the scale and velocity of AI-generated content, particularly during a crisis or conflict where there is heightened engagement on the platform".
The company was also accused of inconsistently implementing watermarks on AI and told it needs more thorough detection tools. In its statement, Meta said it will follow the board's suggestions and will implement the changes the board seeks when "it is technically and operationally possible to do so." Today, Google LLC-owned YouTube was also talking about AI-generated deepfakes, with the company announcing it has just introduced a deepfake detection tool that can be used by public figures whose likeness has been used. Once flagged, YouTube will take the video down if the content isn't protected under free expression standards. Otherwise, it might only receive a label. The tool will first be rolled out to a pilot group of testers - politicians, government officials, and journalists - and in the future may become more widely available.
[6]
Meta under fire as fake AI war video gains over 700K views
The board asked Meta to create clearer rules and better tools for AI content. Meta is under fresh scrutiny over how it handles AI-generated content. The company's independent Oversight Board recently rebuked the social media giant, stating that it should develop a dedicated policy for AI-related content. The recommendation came after a fake video made with AI went viral online, wrongly showing damaged buildings in Haifa during the 2025 Israel-Iran conflict. The clip gained more than 700,000 views before the board intervened. According to the decision, Meta failed to add a clear warning label or take stronger action even after the content was flagged. The board says the case shows gaps in Meta's current policies and highlights the growing challenge of misleading AI content spreading quickly on social media platforms worldwide.

The AI-generated video was posted on Meta by an account that presented itself as a news outlet. However, investigations found that the page was actually run by an individual in the Philippines. Meta decided not to remove it and declined to apply its 'high risk' AI label even after the video was reported. The 'high risk' label is meant to warn users when content has been created or altered using artificial intelligence.

Meta's Oversight Board later overturned that decision and said the company should have clearly labelled the video. It also pointed to 'obvious signals of deception' linked to the account. After the board raised these concerns, Meta disabled three accounts connected to the page.

In its latest recommendations, the Oversight Board urged Meta to create a separate rule specifically for AI-generated content instead of treating it under the broader misinformation policy.
According to the board, a dedicated rule should clearly explain when users must disclose that content is AI-generated and what penalties they may face if they fail to do so.

Meta's current labelling system, widely known as 'AI Info', has also been criticised by the Oversight Board. The independent panel said that the system relies too heavily on users to voluntarily disclose when they use AI tools. And as such disclosures are rare, the board warned that the approach is not strong enough to deal with the fast spread of AI media, especially during conflicts or crises.

The board has also urged Meta to invest more in tools that can detect AI-generated images, audio, and video automatically. Furthermore, it raised concerns that digital watermarks for content created with Meta's own AI tools are not applied consistently either. Meta is yet to respond to the ruling by its Oversight Board. The company has 60 days to respond to the board's recommendations.
Meta's Oversight Board issued a sharp rebuke of the company's handling of AI-generated content, calling for a complete overhaul of its detection and labeling systems. The criticism follows a fake AI video depicting damage in Haifa during the Israel-Iran conflict that garnered over 700,000 views without proper labeling, exposing critical gaps in Meta's current approach to synthetic media moderation.
Meta faces mounting pressure to fundamentally reshape how it handles AI-generated content across its platforms after the Oversight Board delivered a scathing assessment of the company's current policies. The 21-person independent board issued recommendations calling for dedicated rules separate from existing misinformation policy, improved detection technology, and more consistent use of digital watermarking [1]. The decision stems from a case involving a fake AI video that claimed to show damaged buildings in Haifa during the Israel-Iran conflict in 2025, which accumulated more than 700,000 views before Meta took action [2].

Source: TechSpot
The video was posted by an account in the Philippines masquerading as a news outlet. Despite multiple user complaints, Meta declined to remove it or apply a "high risk" AI label that would have clearly indicated the content had been created or manipulated with artificial intelligence [1]. The board overturned Meta's decision, and Meta disabled three accounts linked to the page after the board identified "obvious signals of deception."

The Oversight Board's assessment reveals that Meta's current "AI Info" labels are "neither robust nor comprehensive enough to contend with the scale and velocity of AI-generated content," particularly during times of crisis or conflict where engagement spikes dramatically [3]. The board emphasized that a system overly dependent on self-disclosure of AI usage and escalated review cannot meet the challenges posed in the current environment [1].
Source: Tom's Guide
Meta's reliance on users to voluntarily label their own content and on fact-checking partners to flag problematic material has proven insufficient. The board noted concerns from these trusted partners that Meta "is less responsive to outreach and concerns, in part due to a significant reduction in capacities for Meta's internal teams" [1]. This approach means fake AI videos can spread rapidly before moderators catch them, particularly during major global events [4].

Among the board's top recommendations is the creation of a dedicated Community Standard for AI-generated content that operates independently from Meta's existing misinformation policy [3]. This new rule should include specific details about how and when users are required to label content, as well as clear information about penalties for those who break it [1].
Source: Engadget
The board also called for Meta to invest in more sophisticated, robust detection tools capable of reliably identifying AI media, including audio and video formats. Additionally, the board expressed concern about reports that Meta is "inconsistently implementing" digital watermarks on content created by its own AI tools [1]. The recommendations include better adoption of Content Credentials, an industry framework that attaches metadata showing whether AI tools were involved in content creation [3].
The board described the 2025 Iran-Israel conflict as an "inflection point" for deceptive AI content, warning that the proliferation of AI content during armed conflicts has "challenged the public's ability to distinguish fabrication from fact ... risking a general distrust of all information" [2]. Since the start of the US and Israel's strikes on Iran, there has been a sharp rise in viral AI-generated misinformation across social media platforms [1]. A BBC analysis at the time found that fake AI videos posted after the conflict began, with content that was either pro-Israel or pro-Iran, quickly collected at least 100 million views [2]. The board argued that Meta's current threshold for labeling AI-generated content is too high, particularly when the subject involves armed conflict. Meta had claimed the Haifa video did not require any label because it did not "directly contribute to the risk of imminent physical harm" [2].

Meta has 60 days to formally respond to the Oversight Board's recommendations [1]. In its initial statement, the company said it would label the video at issue within seven days and would abide by the board's suggestions when it encounters "identical" content in the same context [2]. Meta also indicated it will implement changes "when it is technically and operationally possible to do so" [5].

This isn't the first time the board has criticized Meta's handling of synthetic media. The group has previously described the company's manipulated media rules as "incoherent" on two occasions and criticized its reliance on third-party fact-checking organizations [1]. The board's suggestion that "the industry needs coherence in helping users distinguish deceptive AI-generated content" signals that these challenges extend beyond Meta to other platforms grappling with the rapid advancement of generative AI tools [1]. As deepfake videos become increasingly difficult to detect due to advances in AI technology that can now generate realistic lighting, motion, and voice synchronization, the need for proactive labeling of fake AI videos and stronger content moderation during crises becomes more urgent [4].