8 Sources
[1]
Meta rolls out new AI content enforcement systems while reducing reliance on third-party vendors | TechCrunch
Meta on Thursday announced that it's starting to roll out more advanced AI systems to handle content enforcement as it plans to cut back on third-party vendors. Tasks related to content enforcement include catching and removing content about terrorism, child exploitation, drugs, fraud, and scams. The company says it will deploy these more advanced AI systems across its apps once they consistently outperform its current content enforcement methods. At the same time, it will reduce its reliance on third-party vendors for content enforcement.

"While we'll still have people who review content, these systems will be able to take on work that's better-suited to technology, like repetitive reviews of graphic content or areas where adversarial actors are constantly changing their tactics, such as with illicit drug sales or scams," Meta explained in a blog post.

Meta believes these AI systems can detect more violations with greater accuracy, better prevent scams, respond more quickly to real-world events, and reduce over-enforcement. The company says early tests of the AI systems have been promising, as they can detect twice as much violating adult sexual solicitation content as its review teams, while also reducing the error rate by more than 60%. It also says the systems can identify and prevent more impersonation accounts involving celebrities and other high-profile individuals, as well as help stop account takeovers by detecting signals such as logins from new locations, password changes, or edits made to a profile. Additionally, Meta says the systems can identify and mitigate around 5,000 scam attempts per day, in which scammers try to trick people into giving away their login details.

"Experts will design, train, oversee, and evaluate our AI systems, measuring performance and making the most complex, high‑impact decisions," Meta wrote in the blog post. "For example, people will continue to play a key role in how we make the highest risk and most critical decisions, such as appeals of account disablement or reports to law enforcement."

The move comes as Meta has been loosening its content moderation rules over the past year or so, as President Donald Trump took office for a second time. Last year, the company ended its third-party fact-checking program in favor of an X-like Community Notes model. It also lifted restrictions around "topics that are part of mainstream discourse" and said users would be encouraged to take a "personalized" approach to political content. It also comes as Meta, and other Big Tech companies, are currently facing several lawsuits looking to hold social media giants accountable for harming children and young users.

Meta also announced Thursday that it's launching a Meta AI support assistant that will give users access to 24/7 support. The assistant is rolling out globally to the Facebook and Instagram apps for iOS and Android, and within the Help Center on Facebook and Instagram on desktop.
[2]
Meta says its AI moderation systems will replace contractors over the next few years.
Content moderators, who've risked consequences like PTSD working for Big Tech companies, have started to organize for better treatment in the last several years. Now, Meta has announced a wide rollout of its AI support assistant for Facebook and Instagram, and that it will "reduce our reliance on third-party vendors" employing humans for content enforcement. "While we'll still have people who review content, these systems will be able to take on work that's better-suited to technology, like repetitive reviews of graphic content or areas where adversarial actors are constantly changing their tactics, such as with illicit drug sales or scams."
[3]
Meta to Reduce Role of Outside Content Moderators in Favor of AI
Meta will still use human reviewers for nuanced cases, and people will continue to play a key role in making the highest risk and most critical decisions, such as appeals of account disablement or reports to law enforcement.

Meta Platforms Inc. will soon cut back on its use of third-party vendors to help with content moderation, relying instead on advanced artificial intelligence systems to detect and remove posts that violate the company's terms of service. Meta, which owns Facebook and Instagram, has used AI for years to detect spam and abusive posts at scale on its networks, and has also paid human moderators from companies like Accenture Plc to manually review and remove inappropriate posts.

The social media giant recently started testing more advanced AI tools built on large language models to help sift through posts and enforce its content rules. Those more advanced solutions have improved its enforcement efforts, the company wrote in a blog post published Thursday. The AI is better at spotting scams, identifying celebrity impersonators and catching adult sexual solicitation, among other things, Meta said, adding that the new systems "consistently perform better than our current methods of content enforcement."

Now the company plans to deploy those tools more broadly across its various apps, and will cut back on outside moderators as a result. "As we do this, we'll reduce our reliance on third-party vendors for content enforcement and focus on strengthening our internal systems and workforce," it said in the blog post.

Meta will still use human reviewers for various nuanced cases and said that AI "doesn't replace human judgment," but it will instead rely more on in-house experts. "People will continue to play a key role in how we make the highest risk and most critical decisions, such as appeals of account disablement or reports to law enforcement," the company said. The transition will take a "few years," Meta added, though it did not name specific third-party vendors it plans to cut back on.

Meta has long relied on thousands of third-party contractors for its content moderation efforts, jobs that can expose human reviewers to some of the internet's darkest and most disturbing images and videos. But the company has also started to rely on AI for more and more tasks, including some engineering-related ones. Chief Financial Officer Susan Li said in January that the firm has seen a "30% increase in output per engineer" thanks to the adoption of AI agents that assist with coding.
[4]
Meta's latest AI improves its terrible content moderation
Enterprise tools have detected impossible logins for years; Zuck's human mods couldn't join the dots.

Meta has revealed it's tested using AI for content moderation chores and found it does better than humans. The social networking giant on Thursday announced it has started a global rollout for its Meta AI support, a tool that handles tasks like password resets, reporting dodgy content, explaining content takedowns and allowing appeals, or managing privacy settings. The company also said "Over the next few years, we will deploy more advanced AI systems across our apps to transform our approach to content enforcement, more accurately finding and removing severe content violations like scams and illegal content, so people see less of them."

Early experiments have delivered promising results: one AI tool detected and mitigated 5,000 attempts at scamming users to reveal their passwords every day. Meta says its human teams could not detect those scams. Another AI helped to reduce the number of reports users lodged about fake celebrity profiles by over 80 percent. Other tests doubled detection of adult sexual solicitation content that violates Meta's rules.

Meta says its AI can also "Prevent an account takeover by noticing it was suddenly accessed from a new location, the password was changed, and edits were made to the profile." The company says those changes "look harmless to a person reviewing the account, but AI was able to recognize as a threat." That's an odd observation given that numerous enterprise security products can detect "impossible travel" such as a single user logging in from London and an hour later requesting a password reset from San Francisco, and flag it as a likely attack.

Meta also enthused that AI can "Detect a fake site spoofing a legitimate web address and pretending to be a popular sporting goods store by noticing the real logo being used with unusually low prices and a suspicious web address," because AI "drove down views of ads with scams and other serious violations by seven percent, offering promising results and better protections for users and brands." Again, that's a nice outcome, but also a little odd, as fake ads are a known problem and one to which Meta has often been indifferent. Your correspondent once spotted a suspicious ad for a brand that publishes lists of its legitimate URLs. The ad led to a spoof site, so I made a report to Meta, which replied that the fake site was not in violation of its policies. The company ignored my reply that pointed out the URL was not on the brand's list of official sites.

Meta's march toward AI came in the same week as it walked away from the metaverse, its vision for immersive online communities. On Wednesday, the company announced the shutdown of Horizon Worlds, its metaverse platform. That decision meant owners of Meta's Quest VR goggles would have a lot less content to consume. The company later walked that back and promised to continue offering some of its immersive environments but said it would not create new ones. Meta adopted its current name to reflect founder Mark Zuckerberg's belief that the metaverse was the next big thing, and spent over $80 billion on its ambitions over five years, a period when we can now see its content moderation for Facebook and Instagram was going badly, and children were often harmed by the company's products. The company now plans to develop something it calls "superintelligence," and is spending tens of billions more to make that happen. ®
[5]
Meta will move away from human content moderators in favor of more AI
A little more than a year after ditching third-party fact checkers and rolling back much of its proactive content moderation, the company says it will further "transform" its approach by drastically reducing the number of human moderators in favor of AI-based systems. The company says the change will happen "over the next few years," and will allow it to catch more issues faster than its current approach.

Meta didn't say how much of its contract workforce might be cut as it makes this transition. The company employs thousands of contractors around the world to review content flagged by its AI systems and user reports, among other tasks. The company said that as it shifts its approach, humans will "play a key role" in "critical decisions" and aid in training and other tasks. "Experts will design, train, oversee, and evaluate our AI systems, measuring performance and making the most complex, high‑impact decisions," Meta said in an update. "For example, people will continue to play a key role in how we make the highest risk and most critical decisions, such as appeals of account disablement or reports to law enforcement."

The company has been testing LLM-based systems for content moderation for a while and says that early tests have had "promising" results. Another advantage is that its AI can handle languages used by "98% of people online," compared with the 80 languages currently supported by its moderation capabilities.

While Meta says its underlying rules aren't changing, the new approach could dramatically change users' perception of how Meta enforces its policies. The company already relies heavily on AI for certain rules, and many users believe that these systems make too many mistakes and make it difficult for their appeals to reach a set of human eyes. On the other hand, Meta, which stands to save a lot of money if it significantly downsizes its contract workforce, says its new systems make "fewer over-enforcement mistakes" and catch more of the most "severe" violations.

In the nearer term, Meta is introducing an AI-powered "support assistant" that will help users with certain types of account issues. The chatbot, which is rolling out now in the Facebook and Instagram apps, will be able to help users report content and manage appeals, reset passwords and manage other account settings. It will also be able to help people who get locked out of their accounts, "starting with select cases in the US and Canada."
[6]
Meta to cut back on third-party vendors in favor of AI for content enforcement
Meta is beginning a yearslong rollout of more advanced artificial intelligence systems that will handle content enforcement-related tasks like catching scams and removing illegal media, as the company reduces its use of third-party vendors and contractors in favor of AI. In a blog post Thursday, Meta said that the process could take a few years, and that the company won't completely rely on AI for monitoring content.

"While we'll still have people who review content, these systems will be able to take on work that's better-suited to technology, like repetitive reviews of graphic content or areas where adversarial actors are constantly changing their tactics, such as with illicit drug sales or scams," Meta said in the post.

Meta didn't name any of its current vendors, but the company has previously relied on contractors from firms like Accenture, Concentrix and Teleperformance. The announcement represents Meta's latest effort to use its hefty investments in AI to streamline its business and operations while it struggles to find revenue-generating applications that compete with offerings from OpenAI, Anthropic and Google. Meta said AI will help more accurately flag violations "while also stopping more scams and responding faster to real-world events with fewer overenforcement mistakes."

Meanwhile, Meta is also defending itself in several high-profile trials involving the safety of children on its platform, an issue directly tied to its existing challenges with content moderation. The company said it will still rely on experts to design, train and oversee its AI content enforcement systems, and humans will remain involved with the "most complex, high‑impact decisions" that involve law enforcement and appeals related to account disablement.

The company also said Thursday that it has debuted a new Meta AI digital support assistant that people on Facebook and Instagram can use to address various account-related issues. According to a Reuters report last week, Meta has been considering whether to lay off over 20% of its workforce to help balance its big AI spending. Meta responded that it was "a speculative report about theoretical approaches."
[7]
Meta Phases Out Human Moderators as AI Detection Outpaces Teams | PYMNTS.com
"While we'll still have people who review content, these systems will be able to take on work that's better suited to technology, like repetitive reviews of graphic content or areas where adversarial actors are constantly changing their tactics, such as will illicit drugs sales or scams," the owner of Facebook and Instagram said in a Thursday blog post. Meta's content enforcement efforts encompass content having to do with issues such as terrorism, child exploitation, drugs, fraud and scams, according to the post. The company has experimented with AI systems for these efforts over the past year. It found that the systems found 5,000 scam attempts per day that had not been caught by human teams, identified more accounts that attempted to impersonate celebrities and other high-profile people, caught twice as much adult sexual solicitation content, prevented an account takeover by spotting clues that may have been missed by a human, and reduced views of ads with scams and other violations by 7%, the post said. People will still play a key role in appeals of account disablement, reports to law enforcement and other critical decisions, per the post. "Over the next few years, we'll be deploying these more advanced AI systems across our apps once we've seen them consistently perform better than our current methods of content enforcement, transforming our approach," Meta said in the post. In another, separate effort, Meta said March 11 that it launched AI-powered anti-scam tools for its platforms Facebook, WhatsApp and Messenger. In February, the company announced that it is suing advertisers who have allegedly impersonated celebrities to defraud consumers. Meta also announced in its Thursday blog post that it is rolling out a Meta AI support assistant that it previewed in December. The support assistant will be added to the Facebook and Instagram apps for iOS and Android and within the Help Center on Facebook and Instagram on desktop. The support assistant can answer questions about account problems and, with the user's permission, act on requests such as reporting problematic content, managing privacy settings and resetting passwords. "It can respond to requests typically in under five seconds, dramatically reducing wait times compared to traditional help center searches or seeking answers on external websites," Meta said in the post.
[8]
Meta shifts content moderation to AI from third-party vendors By Investing.com
Investing.com - Meta Platforms Inc. (NASDAQ:META) is reducing its use of third-party content moderation vendors in favor of advanced artificial intelligence systems to detect and remove posts that violate the company's terms of service. The company announced Thursday it is rolling out a Meta AI support assistant globally on Facebook and Instagram to provide 24/7 help for account issues. The assistant can respond to requests in under five seconds and handles tasks including reporting scams, resetting passwords, managing privacy settings, and updating profile settings.

Meta said early tests of its advanced AI systems have shown promising results. The technology identifies and prevents 5,000 scam attempts per day that existing review teams had not caught, and reduced user reports of the most impersonated celebrities by over 80%. The AI also catches twice as much violating adult sexual solicitation content compared to review teams while decreasing the error rate by more than 60%.

The new AI systems operate in languages spoken by 98% of people online, expanding from the company's previous coverage of around 80 languages. Meta said the technology can adapt to understand cultural nuance, rapidly changing code words, emoji meanings, and slang.

Over the next few years, Meta will deploy these AI systems across its apps once they consistently perform better than current content enforcement methods. The company will reduce reliance on third-party vendors for content enforcement while maintaining human oversight for complex decisions, including appeals of account disablement and reports to law enforcement.
Meta announced a major shift in content moderation strategy, deploying advanced AI systems to handle enforcement tasks while reducing its reliance on third-party vendors employing human content moderators. Early tests show the AI can detect twice as much violating content while reducing errors by over 60%, though experts will still oversee critical decisions.
Meta announced Thursday it will deploy advanced AI moderation systems across Facebook and Instagram over the next few years, marking a significant shift away from human content moderators employed by third-party vendors [1]. The move comes as the company seeks to improve its ability to detect and remove violating content including terrorism, child exploitation, illicit drug sales, fraud, and scams [1].
The company explained that while human reviewers will still play a role, AI systems will handle work better suited to technology, such as repetitive reviews of graphic content and areas where adversarial actors constantly change tactics [1]. Meta has employed thousands of contractors from companies like Accenture to manually review and remove policy-violating posts, jobs that can expose workers to disturbing content and risks like PTSD [2].

Meta's new AI content enforcement systems, built on large language models, have demonstrated promising results in early testing. The systems can detect twice as much violating adult sexual solicitation content as review teams while reducing the error rate by more than 60% [1].
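Meta hasn't published how these LLM-based systems work, but the general pattern of prompt-based policy classification is well known. The sketch below is a minimal, hypothetical illustration of that pattern only; the policy labels, prompt wording, and the `complete` callable (a stand-in for any model client) are all invented for this example, not Meta's pipeline.

```python
# Hypothetical sketch of LLM-based policy classification; not Meta's system.
# Framing moderation as closed-set classification keeps model output parseable.

POLICY_LABELS = ["scam", "adult_sexual_solicitation", "impersonation", "none"]

def build_prompt(post_text: str) -> str:
    """Ask for exactly one label from a fixed set so the reply is machine-checkable."""
    return (
        "Classify the post below into exactly one of these policy labels: "
        + ", ".join(POLICY_LABELS)
        + ".\nReply with the label only.\n\nPost:\n"
        + post_text
    )

def classify(post_text: str, complete) -> str:
    """`complete` is any prompt-in, text-out callable (a real model client in practice)."""
    reply = complete(build_prompt(post_text)).strip().lower()
    # Anything outside the label set is treated as no violation detected.
    return reply if reply in POLICY_LABELS else "none"

# Usage with a stub standing in for a model:
print(classify("Send $50 in gift cards to claim your prize!", lambda p: "scam"))
```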
The AI tools also identify and mitigate around 5,000 scams per day in which attackers attempt to trick users into revealing login credentials, threats Meta says its human teams could not detect [4]. Additionally, the systems reduced reports about fake celebrity impersonation accounts by over 80% and drove down views of ads containing scams and serious violations by 7% [4].
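Source [4] gives a concrete example of the ad-scam side: a fake site spoofing a sporting goods store's web address while reusing its logo. Meta hasn't described its detector, but a common baseline for catching lookalike addresses is an edit-distance check against known brand domains; the sketch below is a minimal version under that assumption, with an illustrative brand list and threshold.

```python
# Hypothetical lookalike-domain check (not Meta's detector): flag domains
# within a small edit distance of a known brand domain, e.g. "nlke.com".

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

KNOWN_BRANDS = {"nike.com", "adidas.com"}  # illustrative list only

def looks_spoofed(domain: str, max_dist: int = 2) -> bool:
    """Near-miss of a brand domain, but not the brand itself."""
    return any(0 < edit_distance(domain, brand) <= max_dist
               for brand in KNOWN_BRANDS)

print(looks_spoofed("nlke.com"))   # True: one substitution away from nike.com
print(looks_spoofed("nike.com"))   # False: exact match is the real brand
```

A real system would combine a signal like this with the page-content cues the source describes, such as a genuine logo paired with implausibly low prices.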
The AI can also prevent account takeover attempts by detecting signals such as logins from new locations, password changes, or profile edits that might appear harmless individually but collectively indicate a threat [1].
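None of these takeover signals is decisive on its own, which makes detection a scoring problem over weak signals, including the "impossible travel" velocity check that source [4] notes is standard in enterprise security tools. Below is a minimal, hypothetical sketch of that pattern; the weights, threshold, and speed cutoff are invented for illustration and are not Meta's actual model.

```python
import math

# Hypothetical weak-signal risk scoring for account takeover.
# Weights and threshold are illustrative; no single signal crosses the line.
SIGNAL_WEIGHTS = {
    "new_location_login": 0.3,
    "password_changed": 0.3,
    "profile_edited": 0.2,
    "impossible_travel": 0.6,
}
THRESHOLD = 0.7

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in kilometers."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi, dlmb = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * 6371 * math.asin(math.sqrt(a))

def impossible_travel(prev_login, cur_login, max_kmh=900):
    """Flag login pairs whose implied speed exceeds a commercial flight."""
    (lat1, lon1, t1), (lat2, lon2, t2) = prev_login, cur_login
    hours = max((t2 - t1) / 3600, 1e-6)
    return haversine_km(lat1, lon1, lat2, lon2) / hours > max_kmh

def takeover_risk(signals):
    """Sum the weights of fired signals; combinations cross the threshold."""
    return sum(SIGNAL_WEIGHTS[s] for s in signals)

# London login, then a San Francisco password reset one hour later:
london, san_francisco = (51.5, -0.1, 0), (37.8, -122.4, 3600)
signals = ["password_changed"]
if impossible_travel(london, san_francisco):
    signals += ["impossible_travel", "new_location_login"]
print(takeover_risk(signals) >= THRESHOLD)  # True: 0.3 + 0.6 + 0.3 = 1.2
```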
Another advantage is language coverage: Meta's AI can handle languages used by 98% of people online, compared with the 80 languages its moderation currently supports [5].
Despite the shift toward automation, Meta emphasized that experts will design, train, oversee, and evaluate the AI systems. People will continue making the highest-risk and most critical decisions, including appeals of account disablement and reports to law enforcement [1]. The company will reduce reliance on human workers from third-party vendors while focusing on strengthening its internal systems and workforce.

Meta also launched an AI-powered support assistant rolling out globally to Facebook and Instagram on iOS, Android, and desktop [1]. The chatbot provides 24/7 support for tasks like password resets, reporting content, managing appeals, and handling privacy settings, with account recovery assistance starting in select cases in the US and Canada [5].
The transition raises questions about user experience, as many users already believe Meta's automated systems make too many mistakes and make it difficult for appeals to reach human reviewers [5]. While Meta claims the new systems make fewer over-enforcement mistakes, the company has faced criticism for indifference to fake ads and other content violations [4].
This announcement follows Meta's loosening of content moderation rules as President Donald Trump took office for a second time, including ending its third-party fact-checking program in favor of an X-like Community Notes model [1]. The timing is notable as Meta and other Big Tech companies face multiple lawsuits seeking to hold social media platforms accountable for harming children and young users [1]. The company is now investing tens of billions of dollars toward developing what it calls "superintelligence," even as it recently walked back its metaverse ambitions after spending over $80 billion over five years [4].
Summarized by Navi