3 Sources
[1]
Meta has introduced revised guardrails for its AI chatbots to prevent inappropriate conversations with children
Business Insider has obtained the guidelines that Meta contractors are reportedly now using to train its AI chatbots, showing how the company is attempting to more effectively address potential child sexual exploitation and prevent kids from engaging in age-inappropriate conversations. Meta said in August that it was updating the guardrails for its AIs after Reuters reported that its policies allowed the chatbots to "engage a child in conversations that are romantic or sensual," language Meta said at the time was "erroneous and inconsistent" with its policies and has since removed.

The document, an excerpt of which Business Insider has shared, outlines what kinds of content are "acceptable" and "unacceptable" for its AI chatbots. It explicitly bars content that "enables, encourages, or endorses" child sexual abuse; romantic roleplay if the user is a minor or if the AI is asked to roleplay as a minor; advice about potentially romantic or intimate physical contact if the user is a minor; and more. The chatbots can discuss topics such as abuse, but cannot engage in conversations that could enable or encourage it.
[2]
Meta rewrites chatbot rules after bots caught in 'romantic' chats...
Meta has reportedly updated rules for its AI-powered chatbots on child sexual exploitation and other high-risk content after lawmakers and advocacy groups expressed outrage over revelations that the bots were allowed to participate in "romantic or sensual" conversations with minors. Contractors have been instructed to adhere to newly created guidelines that are designed to train the bots to respond appropriately to "egregiously unacceptable" prompts involving child sexual exploitation, violent crimes and other sensitive topics, according to Business Insider.

Meta and other tech giants, including OpenAI, Google and CharacterAI, came under scrutiny from the Federal Trade Commission earlier this year in the wake of a Reuters report detailing how Meta's bot was permitted to "engage a child in conversations that are romantic or sensual."

According to the new guidelines, Meta's AI systems are strictly prohibited from generating material that depicts or facilitates the involvement of children in obscene media or sexual services. The bots are also banned from providing instructions or links for acquiring child sexual abuse material. Any sexualized description of a child under 13, including through roleplay, is also strictly forbidden.

The chatbots are permitted to engage in factual, educational or clinical discussions of sensitive issues, including the existence of relationships between children and adults, the reality of child sexual abuse or the involvement of children in obscene materials, but only when framed in an academic, preventative or awareness-building context, according to the leaked documents. They may also explain the solicitation or creation of sexual materials involving children as a matter of discussion, not as guidance. Additionally, content addressing child sexualization in general terms is acceptable. When roleplay is involved, chatbots may only describe themselves or characters as 18 or older, never as minors.
"This reflects what we have repeatedly said regarding AI chatbots: our policies prohibit content that sexualizes children and any sexualized or romantic role-play by minors," Meta's communications chief Andy Stone told Business Insider. "Our policies extend beyond what's outlined here with additional safety protections and guardrails designed with younger users in mind."

In August, Sen. Josh Hawley (R-Mo.) gave Meta CEO Mark Zuckerberg a Sept. 19 deadline to submit drafts of a more-than-200-page handbook outlining chatbot rules, including enforcement procedures, age-verification systems and risk assessments. Meta did not meet that deadline but told Business Insider this week that it has since provided an initial set of documents after fixing a technical problem. The company said more records will follow and that it remains engaged with Hawley's office.
[3]
Meta reportedly tightens AI chatbot protocols to block inappropriate conversations with kids
The update follows a Reuters report alleging Meta's policies left loopholes for risky child interactions. Meta is reportedly tightening safety protocols for its AI chatbots with new training guidelines. The social media giant aims to reduce risks around child safety and inappropriate conversations, after facing criticism that its systems lacked sufficient guardrails to prevent minors from being exposed to harmful interactions.

According to documents accessed by Business Insider, contractors responsible for training Meta's AI have been given clearer directions on what the chatbots can and cannot say. The new guidelines emphasise a zero-tolerance stance toward content that may facilitate child exploitation or blur boundaries in conversations with underage users. The rules specifically prohibit any scenario that encourages, normalises, or facilitates child sexual abuse. They also prohibit romantic roleplay if a user identifies as a minor or if the AI is instructed to act as one. Similarly, the bots are barred from offering advice on physical intimacy when the user is a child.

However, the training material does allow AI systems to discuss sensitive topics such as abuse in an educational or awareness-building context, as long as the conversation does not veer into endorsement or roleplay. "The policy permits AI to engage in sensitive discussions about child exploitation, but only in an educational context. Acceptable responses include explaining grooming behaviors in general terms, discussing child sexual abuse in academic settings, or offering non-sexual advice to minors about social situations," the report stated.

This comes after a Reuters investigation earlier this year reported that Meta's policies left room for AI chatbots to engage in romantic or sensual discussions with children, an allegation Meta dismissed at the time. In August, the company promised revised safeguards, which appear to be taking shape with the new contractor guidelines.
Meta has implemented stricter guidelines for its AI chatbots to prevent inappropriate conversations with children. The move comes in response to criticism and aims to enhance child safety measures on the platform.
Meta, the parent company of Facebook, has recently implemented significant changes to its AI chatbot guidelines in response to growing concerns about child safety on its platforms [1][2]. This move comes after a Reuters report earlier this year revealed potential loopholes in Meta's policies that could allow AI chatbots to engage in inappropriate conversations with minors [3].
The updated guidelines, obtained by Business Insider, outline strict prohibitions for Meta's AI chatbots [1]. These include content that enables, encourages, or endorses child sexual abuse; romantic roleplay if the user is a minor or if the AI is asked to roleplay as a minor; and advice on romantic or intimate physical contact for underage users. The guidelines also explicitly forbid generating material that depicts or facilitates the involvement of children in obscene media or sexual services [2].
While the new rules are stringent, they do allow for certain discussions within appropriate contexts. The AI chatbots can engage in factual, educational, or clinical discussions of sensitive issues, provided the conversation is framed in an academic, preventative, or awareness-building context [2].
The changes come amid increased scrutiny from regulators and lawmakers. Senator Josh Hawley (R-Mo.) had previously given Meta CEO Mark Zuckerberg a deadline to submit drafts of a comprehensive handbook outlining chatbot rules [2]. While Meta missed the initial deadline, the company has since provided an initial set of documents and promised more to follow.

Meta's communications chief, Andy Stone, emphasized that the company's policies prohibit content that sexualizes children and any sexualized or romantic roleplay by minors. He added that those policies extend beyond what's outlined in the leaked documents, with additional safety protections designed with younger users in mind [2].
Meta's policy updates reflect a broader trend in the AI industry towards enhancing safety measures, particularly for vulnerable users like children. The Federal Trade Commission has also scrutinized other tech giants, including OpenAI, Google, and CharacterAI, highlighting the industry-wide importance of addressing these concerns [2].