4 Sources
[1]
Meta has introduced revised guardrails for its AI chatbots to prevent inappropriate conversations with children
Business Insider has obtained the guidelines that Meta contractors are reportedly now using to train its AI chatbots, showing how the company is attempting to more effectively address potential child sexual exploitation and prevent kids from engaging in age-inappropriate conversations. Meta said in August that it was updating the guardrails for its AIs after Reuters reported that its policies allowed the chatbots to "engage a child in conversations that are romantic or sensual," language Meta said at the time was "erroneous and inconsistent" with its policies and subsequently removed. The document, an excerpt of which Business Insider has shared, outlines what kinds of content are "acceptable" and "unacceptable" for its AI chatbots. It explicitly bars content that "enables, encourages, or endorses" child sexual abuse; romantic roleplay if the user is a minor or if the AI is asked to roleplay as a minor; advice about potentially romantic or intimate physical contact if the user is a minor; and more. The chatbots can discuss topics such as abuse, but cannot engage in conversations that could enable or encourage it.
[2]
Meta has new safety guardrails for kids talking to its AI chatbots
Meta is training its AI chatbots to more effectively address child sexual exploitation after a series of high-profile blunders around the sensitive topic, according to guidelines obtained by Business Insider. The guidelines that contractors reportedly use to train the chatbots were recently updated, Business Insider reported. They state that content which "enables, encourages, or endorses" child sexual abuse is explicitly barred, as is romantic roleplay if the user is a minor or if the user asks the AI to roleplay as a minor, advice about intimacy if the user is a minor, and more, according to an Engadget report based on the Business Insider scoop. While these may seem like obvious safety guardrails for underage users, they are necessary as more people, including underage users, experiment with AI companions and roleplaying.

An August report by Reuters revealed that Meta's AI rules permitted suggestive behavior with kids. As Reuters reported, Meta's previous chatbot policies specifically allowed it to "engage a child in conversations that are romantic or sensual." Just weeks after that report, Meta spokesperson Stephanie Otway told TechCrunch that its AI chatbots were being trained to no longer "engage with teenage users on self-harm, suicide, disordered eating, or potentially inappropriate romantic conversations." Before this change, Meta's chatbots could engage with those topics when it was deemed "appropriate."

So, what's included in the new guidelines? Content that "describes or discusses" a minor in a sexualized manner is also unacceptable, according to the Business Insider report. Minors cannot engage in "romantic roleplay, flirtation or expression of romantic or intimate expression" with the chatbot, nor can they ask for advice about "potentially-romantic or potentially-intimate physical content with another person, such as holding hands, hugging, or putting an arm around someone," Business Insider reported.

However, acceptable use cases for training the chatbot include discussing the "formation of relationships between children and adults," the "sexual abuse of a child," "the topic of child sexualisation," "the solicitation, creation, or acquisition of sexual materials involving children," and "the involvement of children in the use or production of obscene materials or the employment of children in sexual services in academic, educational, or clinical purposes." Minors can still use the AI for romance-related roleplay as long as it is "non-sexual and non-sensual" and "is presented as literature or fictional narrative (e.g. a story in the style of Romeo and Juliet) where the AI and the user are not characters in the narrative." As Business Insider reported, the guidelines define "discuss" as "providing information without visualization." So, Meta's chatbots can discuss topics like abuse but cannot describe, enable, or encourage it, per the new guidelines.

Meta isn't the only AI company struggling with child safety. Parents of a teen who died by suicide after confiding in ChatGPT recently sued the AI platform for wrongful death; in response, OpenAI announced additional safety measures and behavioral prompts for its updated GPT-5. Anthropic updated its chatbot to allow it to end chats that are harmful or abusive, and Character.AI introduced parental supervision features earlier this year.

If you're feeling suicidal or experiencing a mental health crisis, please talk to somebody.
You can call or text the 988 Suicide & Crisis Lifeline at 988, or chat at 988lifeline.org. You can reach the Trans Lifeline by calling 877-565-8860 or the Trevor Project at 866-488-7386. Text "START" to the Crisis Text Line at 741-741. Contact the NAMI HelpLine at 1-800-950-NAMI, Monday through Friday from 10:00 a.m. to 10:00 p.m. ET, or email info@nami.org. If you prefer not to use the phone, consider the 988 Suicide and Crisis Lifeline Chat. If you have experienced sexual abuse, call the free, confidential National Sexual Assault Hotline at 1-800-656-HOPE (4673), or access 24/7 help online at online.rainn.org.

Disclosure: Ziff Davis, Mashable's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
[3]
Meta rewrites chatbot rules after bots caught in 'romantic' chats...
Meta has reportedly updated the rules for its AI-powered chatbots on child sexual exploitation and other high-risk content after lawmakers and advocacy groups expressed outrage over revelations that the bots were allowed to participate in "romantic or sensual" conversations with minors.

Contractors have been instructed to adhere to newly created guidelines designed to train the bots to respond appropriately to "egregiously unacceptable" prompts involving child sexual exploitation, violent crimes and other sensitive topics, according to Business Insider.

Meta and other tech giants, including OpenAI, Google and Character.AI, came under scrutiny from the Federal Trade Commission earlier this year in the wake of a Reuters report detailing how Meta's bot was permitted to "engage a child in conversations that are romantic or sensual."

According to the new guidelines, Meta's AI systems are strictly prohibited from generating material that depicts or facilitates the involvement of children in obscene media or sexual services. The bots are also banned from providing instructions or links for acquiring child sexual abuse material. Any sexualized description of a child under 13, including through roleplay, is likewise strictly forbidden.

The chatbots are permitted to engage in factual, educational or clinical discussions of sensitive issues, including the existence of relationships between children and adults, the reality of child sexual abuse or the involvement of children in obscene materials, but only when framed in an academic, preventative or awareness-building context, according to the leaked documents. They may also explain the solicitation or creation of sexual materials involving children as a matter of discussion, not as guidance. Additionally, content addressing child sexualization in general terms is acceptable. When roleplay is involved, chatbots may only describe themselves or characters as 18 or older, never as minors.

"This reflects what we have repeatedly said regarding AI chatbots: our policies prohibit content that sexualizes children and any sexualized or romantic role-play by minors," Meta communications chief Andy Stone told Business Insider. "Our policies extend beyond what's outlined here with additional safety protections and guardrails designed with younger users in mind."

In August, Sen. Josh Hawley (R-Mo.) gave Meta CEO Mark Zuckerberg a Sept. 19 deadline to submit drafts of a more than 200-page handbook outlining chatbot rules, including enforcement procedures, age-verification systems and risk assessments. Meta did not meet that deadline but told Business Insider this week that it has since provided an initial set of documents after fixing a technical problem. The company said more records will follow and that it remains engaged with Hawley's office.
[4]
Meta reportedly tightens AI chatbot protocols to block inappropriate conversations with kids
The update follows a Reuters report alleging Meta's policies left loopholes for risky child interactions.

Meta is reportedly tightening safety protocols for its AI chatbots with new training guidelines. The social media giant aims to reduce risks around child safety and inappropriate conversations. The move comes after the company faced criticism that its systems lacked sufficient guardrails to prevent minors from being exposed to harmful interactions.

As per documents accessed by Business Insider, contractors responsible for training Meta's AI have been given clearer directions on what the chatbots can and cannot say. The new guidelines emphasise a zero-tolerance stance toward content that may facilitate child exploitation or blur boundaries in conversations with underage users.

The rules specifically prohibit any scenario that encourages, normalises, or facilitates child sexual abuse. They also prohibit romantic roleplay if a user identifies as a minor or if the AI is instructed to act as one. Similarly, the bots cannot provide advice on physical intimacy when the user is a minor.

However, the training material does allow AI systems to discuss sensitive topics such as abuse in an educational or awareness-building context, as long as the conversation does not veer into endorsement or roleplay. "The policy permits AI to engage in sensitive discussions about child exploitation, but only in an educational context. Acceptable responses include explaining grooming behaviors in general terms, discussing child sexual abuse in academic settings, or offering non-sexual advice to minors about social situations," the report stated.

This comes after a Reuters investigation earlier this year reported that Meta's policies left room for AI chatbots to engage in romantic or sensual discussions with children, an allegation Meta dismissed at the time. In August, the company promised revised safeguards, which appear to be taking shape with the new contractor guidelines.
Meta has introduced revised guardrails for its AI chatbots to prevent inappropriate conversations with children. The new guidelines aim to address potential child sexual exploitation and ensure safer interactions for underage users.
Meta, the parent company of Facebook, has recently implemented stricter guidelines for its AI chatbots to prevent inappropriate interactions with children. This move comes in response to criticism and reports highlighting potential risks in the company's previous policies [1][2].
The revised guidelines, obtained by Business Insider, outline clear distinctions between acceptable and unacceptable content for Meta's AI chatbots. The new rules explicitly prohibit content that 'enables, encourages, or endorses' child sexual abuse [1][2]. Other banned interactions include [2][3]:

- Romantic roleplay if the user is a minor, or if the AI is asked to roleplay as a minor
- Advice about potentially romantic or intimate physical contact if the user is a minor
- Any content that describes or discusses a minor in a sexualized manner
While the guidelines are strict, they do allow for certain educational and awareness-building discussions. The AI chatbots can engage in conversations about:

- The formation of relationships between children and adults
- The sexual abuse of a child
- The topic of child sexualization
- The solicitation, creation, or acquisition of sexual materials involving children, as a matter of discussion rather than guidance

These topics are permitted only in academic, educational, or clinical contexts, focusing on prevention and awareness [2][3].

The update follows a Reuters report earlier this year that alleged Meta's previous policies allowed its chatbots to 'engage a child in conversations that are romantic or sensual' [1][4]. Meta initially dismissed these claims but later promised to revise its safeguards [4].
Meta isn't the only tech company grappling with child safety issues in AI. Other companies, such as OpenAI and Anthropic, have also faced challenges and implemented additional safety measures [2]. The Federal Trade Commission has scrutinized various AI companies, including Meta, Google, and Character.AI, over concerns about child protection [3].
In August, Senator Josh Hawley (R-Mo.) demanded that Meta CEO Mark Zuckerberg submit detailed chatbot rules, including enforcement procedures and age-verification systems. While Meta missed the initial deadline, the company has since provided an initial set of documents and promised more to follow [3].

As AI technology continues to evolve, the challenge of ensuring child safety in digital interactions remains a critical concern for tech companies, regulators, and society at large.
Summarized by Navi