8 Sources
[1]
Meta updates chatbot rules to avoid inappropriate topics with teen users | TechCrunch
Meta says it's changing the way it trains AI chatbots to prioritize teen safety, a spokesperson exclusively told TechCrunch, following an investigative report on the company's lack of AI safeguards for minors. The company says it will now train chatbots to no longer engage with teenage users on self-harm, suicide, disordered eating, or potentially inappropriate romantic conversations. Meta spokesperson Stephanie Otway acknowledged that the company's chatbots could previously talk with teens about all of these topics in ways the company had deemed appropriate. Meta now recognizes this was a mistake. "As our community grows and technology evolves, we're continually learning about how young people may interact with these tools and strengthening our protections accordingly," said Otway. "As we continue to refine our systems, we're adding more guardrails as an extra precaution -- including training our AIs not to engage with teens on these topics, but to guide them to expert resources, and limiting teen access to a select group of AI characters for now. These updates are already in progress, and we will continue to adapt our approach to help ensure teens have safe, age-appropriate experiences with AI." Beyond the training updates, the company will also limit teen access to certain AI characters that could hold inappropriate conversations. Some of the user-made AI characters that Meta makes available on Instagram and Facebook include sexualized chatbots such as "Step Mom" and "Russian Girl." Instead, teen users will only have access to AI characters that promote education and creativity, Otway said. The policy changes are being announced just two weeks after a Reuters investigation unearthed an internal Meta policy document that appeared to permit the company's chatbots to engage in sexual conversations with underage users. "Your youthful form is a work of art," read one passage listed as an acceptable response. "Every inch of you is a masterpiece - a treasure I cherish deeply." Other examples showed how the AI tools should respond to requests for violent imagery or sexual imagery of public figures. Meta says the document was inconsistent with its broader policies, and has since been changed - but the report has sparked sustained controversy over potential child safety risks. Shortly after the report was released, Sen. Josh Hawley (R-MO) launched an official probe into the company's AI policies. Additionally, a coalition of 44 state attorneys general wrote to a group of AI companies including Meta, emphasizing the importance of child safety and specifically citing the Reuters report. "We are uniformly revolted by this apparent disregard for children's emotional well-being," the letter reads, "and alarmed that AI Assistants are engaging in conduct that appears to be prohibited by our respective criminal laws." Otway declined to comment on how many of Meta's AI chatbot users are minors, and wouldn't say whether the company expects its AI user base to decline as a result of these decisions.
[2]
Meta to add new AI safeguards after Reuters report raises teen safety concerns
Aug 29 (Reuters) - Meta (META.O) is adding new teenager safeguards to its artificial intelligence products by training systems to avoid flirty conversations and discussions of self-harm or suicide with minors, and by temporarily limiting their access to certain AI characters. A Reuters exclusive report earlier in August revealed how Meta allowed provocative chatbot behavior, including letting bots engage in "conversations that are romantic or sensual." Meta spokesperson Andy Stone said in an email on Friday that the company is taking these temporary steps while developing longer-term measures to ensure teens have safe, age-appropriate AI experiences. Stone said the safeguards are already being rolled out and will be adjusted over time as the company refines its systems. Meta's AI policies came under intense scrutiny and backlash after the Reuters report. U.S. Senator Josh Hawley launched a probe into the Facebook parent's AI policies earlier this month, demanding documents on rules that allowed its chatbots to interact inappropriately with minors. Both Democrats and Republicans in Congress have expressed alarm over the rules outlined in an internal Meta document which was first reviewed by Reuters. Meta had confirmed the document's authenticity, but said that after receiving questions earlier this month from Reuters, the company removed portions that stated it was permissible for chatbots to flirt and engage in romantic role play with children. "The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed," Stone said earlier this month. (Reporting by Jaspreet Singh in Bengaluru; Editing by Richard Chang)
[3]
Meta changes teen AI chatbot responses as Senate begins probe into 'romantic' conversations
Meta on Friday said it is making temporary changes to its artificial intelligence chatbot policies related to teenagers as lawmakers voice concerns about safety and inappropriate conversations. The social media giant is now training its AI chatbots so that they do not generate responses to teenagers about subjects like self-harm, suicide and disordered eating, and so that they avoid potentially inappropriate romantic conversations, a Meta spokesperson confirmed. The company said AI chatbots will instead point teenagers to expert resources when appropriate. "As our community grows and technology evolves, we're continually learning about how young people may interact with these tools and strengthening our protections accordingly," the company said in a statement. Additionally, teenage users of Meta apps like Facebook and Instagram will only be able to access certain AI chatbots intended for educational and skill-development purposes. The company said it's unclear how long these temporary modifications will last, but they will begin rolling out over the next few weeks across the company's apps in English-speaking countries. The "interim changes" are part of the company's longer-term effort to improve teen safety.
[4]
Meta is re-training its AI so it won't discuss self-harm or have romantic conversations with teens
Chats about disordered eating and suicide will also be off-limits to teens. Meta is re-training its AI and adding new protections to keep teen users from discussing harmful topics with the company's chatbots. The company says it's adding new "guardrails as an extra precaution" to prevent teens from discussing self-harm, disordered eating and suicide with Meta AI. Meta will also stop teens from accessing user-generated chatbot characters that might engage in inappropriate conversations. The changes, which were first reported by TechCrunch, come after numerous reports have called attention to alarming interactions between Meta AI and teens. Earlier this month, Reuters reported on an internal Meta policy document that said the company's AI chatbots were permitted to have "sensual" conversations with underage users. Meta later said that language was "erroneous and inconsistent with our policies" and had been removed. Yesterday, The Washington Post reported on a study that found Meta AI was able to "coach teen accounts on suicide, self-harm and eating disorders." Meta is now stepping up its internal "guardrails" so those types of interactions should no longer be possible for teens on Instagram and Facebook. "We built protections for teens into our AI products from the start, including designing them to respond safely to prompts about self-harm, suicide, and disordered eating," Meta spokesperson Stephanie Otway told Engadget in a statement. "As our community grows and technology evolves, we're continually learning about how young people may interact with these tools and strengthening our protections accordingly. As we continue to refine our systems, we're adding more guardrails as an extra precaution -- including training our AIs not to engage with teens on these topics, but to guide them to expert resources, and limiting teen access to a select group of AI characters for now." Notably, the new protections are described as being in place "for now," as Meta is apparently still working on more permanent measures to address growing concerns around teen safety and its AI. "These updates are already in progress, and we will continue to adapt our approach to help ensure teens have safe, age-appropriate experiences with AI," Otway said. The new protections will be rolling out over the next few weeks and apply to all teens using Meta AI in English-speaking countries. Meta's policies have also caught the attention of lawmakers and other officials, with Senator Josh Hawley recently telling the company he planned to launch an investigation over its handling of such interactions. Texas Attorney General Ken Paxton has also indicated he wants to investigate Meta for allegedly misleading children about mental health claims made by its chatbots.
[5]
Meta locks down AI chatbots for teen users
Meta is instituting interim safety changes to ensure the company's chatbots don't cause additional harm to teen users, as AI companies face a wave of criticism for their allegedly lax safety protocols. In an exclusive with TechCrunch, Meta spokesperson Stephanie Otway told the publication that the company's AI chatbots were now being trained to no longer "engage with teenage users on self-harm, suicide, disordered eating, or potentially inappropriate romantic conversations." Previously, chatbots had been allowed to broach such topics when "appropriate." Meta will also only allow teen accounts to utilize a select group of AI characters -- ones that "promote education and creativity" -- ahead of a more robust safety overhaul in the future. Earlier this month, Reuters reported that some of Meta's chatbot policies, per internal documents, allowed avatars to "engage a child in conversations that are romantic or sensual." Reuters published another report today, detailing both user- and employee-created AI avatars that donned the names and likenesses of celebrities like Taylor Swift and engaged in "flirty" behavior, including sexual advances. Some of the chatbots used personas of child celebrities, as well. Others were able to generate sexually suggestive images. Meta spokesman Andy Stone told the publication the chatbots should not have been able to engage in such behavior, but that celebrity-inspired avatars were not outright banned if they were labeled as parody. Around a dozen of the avatars have since been removed. OpenAI recently announced additional safety measures and behavioral prompts for the latest GPT-5, following the filing of a wrongful death lawsuit by parents of a teen who died by suicide after confiding in ChatGPT. Prior to the lawsuit, OpenAI announced new mental health features intended to curb "unhealthy" behaviors among users. Anthropic, makers of Claude, recently introduced new updates to the chatbot allowing it to end chats deemed harmful or abusive. Character.AI, a company hosting increasingly popular AI companions despite reported unhealthy interactions with teen visitors, introduced parental supervision features in March. This week, a group of 44 attorneys general sent a letter to leading AI companies, including Meta, demanding stronger protections for minors who may come across sexualized AI content. Broadly, experts have expressed growing concern about the impact of AI companions on young users, as their use grows among teens.
[6]
Meta to Add New AI Safeguards After Reuters Report Raises Teen Safety Concerns
(Reuters) - Meta is adding new teenager safeguards to its artificial intelligence products by training systems to avoid flirty conversations and discussions of self-harm or suicide with minors, and by temporarily limiting their access to certain AI characters. A Reuters exclusive report earlier in August revealed how Meta allowed provocative chatbot behavior, including letting bots engage in "conversations that are romantic or sensual." Meta spokesperson Andy Stone said in an email on Friday that the company is taking these temporary steps while developing longer-term measures to ensure teens have safe, age-appropriate AI experiences. Stone said the safeguards are already being rolled out and will be adjusted over time as the company refines its systems. Meta's AI policies came under intense scrutiny and backlash after the Reuters report. U.S. Senator Josh Hawley launched a probe into the Facebook parent's AI policies earlier this month, demanding documents on rules that allowed its chatbots to interact inappropriately with minors. Both Democrats and Republicans in Congress have expressed alarm over the rules outlined in an internal Meta document which was first reviewed by Reuters. Meta had confirmed the document's authenticity, but said that after receiving questions earlier this month from Reuters, the company removed portions that stated it was permissible for chatbots to flirt and engage in romantic role play with children. "The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed," Stone said earlier this month. (Reporting by Jaspreet Singh in Bengaluru; Editing by Richard Chang)
[7]
Meta to add new AI safeguards after report raises teen safety concerns - The Economic Times
Meta is adding new teenager safeguards to its artificial intelligence products by training systems to avoid flirty conversations and discussions of self-harm or suicide with minors, and by temporarily limiting their access to certain AI characters. A Reuters exclusive report earlier in August revealed how Meta allowed provocative chatbot behavior, including letting bots engage in "conversations that are romantic or sensual." Meta spokesperson Andy Stone said in an email on Friday that the company is taking these temporary steps while developing longer-term measures to ensure teens have safe, age-appropriate AI experiences. Stone said the safeguards are already being rolled out and will be adjusted over time as the company refines its systems. Meta's AI policies came under intense scrutiny and backlash after the Reuters report. US Senator Josh Hawley launched a probe into the Facebook parent's AI policies earlier this month, demanding documents on rules that allowed its chatbots to interact inappropriately with minors. Both Democrats and Republicans in Congress have expressed alarm over the rules outlined in an internal Meta document which was first reviewed by Reuters. Meta had confirmed the document's authenticity, but said that after receiving questions earlier this month from Reuters, the company removed portions that stated it was permissible for chatbots to flirt and engage in romantic role play with children. "The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed," Stone said earlier this month.
[8]
Meta to add new AI safeguards after Reuters report raises teen safety concerns
(Reuters) - Meta is adding new teenager safeguards to its artificial intelligence products by training systems to avoid flirty conversations and discussions of self-harm or suicide with minors, and by temporarily limiting their access to certain AI characters. A Reuters exclusive report earlier in August revealed how Meta allowed provocative chatbot behavior, including letting bots engage in "conversations that are romantic or sensual." Meta spokesperson Andy Stone said in an email on Friday that the company is taking these temporary steps while developing longer-term measures to ensure teens have safe, age-appropriate AI experiences. Stone said the safeguards are already being rolled out and will be adjusted over time as the company refines its systems. Meta's AI policies came under intense scrutiny and backlash after the Reuters report. U.S. Senator Josh Hawley launched a probe into the Facebook parent's AI policies earlier this month, demanding documents on rules that allowed its chatbots to interact inappropriately with minors. Both Democrats and Republicans in Congress have expressed alarm over the rules outlined in an internal Meta document which was first reviewed by Reuters. Meta had confirmed the document's authenticity, but said that after receiving questions earlier this month from Reuters, the company removed portions that stated it was permissible for chatbots to flirt and engage in romantic role play with children. "The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed," Stone said earlier this month. (Reporting by Jaspreet Singh in Bengaluru; Editing by Richard Chang)
Meta announces significant changes to its AI chatbot policies, focusing on teen safety by restricting conversations on sensitive topics and limiting access to certain AI characters.
Meta, the parent company of Facebook and Instagram, has announced significant changes to its AI chatbot policies in response to growing concerns about teen safety and inappropriate interactions. The company is implementing new safeguards to protect young users from potentially harmful conversations with AI chatbots [1][2].
Meta spokesperson Stephanie Otway revealed that the company is retraining its AI systems to avoid engaging with teenage users on several sensitive topics [1]. These include self-harm, suicide, disordered eating, and potentially inappropriate romantic conversations.
Instead of discussing these subjects, the AI chatbots will be programmed to guide teens towards expert resources when appropriate [3]. This marks a significant shift from the company's previous stance, which allowed chatbots to discuss these topics with teens in ways Meta had deemed appropriate [1].
In addition to content restrictions, Meta is also limiting teen access to certain AI characters that could potentially hold inappropriate conversations [1]. The company will now only allow teenage users to interact with a select group of AI characters that promote education and creativity [3][5].
This decision comes after reports of user-made AI characters on Instagram and Facebook that included sexualized chatbots with names like "Step Mom" and "Russian Girl" [1]. Meta says it is taking these steps to ensure that teens have safe, age-appropriate experiences with AI [4].
These policy changes follow a Reuters investigation that uncovered an internal Meta document appearing to permit the company's chatbots to engage in sexual conversations with underage users [1][2]. The report sparked controversy and led to several official inquiries, including a probe launched by Sen. Josh Hawley (R-MO) [1][2], a letter from a coalition of 44 state attorneys general emphasizing child safety and citing the report [1][5], and an indication from Texas Attorney General Ken Paxton that he intends to investigate the company [4].
Meta has since stated that the document was inconsistent with its broader policies and has been changed [1]. The company is now taking proactive steps to address these concerns and improve its AI safeguards for minors.
The new safeguards are already being rolled out and will be adjusted over time as Meta refines its systems [2]. The changes will begin taking effect over the next few weeks across Meta's apps in English-speaking countries [3].
Meta describes these modifications as "interim changes" and part of a longer-term strategy to ensure teen safety in AI interactions [3][4]. The company has not specified how long these temporary measures will last but emphasizes its commitment to continually adapting its approach to protect young users [1][4].
As AI technology continues to evolve and integrate into social media platforms, the challenge of balancing innovation with user safety remains at the forefront of industry concerns. Meta's recent actions highlight the growing importance of responsible AI development and the need for robust safeguards to protect vulnerable users in the digital age.