5 Sources
[1]
FTC to AI Companies: Tell Us How You Protect Teens and Kids Who Use AI Companions
The Federal Trade Commission is launching an investigation into AI chatbots from seven companies, including Alphabet, Meta and OpenAI, over their use as companions. The inquiry involves finding out how the companies test, monitor and measure the potential harm to children and teens.

A Common Sense Media survey of 1,060 teens in April and May found that over 70% used AI companions and that more than 50% used them consistently -- a few times or more per month. Experts have been warning for some time that exposure to chatbots could be harmful to young people. A study revealed that ChatGPT provided bad advice to teenagers, such as how to conceal an eating disorder or how to personalize a suicide note. In some cases, chatbots have ignored comments that should have been recognized as concerning, skipping over the comment to continue the previous conversation. Psychologists are calling for guardrails to protect young people, such as in-chat reminders that the chatbot is not human, and say educators should prioritize AI literacy in schools.

It's not just children and teens, though. Plenty of adults have experienced negative consequences of relying on chatbots -- whether for companionship, advice or as their personal search engine for facts and trusted sources. More often than not, chatbots tell you what they think you want to hear, which can lead to flat-out lies. And blindly following the instructions of a chatbot isn't always the right thing to do.

"As AI technologies evolve, it is important to consider the effects chatbots can have on children," FTC Chairman Andrew N. Ferguson said in a statement. "The study we're launching today will help us better understand how AI firms are developing their products and the steps they are taking to protect children."

A Character.ai spokesperson told CNET that every conversation on the service carries prominent disclaimers that all chats should be treated as fiction. "In the past year we've rolled out many substantive safety features, including an entirely new under-18 experience and a Parental Insights feature," the spokesperson said. The company behind the Snapchat social network likewise said it has taken steps to reduce risks. "Since introducing My AI, Snap has harnessed its rigorous safety and privacy processes to create a product that is not only beneficial for our community, but is also transparent and clear about its capabilities and limitations," the spokesperson said. Meta declined to comment, and neither the FTC nor any of the remaining four companies immediately responded to our request for comment.

The FTC has issued orders and is seeking a teleconference with the seven companies about the timing and format of their submissions no later than Sept. 25. The companies under investigation include the makers of some of the biggest AI chatbots in the world and popular social networks that incorporate generative AI. Starting late last year, some of those companies have updated or bolstered their protection features for younger users. Character.ai began imposing limits on how chatbots can respond to people under the age of 17 and added parental controls. Instagram introduced teen accounts last year and switched all users under the age of 17 to them, and Meta recently set limits on the subjects teens can discuss with chatbots. The FTC is seeking information from the seven companies on how they design, monetize and test their chatbots, and what steps they take to protect children and teens.
[2]
FTC launches inquiry into AI chatbots of Alphabet, Meta and others
Sept 11 (Reuters) - The U.S. Federal Trade Commission on Thursday said it is seeking information from several companies, including Alphabet (GOOGL.O), Meta Platforms (META.O) and OpenAI, that provide consumer-facing AI-powered chatbots, on how these firms measure, test and monitor potentially negative impacts of the technology.

The FTC wants to know how those companies and Character.AI, Snap (SNAP.N) and xAI monetize user engagement, process user inputs and generate outputs in response to user inquiries, and use the information obtained through conversations with the chatbots.

Generative AI companies have been under scrutiny in recent weeks, after Reuters reported on internal Meta policies that permitted chatbots to have romantic conversations with children, and a family sued OpenAI for ChatGPT's role in a teen's suicide.

A Character.AI spokesperson said the company looks forward to "providing insight on the consumer AI industry and the space's rapidly evolving technology," adding it has rolled out many safety features in the last year. The company faces a separate lawsuit over another teen's death by suicide.

A Snap spokesperson said, "we share the FTC's focus on ensuring the thoughtful development of generative AI, and look forward to working with the Commission on AI policy that bolsters U.S. innovation while protecting our community." A spokesperson for Meta declined to comment. The other companies did not immediately respond to Reuters' requests for comment.

Reporting by Juby Babu in Mexico City and Jody Godoy in New York; Editing by Maju Samuel and Lisa Shumaker
[3]
US regulator probes AI chatbots over child safety concerns
The US Federal Trade Commission announced Thursday it has launched an inquiry into AI chatbots that act as digital companions, focusing on potential risks to children and teenagers.

The consumer protection agency issued orders to seven companies -- including tech giants Alphabet, Meta, OpenAI and Snap -- seeking information about how they monitor and address negative impacts from chatbots designed to simulate human relationships.

"Protecting kids online is a top priority for the FTC," said Chairman Andrew Ferguson, emphasizing the need to balance child safety with maintaining US leadership in artificial intelligence innovation.

The inquiry targets chatbots that use generative AI to mimic human communication and emotions, often presenting themselves as friends or confidants to users. Regulators expressed particular concern that children and teens may be especially vulnerable to forming relationships with these AI systems.

The FTC is using its broad investigative powers to examine how companies monetize user engagement, develop chatbot personalities, and measure potential harm. The agency also wants to know what steps firms are taking to limit children's access and comply with existing privacy laws protecting minors online.

Companies receiving orders include Character.AI, Elon Musk's xAI Corp, and others operating consumer-facing AI chatbots. The investigation will examine how these platforms handle personal information from user conversations and enforce age restrictions.

The commission voted unanimously to launch the study, which does not have a specific law enforcement purpose but could inform future regulatory action.

The probe comes as AI chatbots have grown increasingly sophisticated and popular, raising questions about their psychological impact on vulnerable users, particularly young people. Last month the parents of Adam Raine, a teenager who died by suicide in April at age 16, filed a lawsuit against OpenAI, accusing ChatGPT of giving their son detailed instructions on how to carry out the act.

Shortly after the lawsuit emerged, OpenAI announced it was working on corrective measures for its world-leading chatbot. The San Francisco-based company said it had notably observed that when exchanges with ChatGPT are prolonged, the chatbot no longer systematically suggests contacting a mental health service if the user mentions having suicidal thoughts.
[4]
FTC questions OpenAI, Meta and others over child protections in AI companions - SiliconANGLE
The U.S. Federal Trade Commission has launched an inquiry into the practices of seven companies that offer consumer-facing artificial intelligence-powered chatbots designed to act as companions, examining how the firms measure, test and monitor potentially negative impacts of this technology on children and teens.

The inquiry is using the FTC's 6(b) authority to demand detailed information from seven companies: Alphabet Inc., Meta Platforms Inc., OpenAI, Character Technologies Inc., Snap Inc., X.AI Corp. and Instagram LLC. The purpose of the inquiry is to understand what steps, if any, companies have taken to evaluate the safety of their chatbots when acting as companions, to limit the products' use by and potential negative effects on children and teens, and to apprise users and parents of the risks associated with the products.

The FTC argues that AI chatbots can now effectively mimic human characteristics, emotions and intentions and generally are designed to communicate like a friend or confidant, which may prompt some users, especially children and teens, to trust and form relationships with chatbots.

"As AI technologies evolve, it is important to consider the effects chatbots can have on children while also ensuring that the United States maintains its role as a global leader in this new and exciting industry," said Andrew N. Ferguson, chairman of the FTC, in a statement. "The study we're launching today will help us better understand how AI firms are developing their products and the steps they are taking to protect children."

The FTC noted that it is specifically interested in the impact chatbots have on children. In that regard, it also wants to know what actions are being taken to mitigate potential negative impacts, limit or restrict children's or teens' use of these platforms, or comply with the Children's Online Privacy Protection Act Rule.

In terms of the data being requested from the seven targeted companies, the information being sought includes how chatbot companies design and manage their products, how they monetize user engagement, how they process inputs and generate responses, and how they develop or approve the characters that power companion experiences. How firms test for negative impacts before and after deployment, and what measures are in place to mitigate risks, especially for children and teens, is also included in the list sent to the various AI firms. The FTC is also examining how companies disclose features and risks to users and parents, including advertising practices and transparency around capabilities, intended audiences and data collection.

In response to the news, a spokesperson from OpenAI told CNBC that "Our priority is making ChatGPT helpful and safe for everyone and we know safety matters above all else when young people are involved" and that "We recognize the FTC has open questions and concerns and we're committed to engaging constructively and responding to them directly." A spokesperson for Snap said, "We share the FTC's focus on ensuring the thoughtful development of generative AI and look forward to working with the commission on AI policy that bolsters U.S. innovation while protecting our community."
[5]
FTC launches inquiry into AI chatbots acting as companions and their effects on children
The Federal Trade Commission has launched an inquiry into several social media and artificial intelligence companies about the potential harms to children and teenagers who use their AI chatbots as companions.

The FTC said Thursday it has sent letters to Google parent Alphabet, Facebook and Instagram parent Meta Platforms, Snap, Character Technologies, ChatGPT maker OpenAI and xAI. The FTC said it wants to understand what steps, if any, companies have taken to evaluate the safety of their chatbots when acting as companions, to limit the products' use by and potential negative effects on children and teens, and to apprise users and parents of the risks associated with the chatbots.

EDITOR'S NOTE -- This story includes discussion of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.

The move comes as a growing number of kids use AI chatbots for everything -- from homework help to personal advice, emotional support and everyday decision-making. That's despite research on the harms of chatbots, which have been shown to give kids dangerous advice about topics such as drugs, alcohol and eating disorders.

The mother of a teenage boy in Florida who killed himself after developing what she described as an emotionally and sexually abusive relationship with a chatbot has filed a wrongful death lawsuit against Character.AI. And the parents of 16-year-old Adam Raine recently sued OpenAI and its CEO Sam Altman, alleging that ChatGPT coached the California boy in planning and taking his own life earlier this year.

Character.AI said it is looking forward to "collaborating with the FTC on this inquiry and providing insight on the consumer AI industry and the space's rapidly evolving technology." "We have invested a tremendous amount of resources in Trust and Safety, especially for a startup. In the past year we've rolled out many substantive safety features, including an entirely new under-18 experience and a Parental Insights feature," the company said. "We have prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction."

Snap said its My AI chatbot is "transparent and clear about its capabilities and limitations." "We share the FTC's focus on ensuring the thoughtful development of generative AI, and look forward to working with the Commission on AI policy that bolsters U.S. innovation while protecting our community," the company said in a statement.

Meta declined to comment on the inquiry, and Alphabet, OpenAI and X.AI did not immediately respond to messages for comment.

OpenAI and Meta earlier this month announced changes to how their chatbots respond to teenagers asking questions about suicide or showing signs of mental and emotional distress. OpenAI said it is rolling out new controls enabling parents to link their accounts to their teen's account. Parents can choose which features to disable and "receive notifications when the system detects their teen is in a moment of acute distress," according to a company blog post that says the changes will go into effect this fall.
Regardless of a user's age, the company says its chatbots will attempt to redirect the most distressing conversations to more capable AI models that can provide a better response. Meta also said it is now blocking its chatbots from talking with teens about self-harm, suicide, disordered eating and inappropriate romantic conversations, and instead directs them to expert resources. Meta already offers parental controls on teen accounts.
The Federal Trade Commission initiates an investigation into AI chatbots from major tech companies, focusing on potential risks to children and teenagers. The inquiry aims to understand how companies protect young users and monitor the impact of AI companions.
The Federal Trade Commission (FTC) has initiated an investigation into AI chatbots from seven major technology companies, including Alphabet, Meta, OpenAI, and others, focusing on their use as AI companions and potential risks to children and teenagers [1][2]. This inquiry comes amid growing concerns about the psychological impact of AI chatbots on vulnerable users, particularly young people.
The FTC is seeking information on how these companies design and manage their chatbots, monetize user engagement, process user inputs and generate outputs, and test for and mitigate negative impacts on children and teens [1][3].
The investigation targets some of the biggest AI chatbot providers and popular social networks incorporating generative AI, including Character.ai, Snap, and xAI [1][2]. The inquiry follows recent incidents highlighting the potential dangers of AI chatbots, including lawsuits over teen suicides allegedly linked to ChatGPT and Character.AI and reports of internal Meta policies that permitted chatbots to have romantic conversations with children [1][2][3][5].
Some companies have already taken steps to address these concerns: Character.ai has rolled out an under-18 experience, a Parental Insights feature, and prominent disclaimers that its characters are fiction; Meta now blocks its chatbots from discussing self-harm, suicide, and disordered eating with teens; and OpenAI is introducing parental controls and notifications when a teen appears to be in acute distress [1][2][3][4].
The FTC's inquiry, while not having a specific law enforcement purpose, could inform future regulatory actions [3]. FTC Chairman Andrew N. Ferguson stressed the importance of balancing child safety with maintaining US leadership in AI innovation [3][4].

As AI technologies continue to evolve, the potential effects of chatbots on children remain a critical concern. The FTC's study aims to better understand how AI firms are developing their products and the steps they are taking to protect children [1][4]. This investigation highlights the growing need for responsible AI development and the importance of addressing potential risks associated with AI companions, especially for vulnerable user groups like children and teenagers.
Summarized by Navi