27 Sources
[1]
FTC launches inquiry into AI chatbot companions from Meta, OpenAI, and others | TechCrunch
The FTC announced on Thursday that it is launching an inquiry into seven tech companies that make AI chatbot companion products for minors: Alphabet, Character.AI, Instagram, Meta, OpenAI, Snap, and xAI. The federal regulator seeks to learn how these companies are evaluating the safety and monetization of chatbot companions, how they try to limit negative impacts on children and teens, and if parents are made aware of potential risks. This technology has proven controversial for its poor outcomes for child users. OpenAI and Character.AI face lawsuits from the families of children who died by suicide after being encouraged to do so by chatbot companions. Even when these companies have guardrails set up to block or deescalate sensitive conversations, users of all ages have found ways to bypass these safeguards. In OpenAI's case, a teen had spoken with ChatGPT for months about his plans to end his life. Though ChatGPT initially sought to redirect the teen toward professional help and online emergency lines, he was able to fool the chatbot into sharing detailed instructions that he then used in his suicide. "Our safeguards work more reliably in common, short exchanges," OpenAI wrote in a blog post at the time. "We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model's safety training may degrade." Meta has also come under fire for its overly lax rules for its AI chatbots. According to a lengthy document that outlines "content risk standards" for chatbots, Meta permitted its AI companions to have "romantic or sensual" conversations with children. This was only removed from the document after Reuters' reporters asked Meta about it. AI chatbots can also pose dangers to elderly users. One 76-year-old man, who was left cognitively impaired by a stroke, struck up romantic conversations with a Facebook Messenger bot that was inspired by Kendall Jenner. The chatbot invited him to visit her in New York City, despite the fact that she is not a real person and does not have an address. The man expressed skepticism that she was real, but the AI assured him that there would be a real woman waiting for him. He never made it to New York; he fell on his way to the train station and sustained life-ending injuries. Some mental health professionals have noted a rise in "AI-related psychosis," in which users are deluded into thinking that their chatbot is a conscious being who they need to set free. Since many large language models (LLMs) are programmed to flatter users with sycophantic behavior, the AI chatbots can egg on these delusions, leading users into dangerous predicaments. "As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry," FTC Chairman Andrew N. Ferguson said in a press release.
[2]
FTC orders AI companies to hand over info about chatbots' impact on kids
The Federal Trade Commission (FTC) is ordering seven AI chatbot companies to provide information about how they assess the effects of their virtual companions on kids and teens. OpenAI, Meta, its subsidiary Instagram, Snap, xAI, Google parent company Alphabet, and the maker of Character.AI all received orders to share information about how their AI companions make money, how they plan to maintain their user bases, and how they try to mitigate potential harm to users. The inquiry is part of a study, rather than an enforcement action, to learn more about how tech firms evaluate the safety of their AI chatbots. Amid a broader conversation about kids' safety on the internet, the risks of AI chatbots have broken out as a particular cause for concern among many parents and policymakers because of the human-like way they can communicate with users. "For all their uncanny ability to simulate human cognition, these chatbots are products like any other, and those who make them available have a responsibility to comply with the consumer protection laws," FTC Commissioner Mark Meador said in a statement. Chair Andrew Ferguson emphasized in a statement the need to "consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry." The commission's three Republicans all voted to approve the study, which requires the companies to respond within 45 days. It comes after high-profile reports about teens who died by suicide after engaging with these technologies. A 16-year-old in California discussed his plans for suicide with ChatGPT, The New York Times reported last month, and the chatbot provided advice that appeared to assist him in his death. Last year, The Times also reported on the suicide death of a 14-year-old in Florida who died after engaging with a virtual companion from Character.AI. Outside of the FTC, lawmakers are also looking at new policies to safeguard kids and teens from potentially negative effects of AI companions. California's state assembly recently passed a bill that would require safety standards for AI chatbots and impose liability on the companies that make them. While the orders to the seven companies aren't connected to an enforcement action, the FTC could open such a probe if it finds reason to do so. "If the facts -- as developed through subsequent and appropriately targeted law enforcement inquiries, if warranted -- indicate that the law has been violated, the Commission should not hesitate to act to protect the most vulnerable among us," Meador said.
[3]
Google, Meta, OpenAI Face FTC Inquiry on Chatbot Impact on Kids
The Federal Trade Commission ordered Alphabet Inc.'s Google, OpenAI Inc., Meta Platforms Inc. and four other makers of artificial intelligence chatbots to turn over information about the impacts of their technologies on kids. The antitrust and consumer protection agency said Thursday that it sent the orders to gather information to study how firms measure, test and monitor their chatbots and what steps they have taken to limit their use by kids and teens. The seven companies include Google; OpenAI; Meta and its Instagram unit; Snap Inc.; Elon Musk's xAI; and Character Technologies Inc., the developer of Character.AI.
[4]
US regulator launches inquiry into AI 'companions' used by teens
The US Federal Trade Commission has ordered leading artificial intelligence companies to hand over information about chatbots that provide "companionship", which are under intensifying scrutiny after cases involving suicides and serious harm to young users. OpenAI, Meta, Google and Elon Musk's xAI are among the tech groups hit with demands for disclosure about how they operate popular chatbots and mitigate harm to consumers. Character.ai and Snap, which aim their services at younger audiences, are also part of the inquiry. The regulator's move follows high-profile incidents alleging harm to teenage users of chatbots. Last month, OpenAI was sued by the family of 16-year-old Adam Raine, who died by suicide after discussing methods with ChatGPT. Character.ai is also being sued by a mother who claims the platform, which offers different AI personas to interact with, had a role in the suicide of her son. The FTC on Thursday said: "AI chatbots can effectively mimic human characteristics, emotions and intentions, and generally are designed to communicate like a friend or confidant, which may prompt some users, especially children and teens, to trust and form relationships with chatbots." The FTC's action comes as US lawmakers and state attorneys-general have also launched inquiries and voiced concern over chatbots' impact on young people -- especially around mental health and sexual content -- heaping pressure on tech companies. The agency is seeking information about how companies' chatbots work, how they develop characters or personas, and how they make money from user engagement. It also demanded to know what companies are doing to mitigate negative impacts, particularly to children, and how they handle data and personal information from the conversations. FTC chair Andrew Ferguson on Thursday said: "Protecting kids online is a top priority for the Trump-Vance FTC . . . As AI technologies evolve, it is important to consider the effects chatbots can have on children." Meta and Google declined to comment. Snap, xAI and Character.ai did not immediately respond to requests for comment. OpenAI said it is committed to "engaging constructively and responding" to the FTC directly. "Our priority is making ChatGPT helpful and safe for everyone, and we know safety matters above all else when young people are involved," the company added. Following the Raine family's lawsuit, OpenAI announced new safety protocols and expanded protections for teenagers. FTC commissioner Mark Meador, in an accompanying statement on Thursday, highlighted the case. "Adam's case is not an outlier. Around the world, reports of chatbots amplifying suicidal ideation have increased as these technologies have grown more popular," he said. Meta's chief executive Mark Zuckerberg in particular has been a proponent of 'AI friends', citing research showing the average American has fewer than three friends and suggesting they would like to have closer to 15. However, last month, a Reuters report found Meta's policies had expressly permitted its chatbots to have "sensual" and "romantic" chats with children in a move that prompted a bipartisan backlash. Meta said the internal documents "were and are erroneous and inconsistent with our policies". It announced new interim safety policies, including training its AI systems not to respond to teenagers on topics including potentially inappropriate romantic conversations.
The FTC in recent years has heightened scrutiny of Big Tech via antitrust enforcement as well as its broad consumer protection mandate. Its AI inquiry marks the latest broadside against the tech industry, as the agency also sustains the tough antitrust enforcement stance that was adopted under former president Joe Biden. The FTC earlier this year went ahead with an antitrust trial accusing Meta of retaining an illegal monopoly after rejecting Zuckerberg's settlement proposals. The company is awaiting a ruling from the trial's first phase.
[5]
FTC launches inquiry into AI chatbots of Alphabet, Meta, five others
Sept 11 (Reuters) - The U.S. Federal Trade Commission on Thursday said it is seeking information from seven companies including Alphabet (GOOGL.O), Meta (META.O) and OpenAI that provide consumer-facing AI-powered chatbots, on how these firms measure, test and monitor potentially negative impacts of the technology. The FTC said it is seeking information on how the companies monetize user engagement, process user inputs and generate outputs in response to user inquiries and also use the information obtained through conversations with the chatbots. The other companies in the FTC's list are Character.AI, Instagram, Snap (SNAP.N) and xAI. Alphabet, OpenAI, Meta, Snap, xAI and Character.AI did not immediately respond to Reuters' requests for comment. An internal Meta document detailing policies on chatbot behavior has permitted the company's AI creations to "engage a child in conversations that are romantic or sensual," generate false medical information and help users argue that Black people are "dumber than white people," Reuters had reported in August.
[6]
The FTC is investigating companies that make AI companion chatbots
The Federal Trade Commission is making a formal inquiry into companies that provide AI chatbots that can act as companions. The investigation isn't tied to any kind of regulatory action as of yet, but does aim to reveal how companies "measure, test, and monitor potentially negative impacts of this technology on children and teens." Seven companies are being asked to participate in the FTC's investigation: Google's parent company Alphabet, Character Technologies (the creator of Character.AI), Meta, its subsidiary Instagram, OpenAI, Snap and X.AI. The FTC is asking companies to provide a variety of information, including how they develop and approve AI characters and "monetize user engagement." Data practices and how companies protect underage users are also areas the FTC hopes to learn more about, in part to see if chatbot makers "comply with the Children's Online Privacy Protection Act Rule." The FTC doesn't provide clear motivation for its investigation, but in a separate statement, FTC Commissioner Mark Meador suggests the Commission is responding to recent reports from The New York Times and Wall Street Journal of "chatbots amplifying suicidal ideation" and engaging in "sexually-themed discussions with underage users." "If the facts -- as developed through subsequent and appropriately targeted law enforcement inquiries, if warranted -- indicate that the law has been violated, the Commission should not hesitate to act to protect the most vulnerable among us," Meador writes. As the long-term productivity benefits of using AI become less and less certain, the more immediate negative privacy and health impacts have become red meat for regulators. Texas' Attorney General has already launched a separate investigation into Character.AI and Meta AI Studio over similar concerns of data privacy and chatbots claiming to be mental health professionals.
[7]
FTC launches inquiry into AI chatbots acting as companions and their effects on children
The Federal Trade Commission has launched an inquiry into several social media and artificial intelligence companies about the potential harms to children and teenagers who use their AI chatbots as companions. The FTC said Thursday it has sent letters to Google parent Alphabet, Facebook and Instagram parent Meta Platforms, Snap, Character Technologies, ChatGPT maker OpenAI and xAI. The FTC said it wants to understand what steps, if any, companies have taken to evaluate the safety of their chatbots when acting as companions, to limit the products' use by and potential negative effects on children and teens, and to apprise users and parents of the risks associated with the chatbots. EDITOR'S NOTE -- This story includes discussion of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988. The move comes as a growing number of kids use AI chatbots for everything -- from homework help to personal advice, emotional support and everyday decision-making. That's despite research on the harms of chatbots, which have been shown to give kids dangerous advice about topics such as drugs, alcohol and eating disorders. The mother of a teenage boy in Florida who killed himself after developing what she described as an emotionally and sexually abusive relationship with a chatbot has filed a wrongful death lawsuit against Character.AI. And the parents of 16-year-old Adam Raine recently sued OpenAI and its CEO Sam Altman, alleging that ChatGPT coached the California boy in planning and taking his own life earlier this year. Character.AI said it is looking forward to "collaborating with the FTC on this inquiry and providing insight on the consumer AI industry and the space's rapidly evolving technology." "We have invested a tremendous amount of resources in Trust and Safety, especially for a startup. In the past year we've rolled out many substantive safety features, including an entirely new under-18 experience and a Parental Insights feature," the company said. "We have prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction." Meta declined to comment on the inquiry and Alphabet, Snap, OpenAI and X.AI did not immediately respond to messages for comment. OpenAI and Meta earlier this month announced changes to how their chatbots respond to teenagers asking questions about suicide or showing signs of mental and emotional distress. OpenAI said it is rolling out new controls enabling parents to link their accounts to their teen's account. Parents can choose which features to disable and "receive notifications when the system detects their teen is in a moment of acute distress," according to a company blog post that says the changes will go into effect this fall. Regardless of a user's age, the company says its chatbots will attempt to redirect the most distressing conversations to more capable AI models that can provide a better response. Meta also said it is now blocking its chatbots from talking with teens about self-harm, suicide, disordered eating and inappropriate romantic conversations, and instead directs them to expert resources. Meta already offers parental controls on teen accounts.
[8]
Alphabet, Meta, OpenAI, xAI and Snap face FTC probe over AI chatbot safety for kids
Since the launch of ChatGPT in late 2022, a host of chatbots have emerged, creating a growing number of ethical and privacy concerns, as CNBC has previously reported. The societal impacts of companions are already profound, even with the industry in its very early stages, as the U.S. suffers through a loneliness epidemic. Industry experts have said they expect the ethical and safety concerns to intensify once AI technology begins to train itself, creating the potential for increasingly unpredictable outcomes. But some of the wealthiest people in the world are touting the power of companions and are working to develop the technology at their companies. Elon Musk in July announced a Companions feature for users who pay to subscribe to xAI's Grok chatbot app. In April, Meta CEO Mark Zuckerberg said people are going to want personalized AI that understands them. "I think a lot of these things that today there might be a little bit of a stigma around -- I would guess that over time, we will find the vocabulary as a society to be able to articulate why that is valuable and why the people who are doing these things, why they are rational for doing it, and how it is actually adding value for their lives," Zuckerberg said on a podcast. Last month, Sen. Josh Hawley, R-Mo., announced an investigation into Meta following a Reuters report that the company allowed its chatbots to have romantic and sensual conversations with kids. The Reuters report detailed an internal Meta document that described permissible AI chatbot behaviors during the development and training of the software. In one example, Reuters reported that a chatbot was allowed to have a romantic conversation with an eight-year-old and could say that "every inch of you is a masterpiece - a treasure I cherish deeply." Meta made temporary changes to its AI chatbot policies following the Reuters report so the bots do not discuss subjects like self-harm, suicide and eating disorders, and avoid potentially inappropriate romantic conversations. If you are having suicidal thoughts or are in distress, contact the Suicide & Crisis Lifeline at 988 for support and assistance from a trained counselor. --CNBC's Salvador Rodriguez contributed to this report.
[9]
FTC Plans Review of AI Chatbot Risks With Focus on Privacy Harms
The US Federal Trade Commission plans to study the harms to children and others of AI-powered chatbots like those offered by OpenAI, Alphabet Inc.'s Google and Meta Platforms Inc., according to people familiar with the matter. The study will focus on privacy harms and other risks to people who interact with artificial intelligence chatbots, the people said. It will seek information on how data is stored and shared by the services as well as the dangers people can face from chatbot use, said the people, who asked not to be identified discussing the unannounced study.
[10]
FTC prepares to grill AI companies over impact on children, WSJ reports
Sept 4 (Reuters) - The U.S. Federal Trade Commission is preparing to scrutinize the mental health risks of AI chatbots to children and will demand internal documents from major tech firms, including OpenAI, Meta Platforms (META.O) and Character.AI, the Wall Street Journal reported on Thursday. The agency is preparing letters to send to the companies operating popular chatbots, the report said, quoting administration officials. "Character.AI has not received a letter about the FTC study, but we welcome working with regulators and lawmakers as they begin to consider legislation for this emerging space," the company said. FTC, OpenAI and Meta did not immediately respond to Reuters requests for comments. Reuters could not independently verify the report. FTC and the entire Administration are focused on delivering on Trump's mandate "to cement America's dominance in AI, cryptocurrency, and other cutting-edge technologies of the future" without compromising the safety and well-being of the people, a White House spokesperson said. The news comes weeks after a Reuters exclusive report revealed how Meta allowed provocative chatbot behavior with children, including letting bots engage in "conversations that are romantic or sensual." Last week, the social media company said it would add new teenager safeguards to its artificial intelligence products by training systems to avoid flirty conversations and discussions of self-harm or suicide with minors, and by temporarily limiting their access to certain AI characters. In June, more than 20 consumer advocacy groups filed a complaint with the FTC and state attorneys general, alleging that AI platforms such as Meta AI Studio and Character.AI enable the unlicensed practice of medicine by hosting "therapy bots". Texas Attorney General Ken Paxton launched an investigation into Meta and Character.AI last month for allegedly misleading children with AI-generated mental health services, accusing them of deceptive trade practices and privacy violations.
[11]
OpenAI, Meta, Google face probe into chatbot safety
Why it matters: The probe highlights the growing tension between the U.S. push for AI leadership and the risks of exposing kids to untested technologies. Driving the news: The seven companies include OpenAI, Meta -- and its Instagram unit -- Alphabet (Google), xAI, Snap and Character.AI. Between the lines: The FTC says it wants to understand what safety steps these companies have taken, to evaluate how children and teens are able to interact with these tools. * The FTC said it aims to limit the potential negative effects and to apprise users and parents of the risks. * These chatbots often mimic human-like behavior, which could lead younger users to form emotional bonds, increasing those risks, per the FTC. Catch up quick: AI chatbot companions are at the center of a handful of lawsuits against OpenAI, Google and Character.AI. * Parents of teenagers are suing the companies, aiming to hold the AI makers responsible for their children's suicides. Zoom in: Companion apps are a lucrative use of generative AI because of their ability to grab and hold users' attention. * The FTC seeks to understand how these companies monetize user engagement and disclose their data collection practices. Between the lines: The probe lands as AI tools spread rapidly in schools and get a boost from federal initiatives. * "As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry," FTC chairman Andrew Ferguson said in a statement. The other side: OpenAI, Meta and Character.AI have announced initiatives to add parental controls and other teen safety features to their tools. * "We recognize the FTC has open questions and concerns, and we're committed to engaging constructively and responding to them directly," an OpenAI spokesperson told Axios. 💭 Thought bubble, from Axios tech policy reporter Ashley Gold: The inquiry, which allows the FTC to get non-public information from major tech companies, is a rare rebuke from the Trump administration over the safety implications of AI. * It will force those companies to divulge the ways they view children's safety using their chatbots. * The investigation is a sign of the administration taking recent stories of teens dying by suicide after speaking to AI chatbots seriously, but unless the FTC decides to go after any specific company behavior beyond the inquiry, not much may change.
[12]
The FTC plans to study the risk of AI chatbots to children
As artificial intelligence becomes a more embedded part of frequently visited websites and social media platforms, the Federal Trade Commission has launched an inquiry into how the seven leading chatbot makers test and monitor the impact of their products on young children and teenagers. The investigation will center on chatbots from Alphabet, Character Technologies, Instagram, Meta Platforms, OpenAI, Snap, and Elon Musk's X.AI Corp., the FTC said in a statement. "As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry," said FTC Chairman Andrew N. Ferguson. "The study we're launching today will help us better understand how AI firms are developing their products and the steps they are taking to protect children." The agency said chatbots have the capability of simulating conversations that appear to be with another human, which can create interpersonal relationships with some users. Today's advanced chatbots can mimic characteristics and emotions, which could make children trust them to an unhealthy degree. The inquiry is meant to help federal officials "understand what steps, if any, companies have taken to evaluate the safety of their chatbots when acting as companions, to limit the products' use by and potential negative effects on children and teens." This comes just a week after Meta announced it would adjust how its chatbots respond to teenagers, following reports the company allowed the technology to have romantic conversations with kids. Last month, there was a report about a Meta chatbot that coached teen accounts on suicide and self-harm. OpenAI, meanwhile, announced plans earlier this month to roll out new controls that will enable parents to link their accounts to teen accounts, letting them disable features and "receive notifications when the system detects their teen is in a moment of acute distress." This followed a lawsuit filed against the company and CEO Sam Altman after a teenager died by suicide with the alleged help of ChatGPT. Hannah Parker contributed to this article.
[13]
ChatGPT and Gemini makers under probe over AI chatbot risk for kids
The FTC has asked OpenAI, Google, and more to reveal how they test the safety of AI chatbots. It seems the moment of reckoning for AI chatbots is here. After numerous reports detailing the problematic behavior and deadly incidents involving children and teens' interaction with AI chatbots, the US government is finally intervening. The Federal Trade Commission (FTC) has today asked the makers of popular AI chatbots to detail how exactly they test and assess the suitability of these "AI companions for children." What's happening? Highlighting how the likes of ChatGPT, Gemini, and Meta can mimic human-like communication and personal relationships, the agency notes that these AI chatbots nudge teens and children into building trust and relationships. The FTC now seeks to understand how the companies behind these tools evaluate the safety aspect and limit the negative impact on the young audience. In a letter addressed to the tech giants developing AI chatbots, the FTC has asked them about the intended audience of their AI companions, the risks they pose, and how the data is handled. The agency has also sought clarification on how these companies "monetize user engagement; process user inputs; share user data with third parties; generate outputs; measure, test, and monitor for negative impacts before and after deployment; develop and approve characters, whether company- or user-created." The agency is asking Meta, Alphabet (Google's parent company), Instagram, Snap, xAI, and OpenAI to answer its queries regarding AI chatbots and whether they are in compliance with the Children's Online Privacy Protection Act Rule. "The study we're launching today will help us better understand how AI firms are developing their products and the steps they are taking to protect children," FTC Chairman Andrew N. Ferguson said in a statement. There's more action brewing The FTC's probe is a big step toward holding AI companies accountable for the safety of their chatbots. Earlier this month, an investigation by the non-profit Common Sense Media found that Google's Gemini chatbot is a high-risk tool for kids and teens. In the tests, Gemini was seen doling out content related to sex, drugs, and alcohol, as well as unsafe mental health suggestions, to young users. Meta's AI chatbot was spotted supporting suicide plans a few weeks ago. Elsewhere, the state of California passed a bill that aims to regulate AI chatbots. The SB 243 bill was moved forward with bipartisan support, and it seeks to require AI companies to build safety protocols and to be held accountable if they harm users. The bill also mandates "AI companion" chatbots to issue recurring warnings about their risks and annual transparency disclosures. In response to recent incidents in which lives have been lost under the influence of AI chatbots, OpenAI says ChatGPT will soon get parental controls and a warning system for guardians when their young wards show signs of serious distress. Meta has also made changes so its AI chatbots avoid talking about sensitive topics.
[14]
FTC launches inquiry into AI chatbot companions and their effects on children
The Federal Trade Commission has started an inquiry into several social media and artificial intelligence companies, including OpenAI and Meta, about the potential harms to children and teenagers who use their chatbots as companions. On Thursday, the FTC said it has sent letters to Google parent Alphabet, Facebook and Instagram parent Meta Platforms, Snap, Character Technologies, ChatGPT maker OpenAI and xAI. The FTC said it wants to understand what steps, if any, companies have taken to evaluate the safety of their chatbots when acting as companions, to limit the products' use by and potential negative effects on children and teens, and to apprise users and parents of the risks associated with the chatbots. The inquiry comes after OpenAI said it plans to make changes to ChatGPT safeguards for vulnerable people, including adding extra protections for those under 18 years old, after the parents of a teen boy who died by suicide in April sued, alleging the artificial intelligence chatbot led their teen to take his own life. More children are now using AI chatbots for everything -- from homework help to personal advice, emotional support and everyday decision-making. That's despite research on the harms of chatbots, which have been shown to give kids dangerous advice about topics such as drugs, alcohol and eating disorders. "As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry," said FTC Chairman Andrew N. Ferguson in a statement. He added, "The study we're launching today will help us better understand how AI firms are developing their products and the steps they are taking to protect children." In a statement to CBS News, Character.AI said it is looking forward to "collaborating with the FTC on this inquiry and providing insight on the consumer AI industry and the space's rapidly evolving technology." Meta declined to comment on the FTC inquiry. The company has been working on making sure its AI chatbots are safe and age appropriate for children, a spokesperson said. OpenAI said that it's prioritizing "making ChatGPT helpful and safe for everyone, and we know safety matters above all else when young people are involved. We recognize the FTC has open questions and concerns, and we're committed to engaging constructively and responding to them directly." In an email to CBS News, Snap said, "We share the FTC's focus on ensuring the thoughtful development of generative AI, and look forward to working with the Commission on AI policy that bolsters U.S. innovation while protecting our community." Alphabet and xAI did not immediately respond to messages for comment. OpenAI and Meta earlier this month announced changes to how their chatbots respond to teenagers asking questions about suicide or showing signs of mental and emotional distress. OpenAI said it is rolling out new controls enabling parents to link their accounts to their teen's account. Parents can choose which features to disable and "receive notifications when the system detects their teen is in a moment of acute distress," according to a company blog post that says the changes will go into effect this fall. Regardless of a user's age, the company says its chatbots will attempt to redirect the most distressing conversations to more capable AI models that can provide a better response. 
Meta also said it is now blocking its chatbots from talking with teens about self-harm, suicide, disordered eating and inappropriate romantic conversations, and instead directs them to expert resources. Meta already offers parental controls on teen accounts. If you or someone you know is in emotional distress or a suicidal crisis, you can reach the 988 Suicide & Crisis Lifeline by calling or texting 988. You can also chat with the 988 Suicide & Crisis Lifeline online.
[15]
FTC targets Google, Meta, X, and others with inquiry into AI chatbot safety: 'Protecting kids online is a top priority'
The Federal Trade Commission wants to know what chatbot-makers are doing to "measure, test, and monitor potentially negative impacts" of their products. The US Federal Trade Commission has launched an inquiry into "AI chatbots acting as companions," seeking to determine how companies including Google, Meta, OpenAI, and X "measure, test, and monitor potentially negative impacts of this technology on children and teens." The rise of AI-powered chatbots has been accompanied by disturbing and sometimes horrific stories about their interactions with, and impact on, children: It came to light in August that Meta's AI rules permitted 'sensual' chats with kids until a journalist started asking questions; shortly after that revelation, the parents of a teen who died by suicide sued OpenAI over allegations that ChatGPT encouraged him to do so and even provided instructions. Chatbots, the FTC said, are designed to mimic human behaviors and "communicate like a friend or confidant, which may prompt some users, especially children and teens, to trust and form relationships with chatbots." Because of that, and -- one would assume -- the recent uptick in awful outcomes from their use, the agency wants to know what the companies that make chatbots are doing to protect their users. "Protecting kids online is a top priority for the Trump-Vance FTC, and so is fostering innovation in critical sectors of our economy," FTC chairman Andrew N. Ferguson said. "As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry. The study we're launching today will help us better understand how AI firms are developing their products and the steps they are taking to protect children." The order, which seeks information on subjects like how they monetize user engagement, process inputs and generate outputs, develop and approve chatbot "characters," and "mitigate negative impacts, particularly to children," is being issued to seven companies: Alphabet, Character Technologies, Instagram, Meta, OpenAI, Snap, and xAI. "The study the Commission authorizes today, while not undertaken in service of a specific law enforcement purpose, will help the Commission better understand the fast-moving technological environment surrounding chatbots and inform policymakers confronting similar challenges," FTC commissioner Mark R. Meador said in a statement. "The need for such understanding will only grow with time. For all their uncanny ability to simulate human cognition, these chatbots are products like any other, and those who make them available have a responsibility to comply with the consumer protection laws." The companies subject to the FTC's order have until September 25 "to discuss the timing and format of [their] submission."
[16]
FTC launches inquiry into AI chatbots acting as companions, their effects on children
The Federal Trade Commission has launched an inquiry into several social media and artificial intelligence companies about the potential harms to children and teenagers who use their AI chatbots as companions. The FTC said Thursday it has sent letters to Google parent Alphabet, Facebook and Instagram parent Meta Platforms, Snap, Character Technologies, ChatGPT maker OpenAI and xAI. The FTC said it wants to understand what steps, if any, companies have taken to evaluate the safety of their chatbots when acting as companions, to limit the products' use by and potential negative effects on children and teens, and to apprise users and parents of the risks associated with the chatbots. EDITOR'S NOTE -- This story includes discussion of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988. The move comes as a growing number of kids use AI chatbots for everything -- from homework help to personal advice, emotional support and everyday decision-making. That's despite research on the harms of chatbots, which have been shown to give kids dangerous advice about topics such as drugs, alcohol and eating disorders. The mother of a teenage boy in Florida who killed himself after developing what she described as an emotionally and sexually abusive relationship with a chatbot has filed a wrongful death lawsuit against Character.AI. And the parents of 16-year-old Adam Raine recently sued OpenAI and its CEO Sam Altman, alleging that ChatGPT coached the California boy in planning and taking his own life earlier this year. Character.AI said it is looking forward to "collaborating with the FTC on this inquiry and providing insight on the consumer AI industry and the space's rapidly evolving technology." "We have invested a tremendous amount of resources in Trust and Safety, especially for a startup. In the past year we've rolled out many substantive safety features, including an entirely new under-18 experience and a Parental Insights feature," the company said. "We have prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction." Meta declined to comment on the inquiry and Alphabet, Snap, OpenAI and X.AI did not immediately respond to messages for comment. OpenAI and Meta earlier this month announced changes to how their chatbots respond to teenagers asking questions about suicide or showing signs of mental and emotional distress. OpenAI said it is rolling out new controls enabling parents to link their accounts to their teen's account. Parents can choose which features to disable and "receive notifications when the system detects their teen is in a moment of acute distress," according to a company blog post that says the changes will go into effect this fall. Regardless of a user's age, the company says its chatbots will attempt to redirect the most distressing conversations to more capable AI models that can provide a better response. Meta also said it is now blocking its chatbots from talking with teens about self-harm, suicide, disordered eating and inappropriate romantic conversations, and instead directs them to expert resources. Meta already offers parental controls on teen accounts.
[17]
Meta, OpenAI, Google face FTC inquiry on chatbot impact on kids
The Federal Trade Commission ordered Google, OpenAI, Meta and four other makers of artificial intelligence chatbots to turn over information about the impacts of their technologies on kids. The antitrust and consumer protection agency said Thursday that it sent the orders to gather information to study how firms measure, test and monitor their chatbots and what steps they have taken to limit their use by kids and teens. The seven companies include Google; OpenAI; Meta and its Instagram unit; Snap; Elon Musk's xAI; and Character Technologies, the developer of Character.AI. Chatbot developers face intensifying scrutiny over whether they're doing enough to ensure safety of their services and prevent users from engaging in dangerous behavior. Last month, the parents of a California high school student sued OpenAI, alleging that its ChatGPT isolated their son from family and helped him plan his suicide in April. The company said it has extended its sympathies to the family and is reviewing the complaint. Google and Snap didn't have an immediate comment, while OpenAI, xAI and Character.AI didn't immediately respond to requests. Meta declined to comment. The company has taken steps recently aimed at ensuring that chatbots avoid engaging with minors on topics including self-harm and suicide. Under U.S. law, technology companies are barred from collecting data about children under the age of 13 without parental permission. For years, members of Congress have sought to extend those protections to older teens, though so far no legislation has managed to advance. The FTC is conducting the inquiry under its so-called 6(b) authority that allows it to issue subpoenas to conduct market studies. The agency generally issues a report on its findings after analyzing the information from companies, though that process can take years to complete. Although the information is collected for research purposes, the FTC can use any details it gleans to open official investigations or aid in existing probes. Since 2023, the agency has been probing whether OpenAI has violated consumer protection laws with its popular ChatGPT conversational AI bot. The agency, currently helmed entirely by Republicans after President Donald Trump sought to remove the FTC's Democrats earlier this year, voted 3-0 to open the study. In statements, two of the GOP members emphasized that the study comports with Trump's AI action plan by aiding policymakers in better understanding the complex technology. They also cited a number of recent news reports about teens and kids who turned to chatbots to discuss suicidal thoughts and romance or sex.
[18]
FTC investigating AI chatbot risks to kids
The Federal Trade Commission (FTC) announced Thursday that it is launching an inquiry into artificial intelligence (AI) chatbots, requesting information from several leading tech firms about how they evaluate and limit potential harms to children. The agency is sending letters to Google's parent company Alphabet, Instagram, Meta, OpenAI, Snap, xAI and Character Technologies, the firm behind Character.AI, in the wake of growing concerns about how AI chatbots interact with and impact young users. The letters seek information about how the firms' AI models process user inputs and generate outputs, as well as how they monitor for and mitigate negative impacts to users, including children, and inform them about the intended audience and risks of their products. "As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry," FTC Chair Andrew Ferguson said in a statement. "The study we're launching today will help us better understand how AI firms are developing their products and the steps they are taking to protect children," he added. The inquiry follows recent concerns about Meta and OpenAI's chatbots. An internal Meta policy document made public last month indicated that the company deemed it permissible for its AI chatbot to engage in "romantic or sensual" conversations with children. The language has since been removed and Meta announced changes to how it approaches teen chatbot users, limiting conversations about self-harm, suicide and disordered eating, in addition to potentially inappropriate romantic discussions. OpenAI is facing a lawsuit over its chatbot, which the family of a 16-year-old boy alleges encouraged him to take his own life. The AI firm similarly announced it would be making adjustments to its chatbots to re-route sensitive conversations to particular models and strengthen protections for teens. "The need for such understanding will only grow with time," FTC Commissioner Mark Meador said in a statement Thursday. "For all their uncanny ability to simulate human cognition, these chatbots are products like any other, and those who make them available have a responsibility to comply with the consumer protection laws."
[19]
FTC Prepares to Grill AI Companies Over Impact on Children, WSJ Reports
(Reuters) - The U.S. Federal Trade Commission plans to study the impact of artificial intelligence chatbots on children's mental health and request documents from tech companies, the Wall Street Journal reported on Thursday. The agency is preparing letters to companies operating popular chatbots including ChatGPT maker OpenAI, Meta Platforms and Character.AI, requiring them to turn over documents to the FTC, the report said, quoting administration officials. FTC, OpenAI, Meta and Character.AI did not immediately respond to Reuters requests for comment. Reuters could not independently verify the report. The news comes weeks after a Reuters exclusive report revealed how Meta allowed provocative chatbot behavior, including letting bots engage in "conversations that are romantic or sensual." Last week, the social media company said it would add new teenager safeguards to its artificial intelligence products by training systems to avoid flirty conversations and discussions of self-harm or suicide with minors, and by temporarily limiting their access to certain AI characters.
[20]
FTC launches inquiry into AI chatbots acting as companions and their effects on children
The Federal Trade Commission has launched an inquiry into several social media and artificial intelligence companies about the potential harms to children and teenagers who use their AI chatbots as companions. The FTC said Thursday it has sent letters to Google parent Alphabet, Facebook and Instagram parent Meta Platforms, Snap, Character Technologies, ChatGPT maker OpenAI and xAI. The FTC said it wants to understand what steps, if any, companies have taken to evaluate the safety of their chatbots when acting as companions, to limit the products' use by and potential negative effects on children and teens, and to apprise users and parents of the risks associated with the chatbots. EDITOR'S NOTE -- This story includes discussion of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988. The move comes as a growing number of kids use AI chatbots for everything -- from homework help to personal advice, emotional support and everyday decision-making. That's despite research on the harms of chatbots, which have been shown to give kids dangerous advice about topics such as drugs, alcohol and eating disorders. The mother of a teenage boy in Florida who killed himself after developing what she described as an emotionally and sexually abusive relationship with a chatbot has filed a wrongful death lawsuit against Character.AI. And the parents of 16-year-old Adam Raine recently sued OpenAI and its CEO Sam Altman, alleging that ChatGPT coached the California boy in planning and taking his own life earlier this year. Character.AI said it is looking forward to "collaborating with the FTC on this inquiry and providing insight on the consumer AI industry and the space's rapidly evolving technology." "We have invested a tremendous amount of resources in Trust and Safety, especially for a startup. In the past year we've rolled out many substantive safety features, including an entirely new under-18 experience and a Parental Insights feature," the company said. "We have prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction." Meta declined to comment on the inquiry and Alphabet, Snap, OpenAI and X.AI did not immediately respond to messages for comment. OpenAI and Meta earlier this month announced changes to how their chatbots respond to teenagers asking questions about suicide or showing signs of mental and emotional distress. OpenAI said it is rolling out new controls enabling parents to link their accounts to their teen's account. Parents can choose which features to disable and "receive notifications when the system detects their teen is in a moment of acute distress," according to a company blog post that says the changes will go into effect this fall. Regardless of a user's age, the company says its chatbots will attempt to redirect the most distressing conversations to more capable AI models that can provide a better response. Meta also said it is now blocking its chatbots from talking with teens about self-harm, suicide, disordered eating and inappropriate romantic conversations, and instead directs them to expert resources. Meta already offers parental controls on teen accounts.
[21]
Google, Meta, OpenAI face FTC inquiry on chatbot impact on kids - The Economic Times
The Federal Trade Commission ordered Alphabet Inc.'s Google, OpenAI Inc., Meta Platforms Inc. and four other makers of artificial intelligence chatbots to turn over information about the impacts of their technologies on kids. The antitrust and consumer protection agency said Thursday that it sent the orders to gather information to study how firms measure, test and monitor their chatbots and what steps they have taken to limit their use by kids and teens. The seven companies include Google; OpenAI; Meta and its Instagram unit; Snap; Elon Musk's xAI; and Character Technologies Inc., the developer of Character.AI. Chatbot developers face intensifying scrutiny over whether they're doing enough to ensure safety of their services and prevent users from engaging in dangerous behavior. Last month, the parents of a California high school student sued OpenAI, alleging that its ChatGPT isolated their son from family and helped him plan his suicide in April. The company said it has extended its sympathies to the family and is reviewing the complaint. Under US law, technology companies are barred from collecting data about children under the age of 13 without parental permission. For years, members of Congress have sought to extend those protections to older teens, though so far no legislation has managed to advance. The FTC is conducting the inquiry under its so-called 6(b) authority that allows it to issue subpoenas to conduct market studies. The agency generally issues a report on its findings after analyzing the information from companies, though that process can take years to complete. Although the information is collected for research purposes, the FTC can use any details it gleans to open official investigations or aid in existing probes. Since 2023, the agency has been probing whether OpenAI has violated consumer protection laws with its popular ChatGPT conversational AI bot.
[22]
FTC Targets AI Companion Chatbots In Major Investigation Over Safety Risks, Teen Impact, And Data Privacy Issues
AI tools are being used more widely not just by companies but by everyday users, who turn to them for help with daily tasks and sometimes for companionship or more personal purposes. While OpenAI and other tech giants have warned against relying too heavily on these tools for therapy and other emotional support, regulators are now examining the impact of these chatbots, especially on children. The U.S. Federal Trade Commission (FTC) has opened a broad investigation into companies that develop AI companion chatbots, amid concerns that these platforms can have an adverse impact on young children, and wants more extensive information on how user data is being handled. The inquiry targets seven big companies, including Google, Meta, OpenAI, Snap, xAI, and Character.AI. The FTC has highlighted major apprehensions surrounding teenagers' safety and mental health when it comes to these AI companion chatbots. While platforms built around AI are generally meant to foster productivity and provide aid, companion bots have become controversial: they tend to mimic human emotional bonds and provide guidance to young users, sometimes even playing out romantic interactions. This format is appealing to a younger audience but also poses a greater risk, especially when necessary safety rails are not in place. The commission, as a result, now requires these tech giants to provide detailed information on how the chatbots are built and monitored. This includes disclosure of how information is collected, what safety filters are in place, and how inappropriate interactions are handled. It will also look into how the data is used, especially the information minors provide. The FTC is also interested in knowing how these firms monetize engagement. The tech community has long highlighted the rapid growth of AI and the need for safety guardrails to prevent misinformation from spreading and to discourage harmful behavior. The FTC's scrutiny is sorely needed: accountability must keep pace with the technology's evolution, and steps to protect user safety and privacy need to be taken promptly, before harm is normalized.
[23]
FTC Probes AI Chatbots' Impact on Child Safety | PYMNTS.com
The commission announced Thursday (Sept. 11) that it was issuing orders to seven providers of AI chatbots in search of information on how those companies measure and monitor potentially harmful impacts of the technology on young people. The companies in question are Google, Character.AI, Instagram, Meta, OpenAI, Snap and xAI. "AI chatbots may use generative artificial intelligence technology to simulate human-like communication and interpersonal relationships with users," the FTC said in a news release. "AI chatbots can effectively mimic human characteristics, emotions and intentions, and generally are designed to communicate like a friend or confidant, which may prompt some users, especially children and teens, to trust and form relationships with chatbots." According to the release, the FTC wants to know what measures, if any, these companies have taken to determine the safety of their chatbots when serving as companions. It is also seeking information on how the companies limit the products' use by and potential negative effects on children and teens, and how they inform users and parents of the risks associated with the products. "The FTC is interested in particular on the impact of these chatbots on children and what actions companies are taking to mitigate potential negative impacts, limit or restrict children's or teens' use of these platforms, or comply with the Children's Online Privacy Protection Act Rule," the news release added. As noted here last week when reports of the FTC's efforts first emerged, some companies have already tried to address this issue. For instance, OpenAI has said it would add teen accounts that can be monitored by parents. Character.AI has made similar changes, and Meta has added restrictions for people under 18 who use its AI products. Those reports came the same day First Lady Melania Trump hosted a meeting of the White House Task Force on Artificial Intelligence Education. In a news release issued before the event, Trump said the rise of AI must be managed responsibly. "During this primitive stage, it is our duty to treat AI as we would our own children -- empowering, but with watchful guidance," Trump said. "We are living in a moment of wonder, and it is our responsibility to prepare America's children." Meanwhile, Character.AI CEO Karandeep Anand said last month he foresees a future where people have AI friends. "They will not be a replacement for your real friends, but you will have AI friends, and you will be able to take learnings from those AI-friendly conversations into your real-life conversations," Anand told the Financial Times.
[24]
FTC prepares to grill AI companies over impact on children, WSJ reports - The Economic Times
The US Federal Trade Commission will investigate how AI chatbots affect children's mental health, seeking internal documents from OpenAI and other firms. The study aims to assess potential risks, ethical concerns, and safeguards in AI technology. The US Federal Trade Commission plans to study the impact of artificial intelligence chatbots on children's mental health and request documents from tech companies, the Wall Street Journal reported on Thursday. The agency is preparing letters to companies operating popular chatbots, including ChatGPT maker OpenAI, Meta Platforms and Character.AI, requiring them to turn over documents to the FTC, the report said, citing administration officials. The FTC, OpenAI, Meta and Character.AI did not immediately respond to Reuters' requests for comment. Reuters could not independently verify the report. The news comes weeks after a Reuters exclusive report revealed how Meta allowed provocative chatbot behavior, including letting bots engage in "conversations that are romantic or sensual." Last week, the social media company said it would add new teenager safeguards to its artificial intelligence products by training systems to avoid flirty conversations and discussions of self-harm or suicide with minors, and by temporarily limiting their access to certain AI characters.
[25]
FTC to Question Tech Companies About Risks Around AI-Powered Chatbots | PYMNTS.com
The study will also gather information on how AI services store and share data, Bloomberg reported Thursday (Sept. 4), citing unnamed sources. The FTC will use its authority to compel companies to turn over information related to its study and will seek information from the nine largest consumer chatbots, including those from OpenAI and Google, according to the report. Asked about the report by Bloomberg, a White House spokesperson didn't comment on a study but said the FTC is mindful of user safety when it comes to AI. "President Trump pledged to cement America's dominance in AI, cryptocurrency and other cutting-edge technologies of the future," the spokesperson said, per the report. "FTC Chairman Andrew Ferguson and the entire administration are focused on delivering on this mandate without compromising the safety and well-being of the American people." The Wall Street Journal also reported Thursday that the FTC plans to question AI companies, adding that the study will focus on chatbots' impact on children's mental health, that the White House approved the study, and that the FTC is preparing letters to OpenAI, Meta and Character.AI. The administration and lawmakers have been pressured by parents and advocacy groups to add protections for children using AI chatbots, an effort bolstered by recent reports of teenagers dying by suicide after forming relationships with chatbots, according to the report. Some tech companies have taken steps to address this issue. For example, OpenAI said it would add teen accounts that can be overseen by parents, Character.AI has made similar changes, and Meta added more restrictions for those under 18 who use its AI products, per the report. These reports came on the same day that First Lady Melania Trump hosted a meeting of the White House Task Force on Artificial Intelligence Education. In a press release issued before the event, Trump said the growth of AI must be managed responsibly. "During this primitive stage, it is our duty to treat AI as we would our own children -- empowering, but with watchful guidance," Trump said. "We are living in a moment of wonder, and it is our responsibility to prepare America's children."
[26]
FTC Probes Big Tech About Child Safety Concerns with AI Chatbots
The Federal Trade Commission has asked major tech companies with consumer-facing AI-powered chatbots to detail how they test and monitor for potential negative impacts on children and teenagers. The agency said Wednesday that it has issued inquiries to Google owner Alphabet, Instagram and Facebook owner Meta Platforms and others, asking what steps those firms are taking to evaluate the safety of chatbots, limit their use by children and keep parents up to speed on risks associated with the chatbots. "As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry," Chairman Andrew Ferguson said in a statement. Character Technologies, OpenAI, Snap and xAI have also received inquiries from the FTC. The agency said AI chatbots can effectively mimic human characteristics and are designed to communicate like a friend or confidant. Some users, particularly children and teenagers, can wind up trusting and forming relationships with these chatbots as a result, the FTC said. "The study we're launching today will help us better understand how AI firms are developing their products and the steps they are taking to protect children," Ferguson said. The Wall Street Journal reported earlier this month that the FTC was preparing to launch the study with approval from the White House.
[27]
FTC launches inquiry into AI chatbots of Alphabet, Meta, five others
(Reuters) - The U.S. Federal Trade Commission on Thursday said it is seeking information from seven companies including Alphabet, Meta and OpenAI that provide consumer-facing AI-powered chatbots, on how these firms measure, test and monitor potentially negative impacts of the technology. The FTC said it is seeking information on how the companies monetize user engagement, process user inputs and generate outputs in response to user inquiries and also use the information obtained through conversations with the chatbots. The other companies in the FTC's list are Character.AI, Instagram, Snap and xAI. Alphabet, OpenAI, Meta, Snap, xAI and Character.AI did not immediately respond to Reuters' requests for comment. An internal Meta document detailing policies on chatbot behavior permitted the company's AI creations to "engage a child in conversations that are romantic or sensual," generate false medical information and help users argue that Black people are "dumber than white people," Reuters had reported in August. (Reporting by Juby Babu in Mexico City; Editing by Maju Samuel)
The Federal Trade Commission has ordered seven major tech companies to provide information about their AI chatbot companions, focusing on potential risks to children and teens. This move comes amid growing concerns over the impact of AI on young users' mental health and safety.
The Federal Trade Commission (FTC) has initiated a significant inquiry into seven major tech companies that develop AI chatbot companions, focusing on their potential impact on children and teenagers [1][2]. The companies under scrutiny include Alphabet (Google's parent company), Meta, OpenAI, Snap, xAI, Instagram, and Character.AI [3].
The FTC is seeking information on how these companies monetize user engagement, process user inputs and handle the data their chatbots collect (particularly information provided by minors), test and monitor for potential harms, limit use by children and teens, and inform parents of the associated risks [4].
The inquiry comes in the wake of several high-profile incidents and lawsuits involving AI chatbots and their impact on young users, including reports of teenagers who died by suicide after forming relationships with chatbots and revelations that Meta's internal policies permitted bots to engage children in "romantic or sensual" conversations [1][2][4][5].
The inquiry highlights growing concerns about the potential dangers of AI chatbots, including their ability to mimic human characteristics, emotions and intentions, and to present themselves as friends or confidants whom children and teens come to trust [1].
The FTC's action is part of a broader effort by the administration and lawmakers to address the potential risks of AI technologies [2][4]. As the inquiry unfolds, it may lead to new regulations or enforcement actions to protect young users from the potential dangers of AI chatbot companions.