5 Sources
[1]
FTC Plans Review of AI Chatbot Risks With Focus on Privacy Harms
The US Federal Trade Commission plans to study the harms to children and others of AI-powered chatbots like those offered by OpenAI, Alphabet Inc.'s Google and Meta Platforms Inc., according to people familiar with the matter. The study will focus on privacy harms and other risks to people who interact with artificial intelligence chatbots, the people said. It will seek information on how data is stored and shared by the services as well as the dangers people can face from chatbot use, said the people, who asked not to be identified discussing the unannounced study.
[2]
FTC prepares to grill AI companies over impact on children, WSJ reports
Sept 4 (Reuters) - The U.S. Federal Trade Commission is preparing to scrutinize the mental health risks of AI chatbots to children and will demand internal documents from major tech firms, including OpenAI, Meta Platforms (META.O) and Character.AI, the Wall Street Journal reported on Thursday. The agency is preparing letters to send to the companies operating popular chatbots, the report said, quoting administration officials. "Character.AI has not received a letter about the FTC study, but we welcome working with regulators and lawmakers as they begin to consider legislation for this emerging space," the company said. FTC, OpenAI and Meta did not immediately respond to Reuters requests for comments. Reuters could not independently verify the report. FTC and the entire Administration are focused on delivering on Trump's mandate "to cement America's dominance in AI, cryptocurrency, and other cutting-edge technologies of the future" without compromising the safety and well-being of the people, a White House spokesperson said. The news comes weeks after a Reuters exclusive report revealed how Meta allowed provocative chatbot behavior with children, including letting bots engage in "conversations that are romantic or sensual." Last week, the social media company said it would add new teenager safeguards to its artificial intelligence products by training systems to avoid flirty conversations and discussions of self-harm or suicide with minors, and by temporarily limiting their access to certain AI characters. In June, more than 20 consumer advocacy groups filed a complaint with the FTC and state attorneys general, alleging that AI platforms such as Meta AI Studio and Character.AI enable the unlicensed practice of medicine by hosting "therapy bots". Texas Attorney General Ken Paxton launched an investigation into Meta and Character.AI last month for allegedly misleading children with AI-generated mental health services, accusing them of deceptive trade practices and privacy violations. Reporting by Harshita Mary Varghese, Akash Sriram and Kritika Lamba in Bengaluru; Editing by Sriraj Kalluvila and Shinjini Ganguli.
[3]
FTC Prepares to Grill AI Companies Over Impact on Children, WSJ Reports
(Reuters) - The U.S. Federal Trade Commission plans to study the impact of artificial intelligence chatbots on children's mental health and request documents from tech companies, the Wall Street Journal reported on Thursday. The agency is preparing letters to companies operating popular chatbots, including ChatGPT maker OpenAI, Meta Platforms and Character.AI, requiring them to turn over documents to the FTC, the report said, citing administration officials. FTC, OpenAI, Meta and Character.AI did not immediately respond to Reuters requests for comment. Reuters could not independently verify the report. The news comes weeks after a Reuters exclusive report revealed how Meta allowed provocative chatbot behavior, including letting bots engage in "conversations that are romantic or sensual." Last week, the social media company said it would add new teenager safeguards to its artificial intelligence products by training systems to avoid flirty conversations and discussions of self-harm or suicide with minors, and by temporarily limiting their access to certain AI characters. (Reporting by Harshita Mary Varghese in Bengaluru; Editing by Sriraj Kalluvila)
[4]
FTC prepares to grill AI companies over impact on children, WSJ reports - The Economic Times
The US Federal Trade Commission will investigate how AI chatbots affect children's mental health, seeking internal documents from OpenAI and other firms. The study aims to assess potential risks, ethical concerns, and safeguards in AI technology. The US Federal Trade Commission plans to study the impact of artificial intelligence chatbots on children's mental health and request documents from tech companies, the Wall Street Journal reported on Thursday. The agency is preparing letters to companies operating popular chatbots, including ChatGPT maker OpenAI, Meta Platforms and Character.AI, requiring them to turn over documents to the FTC, the report said, citing administration officials. FTC, OpenAI, Meta and Character.AI did not immediately respond to Reuters requests for comment. Reuters could not independently verify the report. The news comes weeks after a Reuters exclusive report revealed how Meta allowed provocative chatbot behavior, including letting bots engage in "conversations that are romantic or sensual." Last week, the social media company said it would add new teenager safeguards to its artificial intelligence products by training systems to avoid flirty conversations and discussions of self-harm or suicide with minors, and by temporarily limiting their access to certain AI characters.
[5]
FTC to Question Tech Companies About Risks Around AI-Powered Chatbots | PYMNTS.com
The study will also gather information on how AI services store and share data, Bloomberg reported Thursday (Sept. 4), citing unnamed sources. The FTC will use its authority to compel companies to turn over information related to its study and will seek information from the nine largest consumer chatbots, including those from OpenAI and Google, according to the report. Asked about the report by Bloomberg, a White House spokesperson didn't comment on a study but said the FTC is mindful of user safety when it comes to AI. "President Trump pledged to cement America's dominance in AI, cryptocurrency and other cutting-edge technologies of the future," the spokesperson said, per the report. "FTC Chairman Andrew Ferguson and the entire administration are focused on delivering on this mandate without compromising the safety and well-being of the American people." The Wall Street Journal also reported Thursday that the FTC plans to question AI companies, adding that the study will focus on chatbots' impact on children's mental health, that the White House approved the study, and that the FTC is preparing letters to OpenAI, Meta and Character.AI. The administration and lawmakers have been pressured by parents and advocacy groups to add protections for children using AI chatbots, and this effort has been bolstered by recent reports of teenagers dying by suicide after forming relationships with chatbots, according to the report. Some tech companies have taken steps to address this issue. For example, OpenAI said it would add teen accounts that can be overseen by parents, Character.AI has made similar changes, and Meta added more restrictions for those under 18 who use its AI products, per the report. These reports came on the same day that First Lady Melania Trump hosted a meeting of the White House Task Force on Artificial Intelligence Education. In a press release issued before the event, Trump said the growth of AI must be managed responsibly. "During this primitive stage, it is our duty to treat AI as we would our own children -- empowering, but with watchful guidance," Trump said. "We are living in a moment of wonder, and it is our responsibility to prepare America's children."
The US Federal Trade Commission plans to study the impact of AI-powered chatbots on children's privacy and mental health, requesting documents from major tech companies like OpenAI, Meta, and Character.AI.
The U.S. Federal Trade Commission (FTC) is gearing up to launch a comprehensive study on the potential risks associated with AI-powered chatbots, with a particular focus on privacy concerns and impacts on children's mental health. This investigation comes as part of the government's efforts to address the rapidly evolving landscape of artificial intelligence and its effects on society [1][2].
The FTC plans to scrutinize major tech companies operating popular chatbots, including OpenAI (creator of ChatGPT), Meta Platforms, Google, and Character.AI. The agency will use its authority to compel these companies to turn over internal documents and information related to their AI services [1][3]. Key areas of focus for the study include:
- How chatbot services store and share user data [1][5]
- Privacy harms and other risks to people who interact with AI chatbots [1]
- The impact of chatbots on children's mental health [2][3]
The investigation follows recent revelations about potentially harmful chatbot behaviors. A Reuters report exposed how Meta allowed provocative chatbot interactions with children, including "conversations that are romantic or sensual" [2][4]. In response to growing concerns, some tech companies have already taken steps to address these issues:
- Meta said it would add new teenager safeguards by training its systems to avoid flirty conversations and discussions of self-harm or suicide with minors, and by temporarily limiting teens' access to certain AI characters [2][4]
- OpenAI said it would add teen accounts that can be overseen by parents, and Character.AI has made similar changes [5]
The FTC's study aligns with the Trump administration's focus on maintaining U.S. dominance in AI while ensuring user safety. A White House spokesperson emphasized the need to deliver on this mandate "without compromising the safety and well-being of the American people" [1][5]. The investigation has gained support from various stakeholders:
- Parents and advocacy groups have pressed the administration and lawmakers to add protections for children who use AI chatbots [5]
- In June, more than 20 consumer advocacy groups filed a complaint with the FTC and state attorneys general, alleging that platforms such as Meta AI Studio and Character.AI enable the unlicensed practice of medicine by hosting "therapy bots" [2]
- Texas Attorney General Ken Paxton launched an investigation into Meta and Character.AI last month for allegedly misleading children with AI-generated mental health services [2]
As AI technology continues to advance rapidly, this FTC study represents a significant step towards understanding and regulating its impact on society, particularly on vulnerable groups like children. The findings from this investigation could potentially shape future policies and regulations governing AI chatbots and other AI-powered services [1][3][5].
First Lady Melania Trump, hosting a meeting of the White House Task Force on Artificial Intelligence Education, emphasized the need for responsible management of AI growth, stating, "We are living in a moment of wonder, and it is our responsibility to prepare America's children" [5].
As the FTC prepares to delve into the complexities of AI chatbot risks, the tech industry, policymakers, and the public will be closely watching the outcomes of this study and its potential implications for the future of AI development and regulation.
Summarized by Navi