5 Sources
[1]
Texas attorney general accuses Meta, Character.AI of misleading kids with mental health claims | TechCrunch
Texas Attorney General Ken Paxton has launched an investigation into both Meta AI Studio and Character.AI for "potentially engaging in deceptive trade practices and misleadingly marketing themselves as mental health tools," according to a press release issued Monday.

"In today's digital age, we must continue to fight to protect Texas kids from deceptive and exploitative technology," Paxton is quoted as saying. "By posing as sources of emotional support, AI platforms can mislead vulnerable users, especially children, into believing they're receiving legitimate mental health care. In reality, they're often being fed recycled, generic responses engineered to align with harvested personal data and disguised as therapeutic advice."

The probe comes a few days after Senator Josh Hawley announced an investigation into Meta following a report that found its AI chatbots were interacting inappropriately with children, including by flirting.

The Texas AG's office has accused Meta and Character.AI of creating AI personas that present as "professional therapeutic tools, despite lacking proper medical credentials or oversight." Among the millions of AI personas available on Character.AI, one user-created bot called Psychologist has seen high demand among the startup's young users. Meanwhile, Meta doesn't offer therapy bots for kids, but there's nothing stopping children from using the Meta AI chatbot or one of the personas created by third parties for therapeutic purposes.

"We clearly label AIs, and to help people better understand their limitations, we include a disclaimer that responses are generated by AI -- not people," Meta spokesperson Ryan Daniels told TechCrunch. "These AIs aren't licensed professionals and our models are designed to direct users to seek qualified medical or safety professionals when appropriate." However, many children may not understand -- or may simply ignore -- such disclaimers. We have asked Meta what additional safeguards it has in place to protect minors using its chatbots.

In his statement, Paxton also observed that though AI chatbots assert confidentiality, their "terms of service reveal that user interactions are logged, tracked, and exploited for targeted advertising and algorithmic development, raising serious concerns about privacy violations, data abuse, and false advertising."

According to Meta's privacy policy, the company collects prompts, feedback, and other interactions with AI chatbots and across Meta services to "improve AIs and related technology." The policy doesn't explicitly mention advertising, but it does state that information can be shared with third parties, like search engines, for "more personalized outputs." Given Meta's ad-based business model, this effectively translates to targeted advertising.

Character.AI's privacy policy likewise notes that the startup logs identifiers, demographics, location information, and other details about the user, including browsing behavior and the platforms on which they use the app. It tracks users across ads on TikTok, YouTube, Reddit, Facebook, Instagram, and Discord, which it may link to a user's account. This information is used to train AI, tailor the service to personal preferences, and provide targeted advertising, including by sharing data with advertisers and analytics providers. We have asked Meta and Character.AI whether such tracking is done on children, too, and will update this story if we hear back.

Both Meta and Character.AI say their services aren't designed for children under 13. That said, Meta has come under fire for failing to police accounts created by kids under 13, and Character.AI's kid-friendly characters are clearly designed to attract younger users. The startup's CEO, Karandeep Anand, has even said that his six-year-old daughter uses the platform's chatbots.

That type of data collection, targeted advertising, and algorithmic exploitation is exactly what legislation like KOSA (the Kids Online Safety Act) is meant to protect against. KOSA was teed up to pass last year with strong bipartisan support, but it stalled after a major push from tech industry lobbyists. Meta in particular deployed a formidable lobbying machine, warning lawmakers that the bill's broad mandates would undercut its business model. KOSA was reintroduced to the Senate in May 2025 by Senators Marsha Blackburn (R-TN) and Richard Blumenthal (D-CT).

Paxton has issued civil investigative demands -- legal orders that require a company to produce documents, data, or testimony during a government probe -- to the companies to determine whether they have violated Texas consumer protection laws.
[2]
Texas AG accuses Meta, Character.AI of misleading kids with mental health claims | TechCrunch
[3]
Texas AG to investigate Meta and Character.AI over misleading mental health claims
Texas Attorney General Ken Paxton has announced plans to investigate both Meta AI Studio and Character.AI for offering AI chatbots that can claim to be health tools, and for potentially misusing data collected from underage users. Paxton says that AI chatbots from either platform "can present themselves as professional therapeutic tools," to the point of lying about their qualifications. That behavior can leave younger users vulnerable to misleading and inaccurate information.

Because AI platforms often rely on user prompts as another source of training data, either company could also be violating young users' privacy and misusing their data. This is of particular interest in Texas, where the SCOPE Act places specific limits on what companies can do with data harvested from minors, and requires platforms to offer tools so parents can manage the privacy settings of their children's accounts. For now, the Attorney General has submitted Civil Investigative Demands (CIDs) to both Meta and Character.AI to see if either company is violating Texas consumer protection laws.

As TechCrunch notes, neither Meta nor Character.AI claims its AI chatbot platform should be used as a mental health tool. That doesn't prevent there from being multiple "Therapist" and "Psychologist" chatbots on Character.AI. Nor does it stop either company's chatbots from claiming to be licensed professionals, as 404 Media reported in April.

"The user-created Characters on our site are fictional, they are intended for entertainment, and we have taken robust steps to make that clear," a Character.AI spokesperson said when asked to comment on the Texas investigation. "For example, we have prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction."

Meta shared a similar sentiment in its comment. "We clearly label AIs, and to help people better understand their limitations, we include a disclaimer that responses are generated by AI -- not people," the company said. Meta AIs are also supposed to "direct users to seek qualified medical or safety professionals when appropriate." Sending people to real resources is good, but disclaimers are ultimately easy to ignore and don't act as much of an obstacle.

With regard to privacy and data usage, both Meta's privacy policy and Character.AI's privacy policy acknowledge that data is collected from users' interactions with AI. Meta collects things like prompts and feedback to improve AI performance. Character.AI logs things like identifiers and demographic information, and says that information can be used for advertising, among other applications. How either policy applies to children, and fits with Texas' SCOPE Act, will likely depend on how easy it is to make an account.
[4]
AI chatbot scrutiny intensifies as Texas attorney general launches probe into Meta and Character.AI over misleading mental health claims - SiliconANGLE
Texas Attorney General Ken Paxton today announced plans to launch a probe into Meta Platforms Inc. and Character.AI over the companies' chatbots being used by young people as mental health tools. Paxton believes the AI tools could be used by vulnerable children who may believe the bots represent professional care.

In a press release, his office said, "AI-driven chatbots often go beyond simply offering generic advice and have been shown to impersonate licensed mental health professionals, fabricate qualifications, and claim to provide private, trustworthy counseling services." Moreover, said the office, the sometimes very personal conversations children have with the chatbots "are logged, tracked, and exploited for targeted advertising and algorithmic development, raising serious concerns about privacy violations, data abuse, and false advertising."

The Attorney General has now issued Civil Investigative Demands to both firms to determine if they have violated Texas consumer protection laws, including through fraudulent claims, privacy misrepresentations, and the concealment of material data usage. "In today's digital age, we must continue to fight to protect Texas kids from deceptive and exploitative technology," said Paxton. "By posing as sources of emotional support, AI platforms can mislead vulnerable users, especially children, into believing they're receiving legitimate mental health care. In reality, they're often being fed recycled, generic responses engineered to align with harvested personal data and disguised as therapeutic advice."

Neither Meta nor Character.AI offers chatbots that claim to be therapists. One of the many user-created characters on Character.AI is named Psychologist, but the company said it makes clear that all the bots on its website are "fictional" and "intended for entertainment." A spokesperson explained, "We have prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction." Meta also said that it consistently lets users know that chatbot responses are generated by a machine, not a human, while its AI will generally direct "users to seek qualified medical or safety professionals when appropriate."

The news comes after Republican Senator Josh Hawley said he was launching an investigation into Meta after a Reuters investigation revealed that the company explicitly permitted its chatbots to "engage a child in conversations that are romantic or sensual." Hawley said this was "grounds for an immediate congressional investigation," later writing on X, "Is there anything - ANYTHING - Big Tech won't do for a quick buck?" In response, Meta claimed the internal document Reuters uncovered and the statements within were "erroneous and inconsistent with our policies, and have been removed."

Democratic Senator Ron Wyden said the document in question was "deeply disturbing and wrong," adding that Section 230, a law that shields tech firms from liability, shouldn't protect companies' generative AI tools. "Meta and Zuckerberg should be held fully responsible for any harm these bots cause," he said in a statement. Republican Senator Marsha Blackburn believes the recent reports illustrate that children need to be better protected amid this surge of chatbot use. Blackburn was one of the senators who introduced the Kids Online Safety Act, KOSA, which passed last year in the Senate but failed in the U.S. House of Representatives.
[5]
Meta, Character.AI accused of misrepresenting AI as mental health care: All details here
Both companies say they display disclaimers to make it clear that their chatbots are not real people or licensed professionals.

Artificial intelligence chatbots are becoming more common, with millions of people using them for everything from fun conversations to emotional support. But concerns are growing about how these tools are marketed and whether they deceive users. Texas Attorney General Ken Paxton has launched an investigation into Meta's AI Studio and Character.AI, accusing them of presenting AI chatbots in ways that could mislead people into thinking they offer real mental health care.

"In today's digital age, we must continue to fight to protect Texas kids from deceptive and exploitative technology," Paxton was quoted as saying in a press release. "By posing as sources of emotional support, AI platforms can mislead vulnerable users, especially children, into believing they're receiving legitimate mental health care. In reality, they're often being fed recycled, generic responses engineered to align with harvested personal data and disguised as therapeutic advice."

The Texas Attorney General's office claims that Meta and Character.AI have created AI personas that appear to act like therapists, even though they lack medical training or oversight. On Character.AI, for instance, one of the most popular user-created chatbots is called Psychologist, which is often used by young users. While Meta doesn't directly offer therapy bots, kids can still use its AI chatbot or third-party personas for similar purposes.

Both companies say they display disclaimers to make it clear that their chatbots are not real people or licensed professionals. "We clearly label AIs, and to help people better understand their limitations, we include a disclaimer that responses are generated by AI -- not people," Meta spokesperson Ryan Daniels told TechCrunch. "These AIs aren't licensed professionals and our models are designed to direct users to seek qualified medical or safety professionals when appropriate." Character.AI also said that it adds extra warnings when users create bots with names like "therapist" or "doctor."

In his statement, Paxton also raised concerns about data collection. He noted that while AI chatbots claim conversations are private, their terms of service reveal that chats are logged and can be used for advertising and algorithm development.
Texas AG Ken Paxton launches probe into Meta and Character.AI for potentially misleading children with AI chatbots posing as mental health tools, raising concerns about data privacy and consumer protection.
Texas Attorney General Ken Paxton has initiated an investigation into Meta AI Studio and Character.AI, accusing both companies of "potentially engaging in deceptive trade practices and misleadingly marketing themselves as mental health tools" [1][2]. The probe comes in the wake of growing concerns about AI chatbots interacting with children and potentially misrepresenting themselves as legitimate mental health resources.

The Texas AG's office claims that these companies have created AI personas that present themselves as "professional therapeutic tools, despite lacking proper medical credentials or oversight" [1]. Paxton argues that by posing as sources of emotional support, these AI platforms can mislead vulnerable users, especially children, into believing they're receiving legitimate mental health care [3].

Among the millions of AI personas available on Character.AI, a user-created bot called "Psychologist" has gained significant popularity among young users [1]. While Meta doesn't explicitly offer therapy bots for children, there are no restrictions preventing minors from using the Meta AI chatbot or third-party personas for therapeutic purposes [2].

Both Meta and Character.AI have defended their practices, stating that they clearly label their AIs and include disclaimers about the limitations of their chatbots [1][3][4]. However, critics argue that many children may not understand or may simply ignore such disclaimers [1].

Paxton also raised concerns about data collection and privacy. He noted that while AI chatbots claim conversations are private, their terms of service reveal that user interactions are logged, tracked, and potentially used for targeted advertising and algorithmic development [5]. This practice raises serious concerns about privacy violations, data abuse, and false advertising [1].

This investigation highlights the growing need for regulation in the AI chatbot industry, especially concerning interactions with minors. The probe aligns with broader efforts to protect children online, such as the Kids Online Safety Act (KOSA), which was reintroduced to the Senate in May 2025 [1][4].

Paxton has issued civil investigative demands to both companies, requiring them to produce documents, data, or testimony to determine if they have violated Texas consumer protection laws [1][2]. The outcome could have significant implications for how AI companies market their products and interact with young users in the future.