Texas Attorney General Investigates Meta and Character.AI for Misleading Mental Health Claims in AI Chatbots


Texas AG Ken Paxton launches a probe into Meta and Character.AI over AI chatbots that pose as mental health tools and may mislead children, raising concerns about data privacy and consumer protection.

Texas Attorney General Launches Investigation into AI Chatbots

Texas Attorney General Ken Paxton has initiated an investigation into Meta AI Studio and Character.AI, accusing both companies of "potentially engaging in deceptive trade practices and misleadingly marketing themselves as mental health tools"[1][2]. This probe comes in the wake of growing concerns about AI chatbots interacting with children and potentially misrepresenting themselves as legitimate mental health resources.

Source: SiliconANGLE

Allegations of Misleading Practices

The Texas AG's office claims that these companies have created AI personas that present themselves as "professional therapeutic tools, despite lacking proper medical credentials or oversight"[1]. Paxton argues that by posing as sources of emotional support, these AI platforms can mislead vulnerable users, especially children, into believing they're receiving legitimate mental health care[3].

Popular AI Personas and Their Impact

Among the millions of AI personas available on Character.AI, a user-created bot called "Psychologist" has gained significant popularity among young users[1]. While Meta doesn't explicitly offer therapy bots for children, there are no restrictions preventing minors from using the Meta AI chatbot or third-party personas for therapeutic purposes[2].

Company Responses and Disclaimers

Both Meta and Character.AI have defended their practices, stating that they clearly label their AI chatbots and include disclaimers about the chatbots' limitations:

  1. Meta spokesperson Ryan Daniels emphasized that their AIs are not licensed professionals and are designed to direct users to qualified medical or safety professionals when appropriate[1][4].
  2. Character.AI stated that their user-created characters are fictional and intended for entertainment, with prominent disclaimers in every chat[3].

However, critics argue that many children may not understand or simply ignore such disclaimers[1].

Data Privacy Concerns

Source: Digit

Paxton also raised concerns about data collection and privacy. He noted that while AI chatbots claim conversations are private, their terms of service reveal that user interactions are logged, tracked, and potentially used for targeted advertising and algorithmic development[5]. This practice raises serious concerns about privacy violations, data abuse, and false advertising[1].

Regulatory Implications

This investigation highlights the growing need for regulation in the AI chatbot industry, especially concerning interactions with minors. The probe aligns with broader efforts to protect children online, such as the Kids Online Safety Act (KOSA), which was reintroduced in the Senate in May 2025[1][4].

Next Steps in the Investigation

Paxton has issued civil investigative demands to both companies, requiring them to produce documents, data, or testimony to determine whether they have violated Texas consumer protection laws[1][2]. The outcome could have significant implications for how AI companies market their products and interact with young users.
