Illinois Bans AI in Mental Health Therapy, Sparking Nationwide Debate


Illinois becomes the third state to restrict AI use in mental health therapy, following Nevada and Utah. The ban prohibits licensed therapists from using AI for treatment decisions and client communication, raising concerns about AI's role in healthcare and potential risks.

Illinois Leads the Charge in AI Therapy Restrictions

Illinois has become the latest state to restrict the use of artificial intelligence (AI) in mental health therapy, following similar moves by Nevada and Utah. Governor JB Pritzker signed the Wellness and Oversight for Psychological Resources Act into law on August 1, 2025, prohibiting the use of AI for "mental health and therapeutic decision-making" 1.

Source: Quartz

Key Provisions of the Illinois Ban

The new law in Illinois imposes several significant restrictions:

  1. Licensed therapists are forbidden from using AI to make treatment decisions or communicate with clients 4.
  2. Companies are prohibited from marketing AI chatbots as therapy tools without the involvement of licensed professionals 3.
  3. Violations can result in civil penalties of up to $10,000 4.

However, the law still allows licensed behavioral health professionals to use AI for administrative and supplementary support services 1.

Growing Trend of State-Level AI Regulation

Other states are following suit with their own initiatives:

  • Nevada enacted a bill in June restricting AI use in schools and limiting its application by mental health care providers 1.
  • Utah passed regulations on AI-powered mental health chatbots in March 1.
  • California, New Jersey, and Pennsylvania have bills underway that would further restrict or regulate AI use in therapy 1.

Concerns Driving AI Restrictions in Mental Health

Source: Fast Company

Several factors are motivating these legislative actions:

  1. Safety and Efficacy: A Stanford University study found that AI therapy chatbots are far from ready to replace human providers, often expressing stigma and making inappropriate statements about mental health conditions 1.

  2. Privacy Issues: OpenAI CEO Sam Altman has warned that therapy-style conversations with ChatGPT may not remain private, as sensitive information shared with AI carries no legal confidentiality protections comparable to therapist-client privilege 1.

  3. Potential for Harm: Some chatbots have been found to encourage dangerous behavior, such as recommending drug use to people recovering from addiction or failing to respond appropriately to suicidal ideation 4.

  4. "AI Psychosis": Recent research suggests that heavy AI usage, particularly among youth, may be inducing psychological distress in users with no prior history of mental illness 5.

Challenges in Enforcement and Future Implications

While these bans represent a significant step in regulating AI in healthcare, experts note potential challenges:

Source: New York Post

  • Enforcement may prove difficult, particularly in determining what constitutes therapy services 4.
  • The bans cannot prevent individuals from seeking emotional support from AI chatbots independently 4.
  • The restrictions may lead to legal battles as AI services challenge professional licensing regulations 4.

As AI continues to evolve, the balance between innovation and regulation in mental health care remains a critical challenge for policymakers and healthcare professionals alike.
