Legion Health's AI chatbot can now prescribe psychiatric meds in Utah's groundbreaking pilot


Utah approved Legion Health to launch an AI chatbot that can renew psychiatric medication prescriptions without a doctor—only the second such program in the U.S. The $19-per-month service targets stable patients needing refills for low-risk medications like Zoloft and Prozac, but psychiatrists warn the system may not expand access to those who need it most and raises concerns about autonomous AI in healthcare.

Legion Health Breaks New Ground with AI Prescribing in Utah

Utah has authorized Legion Health to operate an AI chatbot capable of renewing psychiatric medication prescriptions without direct physician involvement, marking only the second time the state—and the nation—has granted such clinical authority to artificial intelligence. The one-year pilot program, announced last week and set to begin in April, allows the San Francisco startup's system to handle prescription renewals for certain low-risk psychiatric medications through a $19-per-month subscription service [1]. State officials position the initiative as a potential solution to mental health provider shortages affecting up to 500,000 Utah residents, while physicians raise significant patient safety concerns about delegating this responsibility to AI [1].

Source: Futurism


The program operates within Utah's regulatory sandbox, which allows companies to test new technologies under supervision with required audits, transparency measures, and mandatory patient consent [5]. Legion Health, founded in 2021 by Princeton University classmates Yash Patel, Arthur MacWaters, and Daniel Wilson, promises Utah-based patients "fast, simple refills" that could eliminate the median 67-day wait time for psychiatry appointments [5]. The company currently operates a waitlist as it prepares to launch the service.

Strict Limits Define What AI Can and Cannot Do

The pilot program imposes deliberate restrictions on both the medications covered and patient eligibility criteria. The chatbot can renew only 15 lower-risk maintenance medications already prescribed by a human clinician, including fluoxetine (Prozac), sertraline (Zoloft), bupropion (Wellbutrin), mirtazapine, and hydroxyzine—commonly used SSRIs and other drugs to treat anxiety and depression [1][2]. The system cannot issue new prescriptions, adjust dosages, or handle medications requiring blood-test monitoring [1].

Controlled substances remain barred from the program, ruling out many ADHD medications like Adderall. The exclusion also encompasses benzodiazepines used for anxiety, antipsychotics for conditions like schizophrenia and bipolar disorder, and lithium—widely considered the gold-standard treatment for bipolar disorder [1][3]. Only stable patients with existing prescriptions qualify; anyone with a recent dose or medication change, or a psychiatric hospitalization within the last year, is excluded [1]. Patients must check in with a healthcare provider every 10 refills or after six months, whichever comes first [1].

How the System Screens for Safety and Red Flags

To use the service, patients must opt in, verify their identity, and prove they already have a prescription by providing a photo of the label or pill bottle. The AI then conducts a brief safety check, asking about symptoms, side effects, and medication efficacy. Questions about suicidal thoughts, self-harm, severe reactions, and pregnancy help identify red flags [1][5]. If any answers fall outside the pilot's low-risk criteria, the case is escalated to a human clinician before any refill is issued. Patients and pharmacists can also request human oversight at any point [1].
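The screening flow described above amounts to a rule-based triage gate: verify the existing prescription, ask a fixed set of safety questions, and route anything outside the low-risk criteria to a human. A minimal illustrative sketch of that logic—the rule names and data structure here are assumptions, not Legion Health's actual implementation:

```python
from dataclasses import dataclass, field

# Hypothetical red-flag checks, mirroring the questions reported for the pilot.
RED_FLAG_QUESTIONS = ("suicidal_thoughts", "self_harm", "severe_reaction", "pregnancy")

@dataclass
class RefillRequest:
    has_existing_prescription: bool          # verified via a photo of the label or bottle
    answers: dict = field(default_factory=dict)  # patient's safety-check responses
    human_review_requested: bool = False     # patient or pharmacist can always opt out

def triage(req: RefillRequest) -> str:
    """Return 'approve' for a low-risk refill, else 'escalate' to a clinician."""
    if not req.has_existing_prescription:
        return "escalate"                    # new prescriptions are out of scope
    if req.human_review_requested:
        return "escalate"                    # human oversight on request
    # Any red-flag answer routes the case to a human before a refill is issued.
    if any(req.answers.get(q, False) for q in RED_FLAG_QUESTIONS):
        return "escalate"
    return "approve"

print(triage(RefillRequest(True, {"suicidal_thoughts": False})))  # approve
print(triage(RefillRequest(True, {"pregnancy": True})))           # escalate
```

The point of the sketch is the fail-safe default: every path that is not affirmatively low-risk ends in escalation rather than approval.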

The rollout follows a staged approach with built-in safeguards. The first 250 prescriptions will require mandatory review by a licensed physician, and the system must achieve a 98% approval rate before operating without immediate oversight [3]. The next 1,000 prescriptions will be reviewed after the fact, followed by independent operation with ongoing audits and monthly reporting to Utah regulators [4][5].
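The staged rollout reads as a simple state machine keyed to cumulative prescription count and physician approval rate. An illustrative sketch under stated assumptions (the stage names and exact transition logic are mine, not the regulator's):

```python
def oversight_stage(rx_count: int, approval_rate: float) -> str:
    """Map cumulative prescriptions to the review regime described for the pilot.

    rx_count: prescriptions issued so far.
    approval_rate: fraction of the initial reviewed batch approved by physicians.
    Stage names are illustrative assumptions.
    """
    if rx_count < 250:
        return "pre_review"               # every refill reviewed by a physician first
    if approval_rate < 0.98:
        return "pre_review"               # 98% bar not met: stay under direct oversight
    if rx_count < 250 + 1000:
        return "post_hoc_review"          # next 1,000 reviewed after the fact
    return "independent_with_audits"      # ongoing audits, monthly reporting

print(oversight_stage(100, 0.0))          # pre_review
print(oversight_stage(600, 0.99))         # post_hoc_review
print(oversight_stage(2000, 0.99))        # independent_with_audits
```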

Psychiatrists Question Whether AI Can Truly Expand Access

Despite state officials' optimism that the program could free healthcare providers to "focus their time on more complex, higher-risk patient needs," psychiatrists remain skeptical about both safety and effectiveness. Brent Kious, a psychiatrist and professor at the University of Utah School of Medicine, told The Verge he believes "the advantages of an AI-based refill system may be overstated" and suspects the tool "will not increase access for those who are most in need of care." A target patient would already need to be on a treatment plan with a psychiatrist to use the service, limiting its reach to those already in the system.

Source: PC Magazine


Kious also warned that automation could contribute to an "epidemic of over-treatment" in psychiatry, with some patients staying on medication longer than necessary [1][4]. John Torous, director of digital psychiatry at Beth Israel Deaconess Medical Center and professor of psychiatry at Harvard Medical School, raised related concerns about medication adherence and management. He noted that while some people benefit from staying on psychiatric medications long-term, others may benefit from reducing or stopping them—decisions that "require more active management, changes, and careful consideration" [1][2]. That nuanced clinical judgment becomes harder to provide when refill check-ins are outsourced to a chatbot.

Concerns About Autonomous AI in Healthcare Persist

A fundamental question looms over the pilot: whether any AI system today "can understand the unique context and factors that go into a person's medication plan," as Torous questioned [2][4]. Kious emphasized that "this is something that could be safe in principle, but it all depends on the details," noting that prescribing involves more than just checking for drug interactions [1]. Experts also cautioned that chatbots may miss important details or fail to recognize when patients answer questions inaccurately to speed up care—subtleties that human clinicians can often detect [4].

Source: The Verge


These concerns are amplified by the opacity surrounding AI systems in healthcare. "It feels a bit like alchemy right now," Kious said. "It would be better if there were greater transparency, more science, and more rigorous testing before people are asked to use this" [1][4]. Utah's first AI prescription pilot, provided by Doctronic and launched in December to renew prescriptions for cholesterol and blood pressure medications, quickly demonstrated vulnerabilities: security researchers found it could be manipulated into spreading conspiracy theories about vaccines, recommending methamphetamine as a treatment for social withdrawal, and tripling a patient's opioid dosage [3][4]. A study published last year found that large language models in healthcare settings are extremely susceptible to jailbreak attacks [3].

The Path Forward and Nationwide Ambitions

Arthur MacWaters, Legion Health's cofounder and president, frames the initiative as addressing a critical bottleneck in mental health care. Estimates suggest that 40% to 50% of patients do not consistently take psychiatric medications as prescribed, increasing the risk of relapse and hospitalization [5]. "We see this as critical to expand access to hundreds of thousands of people in Utah who live in mental health shortage areas, as well as an important proving ground for AI in medicine," MacWaters told The Verge [4]. The company aims for a nationwide rollout before the end of this year, with MacWaters predicting the chatbot could be available in every U.S. state "very quickly" [2].

MacWaters has positioned the long-term vision as building an "AI doctor" that works alongside human physicians and clinics to "handle specific clinical tasks safely, transparently, and at scale," predicting that "every patient is going to have AI working on their behalf in five years" [5]. Utah's regulator will gather qualitative and quantitative data on the program's real-world impact over the year-long experiment before making any permanent changes to state law [2]. For now, Torous advised patients to continue seeking advice from a human clinician [4]. The pilot represents a test case for whether AI can safely automate even the most routine aspects of psychiatric care—or whether addressing mental health shortages requires solutions that keep human judgment at the center of treatment decisions.
