2 Sources
[1]
Study explores role of AI automation in psychotherapy practice
University of Utah | April 6, 2026

Psychotherapy has always been a deeply human endeavor: a patient talking, a therapist listening and responding, and healing happening through words. But with the rapid rise of conversational artificial intelligence, particularly large language models (LLMs), that paradigm is shifting fast. A team of University of Utah researchers is tackling this change, but not by asking, "Will robots replace therapists?" Rather, they explore more practical questions: What are we automating, and how much?

"The history of new technology like this is almost always about collaboration, and it's about how it supports the human expert in doing the work they can do," said Zac Imel, a professor of educational psychology and lead author of a new study titled "A Framework for Automation in Psychotherapy." "It might be useful to think about frameworks for understanding the different types of work that could be done through automation, and that's what this paper is."

The study is the result of a cross-campus collaboration among researchers from the U's College of Engineering, School of Medicine and College of Education.

Simply put, automation is when machines perform tasks humans have previously done. In therapy, that could range from a chatbot delivering prewritten coping tips to AI systems that take and organize notes, analyze therapy sessions and provide feedback to clinicians, or even talk directly to patients.

Varying degrees of automation

Co-author Vivek Srikumar uses self-driving cars as an analogy for the varying levels of automation.

"The automobile industry has been introducing driver assistance systems in our cars for many years now, and the extreme end is self-driving cars," said Srikumar, an associate professor at the Kahlert School of Computing. "This paper can be seen from that perspective. The extreme version of AI in psychotherapy is an AI therapist, but there are different levels of automation that might be associated with different amounts of risk. You might have different capabilities or assistance that is provided to therapists, to clients, to organizations by AI."

Imel and Srikumar are long-time collaborators who teamed up with Brent Kious, an associate professor of psychiatry, to craft the automation framework, which was posted in advance of publication by Current Directions in Psychological Science. The team outlined four categories, representing different levels of automation along a continuum:

Category A: Scripted systems. Content is prewritten by humans but provided to patients by chatbots that follow decision trees.

Category B: AI evaluates therapists. The AI reviews therapy sessions and gives feedback or ratings.

Category C: AI assists therapists. The AI suggests interventions, prompts, or phrasing, but a human therapist delivers care.

Category D: AI provides therapy directly. An autonomous agent generates responses and interacts with patients, possibly with supervision.

The team evaluated each category for its potential utility and risk levels, which vary widely. A scripted chatbot, an AI coaching tool for therapists, and a fully autonomous AI therapist are fundamentally different technologies with different risks. However, it's often not clear to users, or even health systems, which technology they are using.
Weighing risks and benefits

"By cataloging the various levels of automation, the same question takes on different flavors at various levels, questions about risk, questions about consent, who gets to consent and how much consent and the impact of potential mistakes and the questions about who and how much responsibility is borne by various parties," Srikumar said. "All of these things, the questions remain the same, but the impact of these questions changes."

The team is particularly interested in improving the way clinicians are evaluated and mentored to improve the level of care provided to patients.

"We are currently partnering with SafeUT, Utah's statewide text-based crisis line, to develop tools that help evaluate crisis counselors' sessions so that they can get feedback to maintain key skills and even develop new ones as we learn more about crisis counseling," Kious said.

Evaluation and training are where large language models can support therapists without coming close to replacing them, Imel said. Current methods are no match for the scale of need in mental health care.

Automating without replacing human therapists

"To evaluate a psychotherapy session is tremendously labor-intensive. It's slow, it's unreliable, it rarely gets used," Imel said. "You're not recording your sessions and then mailing them off to an expert who can listen to them and evaluate them and give you feedback and then send it back to you so you can learn from it."

Here, appropriately trained LLMs can quickly capture core components of treatment and provide that information back to therapists, often in real time.

The researchers note that anyone can now turn to ChatGPT for counseling that might resemble psychotherapy. LLMs are designed to be engaging and sound empathetic, and are trained on vast datasets, but they don't necessarily use evidence-based psychotherapy techniques. Accordingly, they carry huge risks, since they are known to fabricate information, encode biases and respond unpredictably.

"Why would one want to deploy the riskiest version of a tool when there are so many lighter versions of it that we can already deploy that are going to make life easier?" Srikumar said. "A note-taking application, for example, something that maintains notes across a session. These are already going to improve the quality of life for clinicians, the quality of service."

The team also envisions a role for AI in crisis hotlines someday.

"It's a really challenging environment where you don't know anything about the people you're talking to. They're calling in, you may only have five or six talk turns to connect with them. You have a very confined space to try and help this person and get them safe and reduce risk," Srikumar said. "What I do foresee is that future crisis counseling systems will be heavily augmented by AI because the scale is too big to be satisfied without automation."

Source: University of Utah
Journal reference: "A Framework for Automation in Psychotherapy," Current Directions in Psychological Science. DOI: 10.1177/09637214251386047
[2]
AI in the mental health care workforce is met with fear, pushback -- and enthusiasm
Artificial intelligence has arrived in the field of mental health. Large health systems and independent therapists alike have begun to adopt different AI tools to manage the delivery of mental health treatment. The speed of the adoption -- alongside disturbing incidents of individuals using general-use AI chatbots with catastrophic consequences -- is causing some concern among practitioners and researchers.

"There is a lot of fear and anxiety about AI," says psychologist Vaile Wright, senior director of health care innovation at the American Psychological Association (APA). "And in particular fear around AI replacing jobs."

Those concerns were a key issue last month, when 2,400 mental health care providers for Kaiser Permanente in Northern California and the Central Valley went on a 24-hour strike.

One of the therapists who went on strike is Ilana Marcucci-Morris. Since 2019, Marcucci-Morris had worked as a triage clinician at Kaiser Permanente's telepsychiatry intake hub. But that changed in May 2025. "I have been reassigned from triage to other duties," says Marcucci-Morris, a licensed clinical social worker based at KP in Oakland, California.

The change in her role was driven by KP's efforts to revamp its triage system, she says. "What used to always be a 10 to 15-minute screening from a licensed clinician like myself is now being conducted by unlicensed lay operators following a script," she says. "Or, an E-visit."

She and her colleagues worry that this downsizing of the triage system is paving the way for AI to take over their jobs. At Kaiser Permanente in Walnut Creek, California, the triage team of nine providers has been cut to three, says Harimandir Khalsa, a marriage and family therapist who also works as a triage clinician. "The jobs that we did [are] being handled by these telephone service representatives," says Khalsa.

The 24-hour strike on March 18 protested these changes, among other things. "Part of our unfair labor practice strike really is about the erosion of licensed triage within the health plan," says Marcucci-Morris.

"At Kaiser Permanente, our use of AI does not replace clinical expertise," Lionel Sims, senior vice president of human resources at Kaiser Permanente Northern California, said in a statement to NPR. The health system, which is both a direct care provider and an insurer, confirmed to NPR that it is assessing AI tools from a U.K. company called Limbic. "We are currently evaluating the use of Limbic to assist members in accessing care. Limbic is not in use at this time," the statement reads.

"I have not seen within mental health care any jobs be replaced by AI as of yet," says Wright of the American Psychological Association. Instead, she says, the growing adoption of AI in mental health care has been mostly limited to certain kinds of tasks. "One clear positive use case of AI tools is in the use of improving efficiencies around documentation and other automated types of activities," she says. Like billing insurance companies or updating electronic health records -- time-consuming tasks that bog therapists down.

"Most providers want to help people and when they get mired down with excessive paperwork or documentation in order to get paid, that takes away time from direct patient care," Wright adds.
"And so I do think that there are benefits to incorporating these tools into your practice based on your personal comfort level." There are nearly 40 different products with transcription and other "documentation support" services for providers, she says. One such company is Blueprint, an AI assistant that summarizes sessions, updates electronic health records, and helps individual therapists track patient progress. Other companies are building AI tools for large health systems. For example, Limbic has built AI assistants to perform a range of tasks including intake, and patient support for big health systems. "We are deployed across 63% of the U.K.'s National Health Service and we are currently serving patients in 13 U.S. states," says founder and CEO Ross Harper. One Limbic chatbot, called Limbic Care, is trained on cognitive behavioral therapy skills and provides direct patient support. "Let's imagine you're an individual," says Harper. "It's 3 a.m. in the morning on a Wednesday. You can't sleep and you think 'I may actually need some help.'" In such a scenario, a patient can connect right away to Limbic Care on the patient portal. "What Limbic Care would do is it would provide evidence-based cognitive behavioral therapy tools and techniques so that you can really begin working on the challenges that you're experiencing right there and then," says Harper. Despite the growing adoption of AI tools for administrative tasks by health systems and mental health care providers, "we're not seeing a lot of clinical use of AI today," says psychiatrist Dr. John Torous, director of digital psychiatry at Beth Israel Deaconess Medical Center in Boston. One reason, he says, is that while the AI tools are exciting, "they're not well tested." Also, "it could be very expensive to run these systems," he adds. "You need a large IT team. You need infrastructure. There's safety things that have to go in place." Most small mental health practices and community mental health centers do not have the infrastructure or expertise to use these AI platforms, he says. The APA's Wright agrees. "At this point, because there is little regulation, it is incumbent on the provider to do the legwork and the research to figure out, 'Are the tools that are on the market and available, safe and effective?'" she says. However, Torous predicts that adoption of AI will keep growing as the technology improves. "I think AI is going to transform the future of mental health care for the better," he says. "But we as the clinical community have to learn to use it and work for it. So that means there's going to be a lot more training. We have to upskill ourselves." Refusing to use the technology is no longer an option, he adds. "Because if you take this approach and companies come in with products that may be good, maybe really bad and dangerous, we won't know how to evaluate them." In fact, involving mental health care professionals in the development of AI tools will only help make them better, adds Torous. That's what the striking mental health workers at Kaiser Permanente in northern California and the Central Valley would like to see their employer do -- involve them in the development and rolling out of AI tools. "If AI is utilized, don't keep us clinicians out of the human process of engaging with our patients in determining the right level of care," says Khalsa. As the technology improves to be more useful to mental health care providers, Torous thinks human providers will likely work hand-in-hand with AI assistants. 
"What we're probably moving towards is something called a hybrid or blended model of care," he says. Providers would still treat patients and provide therapy, while AI assistants or chatbots help patients do therapy homework, practice skills, and give providers "real-time feedback" on patients. Vaile Wright of the APA sees an ongoing role for flesh-and-blood therapists. "And that's in part because there are no AI digital solutions that can replace human-driven psychotherapy or care."
University of Utah researchers have developed a framework categorizing four levels of AI automation in psychotherapy, from scripted chatbots to autonomous AI therapists. Meanwhile, 2,400 Kaiser Permanente mental health providers in Northern California and the Central Valley staged a 24-hour strike in March 2026, protesting changes they fear could lead to AI replacing licensed clinicians in triage roles.
As conversational artificial intelligence reshapes mental health treatment, University of Utah researchers have introduced a comprehensive framework for understanding the role of AI automation in psychotherapy. The study, led by Zac Imel, a professor of educational psychology, shifts the conversation away from whether AI will replace therapists entirely toward more nuanced questions about what tasks should be automated and to what degree [1].
The framework, developed through collaboration among researchers from the College of Engineering, School of Medicine, and College of Education, outlines four distinct categories of automation in mental health care. Category A involves scripted systems where chatbots deliver prewritten content through decision trees. Category B focuses on AI evaluating therapists by reviewing sessions and providing feedback. Category C encompasses AI support for human therapists, where language models suggest interventions while clinicians deliver care. Category D represents fully autonomous AI providing therapy directly to patients, possibly with supervision [1].

"The history of new technology like this is almost always about collaboration, and it's about how it supports the human expert in doing the work they can do," Imel explained. Co-author Vivek Srikumar, an associate professor at the Kahlert School of Computing, compared the varying levels of AI in the mental health care workforce to self-driving cars, noting that different automation levels carry different risks and capabilities [1].

While researchers map theoretical frameworks, real-world tensions are escalating. In March 2026, 2,400 mental health care providers at Kaiser Permanente in Northern California and the Central Valley staged a 24-hour strike, with job security emerging as a central concern [2].

Ilana Marcucci-Morris, a licensed clinical social worker who had worked as a triage clinician at Kaiser Permanente's telepsychiatry intake hub since 2019, was reassigned in May 2025. What previously required a 10- to 15-minute screening from a licensed clinician is now conducted by unlicensed lay operators following a script, or through an e-visit. At Kaiser Permanente in Walnut Creek, California, the triage team shrank from nine providers to three, according to marriage and family therapist Harimandir Khalsa [2].
Kaiser Permanente confirmed it is evaluating AI tools for mental health from the U.K. company Limbic, though the health system stated the technology is not yet in use. "At Kaiser Permanente, our use of AI does not replace clinical expertise," said Lionel Sims, senior vice president of human resources at Kaiser Permanente Northern California [2].

Despite fears about automation in mental health care, current applications largely target administrative tasks rather than direct patient care. Vaile Wright, senior director of health care innovation at the American Psychological Association, noted that AI for administrative efficiencies represents the clearest positive use case. "I have not seen within mental health care any jobs be replaced by AI as of yet," Wright said [2].

Nearly 40 different products now offer clinical documentation support, including transcription services and electronic health record updates. Companies like Blueprint provide AI assistants that summarize sessions and help individual therapists track patient progress. These tools take over time-consuming tasks that otherwise cut into direct patient care [2].

The University of Utah team is partnering with SafeUT, Utah's statewide text-based crisis line, to develop session evaluation tools that help crisis counselors maintain and develop skills. "To evaluate a psychotherapy session is tremendously labor-intensive. It's slow, it's unreliable, it rarely gets used," Imel explained. Appropriately trained language models can capture core treatment components and provide feedback to therapists quickly, often in real time [1].
Some companies are developing more advanced AI for direct patient support. Limbic, deployed across 63% of the U.K.'s National Health Service and serving patients in 13 U.S. states, has built chatbots including Limbic Care, which is trained on cognitive behavioral therapy skills. Ross Harper, Limbic's founder and CEO, described a scenario in which someone experiencing distress at 3 a.m. could immediately access evidence-based cognitive behavioral therapy tools through the patient portal [2].

The Utah researchers emphasize that different automation levels carry vastly different risks and benefits. A scripted chatbot, an AI coaching tool for therapists, and a fully autonomous AI therapist are fundamentally different technologies, yet it's often unclear to users or health systems which technology they're using. "By cataloging the various levels of automation, the same question takes on different flavors at various levels, questions about risk, questions about consent, who gets to consent and how much consent and the impact of potential mistakes," Srikumar explained [1].

As the AI framework helps clarify different automation approaches, mental health workers continue grappling with uncertainty about their professional futures. The tension between technological advancement and job security reflects broader questions about how AI will reshape patient intake, session evaluation, and the fundamental nature of therapeutic relationships in coming years.
Summarized by Navi