Curated by THEOUTPOST
On Sat, 5 Apr, 12:03 AM UTC
7 Sources
[1]
Artificial intelligence has potential to aid physician decisions during virtual urgent care
Do physicians or artificial intelligence (AI) offer better treatment recommendations for patients examined through a virtual urgent care setting? A new Cedars-Sinai study shows physicians and AI models have distinct strengths. The late-breaking study, presented at the American College of Physicians Internal Medicine Meeting and published simultaneously in the Annals of Internal Medicine, compared initial AI treatment recommendations to the final recommendations of physicians who had access to the AI recommendations but may or may not have reviewed them.

"We found that initial AI recommendations for common complaints in an urgent care setting were rated higher than final physician recommendations," said Joshua Pevnick, MD, MSHS, co-director of the Cedars-Sinai Division of Informatics, associate professor of Medicine and co-senior author of the study. "Artificial intelligence, as an example, was especially successful in flagging urinary tract infections potentially caused by antibiotic-resistant bacteria and suggesting a culture be ordered before prescribing medications." However, Pevnick said that while AI was shown to be better at identifying critical red flags, "physicians were better at eliciting a more complete history from patients and adapting their recommendations accordingly."

The retrospective study was conducted using data from Cedars-Sinai Connect, a virtual primary and urgent care program that began in 2023. An extension of Cedars-Sinai's in-person care, Cedars-Sinai Connect aims to expand virtual healthcare for patients in California through a mobile app that allows individuals to quickly and easily access Cedars-Sinai experts for acute, chronic and preventive care. The study reviewed 461 physician-managed visits with AI recommendations from June 12 through July 14, 2024. Key medical issues addressed during these virtual urgent care visits involved adults with respiratory, urinary, vaginal, vision or dental symptoms.
Patients using the mobile app initiate visits by entering their medical concerns and, for first-time users, providing demographic information. An expert AI model conducts a structured dynamic interview, gathering symptom information and medical history. On average, patients answer 25 questions in five minutes. An algorithm uses the patient's answers as well as data from the patient's electronic health record to provide initial information about conditions with related symptoms. After presenting patients with possible diagnoses to explain their symptoms, the mobile app allows patients to initiate a video visit with a physician. The algorithm also suggests diagnosis and treatment recommendations that can be viewed by the Cedars-Sinai Connect treating physician, though at the time of the study, Cedars-Sinai Connect required physicians to scroll down to view them.

"The major uncertainty of this study is whether physicians scrolled down to view the prescribing, ordering, referral or other management suggestions made by AI, and whether they incorporated these recommendations into their clinical decision-making," said Caroline Goldzweig, MD, Cedars-Sinai Medical Network chief medical officer and co-senior author of the study. "The fact that the AI recommendations were often rated as higher quality than physician decisions, however, suggests that AI decision support, when implemented effectively at the point of care, has the potential to improve clinical decision-making for common and acute conditions."

The AI system used for Cedars-Sinai Connect was developed by K Health, which created the technology to reduce the burdens of clinical intake and data entry, allowing doctors to focus more on patient care. K Health and Cedars-Sinai developed Cedars-Sinai Connect through a joint venture and collaborated on the research study. Investigators from Tel Aviv University, including first author Dan Zeltzer, PhD, also participated in the study.
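The intake flow described above -- a branching questionnaire whose answers, combined with record data, feed a recommender -- can be sketched in a few lines. This is a toy illustration only: the question branching, condition names, and the 0.8 confidence cutoff are invented for illustration and are not K Health's actual (proprietary) system.

```python
# Toy sketch of a "structured dynamic interview" followed by a hedged
# recommendation step. All branching rules and thresholds are invented.

CONFIDENCE_THRESHOLD = 0.8  # hypothetical cutoff for surfacing a suggestion


def next_question(answers):
    """Pick the next question based on answers so far (toy branching)."""
    if "symptom" not in answers:
        return "symptom", "What is your main symptom?"
    if answers["symptom"] == "dysuria" and "fever" not in answers:
        return "fever", "Do you have a fever?"
    return None  # interview complete


def recommend(answers, ehr_history):
    """Combine interview answers with record data into a suggestion."""
    if answers.get("symptom") == "dysuria":
        resistant = "prior resistant UTI" in ehr_history
        return {
            "condition": "urinary tract infection",
            "plan": "urine culture before antibiotics" if resistant
                    else "standard UTI workup",
            "confidence": 0.9 if resistant else 0.85,
        }
    return {"condition": None, "plan": None, "confidence": 0.0}


def intake(patient_answers, ehr_history):
    """Run the interview, then recommend -- or abstain when unsure."""
    answers = {}
    while (q := next_question(answers)) is not None:
        key, _text = q
        answers[key] = patient_answers[key]
    rec = recommend(answers, ehr_history)
    # Abstain below the confidence threshold rather than guess.
    if rec["confidence"] < CONFIDENCE_THRESHOLD:
        return None
    return rec
```

For example, `intake({"symptom": "dysuria", "fever": "no"}, ["prior resistant UTI"])` would suggest ordering a culture before antibiotics, echoing the resistant-UTI behavior the study describes, while an unfamiliar complaint yields no recommendation at all.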
"We put AI to the test in real-world conditions, not contrived scenarios," said Ran Shaul, co-founder and chief product officer of K Health. "In the reality of everyday primary care, there are so many variables and factors -- you're dealing with complex human beings, and any given AI has to deal with incomplete data and a very diverse set of patients." Shaul said the investigators learned that if you train the AI on the treasure trove of de-identified clinical notes and use day-to-day provider care as an always-on reinforcement learning mechanism, "you can reach the level of accuracy you would expect from a human doctor."
[2]
Cedars-Sinai study reveals distinct roles for AI and physicians in virtual care
Cedars-Sinai Medical Center, Apr 4, 2025

Journal reference: Zeltzer, D., et al. (2025). Comparison of Initial Artificial Intelligence (AI) and Final Physician Recommendations in AI-Assisted Virtual Urgent Care Visits. Annals of Internal Medicine. doi.org/10.7326/annals-24-03283.
[3]
AI-driven clinical recommendations may aid physician decision making to improve quality of care
A study comparing artificial intelligence (AI)-driven clinical recommendations to physician decision-making for common acute complaints in a virtual primary care setting found that AI support has the potential to improve quality of care when implemented responsibly. In the study, AI recommendations were largely rated as superior to physicians' decisions by expert adjudicators. Trained on a massive amount of high-quality, real-world clinical data, far beyond what any physician would see in a lifetime, the AI uses modeling to calculate probabilities for diagnoses and referrals and then derives treatment plans from these probabilities based on the latest medical guidelines and protocols. Combined with human physician discernment, this approach could improve patient care and outcomes.

The findings, presented at the breaking-news scientific plenary session "New in Annals of Internal Medicine: Hear it First from the Authors," held at the New Orleans Ernest N. Morial Convention Center during the American College of Physicians' (ACP) Internal Medicine Meeting 2025, are also published in Annals of Internal Medicine.

Researchers from Tel Aviv University, Cedars-Sinai Medical Center, and K Health, Inc. compared initial AI-driven clinical recommendations to final physician decision-making at Cedars-Sinai Connect, a virtual primary care clinic run by Cedars-Sinai that leverages the K Health Clinical AI platform. To evaluate AI versus physician performance, four expert adjudicators reviewed 461 real-life primary care visits involving acute urinary, vaginal, respiratory, eye, and dental chief complaints -- conditions for which AI has shown high diagnostic accuracy -- and used a checklist to rate both the AI and physician recommendations.
Patients described their symptoms through an AI-guided medical intake, with their medical records automatically synced from the electronic health record (EHR), allowing physicians to review the AI-generated diagnosis, treatment recommendation, and any suggested prescriptions, labs, or referrals before the virtual visit where doctors made the final clinical decision. Physicians could access the initial AI decisions, but the researchers were unable to determine whether they did so before making their final care decisions.

Overall, the expert adjudicators rated the AI's recommendations higher than the physicians' decisions. In about two in three cases, doctors made the exact same clinical decisions as the AI. In the remaining one in three cases, AI recommendations were rated as superior twice as often as they were inferior. AI received only about half as many "potentially harmful" ratings as physicians. According to the researchers, these findings demonstrate the strengths of AI and how it complements human practice. AI is designed to follow medical guidelines much more strictly and can pick up on subtle details or medical record notes that doctors may miss. However, doctors have a clear advantage in clarifying symptoms during the visit. Where AI and good doctors align is in their level of caution: if the AI is not confident, it will not make a recommendation, which is what happened in about 20% of cases.

The author of an accompanying editorial from Tufts University School of Medicine points to the study's strengths while offering notes of caution about incorporating AI into medical practice. A common-sense approach may include using AI to confirm physician diagnoses and therapeutic recommendations rather than to make them. If discrepancies exist, physicians should seek to determine why; those discrepancies can be the seeds for improving the AI tool. The author emphasizes that, to enhance clinical relevance, experienced clinicians should be involved in AI program development.
[4]
Artificial Intelligence Has Potential to Aid Physician Decisions During Virtual Urgent Care | Newswise
LOS ANGELES (April 4, 2025) -- Other authors involved in the study include Dan Zeltzer, PhD; Zehavi Kugler, MD; Lior Hayat, MD; Tamar Brufman, MD; Ran Ilan Ber, PhD; Keren Leibovich, PhD; Tom Beer, MSc; and Ilan Frank, MSc. This work was supported with funding by K Health.
[5]
AI Outperforms Urgent Care Docs, Study Says
TUESDAY, April 8, 2025 (HealthDay News) -- Cough? Sore throat? UTI? Eye infection? Artificial intelligence (AI) might one day be seeing you for these sorts of conditions, a new study says. AI programs appear to outperform human doctors when it comes to urgent care, researchers reported in the Annals of Internal Medicine. A medical AI program agreed with the clinical decisions of human doctors in about two-thirds of cases, according to a head-to-head test. In the remaining third, AI's recommendations were rated superior to those of human doctors twice as often, researchers found. "The AI diagnosis and management recommendations were more likely to be rated as optimal compared with physicians and less likely to be rated as potentially harmful," wrote the research team led by Dan Zeltzer, an associate professor of economics at Tel Aviv University in Israel. If implemented responsibly under the guidance of human doctors, AI could improve patient care by providing an instant second opinion, researchers concluded. "This study suggests that AI can enhance clinical decision making for common acute symptoms in a virtual urgent care setting," researchers wrote. For the study, they asked an AI program and human doctors to review 461 real-life primary care visits involving urinary, vaginal, respiratory, eye and dental complaints. The AI was trained on a massive amount of high-quality real-world data, far beyond what any physician would see in their lifetimes, researchers said. Four expert physicians specializing in family, internal and emergency medicine, each with at least a decade of experience, then compared the AI evaluations to those of the doctors, to see which provided the better clinical recommendations. Most of the time, the AI's clinical judgments jibed with those of human doctors, researchers found. But in the third of cases where AI and human docs didn't agree, the evaluators felt that AI did a better job twice as often. 
AI also received half as many potentially harmful ratings, compared to human doctors. This might be because AI is designed to follow medical guidelines much more strictly, and can pick up on subtle details or medical record notes that doctors might miss, researchers said. Human doctors do have an advantage in that they can ask follow-up questions to clarify a patient's symptoms during a visit, researchers noted. However, AI shares human doctors' level of caution, in that if it isn't confident in its recommendation it will not offer one, researchers said. In about 20% of cases, AI didn't offer a recommendation due to caution, researchers said. Doctors who plan to use an AI program "should know the history of any program's use, its error rate, and its possible commercial bias," Dr. Jerome Kassirer, a professor at Tufts University School of Medicine in Boston, wrote in an accompanying editorial. "I believe that clinicians should first use AI programs only to confirm their own diagnoses and therapeutic recommendations," Kassirer wrote. "If the machine output does not agree with their own judgment, they should seek to determine why the discrepancy exists. Exploration of these discrepancies can be the seeds for improving the AI tool."
[6]
An AI Outperformed Human Doctors in a New Clinical Study. And It's Humble
Imagine a patient logging into Cedars-Sinai's virtual urgent care platform, complaining of a persistent cough. The first clinician to respond isn't a human. It's an AI trained on billions of data points of real-world medical records -- asking the right questions, evaluating the patient's history, and calculating likely diagnoses before a physician ever joins the call. It's a hypothetical scenario, but one that closely mirrors how K Health's technology already works. In a study announced today by researchers from Cedars-Sinai, Tel Aviv University and health technology company K Health, a panel of expert physicians reviewed 461 patient visits in which both K Health's new diagnostic AI model and a doctor made clinical recommendations. The AI's diagnoses and suggestions for treatment matched the doctors' in roughly 68 percent of cases. In the remaining third, expert reviewers rated the AI's treatment plan as superior twice as often as the physician's. "We put AI to the test in real-world situations, not static textbook scenarios," said Ran Shaul, co-founder and chief product officer at K Health. "You're dealing with complex human beings, and AI has to deal with incomplete data and a very diverse set of patients."
[7]
AI may be better than physicians at making clinical decisions, study says
A new study by Tel Aviv University has found that artificial intelligence (AI) may be better than physicians at making clinical decisions in virtual urgent care settings - at least when it comes to common medical conditions. The study, conducted by Prof. Dan Zeltzer of the Berglas School of Economics at Tel Aviv University and published this week in the Annals of Internal Medicine, evaluated the performance of AI in Cedars-Sinai Connect, a virtual urgent care clinic in Los Angeles operated in collaboration with Israeli health tech startup K Health. The findings were also presented at the annual conference of the American College of Physicians (ACP).

Researchers analyzed 461 adult patient visits between June and July 2024, focusing on relatively common complaints such as respiratory, urinary, eye, vaginal, and dental symptoms. Each visit began with an AI-powered intake system, which generated diagnostic and treatment recommendations -- including prescriptions, lab tests, and referrals -- before the patient's video consultation with a physician.

AI scores higher than physicians in most cases

According to the study, the AI system's recommendations were rated as "optimal" in 77% of cases, compared to 67% of the decisions made by the treating physicians. The AI recommendations were judged better than the physician's in 21% of visits, while physicians outperformed the AI in 11% of cases. In the remaining 68%, both recommendations received the same rating. The ratings were based on evaluations conducted by a panel of four experienced physicians, each with over ten years of clinical practice. They assessed the quality of recommendations on a four-point scale: optimal, reasonable, inadequate, or potentially harmful. The panel also found that the AI made fewer potentially harmful recommendations -- 2.8% compared to 4.6% for physicians.
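As a rough consistency check, the reported outcome shares can be converted into approximate visit counts. This is a back-of-the-envelope sketch assuming each percentage applies to all 461 adjudicated visits; the study itself reports only the percentages quoted above.

```python
# Rough reconstruction of approximate visit counts from the reported
# percentages (assumes each rate applies to all 461 adjudicated visits).
TOTAL_VISITS = 461

rates = {
    "AI rated better than physician": 0.21,
    "physician rated better than AI": 0.11,
    "same rating": 0.68,
}

# The three outcome shares should cover every adjudicated visit.
assert abs(sum(rates.values()) - 1.0) < 1e-9

approx_counts = {k: round(v * TOTAL_VISITS) for k, v in rates.items()}
for outcome, n in approx_counts.items():
    print(f"{outcome}: ~{n} visits")

# "Potentially harmful" ratings: 2.8% (AI) vs 4.6% (physicians).
harmful_ratio = 0.028 / 0.046
print(f"harmful-rating ratio (AI/physician): {harmful_ratio:.2f}")
```

The shares sum to 100%, and the implied counts (roughly 97, 51, and 313 visits) add back up to 461, so the reported figures are internally consistent.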
Guideline adherence and data sensitivity

"The AI consistently adhered to clinical guidelines, avoided unnecessary antibiotics, and identified red flags in patient histories," said Prof. Zeltzer. "For example, it correctly flagged eye pain in a contact lens wearer as a potential sign of infection -- something that could be easily overlooked in a fast-paced virtual setting." The algorithm, trained on millions of anonymized medical records, only provides recommendations when it reaches a high confidence threshold. In around 20% of cases, the AI declined to offer guidance due to insufficient certainty.

Human judgment still has its place

While the AI demonstrated notable strengths, the study also highlighted areas where physicians maintain a distinct advantage. Human doctors were better able to interpret nuance, respond to new information during live consultations, and assess the patient's condition in real time. "In cases where a patient reported shortness of breath due to COVID-19, for instance, a physician might determine it was mild congestion and not require emergency care -- something the AI, based solely on structured inputs, might misclassify," Zeltzer explained.

Study limitations and future applications

One limitation of the study was that it did not track whether physicians reviewed the AI's recommendations before making their final decisions, making it difficult to assess the AI's influence on physician behavior. The study focused solely on comparing the quality of the recommendations themselves. "This is one of the first studies to evaluate AI in real-world medical practice -- not based on textbooks or test cases, but on actual patient visits," Zeltzer said. "The findings are promising and suggest that AI could serve as an effective clinical decision-support tool, especially in virtual care environments."

Prof. Zeltzer added that while AI is unlikely to replace physicians, it could help reduce diagnostic errors, draw attention to critical data, and speed up clinical workflows. "The future of healthcare will likely be a partnership between medical professionals and AI systems," he said.
A Cedars-Sinai study reveals that AI recommendations were often rated higher than physician decisions in virtual urgent care settings, suggesting potential for AI to enhance clinical decision-making when implemented effectively.
A groundbreaking study conducted by Cedars-Sinai has revealed that artificial intelligence (AI) may outperform human physicians in providing treatment recommendations for patients in virtual urgent care settings. The research, presented at the American College of Physicians Internal Medicine Meeting and published in the Annals of Internal Medicine, compared initial AI treatment recommendations to final recommendations made by physicians 1.
The retrospective study analyzed data from Cedars-Sinai Connect, a virtual primary and urgent care program launched in 2023. Researchers reviewed 461 physician-managed visits with AI recommendations from June 12 to July 14, 2024, focusing on adults with respiratory, urinary, vaginal, vision, or dental symptoms 2.
Key findings include:

- Expert adjudicators rated the AI's recommendations as "optimal" in 77% of cases, compared with 67% of the treating physicians' decisions.
- AI and physician decisions matched in about two-thirds of visits; where they differed, the AI was rated superior about twice as often (21% of visits) as the physicians (11%).
- The AI made fewer potentially harmful recommendations than physicians (2.8% versus 4.6%).
- The AI declined to offer a recommendation in about 20% of cases when its confidence was insufficient.

The study highlighted distinct strengths of both AI and human physicians: the AI adhered more strictly to clinical guidelines and flagged critical red flags, such as urinary tract infections potentially caused by antibiotic-resistant bacteria, while physicians were better at eliciting a complete history and adapting their recommendations during the live visit.
The AI system used in Cedars-Sinai Connect was developed by K Health, designed to reduce the burdens of clinical intake and data entry. Patients interact with the AI through a mobile app, answering an average of 25 questions in five minutes. The AI then provides initial diagnoses and treatment recommendations, which can be viewed by the treating physician 5.
Dr. Caroline Goldzweig, Cedars-Sinai Medical Network chief medical officer and co-senior author of the study, emphasized that AI decision support, when implemented effectively at the point of care, has the potential to improve clinical decision-making for common and acute conditions 1.
However, researchers noted that the major uncertainty of the study was whether physicians actually viewed and incorporated the AI recommendations into their clinical decision-making 2.
Dr. Jerome Kassirer, a professor at Tufts University School of Medicine, suggested in an accompanying editorial that clinicians should initially use AI programs to confirm their own diagnoses and therapeutic recommendations. He emphasized the importance of exploring discrepancies between AI and human judgments as a means to improve AI tools 5.
References

[1] Artificial intelligence has potential to aid physician decisions during virtual urgent care
[2] Cedars-Sinai study reveals distinct roles for AI and physicians in virtual care
[3] Medical Xpress - Medical and Health News | AI-driven clinical recommendations may aid physician decision making to improve quality of care
[4] Artificial Intelligence Has Potential to Aid Physician Decisions During Virtual Urgent Care | Newswise
[5] AI Outperforms Urgent Care Docs, Study Says (HealthDay News)
© 2025 TheOutpost.AI All rights reserved