3 Sources
[1]
Research highlights challenges in AI-assisted clinical decision-making
A collaborative team of researchers from the University of Minnesota Medical School, Stanford University, Beth Israel Deaconess Medical Center and the University of Virginia has studied how well doctors use GPT-4 -- an artificial intelligence (AI) large language model system -- for diagnosing patients. The research is published in the journal JAMA Network Open. The study was conducted with 50 U.S.-licensed physicians in family medicine, internal medicine and emergency medicine. The research team found that the availability of GPT-4 to physicians as a diagnostic aid did not significantly improve clinical reasoning compared to conventional resources. Other key findings include: GPT-4 alone scored significantly better on diagnostic performance than both clinicians using conventional diagnostic resources and clinicians assisted by GPT-4, and adding GPT-4 did not significantly enhance clinicians' diagnostic performance compared with conventional resources.

"The field of AI is expanding rapidly and impacting our lives inside and outside of medicine. It is important that we study these tools and understand how we best use them to improve the care we provide as well as the experience of providing it," said Andrew Olson, MD, a professor at the U of M Medical School and hospitalist with M Health Fairview. "This study suggests that there are opportunities for further improvement in physician-AI collaboration in clinical practice."

These results underline the complexity of integrating AI into clinical practice. While GPT-4 alone showed promising results, the integration of GPT-4 as a diagnostic aid alongside clinicians did not significantly outperform the use of conventional diagnostic resources. This suggests a nuanced potential for AI in health care, emphasizing the importance of further exploration into how AI can best support clinical practice. Further, more studies are needed to understand how clinicians should be trained to use these tools. The four collaborating institutions have launched a bi-coastal AI evaluation network -- known as ARiSE -- to further evaluate GenAI outputs in health care.
[2]
AI in healthcare: New research shows promise and limitations of physicians working with GPT-4 for decision-making
Published in JAMA Network Open, a collaborative team of researchers from the University of Minnesota Medical School, Stanford University, Beth Israel Deaconess Medical Center and the University of Virginia studied how well doctors used GPT-4 -- an artificial intelligence (AI) large language model system -- for diagnosing patients. The study was conducted with 50 U.S.-licensed physicians in family medicine, internal medicine and emergency medicine. The research team found that the availability of GPT-4 to physicians as a diagnostic aid did not significantly improve clinical reasoning compared to conventional resources. Other key findings include: GPT-4 alone scored significantly better on diagnostic performance than both clinicians using conventional diagnostic resources and clinicians assisted by GPT-4, and adding GPT-4 did not significantly enhance clinicians' diagnostic performance compared with conventional resources.

"The field of AI is expanding rapidly and impacting our lives inside and outside of medicine. It is important that we study these tools and understand how we best use them to improve the care we provide as well as the experience of providing it," said Andrew Olson, MD, a professor at the U of M Medical School and hospitalist with M Health Fairview. "This study suggests that there are opportunities for further improvement in physician-AI collaboration in clinical practice."

These results underline the complexity of integrating AI into clinical practice. While GPT-4 alone showed promising results, the integration of GPT-4 as a diagnostic aid alongside clinicians did not significantly outperform the use of conventional diagnostic resources. This suggests a nuanced potential for AI in healthcare, emphasizing the importance of further exploration into how AI can best support clinical practice. Further, more studies are needed to understand how clinicians should be trained to use these tools. The four collaborating institutions have launched a bi-coastal AI evaluation network -- known as ARiSE -- to further evaluate GenAI outputs in healthcare. Funding for this research was provided by the Gordon and Betty Moore Foundation.
[3]
Study reveals AI's potential and pitfalls in medical diagnosis
University of Minnesota Medical School, Oct 28, 2024

Published in JAMA Network Open, a collaborative team of researchers from the University of Minnesota Medical School, Stanford University, Beth Israel Deaconess Medical Center and the University of Virginia studied how well doctors used GPT-4 -- an artificial intelligence (AI) large language model system -- for diagnosing patients. The study was conducted with 50 U.S.-licensed physicians in family medicine, internal medicine and emergency medicine. The research team found that the availability of GPT-4 to physicians as a diagnostic aid did not significantly improve clinical reasoning compared to conventional resources.

Other key findings include: GPT-4 alone demonstrated significantly better scores in diagnostic performance, surpassing both clinicians using conventional diagnostic online resources and clinicians assisted by GPT-4. There was no significant enhancement in diagnostic performance when clinicians using GPT-4 were compared with clinicians using conventional diagnostic resources.

"The field of AI is expanding rapidly and impacting our lives inside and outside of medicine. It is important that we study these tools and understand how we best use them to improve the care we provide as well as the experience of providing it. This study suggests that there are opportunities for further improvement in physician-AI collaboration in clinical practice," said Andrew Olson, MD, a professor at the U of M Medical School and hospitalist with M Health Fairview.

These results underline the complexity of integrating AI into clinical practice. While GPT-4 alone showed promising results, the integration of GPT-4 as a diagnostic aid alongside clinicians did not significantly outperform the use of conventional diagnostic resources. This suggests a nuanced potential for AI in healthcare, emphasizing the importance of further exploration into how AI can best support clinical practice. Further, more studies are needed to understand how clinicians should be trained to use these tools. The four collaborating institutions have launched a bi-coastal AI evaluation network -- known as ARiSE -- to further evaluate GenAI outputs in healthcare. Funding for this research was provided by the Gordon and Betty Moore Foundation.

Source: University of Minnesota Medical School

Journal reference: Goh, E., et al. (2024). Large Language Model Influence on Diagnostic Reasoning. JAMA Network Open. doi.org/10.1001/jamanetworkopen.2024.40969
A collaborative research study explores the effectiveness of GPT-4 in assisting physicians with patient diagnosis, highlighting both the potential and limitations of AI in healthcare.
A groundbreaking study published in JAMA Network Open has shed light on the complexities of integrating artificial intelligence (AI) into clinical practice. Researchers from the University of Minnesota Medical School, Stanford University, Beth Israel Deaconess Medical Center, and the University of Virginia collaborated to investigate how effectively physicians utilize GPT-4, an advanced AI language model, for patient diagnosis [1].
The research involved 50 U.S.-licensed physicians specializing in family medicine, internal medicine, and emergency medicine. The primary objective was to assess the impact of GPT-4 on clinical reasoning and diagnostic performance. Surprisingly, the study revealed that the availability of GPT-4 as a diagnostic aid did not significantly enhance clinical reasoning compared to conventional resources [2].
Key findings from the study include: GPT-4 alone achieved significantly better diagnostic performance scores than both clinicians using conventional diagnostic online resources and clinicians assisted by GPT-4, while giving clinicians access to GPT-4 produced no significant improvement over conventional diagnostic resources [3].
Dr. Andrew Olson, a professor at the University of Minnesota Medical School and hospitalist with M Health Fairview, emphasized the importance of studying AI tools to improve patient care and the healthcare provider experience. He stated, "This study suggests that there are opportunities for further improvement in physician-AI collaboration in clinical practice" [1].
The research underscores the nuanced potential of AI in healthcare and highlights the need for further exploration into how AI can best support clinical practice. It also raises questions about the training required for clinicians to effectively utilize these tools [2].
In response to these findings, the four collaborating institutions have launched a bi-coastal AI evaluation network called ARiSE. This initiative aims to further evaluate GenAI outputs in healthcare, potentially paving the way for more effective integration of AI technologies in clinical settings [3].
The study, funded by the Gordon and Betty Moore Foundation, marks a significant step in understanding the role of AI in medical diagnosis. As the field of AI continues to expand rapidly, impacting various aspects of medicine, research like this becomes crucial in shaping the future of healthcare delivery and improving patient outcomes.