3 Sources
[1]
New guidelines for safely integrating AI into clinical settings
University of Texas Health Science Center at Houston, Nov. 28, 2024

As artificial intelligence (AI) becomes more prevalent in health care, organizations and clinicians must take steps to ensure its safe implementation and use in real-world clinical settings, according to an article co-written by Dean Sittig, PhD, professor with McWilliams School of Biomedical Informatics at UTHealth Houston, and Hardeep Singh, MD, MPH, professor at Baylor College of Medicine. The guidance was published Nov. 27, 2024, in the Journal of the American Medical Association.

"We often hear about the need for AI to be built safely, but not about how to use it safely in health care settings," Sittig said. "It is a tool that has the potential to revolutionize medical care, but without safeguards in place, AI could generate false or misleading outputs that could potentially harm patients if left unchecked."

Drawing from expert opinion, literature reviews, and experiences with health IT use and safety assessment, Sittig and Singh developed a pragmatic approach for health care organizations and clinicians to monitor and manage AI systems.

"Health care delivery organizations will need to implement robust governance systems and testing processes locally to ensure safe AI and safe use of AI so that ultimately AI can be used to improve the safety of health care and patient outcomes," Singh said. "All health care delivery organizations should check out these recommendations and start proactively preparing for AI now."

Some of the recommended actions for health care organizations are listed below:

· Review guidance published in high-quality, peer-reviewed journals and conduct rigorous real-world testing to confirm AI's safety and effectiveness.
· Establish dedicated committees with multidisciplinary experts to oversee AI system deployment and ensure adherence to safety protocols. Committee members should meet regularly to review requests for new AI applications, consider their safety and effectiveness before implementing them, and develop processes to monitor their performance.
· Formally train clinicians on AI usage and risk, and be transparent with patients when AI is part of their care decisions. This transparency is key to building trust and confidence in AI's role in health care.
· Maintain a detailed inventory of AI systems and regularly evaluate them to identify and mitigate any risks.
· Develop procedures to turn off AI systems should they malfunction, ensuring smooth transitions back to manual processes.

"Implementing AI into clinical settings should be a shared responsibility among health care providers, AI developers, and electronic health record vendors to protect patients," Sittig said. "By working together, we can build trust and promote the safe adoption of AI in health care."

Also providing input to the article were Robert Murphy, MD, associate professor and associate dean, and Debora Simmons, PhD, RN, assistant professor, both from the Department of Clinical and Health Informatics at McWilliams School of Biomedical Informatics; and Trisha Flanagan, RN, MSN.

Source: University of Texas Health Science Center at Houston

Journal reference: Sittig, D. F., & Singh, H. (2024). Recommendations to Ensure Safety of AI in Real-World Clinical Care. JAMA. doi.org/10.1001/jama.2024.24598.
[2]
New guidance for ensuring AI safety in clinical care published in JAMA by UTHealth Houston, Baylor College of Medicine researchers | Newswise
As artificial intelligence (AI) becomes more prevalent in health care, organizations and clinicians must take steps to ensure its safe implementation and use in real-world clinical settings, according to an article co-written by Dean Sittig, PhD, professor with McWilliams School of Biomedical Informatics at UTHealth Houston, and Hardeep Singh, MD, MPH, professor at Baylor College of Medicine. The guidance was published Nov. 27, 2024, in the Journal of the American Medical Association.

"We often hear about the need for AI to be built safely, but not about how to use it safely in health care settings," Sittig said. "It is a tool that has the potential to revolutionize medical care, but without safeguards in place, AI could generate false or misleading outputs that could potentially harm patients if left unchecked."

Drawing from expert opinion, literature reviews, and experiences with health IT use and safety assessment, Sittig and Singh developed a pragmatic approach for health care organizations and clinicians to monitor and manage AI systems.

"Health care delivery organizations will need to implement robust governance systems and testing processes locally to ensure safe AI and safe use of AI so that ultimately AI can be used to improve the safety of health care and patient outcomes," Singh said. "All health care delivery organizations should check out these recommendations and start proactively preparing for AI now."

"Implementing AI into clinical settings should be a shared responsibility among health care providers, AI developers, and electronic health record vendors to protect patients," Sittig said. "By working together, we can build trust and promote the safe adoption of AI in health care."
Also providing input to the article were Robert Murphy, MD, associate professor and associate dean, and Debora Simmons, PhD, RN, assistant professor, both from the Department of Clinical and Health Informatics at McWilliams School of Biomedical Informatics; and Trisha Flanagan, RN, MSN.
[3]
Researchers publish new guidance for ensuring AI safety in clinical care
by Laura Frnka-Davis, University of Texas Health Science Center at Houston

As artificial intelligence (AI) becomes more prevalent in health care, organizations and clinicians must take steps to ensure its safe implementation and use in real-world clinical settings, according to an article co-written by Dean Sittig, Ph.D., professor at McWilliams School of Biomedical Informatics at UTHealth Houston, and Hardeep Singh, MD, MPH, professor at Baylor College of Medicine. The guidance was published in the Journal of the American Medical Association.

"We often hear about the need for AI to be built safely, but not about how to use it safely in health care settings," Sittig said. "It is a tool that has the potential to revolutionize medical care, but without safeguards in place, AI could generate false or misleading outputs that could potentially harm patients if left unchecked."

Drawing from expert opinion, literature reviews, and experiences with health IT use and safety assessment, Sittig and Singh developed a pragmatic approach for health care organizations and clinicians to monitor and manage AI systems.

"Health care delivery organizations will need to implement robust governance systems and testing processes locally to ensure safe AI and safe use of AI so that ultimately AI can be used to improve the safety of health care and patient outcomes," Singh said. "All health care delivery organizations should check out these recommendations and start proactively preparing for AI now."

"Implementing AI into clinical settings should be a shared responsibility among health care providers, AI developers, and electronic health record vendors to protect patients," Sittig said. "By working together, we can build trust and promote the safe adoption of AI in health care."
Researchers from UTHealth Houston and Baylor College of Medicine have published new guidance in JAMA for safely implementing and using AI in healthcare settings, emphasizing the need for robust governance and testing processes.
In a groundbreaking publication, researchers from the University of Texas Health Science Center at Houston (UTHealth Houston) and Baylor College of Medicine have outlined crucial guidelines for the safe implementation of artificial intelligence (AI) in clinical settings. The guidance, published in the Journal of the American Medical Association on November 27, 2024, addresses the growing prevalence of AI in healthcare and the need for robust safety measures [1][2][3].
The guidelines were co-authored by Dean Sittig, PhD, professor at McWilliams School of Biomedical Informatics at UTHealth Houston, and Hardeep Singh, MD, MPH, professor at Baylor College of Medicine. Their work draws from expert opinions, literature reviews, and extensive experience with health IT use and safety assessment [1][2].
The researchers have developed a pragmatic approach for healthcare organizations and clinicians to effectively monitor and manage AI systems. Some of the key recommendations include:

· Reviewing guidance published in high-quality, peer-reviewed journals and conducting rigorous real-world testing to confirm AI's safety and effectiveness.
· Establishing dedicated multidisciplinary committees to oversee AI system deployment, review requests for new AI applications, and monitor their performance.
· Formally training clinicians on AI usage and risk, and being transparent with patients when AI is part of their care decisions.
· Maintaining a detailed inventory of AI systems and regularly evaluating them to identify and mitigate risks.
· Developing procedures to turn off AI systems should they malfunction, ensuring smooth transitions back to manual processes.
Dr. Sittig stressed the importance of shared responsibility among healthcare providers, AI developers, and electronic health record vendors in implementing AI safely. "By working together, we can build trust and promote the safe adoption of AI in healthcare," he stated [1][2][3].
Dr. Singh emphasized the need for healthcare delivery organizations to implement robust governance systems and testing processes. He urged all healthcare delivery organizations to review these recommendations and start preparing proactively for AI integration [1][2].
The guidelines also benefited from input provided by Robert Murphy, MD, and Debora Simmons, PhD, RN, both from the Department of Clinical and Health Informatics at McWilliams School of Biomedical Informatics, and Trisha Flanagan, RN, MSN [1][2][3].
As AI continues to revolutionize medical care, these guidelines serve as a crucial framework for ensuring patient safety and building confidence in AI's role in healthcare. The researchers' work highlights the delicate balance between harnessing AI's potential and mitigating risks associated with its implementation in clinical settings.
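To make the inventory-and-oversight recommendations concrete, the sketch below shows one hypothetical way an organization might track deployed AI tools, record monitored performance, and automatically disable a tool that falls below an agreed accuracy floor (the "turn off AI systems should they malfunction" recommendation). All names, fields, and thresholds here are illustrative assumptions, not part of the published JAMA recommendations.

```python
from dataclasses import dataclass, field

@dataclass
class AITool:
    """One entry in a hypothetical inventory of deployed clinical AI systems."""
    name: str
    owner_committee: str          # the oversight committee responsible
    enabled: bool = True
    recent_accuracy: list = field(default_factory=list)

class AIInventory:
    """Minimal sketch of an AI-system inventory with automated kill-switch.

    Illustrative only: real governance would involve human review,
    audit trails, and clinically validated performance metrics.
    """
    def __init__(self, accuracy_floor: float = 0.90):
        self.accuracy_floor = accuracy_floor
        self.tools: dict = {}

    def register(self, tool: AITool) -> None:
        """Add a tool to the organization's inventory."""
        self.tools[tool.name] = tool

    def record_performance(self, name: str, accuracy: float) -> None:
        """Log a monitored accuracy value; auto-disable on malfunction."""
        tool = self.tools[name]
        tool.recent_accuracy.append(accuracy)
        if accuracy < self.accuracy_floor:
            tool.enabled = False  # revert to manual processes

    def active_tools(self) -> list:
        """Tools currently cleared for clinical use."""
        return [t.name for t in self.tools.values() if t.enabled]

inventory = AIInventory(accuracy_floor=0.90)
inventory.register(AITool("sepsis-alert", owner_committee="AI Oversight"))
inventory.record_performance("sepsis-alert", 0.95)  # above floor, stays enabled
inventory.record_performance("sepsis-alert", 0.82)  # below floor, auto-disabled
print(inventory.active_tools())  # → []
```

The design choice to disable rather than delete a tool mirrors the guidance's emphasis on smooth transitions back to manual processes: the inventory entry and its performance history remain available for the oversight committee's review.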