Curated by THEOUTPOST
On Thu, 28 Nov, 8:04 AM UTC
3 Sources
[1]
New guidelines for safely integrating AI into clinical settings
University of Texas Health Science Center at Houston, Nov 28 2024

As artificial intelligence (AI) becomes more prevalent in health care, organizations and clinicians must take steps to ensure its safe implementation and use in real-world clinical settings, according to an article co-written by Dean Sittig, PhD, professor with McWilliams School of Biomedical Informatics at UTHealth Houston, and Hardeep Singh, MD, MPH, professor at Baylor College of Medicine. The guidance was published Nov. 27, 2024, in the Journal of the American Medical Association.

"We often hear about the need for AI to be built safely, but not about how to use it safely in health care settings," said Sittig. "It is a tool that has the potential to revolutionize medical care, but without safeguards in place, AI could generate false or misleading outputs that could potentially harm patients if left unchecked."

Drawing from expert opinion, literature reviews, and experiences with health IT use and safety assessment, Sittig and Singh developed a pragmatic approach for health care organizations and clinicians to monitor and manage AI systems.

"Health care delivery organizations will need to implement robust governance systems and testing processes locally to ensure safe AI and safe use of AI so that ultimately AI can be used to improve the safety of health care and patient outcomes," Singh said. "All health care delivery organizations should check out these recommendations and start proactively preparing for AI now."

Some of the recommended actions for health care organizations are listed below:

· Review guidance published in high-quality, peer-reviewed journals and conduct rigorous real-world testing to confirm AI's safety and effectiveness.
· Establish dedicated committees with multidisciplinary experts to oversee AI system deployment and ensure adherence to safety protocols. Committee members should meet regularly to review requests for new AI applications, consider their safety and effectiveness before implementing them, and develop processes to monitor their performance.
· Formally train clinicians on AI usage and risk, and be transparent with patients when AI is part of their care decisions. This transparency is key to building trust and confidence in AI's role in health care.
· Maintain a detailed inventory of AI systems and regularly evaluate them to identify and mitigate any risks.
· Develop procedures to turn off AI systems should they malfunction, ensuring smooth transitions back to manual processes.

"Implementing AI into clinical settings should be a shared responsibility among health care providers, AI developers, and electronic health record vendors to protect patients," Sittig said. "By working together, we can build trust and promote the safe adoption of AI in health care."

Also providing input to the article were Robert Murphy, MD, associate professor and associate dean, and Debora Simmons, PhD, RN, assistant professor, both from the Department of Clinical and Health Informatics at McWilliams School of Biomedical Informatics; and Trisha Flanagan, RN, MSN.

Source: University of Texas Health Science Center at Houston

Journal reference: Sittig, D. F., & Singh, H. (2024). Recommendations to Ensure Safety of AI in Real-World Clinical Care. JAMA. doi.org/10.1001/jama.2024.24598
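The inventory and shut-off recommendations above are concrete enough to sketch in code. The following Python sketch is purely illustrative and is not taken from the JAMA article; every class, field, name, and threshold is hypothetical. It shows one way a health system's tooling might track deployed AI systems, flag lapsed safety reviews, and support an explicit switch back to manual workflows:

from dataclasses import dataclass
from datetime import date

@dataclass
class AISystem:
    # Hypothetical registry entry; fields are illustrative, not drawn
    # verbatim from the JAMA recommendations.
    name: str
    vendor: str
    clinical_use: str      # e.g. "sepsis risk prediction"
    owner: str             # accountable committee or clinician
    last_review: date      # most recent safety/performance review
    enabled: bool = True   # kill switch: False means revert to manual workflow

class AIInventory:
    """Minimal sketch of the 'maintain an inventory' and
    'procedures to turn off AI systems' recommendations."""

    def __init__(self) -> None:
        self._systems: dict[str, AISystem] = {}

    def register(self, system: AISystem) -> None:
        self._systems[system.name] = system

    def disable(self, name: str, reason: str) -> None:
        # In practice this would also notify affected clinicians and
        # trigger the documented fallback-to-manual procedure.
        self._systems[name].enabled = False
        print(f"{name} disabled ({reason}); reverting to manual workflow")

    def overdue_reviews(self, today: date, max_days: int = 90) -> list[str]:
        # Flag systems whose scheduled evaluation has lapsed.
        return [s.name for s in self._systems.values()
                if (today - s.last_review).days > max_days]

inventory = AIInventory()
inventory.register(AISystem("sepsis-alert", "ExampleVendor",
                            "sepsis risk prediction",
                            "AI Oversight Committee", date(2024, 8, 1)))
print(inventory.overdue_reviews(date(2024, 11, 28)))  # ['sepsis-alert']
inventory.disable("sepsis-alert", "performance drift detected")

A registry along these lines would give an oversight committee one place to see what is deployed, when each system was last reviewed, and a documented path for switching a malfunctioning system off.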
[2]
New guidance for ensuring AI safety in clinical care published in JAMA by UTHealth Houston, Baylor College of Medicine researchers | Newswise
As artificial intelligence (AI) becomes more prevalent in health care, organizations and clinicians must take steps to ensure its safe implementation and use in real-world clinical settings, according to an article co-written by Dean Sittig, PhD, professor with McWilliams School of Biomedical Informatics at UTHealth Houston, and Hardeep Singh, MD, MPH, professor at Baylor College of Medicine. The guidance was published Nov. 27, 2024, in the Journal of the American Medical Association.

"We often hear about the need for AI to be built safely, but not about how to use it safely in health care settings," Sittig said. "It is a tool that has the potential to revolutionize medical care, but without safeguards in place, AI could generate false or misleading outputs that could potentially harm patients if left unchecked."

Drawing from expert opinion, literature reviews, and experiences with health IT use and safety assessment, Sittig and Singh developed a pragmatic approach for health care organizations and clinicians to monitor and manage AI systems.

"Health care delivery organizations will need to implement robust governance systems and testing processes locally to ensure safe AI and safe use of AI so that ultimately AI can be used to improve the safety of health care and patient outcomes," Singh said. "All health care delivery organizations should check out these recommendations and start proactively preparing for AI now."

"Implementing AI into clinical settings should be a shared responsibility among health care providers, AI developers, and electronic health record vendors to protect patients," Sittig said. "By working together, we can build trust and promote the safe adoption of AI in health care."

Also providing input to the article were Robert Murphy, MD, associate professor and associate dean, and Debora Simmons, PhD, RN, assistant professor, both from the Department of Clinical and Health Informatics at McWilliams School of Biomedical Informatics; and Trisha Flanagan, RN, MSN.
[3]
Researchers publish new guidance for ensuring AI safety in clinical care
by Laura Frnka-Davis, University of Texas Health Science Center at Houston

As artificial intelligence (AI) becomes more prevalent in health care, organizations and clinicians must take steps to ensure its safe implementation and use in real-world clinical settings, according to an article co-written by Dean Sittig, Ph.D., professor at McWilliams School of Biomedical Informatics at UTHealth Houston, and Hardeep Singh, MD, MPH, professor at Baylor College of Medicine. The guidance was published in the Journal of the American Medical Association.

"We often hear about the need for AI to be built safely, but not about how to use it safely in health care settings," Sittig said. "It is a tool that has the potential to revolutionize medical care, but without safeguards in place, AI could generate false or misleading outputs that could potentially harm patients if left unchecked."

Drawing from expert opinion, literature reviews, and experiences with health IT use and safety assessment, Sittig and Singh developed a pragmatic approach for health care organizations and clinicians to monitor and manage AI systems.

"Health care delivery organizations will need to implement robust governance systems and testing processes locally to ensure safe AI and safe use of AI so that ultimately AI can be used to improve the safety of health care and patient outcomes," Singh said. "All health care delivery organizations should check out these recommendations and start proactively preparing for AI now."

"Implementing AI into clinical settings should be a shared responsibility among health care providers, AI developers, and electronic health record vendors to protect patients," Sittig said. "By working together, we can build trust and promote the safe adoption of AI in health care."
Researchers from UTHealth Houston and Baylor College of Medicine have published new guidance in JAMA for safely implementing and using AI in healthcare settings, emphasizing the need for robust governance and testing processes.
In a groundbreaking publication, researchers from the University of Texas Health Science Center at Houston (UTHealth Houston) and Baylor College of Medicine have outlined crucial guidelines for the safe implementation of artificial intelligence (AI) in clinical settings. The guidance, published in the Journal of the American Medical Association on November 27, 2024, addresses the growing prevalence of AI in healthcare and the need for robust safety measures [1][2][3].
The guidelines were co-authored by Dean Sittig, PhD, professor at McWilliams School of Biomedical Informatics at UTHealth Houston, and Hardeep Singh, MD, MPH, professor at Baylor College of Medicine. Their work draws from expert opinions, literature reviews, and extensive experience with health IT use and safety assessment [1][2].
The researchers have developed a pragmatic approach for healthcare organizations and clinicians to effectively monitor and manage AI systems. Some of the key recommendations include:

· Reviewing guidance published in high-quality, peer-reviewed journals and conducting rigorous real-world testing to confirm AI's safety and effectiveness
· Establishing dedicated multidisciplinary committees to oversee AI system deployment, vet new applications before implementation, and monitor their performance (see the sketch after this list)
· Formally training clinicians on AI usage and risks, and being transparent with patients when AI is part of their care decisions
· Maintaining a detailed inventory of AI systems and regularly evaluating them to identify and mitigate risks
· Developing procedures to turn off AI systems should they malfunction, ensuring smooth transitions back to manual processes
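As a purely illustrative companion to the monitoring recommendation, the hypothetical Python function below (not from the article; the name, signature, and all numbers are assumptions) shows how oversight tooling might flag a deployed model for review when its alert precision drifts below the level accepted at go-live:

# Hypothetical drift check: compare a deployed model's recent alert
# precision against the precision accepted at deployment.
def check_performance_drift(outcomes: list[tuple[bool, bool]],
                            baseline_precision: float,
                            tolerance: float = 0.05) -> bool:
    """outcomes: (model_flagged, clinician_confirmed) pairs from chart review.
    Returns True when precision drops more than `tolerance` below baseline,
    signaling the oversight committee to investigate or disable the system."""
    confirmed_flags = [confirmed for flagged, confirmed in outcomes if flagged]
    if not confirmed_flags:
        return False  # no alerts in this window; nothing to evaluate
    precision = sum(confirmed_flags) / len(confirmed_flags)
    return precision < baseline_precision - tolerance

# Example: 20 alerts in the review window, 12 confirmed by clinicians
# (precision 0.60) against a 0.75 go-live baseline -> drift is flagged.
window = [(True, True)] * 12 + [(True, False)] * 8 + [(False, False)] * 80
assert check_performance_drift(window, baseline_precision=0.75)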
Dr. Sittig stressed the importance of shared responsibility among healthcare providers, AI developers, and electronic health record vendors in implementing AI safely. "By working together, we can build trust and promote the safe adoption of AI in healthcare," he stated [1][2][3].
Dr. Singh emphasized the need for healthcare delivery organizations to implement robust governance systems and testing processes. He urged all healthcare delivery organizations to review these recommendations and start preparing proactively for AI integration [1][2].
The guidelines also benefited from input provided by Robert Murphy, MD, and Debora Simmons, PhD, RN, both from the Department of Clinical and Health Informatics at McWilliams School of Biomedical Informatics, as well as Trisha Flanagan, RN, MSN [1][2][3].
As AI continues to revolutionize medical care, these guidelines serve as a crucial framework for ensuring patient safety and building confidence in AI's role in healthcare. The researchers' work highlights the delicate balance between harnessing AI's potential and mitigating risks associated with its implementation in clinical settings.
References
[1] New guidelines for safely integrating AI into clinical settings. University of Texas Health Science Center at Houston.
[2] New guidance for ensuring AI safety in clinical care published in JAMA by UTHealth Houston, Baylor College of Medicine researchers. Newswise.
[3] Researchers publish new guidance for ensuring AI safety in clinical care. Medical Xpress.