Doctors Embrace AI in Clinical Practice, but Safety Concerns Persist

Curated by THEOUTPOST

On Tue, 5 Nov, 4:02 PM UTC

2 Sources


A recent survey reveals that one in five UK doctors are using generative AI tools in clinical practice, raising questions about patient safety and the need for proper regulations.

Widespread Adoption of AI in Healthcare

A recent survey of approximately 1,000 general practitioners (GPs) in the UK has revealed that one in five doctors are already using generative artificial intelligence (GenAI) tools, such as OpenAI's ChatGPT or Google's Gemini, in their clinical practice [1][2]. This adoption comes as healthcare systems face numerous challenges, and both doctors and policymakers view AI as a potential solution for modernizing and transforming health services.

Current Applications of GenAI in Clinical Settings

Doctors reported using GenAI for various purposes in their daily practice:

  1. Generating documentation after patient appointments (see the sketch below)
  2. Assisting with clinical decision-making processes
  3. Providing information to patients, including comprehensible discharge summaries and treatment plans
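
As a concrete illustration of the first use case, the sketch below shows what a documentation-drafting call to a GenAI service might look like, using the openai Python library (v1+). The model choice, both prompts, and the clinical note are invented placeholders for illustration, not a vetted clinical workflow.

    # Minimal sketch: drafting a patient-friendly discharge summary with
    # an LLM. The model name, prompts and clinical note below are invented
    # placeholders, not a recommended clinical deployment.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    clinical_note = (
        "Pt admitted with community-acquired pneumonia. IV co-amoxiclav "
        "48h, stepped down to oral. Home on day 4 with 3-day oral course."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "Rewrite clinical notes as a plain-English discharge "
                    "summary a patient can understand. Do not add facts."
                ),
            },
            {"role": "user", "content": clinical_note},
        ],
    )

    print(response.choices[0].message.content)

Note that the closing instruction, "Do not add facts", is only a request to the model, not a guarantee that nothing will be invented; a clinician still needs to check the draft, for reasons the concerns below make clear.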

Concerns Surrounding GenAI in Healthcare

Despite its potential benefits, experts warn that GenAI poses unique challenges to patient safety that require careful consideration:

  1. Lack of Specificity: Unlike traditional AI applications designed for specific tasks, GenAI is based on foundation models with generic capabilities. This broad applicability makes it difficult to determine how doctors can use it safely in specific medical contexts [1][2].

  2. Hallucinations: GenAI is prone to producing "hallucinations": outputs that sound plausible but are nonsensical or untrue given the input provided. This occurs because GenAI works on the principle of likelihood, predicting what text probably comes next rather than exercising human-like understanding, which can introduce convincing but inaccurate information into medical records [1][2] (a toy illustration follows this list).

  3. Patient Safety Risks: The use of GenAI in healthcare settings could lead to various safety risks, including:

    • Changes in the frequency or severity of reported symptoms
    • Addition of symptoms not mentioned by patients
    • Inclusion of information never discussed during consultations
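
To make the likelihood point concrete, the toy Python sketch below uses invented probabilities as a stand-in for a real model's next-phrase distribution, and shows how sampling the statistically likely continuation can produce fluent text that contradicts what a patient actually said.

    # Toy illustration of likelihood-driven text generation. The
    # probabilities below are invented stand-ins for a language model's
    # next-phrase distribution after the prompt "The patient reported".
    import random

    next_phrase_probs = {
        "chest pain": 0.40,   # common in clinical text, so highly likely
        "dizziness": 0.30,
        "a headache": 0.25,
        "no symptoms": 0.05,  # what this patient actually said
    }

    phrases = list(next_phrase_probs)
    weights = list(next_phrase_probs.values())

    prompt = "The patient reported"
    completion = random.choices(phrases, weights=weights, k=1)[0]
    print(f"{prompt} {completion}.")

    # About 95% of samples yield a plausible symptom the patient never
    # mentioned: fluent, likely-sounding and wrong -- a hallucination
    # if it lands in a medical note.

Nothing in the sampling step checks the output against what actually happened; likelihood is the only criterion, which is why fluency and accuracy can come apart.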

Challenges in Ensuring Safe Implementation

Several factors complicate the safe implementation of GenAI in healthcare:

  1. Constant Updates: Developers regularly update GenAI technologies, adding new capabilities that can alter the behavior of applications [1][2].

  2. Unpredictable Use Cases: The adaptable nature of GenAI means it can be used in ways that are difficult to anticipate and regulate [1][2].

  3. Contextual Safety: The safety of GenAI depends on its interactions within specific healthcare contexts, including how it works with people, fits with existing rules and pressures, and aligns with the culture and priorities of the larger health system [1][2].

Potential for Unintended Consequences

Even if GenAI appears to function safely, its implementation could lead to unintended consequences:

  1. Digital Divide: The introduction of GenAI conversational agents for triage could affect patients' willingness to engage with the healthcare system, particularly patients with lower digital literacy, those whose first language is not English, and those who are non-verbal [1][2].

  2. Fragmented Care: In healthcare systems where patients often see different providers, inaccuracies in AI-generated notes could pose significant risks, including delays, improper treatment, and misdiagnosis [1][2].

Future Outlook

While healthcare could benefit tremendously from the adoption of GenAI and other AI tools, experts emphasize the need for:

  1. More responsive safety assurance and regulation mechanisms
  2. Continued research to reduce the likelihood of hallucinations
  3. A systems perspective approach to determine the safe use of GenAI in various healthcare contexts

As the technology evolves and its adoption in healthcare grows, striking a balance between innovation and patient safety remains a critical challenge for the medical community and policymakers alike.

Continue Reading

One in Five GPs Using AI for Daily Tasks, Raising Concerns and Opportunities

A recent survey reveals that 20% of general practitioners are utilizing AI tools like ChatGPT for various tasks, despite a lack of formal guidance. This trend highlights both potential benefits and risks in healthcare.

Sources: The Guardian, Medical Xpress, Sky News, The Telegraph

AI in Healthcare: Balancing Innovation with Trust and Regulation

An exploration of the challenges and opportunities in integrating AI into healthcare, focusing on building trust among medical professionals and ensuring patient safety through proper regulation and data integrity.

Sources: Fast Company, Fortune

New Guidelines for Safe Integration of AI in Clinical Settings

Researchers from UTHealth Houston and Baylor College of Medicine have published new guidance in JAMA for safely implementing and using AI in healthcare settings, emphasizing the need for robust governance and testing processes.

Sources: News-Medical.net, Newswise, Medical Xpress

AI Models Excel in Medical Exams but Struggle with Real-World Patient Interactions

A new study reveals that while AI models perform well on standardized medical tests, they face significant challenges in simulating real-world doctor-patient conversations, raising concerns about their readiness for clinical deployment.

Sources: ScienceDaily, News-Medical.net, New Scientist

Study Reveals ChatGPT's Limitations in Emergency Room Decision-Making

A new study from UC San Francisco shows that AI models like ChatGPT are not yet ready to make critical decisions in emergency rooms, tending to overprescribe treatments and admissions compared to human doctors.

Sources include Borneo Bulletin Online, Miami Herald, U.S. News & World Report, and Medical Xpress
