Study Warns Against Relying on AI Chatbots for Drug Information

A new study published in BMJ Quality & Safety cautions against using AI-powered search engines and chatbots for drug information, citing inaccuracies and potential harm to patients.

AI Chatbots Unreliable for Drug Information, Study Finds

A recent study published in the journal BMJ Quality & Safety has raised significant concerns about the reliability of AI-powered search engines and chatbots in providing accurate and safe drug information to patients. Researchers found that a considerable number of answers generated by these AI systems were either incorrect or potentially harmful.

Study Methodology and Findings

The study focused on the 50 most frequently prescribed drugs in the US in 2020, using Microsoft's Bing Copilot as the AI-powered chatbot. Researchers simulated patient queries by asking 10 questions about each drug, covering topics such as usage, mechanism of action, side effects, and contraindications.

Key findings include:

  1. Readability: The average Flesch Reading Ease Score was just over 37, indicating that a degree-level education would be required to fully understand the chatbot's responses.

  2. Completeness: While some questions were answered with 100% completeness, others, such as "What do I have to consider when taking the drug?", had an average completeness of only 23%.

  3. Accuracy: 26% of chatbot answers did not match the reference data, with 3% being fully inconsistent.

  4. Scientific Consensus: In a subset of 20 answers evaluated by experts, only 54% aligned with scientific consensus, while 39% contradicted it.
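The Flesch Reading Ease Score cited in the readability finding is a standard formula based on average sentence length and syllables per word; lower scores mean harder text, and scores in the 30s correspond roughly to college-level reading. The sketch below is a minimal, assumption-laden illustration of that formula (it uses a crude vowel-group heuristic to count syllables, so its scores will differ somewhat from published readability tools):

```python
import re


def count_syllables(word: str) -> int:
    # Crude heuristic: count runs of consecutive vowels as syllables.
    # Real readability tools use dictionaries or finer rules.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))


def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease:
    206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

    Roughly: 90+ is very easy, 60-70 is plain English,
    and scores near 37 (as in the study) indicate difficult,
    degree-level text.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

For example, a short simple sentence such as "The cat sat on the mat." scores well above 90 on this formula, while dense polysyllabic medical prose drops far below the study's average of 37.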

Potential Harm and Safety Concerns

The study revealed alarming statistics on both the likelihood and the severity of potential harm that could result from following the chatbot's advice:

  • 3% of answers were rated as highly likely to cause harm
  • 29% were moderately likely to cause harm
  • 42% could lead to moderate or mild harm
  • 22% could potentially result in death or severe harm

Limitations of AI Chatbots in Healthcare

Researchers identified several drawbacks in using AI chatbots for medical information:

  1. Inability to understand the underlying intent of patient questions
  2. Difficulty in generating error-free information consistently
  3. Lack of context-awareness in providing personalized medical advice

Implications and Recommendations

While the study acknowledges that AI-powered chatbots can produce largely complete and accurate answers to patient questions, it emphasizes the need for caution. The researchers strongly advise against relying solely on these tools for medical information and stress the importance of consulting healthcare professionals.

Dr. W. Andrikyan, the lead researcher, stated, "Despite their potential, it is still crucial for patients to consult their healthcare professionals, as chatbots may not always generate error-free information. Caution is advised in recommending AI-powered search engines until citation engines with higher accuracy rates are available."

As AI continues to evolve in the healthcare sector, this study underscores the need for improved accuracy, readability, and safety measures in AI-powered medical information systems. It also highlights the ongoing importance of human expertise in interpreting and applying medical knowledge.
