Study Warns Against Relying on AI Chatbots for Drug Information

Curated by THEOUTPOST

On Fri, 11 Oct, 8:03 AM UTC

2 Sources

A new study published in BMJ Quality & Safety cautions against using AI-powered search engines and chatbots for drug information, citing inaccuracies and potential harm to patients.

AI Chatbots Unreliable for Drug Information, Study Finds

A recent study published in the journal BMJ Quality & Safety has raised significant concerns about the reliability of AI-powered search engines and chatbots in providing accurate and safe drug information to patients. Researchers found that a considerable number of the answers generated by these systems were incorrect or potentially harmful. [1][2]

Study Methodology and Findings

The study focused on the 50 most frequently prescribed drugs in the US in 2020, using Bing Copilot as the AI-powered chatbot. Researchers simulated patient queries by asking 10 questions about each drug, covering topics such as usage, mechanism of action, side effects, and contraindications. [1]

Key findings include:

  1. Readability: The average Flesch Reading Ease Score was just over 37, indicating that a degree-level education would be required to fully understand the chatbot's responses. [1][2]

  2. Completeness: While some questions were answered with 100% completeness, others, such as "What do I have to consider when taking the drug?", had an average completeness of only 23%. [1]

  3. Accuracy: 26% of chatbot answers did not match the reference data, and 3% were fully inconsistent with it. [2]

  4. Scientific Consensus: In a subset of 20 answers evaluated by experts, only 54% aligned with scientific consensus, while 39% contradicted it. [1][2]
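The Flesch Reading Ease Score cited in the readability finding follows a fixed formula based on average sentence length and syllables per word; lower scores mean harder text, and values around 37 correspond to college-level reading. As a rough illustration (not the study's actual tooling), here is a minimal Python sketch; the vowel-group syllable counter is a naive simplification, so scores are only approximate:

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: each run of consecutive vowels counts as one syllable.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    # Sentences approximated by terminal punctuation runs.
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    # Standard Flesch Reading Ease formula: higher = easier to read.
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)

# Dense, jargon-heavy drug-information prose scores far below plain English.
hard = flesch_reading_ease(
    "Concomitant administration of anticoagulants may potentiate "
    "haemorrhagic complications; consult prescribing information."
)
easy = flesch_reading_ease("Take one pill a day with food.")
print(round(hard, 1), round(easy, 1))
```

On samples like these, the jargon-heavy sentence lands far below the study's average of 37 while the plain sentence scores as very easy, illustrating why the reported average implies degree-level reading demands.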

Potential Harm and Safety Concerns

The study revealed alarming statistics regarding the potential harm that could result from following the chatbot's advice. Rated by likelihood of harm if the advice were followed:

  • 3% of answers were highly likely to cause harm
  • 29% were moderately likely to cause harm

Rated by severity of the possible harm:

  • 42% of answers could lead to moderate or mild harm
  • 22% could result in severe harm or death [1][2]

Limitations of AI Chatbots in Healthcare

Researchers identified several drawbacks in using AI chatbots for medical information:

  1. Inability to understand the underlying intent of patient questions
  2. Difficulty in consistently generating error-free information
  3. Lack of context-awareness when giving personalized medical advice [1][2]

Implications and Recommendations

While the study acknowledges that AI-powered chatbots can produce largely complete and accurate answers to patient questions, it emphasizes the need for caution. The researchers strongly advise against relying solely on these tools for medical information and stress the importance of consulting healthcare professionals. [1][2]

Dr. W. Andrikyan, the lead researcher, stated, "Despite their potential, it is still crucial for patients to consult their healthcare professionals, as chatbots may not always generate error-free information. Caution is advised in recommending AI-powered search engines until citation engines with higher accuracy rates are available." [2]

As AI continues to evolve in the healthcare sector, this study underscores the need for improved accuracy, readability, and safety measures in AI-powered medical information systems. It also highlights the ongoing importance of human expertise in interpreting and applying medical knowledge.
