AI Chatbots Prone to Spreading Medical Misinformation, Mount Sinai Study Reveals

2 Sources

A new study by researchers at the Icahn School of Medicine at Mount Sinai finds that AI chatbots are highly vulnerable to repeating and elaborating on false medical information, highlighting the need for stronger safeguards in healthcare AI applications.

AI Chatbots Vulnerable to Medical Misinformation

A groundbreaking study conducted by researchers at the Icahn School of Medicine at Mount Sinai has revealed a significant vulnerability in AI chatbots when it comes to handling medical information. The study, published in the August 2 online issue of Communications Medicine, found that these widely used AI tools are highly susceptible to repeating and elaborating on false medical information 1.

Study Methodology and Findings

Source: Medical Xpress

Source: Medical Xpress

The research team, led by Dr. Mahmud Omar, created fictional patient scenarios containing fabricated medical terms such as made-up diseases, symptoms, or tests. These scenarios were then submitted to leading large language models for analysis 2.

The results were alarming:

  1. Without any additional guidance, the chatbots routinely elaborated on the fake medical details.
  2. The AI tools confidently generated explanations about non-existent conditions and treatments.
  3. Even a single made-up term could trigger a detailed, decisive response based entirely on fiction.

Implementing a Simple Safeguard

In a second round of testing, the researchers added a one-line caution to the prompt, reminding the AI that the information provided might be inaccurate. This simple addition yielded promising results:

  1. The errors were reduced significantly, nearly cutting them in half.
  2. The chatbots demonstrated more caution in their responses when faced with potentially inaccurate information.

Dr. Eyal Klang, Chief of Generative AI in the Windreich Department of Artificial Intelligence and Human Health at Mount Sinai, emphasized the importance of this finding: "That tells us these tools can be made safer, but only if we take prompt design and built-in safeguards seriously" 1.

Implications for Healthcare AI

The study underscores a critical vulnerability in how current AI systems handle misinformation in healthcare settings. Dr. Girish N. Nadkarni, Chair of the Windreich Department of Artificial Intelligence and Human Health at Mount Sinai, stressed that the solution is not to abandon AI in medicine, but to engineer tools that can spot dubious input, respond with caution, and ensure human oversight remains central 2.

Future Research and Applications

The research team plans to extend their approach to real, de-identified patient records and test more advanced safety prompts and retrieval tools. They hope their "fake-term" method can serve as a simple yet powerful tool for hospitals, tech developers, and regulators to stress-test AI systems before clinical use 1.

This study marks an important step in ensuring the safe and effective use of AI in healthcare. As these technologies continue to evolve rapidly, the findings highlight the critical need for stronger safeguards and ongoing research to address potential risks and vulnerabilities.

Explore today's top stories

Researchers Exploit Gemini AI to Control Smart Home Devices via Calendar Invites

Cybersecurity researchers demonstrate a novel "promptware" attack that uses malicious Google Calendar invites to manipulate Gemini AI into controlling smart home devices, raising concerns about AI safety and real-world implications.

Ars Technica logoWired logoCNET logo

13 Sources

Technology

23 hrs ago

Researchers Exploit Gemini AI to Control Smart Home Devices

Google Defends AI Search Features, Claiming Stable Web Traffic and Increased Click Quality

Google's search head Liz Reid responds to concerns about AI's impact on web traffic, asserting that AI features are driving more searches and higher quality clicks, despite conflicting third-party reports.

Ars Technica logoTechCrunch logoengadget logo

8 Sources

Technology

23 hrs ago

Google Defends AI Search Features, Claiming Stable Web

OpenAI Offers ChatGPT Enterprise to US Federal Agencies for $1 in Landmark Deal

OpenAI has struck a deal with the US government to provide ChatGPT Enterprise to federal agencies for just $1 per agency for one year, marking a significant move in AI adoption within the government sector.

Ars Technica logoTechCrunch logoWired logo

14 Sources

Technology

23 hrs ago

OpenAI Offers ChatGPT Enterprise to US Federal Agencies for

Microsoft Integrates OpenAI's GPT-5 into Copilot Ecosystem, Offering Free Access to Advanced AI

Microsoft announces the integration of OpenAI's newly released GPT-5 model across its Copilot ecosystem, including Microsoft 365, GitHub, and Azure AI. The update promises enhanced AI capabilities for users and developers.

The Verge logoEconomic Times logoBeebom logo

3 Sources

Technology

6 hrs ago

Microsoft Integrates OpenAI's GPT-5 into Copilot Ecosystem,

Google's AI Coding Agent Jules Exits Beta with Enhanced Features and Tiered Pricing

Google has officially launched its AI coding agent Jules, powered by Gemini 2.5 Pro, offering asynchronous coding assistance with new features and tiered pricing plans.

TechCrunch logoZDNet logoXDA-Developers logo

10 Sources

Technology

23 hrs ago

Google's AI Coding Agent Jules Exits Beta with Enhanced
TheOutpost.ai

Your Daily Dose of Curated AI News

Don’t drown in AI news. We cut through the noise - filtering, ranking and summarizing the most important AI news, breakthroughs and research daily. Spend less time searching for the latest in AI and get straight to action.

© 2025 Triveous Technologies Private Limited
Instagram logo
LinkedIn logo