Google AI Overviews caught dispensing dangerous health advice in Guardian investigation

Reviewed by Nidhi Govil


An investigation by The Guardian reveals that Google AI Overviews is providing false and misleading health information that experts call "really dangerous." The AI tool gave incorrect advice about pancreatic cancer, women's cancer tests, and mental health conditions. Medical experts warn that relying on AI for medical advice could seriously harm users' health or even lead to death.

Google AI Overviews Under Fire for Dangerous Health Information

Google AI Overviews, the search giant's AI-powered summary feature rolled out in May 2024, is now facing serious scrutiny after an investigation by The Guardian uncovered that it provides false and misleading health advice that could put users at risk.[1][2] The newspaper tested the tool with various health-related queries and asked medical experts to review the AI-generated responses. What they found raises urgent questions about the safety of relying on AI for medical advice, particularly when dealing with serious conditions.

Source: Futurism

In one case described by experts as "really dangerous," Google AI Overviews advised people with pancreatic cancer to avoid high-fat foods, the exact opposite of correct medical guidance.[1] The AI misinformation extended to women's cancer tests, where a search for "vaginal cancer symptoms and tests" incorrectly listed a Pap test as a diagnostic tool for vaginal cancer, which experts confirmed was completely wrong.[1] Additionally, the tool provided misleading data about liver function tests with little context and no consideration for nationality, sex, ethnicity, or age.[1]

Source: ZDNet

Mental Health Conditions Receive 'Very Dangerous Advice'

The problems with Google's AI-generated summaries weren't limited to physical health. Stephen Buckley, head of information at mental health charity Mind, said that some AI Overviews summaries for mental health conditions like psychosis and eating disorders displayed "very dangerous advice" that was "incorrect, harmful, or could lead people to avoid seeking help".[1][2] Stephanie Parker, director of digital at end-of-life charity Marie Curie, emphasized the stakes: "People turn to the internet in moments of worry and crisis. If the information they receive is inaccurate or out of context, it can seriously harm their health".

The Self-Diagnosis Dilemma and Growing Trust in AI

The timing of these revelations is particularly concerning given growing public reliance on AI for health information. An April 2025 survey by the University of Pennsylvania's Annenberg Public Policy Center found that nearly eight in ten adults said they are likely to go online for answers about health symptoms and conditions.[2] More troubling, nearly two-thirds found AI-generated results to be "somewhat or very reliable," indicating considerable trust in tools that medical professionals warn against using for self-diagnosis.

A separate MIT study revealed that participants deemed low-accuracy AI-generated responses "valid, trustworthy, and complete/satisfactory" and showed a high tendency to follow potentially harmful medical advice, even seeking unnecessary medical attention based on flawed AI responses.[2] These findings highlight a dangerous gap between user trust and the actual reliability of large language model-based tools, which continue to suffer from hallucinations and inconsistent outputs.

Google Defends Quality Controls Amid Criticism

In response to The Guardian's findings, a Google spokesperson stated that the company invests "significantly in the quality of AI Overviews, particularly for topics like health, and the vast majority provide accurate information".[1][2] Google also qualified some of the problematic examples, noting that the AI Overview does not say Pap tests are meant for diagnosing vaginal cancer, but that vaginal cancer can be found incidentally on such tests.[1]

However, independent testing revealed another critical issue: the way users phrase questions dramatically influences the answers they receive. The same query worded differently can produce vastly different responses, one partially accurate and the other potentially harmful.[1] This inconsistency compounds the risk for vulnerable users seeking urgent health guidance.

Medical Professionals Sound the Alarm

The Canadian Medical Association has labeled AI-generated health advice as "dangerous" on its website, warning that hallucinations, algorithmic biases, and outdated facts can "mislead you and potentially harm your health".[2] Medical professionals now face the daunting task of dispelling myths and correcting misinformation spread by AI tools. Experts continue to advise consulting human doctors and licensed healthcare professionals instead of AI, though this remains a challenge given barriers to adequate care worldwide.[2] As AI tools become more prevalent in search results, the question isn't just whether they can provide accurate information, but whether users can distinguish reliable guidance from potentially life-threatening errors.
