Google removes AI Overviews for medical queries after investigation finds dangerous health advice

Reviewed by Nidhi Govil


Google has removed some AI Overviews from health-related searches following a Guardian investigation that exposed dangerous flaws in AI-generated health summaries. The investigation found that Google's generative AI feature delivered inaccurate medical information, including wrong advice about pancreatic cancer and misleading liver function test results that could lead seriously ill patients to mistakenly believe they are healthy.

Google AI Overviews Pulled After Guardian Investigation Exposes Dangerous Health Advice

Google has removed several AI Overviews from medical queries after a Guardian investigation revealed that the AI-generated health summaries were putting users at risk with inaccurate and misleading medical information [1]. The investigation, published in early January, found that Google's generative AI feature delivered false guidance at the top of search results, potentially leading seriously ill patients to dangerous conclusions about their health [2].

Source: Gadgets 360

The Guardian documented multiple cases where Google AI Overviews provided dangerous health advice that contradicted established medical guidance. In one particularly alarming example, the AI advised people with pancreatic cancer to avoid high-fat foods, exactly the opposite of standard medical recommendations [3]. Medical experts warned that this incorrect advice could increase the risk of patient death, as maintaining weight is critical for these patients [4].

Source: ZDNet

Misleading Liver Function Test Information Raises Alarm

Google disabled specific queries, including "what is the normal range for liver blood tests", after experts flagged the results as dangerous [1]. The investigation revealed that searching for liver function tests generated raw data tables listing specific enzymes like ALT, AST, and alkaline phosphatase without essential context. The AI feature failed to adjust these figures for patient demographics such as age, sex, and ethnicity.

Vanessa Hebditch, director of communications and policy at the British Liver Trust, told The Guardian that understanding liver function test results "is complex and involves a lot more than comparing a set of numbers" [1]. She warned that the AI Overviews fail to inform users that someone can receive normal results for these tests while having serious liver disease requiring further medical care. "This false reassurance could be very harmful," she emphasized.

Design Flaw in Page Ranking System Fuels Misinformation

The recurring problems with Google AI Overviews stem from a fundamental design flaw in how the system works [1]. Google built the feature to show information backed by top search results from its page-ranking system, on the assumption that highly ranked pages contain accurate information. However, Google's algorithm has long struggled with SEO-gamed content and spam. The system now feeds these unreliable results to its language model, which summarizes them in an authoritative tone that can mislead users seeking expert guidance on their health.

Source: Ars Technica

Even when the AI pulls from accurate sources, the language model can still draw incorrect conclusions from the data, producing flawed summaries of otherwise reliable information. The technology does not inherently provide factual accuracy; it reflects whatever inaccuracies exist on the websites Google's algorithm ranks highly, presenting claims with an authority that makes errors appear trustworthy [1].
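To make that failure mode concrete, here is a minimal Python sketch of the retrieve-then-summarize pattern described above. It is a hypothetical illustration, not Google's code: every name, URL, and snippet in it is invented.

```python
# A minimal, hypothetical sketch of a retrieve-then-summarize pipeline,
# NOT Google's actual code. All names, URLs, and snippets are invented
# to illustrate how an error on a top-ranked page can flow straight
# into a confident-sounding summary.

RANKED_RESULTS = [
    # If the ranking rewards SEO-gamed or simply wrong pages,
    # the error sits at the very top of the list.
    {"url": "https://example.com/seo-health-blog",
     "text": "People with pancreatic cancer should avoid high-fat foods."},
    {"url": "https://example.org/clinical-guidance",
     "text": "Maintaining weight is critical; dietary fat is often encouraged."},
]


def summarize(snippets):
    """Stand-in for the language-model call: it restates whatever it is
    given, reproducing any inaccuracies present in the sources."""
    return "AI Overview: " + " ".join(snippets)


def ai_overview(query, top_k=1):
    # The pipeline assumes highly ranked pages are accurate and never
    # independently verifies the claims they contain.
    snippets = [result["text"] for result in RANKED_RESULTS[:top_k]]
    return summarize(snippets)


if __name__ == "__main__":
    # The overview inherits the top result's error and presents it as fact.
    print(ai_overview("pancreatic cancer diet"))
```

The point of the sketch is that the summarization step only rewords what the ranking hands it; nothing in the pipeline checks the claims against medical guidance.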

Partial Removal Leaves Gaps in Safety Measures

While Google has removed AI Overviews for specific queries like "what is the normal range for liver blood tests" and "what is the normal range for liver function tests," The Guardian found that slight variations such as "lft reference range" or "lft test reference range" still prompted AI Overviews to appear [2]. Hebditch called this a major concern, noting that the AI Overviews present lists of tests in bold, making it easy for readers to miss that these numbers might not be the right ones for their specific test [1].

Several other examples that The Guardian originally highlighted to Google remain active. When asked why these AI Overviews had not been removed, Google said they linked to well-known and reputable sources and informed people when it was important to seek expert advice. A Google spokesperson told The Verge that the company's internal team of clinicians reviewed what was shared and "found that in many instances, the information was not inaccurate and was also supported by high-quality websites" [1].

Ongoing Concerns About AI-Generated Medical Content

This is not the first controversy for AI Overviews. The feature has previously told people to put glue on pizza and eat rocks [3]. It has proven unpopular enough that users discovered inserting curse words into search queries disables AI Overviews entirely [1]. A report from September found that over 10% of Google AI Overviews cite AI-generated content.

Google declined to comment on the specific removals to The Guardian, with a spokesperson stating that the company does not "comment on individual removals within Search" but works to "make broad improvements." The company said AI Overviews only appear for queries where it has high confidence in the quality of responses, and that it constantly measures and reviews the quality of its summaries across many different categories of information [1].

Hebditch welcomed the removal as "excellent news" but added, "Our bigger concern with all this is that it is nit-picking a single search result and Google can just shut off the AI Overviews for that but it's not tackling the bigger issue of AI Overviews for health" [2]. The way users phrase medical queries also shapes the answer: the same question asked two different ways can yield one partially accurate response and another that is inaccurate and unhelpful [4].
