2 Sources
[1]
Use Google AI Overview for health advice? It's 'really dangerous,' investigation finds
Don't rely solely on AI to research serious medical conditions.

Turning to AI to answer health-related questions seems like an easy and convenient option that may spare you a doctor's visit. That's particularly true with a tool like Google's AI Overviews, which summarizes the results that pop up during a regular Google Search. But that doesn't mean Google's AI is the best route to take, especially if you're researching a serious medical problem or condition. A recent investigation by British newspaper The Guardian concluded that Google's own AI Overviews put users at risk by providing false and misleading health information in its summaries.

To conduct its tests, The Guardian used AI Overviews to research several health-related questions. The paper then asked medical and health experts to review the responses.

In one case described by experts as "really dangerous," Google's AI Overviews advised people with pancreatic cancer to avoid high-fat foods. But the experts said that this is the exact opposite of the correct advice. In another instance, the AI-generated information about women's cancer tests was "completely wrong," according to the experts: a search for "vaginal cancer symptoms and tests" listed a pap test as a test for vaginal cancer, which the experts said was incorrect. In a third case, Google served up false information about critical liver function tests that could give people with serious liver disease the wrong impression. For example, searching for the phrase "what is the normal range for liver blood tests" produced misleading data with little context and no regard for nationality, sex, ethnicity, or age.

Based on its investigation, The Guardian also said that AI Overviews offered inaccurate results on searches about mental health. In particular, some of the summaries for conditions such as psychosis and eating disorders displayed "very dangerous advice" and were "incorrect, harmful, or could lead people to avoid seeking help," Stephen Buckley, the head of information at mental health charity Mind, told The Guardian.

In response to a request for feedback on The Guardian's findings, a Google spokesperson sent ZDNET the following statement: "Many of the examples shared with us are incomplete screenshots, but from what our internal team of clinicians could assess, the responses link to well-known, reputable sources and recommend seeking out expert advice. We invest significantly in the quality of AI Overviews, particularly for topics like health, and the vast majority provide accurate information."

Google also reviewed the specific questions posed by The Guardian and qualified some of the answers. For example, the AI Overview doesn't say that pap tests are meant for diagnosing vaginal cancer but says that vaginal cancer can be found incidentally on a pap test, according to Google. For the pancreatic cancer example, the overview cites Johns Hopkins University as one reputable source. For the mental health example, Google said the overview pointed to a source that links to a free, confidential, national crisis support line. For the liver test query, Google said that the AI Overview stated that normal ranges can vary from lab to lab and that a disclaimer advised people to consult a professional for medical advice or diagnosis.
I tested AI Overviews by submitting some of the same questions that The Guardian asked. In one search, I told Google that I have pancreatic cancer and asked if I should avoid high-fat foods. The AI said that you do often need to limit high-fat foods with pancreatic cancer because your pancreas struggles to produce digestive enzymes. But you also need calories, so working with a registered dietitian to find the right balance of healthy fats and easy-to-digest options is crucial to avoid weight loss and malnutrition.

Next, I searched for "vaginal cancer symptoms and tests." Here, AI Overviews did list a pap test as one of several diagnostic tests but qualified it by saying that this type of test checks for abnormal cells on the cervix, which can sometimes find vaginal cancer. Asking Google AI for the normal range for liver blood tests gave me general ranges but also offered specific numbers by gender and age.

Based on my own limited testing, the answers provided by AI Overviews seemed less cut and dried than those obtained by The Guardian. But that points to another challenge with AI and search in general: the way you phrase your question influences the answer. I could ask the same question two different ways and get two different responses, one partially accurate and potentially useful, the other inaccurate and unhelpful.

In defending its AI Overviews, Google said that the system uses web rankings to try to ensure that the information is reliable and relevant. If any content from the web is misinterpreted or lacking context, the company will use those examples to try to improve the process.

Still, The Guardian's investigation points out the pitfalls of relying on AI for any critical research, especially research involving serious health conditions. The key takeaway here is "Don't." Don't risk your health by assuming that the information provided by an AI is going to be correct.

If you do insist on using AI for this purpose, double-check and triple-check the responses. Run the same search across different AIs to see if they're in agreement. Better yet, investigate the sources consulted for the answers to see if the AI interpreted them correctly. Best of all, talk to your doctor. If you are experiencing a serious medical condition, your doctor's office should always be your primary point of contact. Don't be afraid to reach out; many medical offices provide email and messaging for patient questions. While it may be tempting to turn to AI for quick and easy answers, you don't want to run the risk of getting the wrong information.
[2]
Google's AI Overviews Caught Giving Dangerous "Health" Advice
"If the information they receive is inaccurate or out of context, it can seriously harm their health." In May 2024, Google threw caution to the wind by rolling out its controversial AI Overviews feature in a purported effort to make information easier to find. But the AI hallucinations that followed -- like telling users to eat rocks and put glue on their pizzas -- ended up perfectly illustrated the persistent issues that plague large language model-based tools to this day. And while not being able to reliably tell what year it is or making up explanations for nonexistent idioms might sound like innocent gaffes that at most lead to user frustration, some advice Google's AI Overviews feature is offering up could have far more serious consequences In a new investigation, The Guardian found that the tool's AI-powered summaries are loaded with inaccurate health information that could put people at risk. Experts warn that it's only a matter of time until the bad advice endangers users -- or, in a worst-case scenario, results in someone's death. The issue is severe. For instance, The Guardian found that it advised those with pancreatic cancer to avoid high-fat foods, despite doctors recommending the exact opposite. It also completely bungled information about women's cancer tests, which could lead to people ignoring real symptoms of the disease. It's a precarious situation as those who are vulnerable and suffering often turn to self-diagnosis on the internet for answers. "People turn to the internet in moments of worry and crisis," end-of-life charity Marie Curie director of digital Stephanie Parker told The Guardian. "If the information they receive is inaccurate or out of context, it can seriously harm their health." Others were alarmed by the feature turning up completely different responses to the same prompts, a well-documented shortcoming of large language model-based tools that can lead to confusion. Mental health charity Mind's head of information, Stephen Buckle, told the newspaper that AI Overviews offered "very dangerous advice" about eating disorders and psychosis, summaries that were "incorrect, harmful or could lead people to avoid seeking help." A Google spokesperson told The Guardian in a statement that the tech giant invests "significantly in the quality of AI Overviews, particularly for topics like health, and the vast majority provide accurate information." But given the results of the newspaper's investigation, the company has a lot of work left to ensure that its AI tool isn't dispensing dangerous health misinformation. The risks could continue to grow. According to an April 2025 survey by the University of Pennsylvania's Annenberg Public Policy Center, nearly eight in ten adults said they're likely to go online for answers about health symptoms and conditions. Nearly two-thirds of them found AI-generated results to be "somewhat or very reliable," indicating a considerable -- and troubling -- level of trust. At the same time, just under half of respondents said they were uncomfortable with healthcare providers using AI to make decisions about their care. A separate MIT study found that participants deemed low-accuracy AI-generated responses "valid, trustworthy, and complete/satisfactory" and even "indicated a high tendency to follow the potentially harmful medical advice and incorrectly seek unnecessary medical attention as a result of the response provided." 
That's despite AI models continuing to prove themselves strikingly poor replacements for human medical professionals. Meanwhile, doctors have the daunting task of dispelling myths and trying to keep patients from being led down the wrong path by a hallucinating AI.

On its website, the Canadian Medical Association calls AI-generated health advice "dangerous," pointing out that hallucinations, as well as algorithmic biases and outdated facts, can "mislead you and potentially harm your health" if you choose to follow the generated advice. Experts continue to advise people to consult human doctors and other licensed healthcare professionals instead of AI, a tragically tall ask given the many barriers to adequate care around the world.

At least AI Overviews sometimes appears to be aware of its own shortcomings. When asked whether it should be trusted for health advice, the feature happily pointed us to The Guardian's investigation. "A Guardian investigation has found that Google's AI Overviews have displayed false and misleading health information that could put people at risk of harm," read the AI Overviews reply.
An investigation by The Guardian reveals Google AI Overviews is providing false and misleading health information that experts call 'really dangerous.' The AI tool gave incorrect advice about pancreatic cancer, women's cancer tests, and mental health conditions. Medical experts warn that relying on AI for medical advice could seriously harm users' health or even lead to death.
Google AI Overviews, the search giant's AI-powered summary feature rolled out in May 2024, is now facing serious scrutiny after an investigation by The Guardian uncovered that it provides false and misleading health advice that could put users at risk [1][2]. The newspaper tested the tool with various health-related queries and asked medical experts to review the AI-generated responses. What they found raises urgent questions about the safety of relying on AI for medical advice, particularly when dealing with serious conditions.
In one case described by experts as "really dangerous," Google AI Overviews advised people with pancreatic cancer to avoid high-fat foods, the exact opposite of correct medical guidance [1]. The AI misinformation extended to women's cancer tests, where a search for "vaginal cancer symptoms and tests" incorrectly listed a pap test as a diagnostic tool for vaginal cancer, which experts confirmed was completely wrong [1]. Additionally, the tool provided misleading data about liver function tests with little context and no consideration for nationality, sex, ethnicity, or age [1].
The problems with Google's AI-generated summaries weren't limited to physical health. Stephen Buckley, head of information at mental health charity Mind, told The Guardian that some AI Overviews summaries for mental health conditions like psychosis and eating disorders displayed "very dangerous advice" that was "incorrect, harmful, or could lead people to avoid seeking help" [1][2]. Stephanie Parker, director of digital at end-of-life charity Marie Curie, emphasized the stakes: "People turn to the internet in moments of worry and crisis. If the information they receive is inaccurate or out of context, it can seriously harm their health."

The timing of these revelations is particularly concerning given growing public reliance on AI for health information. An April 2025 survey by the University of Pennsylvania's Annenberg Public Policy Center found that nearly eight in ten adults said they're likely to go online for answers about health symptoms and conditions [2]. More troubling, nearly two-thirds found AI-generated results to be "somewhat or very reliable," indicating considerable trust in tools that medical professionals warn against using for self-diagnosis.

A separate MIT study revealed that participants deemed low-accuracy AI-generated responses "valid, trustworthy, and complete/satisfactory" and showed a high tendency to follow potentially harmful medical advice, even seeking unnecessary medical attention based on flawed AI responses [2]. These findings highlight a dangerous gap between user trust and the actual reliability of large language model-based tools, which continue to suffer from hallucinations and inconsistent outputs.
In response to The Guardian's findings, a Google spokesperson stated that the company invests "significantly in the quality of AI Overviews, particularly for topics like health, and the vast majority provide accurate information" [1][2]. Google also qualified some of the problematic examples, noting that the AI Overview doesn't say pap tests are meant for diagnosing vaginal cancer but that vaginal cancer can be found incidentally on such tests [1].

However, independent testing revealed another critical issue: the way users phrase questions dramatically influences the answers they receive. The same query worded differently can produce vastly different responses, one partially accurate, the other potentially harmful [1]. This inconsistency compounds the risk for vulnerable users seeking urgent health guidance.

The Canadian Medical Association has labeled AI-generated health advice as "dangerous" on its website, warning that hallucinations, algorithmic biases, and outdated facts can "mislead you and potentially harm your health" [2]. Medical professionals now face the daunting task of dispelling myths and correcting misinformation spread by AI tools. Experts continue to advise consulting human doctors and licensed healthcare professionals instead of AI, though this remains a challenge given barriers to adequate care worldwide [2]. As AI tools become more prevalent in search results, the question isn't just whether they can provide accurate information, but whether users can distinguish reliable guidance from potentially life-threatening errors.

Summarized by Navi