Google scraps What People Suggest tool that pulled AI health advice from Reddit and online forums


Google has quietly discontinued What People Suggest, an AI-powered health search tool that summarized medical advice from Reddit, Quora, and other online forums. Launched in March 2025, the feature aimed to help users find health tips from people with similar conditions. Google says the removal is part of a broader simplification of its search page rather than a response to safety concerns, though the company faces mounting scrutiny over AI-generated summaries that have served misleading health information.


Google Removes What People Suggest Feature After Brief Run

Google has quietly discontinued What People Suggest, an AI-powered health search tool that pulled crowdsourced medical advice from online forums and social media platforms. The feature, which launched in March 2025 on mobile devices in the US, used artificial intelligence to compile health tips from Reddit, Quora, X (formerly Twitter), and other discussion platforms [1][5]. The tool aimed to organize different perspectives from online discussions into easy-to-understand themes, helping users quickly grasp what people were saying about similar health conditions.

At the time of its launch, Karen DeSalvo, then Google's chief health officer, wrote that the feature showed "the potential of AI to transform health outcomes across the globe" [5]. Google provided an example of someone with arthritis wanting to know how others with the condition exercise, suggesting the feature would help users uncover real insights from people with similar experiences.

Safety Concerns Shadow AI Health Advice Tools

The removal comes as Google faces mounting scrutiny over how its AI handles medical information. In January, a Guardian investigation found that AI Overviews, Google's AI-generated summaries shown to 2 billion people monthly, were delivering misleading information that could put users' health at risk [5]. One example showed AI Overviews advising people with pancreatic cancer to avoid high-fat foods, which is exactly the opposite of what medical experts recommend and may increase a patient's risk of death [1].

The AI-generated medical advice also displayed incorrect information about crucial liver function tests, which could leave people with serious liver disease wrongly thinking they are healthy [1]. Following these revelations, Google removed several AI Overviews and disabled AI results on certain health-related queries. Critics noted that even when information wasn't completely wrong, the lack of context around age, medical history, and other crucial details made the summarized health advice potentially dangerous [2].

Google Cites Search Page Simplification, Not Safety Concerns

A Google spokesperson confirmed that What People Suggest had been removed but emphasized that the decision had nothing to do with the quality or safety of the feature [1][4]. Instead, the company framed it as part of a "broader simplification" of its search page. The spokesperson said the feature was turned down months ago and that news of the company's efforts to simplify the search results page had been shared publicly.

However, when asked where this was shared, the spokesperson pointed to a November 2025 post from John Mueller, a search advocate at Google Switzerland, which made no specific mention of What People Suggest [4][5]. That post only vaguely referenced phasing out "lesser-used features" that weren't adding significant value to users. Three people familiar with the decision confirmed to The Guardian that the feature is now dead [5].

The Risks of Crowdsourcing Amateur Medical Advice

The fundamental problem with What People Suggest was that it summarized health tips and stories from everyday users rather than medical professionals [2]. While the idea of finding insights from people with similar health issues seemed appealing at first, it raised a critical question: should a search engine summarize medical advice from strangers? Health advice requires more than just facts; contextual information matters significantly, and AI doesn't always understand those nuances [3].

Disclaimers warning users to consult medical professionals weren't always prominently displayed, which made the AI responses feel more authoritative than they should have been [2][3]. When Google launched a feature built on health tips from Reddit and other online forums populated by non-experts, it did little to ease existing concerns about AI-generated medical advice. For now, users can still find personal stories in online forums and communities, but they'll need to search for them manually rather than relying on AI to surface amateur medical advice.

TheOutpost.ai
