Frustrated Patients Turn to AI Chatbots for Medical Advice as Healthcare System Falls Short

Reviewed by Nidhi Govil


Growing numbers of Americans are using AI chatbots like ChatGPT for health information due to frustrations with the medical system, including long wait times, high costs, and generic responses from doctors. While chatbots provide 24/7 availability and personalized-seeming advice, experts warn of risks including misinformation and dangerous medical recommendations.

Growing Reliance on AI for Medical Guidance

A significant shift is occurring in how Americans seek medical advice, with increasing numbers turning to artificial intelligence chatbots when traditional healthcare falls short. According to a KFF health policy research survey, approximately one in six adults used chatbots for health information at least once monthly last year, with that figure rising to about 25% among adults under 30 [1]. Liz Hamel, who directs survey research at KFF, indicates these numbers have likely increased since then [2].

The trend reflects widespread frustration with America's healthcare system. Wendy Goldberg, a 79-year-old retired lawyer from Los Angeles, exemplifies this shift. When seeking specific protein intake recommendations for bone health, her doctor provided generic advice that ignored her actual lifestyle, suggesting she stop smoking and drinking despite being a non-smoker and non-drinker who exercises regularly. ChatGPT, by contrast, immediately provided a specific daily protein goal in grams [1].

Healthcare System Shortcomings Drive AI Adoption

Patients across the country are using AI chatbots to compensate for various healthcare deficiencies. A self-employed Wisconsin woman regularly consults ChatGPT about whether expensive medical appointments are necessary, while a rural Virginia writer relied on the chatbot to navigate surgical recovery when doctor availability was limited. A clinical psychologist in Georgia turned to AI after her providers dismissed concerns about cancer treatment side effects [1].

The appeal of AI chatbots extends beyond mere availability. Unlike traditional online resources such as Google or WebMD, these tools provide what appears to be personalized, authoritative analysis. They offer 24/7 accessibility, cost virtually nothing, and create convincing impressions of empathy by expressing sympathy for symptoms and validating users' questions and theories [2].

Source: The New York Times

Serious Risks and Medical Dangers

Despite their popularity, AI chatbots pose significant medical risks. The technology can fabricate information and tends to be overly agreeable, sometimes reinforcing incorrect patient assumptions. High-profile medical incidents have already occurred, including a case in which a 60-year-old man was hospitalized for weeks in a psychiatric unit after ChatGPT recommended consuming sodium bromide instead of reducing salt intake, leading to paranoia and hallucinations [1].

Research reveals that most AI models no longer display medical disclaimers when users ask health-related questions, despite terms of service stating they shouldn't provide medical advice. These chatbots routinely suggest diagnoses, interpret laboratory results, recommend treatments, and even provide scripts to help users persuade their doctors [2].

Industry Response and Expert Concerns

Representatives from OpenAI and Microsoft acknowledge the seriousness of health information accuracy and say they are collaborating with medical experts to improve responses. However, both companies emphasize that their chatbots should not replace professional medical advice [1].

Dr. Robert Wachter, chair of the medicine department at UC San Francisco and an AI healthcare researcher, provides context for this phenomenon. He notes that Americans often face months-long waits for specialists, pay hundreds of dollars per visit, and feel their concerns aren't taken seriously. "If the system worked, the need for these tools would be far less," Dr. Wachter explains. "But in many cases, the alternative is either bad or nothing" [2].

Rick Bisaccia, a 70-year-old from California, captures the complex relationship many have with these tools: "All of us now are starting to put so much stock in this that it's a little bit worrisome. But it's very addicting because it presents itself as being so sure of itself" [1].
