ChatGPT Health launches as 230 million people ask health questions weekly, but experts warn of data privacy risks

Reviewed by Nidhi Govil


OpenAI launched ChatGPT Health, allowing users to connect medical records and fitness data for personalized health insights. With 230 million people already asking ChatGPT health-related questions each week, the move signals AI's growing role in healthcare. But experts warn the tool lacks HIPAA protections and can produce inaccurate analysis, raising questions about whether AI medical advice improves on the old "Dr. Google" approach.

OpenAI Pushes Into Consumer Health AI With New ChatGPT Health Feature

OpenAI has officially launched ChatGPT Health, a dedicated experience within its flagship chatbot designed specifically for health and wellness queries. The timing reflects a significant shift in how people seek medical information online. According to OpenAI, 230 million people already ask ChatGPT health-related questions each week, moving beyond the era of "Dr. Google" into an age where AI chatbots serve as medical information gatekeepers [1].

Source: MIT Tech Review

The new feature allows users to connect various health data sources, including medical records, Apple Health, Peloton, MyFitnessPal, Weight Watchers, and other fitness apps, promising to help users "understand patterns over time" and receive personalized health insights [3]. Days after OpenAI's announcement, competitor Anthropic introduced Claude for Healthcare, a HIPAA-ready product targeting both consumers and healthcare providers, signaling that consumer health AI has become a major battleground for AI labs [2].

Source: Slate

Sharing Sensitive Medical Information Raises Serious Data Privacy Concerns

While OpenAI actively encourages users to share sensitive medical information with ChatGPT Health, experts warn that the protections offered fall far short of what healthcare providers must provide. Unlike actual medical providers, ChatGPT Health isn't bound by HIPAA regulations. "You are not protected by law, and it is allowed to change terms of use over time," explains Hannah van Kolfschooten, a researcher in digital health law at the University of Basel [2]. OpenAI promises that health data will be encrypted by default, kept in a separate space from regular chats, and won't be used to train AI models. However, these assurances amount to what Harvard Law School's Carmel Shachar calls essentially "their word," since the company could change its privacy practices at any time [2].

The situation becomes more confusing because OpenAI launched two similarly named products simultaneously: ChatGPT Health for consumers and ChatGPT for Healthcare for medical professionals, with the latter offering stronger protections that comply with healthcare privacy obligations. Many people conflate the two, presuming the consumer product has enterprise-level security when it doesn't [2].

AI Medical Advice Accuracy Remains Questionable Despite Promising Test Results

The accuracy of AI chatbots in healthcare presents a complex picture. Large language models score well on medical licensing examinations, and one study found GPT-4o answered medical questions correctly about 85% of the time on realistic prompts, an error rate roughly in line with human doctors, who misdiagnose patients 10% to 15% of the time [1].

Source: Japan Times

However, real-world testing reveals significant problems. When Washington Post columnist Geoffrey Fowler let ChatGPT analyze a decade of his Apple Health data, including 29 million steps and 6 million heartbeat measurements, the bot gave him an F grade for cardiac health. After he connected his medical records and asked again, the grade improved only to a D. His actual doctor said he was at such low risk that insurance wouldn't cover additional testing [4]. Cardiologist Eric Topol of the Scripps Research Institute called ChatGPT's analysis "baseless" and "not ready for any medical advice," noting that the bot relied heavily on Apple Watch estimates of VO2 max and heart-rate variability, metrics that independent researchers have found can be inaccurate by an average of 13 percent [4].
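To see why a 13 percent error margin matters, consider a minimal Python sketch. It is illustrative only and not drawn from any of the cited sources: the fitness_category function, its 42.0 cutoff, and the 40.0 "true" reading are all hypothetical values chosen to show how measurement noise alone can flip a classification.

# Illustrative arithmetic only: a ~13% average error in wearable VO2 max
# estimates can move the same underlying reading across a category boundary.
# The 42.0 cutoff and the labels are hypothetical, not clinical guidance.

def fitness_category(vo2max, cutoff=42.0):
    """Classify a VO2 max reading against a single hypothetical cutoff."""
    return "above cutoff" if vo2max >= cutoff else "below cutoff"

true_vo2max = 40.0                   # hypothetical "true" value, in ml/kg/min
error = 0.13                         # average wearable estimation error [4]

low_estimate = true_vo2max * (1 - error)     # 34.8
high_estimate = true_vo2max * (1 + error)    # 45.2

for reading in (low_estimate, true_vo2max, high_estimate):
    print(f"VO2 max {reading:.1f} -> {fitness_category(reading)}")

Under these assumptions, the same person can land on either side of the cutoff depending only on measurement noise, which is the kind of input error Topol's critique points to.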

AI Chatbots Fill Healthcare Gaps But Don't Fix Systemic Problems

For the 25 million Americans without health insurance, AI chatbots may represent the closest thing to a second opinion they can afford. Unlike doctors, whose appointments average 18 minutes, AI chatbots offer "unlimited patience and unlimited time" to answer questions at 2 a.m. [5]. Some patients have found genuine value: Alex P., a writer in his mid-40s, pushed back against his doctors' dismissal of his concerns after ChatGPT suggested his calcium score indicated serious risk. He eventually received a CT scan revealing a 95% blockage requiring an immediate stent [5]. Harvard Medical School's Marc Succi notes that patients now ask "questions at the level of something an early med student might ask" rather than arriving with anxiety from Google searches and misinformation [1].

However, the launch came at an inauspicious moment, two days after news broke that teenager Sam Nelson died of an overdose following extensive ChatGPT conversations about combining drugs [1]. Both OpenAI and Anthropic emphasize their products aren't meant to replace doctors, but experts note that giving people better tools to navigate a broken healthcare system doesn't fix the underlying problems. Some users, wary of data privacy concerns, have adopted a strategy of fragmenting their health information across multiple AI platforms to avoid creating "one treasure trove that, once hacked, belongs to the entire world" [5].
