ChatGPT Health promises personalized medical insights, but experts warn of serious risks

Reviewed by Nidhi Govil

OpenAI launched ChatGPT Health, allowing users to connect medical records and fitness apps for AI-driven health advice. But independent testing reveals troubling inaccuracies—one journalist received an F grade for heart health that doctors called baseless. With 230 million people already seeking health guidance from ChatGPT weekly, experts question whether the benefits outweigh the risks of misinformation and data privacy concerns.

OpenAI Launches ChatGPT Health Amid Growing Demand for AI Health Advice

OpenAI has unveiled ChatGPT Health, a dedicated tab within its popular chatbot designed specifically for health and wellness inquiries. According to OpenAI, approximately 230 million people ask ChatGPT health-related questions each week, making health one of the platform's most common use cases [1]. The new feature allows users to connect medical records and integrate data from wellness apps including Apple Health, MyFitnessPal, Weight Watchers, Peloton, and Function to receive what the company describes as more personalized health information [3]. Almost simultaneously, Anthropic introduced Claude for Healthcare, a HIPAA-ready product targeting both consumers and healthcare providers, signaling that AI for health advice has become a key battleground among tech giants [2].

Source: Slate

The timing of ChatGPT Health's launch proved inauspicious. Just two days before the announcement, news broke about Sam Nelson, a teenager who died of an overdose after extensive conversations with ChatGPT about combining various drugs [1]. The incident immediately raised questions about the wisdom of relying on AI tools that could potentially cause extreme harm, even as OpenAI emphasizes that ChatGPT Health is intended as additional support rather than a replacement for medical care.

How ChatGPT Health Works to Analyze Health Data

ChatGPT Health operates as a separate experience within the ChatGPT interface, designed to help users navigate health information scattered across provider portals, wearable apps, and personal notes [3]. Users can upload lab results, visit summaries, and clinical history, and connect various health-tracking apps to give the AI a more complete picture. OpenAI suggests the tool can help people prepare for doctor appointments, plan questions, receive customized diet plans or workout routines, and understand patterns related to their medical profile [3].
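There is no public programmatic interface for ChatGPT Health, but the integration described above boils down to condensing raw health exports into something a model can reason over. A minimal, purely hypothetical Python sketch of that preprocessing step (the file name and column names are assumptions for illustration):

```python
# Hypothetical sketch only: OpenAI has not published a ChatGPT Health API.
# This shows the kind of preprocessing the feature implies -- condensing a
# raw wearable export into a short summary a model can reason over. The file
# name and column names ("steps", "resting_hr") are invented for this example.
import csv
from statistics import mean

def summarize_wearable_export(path: str) -> str:
    steps, resting_hr = [], []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            steps.append(int(row["steps"]))
            resting_hr.append(float(row["resting_hr"]))
    return (
        f"Days of data: {len(steps)}\n"
        f"Average daily steps: {mean(steps):,.0f}\n"
        f"Average resting heart rate: {mean(resting_hr):.0f} bpm"
    )

print(summarize_wearable_export("apple_health_export.csv"))
```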

Source: The Conversation

The feature includes medical records integration capabilities, though these are currently available only in the United States [4]. OpenAI says conversations and files are encrypted by default at rest and in transit, and that ChatGPT Health builds on this foundation with additional layered protections, including purpose-built encryption and isolation [3]. The company states that Health conversations exist in their own memory space and won't be used for foundation model training [3].

Independent Testing Reveals Troubling Inaccuracies

When put to the test, ChatGPT Health's ability to personalize health information showed significant flaws. A Washington Post columnist granted the chatbot access to a decade of Apple Watch data (29 million steps and 6 million heartbeat measurements) and asked it to grade his cardiac health. ChatGPT Health assigned him an F grade [5]. After he connected his medical records with weight, blood pressure, and cholesterol data, the grade improved only to a D.

Source: Washington Post

Cardiologist Eric Topol of the Scripps Research Institute, an expert on both longevity and AI in medicine, reviewed the analysis and called it "baseless" and "not ready for any medical advice" [5]. The journalist's actual doctor confirmed he was at such low risk of heart attack that insurance likely wouldn't cover additional cardio fitness testing. ChatGPT Health had based much of its negative assessment on Apple Watch's VO2 max estimate, a measurement that independent researchers have found can run low by an average of 13 percent, and on heart-rate variability metrics that Topol described as having "lots of fuzziness" [5].
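To make the scale of that bias concrete, a back-of-the-envelope correction shows how far a reading that runs 13 percent low can shift the number being graded. The watch value and the cutoff below are invented for illustration; only the 13 percent figure comes from the reporting above:

```python
# Illustrative arithmetic only. The watch reading and the fitness cutoff are
# made up; the 13% average low bias is the figure independent researchers
# reported for Apple Watch VO2 max estimates.
watch_reading = 35.0    # hypothetical VO2 max estimate, mL/kg/min
low_bias = 0.13         # watch reads ~13% below the true value on average

# If the watch reads 13% low, the true value is roughly reading / (1 - 0.13).
corrected = watch_reading / (1 - low_bias)
print(f"Watch estimate:     {watch_reading:.1f} mL/kg/min")
print(f"Corrected estimate: {corrected:.1f} mL/kg/min")   # about 40.2

# A grade built on a hard cutoff flips depending on which number you trust.
cutoff = 38.0           # made-up threshold between fitness categories
for label, value in [("watch", watch_reading), ("corrected", corrected)]:
    verdict = "below cutoff" if value < cutoff else "above cutoff"
    print(f"{label:>9}: {verdict}")
```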

Anthropic's Claude for Healthcare, for its part, graded the same individual's cardiac health a C, relying on similarly questionable analysis [5]. Independent research consistently shows that generative AI tools sometimes give unsafe health advice, even when they have access to medical records [4].

Data Privacy Concerns and Limited Legal Protections

Using ChatGPT Health means handing over intimate health information to an AI company, and the legal protections around that data are far from watertight. OpenAI encourages users to share sensitive data such as medical records, lab results, and wellness app information in exchange for deeper insights [2], yet ChatGPT Health is not a healthcare provider, meaning it isn't covered by HIPAA, the federal health privacy law [5].

Sara Gerke, a law professor at the University of Illinois Urbana-Champaign, explains that data protection for AI tools like ChatGPT Health "largely depends on what companies promise in their privacy policies and terms of use," since most states haven't enacted comprehensive privacy laws [2]. Hannah van Kolfschooten, a researcher in digital health law at the University of Basel, notes that while ChatGPT's current terms say it will keep data confidential and not use it to train models, "you are not protected by law, and it is allowed to change terms of use over time" [2].

Carmel Shachar, an assistant clinical professor of law at Harvard Law School, puts it bluntly: "There's very limited protection. Some of it is their word, but they could always go back and change their privacy practices" [2]. The confusion is compounded by OpenAI launching ChatGPT for Healthcare, an enterprise product with stronger protections for hospitals and clinicians, just one day after ChatGPT Health, leading many to mistakenly assume the consumer product has the same level of security [2].

Evaluating Effectiveness Against Dr. Google

Some medical professionals see health chatbots as a potential improvement on the "Dr. Google" era of medical information seeking. Marc Succi, an associate professor at Harvard Medical School and practicing radiologist, notes that treating patients who had searched their symptoms on Google required "a lot of attacking patient anxiety [and] reducing misinformation," whereas he now sees patients "asking questions at the level of something an early med student might ask" [1]. The key question is whether Dr. ChatGPT improves on Dr. Google in reducing medical misinformation and unnecessary health anxiety.

However, evaluating effectiveness remains challenging. Danielle Bitterman, clinical lead for data science and AI at Mass General Brigham, states: "It's exceedingly difficult to evaluate an open-ended chatbot" [1]. While large language models score well on medical licensing examinations, those multiple-choice tests don't reflect how people actually use chatbots for health information. When Sirisha Rambhatla, an assistant professor at the University of Waterloo, evaluated GPT-4o on licensing exam questions with the multiple-choice options removed, medical experts scored only about half of its responses as entirely correct [1].

A different study testing GPT-4o on realistic prompts found it answered medical questions correctly about 85% of the time [1]. Amulya Yadav, who led the study at Pennsylvania State University, noted that human doctors misdiagnose patients 10% to 15% of the time, though he personally remains skeptical of patient-facing medical AI tools.

Who Uses AI for Health and What Risks Remain

Research from 2024 estimated that almost one in ten Australians had asked ChatGPT a health question in the previous six months, with usage more common among people born in non-English-speaking countries, those who spoke another language at home, and people with limited health literacy [4]. Among those who hadn't recently used ChatGPT for health, 39% were considering using it soon [4].

OpenAI worked with more than 260 clinicians across 60 countries, including Australia, to provide feedback on ChatGPT Health's outputs [4]. However, the tool has not been independently tested, and it remains unclear whether ChatGPT Health would be regulated as a medical device in Australia [4]. Its responses may not reflect Australian clinical guidelines or meet the needs of priority populations, including First Nations people, people from culturally and linguistically diverse backgrounds, people with disability and chronic conditions, and older adults [4].

The health questions that carry the greatest risk of serious consequences are those requiring clinical expertise to answer: finding out what symptoms mean, asking for advice about treatment, and interpreting test results [4]. Even with access to consumer health data, AI tools show well-documented tendencies to agree with users and to hallucinate rather than admit ignorance [1]. When doctors are unavailable or unable to help, people will turn to alternatives, making the accuracy and safety of these tools a pressing concern for the millions seeking diagnosis support and medical guidance online.
