9 Sources
[1]
40 million people globally are using ChatGPT for healthcare - but is it safe?
5% of messages to ChatGPT globally concern healthcare. Users ask about symptoms and insurance advice, for example. Chatbots can provide dangerously inaccurate information.

More than 40 million people worldwide rely on ChatGPT for daily medical advice, according to a new report from OpenAI shared exclusively with Axios. The report, based on an anonymized analysis of ChatGPT interactions and a user survey, also sheds light on some of the specific ways people are using AI to navigate the sometimes complex intricacies of healthcare. Some are prompting ChatGPT with queries regarding insurance denial appeals and possible overcharges, for example, while others are describing their symptoms in the hope of receiving a diagnosis or treatment advice.

It should come as no surprise that a large number of people are using ChatGPT for sensitive personal matters. The three-year-old chatbot, along with others like Google's Gemini and Microsoft's Copilot, has become a confidant and companion for many users, a guide through some of life's thornier moments. Last spring, an analysis conducted by Harvard Business Review found that psychological therapy was the most common use of generative AI. The new OpenAI report is therefore just another brick in a rising edifice of evidence showing that generative AI will be -- indeed already is -- much more than simply a search engine on steroids.

What's most jarring about the report is the sheer scale at which users are turning to ChatGPT for medical advice. It also underscores some urgent questions about the safety of this type of AI use at a time when many millions of Americans are suddenly facing new and major healthcare-related challenges. (Disclosure: Ziff Davis, ZDNET's parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
According to Axios, the OpenAI report found that more than 5% of all messages sent to ChatGPT globally are related to healthcare. As of July of last year, the chatbot reportedly processed around 2.5 billion prompts per day, which means it's responding to at least 125 million healthcare-related questions every day (and likely more than that now, since its user base is still growing).

Many of those conversations -- around 70%, according to Axios -- are happening outside the normal working hours of medical clinics, underscoring a key benefit of this kind of AI use: unlike human doctors, it's always available. Some people have also leveraged chatbots to help spot billing errors and cases in which exorbitantly high medical costs can be disputed.

The widespread embrace of ChatGPT as an automated medical expert coincides with what, for many Americans, has been a stressful start to the year due to a sudden spike in the cost of healthcare coverage. With the expiration of pandemic-era Affordable Care Act tax subsidies, over 20 million ACA enrollees have reportedly seen their monthly premiums increase by an average of 114%. Some of those people, especially younger, healthier, and more cash-strapped Americans, will likely opt to forgo health insurance entirely, perhaps turning instead to chatbots like ChatGPT for medical advice.

AI might always be available to chat, but it's also prone to hallucination -- fabricating information that's delivered with the confidence of fact -- and is therefore no substitute for an actual, flesh-and-blood medical expert. One study conducted by a cohort of physicians and posted to the preprint server arXiv in July, for example, found that some industry-leading chatbots frequently responded to medical questions with dangerously inaccurate information.
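The arithmetic behind that 125 million figure is easy to check. A minimal sketch, treating the article's round numbers (2.5 billion daily prompts, a 5% healthcare share) as assumed inputs rather than OpenAI's raw data:

```python
# Back-of-the-envelope check of the figures reported above.
# Both inputs are the article's rounded estimates, not official counts.
daily_prompts = 2_500_000_000   # ~2.5 billion ChatGPT prompts per day (July figure)
healthcare_share = 0.05         # "more than 5%" of all messages concern healthcare

daily_healthcare_prompts = daily_prompts * healthcare_share
print(f"{daily_healthcare_prompts:,.0f} healthcare prompts per day")
# -> 125,000,000 healthcare prompts per day, matching "at least 125 million"
```

Since both inputs are lower bounds ("more than 5%", a mid-2025 traffic figure), the true daily count would only be higher.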
The rate at which this kind of response was generated by OpenAI's GPT-4o and Meta's Llama was especially high: 13% in each case. "This study suggests that millions of patients could be receiving unsafe medical advice from publicly available chatbots, and further work is needed to improve the clinical safety of these powerful tools," the authors of the July paper noted.

OpenAI is currently working to improve its models' ability to safely respond to health-related queries, according to Axios. For the time being, generative AI should be approached like WebMD: often useful for answering basic questions about medical conditions or the complexities of the healthcare system, but not to be relied on as a definitive source for, say, diagnosing a chronic ailment or treating a serious injury. And given its propensity to hallucinate, it's best to treat AI's responses with an even bigger grain of salt than you'd apply to information gleaned from a quick Google search -- especially when it comes to more sensitive personal questions.
[2]
OpenAI sees big opportunity in US health queries
One man's failing healthcare system is another man's opportunity

About sixty percent of American adults have turned to AI like ChatGPT for health or healthcare in the past three months. Instead of seeing that as an indictment of the state of US healthcare, OpenAI sees an opportunity to shape policy. A study published by OpenAI on Monday claims more than 40 million people worldwide ask ChatGPT healthcare-related questions each day, accounting for more than five percent of all messages the chatbot receives. About a quarter of ChatGPT's regular users submit healthcare-related prompts each week, and OpenAI understands why so many of those users are in the United States.

"In the United States, the healthcare system is a long-standing and worsening pain point for many," OpenAI surmised in its study. Studies and first-hand accounts from medical professionals bear that out. Results of a Gallup poll published in December found that a mere 16 percent of US adults were satisfied with the cost of US healthcare, and only 24 percent of Americans have a positive view of their healthcare coverage.

It's not hard to see why. Healthcare spending has skyrocketed in recent years, and with Republican elected officials refusing to extend Affordable Care Act subsidies, US households are due to see another spike in insurance costs in 2026. Based on Gallup's findings, it seems that American insureds, who pay the highest per capita healthcare costs in the world, don't think they're getting their money's worth.

According to OpenAI, more Americans are turning to its AI to close healthcare gaps, and the company doesn't seem at all troubled by that. "For both patients and providers in the US, ChatGPT has become an important ally, helping people navigate the healthcare system, enabling them to self-advocate, and supporting both patients and providers for better health outcomes," OpenAI said in its study.
According to the report, which used a combination of a survey of ChatGPT users and anonymized message data, nearly 2 million messages a week come from people trying to navigate America's labyrinthine health insurance ecosystem, but they're still not the majority of US AI healthcare answer seekers. Fifty-five percent of US adults who used AI to help manage their health or healthcare in the past three months said they were trying to understand symptoms, and seven in ten healthcare conversations in ChatGPT happened outside normal clinic hours. Individuals in "hospital deserts," classified in the report as areas where people are more than a 30-minute drive from a general medical or children's hospital, were also frequent users of ChatGPT for healthcare-related questions. In other words, when clinic doors are closed or care is hard to reach, care-deprived Americans are turning to an AI for potentially urgent healthcare questions instead.

As The Guardian reported last week, relying on AI for healthcare information can lead to devastating outcomes. The Guardian's investigation of healthcare-related questions put to Google AI Overviews found that inaccurate answers were frequent, with Google AI giving incorrect information about the proper diet for cancer patients, liver function tests, and women's healthcare.

OpenAI rebuffed the idea that it could be providing bad information to Americans seeking healthcare information in an email to The Register. A spokesperson told us that OpenAI has a team dedicated solely to handling accurate healthcare information, and that it works with clinicians and healthcare professionals to safety-test its models, suss out where risks might be found, and improve health-related results. OpenAI also told us that GPT-5 models have scored higher than previous iterations on the company's homemade healthcare benchmarking system.
It further claims that GPT-5 has greatly reduced all of its major failure modes (i.e., hallucinations, errors in urgent situations, and failures to account for global healthcare contexts). None of those data points actually addresses how often ChatGPT could be wrong in critical healthcare situations, however. What does that matter to OpenAI, though, when there's potentially heaps of money to be made expanding into the medical industry?

The report seems to conclude that the company's increasingly large role in the US healthcare industry, again, isn't an indictment of a failing system as much as it is the inevitable march of technological progress, and it includes several "policy concepts" that OpenAI says are a preview of a full AI-in-healthcare policy blueprint it intends to publish in the near future. Leading the recommendations, naturally, is a call for opening and securely connecting publicly funded medical data so OpenAI's AI can "learn from decades of research at once." OpenAI is also calling for new infrastructure that incorporates AI into medical wet labs, support for helping healthcare professionals transition into being directly supported by AI, new frameworks from the US Food and Drug Administration to open a path to consumer AI medical devices, and clarified medical device regulation to "encourage ... AI services that support doctors." ®
[3]
More Than 40 Million People Use ChatGPT Daily for Healthcare Advice, OpenAI Claims
ChatGPT users around the world send billions of messages every week asking the chatbot for healthcare advice, OpenAI shared in a new report on Monday. Roughly 200 million of ChatGPT's more than 800 million regular users submit a prompt about healthcare every week, and more than 40 million do so every day. According to anonymized ChatGPT user data, more than half of users ask ChatGPT to check or explore symptoms, while others use it to decode medical jargon or get more information about treatment options. Nearly 2 million of these weekly messages also focus on health insurance, asking ChatGPT to help compare plans or handle claims and billing.

The numbers are somewhat reflective of the troubled state of the American healthcare system, especially as patients struggle to pay exorbitant medical bills. In its own research, OpenAI found that three in five Americans viewed the current system as broken, with hospital costs the biggest pain point. The study found that seven in 10 healthcare-related conversations happen outside of normal clinic hours. On top of that, an average of more than 580,000 healthcare inquiries were sent from "hospital deserts," aka places in the United States that are more than a 30-minute drive from a general medical or children's hospital.

The report also showed increasing AI adoption among healthcare professionals. Citing data from the American Medical Association, OpenAI said that 46% of American nurses reported using AI weekly. The report comes as OpenAI increases its bet on healthcare AI, despite the concerns about accuracy and privacy that come with the technology's deployment. The company's CEO of applications, Fidji Simo, said she is "most excited for the breakthroughs that AI will generate in healthcare" in a press release announcing her new role in July 2025. OpenAI isn't alone in its big healthcare bet, either.
Big tech giants from Google to Palantir have been working on product offerings in the healthcare AI space for years. Many people think health AI is a promising field with a lot of potential to ease the burden on medical workers. But it's also contentious, because AI is prone to mistakes. While a hallucinated response can be an annoying hurdle in many other areas of use, in healthcare it can be a life-or-death matter. These AI-driven risks are not confined to the world of hypotheticals: according to a report from August 2025, a 60-year-old with no past psychiatric or medical history was hospitalized for bromide poisoning after following ChatGPT's recommendation to take the supplement. As the tech stands today, no one should use a chatbot to self-diagnose or treat a medical condition, full stop.

As investment in the technology builds up, so do policy conversations. There is no comprehensive federal framework on AI, much less healthcare AI, but the Trump administration has made it clear that it intends to change that. In July, OpenAI CEO Sam Altman was one of many tech executives in attendance at the White House's "Make Health Tech Great Again" event, where Trump announced a private sector initiative to use AI assistants for patient care and share the medical records of Americans across apps and programs from 60 companies. The FDA is also looking to revamp how it regulates AI deployment in health. The agency published a request for public comment in September 2025, seeking feedback from the medical sector on health AI deployment and evaluations.

OpenAI's latest report seems to be its own attempt at putting a comment on the public record. The company pairs its findings with sample policy concepts, like asking for full access to the world's medical data and a clearer regulatory pathway to make AI-infused medical devices for consumer use.
"We urge FDA to move forward and work with industry towards a clear and workable regulatory policy that will facilitate innovation of safe and effective AI medical devices," the company said in the report. In the next few months, OpenAI is preparing to release a full policy blueprint for how it wants healthcare AI to be regulated, the company added in the report.
[4]
OpenAI says 40 million people use ChatGPT for healthcare every day
200 million ChatGPT users ask AI at least once a week about health-related matters

OpenAI has published a report claiming that 40 million people are using ChatGPT for health-related questions every single day, a number that would have sounded wild a couple of years ago but now feels almost inevitable. The company describes its chatbot as a healthcare ally, saying users regularly ask about symptoms, medications, treatment options, and how to navigate often overwhelmed health systems. The report suggests more than five percent of all ChatGPT prompts are about health, and 200 million of the chatbot's 800 million weekly users ask at least one health-related prompt every week. Most of those are people trying to figure out whether a headache is serious, what a complicated diagnosis actually means, or whether a new prescription is supposed to make them feel this tired. I will admit I have done the same after a late-night indigestion spiral, something I used to turn to Google for only a couple of years ago.

OpenAI's report asked 1,042 US adults who used AI for healthcare in the past three months exactly how they use the chatbot for health-related matters. 55% used AI to "Check or explore symptoms", 52% used a chatbot to "Ask healthcare questions at any time of day", 48% for "understanding medical terms or instructions", and 44% used AI to "learn about treatment options". OpenAI says these stats show "how Americans are using AI for healthcare navigation: organizing information, translating jargon, and generating drafts they can verify."

One example the company highlighted was of Ayrin Santoso from San Francisco, who "used ChatGPT to help coordinate urgent care for her mother in Indonesia after her mother suffered sudden vision loss that her family attributed to fatigue." According to OpenAI, Santoso "entered symptoms, prior advice, and context, and received a clear warning from ChatGPT that her mother's condition could signal a hypertensive crisis and possible stroke."
Following ChatGPT's initial response, Santoso's mother was hospitalized in Indonesia and has since "recovered 95% of her vision in the affected eye."

OpenAI argues that AI can help outside clinic hours, when real doctors are hard to reach. That makes sense on paper, but there are serious risks, especially if you take ChatGPT's word as gospel. A chatbot cannot replace a doctor; it does not have your full medical history, and it can still get things wrong in ways that matter. OpenAI says it is working with hospitals and researchers to improve accuracy and safety, but the core message is clear: millions of people have already decided AI is part of their health routine, whether the rest of us like it or not.

40 million daily users is a wild milestone, but while it's easy to get carried away with such a landmark number, it's worth remembering that people have been using technology like Google for health-related queries for well over a decade. That said, Google's top search results used to be led by reliable health-related websites like the UK's NHS or WebMD. Now, AI Overviews add an element of AI uncertainty. And even more so when you're turning to an AI chatbot like ChatGPT, capable of making up the most ridiculous information.

I don't think using AI for quick tips on health-related matters is a bad thing, especially in countries like the United States, where you need to pay to see a doctor about a simple skin irritation. But how do you know it's a simple skin irritation? And do you trust ChatGPT enough to take the risk?
[5]
Exclusive: 40 million Americans turn to ChatGPT for health care
Why it matters: Americans are turning to AI tools to navigate the notoriously complex and opaque U.S. health care system.

The big picture: Patients see ChatGPT as an "ally" in navigating their health care, according to analysis of anonymized interactions with ChatGPT and a survey of ChatGPT users by the AI-powered tool Knit.

* Users turn to ChatGPT to decode medical bills, spot overcharges, appeal insurance denials, and when access to doctors is limited, some even use it to self-diagnose or manage their care.

By the numbers: More than 5% of all ChatGPT messages globally are about health care.

* OpenAI found that users ask 1.6 to 1.9 million health insurance questions per week for guidance comparing plans, handling claims and billing, and other coverage queries.
* In underserved rural communities, OpenAI says users send an average of nearly 600,000 health care-related messages every week.
* Seven in 10 health care conversations in ChatGPT happen outside of normal clinic hours.

Zoom in: Patients can enter symptoms, prior advice from doctors, and context around their health care issues, and ChatGPT can deliver warnings on the severity of certain conditions.

* When care isn't available, this can help patients decide if they should wait for appointments or if they need to seek emergency care.
* "Reliability improves when answers are grounded in the right patient-specific context such as insurance plan documents, clinical instructions, and health care portal data," OpenAI says in the report.

Reality check: ChatGPT can give wrong and potentially dangerous advice, especially in conversations around mental health.

* OpenAI currently faces multiple lawsuits from people who say loved ones harmed or killed themselves after interacting with the technology.
* States have enacted new laws focused on use of AI-enabled chatbots, banning apps or services from offering mental health and therapeutic decision-making.
The intrigue: Multiple viral stories highlight how people have uploaded itemized bills to AI for analysis, uncovering errors like duplicate charges, improper coding, or violations of Medicare rules.

Behind the scenes: OpenAI says it's working to strengthen how ChatGPT responds in health contexts.

* The company is continuing to evaluate models to reduce harmful or misleading responses, and work with clinicians to identify risks and improve.
* GPT-5 models are more likely to ask follow-up questions from the user, browse the internet for the latest research, use hedging language, and direct users to professional evaluation when needed, per the company.

Our thought bubble: The end of enhanced Affordable Care Act subsidies could accelerate this quiet shift as uninsured and underinsured patients lean on chatbots for health care guidance.

What we're watching: How accuracy, liability, and access to patient data evolve as more Americans rely on AI for medical guidance without a doctor in the loop.
[6]
40 Million People Use ChatGPT Daily for Advice on Health, OpenAI Report Reveals | AIM
AI tools are being used at scale to navigate healthcare systems, particularly for insurance-related queries, after-hours guidance and administrative tasks, according to a January 2026 report by OpenAI analysing anonymised ChatGPT data. The report finds that over 5% of global interactions on ChatGPT are related to healthcare. On average, more than 40 million people turn to the platform daily with questions related to healthcare, and one in four users asks healthcare-related questions per week, indicating sustained use.

A large share of this activity relates to non-clinical tasks. The analysis estimates that 1.6 million to 1.9 million messages per week focus on health insurance, including plan comparisons, billing issues, claims, eligibility and cost-sharing. Users primarily seek help organising information, understanding terminology and preparing documents rather than medical diagnosis.

Timing data suggests AI is often used when traditional healthcare access is limited. Around 70% of healthcare-related interactions occur outside standard clinic hours, indicating a demand for information at night and on weekends. Geographic disparities also shape usage patterns. Users in rural and underserved areas generate close to six lakh (600,000) healthcare-related interactions per week. In areas defined as 'hospital deserts', locations more than 30 minutes from the nearest general hospital, AI tools recorded over 5.8 lakh (580,000) healthcare-related messages per week during a four-week period in late 2025. States including Wyoming, Oregon and Montana ranked highest by share of such interactions.

Healthcare professionals are also using AI tools, largely for administrative support. Citing industry surveys, the report notes that 66% of US-based physicians reported using AI in 2024, up from 38% the previous year. Nearly half of US-based nurses report weekly use, primarily for documentation, billing and workflow support rather than clinical decision-making.
Meanwhile, OpenAI has released a new benchmark, HealthBench, designed to evaluate AI systems' capabilities in healthcare. The benchmark aims to help large language models support patients and clinicians with health discussions that are trustworthy, meaningful and open to continuous improvement. HealthBench looks at seven key areas, including emergency care, managing uncertainty and global health.
[7]
OpenAI Says Over 40 Million Users Have Asked ChatGPT Healthcare Queries
OpenAI said that 7 out of 10 healthcare chats occur outside clinic hours

OpenAI's ChatGPT is reportedly drawing a large volume of healthcare-related questions. As per the report, the San Francisco-based artificial intelligence (AI) giant claimed that more than 40 million users globally send the AI chatbot questions seeking healthcare and medical information every day. A significant portion of these messages is said to come from underserved rural communities, and one of the most asked-about topics is health insurance. Notably, in August 2025, when the company released the GPT-5 AI model, it said that a big focus was on health-related performance.

OpenAI Reportedly Claims High Volume of Healthcare Queries

The AI giant shared several user data metrics with Axios on how individuals interact with the chatbot when it comes to healthcare and medical queries. Notably, the healthcare messages from the abovementioned 40 million daily users make up north of five percent of all ChatGPT messages globally. The company reportedly also revealed that between 1.6 and 1.9 million messages per week ask for guidance about health insurance, with primary questions around plan comparison, claims and billing, and coverage. Apart from this, the report also claimed that as many as 600,000 healthcare-related questions per week come from users residing in underserved rural communities, and seven out of 10 conversations occur at a time when clinics are generally closed.

OpenAI also shared results from a survey it conducted in December 2025 with the publication. It asked several user behaviour questions of 1,042 adults in the US. As per the data shared in the report, 55 percent of the respondents stated that they use ChatGPT to check or explore physical symptoms they're facing, while 48 percent use the chatbot to understand medical terms and instructions. Another 44 percent admitted using AI to learn about treatment options. The data highlights two things immediately.
First is the limited accessibility of healthcare and medical information in the public domain. While Google has long been a popular way for people to look up healthcare information, that knowledge is not readily available unless users know the right keywords to search for and can decipher technical medical language. Second is the accessibility of healthcare professionals. Many individuals, especially those living in areas with limited healthcare infrastructure, often do not visit doctors and healthcare professionals for minor ailments and resort to home remedies instead. OpenAI's data shows how AI is filling both of these gaps with informative and science-backed knowledge.

However, there are concerns. With AI hallucination still an issue in 2026, the reliability of the information shared by a chatbot remains a big question. Although OpenAI told the publication that it is continuously working to improve its healthcare-related responses, the margin for error, multiplied across a massive user base, can quickly become a recipe for disaster if ChatGPT starts spreading misinformation.
[8]
OpenAI Reports More Than 40 Million Users Turn to ChatGPT Daily for Health Guidance
Millions Turn to ChatGPT for Health Guidance Every Day, OpenAI Reveals

OpenAI's new report shows more than 40 million people opting for ChatGPT each day when seeking health advice. This sharp rise reflects a global trend as users depend on artificial intelligence to navigate complex healthcare systems. The data shows roughly 5% of all ChatGPT interactions worldwide are now about health topics or medical questions. Driven by high costs and limited access, users rely on the platform to understand insurance terms, check symptoms, and prepare for doctor visits. Notably, 70% of these questions arrive outside normal clinic hours, making the chatbot a 24/7 digital safety net.
[9]
Over 40 mn people use ChatGPT daily for health advice: OpenAI
Over five percent of all messages sent to ChatGPT globally are about healthcare.

More than 40 million people around the world now turn to ChatGPT every day for health-related advice, according to a new report from OpenAI, the company behind the chatbot. The report says that over five percent of all messages sent to ChatGPT globally are about healthcare. People use ChatGPT for many health-related reasons. More than half said they used it to check or explore symptoms. Nearly half said it helped them understand medical terms or instructions, while about 44 percent used it to learn more about treatment options.

The findings, first reported by Axios, highlight how quickly AI chatbots are becoming part of the US healthcare system. Some states, including California and Texas, have tried to limit how AI can be used in healthcare. At the same time, Congress has been slow to act, and the Trump administration is working to weaken state-level AI laws. OpenAI is also facing lawsuits that claim ChatGPT contributed to suicides or made mental health problems worse, a concern some experts call AI psychosis.

Despite these concerns, OpenAI's report presents ChatGPT as a helpful tool for people struggling with a complex and costly healthcare system, especially in rural areas with few hospitals or doctors. "For both patients and providers in the US, ChatGPT has become an important ally, helping people navigate the healthcare system, enabling them to self-advocate, and supporting both patients and providers for better health outcomes," the company said. "Americans are using AI and ChatGPT to equip themselves with information to gain more agency over their health, particularly when dealing with a system that's difficult to navigate and makes decisions without a lot of context."
OpenAI also noted that people in rural "hospital deserts" send about 580,000 healthcare-related messages to ChatGPT each week. "AI will not, on its own, reopen a shuttered hospital, restore a discontinued obstetrics [unit], or replace other critical but vanishing services," the company said. "But it can make a near-term contribution by helping people in underserved areas interpret information, prepare for care, and navigate gaps in access, while helping rare clinicians reclaim time and reduce burnout."
OpenAI reveals that more than 40 million people worldwide turn to ChatGPT daily for healthcare advice, from checking symptoms to navigating insurance denials. With over 5% of all ChatGPT messages globally related to healthcare, the company sees an opportunity to shape policy—even as experts warn about dangerously inaccurate medical information and hallucination risks.
More than 40 million people worldwide now use ChatGPT for healthcare every single day, according to a new report from OpenAI shared exclusively with Axios [5]. The analysis, based on anonymized ChatGPT interactions and user surveys, reveals that healthcare-related queries account for more than 5% of all messages sent to the chatbot globally [1]. With ChatGPT processing around 2.5 billion prompts per day as of July last year, this translates to at least 125 million healthcare-related questions daily, a figure likely higher now as the user base continues expanding [1].
Among ChatGPT's more than 800 million regular users, roughly 200 million submit healthcare prompts weekly [3]. The sheer scale underscores how rapidly reliance on AI for medical matters has grown, transforming what was once a search engine query into an interactive conversation with an artificial intelligence system.

According to OpenAI's survey of 1,042 US adults who used AI for healthcare in the past three months, 55% turned to ChatGPT to check or explore symptoms and diagnoses, while 52% valued the ability to ask healthcare questions at any time of day [4]. Nearly half (48%) used the chatbot for understanding medical terms or instructions, and 44% sought information about treatment options [4].
Users also leverage ChatGPT for insurance advice and billing issues, with OpenAI finding that 1.6 to 1.9 million messages per week focus on comparing health insurance plans, handling claims, and navigating coverage queries [5]. Some patients upload itemized medical bills to the AI for analysis, uncovering errors like duplicate charges, improper coding, or violations of Medicare rules [5].

Seven in ten healthcare conversations happen outside normal clinic hours, highlighting a key benefit: unlike human doctors, ChatGPT remains available 24/7 [1]. In underserved areas, classified as hospital deserts where residents live more than a 30-minute drive from medical facilities, users send an average of nearly 600,000 healthcare-related messages weekly [5].
.The widespread adoption of ChatGPT for healthcare coincides with mounting financial pressures on Americans. With the expiration of pandemic-era Affordable Care Act tax subsidies, over 20 million ACA enrollees have seen their monthly premiums increase by an average of 114%
1
. OpenAI's own research found that three in five Americans view the current healthcare system as broken, with hospital costs representing the most significant pain point3
.According to a Gallup poll published in December, only 16% of US adults were satisfied with healthcare costs, and just 24% held a positive view of their coverage
2
. Americans pay the highest per capita healthcare costs globally, yet many feel they're not receiving adequate value2
. This environment creates conditions where younger, healthier, and cash-strapped individuals may forego professional medical advice entirely, turning instead to free AI alternatives.While AI might always be available, it remains prone to hallucination—fabricating information delivered with the confidence of fact
1
. A study conducted by physicians and posted to the preprint server arXiv in July found that industry-leading chatbots frequently responded to medical questions with dangerously inaccurate information1
. The rate of unsafe responses from OpenAI's GPT-4o and Meta's Llama was especially alarming at 13% each1
.
"This study suggests that millions of patients could be receiving unsafe medical advice from publicly available chatbots, and further work is needed to improve the clinical safety of these powerful tools," the authors noted [1]. The risks extend beyond hypotheticals: a report from August 2025 documented a 60-year-old with no psychiatric or medical history who was hospitalized due to bromide poisoning after following ChatGPT's recommendation to take the supplement [3].

OpenAI currently faces multiple lawsuits from people who say loved ones harmed or killed themselves after interacting with the technology, particularly in mental health contexts [5]. Several states have enacted laws banning AI-enabled chatbots from offering therapeutic decision-making or mental health services [5]. An investigation by The Guardian found that Google AI Overviews frequently provided incorrect information about cancer patient diets, liver function tests, and women's healthcare [2].
Rather than viewing the surge in healthcare queries as an indictment of America's struggling medical system, OpenAI sees a business opportunity and a chance to influence regulation [2]. The company's report includes sample policy concepts and previews a full policy blueprint for healthcare AI regulation, set for release in the coming months [3].

Leading OpenAI's recommendations is a call to open and securely connect publicly funded medical data so AI systems can "learn from decades of research at once" [2]. The company also advocates new infrastructure to incorporate AI into medical laboratories, support for healthcare professionals transitioning to AI-assisted workflows, and clearer FDA regulatory pathways for AI-infused medical devices [3].

"We urge FDA to move forward and work with industry towards a clear and workable regulatory policy that will facilitate innovation of safe and effective AI medical devices," OpenAI stated in the report [3]. The FDA published a request for public comment in September 2025, seeking feedback on healthcare AI deployment and evaluations [3].

OpenAI CEO Sam Altman attended the White House's "Make Health Tech Great Again" event in July, where the Trump administration announced a private-sector initiative to deploy AI assistants for patient care and share medical records across apps from 60 companies [3]. Fidji Simo, the company's CEO of applications, said she is "most excited for the breakthroughs that AI will generate in healthcare" [3].

OpenAI says it has a dedicated team responsible for accurate healthcare information and works with clinicians to safety-test models and reduce risks [2]. The company says its GPT-5 models score higher on internal healthcare benchmarks and have reduced major failure modes, including hallucination, errors in urgent situations, and failures to account for global healthcare contexts [2]. These newer models are more likely to ask follow-up questions, browse the internet for recent research, use hedging language, and direct users to professional medical advice when needed [5].

None of these improvements, however, addresses how often ChatGPT provides wrong answers in critical healthcare situations [2]. For now, experts recommend approaching generative AI like WebMD or Google: useful for answering basic questions about medical conditions or the healthcare system's complexities, but not a definitive source for diagnosing chronic ailments or treating serious injuries [1]. Given AI's propensity to hallucinate, its responses warrant even greater skepticism [1].

The end of enhanced Affordable Care Act subsidies could accelerate this shift as uninsured and underinsured patients increasingly lean on chatbots without professional oversight [5]. Key questions remain about accuracy, liability, privacy, and access to care as millions navigate health decisions with AI assistance rather than doctors in the loop [5]. Big tech companies from Google to Microsoft and Palantir have been developing healthcare AI offerings for years, making this a competitive space with significant financial stakes [3].

Summarized by Navi