54 Sources
[1]
ChatGPT Health lets you connect medical records to an AI that makes things up
On Wednesday, OpenAI announced ChatGPT Health, a dedicated section of the AI chatbot designed for "health and wellness conversations" intended to connect a user's health and medical records to the chatbot in a secure way. But mixing generative AI technology like ChatGPT with health advice or analysis of any kind has been a controversial idea since the launch of the service in late 2022. Just days ago, SFGate published an investigation detailing how a 19-year-old California man died of a drug overdose in May 2025 after 18 months of seeking recreational drug advice from ChatGPT. It's a telling example of what can go wrong when chatbot guardrails fail during long conversations and people follow erroneous AI guidance. Despite the known accuracy issues with AI chatbots, OpenAI's new Health feature will allow users to connect medical records and wellness apps like Apple Health and MyFitnessPal so that ChatGPT can provide personalized health responses like summarizing care instructions, preparing for doctor appointments, and understanding test results. OpenAI says more than 230 million people ask health questions on ChatGPT each week, making it one of the chatbot's most common use cases. The company worked with more than 260 physicians over two years to develop ChatGPT Health and says conversations in the new section will not be used to train its AI models. "ChatGPT Health is another step toward turning ChatGPT into a personal super-assistant that can support you with information and tools to achieve your goals across any part of your life," wrote Fidji Simo, OpenAI's CEO of applications, in a blog post. But despite OpenAI's talk of supporting health goals, the company's terms of service directly state that ChatGPT and other OpenAI services "are not intended for use in the diagnosis or treatment of any health condition." It appears that policy is not changing with ChatGPT Health. OpenAI writes in its announcement, "Health is designed to support, not replace, medical care. 
It is not intended for diagnosis or treatment. Instead, it helps you navigate everyday questions and understand patterns over time -- not just moments of illness -- so you can feel more informed and prepared for important medical conversations."
A cautionary tale
The SFGate report on Sam Nelson's death illustrates why that disclaimer matters, legally and practically. According to chat logs reviewed by the publication, Nelson first asked ChatGPT about recreational drug dosing in November 2023. The AI assistant initially refused and directed him to health care professionals. But over 18 months of conversations, ChatGPT's responses reportedly shifted. Eventually, the chatbot told him things like "Hell yes -- let's go full trippy mode" and recommended he double his cough syrup intake. His mother found him dead from an overdose the day after he began addiction treatment. While Nelson's case did not involve the analysis of doctor-sanctioned health care instructions like the type ChatGPT Health will link to, it is not unique: many people have been misled by chatbots that provide inaccurate information or encourage dangerous behavior, as we have covered in the past. That's because AI language models can easily confabulate, generating plausible but false information in a way that makes it difficult for some users to distinguish fact from fiction. The AI models behind services like ChatGPT use statistical relationships in training data (like the text from books, YouTube transcripts, and websites) to produce plausible responses rather than necessarily accurate ones. Moreover, ChatGPT's outputs can vary widely depending on who is using the chatbot and what has previously taken place in the user's chat history (including notes about previous chats). Then there's the issue of unreliable training data, which companies like OpenAI use to create the models. Fundamentally, all major AI language models rely on information collected online.
Rob Eleveld of the AI regulatory watchdog Transparency Coalition told SFGate: "There is zero chance, zero chance, that the foundational models can ever be safe on this stuff. Because what they sucked in there is everything on the Internet. And everything on the Internet is all sorts of completely false crap." So when summarizing a medical report or analyzing a test result, ChatGPT could make a mistake that the user, not being trained in medicine, would not be able to spot. Even with these hazards, it's likely that the quality of health-related chats with the AI bot can vary dramatically between users because ChatGPT's output partially mirrors the style and tone of what users feed into the system. For example, anecdotally, some users claim to find ChatGPT useful for medical issues, though some successes for a few users who know how to navigate the bot's hazards do not necessarily mean that relying on a chatbot for medical analysis is wise for the general public. That's doubly true in the absence of government regulation and safety testing. In a statement to SFGate, OpenAI spokesperson Kayla Wood called Nelson's death "a heartbreaking situation" and said the company's models are designed to respond to sensitive questions "with care." ChatGPT Health is rolling out to a waitlist of US users, with broader access planned in the coming weeks.
[2]
OpenAI unveils ChatGPT Health, says 230 million users ask about health each week | TechCrunch
OpenAI announced ChatGPT Health on Wednesday, which the company said will offer a dedicated space for users to have conversations with ChatGPT about their health. People already use ChatGPT to ask about medical issues; OpenAI says that over 230 million people ask health and wellness questions on the platform each week. But the ChatGPT Health product silos these conversations away from your other chats. That way, the context of your health won't come up in standard conversations with ChatGPT. If people start chats about their health outside of the Health section, the AI aims to nudge them to switch over. Within Health, the AI might reference things you've discussed in its standard experience. If you ask ChatGPT for help constructing a marathon training plan, for example, then the AI would know you're a runner when you talk in Health about your fitness goals. ChatGPT Health will also be able to integrate with your personal information or medical records from wellness apps like Apple Health, Function, and MyFitnessPal. OpenAI notes that it will not use Health conversations to train its models. The CEO of Applications at OpenAI, Fidji Simo, wrote in a blog post that she sees ChatGPT Health as a response to existing issues in the healthcare space, like cost and access barriers, overbooked doctors, and a lack of continuity in care. While the healthcare system has its drawbacks, using AI chatbots for medical advice creates a new slew of challenges. Large language models (LLMs) like ChatGPT operate by predicting the most likely response to prompts, not the most correct answer, since LLMs don't have a concept of what is true or not. AI models are also prone to hallucinations. In its own terms of service, OpenAI states that its services are "not intended for use in the diagnosis or treatment of any health condition." The feature is expected to roll out in the coming weeks.
[3]
OpenAI Would Like You to Share Your Health Data with its ChatGPT
OpenAI wants your health data. On Wednesday, OpenAI, the company behind the wildly popular artificial-intelligence chatbot ChatGPT, announced some users will be able to feed their health information into the bot, from medical records to test results to lifestyle app data. In return, OpenAI says users can expect ChatGPT to give them more personalized meal planning, nutrition advice, and lab test insights. In a blog post explaining ChatGPT Health on Wednesday, OpenAI said more than 230 million people a week ask their AI health-related questions. The new feature was designed in collaboration with physicians and is meant to help people "take a more active role in understanding and managing their health and wellness" while "supporting, not replacing, care from clinicians," according to the company. But as Scientific American and many other outlets have previously reported, some health experts have urged caution when using ChatGPT for health care reasons, and especially for mental health. The company has faced legal scrutiny in recent years after several teenagers died by suicide after interacting with ChatGPT. OpenAI did not immediately respond to a request for comment. Other experts are more positive. Peter D. Chang, an associate professor of radiological sciences and computer science at the University of California, Irvine, says the tool represents a "step in the right direction" toward more personalized medical care.
But he also cautioned that users should take any AI-generated medical advice with a grain of salt. "Absolutely there's nothing preventing the model from going off the rails to give you a nonsensical result," Chang says. "Maybe don't do exactly what it says, but use it as a starting point to learn more."
[4]
OpenAI Launches ChatGPT Health: A Dedicated Tab for Medical Inquiries
ChatGPT is expanding its presence in the health care realm. OpenAI said Wednesday that its popular AI chatbot will begin rolling out ChatGPT Health, a new tab dedicated to addressing all your medical inquiries. The goal of this new tab is to centralize all your medical records and provide a private area for your wellness issues. Looking for answers about a plethora of health issues is a top use for the chatbot. According to OpenAI, "hundreds of millions of people" sign in to ChatGPT every week to ask a variety of health and wellness questions. Additionally, ChatGPT Health (currently in beta testing) will encourage you to connect any wellness apps you also use, such as Apple Health and MyFitnessPal, resulting in a more connected experience with more information about you to draw from. Online privacy, especially in the age of AI, is a significant concern, and this announcement raises a range of questions regarding how your personal health data will be used and the safeguards that will be implemented to keep sensitive information secure -- especially with the proliferation of data breaches and data brokers. "The US doesn't have a general-purpose privacy law, and HIPAA only protects data held by certain people like health care providers and insurance companies," Andrew Crawford, senior counsel for privacy and data at the Center for Democracy and Technology, said in an emailed statement. He continued: "The recent announcement by OpenAI introducing ChatGPT Health means that a number of companies not bound by HIPAA's privacy protections will be collecting, sharing and using people's health data. And since it's up to each company to set the rules for how health data is collected, used, shared and stored, inadequate data protections and policies can put sensitive health information in real danger."
OpenAI says the new tab will have a separate chat history and a memory feature that can keep your health chat history separate from the rest of your ChatGPT usage. Further protections, such as encryption and multi-factor authentication, will defend your data and keep it secure, the company says. Health conversations won't be used to train the chatbot, according to the company. Privacy issues aside, another concern is how people intend to use ChatGPT Health. OpenAI's blog post states the service "is not intended for diagnosis or treatment." The slope is slippery here. In August 2025, a man was hospitalized after allegedly being advised by the AI chatbot to replace salt in his diet with sodium bromide. There are other examples of AI providing incorrect and potentially harmful advice to individuals, leading to hospitalization. OpenAI's announcement also doesn't touch on mental health concerns, but a blog post from October 2025 says the company is working to strengthen its responses in sensitive conversations. Whether these mental health guardrails will be enough to keep people safe remains to be seen. OpenAI didn't immediately respond to a request for comment. If you're interested in ChatGPT Health, you can join a waitlist, as the tab isn't yet live. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
[5]
Use ChatGPT for medical advice? Try its new Health mode - here's how
OpenAI announced a Health mode for ChatGPT. You can connect health apps and upload your personal files. The advice isn't intended to replace actual medical care. OpenAI wants to make sure that any medical information you get from ChatGPT is as accurate as possible. Approximately 40 million people a day rely on ChatGPT for medical questions. In response, OpenAI announced ChatGPT Health, a "dedicated experience" in ChatGPT that's centered around health and wellness. The feature will enable you to combine your medical records and wearable data with the AI's intelligence, "to ground conversations in your own health information," according to OpenAI. You can use it to help you prepare for your next appointment, plan questions to ask your doctor, receive customized diet plans or workout routines, and more. OpenAI notes that the feature is not intended to replace medical care, nor is it designed for diagnosis or treatment. Instead, the goal is for you to ask everyday questions and understand patterns related to your whole medical profile. According to OpenAI, one of the biggest challenges people face when seeking medical guidance online is that information is scattered across provider-specific portals, wearable apps, and personal notes. As a result, it can be hard to get a good overview. This leads people to turn to ChatGPT, which often only gets a partial view of the picture, too. However, now you can provide ChatGPT with data from your personal medical records and from any health-tracking apps you use. OpenAI said that you can connect a variety of sources. OpenAI acknowledged that health information is "deeply personal," so it's adding extra protections to ChatGPT Health.
The company said, "Conversations and files across ChatGPT are encrypted by default at rest and in transit as part of our core security architecture. . . . Health builds on this foundation with additional, layered protections -- including purpose-built encryption and isolation -- to keep health conversations protected and compartmentalized." The company also said that Health conversations will not be used for its foundation model training. Health is its own memory and its own space in ChatGPT, and your Health conversations, connected apps, and files won't mix with your other chats. However, some information from your regular ChatGPT sessions may surface in your Health space when applicable. To try out ChatGPT Health, you'll need to join the waitlist. OpenAI said it's providing access to a small group of early users to "continue refining the experience." As it makes improvements, the company will expand the feature to all users across the web and iOS over the next few weeks. When you have access, select Health from the sidebar menu in ChatGPT to get started.
[6]
OpenAI launches ChatGPT Health, encouraging users to connect their medical records
OpenAI has been dropping hints this week about AI's role as a "healthcare ally" -- and today, the company is announcing a product to go along with that idea: ChatGPT Health. ChatGPT Health is a sandboxed tab within ChatGPT that's designed for users to ask their health-related questions in a more secure and personalized environment, with a separate chat history and memory feature than the rest of ChatGPT. The company is encouraging users to connect their personal medical records and wellness apps, such as Apple Health, Peloton, MyFitnessPal, Weight Watchers, and Function, "to get more personalized, grounded responses to their questions." It suggests connecting medical records so that ChatGPT can analyze lab results, visit summaries, and clinical history; MyFitnessPal and Weight Watchers for food guidance; Apple Health for health and fitness data, including movement, sleep, and activity patterns; and Function for insights into lab tests. "ChatGPT can help you understand recent test results, prepare for appointments with your doctor, get advice on how to approach your diet and workout routine, or understand the tradeoffs of different insurance options based on your healthcare patterns," OpenAI writes in the blog post. On the medical records front, OpenAI says it's partnered with b.well, which will provide back-end integration for users to upload their medical records, since the company works with about 2.2 million providers. For now, ChatGPT Health requires users to sign up for a waitlist to request access, as it's starting with a beta group of early users, but the product will roll out gradually to all users regardless of subscription tier. In a blog post, OpenAI wrote that based on its "de-identified analysis of conversations," more than 230 million people around the world already ask ChatGPT questions related to health and wellness each week.
OpenAI also said that over the past two years, it's worked with more than 260 physicians to provide feedback on model outputs more than 600,000 times over 30 areas of focus, to help shape the product's responses. The company makes sure to mention in the blog post that ChatGPT Health is "not intended for diagnosis or treatment," but it can't fully control how people end up using AI when they leave the chat. By the company's own admission, in underserved rural communities, users send nearly 600,000 healthcare-related messages weekly, on average, and seven in 10 healthcare conversations in ChatGPT "happen outside of normal clinic hours." In August, physicians published a report on a case of a man being hospitalized for weeks with an 18th-century medical condition, after taking ChatGPT's alleged advice to replace salt in his diet with sodium bromide. Google's AI Overview made headlines for weeks after its launch over dangerous advice, such as putting glue on pizza, and a recent investigation by The Guardian found that dangerous health advice has continued, with false advice for liver function tests, women's cancer tests, and recommended diets for those with pancreatic cancer. One part of health that OpenAI seemed to carefully avoid mentioning in its blog post: mental health. There are a number of examples of adults or minors dying by suicide after confiding with ChatGPT, and in the blog post, OpenAI stuck to a vague mention that users can customize instructions in the Health product "to avoid mentioning sensitive topics." When asked during a Wednesday briefing with reporters whether ChatGPT Health would also summarize mental health visits and provide advice in that realm, OpenAI's CEO of Applications Fidji Simo said, "Mental health is certainly part of health in general, and we see a lot of people turning to ChatGPT for mental health conversations," adding that the new product "can handle any part of your health including mental health ... 
We are very focused on making sure that in situations of distress we respond accordingly and we direct toward health professionals," as well as loved ones or other resources. It's also possible that the product could worsen health anxiety conditions, such as hypochondria. When asked whether OpenAI had introduced any safeguards to help prevent people with such conditions from spiraling while using ChatGPT Health, Simo said, "We have done a lot of work on tuning the model to make sure that we are informative without ever being alarmist and that if there is action to be taken we direct to the healthcare system." When it comes to security concerns, OpenAI says that ChatGPT Health "operates as a separate space with enhanced privacy to protect sensitive data" and that the company introduced several layers of purpose-built encryption (but not end-to-end encryption), according to the briefing. Conversations within the Health product aren't used to train its foundation models, by default, and if a user begins a health-related conversation in regular ChatGPT, the chatbot will suggest moving it into the Health product for "additional protections," per the blog post. But OpenAI has had security breaches in the past, most notably a March 2023 issue that allowed some users to see chat titles, initial messages, names, email addresses, and payment information from other users. In the event of a court order, OpenAI would still need to provide access to the data "where required through valid legal processes or in an emergency situation," OpenAI head of health Nate Gross said during the briefing. When asked if ChatGPT Health is compliant with the Health Insurance Portability and Accountability Act (HIPAA), Gross said that "in the case of consumer products, HIPAA doesn't apply in this setting -- it applies toward clinical or professional healthcare settings."
[7]
ChatGPT Health wants access to sensitive medical records
It's for less consequential health-related matters, where being wrong won't kill customers Could a bot take the place of your doctor? According to OpenAI, which launched ChatGPT Health this week, an LLM should be available to answer your questions and even examine your health records. But it should stop short of diagnosis or treatment. "Designed in close collaboration with physicians, ChatGPT Health helps people take a more active role in understanding and managing their health and wellness - while supporting, not replacing, care from clinicians," the company said, noting that every week more than 230 million people globally prompt ChatGPT for health- and wellness-related questions. ChatGPT Health arrives in the wake of a study published by OpenAI earlier this month titled "AI as a Healthcare Ally." It casts AI as the panacea for a US healthcare system that three in five Americans say is broken. The service is currently invitation-only and there's a waitlist for those undeterred by at least nine pending lawsuits against OpenAI alleging mental health harms from conversations with ChatGPT. ChatGPT users in the European Economic Area, Switzerland, and the United Kingdom are ineligible presently, and medical record integrations, along with some apps, are US only. ChatGPT Health in the web interface takes the form of a menu entry labeled "Health" on the left-hand sidebar. It's designed to allow users to upload medical records and Apple Health data, to suggest questions to be asked of healthcare providers based on imported lab results, and to offer nutrition and exercise recommendations. A ChatGPT user might ask, OpenAI suggests, "Can you summarize my latest bloodwork before my appointment?" The AI model is expected to emit a more relevant set of tokens than it might otherwise have through the availability of personal medical data - bloodwork data in this instance. "You can upload photos and files and use search, deep research, voice mode and dictation," OpenAI explains.
"When relevant, ChatGPT can automatically reference your connected information to provide more relevant and personalized responses." OpenAI insists that it can adequately protect the sensitive health information of ChatGPT users by compartmentalizing Health "memories" - prior conversations with the AI model. The AI biz says "Conversations and files across ChatGPT are encrypted by default at rest and in transit as part of our core security architecture," and adds that Health includes "purpose-built encryption and isolation" to protect health conversations. "Conversations in Health are not used to train our foundation models," the company insists. The Register asked OpenAI whether the training exemption applies to customer health data uploaded to or shared with ChatGPT Health and whether company partners might have access to conversations or data. A spokesperson responded that by default ChatGPT Health data is not used for training and third-party apps can only access health data when a user has chosen to connect them; data is made available to ChatGPT to ground responses to the user's context. With regard to partners, we're told only the minimum amount of information is shared and partners are bound by confidentiality and security obligations. And employees, we're told, have more restricted access to product data flows based on legitimate safety and security purposes. OpenAI currently has no plans to offer ads in ChatGPT Health, a company spokesperson explained, but the biz, known for its extravagant datacenter spending, is looking at how it might integrate advertising into ChatGPT generally. As for the encryption, it can be dissolved by OpenAI if necessary, because the company and not the customer holds the private encryption keys. A federal judge recently upheld an order requiring OpenAI to turn over a 20-million-conversation sample of anonymized ChatGPT logs to news organizations including The New York Times as part of a consolidated copyright case. 
So it's plausible that ChatGPT Health conversations may be sought in future legal proceedings or demanded by government officials. While academics acknowledge that AI models can provide helpful medical decision-making support, they also raise concerns about "recurrent ethical concerns connected to fairness, bias, non-maleficence, transparency, and privacy." For example, a 2024 case study, "Delayed diagnosis of a transient ischemic attack caused by ChatGPT," describes it as "a case where an erroneous ChatGPT diagnosis, relied upon by the patient to evaluate symptoms, led to a significant treatment delay and a potentially life-threatening situation." The study, from The Central European Journal of Medicine, describes how a man went to an emergency room, concerned about double vision following treatment for atrial fibrillation. He did so on the third onset of symptoms rather than the second - as advised by his physician - because "he hoped ChatGPT would provide a less severe explanation [than stroke] to save him a trip to the ER." Also, he found the physician's explanation of his situation "partly incomprehensible" and preferred the "valuable, precise and understandable risk assessment" provided by ChatGPT. The diagnosis ultimately was transient ischemic attack, which involves symptoms similar to a stroke though it's generally less severe. The study implies that ChatGPT's tendency to be sycophantic, common among commercial AI models, makes its answers more appealing. "Although not specifically designed for medical advice, ChatGPT answered all questions to the patient's satisfaction, unlike the physician, which may be attributable to satisfaction bias, as the patient was relieved by ChatGPT's appeasing answers and did not seek further clarification," the paper says. The research concludes by suggesting that AI models will be more valuable in supporting overburdened healthcare professionals than patients. 
This may help explain why ChatGPT Health "is not intended for diagnosis or treatment." ®
[8]
OpenAI Unveils ChatGPT Health to Review Test Results, Diets
OpenAI stressed that the service is designed to supplement, not replace, the judgment of doctors, and plans to add enhanced privacy features to wall off health conversations from other parts of the app. OpenAI is introducing a new feature in ChatGPT that will allow users to analyze medical test results, prepare for doctor appointments and seek guidance on diets and workout routines -- marking the company's biggest push yet into the health care sector. ChatGPT Health, announced Wednesday, is intended to help provide useful health and fitness information but stop short of making formal diagnoses. The new feature can connect with people's electronic medical records, wearable devices and wellness apps, such as Apple Health and MyFitnessPal, the company said. Initially, OpenAI will let users sign up for a waitlist to try out the product. The company plans to expand access more widely in the coming weeks. A growing number of tech firms are targeting the lucrative health care market, betting that advances in artificial intelligence can help parse patterns in users' health data to provide individualized medical recommendations. But those moves also raise concerns about the privacy and safety risks of AI services handling more sensitive personal data and offering suggestions for higher-stakes health matters. More than 200 million people already ask ChatGPT health and wellness questions every week, according to the company. OpenAI said it has also consulted with more than 260 physicians over two years in refining its AI technology's health capabilities. Additionally, it plans to wall off health conversations from other parts of the app and add enhanced privacy features. OpenAI's team stressed the service is designed to supplement, not replace, the judgment of doctors. "Doctors don't have enough time or bandwidth.
They can't spend as much time understanding everything about you, and they don't have time to explain what's going on in a way that you can understand," said Fidji Simo, OpenAI's chief executive officer of applications, during a media briefing Wednesday. "Meanwhile, when you look at AI, it doesn't have any of these constraints."
[9]
OpenAI Launches ChatGPT Health with Isolated, Encrypted Health Data Controls
Artificial intelligence (AI) company OpenAI on Wednesday announced the launch of ChatGPT Health, a dedicated space that allows users to have conversations with the chatbot about their health. To that end, the sandboxed experience offers users the optional ability to securely connect medical records and wellness apps, including Apple Health, Function, MyFitnessPal, Weight Watchers, AllTrails, Instacart, and Peloton, to get tailored responses, lab test insights, nutrition advice, personalized meal ideas, and suggested workout classes. The new feature is rolling out for users with ChatGPT Free, Go, Plus, and Pro plans outside of the European Economic Area, Switzerland, and the U.K. "ChatGPT Health builds on the strong privacy, security, and data controls across ChatGPT with additional, layered protections designed specifically for health -- including purpose-built encryption and isolation to keep health conversations protected and compartmentalized," OpenAI said in a statement. Stating that over 230 million people globally ask health and wellness-related questions on the platform every week, OpenAI emphasized that the tool is designed to support medical care, not replace it or be used as a substitute for diagnosis or treatment. The company also highlighted the various privacy and security features built into the Health experience. Furthermore, OpenAI pointed out that it has evaluated the model that powers Health against clinical standards using HealthBench, a benchmark the company revealed in May 2025 as a way to better measure the capabilities of AI systems for health, putting safety, clarity, and escalation of care in focus. "This evaluation-driven approach helps ensure the model performs well on the tasks people actually need help with, including explaining lab results in accessible language, preparing questions for an appointment, interpreting data from wearables and wellness apps, and summarizing care instructions," it added.
OpenAI's announcement follows an investigation from The Guardian that found Google AI Overviews to be providing false and misleading health information. OpenAI and Character.AI are also facing several lawsuits claiming their tools drove people to suicide and harmful delusions after confiding in the chatbot. A report published by SFGate earlier this week detailed how a 19-year-old died of a drug overdose after trusting ChatGPT for medical advice.
[10]
OpenAI says ChatGPT won't use your health information to train its models
OpenAI has promised that it won't use your private health information and conversations to train its AI models, as it rolls out ChatGPT Health, a dedicated space for health conversations. ChatGPT Health gives you a private space for health conversations, and it's also the company's attempt to capitalize on the increasing use of ChatGPT for health. "We're introducing ChatGPT Health, a dedicated experience that securely brings your health information and ChatGPT's intelligence together, to help you feel more informed, prepared, and confident navigating your health," OpenAI announced. If you have early access to ChatGPT Health, it will appear as a new space in the sidebar on the desktop and hamburger menu on mobile. In addition, regular chat might push you to continue the conversation in the Health space if it detects the question is related to health. If you're interested in getting access as it becomes available, you can sign up for the waitlist. "We're starting by providing access to a small group of early users to learn and continue refining the experience," OpenAI noted. In our tests, BleepingComputer observed that OpenAI clearly promises ChatGPT won't use your health information to train its foundational models when you open the Health space. "By default, Health information won't be used to train our foundation models. Your health data is subject to our Health Privacy Notice. Add multi-factor authentication for even more protection," an alert within the ChatGPT Health dashboard reads. OpenAI also warns that ChatGPT can't replace your doctor, and responses shouldn't be taken as medical advice and aren't intended for diagnosis or treatment. ChatGPT Health is rolling out to everyone with ChatGPT Free, Go, Plus, and Pro, but it won't appear in the EEA, Switzerland, and the UK for now.
[11]
40 million people globally are using ChatGPT for healthcare - but is it safe?
Five percent of messages to ChatGPT globally concern healthcare. Users ask about symptoms and insurance advice, for example. Chatbots can provide dangerously inaccurate information. More than 40 million people worldwide rely on ChatGPT for daily medical advice, according to a new report from OpenAI shared exclusively with Axios. The report, based on an anonymized analysis of ChatGPT interactions and a user survey, also sheds light on some of the specific ways people are using AI to navigate the sometimes complex intricacies of healthcare. Some are prompting ChatGPT with queries regarding insurance denial appeals and possible overcharges, for example, while others are describing their symptoms, hoping to receive a diagnosis or treatment advice. It should come as no surprise that a large number of people are using ChatGPT for sensitive personal matters. The three-year-old chatbot, along with others like Google's Gemini and Microsoft's Copilot, has become a confidant and companion for many users, a guide through some of life's thornier moments. Last spring, an analysis conducted by Harvard Business Review found that psychological therapy was the most common use of generative AI. The new OpenAI report is therefore just another brick in a rising edifice of evidence showing that generative AI will be -- indeed already is -- much more than simply a search engine on steroids. What's most jarring about the report is the sheer scale at which users are turning to ChatGPT for medical advice. It also underscores some urgent questions about the safety of this type of AI use at a time when many millions of Americans are suddenly facing new and major healthcare-related challenges. (Disclosure: Ziff Davis, ZDNET's parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.) 
According to Axios, the OpenAI report found that more than 5% of all messages sent to ChatGPT globally are related to healthcare. As of July of last year, the chatbot reportedly processed around 2.5 billion prompts per day -- that means it's responding to at least 125 million healthcare-related questions every day (and likely more than that now, since its user base is still growing). Many of those conversations -- around 70%, according to Axios -- are happening outside the normal working hours of medical clinics, underscoring a key benefit of this kind of AI use: unlike human doctors, it's always available. Some people have also leveraged chatbots to help spot billing errors and cases in which exorbitantly high medical costs can be disputed. The widespread embrace of ChatGPT as an automated medical expert is coinciding with what, for many Americans, has been a stressful start to the year due to a sudden spike in the cost of healthcare coverage. With the expiration of pandemic-era Affordable Care Act tax subsidies, over 20 million ACA enrollees have reportedly had their monthly premiums increase by an average of 114%. It's likely that some of those people, especially younger, healthier, and more cash-strapped Americans, will opt to forgo health insurance entirely, perhaps turning instead to chatbots like ChatGPT for medical advice. AI might always be available to chat, but it's also prone to hallucination -- fabricating information that's delivered with the confidence of fact -- and therefore no substitute for an actual, flesh-and-blood medical expert. One study conducted by a cohort of physicians and posted to the preprint server site arXiv in July, for example, found that some industry-leading chatbots frequently responded to medical questions with dangerously inaccurate information. 
The rate at which this kind of response was generated by OpenAI's GPT-4o and Meta's Llama was especially high: 13% in each case. "This study suggests that millions of patients could be receiving unsafe medical advice from publicly available chatbots, and further work is needed to improve the clinical safety of these powerful tools," the authors of the July paper noted. OpenAI is currently working to improve its models' abilities to safely respond to health-related queries, according to Axios. For the time being, generative AI should be approached like WebMD: It's often useful for answering basic questions about medical conditions or the complexities of the healthcare system, but it shouldn't be treated as a definitive source for, say, diagnosing a chronic ailment or seeking advice for treating a serious injury. And given its propensity to hallucinate, it's best to take AI's responses with an even bigger grain of salt than information gleaned from a quick Google search -- especially when it comes to more sensitive personal questions.
[12]
ChatGPT is launching a new dedicated Health portal
OpenAI is launching a new facet for its AI chatbot called ChatGPT Health. This new feature will allow users to connect medical records and wellness apps to ChatGPT in order to get more tailored responses to queries about their health. The company noted that there will be additional privacy safeguards for this separate space within ChatGPT, and said that it will not use conversations held in Health for training foundational models. ChatGPT Health is currently in a testing stage, and there are some regional restrictions on which health apps can be connected to the AI company's platform. The announcement from OpenAI acknowledges that this new development "is not intended for diagnosis or treatment," but it's worth repeating. No part of ChatGPT, or any other artificial intelligence chatbot, is qualified to provide any kind of medical advice. Not only are these platforms capable of making dangerously incorrect statements, but feeding such personal and private information into a chatbot is generally not a recommended practice. It seems especially unwise to share with a company that only bothered paying even cursory lip service to the psychological impacts of its product after at least one teenager used the chatbot to plan suicide.
[13]
OpenAI launches ChatGPT Health to review your medical records
OpenAI has launched a new ChatGPT feature in the US which can analyse people's medical records to give them better answers, but campaigners warn it raises privacy concerns. The firm wants people to share their medical records along with data from apps like MyFitnessPal, which will be analysed to give personalised advice. OpenAI said conversations in ChatGPT Health would be stored separately to other chats and would not be used to train its AI tools - as well as clarifying it was not intended to be used for "diagnosis or treatment". Andrew Crawford, of US non-profit the Center for Democracy and Technology, said it was "crucial" to maintain "airtight" safeguards around users' health information.
[14]
OpenAI sees big opportunity in US health queries
One man's failing healthcare system is another man's opportunity
About sixty percent of American adults have turned to AI like ChatGPT for health or healthcare in the past three months. Instead of seeing that as an indictment of the state of US healthcare, OpenAI sees an opportunity to shape policy. A study published by OpenAI on Monday claims more than 40 million people worldwide ask ChatGPT healthcare-related questions each day, accounting for more than five percent of all messages the chatbot receives. About a quarter of ChatGPT's regular users submit healthcare-related prompts each week, and OpenAI thinks it knows why so many of those users are in the United States. "In the United States, the healthcare system is a long-standing and worsening pain point for many," OpenAI surmised in its study. Studies and first-hand accounts from medical professionals bear that out. Results of a Gallup poll published in December found that a mere 16 percent of US adults were satisfied with the cost of US healthcare, and only 24 percent of Americans have a positive view of their healthcare coverage. It's not hard to see why. Healthcare spending has skyrocketed in recent years, and with Republican elected officials refusing to extend Affordable Care Act subsidies, US households are due to see another spike in insurance costs in 2026. Based on Gallup's findings, it seems that American insureds, who pay the highest per capita healthcare costs in the world, don't think they're getting their money's worth. According to OpenAI, more Americans are turning to its AI to close healthcare gaps, and the company doesn't seem at all troubled by that. "For both patients and providers in the US, ChatGPT has become an important ally, helping people navigate the healthcare system, enabling them to self-advocate, and supporting both patients and providers for better health outcomes," OpenAI said in its study. 
According to the report, which used a combination of a survey of ChatGPT users and anonymized message data, nearly 2 million messages a week come from people trying to navigate America's labyrinthine health insurance ecosystem, but they're still not the majority of US AI healthcare answer seekers. Fifty-five percent of US adults who used AI to help manage their health or healthcare in the past three months said they were trying to understand symptoms, and seven in ten healthcare conversations in ChatGPT happened outside normal clinic hours. Individuals in "hospital deserts," classified in the report as areas where people are more than a 30-minute drive from a general medical or children's hospital, were also frequent users of ChatGPT for healthcare-related questions. In other words, when clinic doors are closed or care is hard to reach, care-deprived Americans are turning to an AI for potentially urgent healthcare questions instead. As The Guardian reported last week, relying on AI for healthcare information can lead to devastating outcomes. The Guardian's investigation of healthcare-related questions put to Google AI Overviews found that inaccurate answers were frequent, with Google AI giving incorrect information about the proper diet for cancer patients, liver function tests, and women's healthcare. OpenAI rebuffed the idea that it could be providing bad information to Americans seeking healthcare information in an email to The Register. A spokesperson told us that OpenAI has a team dedicated solely to handling accurate healthcare information, and that it works with clinicians and healthcare professionals to safety-test its models, suss out where risks might be found, and improve health-related results. OpenAI also told us that GPT-5 models have scored higher than previous iterations on the company's homemade healthcare benchmarking system. 
It further claims that GPT-5 has greatly reduced all of its major failure modes (i.e., hallucinations, errors in urgent situations, and failures to account for global healthcare contexts). None of those data points actually get to the point of how often ChatGPT could be wrong in critical healthcare situations, however. What does that matter to OpenAI, though, when there's potentially heaps of money to be made on expanding in the medical industry? The report seems to conclude that its increasingly large role in the US healthcare industry, again, isn't an indictment of a failing system as much as it is the inevitable march of technological progress, and included several "policy concepts" that it said are a preview of a full AI-in-healthcare policy blueprint it intends to publish in the near future. Leading the recommendations, naturally, is a call for opening and securely connecting publicly funded medical data so OpenAI's AI can "learn from decades of research at once." OpenAI is also calling for new infrastructure to be built out that incorporates AI into medical wet labs, support for helping healthcare professionals transition into being directly supported by AI, new frameworks from the US Food and Drug Administration to open a path to consumer AI medical devices, and clarified medical device regulation to "encourage ... AI services that support doctors." ®
[15]
ChatGPT Health has arrived
OpenAI's newest feature ties users' health data to AI insights I've said this many times: the products we see on the market are rarely visionary leaps. Most of the time, they are mirrors. They reflect people's habits, shortcuts, fears, and small daily behaviours. Design follows behaviour. Always has. Think about it. You probably know at least one person who already uses ChatGPT for health-related questions. Not occasionally. Regularly. As a second opinion. As a place to test concerns before saying them out loud. Sometimes even as a therapist, a confidant, or a space where embarrassment does not exist. When habits become consistent, companies stop observing and start building. At that point, users are no longer just customers. They are co-architects. Their behaviour quietly shapes the product roadmap. That context matters when looking at what OpenAI announced on January 7, 2026: the launch of ChatGPT Health, a dedicated AI-powered experience focused on healthcare. OpenAI describes it as "a dedicated experience that securely brings your health information and ChatGPT's intelligence together." Access is limited for now. The company is rolling it out through a waitlist, with gradual expansion planned over the coming weeks. Anyone with a ChatGPT account, Free or paid, can sign up for early access, except users in the European Union, the UK, and Switzerland, where regulatory alignment is still pending. The stated goal is simple on paper: help people feel more informed, more prepared, and more confident when managing their health. But the numbers behind that decision say more than the press language ever could. According to OpenAI, over 230 million people worldwide already ask health or wellness questions on ChatGPT every week. That figure raises uncomfortable questions. Why do so many people turn to an AI for health questions? Is it speed? The promise of an immediate answer? 
The way our expectations have shifted toward instant clarity, even for complex or sensitive issues? Is it discomfort, the reluctance to speak openly with a doctor about certain topics? Or something deeper, a quiet erosion of trust in human systems, paired with a growing confidence in machines that do not judge, interrupt, or rush? ChatGPT Health does not answer those questions. It simply formalises the behaviour. According to OpenAI, the new Health space allows users to securely connect personal health data, including medical records, lab results, and information from fitness or wellness apps. Someone might upload recent blood test results and ask for a summary. Another might connect a step counter and ask how their activity compares to previous months. The system can integrate data from platforms such as Apple Health, MyFitnessPal, Peloton, AllTrails, and even grocery data from Instacart. The promise is contextual responses, not generic advice. Lab results explained in plain language. Patterns highlighted over time. Suggestions for questions to ask during a medical appointment. Diet or exercise ideas grounded in what the data actually shows. What ChatGPT Health is careful not to do matters just as much as what it can do. OpenAI is explicit: this is not medical advice. It does not diagnose conditions. It does not prescribe treatments. It is designed to support care, not replace it. The framing is intentional. ChatGPT Health positions itself as an assistant, not an authority. A tool for understanding patterns and preparing conversations, not making decisions in isolation. That distinction is crucial. And, frankly, it is one I hope users take seriously. Behind the scenes, OpenAI says the system was developed with extensive medical oversight. Over the past two years, more than 260 physicians from roughly 60 countries reviewed responses and provided feedback, contributing over 600,000 individual evaluation points. 
The focus was not only accuracy, but tone, clarity, and when the system should clearly encourage users to seek professional care. To support that process, OpenAI built an internal evaluation framework called HealthBench. It scores responses based on clinician-defined standards, including safety, medical correctness, and appropriateness of guidance. It is an attempt to bring structure to a domain where nuance matters and mistakes carry weight. Privacy is another pillar OpenAI insists on emphasising. ChatGPT Health operates in a separate, isolated environment within the app. Health-related conversations, connected data sources, and uploaded files are kept entirely separate from regular chats. Health data does not flow into the general ChatGPT memory, and conversations in this space are encrypted. OpenAI also states that health conversations are not used to train its core models. Whether that assurance will be enough for all users remains to be seen, but the architectural separation signals an awareness of how sensitive this territory is. In the United States, the system goes a step further. Through a partnership with b.well Connected Health, ChatGPT Health can access real electronic health records from thousands of providers, with user permission. This allows it to summarise official lab reports or condense long medical histories into readable overviews. Outside the US, functionality is more limited, largely due to regulatory differences. There are potential downstream effects for healthcare providers as well. If patients arrive at appointments already familiar with their data, already aware of trends, already prepared with focused questions, consultations could become more efficient. Less time decoding numbers. More time discussing decisions. And inevitably, the question surfaces: should doctors be worried? No. Not seriously. AI is not a medical professional. It does not examine patients. It does not carry legal responsibility. 
It cannot replace clinical judgement, experience, or accountability. ChatGPT Health does not change that. It is not a shortcut to self-diagnosis, and it should not be treated as one. What it does change is the starting point of the conversation. Used responsibly, ChatGPT Health can help people engage with their own health information instead of avoiding it. Misused, it could reinforce false certainty or delay necessary care. The responsibility, as always, is shared between the tool and the person using it.
[16]
OpenAI Launches ChatGPT Health, Wants Access to Your Medical Records
ChatGPT users who have been utilizing the chatbot for (often dubious) health advice will now have a chatbot specialized just for that. On Wednesday, OpenAI launched ChatGPT Health, a health-specific segment of the popular AI chatbot with the ability to connect to medical records, wellness apps, and wearables. "ChatGPT can help you understand recent test results, prepare for appointments with your doctor, get advice on how to approach your diet and workout routine, or understand the tradeoffs of different insurance options based on your healthcare patterns," OpenAI said in the announcement. Users can connect apps like Apple Health to share sleep and activity patterns, MyFitnessPal to receive nutrition advice, AllTrails for hiking ideas, Peloton to get workout suggestions, and even Instacart so that ChatGPT can make you a shoppable list based on what diet it thinks you should follow. OpenAI says it has been working on ChatGPT Health for over two years alongside more than 260 physicians from 60 countries. ChatGPT Health has not yet fully launched. For now, the company is providing access to only a small group of early users for any final refinements. There is a link to a waitlist to sign up for it, although it doesn't seem to work at the time of writing. The medical record integration function is only available in the U.S., but the rest is available globally, except for users in the European Union, Switzerland, and the United Kingdom, all of which have very strict digital privacy laws in place. ChatGPT has been at the center of privacy concerns after a poor design feature made some user queries public and searchable by search engines. But the company insists that the new offering is safe and protected through purpose-built encryption and isolation, and it has made these privacy guardrails Health's main differentiator from the regular ChatGPT. 
The Health part of ChatGPT is supposed to have "separate memories," so that the information stays confined to that chat, although the Health chats will be able to access information about you gathered from non-Health chats. Conversations in Health will also not be used to train foundation models. OpenAI has been slowly upping its investments in the healthcare arena for some time now. In May 2025, OpenAI unveiled HealthBench, a new benchmark to evaluate AI systems' capabilities, which was later used to evaluate ChatGPT Health against clinical standards. Then, over the summer, the AI giant made a few high-profile hires to its healthcare AI team, including the healthcare business networking tool Doximity's co-founder, Nate Gross, to lead the co-creation of new healthcare tech with clinicians and researchers. Around the same time, OpenAI also announced a partnership with Kenya-based primary care provider Penda Health, highlighted its latest model GPT-5's ability to "proactively flag" potential health concerns and create treatment plans when announcing the model, and was named as a partner in a Trump-led private sector initiative to use AI assistants in patient care and allow the sharing of medical records across apps and programs from 60 companies. It was also over the summer that OpenAI hired its new CEO of applications, Fidji Simo, who has pinpointed healthcare as the AI use case she is most excited about and has since called the launch of Health "really personal" to her. OpenAI's big healthcare AI bet is indicative of a growing industry-wide acceptance of AI, despite some concerning cases. The winds of regulation seem to be blowing in healthcare AI's favor, from Utah okaying AI-prescribed medication renewals to the FDA saying it will regulate wellness software and wearables with a light touch as long as the companies don't claim their product is "medical grade." 
"We want to let companies know, with very clear guidance, that if their device or software is simply providing information, they can do that without FDA regulation," FDA Commissioner Marty Makary told Fox Business on Tuesday. But even mere health and wellness suggestions can have disastrous consequences for users when they turn out to be wrong. ChatGPT has taken considerable heat for that over the past year, especially due to the numerous fatal mental health episodes it has been accused of triggering in the absence of adequate safety controls. OpenAI has been trying to solidify its product's place in healthcare alongside the steadily growing investment. Earlier this week, the company published a report claiming that more than 40 million ChatGPT users ask for health advice every single day, and paired the findings with sample policy concepts like asking for full access to the world's medical data and requesting a clearer regulatory pathway for consumer-focused health AI. The company also said that it is preparing to release a more comprehensive health AI policy blueprint in the coming months.
[17]
OpenAI launches ChatGPT Health -- bringing medical records and wellness data into ChatGPT
A dedicated health experience designed to make medical data easier to understand -- not diagnose OpenAI officially entered the connected health space today with the launch of ChatGPT Health. This dedicated space promises to be a secure environment and allows users to move beyond generic wellness questions and ground conversations in their own real-world data. ChatGPT Health is a separate space within the sidebar designed for high-stakes wellness management. OpenAI reported on the ChatGPT Health blog that over 230 million people ask health questions on the platform each week. This update formalizes that experience by allowing the AI to "read" your personal context. Instead of relying on generic advice, ChatGPT Health allows you to ground conversations in your actual medical history. By bringing your personal data into the chat, the AI moves away from "thin air" responses to insights based on your real-world health picture. Through a strategic partnership with b.well, U.S. users can now securely link their electronic medical records directly to the AI, creating a designated, private space for those conversations. Crucially, OpenAI emphasizes that ChatGPT Health is not a diagnostic tool. It is designed to make you feel more informed and confident before you speak with a clinician, ensuring that a human professional always makes the final medical decisions. OpenAI developed the tool over two years in collaboration with more than 260 physicians worldwide. To address the sensitive nature of this data, the company has implemented several strict guardrails. ChatGPT Health is rolling out initially to a small group of early users with ChatGPT Free, Go, Plus and Pro plans. Only users outside the European Economic Area, Switzerland and the United Kingdom are eligible at launch, and broader access to web and iOS users is expected in the "coming weeks." Some data integrations -- like Apple Health -- are currently limited to U.S. users or require iOS devices. 
For OpenAI, it's also a strategic shift. By creating a dedicated health experience instead of letting medical questions live alongside casual chats, the company is acknowledging both the demand -- and the responsibility -- that comes with being a go-to source for health information. Whether users ultimately trust AI with something as personal as their medical history remains to be seen. But with ChatGPT Health, OpenAI is making its strongest case yet that AI assistants can play a meaningful role in how people navigate their health -- without trying to replace the doctor.
[18]
More Than 40 Million People Use ChatGPT Daily for Healthcare Advice, OpenAI Claims
ChatGPT users around the world send billions of messages every week asking the chatbot for healthcare advice, OpenAI shared in a new report on Monday. Roughly 200 million of ChatGPT's more than 800 million regular users submit a prompt about healthcare every single week, and more than 40 million do so every single day. According to anonymized ChatGPT user data, more than half of users ask ChatGPT to check or explore symptoms, while others use it to decode medical jargon or get more information about treatment options. Nearly 2 million of these weekly messages also focused on health insurance, asking ChatGPT to help compare plans or handle claims and billing. The numbers are somewhat reflective of the troubled state of the American healthcare system, especially as patients struggle to pay exorbitant medical bills. In its own research, OpenAI found that three in five Americans viewed the current system as broken, with the most major pain point being hospital costs. The study found that seven in 10 healthcare-related conversations happen outside of normal clinic hours. On top of that, an average of more than 580,000 healthcare inquiries were sent in "hospital deserts," aka places in the United States that are more than a 30-minute drive from a general medical or children's hospital. The report also showed increasing AI adoption among healthcare professionals. Citing data from the American Medical Association, OpenAI said that 46% of American nurses reported using AI weekly. The report comes as OpenAI increases its bet on healthcare AI, despite the concerns about accuracy and privacy that come with the technology's deployment. The company's CEO of applications, Fidji Simo, said she is "most excited for the breakthroughs that AI will generate in healthcare," in a press release announcing her new role in July 2025. OpenAI isn't alone in its big healthcare bet, either. 
Big tech giants from Google to Palantir have been working on product offerings in the healthcare AI space for years. Many people think health AI is a promising field with a lot of potential to ease the burden on medical workers. But it's also contentious, because AI is prone to mistakes. While a hallucinated response can be an annoying hurdle in many other areas of use, in healthcare, it could have the potential to be a life-or-death matter. These AI-driven risks are not confined to the world of hypotheticals. According to a report from August 2025, a 60-year-old with no past psychiatric or medical history was hospitalized due to bromide poisoning after following ChatGPT's suggestion that sodium bromide could substitute for table salt. As the tech stands today, no one should use a chatbot to self-diagnose or treat a medical condition, full stop. As investment in the technology builds up, so do policy conversations. There is no comprehensive federal framework on AI, much less healthcare AI, but the Trump administration has made it clear that it intends to change that. In July, OpenAI CEO Sam Altman was one of many tech executives in attendance at the White House's "Make Health Tech Great Again" event, where Trump announced a private sector initiative to use AI assistants for patient care and share the medical records of Americans across apps and programs from 60 companies. The FDA is also looking to revamp how it regulates AI deployment in health. The agency published a request for public comment in September 2025, seeking feedback from the medical sector on health AI deployment and evaluations. OpenAI's latest report seems to be its own attempt at putting a comment on the public record. The company pairs its findings with sample policy concepts, like asking for full access to the world's medical data and a clearer regulatory pathway to make AI-infused medical devices for consumer use. 
"We urge FDA to move forward and work with industry towards a clear and workable regulatory policy that will facilitate innovation of safe and effective AI medical devices," the company said in the report. In the next few months, OpenAI is preparing to release a full policy blueprint for how it wants healthcare AI to be regulated, the company added in the report.
[19]
ChatGPT Health sounds promising -- but I don't trust it with my medical data just yet
Convenience isn't worth giving up control over my most sensitive data

OpenAI promised new ways to support your medical care with AI when launching ChatGPT Health this month. The AI chatbot's newest feature focuses on health in a dedicated virtual clinic capable of processing your electronic medical records (EMRs) and data from fitness and health apps for personalized responses about lab results, diet, exercise, and preparation for doctor visits. With more than 230 million people asking health and wellness questions every week, ChatGPT Health was inevitable. But despite OpenAI's hyping of ChatGPT Health's value and its assurance of extra security and privacy, I won't be signing up any time soon. The central issue is trust. As impressed as I've been with ChatGPT and most of its features, I remain skeptical that any tool capable of casually hallucinating nonsense should be relied upon for anything but the most basic of health questions. And even if I do trust ChatGPT as a personal health advisor, I don't want to give it my actual medical data. I've already shared more than a few details of my life with ChatGPT while testing its abilities and occasionally felt uneasy about doing so. I feel far less sanguine about adding my EMRs to the list. I instinctively recoil from the idea of giving a company with a profit motive and an imperfect history of data security access to my most sensitive health information. I can see why ChatGPT Health might entice people, even with all its caveats about what it can and can't do for you. Healthcare systems are complicated, often overstretched, and can be financially draining. ChatGPT Health can provide clearer explanations of medical jargon, highlight things worth discussing with a doctor or nurse before an appointment, and immediately parse confusing test results. It promises insights that were previously available only through professional channels.
It's an easy sell, especially if you lack ready access to regular medical care. But valuable, personal healthcare information and AI's track record for hallucination make for an unhealthy brew. Misleading or fictional answers are bad enough even without the need for medical nuance. It's not that I'm against asking ChatGPT questions about my health or fitness, but that's a far cry from what ChatGPT Health suggests sharing. "Exploring an isolated health question is fundamentally different from giving a platform access to your full medical record and lifetime health history," explained Alexander Tsiaras, founder and CEO of medical data platform StoryMD. "Trust will be the central challenge, not just for individuals but for the healthcare system as a whole. That trust depends on transparency: how data is ingested, how hallucinations are prevented, how longitudinal clinical evidence is incorporated over time, and whether the platform operates within established regulatory frameworks, including HIPAA." OpenAI is not a healthcare provider. HIPAA (the Health Insurance Portability and Accountability Act) and its strict legal obligations and safeguards for your medical information don't apply. So, while OpenAI might sincerely promise to treat any health data you freely upload or connect to ChatGPT Health with matching care and security, it's unclear whether it has the same legal obligation to do so. And when data is outside HIPAA's reach, you have to trust a company's own policies and intentions without the backing of HIPAA's enforceable standards. Some people might argue that OpenAI's own privacy commitments are sufficient (lawyers at OpenAI, for instance). Most people would agree that intimate medical information should be locked down under the tightest legal protections possible. But ChatGPT Health users will be in a precarious position, especially because policy language and terms of service can change with little notice and limited recourse.
Isolating health conversations and letting users delete their data are good ideas, but once your data is out of the traditional healthcare system, you have to consider a whole different set of vulnerabilities. Headlines underscoring how difficult it is for major tech platforms to guarantee airtight data protection against leaks and hacks are far too frequent. And there's more risk to your data, even from legitimate agencies. Private health data could be subject to subpoenas, legal discovery, or other forms of compelled disclosure. In some cases, companies can be forced to hand over private records to satisfy court orders or government requests. HIPAA has much stronger defenses against such legal pressures than the standard consumer tech privacy laws under which ChatGPT operates. Affordable, accessible healthcare is a cause worth pursuing. But privacy, trust, and meaningful human oversight cannot be casualties in that pursuit. People are already concerned about how their data is collected and used by AI platforms. Centralizing even more sensitive information with a single commercial entity won't reduce those concerns. There's a bigger conversation to be had about incorporating AI into systems that human beings depend on. AI can be a huge boon for providing healthcare. But that boon can't be predicated on surrendering personal data to uncertain security and safety. "These are foundational requirements," Tsiaras said. "What people actually need is a clinically sound, longitudinal medical record that supports meaningful patient and patient-provider interaction, not another layer of noise."
[20]
OpenAI's new ChatGPT health feature draws mixed reactions
Why it matters: Health has become a go-to topic for chatbot queries, with more than 40 million people consulting ChatGPT daily for health advice and hundreds of millions doing so each week. Catch up quick: OpenAI is adding a new health tab within ChatGPT that includes the ability to upload electronic medical records and connect with Apple Health, MyFitnessPal and other fitness apps. * The new features formalize how many people already use the chatbot: uploading test results, asking questions about symptoms and trying to navigate the complex healthcare ecosystem. * OpenAI says it will keep the new health information separate from other types of chats and not train its models on this data. Yes, but: Health information shared with ChatGPT doesn't have the same protections as medical data shared with a health provider, and even those protections vary by country. * "The U.S. doesn't have a general-purpose privacy law, and HIPAA only protects data held by certain people like healthcare providers and insurance companies," Andrew Crawford, senior counsel for privacy and data at the Center for Democracy & Technology, told Axios in a statement. * "And since it's up to each company to set the rules for how health data is collected, used, shared, and stored, inadequate data protections and policies can put sensitive health information in real danger," he says. * OpenAI is starting with a small group of early testers, notably excluding those in the European Economic Area, Switzerland, and the United Kingdom, where local regulations require additional compliance measures. The big picture: Many AI enthusiasts on social media welcomed the tools, pointing to how ChatGPT already helps them. * Yana Welinder, head of AI for Amplitude, has been using ChatGPT "constantly" for health queries in recent months, both for herself and family members. "The only downside was that all of this lived alongside my very heavy other usage of ChatGPT," she wrote on X.
"Projects helped a bit, but I really wanted a dedicated space ...So excited about this." Chatbots aren't a replacement for doctors, OpenAI says, even as it highlights what the chatbot can provide. * "It's great at synthesizing large amounts of information," Fidji Simo, CEO of Applications at OpenAI, said Wednesday on a call with reporters. "It has infinite time to research and explain things. It can put every question in the context of your entire medical history." The other side: AI skeptics question giving medical information to a chatbot that has shown a propensity to reinforce delusions and even encourage suicide. * "What could go wrong when an LLM trained to confirm, support, and encourage user bias meets a hypochondriac with a headache?" Aidan Moher wrote on BlueSky. * Anil Dash, advocate for more humane technology, agreed on BlueSky that it "isn't a good idea," but also wrote that "it's vastly more understandable than most medical jargon, far more accessible than 99% of people's healthcare that they can afford, and very often pretty accurate in broad strokes, especially compared to WebMD or Reddit." Between the lines: Like other information shared with ChatGPT, health information could potentially be made available to litigants or government agencies via a subpoena or other court order. * That seems particularly noteworthy at a time when access to reproductive health care and gender-affirming care is under threat at both the state and federal levels. * User data could get swept up in other ways too. As part of their copyright battle against OpenAI, news organizations have obtained access to millions of ChatGPT logs, including from temporary chats that were meant to be deleted after 30 days. * Sam Altman has called for some sort of legal privilege to protect sensitive health and legal information. What to watch: OpenAI said it has more health features on its road map and will talk soon about additional work with various health care systems.
[21]
Is Giving ChatGPT Health Your Medical Records a Good Idea?
OpenAI, which has a licensing and technology agreement that allows the company to access TIME's archives, notes that Health is designed to support health care -- not replace it -- and is not intended to be used for diagnosis or treatment. The company says it spent two years working with more than 260 physicians across dozens of specialties to shape what the tool can do, as well as how it responds to users. That includes how urgently it encourages people to follow up with their provider, how it communicates clearly without oversimplifying, and how it prioritizes safety when people are in mental distress. OpenAI partnered with b.well, a data connectivity infrastructure company, to allow users to securely connect their medical records to the tool. The Health tab will have "enhanced privacy," including a chat history and memory feature separate from other tabs, according to the announcement. OpenAI also said that "conversations in Health are not used to train our foundation models," and Health information won't flow into non-Health chats. Plus, users can "view or delete Health memories at any time." Still, some experts urge caution. "The most conservative approach is to assume that any information you upload into these tools, or any information that may be in applications you otherwise link to the tools, will no longer be private," Bitterman says. No federal regulatory body governs the health information provided to AI chatbots, and ChatGPT provides technology services that are not within the scope of HIPAA. "It's a contractual agreement between the individual and OpenAI at that point," says Bradley Malin, a professor of biomedical informatics at Vanderbilt University Medical Center. "If you are providing data directly to a technology company that is not providing any health care services, then it is buyer beware."
In the event that there was a data breach, ChatGPT users would have no specific rights under HIPAA, he adds, though it's possible the Federal Trade Commission could step in on your behalf, or that you could sue the company directly. As medical information and AI start to intersect, the implications so far are murky.
[22]
OpenAI Launches ChatGPT Health, Which Ingests Your Entire Medical Records, But Warns Not to Use It for "Diagnosis or Treatment"
AI chatbots may be explosively popular, but they're known to dispense some seriously wacky -- and potentially dangerous -- health advice, in a flood of easily accessible misinformation that has alarmed experts. Their advent has turned countless users into armchair experts, who often end up relying on obsolete, misattributed, or completely made-up advice. A recent investigation by The Guardian, for instance, found that Google's AI Overviews, which accompany most search results pages, doled out plenty of inaccurate health information that could lead to grave health risks if followed. But seemingly unperturbed by experts' repeated warnings that AI's health advice shouldn't be trusted, OpenAI is doubling down by launching a new feature called ChatGPT Health, which will ingest your medical records to generate responses "more relevant and useful to you." Yet despite being "designed in close collaboration with physicians" and built on "strong privacy, security, and data controls," the feature is "designed to support, not replace, medical care." In fact, it's shipping with a ludicrously self-defeating caveat: that the bespoke health feature is "not intended for diagnosis or treatment." "ChatGPT Health helps people take a more active role in understanding and managing their health and wellness -- while supporting, not replacing, care from clinicians," the company's website reads. In reality, users are certain to use it for exactly the type of health advice that OpenAI warns against in the fine print, which is likely to bring fresh embarrassment for the company and heighten its existing problems. As Business Insider reports, ChatGPT is "making amateur lawyers and doctors out of everyone," to the dismay of legal and medical professionals. Miami-based medical malpractice attorney Jonathan Freidin told the publication that people will use chatbots like ChatGPT to fill out his firm's client contact sheet.
"We're seeing a lot more callers who feel like they have a case because ChatGPT or Gemini told them that the doctors or nurses fell below the standard of care in multiple different ways," he said. "While that may be true, it doesn't necessarily translate into a viable case." Then there's the fact that users are willing to surrender medical histories, including highly sensitive and personal information -- a decision that OpenAI is now encouraging with ChatGPT Health -- despite federal law, like HIPAA, not applying to consumer AI products. Case in point: billionaire Elon Musk encouraged people last year to upload their medical data to his ChatGPT competitor Grok, leading to a flood of confusion as users received hallucinated diagnoses after sharing their X-rays and PET scans. Given the AI industry's track record on privacy protection and its struggles with significant data leaks, these risks are as pertinent as ever. "New AI health tools offer the promise of empowering patients and promoting better health outcomes, but health data is some of the most sensitive information people can share and it must be protected," Center for Democracy and Technology senior counsel Andrew Crawford told the BBC. "Especially as OpenAI moves to explore advertising as a business model, it's crucial that separation between this sort of health data and memories that ChatGPT captures from other conversations is airtight," he added. "Since it's up to each company to set the rules for how health data is collected, used, shared, and stored, inadequate data protections and policies can put sensitive health information in real danger." "ChatGPT is only bound by its own disclosures and promises, so without any meaningful limitation on that, like regulation or a law, ChatGPT can change the terms of its service at any time," Electronic Privacy Information Center senior counsel Sara Geoghegan told The Record.
Then there are concerns over highly sensitive data, like reproductive health information, being passed on to the police against the user's wishes. "How does OpenAI handle [law enforcement] requests?" Crawford told The Record. "Do they just turn over the information? Is the user in any way informed?" "There's lots of questions there that I still don't have great answers to," he added.
[23]
OpenAI launches ChatGPT Health in a push to become a hub for personal health data | Fortune
Now OpenAI is leaning into this trend as it seeks to build out more products to keep users engaged and establish itself as the new digital interface that will mediate interactions between different industry sectors -- from e-commerce to finance to, yes, wellness and healthcare -- and their customers. Today OpenAI announced the debut of ChatGPT Health -- a dedicated experience inside ChatGPT where it says users can securely connect medical records and wellness apps such as Apple Health, Function, and MyFitnessPal to further personalize conversations. OpenAI said it would not train its models on personal medical data. During a press preview, Fidji Simo, OpenAI's CEO of applications, introduced ChatGPT Health by sharing a personal story about how ChatGPT helped her after she was hospitalized for a kidney stone last year and developed an infection. A resident had prescribed a standard antibiotic, but she checked it against her medical history in ChatGPT, which flagged that the medication could reactivate a serious life-threatening infection she had suffered years before. "The resident was relieved I spoke up, she told me she only has a few minutes per patient during rounds, and that health records aren't organized in a way that makes it easy to see," she said. "I've heard many stories like this from people who are using AI to help connect the dots in their healthcare system that really wasn't built to see the full picture." Five months ago, OpenAI signalled its push into healthcare with two high-profile hires: Nate Gross, the cofounder and former chief strategy officer of Doximity, a leading digital platform for medical professionals, leads OpenAI's healthcare strategy. Ashley Alexander, former co-head of product at Instagram, leads healthcare product. But Karan Singhal, who leads health AI at OpenAI, said during the press preview that the company had been laying the groundwork for ChatGPT Health for about two years. 
According to an OpenAI blog post, the company analyzed deidentified ChatGPT conversations and found that more than 230 million people globally ask health- and wellness-related questions on ChatGPT every week. Even with that massive user base, however, the company doesn't have an easy road ahead: it is joining a fast-moving race among Big Tech and startups to become the AI front door for consumer healthcare. For example, for ChatGPT Health OpenAI has partnered with b.well, a health management platform that combines patients' health records, financial information, and wearable and other healthcare data. But Google also announced a partnership with b.well in October 2025 -- potentially setting the stage for future AI-driven consumer health tools, though the internet giant has not yet announced a health-specific feature set for its AI chatbot product Gemini. ChatGPT Health will not be widely available immediately. A waitlist is open for a small group of early users, but the company said it will make Health available to all users on web and iOS in the coming weeks. Electronic Health Records (EHR) integrations and some apps are available in the U.S. only. OpenAI does not describe ChatGPT Health as HIPAA compliant (consumer health apps are not covered by HIPAA) but says it adds layered protections for sensitive health data and excludes health conversations from model training by default. It also says users can enable multi-factor authentication (MFA) to add an extra layer of protection to help prevent unauthorized access, and that access to medical records can be removed at any time in the "Apps" section of Settings. ChatGPT Health also fits a broader pattern at OpenAI: building vertical-specific experiences on top of its core models, rather than relying solely on general-purpose chat. In education, the company launched Study Mode, a ChatGPT spinoff for students, in July to compete with Google's Gemini for Education.
It has also rolled out agentic shopping and shopping research features. There are also reports that a finance-specific experience is in the works. OpenAI also clarified that ChatGPT Health was not part of the company's eight-week-long "code red," announced in early December by Sam Altman in an internal memo. "[ChatGPT Health is] actually sitting outside of Code Red," said Simo. "We have been working on health for a very long time...we know this is a core use case of ChatGPT, we've known that we wanted to make that use case even better and serve even more of people's needs with the changes that we've made."
[24]
ChatGPT Health wants to be your medical AI assistant, but don't expect a diagnosis
With tens of millions already turning to ChatGPT for health advice, OpenAI is formalizing the experience with ChatGPT Health, a more careful, curated, and medically grounded version of its AI assistant. After dropping hints earlier this week, OpenAI has officially launched ChatGPT Health, which is designed to handle health, illness, and treatment-related questions with the utmost care and responsibility. The company wants you to use this ChatGPT extension as your personal AI-powered health assistant. According to OpenAI, more than 40 million people use ChatGPT daily to seek health-related guidance, such as curating a workout plan, getting help with diet changes, understanding the side effects of certain medications, identifying the symptoms of different diseases, and finding the right line of treatment.

Built to support professional medical care, not replace it

Rather than answering all those questions in the home tab, ChatGPT wants you to ask them in the new Health section. "Health is designed to support, not replace, medical care," OpenAI writes in its official release, suggesting the company is cautious about its claims and promises for the new tool. When you ask the tool a health-related question, it routes the request through a curated pipeline that sources its information from authoritative medical content, uses conservative language, and includes expanded safety checks. The company says that it has worked with "more than 260 physicians" to understand how to handle health-related queries. It also lets you upload your medical records or connect fitness-tracking apps like Apple Health to provide more personalized (and accurate) suggestions. For instance, you can ask Health about your cholesterol or blood sugar trends, and it can draw information from your uploaded documents or a connected app to provide the answer.
Moreover, you can use the tool to understand your blood or physical test results, prepare for appointments with your doctor (highlighting significant symptoms and curating an easy-to-share list), and even compare insurance plans based on your healthcare patterns. The company also promises enhanced privacy in the Health section, stating that conversations there aren't used to train AI models. For now, ChatGPT Health isn't available for all users. Instead, the company puts you on a waitlist and informs you when you can access it. "Users with ChatGPT Free, Go, Plus, and Pro plans outside of the European Economic Area, Switzerland, and the United Kingdom are eligible" to enter the waitlist. In the long term, OpenAI could deepen integrations with healthcare systems, insurers, and other wellness tools.
[25]
OpenAI's ChatGPT Health Push Raises Questions About Data Security - Decrypt
Privacy advocates warn that health data shared with AI tools often falls outside U.S. medical privacy laws. On Wednesday, OpenAI announced a new feature in ChatGPT, allowing users to connect medical records and wellness data, raising concerns among some experts and advocacy groups over the use of personal data. The San Francisco-based AI giant said the tool, dubbed ChatGPT Health and developed with physicians, is designed to support care rather than diagnose or treat ailments. The company is positioning it as a way to help users better understand their health. For many users, ChatGPT has already become the go-to platform for questions about medical care and mental health. OpenAI told Decrypt that ChatGPT Health only shares general, "factual health information" and does not provide "personalized or unsafe medical advice." For higher-risk questions, it will provide high-level information, flag potential risks, and encourage people to talk with a pharmacist or healthcare provider who knows their specific situation. The move follows shortly after the company reported in October that more than 1 million users discuss suicide with the chatbot each week. That amounted to roughly 0.15% of all ChatGPT users at the time. While those figures represent a relatively small share of the overall user population, the company will still need to address security and data privacy concerns, experts say. "Even when companies claim to have privacy safeguards, consumers often lack meaningful consent, transparency, or control over how their data is used, retained, or repurposed," Public Citizen's big-tech accountability advocate J.B. Branch told Decrypt. "Health data is uniquely sensitive, and without clear legal limits and enforceable oversight, self-policed safeguards are simply not enough to protect people from misuse, re-identification, or downstream harm."
OpenAI said in its statement that health data in ChatGPT Health is encrypted by default, stored separately from other chats, and not used to train its foundation models. According to Center for Democracy and Technology senior policy counsel Andrew Crawford, many users mistakenly assume health data is protected based on its sensitivity, rather than on who holds it. "When your health data is held by your doctor or your insurance company, the HIPAA privacy rules apply," Crawford told Decrypt. "The same is not true for non-HIPAA-covered entities, like developers of health apps, wearable health trackers, or AI companies." Crawford said the launch of ChatGPT Health also underscores how the burden of responsibility falls on consumers in the absence of a comprehensive federal privacy law governing health data held by technology companies. "It's unfortunate that our current federal laws and regulations place that burden on individual consumers to analyze whether they're comfortable with how the technology they use every day handles and shares their data," he said. OpenAI said ChatGPT Health will roll out first to a small group of users. The waitlist is open to ChatGPT users outside the European Union and the UK, with broader access planned in the coming weeks on web and iOS. OpenAI's announcement did not mention Google or Android devices.
[26]
ChatGPT launches dedicated Health feature - here's how it works
OpenAI says the feature, designed with physicians, aims to help people understand patterns in their health and prepare for medical conversations - not replace doctors. As AI tools aimed at boosting people's health gain traction, ChatGPT is introducing new health-focused capabilities designed to help users better understand and manage their well-being. OpenAI announced on Wednesday the launch of ChatGPT Health, a dedicated space within its ChatGPT platform that allows users to connect their health information, such as medical records and data from wellness apps like MyFitnessPal, to receive more personalised and contextual responses. "ChatGPT Health is another step toward turning ChatGPT into a personal super-assistant that can support you with information and tools to achieve your goals across any part of your life," Fidji Simo, CEO of applications at OpenAI, wrote in a post on Substack. Health is already one of the most common reasons people turn to ChatGPT. According to OpenAI, over 230 million people globally ask the chatbot health and wellness-related questions every week. OpenAI said that ChatGPT Health will operate as a standalone experience within the AI platform. Health conversations, files, and connected apps are stored separately from users' other chats and memories. According to the company, health information and memories "never flow back into your non-Health chats," and users can view or delete Health memories at any time via settings. A key feature of ChatGPT Health is the ability to connect different apps containing your personal health data, the company said. "You can now securely connect medical records and wellness apps - like Apple Health, Function, and MyFitnessPal - so ChatGPT can help you understand recent test results, prepare for appointments with your doctor, get advice on how to approach your diet and workout routine, or understand the trade-offs of different insurance options based on your healthcare patterns," OpenAI said. 
Apps connected to Health require explicit user permission and undergo additional privacy and security reviews, OpenAI added. OpenAI stressed that ChatGPT Health is not intended to diagnose or treat medical conditions. "Health is designed to support, not replace, medical care," the company said. "It helps you navigate everyday questions and understand patterns over time - not just moments of illness - so you can feel more informed and prepared for important medical conversations." The tool was reportedly developed in close collaboration with physicians. Over the past two years, OpenAI said it worked with more than 260 physicians across 60 countries and dozens of specialties, who have provided feedback on model outputs more than 600,000 times. Access to ChatGPT Health is currently limited. OpenAI said it is starting with a small group of early users with ChatGPT Free, Go, Plus and Pro accounts, who can sign up for a waitlist to join the platform. Notably excluded from the early rollout are users in the European Economic Area, Switzerland and the United Kingdom, where local health and data regulations are more robust. Medical record integrations and some apps will also only be available in the US, OpenAI said. The company said it plans to make Health available to all users on web and iOS "in the coming weeks" as the experience is refined.
[27]
ChatGPT Health is a new space for medical questions that works with your health data -- but OpenAI insists it's not designed to replace your doctor
You can securely upload your medical records and connect apps like Apple Health

Hot on the heels of news that 40 million people are already using ChatGPT for healthcare questions every day, OpenAI has announced a new dedicated health tool called ChatGPT Health. While medical matters are already one of the most common things people ask ChatGPT about, they will now have their own dedicated space inside the app, designed specifically for these types of conversations. The key feature of ChatGPT Health is the ability to securely connect your own medical records and wellness apps (such as Apple Health, Function, and MyFitnessPal), grounding responses in your personal health data. The result is a more personalized and, ultimately, more useful experience when you're asking questions, deciphering messages from your doctor, or trying to understand test results. Regarding apps like Apple Health, OpenAI says you'll be in full control of how much access they have: "The first time you connect an app, we'll help you understand what types of data may be collected by the third party. And you're always in control: disconnect an app at any time and it immediately loses access." OpenAI says it has worked with more than 260 physicians across 60 countries and dozens of specialties to help refine how ChatGPT Health responds, when it should recommend follow-ups with a physician, and how to communicate clearly without oversimplifying. We all know that AI chatbots can hallucinate answers to questions -- a polite way of saying they can just make stuff up -- so the immediate question that springs to mind is: can you trust AI for medical advice? OpenAI neatly dodges this issue by saying, "Health is designed to support, not replace, medical care. It is not intended for diagnosis or treatment. Instead, it helps you navigate everyday questions and understand patterns over time -- not just moments of illness -- so you can feel more informed and prepared for important medical conversations."
By removing the responsibility for diagnosis and treatment, OpenAI is reminding you that ChatGPT shouldn't be relied on for solid answers all the time, especially when it comes to something as important as medical advice, and is shifting its role to more of a supporting service that helps you understand the information you've been given.

Videos posted by OpenAI on X show ChatGPT Health appearing as an option in the main left-hand menu bar under 'Health', and being used for general queries such as, "I have my annual checkup with my physician tomorrow -- what should I ask?" Another video shows ChatGPT Health pulling in data from Apple Health and suggesting a workout routine designed to improve your fitness based on your current activity levels. There's also an example of how the tool works once you've uploaded your medical records, including comparing insurance plans to help you decide which option might be best for you.

Giving OpenAI access to your health records inevitably raises concerns about privacy and security. Health information is among the most sensitive data we possess, and the thought of handing it over to a partly for-profit company will make many people uneasy. To address those concerns, OpenAI says: "We recognize that people share personal and sensitive information with ChatGPT. That understanding shapes how we design the security, privacy, and data controls for all of our products -- from the start. Even before introducing ChatGPT Health, we built foundational protections across ChatGPT to give you meaningful control over your data, including temporary chats, the ability to delete chats from OpenAI's systems within 30 days, and training our models not to retain personal information from user chats." It continues: "Conversations and files across ChatGPT are encrypted by default at rest and in transit as part of our core security architecture.
Due to the sensitive nature of health data, Health builds on this foundation with additional, layered protections -- including purpose-built encryption and isolation -- to keep health conversations protected and compartmentalized. Conversations in Health are not used to train our foundation models." In short, your health data won't mix with your everyday ChatGPT conversations -- it's kept entirely separate.

ChatGPT Health isn't widely available yet. To access it, you'll need to join a waitlist. ChatGPT users on Free, Go, Plus, and Pro plans outside of the European Economic Area, Switzerland, and the United Kingdom are currently eligible. That restriction may not last long, though: OpenAI says it plans to expand access to all users on web and iOS in the coming weeks.

OpenAI describes ChatGPT Health as "supporting, not replacing, care from clinicians," but with millions of people already relying on ChatGPT for medical guidance, this launch makes one thing clear: AI is set to play an increasingly central role in how we engage with healthcare.
[28]
ChatGPT Health is a dedicated tab for health. Here's how it works
Why it matters: Seeking answers for health-related queries is already a top use of ChatGPT, with more than 40 million people using it daily for medical and health insurance questions.
* Even before the introduction of the dedicated tab, people have been flocking to ChatGPT to make sense of lab tests, navigate health insurance claims and figure out what is causing various symptoms.

Driving the news: The new feature, dubbed ChatGPT Health, aims to expand on what's already being done in chats by allowing users to connect to other apps and data sources.
* OpenAI says that information shared in the health tab won't be used in other types of chats and that people can easily view and delete their health-related "memories."
* A small group will test the feature first. Those interested in access to the health features can sign up for a waitlist, while OpenAI says it plans to make the health tab available to all users on the web and iOS in the coming weeks.

The big picture: OpenAI says it's trying to address a world in which people have lots of data from medical devices, fitness trackers and electronic health records, but often struggle to make sense of the information.
* Meanwhile, ChatGPT is available 24/7, while doctors typically have only a few minutes to spend with a patient in any given visit.

What they're saying: AI doesn't replace medical care, but it can play an important role in helping people navigate a complicated health care system, OpenAI CEO of Applications Fidji Simo said Wednesday on a call with reporters.
* "It's great at synthesizing large amounts of information," Simo said. "It has infinite time to research and explain things. It can put every question in the context of your entire medical history."

Between the lines: Privacy is obviously a big issue when it comes to health data. The company says it won't use user data to train its models, and there remains the option to use temporary chat so the information won't be stored.

Yes, but: Like other information shared with ChatGPT, health information could potentially be made available to litigants or government agencies via a subpoena or other court order.
* That seems particularly noteworthy at a time when access to reproductive health care and gender-affirming care is under threat at both the state and federal levels.
* User data could get swept up in other ways too. As part of their copyright battle against OpenAI, news organizations have obtained access to millions of ChatGPT logs, including from temporary chats that were meant to be deleted after 30 days.
* Sam Altman has called for some sort of legal privilege to protect sensitive health and legal information.

What to watch: OpenAI said it has more health features on its road map and will talk soon about additional work with various health care systems.
[29]
ChatGPT Health Just Wants to Save Your Doctor's Time, Nothing More | AIM
The new feature operates as a separate space within ChatGPT, with purpose-built encryption, data isolation and controls designed for sensitive health information.

OpenAI is drawing a clearer boundary between general-purpose AI and sensitive personal data. The company is reportedly working on a new audio model and a dedicated device, while also expanding its efforts in the healthcare sector. On January 7, the company announced ChatGPT Health, a dedicated health experience within ChatGPT that allows users to securely connect personal medical records and wellness apps, while keeping health data isolated from the main chat interface. The move reflects OpenAI's emphasis on data separation and privacy. The new health experience operates as a separate space within ChatGPT, with purpose-built encryption, data isolation and controls designed specifically for sensitive health information. "Conversations in Health are not used to train our foundation models," the company said.
[30]
OpenAI tries tackling privacy concerns with new ChatGPT Health
More than 230m people globally ask health and wellness-related questions on ChatGPT every week, OpenAI said.

OpenAI is releasing ChatGPT Health, a dedicated service within the AI chatbot for health-related queries. The new service is marketed as a secure, 'one-stop shop' way of asking ChatGPT personal health questions. Users can connect their medical records and wellness apps such as Apple Health and MyFitnessPal to ChatGPT, allowing the bot to analyse the data and output informed responses, said OpenAI. This can include understanding test results, preparing for appointments, and advice on diet and workouts.

According to OpenAI, more than 230m people globally ask health and wellness-related questions on ChatGPT every week. However, these are not entirely private. Speaking on a podcast last year, CEO Sam Altman said that ChatGPT does not offer doctor-patient confidentiality the way a trained human professional would. "People talk about the most personal shit in their lives to ChatGPT." This, he said, could lead to privacy concerns. Plus, if left on its default settings, ChatGPT also uses user inputs and content to train its models.

'Health' tries to mitigate these issues. Conversations from the service are not used to train OpenAI's foundation models, the company said. And although ChatGPT Health might draw on data from non-Health chats, OpenAI claims that data shared with Health will not flow back out the other way. With hundreds of millions using the bots, information shared with ChatGPT, Anthropic's Claude and the like is not just valuable to the companies providing the services. A security firm recently uncovered a VPN service that captured conversations users were having on AI platforms and sold the data for marketing purposes. OpenAI said it worked with a few hundred physicians across the world for more than two years to develop the service.
The chatbot has been evaluated against clinical standards set by an assessment framework OpenAI has created specifically for examining health-related outputs. Interested users outside the EEA, Switzerland and the UK can sign up for the waitlist to join ChatGPT Health, which will run with a small cohort of users to refine the experience. The company plans to expand access and make the service available to users across the web and iOS in the coming weeks, it said.
[31]
OpenAI introduces ChatGPT Health to answer users' medical questions - SiliconANGLE
OpenAI introduces ChatGPT Health to answer users' medical questions OpenAI Group PBC today previewed ChatGPT Health, an upcoming feature that will help users of its chatbot service find medical information. The feature takes the form of a new section in the ChatGPT interface. According to OpenAI, the chatbot will detect when users enter medical questions into the main chat box and suggest that they switch to ChatGPT Health. It's also possible to launch the feature manually through a button on the sidebar. OpenAI says that ChatGPT Health supports a wide range of healthcare-related prompts. Users can ask it to explain lab results, put together an exercise plan and recommend questions to ask during a medical appointment. It's possible to customize how the feature answers prompts by providing it with high-level instructions ahead of time. According to OpenAI, users can further personalize ChatGPT Health by giving it access to their medical data. The feature is capable of importing a patient's health records through an integration with a clinical data management platform called B.well. An Instacart connector enables ChatGPT Health to turn a meal plan into food orders. The feature can also integrate with several popular wellness apps, most notably Apple Health. The app provides access to healthcare metrics collected by the user's Apple Watch. The smartwatch can track the wearer's heart rate, sleeping patterns and running form along with more than a half dozen other data points. OpenAI will store ChatGPT Health data separately from other account information. According to the company, the infrastructure it uses to host medical details includes "purpose-built encryption and isolation" mechanisms. Its engineers won't use the data in AI training projects and users can delete their medical chat histories. OpenAI developed the large language model that powers ChatGPT with the help of HealthBench, a benchmark dataset it released in May. 
The dataset includes over 48,000 physician-written rubrics for evaluating AI-generated answers to medical questions. OpenAI says that it created HealthBench in collaboration with over 260 medical professionals across dozens of specialties. ChatGPT Health will initially become accessible through a waitlist. According to OpenAI, the goal is to test the feature with a limited number of early adopters before making it broadly available. The company plans to roll out ChatGPT Health to the rest of its installed base in the coming weeks.
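The rubric-based evaluation described above can be sketched in code. The following is a hypothetical illustration, not OpenAI's actual implementation: it assumes each physician-written criterion carries a point value (negative for harmful behavior) and that a response's score is the fraction of achievable positive points it earns, clipped to the 0-1 range. The class name, the example rubric, and the boolean `met` flag are all invented for illustration; HealthBench itself relies on a model-based grader to judge whether each criterion is satisfied.

```python
# Hypothetical sketch of rubric-based scoring in the style of HealthBench.
# The real benchmark uses a model grader to decide whether each criterion
# is met; here a plain boolean stands in for that judgment.

from dataclasses import dataclass

@dataclass
class RubricCriterion:
    description: str
    points: int   # positive for desirable behavior, negative for harmful
    met: bool     # whether the grader judged the criterion satisfied

def score_response(criteria: list[RubricCriterion]) -> float:
    """Earned points divided by the maximum achievable positive points,
    clipped to [0, 1] so penalties cannot push the score below zero."""
    max_points = sum(c.points for c in criteria if c.points > 0)
    if max_points == 0:
        return 0.0
    earned = sum(c.points for c in criteria if c.met)
    return max(0.0, min(1.0, earned / max_points))

# Invented example rubric for a single model answer:
rubric = [
    RubricCriterion("Advises urgent evaluation for red-flag symptoms", 5, True),
    RubricCriterion("Explains reasoning in plain language", 3, True),
    RubricCriterion("Gives a definitive diagnosis without enough context", -4, False),
]
print(score_response(rubric))  # 1.0: both positive criteria met, no penalty applied
```

Under this scheme a response that triggers a negative criterion while missing the positive ones scores zero, which mirrors the intuition that confidently harmful answers should not earn partial credit.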
[32]
OpenAI suggests ChatGPT play doctor as millions of Americans face spiking insurance costs: 'In the U.S., ChatGPT has become an important ally' | Fortune
One week after the expiry of the Affordable Care Act's enhanced healthcare subsidies, Americans are starting to feel the pinch. Up through the end of 2025, around 24 million Americans used the subsidies to purchase healthcare at a discounted rate. That lifeline is now gone as the Senate remains deadlocked in renewal talks. But where there is crisis, one of the Internet age's most prolific disruptors has spied opportunity. Americans are increasingly turning to artificial intelligence to help decipher the thorny and ever-changing labyrinth that is the U.S. healthcare system. Three in five Americans say they recently used AI tools for health or healthcare queries, according to a survey published Monday by ChatGPT creator OpenAI. The survey was part of a larger report tracking similar trends globally. The report acknowledged that difficulties in dealing with U.S. healthcare requirements are likely an important driver of this user trend. "In the United States, the healthcare system is a long-standing and worsening pain point for many," the report read. Tracking between 1.5 and 2 million health insurance-related messages each week, OpenAI found Americans use ChatGPT for everything from seeking diagnoses to comparing coverage plans and dealing with post-visit bills and claims. Even before ACA subsidies expired, the complexity and high cost of interacting with the healthcare system was discouraging for many Americans. Around 27 million Americans are uninsured, according to the CDC, with many more underinsured, meaning they have coverage but it rarely pays out enough to cover the cost of a claim. A December Gallup poll found a record-low 16% of Americans were satisfied with the healthcare system. The U.S. healthcare system is bogged down by an abundance of stakeholders, which researchers say has contributed to higher administrative costs.
These include private plans offered through employers, federal providers such as Medicare and Medicaid, and plans offered through the ACA, which offered income-based subsidies for individual coverage on federally run marketplaces. In 2023, nearly 60% of insured Americans said they experienced problems using their health insurance, according to a survey by health policy research outfit KFF. That survey also found around nine in 10 insured adults would support policies to simplify healthcare, including requiring insurers to maintain up-to-date provider directories and give clearer disclosures about out-of-pocket cost liabilities. The system's complexity was already funneling many patients to AI tools like ChatGPT for help, the OpenAI report found. The company also claimed people living in communities with low coverage rates could stand to benefit the most from having real-time access to healthcare information. On average, nearly 600,000 healthcare-related queries are submitted to ChatGPT every week from so-called "hospital deserts," underserved rural areas at least a 30-minute drive from the nearest general medical center.

The trouble with using ChatGPT as your medical advisor

AI can be helpful in deciphering bureaucratic bloat, though researchers have warned of the risks when users become too reliant on AI for medical diagnoses. More than 60% of Americans say AI-generated health information is at least somewhat reliable, according to a survey last year by the University of Pennsylvania. But AI medical diagnoses can also suffer from inaccuracies and bias, as studies have found large language models can issue vastly different recommendations based on a patient's race, income level, and sexual orientation. Issues also arise when users turn to AI for mental health advice. Last year, OpenAI was hit with seven lawsuits in California alleging ChatGPT use had contributed to multiple suicides and psychological harm.
In November, OpenAI tweaked its usage policies to walk back ChatGPT's ability to give medical advice. OpenAI did not immediately return Fortune's request for comment. In a statement to Business Insider related to the usage policy change, an OpenAI spokesperson said: "ChatGPT has never been a substitute for professional legal or medical advice, but it will continue to be a great resource to help people understand legal and health information." With millions suddenly lacking any coverage at all in the wake of ACA's subsidy expiry, many more Americans could tap that resource soon.
[33]
OpenAI launches dedicated ChatGPT Health space
OpenAI announced ChatGPT Health on Wednesday, creating a dedicated space within the platform for users to discuss health topics with ChatGPT, as over 230 million people query health and wellness matters each week. The ChatGPT Health feature separates health-related conversations from standard interactions. This siloing ensures that details from health discussions do not influence or appear in everyday chats with the AI. OpenAI designed this separation to maintain distinct contexts across user sessions. When users initiate health topics outside the designated Health section, ChatGPT prompts them to relocate the conversation. This mechanism directs discussions to the appropriate area, preserving the isolation of sensitive health data from general use. Inside the Health section, ChatGPT draws on information from a user's prior standard interactions. For instance, if a user previously mentioned being a runner while seeking a marathon training plan, the AI recalls this detail when addressing fitness goals in Health. Such cross-referencing personalizes responses without merging conversation histories fully. ChatGPT Health supports integration with personal data and medical records from third-party wellness applications. Compatible services include Apple Health, Function, and MyFitnessPal. Users can link these sources to provide the AI with relevant health metrics, enhancing the tool's utility for wellness tracking. OpenAI commits to excluding Health section conversations from model training processes. This policy safeguards user privacy by preventing health data from contributing to improvements in the underlying AI systems. Fidji Simo, CEO of Applications at OpenAI, detailed in a blog post how ChatGPT Health addresses specific healthcare challenges. These include high costs, access barriers, overbooked doctors, and discontinuities in patient care. Simo positions the tool as a targeted response to these systemic issues. 
OpenAI emphasizes limitations of large language models like ChatGPT. These systems generate responses by predicting the most probable output based on patterns, without verifying factual accuracy. They lack an inherent understanding of truth and remain susceptible to hallucinations, where incorrect information is confidently presented. The company's terms of service state explicitly that ChatGPT is "not intended for use in the diagnosis or treatment of any health condition." This disclaimer underscores the experimental nature of the feature. ChatGPT Health rollout begins in the coming weeks, extending access to eligible users progressively.
[34]
OpenAI says 40 million people use ChatGPT for healthcare every day
200 million ChatGPT users ask AI at least once a week about health-related matters

OpenAI has published a report claiming that 40 million people are using ChatGPT for health-related questions every single day, a number that would have sounded wild a couple of years ago but now feels almost inevitable. The company describes its chatbot as a healthcare ally, saying users regularly ask about symptoms, medications, treatment options, and how to navigate often overwhelmed health systems. The report suggests more than five percent of all ChatGPT prompts are about health, and 200 million of the chatbot's 800 million weekly users ask at least one health-related prompt every week. Most of those are people trying to figure out whether a headache is serious, what a complicated diagnosis actually means, or whether a new prescription is supposed to make them feel this tired. I will admit I have done the same after a late-night indigestion spiral, something I used to turn to Google for only a couple of years ago. OpenAI's report surveyed 1,042 US adults who had used AI for healthcare in the past three months about exactly how they use the chatbot for health-related matters: 55% used AI to "Check or explore symptoms", 52% used a chatbot to "Ask healthcare questions at any time of day", 48% for "understanding medical terms or instructions", and 44% used AI to "learn about treatment options". OpenAI says these stats show "how Americans are using AI for healthcare navigation: organizing information, translating jargon, and generating drafts they can verify." One example the company highlighted was of Ayrin Santoso from San Francisco, who "used ChatGPT to help coordinate urgent care for her mother in Indonesia after her mother suffered sudden vision loss that her family attributed to fatigue." According to OpenAI, Santoso "entered symptoms, prior advice, and context, and received a clear warning from ChatGPT that her mother's condition could signal a hypertensive crisis and possible stroke."
Following ChatGPT's initial response, Santoso's mother was hospitalized in Indonesia and has since "recovered 95% of her vision in the affected eye." OpenAI argues that AI can help outside clinic hours, when real doctors are hard to reach. That makes sense on paper, given how confusing health information can be, but there are serious risks, especially if you take ChatGPT's word as gospel. A chatbot cannot replace a doctor; it does not have your full medical history, and it can still get things wrong in ways that matter. OpenAI says it is working with hospitals and researchers to improve accuracy and safety, but the core message is clear: millions of people have already decided AI is part of their health routine, whether the rest of us like it or not. 40 million daily users is a wild milestone, but while it's easy to get carried away with such a landmark number, it's worth remembering that people have been using technology like Google for health-related queries for well over a decade. That said, Google's top search results used to be led by reliable health-related websites like the UK's NHS or WebMD. Now, AI Overviews add an element of AI uncertainty. And even more so when you're turning to an AI chatbot like ChatGPT, capable of making up the most ridiculous information. I don't think using AI for quick tips on health-related matters is a bad thing, especially in countries like the United States, where you need to pay to see a doctor about a simple skin irritation. But how do you know it's a simple skin irritation? And do you trust ChatGPT enough to take the risk?
[35]
Exclusive: 40 million Americans turn to ChatGPT for health care
Why it matters: Americans are turning to AI tools to navigate the notoriously complex and opaque U.S. health care system.

The big picture: Patients see ChatGPT as an "ally" in navigating their health care, according to analysis of anonymized interactions with ChatGPT and a survey of ChatGPT users by the AI-powered tool Knit.
* Users turn to ChatGPT to decode medical bills, spot overcharges, and appeal insurance denials, and when access to doctors is limited, some even use it to self-diagnose or manage their care.

By the numbers: More than 5% of all ChatGPT messages globally are about health care.
* OpenAI found that users ask 1.6 to 1.9 million health insurance questions per week for guidance comparing plans, handling claims and billing, and other coverage queries.
* In underserved rural communities, OpenAI says users send an average of nearly 600,000 health care-related messages every week.
* Seven in 10 health care conversations in ChatGPT happen outside of normal clinic hours.

Zoom in: Patients can enter symptoms, prior advice from doctors, and context around their health care issues, and ChatGPT can deliver warnings on the severity of certain conditions.
* When care isn't available, this can help patients decide if they should wait for appointments or if they need to seek emergency care.
* "Reliability improves when answers are grounded in the right patient-specific context such as insurance plan documents, clinical instructions, and health care portal data," OpenAI says in the report.

Reality check: ChatGPT can give wrong and potentially dangerous advice, especially in conversations around mental health.
* OpenAI currently faces multiple lawsuits from people who say loved ones harmed or killed themselves after interacting with the technology.
* States have enacted new laws focused on the use of AI-enabled chatbots, banning apps or services from offering mental health and therapeutic decision-making.

The intrigue: Multiple viral stories highlight how people have uploaded itemized bills to AI for analysis, uncovering errors like duplicate charges, improper coding, or violations of Medicare rules.

Behind the scenes: OpenAI says it's working to strengthen how ChatGPT responds in health contexts.
* The company is continuing to evaluate models to reduce harmful or misleading responses, and to work with clinicians to identify risks and improve.
* GPT-5 models are more likely to ask follow-up questions of the user, browse the internet for the latest research, use hedging language, and direct users to professional evaluation when needed, per the company.

💭 Our thought bubble: The end of enhanced Affordable Care Act subsidies could accelerate this quiet shift as uninsured and underinsured patients lean on chatbots for health care guidance.

What we're watching: How accuracy, liability, and access to patient data evolve as more Americans rely on AI for medical guidance without a doctor in the loop.
[36]
ChatGPT Health Promises Personalized Wellness Advice -- But a Medical Expert Says There's a Catch
On January 7, OpenAI announced the launch of ChatGPT Health, a new tool developed in collaboration with physicians to help people better understand and manage their health and wellbeing. According to OpenAI, healthcare is already one of ChatGPT's most common uses, with more than 230 million people globally asking the chatbot health and wellness-related questions every week. Now, with advancements that allow users to input personal health information, OpenAI says, "ChatGPT can help you understand recent test results, prepare for appointments with your doctor, get advice on how to approach your diet and workout routine, or understand the tradeoffs of different insurance options based on your healthcare patterns." But as AI tools move deeper into the most intimate corners of users' lives, concerns over data safety and privacy are rising, particularly when it comes to medical information, which carries a uniquely personal and sensitive weight. One medical expert, however, is bullish on the safety of personal data in ChatGPT Health. Dr. Robert Wachter, chair of the department of medicine at the University of California, San Francisco, and author of A Giant Leap: How AI Is Transforming Healthcare and What That Means for Our Future, said in a recent interview with Time that the central issue is ultimately "whether you trust OpenAI to keep to their word."
[37]
OpenAI's ChatGPT Health Could Change How You Prep for Doctor Visits - Phandroid
Ever left the doctor's office kicking yourself for forgetting to ask important questions? Or stared at lab results that might as well be written in another language? OpenAI's got something for that. The company has recently launched ChatGPT Health, a new tab that plugs into your medical records. ChatGPT Health connects to Apple Health, Function, and MyFitnessPal too. Now you can ask it to explain your cholesterol numbers or help you prep for appointments instead of panicking on Google at midnight. The feature isn't replacing your doctor. It's basically like having a really smart friend who's read all your medical records. It connects to apps you already use and pulls everything together. Say your blood sugar spiked last Tuesday. ChatGPT Health can look at your MyFitnessPal meals and Apple Health steps to help spot why. Similar to how Samsung Health tracks medications or Google Health Connect brings fitness data together, this wants to make your health info actually useful. The best part is that it's available whenever you need it. Doctor visits are rushed, but the tool can walk you through confusing test results. It also helps you prep better questions so you're not wasting time during appointments. OpenAI's taking privacy seriously here. Your health data stays isolated from regular ChatGPT chats. It won't train their AI models either. That's actually very important when you're sharing sensitive stuff like lab work and fitness habits. Right now, it's only rolling out to a small waitlist group in the US on web and iOS. If you're outside the States or using Android, you'll have to wait. OpenAI's clearly testing things before opening it up wider. Just remember, ChatGPT Health works alongside your doctor, not instead of them. It won't diagnose you or prescribe anything. Think of it as prep work so you show up to appointments ready with the right questions instead of a blank stare.
[38]
40 Million People Use ChatGPT Daily for Advice on Health, OpenAI Report Reveals | AIM
AI tools are being used at scale to navigate healthcare systems, particularly for insurance-related queries, after-hours guidance and administrative tasks, according to a January 2026 report by OpenAI analysing anonymised ChatGPT data. The report finds that over 5% of global interactions on ChatGPT are related to healthcare. On average, more than 40 million people turn to the platform daily with questions related to healthcare, and one in four users asks healthcare-related questions per week, indicating sustained use. A large share of this activity relates to non-clinical tasks. The analysis estimates that 1.6 million to 1.9 million messages per week focus on health insurance, including plan comparisons, billing issues, claims, eligibility and cost-sharing. Users primarily seek help organising information, understanding terminology and preparing documents rather than medical diagnosis. Timing data suggests AI is often used when traditional healthcare access is limited. Around 70% of healthcare-related interactions occur outside standard clinic hours, indicating a demand for information at night and on weekends. Geographic disparities also shape usage patterns. Users in rural and underserved areas generate close to 600,000 healthcare-related interactions per week. In areas defined as 'hospital deserts', locations more than 30 minutes from the nearest general hospital, AI tools recorded over 580,000 healthcare-related messages per week during a four-week period in late 2025. States including Wyoming, Oregon and Montana ranked highest by share of such interactions. Healthcare professionals are also using AI tools, largely for administrative support. Citing industry surveys, the report notes that 66% of US-based physicians reported using AI in 2024, up from 38% the previous year. Nearly half of US-based nurses report weekly use, primarily for documentation, billing and workflow support rather than clinical decision-making.
Meanwhile, OpenAI has released a new benchmark, HealthBench, designed to evaluate AI systems' capabilities in healthcare. The benchmark aims to assess whether large language models can support patients and clinicians with health discussions in ways that are trustworthy, meaningful and open to continuous improvement. HealthBench looks at seven key areas, including emergency care, managing uncertainty and global health.
[39]
OpenAI Launches ChatGPT Health as a 'Dedicated Space' for Medical Questions
Over 230 million people ask ChatGPT health questions every week, and now OpenAI is giving them a "dedicated space" to do it. The company rolled out ChatGPT Health this week, creating a separate section that keeps health conversations isolated from your regular chats. The tool integrates with wellness apps like Apple Health, Function, and MyFitnessPal, pulling in your personal health data. OpenAI's Applications CEO Fidji Simo says it's meant to tackle healthcare access issues and continuity of care. The company promises not to use Health conversations for training its models. But here's the catch: AI chatbots predict likely responses, not necessarily correct ones, and they're prone to hallucinations. OpenAI's own fine print states the platform isn't "intended for use in the diagnosis or treatment of any health condition." The feature launches in the coming weeks.
[40]
ChatGPT Health Is OpenAI's Biggest Step Towards AI-Powered Healthcare Yet
OpenAI said it has added multiple protections to secure user data. OpenAI introduced ChatGPT Health, a new dedicated space for healthcare conversations, on Wednesday. The San Francisco-based artificial intelligence (AI) giant has added new capabilities, the ability to connect third-party wellness apps, and multilayer security for the space. It is aimed at users who prefer a more cohesive and unified experience when asking the chatbot questions about their test results, appointments with doctors, advice on diet and workouts, and queries about health insurance. The company is currently running a pilot test, with a wider rollout expected in the coming months.
ChatGPT Health Introduced
In a blog post, the AI giant announced and detailed the new capability within ChatGPT. OpenAI is sharing early access to the feature with a small group of users, and interested individuals can sign up for the waitlist. No global release date was shared by the company at this time, but users can continue to ask healthcare-related questions in regular chats. The new experience comes amid heavy demand for health queries on ChatGPT, with OpenAI noting that people already ask hundreds of millions of questions about health and wellness each week. ChatGPT Health offers a separate workspace within the platform where health conversations, connected apps and files are compartmentalised from regular chat history to reduce the risk of sensitive data exposure, the company said. This includes layered protections such as purpose-built encryption and data isolation that go beyond OpenAI's existing security controls. Users will be able to connect electronic medical records and wellness apps, such as Apple Health, Function, MyFitnessPal and others, to ground ChatGPT's responses in their own health context.
The AI giant said that, by pulling in this data, the system can help users understand lab results, prepare for clinical appointments, track health trends like activity or sleep patterns, discuss diet and exercise routines, and even compare healthcare insurance options based on individual patterns of care. The company stressed that ChatGPT Health is not intended to diagnose or replace care from licensed medical professionals, but to help people make sense of information and engage more actively with their health data. "ChatGPT Health is another step toward turning ChatGPT into a personal super-assistant that can support you with information and tools to achieve your goals across any part of your life," said Fidji Simo, CEO, Applications at OpenAI. OpenAI also described steps it is taking to protect sensitive health information beyond basic ChatGPT security. Health conversations and connected files are encrypted both in transit and when stored, and remain separate from the user's other chat memories. The company said Health builds on its existing privacy architecture, which already includes options to delete chats within 30 days and default protections against training models on personal data. It also includes controls such as multi-factor authentication to strengthen account access. Conversations and data within ChatGPT Health are not used to train OpenAI's foundation models, the post noted.
[41]
ETtech Explainer: What OpenAI's new 'health' feature means for its second-largest user market, India
The feature is being presented as a helper, not a medical expert. OpenAI says it is meant to assist users in understanding reports, preparing questions for doctors, and keeping track of overall wellbeing, rather than offering diagnoses or treatment. OpenAI has created a standalone interface on ChatGPT called 'Health' for medical queries, personal health data, and fitness tracking, enabling users to refer to reports, share records and integrate information from popular wellness platforms in one place. The addition shows how health advice and information-seeking have emerged as a major reason people turn to conversational AI, with usage continuing to grow across markets such as India.
Will understanding lab reports and medical documents now be easier?
Traditionally, making sense of lab reports means a lot of anxious Googling or rushing to doctors' clinics. According to OpenAI, 'Health' will help users summarize documents in plain language, translate medical jargon, highlight what looks out of range, and suggest a short, practical list of questions to ask a doctor. But medical professionals warn that while such tools can offer clarity, they cannot replace clinical judgment. "If you want to interpret one test report, that's fine. But medical decisions are based on multiple variables and priorities -- when there are multiple tests and conditions, you have to link them together, and that's something a tool like ChatGPT can't reliably do," said Dr Rajiv Kovil, a Mumbai-based consultant diabetologist.
Does this raise the bar for Indian startups building AI-led healthcare tools?
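The "highlight what looks out of range" step described above is, at its simplest, a comparison of each lab value against a reference interval. The sketch below illustrates that kind of check only; the analyte names and reference ranges are invented for the example and are not clinical guidance or OpenAI's implementation.

```python
# Minimal sketch of an "out of range" lab check: compare each reported value
# against a reference interval and flag anything outside it.
# Ranges below are illustrative placeholders, not medical reference values.

REFERENCE_RANGES = {            # analyte -> (low, high, unit)
    "hemoglobin": (13.0, 17.0, "g/dL"),
    "tsh": (0.4, 4.0, "mIU/L"),
}

def flag_out_of_range(results: dict[str, float]) -> list[str]:
    flags = []
    for analyte, value in results.items():
        if analyte not in REFERENCE_RANGES:
            continue  # skip analytes without a known range
        low, high, unit = REFERENCE_RANGES[analyte]
        if value < low:
            flags.append(f"{analyte}: {value} {unit} (below {low})")
        elif value > high:
            flags.append(f"{analyte}: {value} {unit} (above {high})")
    return flags

print(flag_out_of_range({"hemoglobin": 11.2, "tsh": 2.1}))
# ['hemoglobin: 11.2 g/dL (below 13.0)']
```

As Dr Kovil's caution suggests, this single-value check is exactly what such tools do well; linking multiple abnormal results and conditions together is the clinical judgment they cannot reliably provide.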
For startups, this shifts the emphasis away from simply explaining routine test reports like complete blood count, thyroid or lipid panels, and towards strengths that are much harder to copy, such as trust, integration into clinical workflows, measurable health outcomes, and strong distribution across doctors, hospitals, diagnostic labs, employers and insurers. "Foundation models will keep extending into more verticals, and health is likely the first. For a couple of years there will be turbulence because the big horizontal platforms will offer what vertical apps offer and often free. The wrappers will suffer; the ones with real IP, data and clinical depth will survive and even gain," said Jaspreet Bindra, Founder and Author, AI&Beyond, Tech Whisperer Ltd. So, as general-purpose AI platforms start bundling health features, the competitive edge for Indian startups will increasingly come from their model's credibility and clinical validation. OpenAI is not alone in pushing AI into healthcare. Anthropic has also introduced a healthcare-focused version of its Claude chatbot, built for use by hospitals, insurers and patients in regulated medical settings. The move came just two days after OpenAI unveiled its health feature, underscoring the rising competition among large AI platforms to gain a foothold in specialised healthcare applications. "The opportunity for healthcare startups is precisely this, how do they build trust among consumers -- faster and sooner -- because that's the real moat," said Dilip Kumar, who leads health investments at Rainmatter Health.
How safe is it to share medical data with an AI platform?
A wave of tech companies is moving into healthcare, betting that better AI can spot patterns in personal health data and turn them into more tailored guidance. But the same shift also heightens anxiety around privacy and safety, because these systems would be handling deeply sensitive medical information and nudging users on high-stakes health decisions.
"The only health product that has really worked at scale is Apple Health, and a big reason is trust -- Apple has built its brand around privacy, so people are more willing to share sensitive data there. Most others haven't worked because people simply don't trust them with their personal health records," Bindra added.
[42]
OpenAI Launches ChatGPT Health As Executive Says AI Flagged A Dangerous Drug Interaction Doctors Missed During Her Hospital Stay -- But Is It Safe?
On Thursday, OpenAI unveiled ChatGPT Health, a new AI-powered platform designed to help patients and doctors navigate complex medical information.
Executive Shares Personal Story Highlighting AI's Potential
Fidji Simo, CEO of OpenAI Applications, shared on X that the idea for ChatGPT Health is deeply personal. In a blog post she shared on social media, Simo recounted being hospitalized for a kidney stone last year when a resident prescribed an antibiotic that could have triggered a serious infection she had in the past. Because she had already uploaded her health records into ChatGPT, it flagged that this medication could reactivate a dangerous infection, Simo said. The resident confirmed the AI's warning, calling it a "relief" and noting that time constraints and fragmented medical records often prevent clinicians from seeing the full picture.
AI Helps Address Burnout and Fragmentation
Simo said that the U.S. healthcare system is under strain, with 62% of Americans saying it is broken and nearly half of physicians reporting burnout. ChatGPT Health is designed to help by organizing medical histories, synthesizing research, and translating complex information into plain language for patients.
Privacy Concerns Remain
Andrew Crawford of the Center for Democracy and Technology told the BBC that health data is highly sensitive. He said that as the startup explores personalization and potential advertising, health information must be kept separate from other ChatGPT memories. Despite these concerns, early adopters see ChatGPT Health as transformative. Max Sinclair, CEO of AI platform Azoma, told the publication that it is a "watershed moment" that could reshape both patient care and retail. Moreover, generative AI chatbots can sometimes produce inaccurate or misleading information, presenting it confidently as if it were factual.
Disclaimer: This content was partially produced with the help of AI tools and was reviewed and published by Benzinga editors.
[43]
New ChatGPT Health App : Organizes Records, Syncs Apple Health & Points You to Better Habits
What if managing your health felt as intuitive as scrolling through your favorite app? Wes Roth outlines how OpenAI's latest innovation, ChatGPT Health, is poised to redefine the way we approach wellness. This AI-powered platform doesn't just track your fitness goals or log your meals; it dives deeper, offering personalized medical insights and actionable recommendations tailored to your unique needs. Imagine decoding your medical test results with ease, syncing all your health data in one place, and even receiving advice based on your genetic profile. It's a bold step toward making healthcare more accessible, efficient, and, above all, personal. In this overview, we'll explore how ChatGPT Health is reshaping the future of personalized healthcare. From its ability to consolidate health metrics seamlessly to its advanced capabilities like biohacking support and trend-based insights, this app is more than just a convenience; it's a potential game-changer. But what does this mean for you? Whether you're looking to optimize your lifestyle, stay ahead of potential health risks, or simply take control of your wellness journey, ChatGPT Health offers a glimpse into the possibilities of AI-driven healthcare. Could this be the start of a more empowered approach to your health? Let's find out.
Key Features of ChatGPT Health
ChatGPT Health simplifies the often complex process of health management, giving you the tools to take charge of your wellness journey. Its standout features include:
* Decoding medical test results: Understand your health status more clearly and prepare for informed discussions with your healthcare provider.
* Personalized health advice: Receive tailored recommendations on diet, exercise, and lifestyle changes based on your specific health data.
* Integration with health apps: Seamlessly sync with platforms like Apple Health, MyFitnessPal, and Peloton to consolidate your health metrics in one place.
* Secure medical record storage: Organize and access your medical records through a dedicated, secure section for easy retrieval.
These features are designed to reduce the complexity of managing multiple health resources, empowering you to make informed decisions with greater confidence and clarity.
Advanced Capabilities for Personalized Insights
ChatGPT Health goes beyond basic health tracking by offering advanced, data-driven applications that cater to your individual needs. Its capabilities include:
* Genetic and health data analysis: Use insights from genetic information and health metrics to receive recommendations tailored to your biological profile.
* Biohacking support: Track supplements, dietary changes, and fitness routines to optimize your lifestyle and overall performance.
* Trend-based insights: Detect potential health risks, such as irregularities in blood work or genetic predispositions, through comprehensive data analysis.
By using these tools, ChatGPT Health becomes a versatile companion for anyone looking to enhance their health through personalized, data-driven strategies.
The Role of AI in Shaping the Future of Healthcare
The introduction of ChatGPT Health reflects a broader trend toward AI-driven healthcare solutions. By consolidating data from wearable devices, health apps, and other sources, the app represents a step toward a comprehensive health management system. Future advancements in AI-driven healthcare could include:
* AI-powered food tracking: Use image recognition to monitor food intake and portion sizes with precision.
* Real-time health monitoring: Enhance integration with wearable technologies for continuous health tracking and instant feedback.
* Streamlined medical advice: Access personalized recommendations without the need for extensive research or specialist consultations.
These innovations highlight the potential for AI to make healthcare more accessible, efficient, and tailored to individual needs, allowing you to take a more active role in managing your health.
OpenAI's Upcoming Hardware: Expanding Health Management Tools
In addition to ChatGPT Health, OpenAI is reportedly developing a pen-like AI-powered device to complement existing technologies like smartphones and smartwatches. This portable hardware aims to enhance your ability to manage health data on the go. Key features of this device include:
* Audio and camera functionality: Capture health-related data, such as symptoms or environmental factors, in real time for analysis.
* AI-driven assistance: Effortlessly record and analyze health information, providing insights that support better decision-making.
This device could serve as a third essential tool in your digital health ecosystem, bridging gaps between existing technologies and offering new ways to interact with and interpret health data.
Addressing Challenges and Ethical Considerations
While ChatGPT Health and its associated technologies offer significant promise, they also raise important challenges and ethical considerations. These include:
* Data privacy and security: Protecting sensitive medical information becomes increasingly critical as integration with multiple platforms grows.
* Accessibility and affordability: Ensuring that advanced AI tools remain user-friendly and affordable for a broad audience is essential.
* Ethical use of data: Addressing concerns about monetization, such as using health data for targeted advertising or insurance comparisons, is vital to maintaining trust.
These challenges underscore the importance of transparent practices, robust safeguards, and ethical oversight as AI continues to play a larger role in healthcare.
The Shift Toward Personalized Healthcare
The rise of AI-driven tools like ChatGPT Health signals a significant shift toward more personalized and proactive healthcare. By simplifying the process of researching health information and making decisions, these tools empower you to take an active role in managing your well-being. Moreover, the integration of user-friendly, data-driven systems emphasizes the importance of creating solutions that are both effective and accessible. As AI technologies continue to evolve, their potential to transform healthcare becomes increasingly evident. ChatGPT Health offers a glimpse into a future where personalized, data-driven health management is not just a possibility but a reality for everyone.
[44]
What is ChatGPT Health and How to Join the Waitlist
You can join the waitlist to get early access to ChatGPT Health. It's available to both free and paid users. OpenAI announced ChatGPT Health, a dedicated space inside ChatGPT where you can have health conversations and connect your medical records and fitness apps. OpenAI says health is already one of the most common ways people use ChatGPT. In fact, over 230 million people ask health questions on ChatGPT per week. So, OpenAI has come up with a dedicated experience for health within ChatGPT with privacy protections. ChatGPT Health combines user health data with AI intelligence. Health information is scattered across websites, wearables, fitness apps, PDFs, lab reports, medical notes, and more. OpenAI intends to bring all this information into ChatGPT Health so that users are more "informed, prepared, and confident" about their health. In addition, OpenAI has added health-specific privacy protections, including purpose-built encryption and data isolation for health chats. OpenAI says health conversations are not used to train OpenAI's models and have isolated memories. Next, live medical records are accessed through b.well for US consumers. As for app integrations, ChatGPT Health can connect to Apple Health, MyFitnessPal, Function, Weight Watchers, Peloton, Instacart, AllTrails, and more. OpenAI says ChatGPT Health is designed to support, not replace, medical care. It's also not intended for diagnosis or treatment. ChatGPT Health is currently available via a waitlist. Initially, it's rolling out to ChatGPT users across Free, Go, Plus, and Pro plans. However, note that users in the EU, Switzerland, and the UK can't access the new health experience. OpenAI is starting with a small group of early users, and in the coming weeks it will roll the feature out widely on the web and iOS. So that is all about ChatGPT Health.
While some users may not wish to divulge their health information to ChatGPT, millions are already using it to understand medical records. OpenAI has done the right thing by creating a dedicated space and adding privacy protections for health conversations.
[45]
OpenAI Says Over 40 Million Users Have Asked ChatGPT Healthcare Queries
OpenAI said that 7 out of 10 healthcare chats occur outside clinic hours.
OpenAI's ChatGPT is reportedly drawing a large volume of healthcare-related questions. As per the report, the San Francisco-based artificial intelligence (AI) giant claimed that more than 40 million users globally have sent the AI chatbot questions seeking healthcare and medical information. A significant portion of these messages is said to come from underserved rural communities, and one of the most asked topics is health insurance. Notably, in August 2025, when the company released the GPT-5 AI model, it had said that a big focus was on health-related performance.
OpenAI Reportedly Claims High Volume of Healthcare Queries
The AI giant shared several user data metrics with Axios on how individuals interact with the chatbot when it comes to healthcare and medical queries. Notably, healthcare messages from the abovementioned 40 million users make up north of five percent of all ChatGPT messages globally. The company reportedly also revealed that between 1.6 million and 1.9 million messages per week ask for guidance about health insurance, with primary questions around plan comparison, claims and billing, and coverage. Apart from this, the report also claimed that as many as 6,00,000 healthcare-related questions per week come from users residing in underserved rural communities, and seven out of 10 conversations occur at a time when clinics are generally closed. OpenAI also shared results from a survey it conducted in December 2025 with the publication. It asked several user behaviour questions of 1,042 adults in the US. As per the data shared in the report, 55 percent of the respondents stated that they use ChatGPT to check or explore physical symptoms they're facing, while 48 percent use the chatbot to understand medical terms and instructions. Another 44 percent admitted using AI to learn about treatment options. The data highlights two things immediately.
First is the limited accessibility of healthcare and medical information in the public domain. While Google has been a popular source for people to look up healthcare information, unless users know the right keywords to search for and can decipher technical medical language, the information is not readily accessible. Second is the accessibility of healthcare professionals. Many individuals, especially those living in areas with limited healthcare infrastructure, often do not visit doctors for minor ailments and resort to home remedies instead. OpenAI's data shows how AI is filling both of these gaps with informative and science-backed knowledge. However, there are concerns. With AI hallucination still an issue in 2026, the reliability of the information shared by a chatbot remains a big question. Although OpenAI told the publication that it is continuously working on improving healthcare-related responses, the margin of error combined with a massive user base could quickly become a recipe for disaster if ChatGPT starts spreading misinformation.
[46]
OpenAI Begins Rollout of Health and Wellness-Focused Version of ChatGPT | PYMNTS.com
The new ChatGPT Health is being made available to a select group of users first and will be expanded to all users on web and iOS within weeks, the artificial intelligence (AI) startup said in a Wednesday (Jan. 7) press release. This new offering is designed to bring together each user's health information and ChatGPT's intelligence, supported by purpose-built encryption and isolation that will keep these conversations secure and private, according to the release. It will enable users to connect their medical records and wellness apps so that the AI chatbot will be able to provide responses that are relevant and useful, the release said. ChatGPT Health's capabilities may vary by region and operating system, per the release. "Designed in close collaboration with physicians, ChatGPT Health helps people take a more active role in understanding and managing their health and wellness -- while supporting, not replacing, care from clinicians," OpenAI said in the release. The existing ChatGPT already fields questions about health and wellness from over 230 million people each week, making this one of the chatbot's most common kinds of conversations, according to the release. "Health is designed to support, not replace, medical care," OpenAI said in the release. "It is not intended for diagnosis or treatment. Instead, it helps you navigate everyday questions and understand patterns over time -- not just moments of illness -- so you can feel more informed and prepared for important medical conversations." OpenAI's announcement of ChatGPT Health came a day after the company released a report that found that in terms of the number of daily users, ChatGPT stands alongside primary care, urgent care and telehealth as a first stop for medical information. The PYMNTS Intelligence report "How AI Becomes the Place Consumers Start Everything" found that 38.5% of AI users said AI has fully or mostly replaced their previous methods of managing health and wellness. 
Among the nine activities included in the report, health and wellness is one of the most common uses of AI, second only to managing finances and banking.
[47]
OpenAI Launches ChatGPT Health to Satiate 230mn Queries a Week
The feature is expected to roll out in the coming weeks, with health discussions siloed off from general chats.
Trust OpenAI to create a business opportunity from a public peeve. At a time when the ChatGPT-maker is facing lawsuits over suicide abetment and demands for building guardrails around open discussions with children and teens, Sam Altman's company has come out with ChatGPT Health - a dedicated corner for such conversations. In a blogpost, the company describes it as a "dedicated experience in ChatGPT designed for health and wellness" for the over 230 million people who are already seeking information about medical issues on a weekly basis. What OpenAI is now promising is that such conversations would be siloed from one's other chats. In other words, such information would not be available to others; what remains less clear is how conversations outside the Health silo might feed future training. So, what could it do for a user? "You can now securely connect medical records and wellness apps -- like Apple Health, Function, and MyFitnessPal -- so ChatGPT can help you understand recent test results, prepare for appointments with your doctor, get advice on how to approach your diet and workout routine, or understand the trade-offs of different insurance options based on your healthcare patterns," the blogpost says. The company says siloing health chats keeps them outside standard conversations with ChatGPT, and it will nudge users seeking health advice to switch over to the new section. However, within this silo, the AI may reference things one has discussed in its standard operating formats. A blog post written by Fidji Simo, the CEO of Applications at OpenAI, notes that ChatGPT Health is a response to existing issues in the healthcare space such as costs and access barriers, busy doctors and a lack of continuity in caregiving.
What the post does not say is how the company plans to address the known drawbacks of using AI chatbots in lieu of medical advice - that is, whether it treats statutory warnings as a sufficient means to that end.
Designed not to replace medical care, but are there safeguards?
The post by OpenAI claims that "Health is designed to support, not replace, medical care. It is not intended for diagnosis or treatment. Instead, it helps you navigate everyday questions and understand patterns over time -- not just moments of illness -- so you can feel more informed and prepared for important medical conversations." It adds: "To keep your health information protected and secure, Health operates as a separate space with enhanced privacy to protect sensitive data. Conversations in Health are not used to train our foundation models. If you start a health-related conversation in ChatGPT, we'll suggest moving into Health for these additional protections." Of course, we all know that OpenAI states in its terms of service that its services are "not intended for use in the diagnosis or treatment of any health condition." Where does that leave the users who actually chat about their health issues? Would the chatbot hallucinate and create problems? Or would the guardrails prevent such a response?
[48]
ChatGPT Health Launch and the Problem of AI Medical Trust
MediaNama's Take: OpenAI's launch of ChatGPT Health marks a decisive push to formalise what millions already do informally: turn to a chatbot for medical guidance. However, the move also exposes a widening gap between adoption and accountability. On the one hand, OpenAI points to clinical input, privacy protections, and a dedicated health interface. On the other hand, real-world use continues to show how easily generative AI can stray into unsafe territory when it delivers confident but flawed medical advice. Moreover, this expansion occurs at a time when courts are only just beginning to determine where responsibility lies when AI-driven health advice causes harm. Lawsuits in the US have already alleged that chatbot guardrails remain weak, inconsistent, and reactive. In the Adam Raine case, OpenAI strengthened safety features only after the lawsuit, effectively treating the public as a live testing environment. That pattern raises uncomfortable questions about whether safety systems mature through foresight or through failure. At the same time, the industry's preferred shields, namely disclaimers and terms of use, look increasingly inadequate for tools that present themselves as trusted, conversational authorities. When design choices deliberately mimic human interaction, they invite emotional trust and medical reliance, especially among vulnerable users. Yet when harm follows, companies often retreat behind legal fine print. Therefore, as AI systems edge closer to clinical roles, from analysing lab reports to renewing prescriptions, regulators can no longer afford a light-touch approach. Instead, they must demand enforceable standards, pre-deployment validation, and clear liability frameworks. Otherwise, ChatGPT Health risks becoming not just a new product category, but another chapter in the long history of technology racing ahead of the rules meant to protect the public.
OpenAI has announced ChatGPT Health, a dedicated health and wellness experience within ChatGPT designed to bring users' personal health information together with its AI capabilities. The launch follows widespread use of the base ChatGPT for health queries, with hundreds of millions of people globally asking health-related questions each week. As a result, OpenAI built ChatGPT Health to securely connect medical records and wellness apps so that it grounds responses in an individual's own health data and makes them more relevant and useful. Users can currently apply for the waitlist to use ChatGPT Health. According to the company, the system was built with extensive clinical input and iterative evaluation. OpenAI said it worked closely with physicians around the world, involving more than 260 clinicians across 60 countries to shape how the model responds to health questions, and to help prioritise safety and clarity in its outputs. OpenAI claims the underlying model was evaluated against clinical standards that reflect how clinicians assess the usefulness of health information. In practice, ChatGPT Health will operate as a separate space within ChatGPT, where users can upload medical records and link data from wellness apps. In this space, the system can help users understand lab results, prepare for doctor visits, explore diet and exercise routines, and consider insurance options with the added context of their own health information. OpenAI has also put extra protections in place for sensitive data. OpenAI keeps health conversations in an isolated environment with purpose-built encryption and does not use them to train its foundation models. Notably, this comes after Utah became the first US state to allow an AI system to prescribe medicine without human oversight in a pilot program. OpenAI published a new report this month setting out how widely people already use ChatGPT for health-related information. 
According to the report, more than 5% of all ChatGPT messages globally relate to healthcare. In addition, over 40 million people turn to ChatGPT every day with healthcare questions. In the report, OpenAI has suggested initial policy concepts for safely expanding the use of AI in healthcare. First, it proposes opening and securely connecting global medical data, with strong privacy protections, so AI systems can learn from large and diverse datasets. Second, it calls for building modern research and clinical infrastructure, including AI-enabled laboratories and decentralised clinical trials, to translate AI discoveries into real treatments. Third, OpenAI recommends supporting workforce transitions through apprenticeships, training programmes, and regional healthcare talent hubs. Finally, it urges regulators to clarify pathways for consumer AI medical devices and update medical device rules to support innovation in AI tools for doctors. While people are increasingly turning to ChatGPT for health-related questions, the practice carries significant risks when the AI provides inaccurate or unsafe advice. Generative AI models like ChatGPT may offer plausible-sounding but erroneous, or sometimes non-existent, medical information, a phenomenon known as "hallucination." Real-world cases highlight these risks. A California teenager reportedly died of a drug overdose after asking ChatGPT for drug-use advice over an extended period, with the chatbot allegedly responding with increasingly risky recommendations. Similarly, another case saw a 60-year-old man hospitalised with hyponatraemia after he cut salt from his diet based on AI-generated advice, highlighting the risks of following generic guidance without clinical oversight. Research from 2023 also found that ChatGPT's responses can be of low to moderate quality and may fail to align with medical guidelines, raising concerns about its reliability for health decisions. 
Furthermore, a study of AI mental health interactions reports that chatbots struggle with suicide-related prompts, sometimes producing inconsistent safety responses, and have been cited in lawsuits alleging harm to vulnerable users. The first such lawsuit was filed in 2025, accusing OpenAI of wrongful death after a teenager died by suicide following alleged encouragement from the chatbot. Several cases followed, including one in which the chatbot is central to a case involving murder. Harleen Kaur, a researcher at the Digital Futures Lab (DFL), says OpenAI's announcement of ChatGPT Health raises serious questions about responsibility, safety, and user over-reliance. "I think it is irresponsible for the company to announce a health use case given that there's a lack of clarity on its design and safety protocols," she says. According to Kaur, such announcements reinforce the perception that users should not question chatbot outputs, even though "there's little evidence to support their authority on health -- that is dangerous." More broadly, Kaur argues that existing liability frameworks fail to account for how people actually use AI chatbots for health and mental health queries. In practice, she argues that companies rely heavily on disclaimers and terms of use to shield themselves from responsibility. "The chatbot companies can get away with legal disclaimers about their status and therefore any unintended consequence that comes from such usage," she notes, describing this approach as an improper extension of caveat emptor, where consumers are responsible for assessing the reliability of a product, to AI systems that ordinary users cannot meaningfully inspect or evaluate. At the same time, she points to design choices that encourage users to place unwarranted trust in chatbot advice. 
According to Kaur, over-reliance stems not only from a lack of regulation but also from "its deliberate anthropomorphic design" and the sense of authority the interface projects. She adds that whistleblowers have alleged companies bypass safety features, including clear disclaimers, confidence indicators, and escalation prompts for high-risk cases. Ultimately, Kaur says governments must intervene more decisively. "Regulatory intervention in ascribing clear liability would be one of the most important levers that governments could use to prevent harm at a large scale," she says, even if companies push back over costs and slower deployment timelines. Kaur says India now faces a critical policy moment as chatbots increasingly enter healthcare and mental health contexts. While usage continues to expand, she argues that safety mechanisms remain inconsistent and poorly enforced. In her view, regulators must treat chatbot deployment as a lifecycle issue rather than a one-time compliance exercise. As she puts it, "checks and balances on chatbots should be done throughout the lifecycle, starting at the inception stage up until post-deployment safety assessments." However, she warns that many providers currently bypass even limited safeguards. Despite the existence of ethics review and clinical trial-style mechanisms for public health interventions, Kaur says "many providers don't perform safety or ethics testing before rolling out their products." Moreover, she adds, some companies avoid oversight altogether by refusing to categorise their products as health tools, even when users clearly rely on them for medical support. Regulatory clarity, therefore, remains central. Kaur argues that authorities must clearly define when a chatbot functions as a medical device and subject it to risk-based assessment based on design and intended use. At the same time, she notes that accountability remains difficult to enforce in practice. 
"Post facto reporting of error rates is significant but also difficult to implement because of the lack of an ecosystem where such errors can be studied by anyone but the service provider," she says. Looking ahead, Kaur calls for a stronger research and monitoring ecosystem in India. "India needs to imagine an ecosystem where research on harms caused by chatbots is documented more scientifically and clearly", she says, adding that, at present, it remains difficult to directly attribute harm because of weak data and limited correlation studies.
[49]
OpenAI Reports More Than 40 Million Users Turn to ChatGPT Daily for Health Guidance
Millions Turn to ChatGPT for Health Guidance Every Day, OpenAI Reveals
OpenAI's new report shows more than 40 million people opting for ChatGPT when seeking health advice. This sharp rise reflects a global trend as users depend on artificial intelligence to navigate complex healthcare systems. The data shows roughly 5% of all ChatGPT interactions worldwide are now about health topics or medical questions. Driven by high costs and limited access, users depend on the platform to understand insurance terms, check symptoms, and get ready for doctor visits. Notably, 70% of these questions arrive outside regular working hours, making the platform a round-the-clock digital safety net.
[50]
OpenAI launches ChatGPT Health to help users manage personal health data By Investing.com
Investing.com -- OpenAI has launched ChatGPT Health, a new dedicated experience that allows users to securely connect their health information with ChatGPT's AI capabilities. The new service creates a separate, secure space within ChatGPT where users can connect medical records and wellness apps like Apple Health, Function, and MyFitnessPal to get personalized health insights. According to OpenAI, health is already one of the most common uses for ChatGPT, with over 230 million people globally asking health and wellness related questions weekly. ChatGPT Health includes enhanced privacy and security features specifically designed for health data, including purpose-built encryption and isolation to keep health conversations protected. The company emphasized that conversations in Health are not used to train OpenAI's foundation models. The service was developed in collaboration with more than 260 physicians who have practiced in 60 countries across dozens of specialties. These medical professionals provided feedback on model outputs over 600,000 times across 30 areas of focus. OpenAI clarified that ChatGPT Health is designed to support, not replace, medical care and is not intended for diagnosis or treatment. Instead, it aims to help users understand test results, prepare for doctor appointments, get advice on diet and exercise routines, and understand insurance options based on their healthcare patterns. The service is initially available to a small group of early users outside the European Economic Area, Switzerland, and the United Kingdom. Users with ChatGPT Free, Go, Plus, and Pro plans can join a waitlist for access. OpenAI plans to expand availability to all users on web and iOS in the coming weeks. Medical record integrations and some apps are currently available only in the U.S., and connecting Apple Health requires iOS. This article was generated with the support of AI and reviewed by an editor. For more information see our T&C.
[51]
OpenAI introduces ChatGPT Health to support informed health decisions | Advertising | Campaign India
The AI-led offering is designed to help users navigate health information with greater clarity and confidence. OpenAI has launched ChatGPT Health, positioning it as an AI agent that brings together personal health information and conversational AI to support users in understanding and managing health-related concerns. The announcement reflects the growing role of AI in everyday health and wellness decision-making. According to a note from OpenAI, around 230 million users currently use its services each week to ask health and wellness-related questions. ChatGPT Health has been developed to respond to this demand by offering a more structured, secure and context-aware way for users to engage with health information. The core aim of ChatGPT Health is to help users feel more informed, prepared and confident when navigating health issues. By combining individual health data with ChatGPT's AI capabilities, the tool is intended to provide more relevant and grounded responses, while remaining supportive rather than directive. A key focus of the new offering is privacy and security. OpenAI has stated that ChatGPT Health includes additional, layered protections designed specifically for health-related use cases. These include purpose-built encryption and data isolation mechanisms intended to keep health conversations "protected and compartmentalised". The company has emphasised that users will retain control over their data, with clear options around access and usage. Users will be able to securely connect medical records and wellness apps to ChatGPT Health. This integration is designed to ground conversations in real information, allowing responses to be more personalised and contextually accurate. Linked data can be used to help users understand recent test results, organise information ahead of medical appointments and approach conversations with healthcare professionals more confidently. 
The platform also supports connections with wellness and fitness services such as Apple Health, Function and MyFitnessPal. Through these integrations, ChatGPT Health can assist users in interpreting trends related to diet, exercise and overall wellbeing, and offer guidance on how to think about lifestyle routines in a structured way. OpenAI has said the product was designed and developed in close collaboration with physicians worldwide. Over a period of more than two years, OpenAI worked with over 260 physicians across 60 countries and dozens of medical specialties. This process focused on understanding what makes health-related responses genuinely helpful, as well as identifying areas where information could be incomplete, misleading or potentially harmful. The company has positioned this clinical collaboration as central to the design of ChatGPT Health, particularly in ensuring that answers are clear, cautious and framed to support informed discussions rather than replace professional medical advice. The emphasis is on preparation and understanding, rather than diagnosis or treatment decisions. ChatGPT Health is not being launched as a fully open product immediately. For now, users can register to join a waitlist, indicating a phased rollout approach. This may allow OpenAI to refine the experience, test safeguards and incorporate feedback before broader availability. The launch reflects wider trends at the intersection of technology, health and consumer services, where AI tools are increasingly expected to offer not just information, but structured support grounded in individual context. For OpenAI, ChatGPT Health represents a move towards more specialised, domain-specific applications of conversational AI. 
As consumer expectations around digital health tools continue to evolve, ChatGPT Health signals OpenAI's intent to play a more active role in how people access, interpret and prepare to act on health information, while placing privacy, safety and professional collaboration at the centre of the experience.
[52]
ChatGPT Health and your medical data: How safe will it really be?
How safe is your health data inside OpenAI's new ChatGPT Health OpenAI has officially entered the doctor's office with the launch of "ChatGPT Health," a specialized environment within its popular chatbot designed to handle your most sensitive medical information. The promise is enticing: a centralized digital assistant that can digest your blood test results, integrate with Apple Health to track your sleep, and help you prepare for a consultation, all by accessing your actual medical records. But as users are invited to upload their clinical history to an AI company, the pressing question isn't just about utility; it is about safety. OpenAI has anticipated the privacy backlash. The company explicitly positions ChatGPT Health as a "walled garden." According to the announcement, this new domain operates separately from the standard ChatGPT interface. The most significant pledge is that data shared within this health-specific environment, including conversations, uploaded files, and connected app data, will not be used to train OpenAI's foundation models. This is a critical distinction from the standard free version of ChatGPT, where user interactions effectively become fodder for future AI iterations. Technically, OpenAI is leveraging "purpose-built encryption" and isolation protocols. To facilitate the connection with electronic medical records (EMRs) in the US, they have partnered with b.well, a health management platform that acts as the secure plumbing between the chatbot and healthcare providers. The system is designed so that users can revoke access or delete "health memories" at any time, theoretically placing the kill switch in the patient's hands. However, privacy experts and healthcare professionals remain skeptical, and for good reason. 
"Safety" in healthcare means more than just preventing a data breach; it means ensuring accuracy and compliance. While OpenAI claims enhanced protections, the landscape of HIPAA compliance for direct-to-consumer AI tools is murky. Standard ChatGPT is not HIPAA-compliant, and while ChatGPT Health adds layers of security, it stops short of being a regulated medical device. The platform explicitly warns that it is not intended for diagnosis or treatment, a "use at your own risk" disclaimer that shifts the burden of error squarely onto the user. Furthermore, the "black box" nature of AI remains a hurdle. Even if the data is secure from hackers, the risk of "hallucinations" - where the AI confidently invents false medical facts - persists. A secure system that gives bad advice is still a danger to patient health. While OpenAI promises not to train on this data now, terms of service can change. Once data enters the ecosystem of a for-profit tech giant, the long-term trajectory of that data is often unpredictable. Ultimately, ChatGPT Health represents a trade-off. It offers a futuristic level of convenience, turning scattered medical PDFs into actionable insights. But it requires a massive leap of faith. Users must decide if the value of an AI health assistant is worth the risk of handing over the keys to their biological biography. For now, the safest approach may be to treat ChatGPT Health like a medical student: helpful for summarizing notes, but never to be trusted with your life without a supervisor present.
[53]
OpenAI unveils ChatGPT Health, a new privacy-focused AI experience for personal health insights: All details
The feature is rolling out gradually to select users, with broader web and iOS availability planned in the coming weeks. OpenAI has officially announced ChatGPT Health, a new dedicated health-focused experience that aims to help users better understand and manage their health information using AI. The feature brings together personal health data and ChatGPT's intelligence in a secure environment. According to the company, health-related queries already represent one of the most common use cases for ChatGPT, with hundreds of millions of users globally seeking health and wellness information each week. ChatGPT Health builds on this demand by allowing users to securely connect medical records and wellness apps, enabling more contextual and personalised responses. Unlike normal chats, ChatGPT Health operates as a separate space within the platform, offering enhanced privacy protections tailored for sensitive health data. The company said health conversations are encrypted with additional safeguards and are isolated from other chats. The company has also stated that these conversations within Health will not be used to train OpenAI's foundation models. ChatGPT Health will allow users to link medical records and apps such as Apple Health, MyFitnessPal and Function, helping them better understand lab results, track trends over time, prepare for doctor appointments, and get guidance on diet and fitness. The company also clarified that ChatGPT Health is not intended to diagnose or treat medical conditions but rather to support informed discussions with healthcare professionals. To enable secure access to medical records in the US, OpenAI has partnered with health data platform b.well. It stated that users will have full control over their data and can disconnect records or apps at any time. 
App connections will also require explicit user permission and undergo additional security reviews before becoming available. OpenAI said the product was developed in close collaboration with physicians worldwide. Over the past two years, more than 260 doctors across 60 countries contributed feedback to help shape how the system responds to health questions, particularly around safety, clarity, and appropriate escalation of care. The company also uses its HealthBench evaluation framework to assess responses against clinical standards. ChatGPT Health is currently rolling out to a limited group of users on Free, Go, Plus, and Pro plans outside the European Economic Area, Switzerland and the UK. OpenAI plans to expand access more broadly on the web and iOS in the coming weeks. Users interested in early access can join a waitlist as the company continues to refine the experience.
[54]
Over 40 mn people use ChatGPT daily for health advice: OpenAI
Over five percent of all messages sent to ChatGPT globally are about healthcare. More than 40 million people around the world now turn to ChatGPT every day for health-related advice, according to a new report from OpenAI, the company behind the chatbot. The report says that over five percent of all messages sent to ChatGPT globally are about healthcare. People use ChatGPT for many health-related reasons. More than half said they used it to check or explore symptoms. Nearly half said it helped them understand medical terms or instructions, while about 44 percent used it to learn more about treatment options. The findings, first reported by Axios, highlight how quickly AI chatbots are becoming part of the US healthcare system. Some states, including California and Texas, have tried to limit how AI can be used in healthcare. At the same time, Congress has been slow to act, and the Trump administration is working to weaken state-level AI laws. OpenAI is also facing lawsuits that claim ChatGPT contributed to suicides or made mental health problems worse, a concern some experts call AI psychosis. Despite these concerns, OpenAI's report presents ChatGPT as a helpful tool for people struggling with a complex and costly healthcare system, especially in rural areas with few hospitals or doctors. "For both patients and providers in the US, ChatGPT has become an important ally, helping people navigate the healthcare system, enabling them to self-advocate, and supporting both patients and providers for better health outcomes," the company said. "Americans are using AI and ChatGPT to equip themselves with information to gain more agency over their health, particularly when dealing with a system that's difficult to navigate and makes decisions without a lot of context." 
OpenAI also noted that people in rural "hospital deserts" send about 580,000 healthcare-related messages to ChatGPT each week. "AI will not, on its own, reopen a shuttered hospital, restore a discontinued obstetrics unit, or replace other critical but vanishing services," the company said. "But it can make a near-term contribution by helping people in underserved areas interpret information, prepare for care, and navigate gaps in access, while helping rural clinicians reclaim time and reduce burnout."
OpenAI unveiled ChatGPT Health, a dedicated space for health and wellness conversations that lets users connect medical records and wellness apps. With 230 million people asking health questions weekly, the feature offers personalized health responses while OpenAI insists it's not intended for diagnosis or treatment—a crucial distinction following recent cases where AI-generated health advice led to tragic outcomes.
OpenAI announced ChatGPT Health on Wednesday, creating a dedicated section within its AI chatbot designed specifically for health and wellness conversations [1][2]. The feature allows users to connect medical records and integrate with wellness apps like Apple Health, MyFitnessPal, and Function to receive personalized health responses [3]. According to OpenAI, more than 230 million people ask health and wellness questions on the platform each week, making it one of the chatbot's most common use cases [1]. The company worked with more than 260 physicians over two years to develop the feature, which aims to help users summarize care instructions, prepare for doctor appointments, and understand test results [1].
The launch comes amid growing concerns about the risks of AI-generated health advice. Just days before the announcement, an investigation detailed how a 19-year-old California man died of a drug overdose in May 2025 after 18 months of seeking recreational drug advice from ChatGPT [1]. The case illustrates the dangers of relying on generative AI for health guidance, as LLMs use statistical relationships in training data to produce plausible responses rather than necessarily accurate ones. These AI models are prone to hallucinations, generating plausible but false information that makes it difficult for users to distinguish fact from fiction [1]. Peter D. Chang, an associate professor at the University of California, Irvine, acknowledged that "absolutely there's nothing preventing the model from going off the rails to give you a nonsensical result" [3].
Despite enabling users to connect medical records to AI, OpenAI maintains that ChatGPT Health is not intended for diagnosis or treatment [1][4]. Fidji Simo, OpenAI's CEO of applications, stated the health and wellness chatbot is designed to "support, not replace, medical care" and help users "navigate everyday questions and understand patterns over time" [1]. This legal disclaimer becomes particularly important given that inaccurate information from AI chatbots has led to hospitalizations, including a case where a man allegedly followed advice to replace salt with sodium bromide [4].
The feature raises significant questions about personal health data privacy in an era already plagued by data breaches. Andrew Crawford, senior counsel at the Center for Democracy and Technology, noted that "the US doesn't have a general-purpose privacy law, and HIPAA only protects data held by certain people like health care providers and insurance companies" [4]. Since companies like OpenAI operating outside HIPAA's scope will be collecting and using people's health data, inadequate protections could put sensitive information at risk [4]. OpenAI promises additional safeguards including purpose-built encryption, isolation, and multi-factor authentication, stating that Health conversations won't be used for model training [4][5].
ChatGPT Health operates as a separate space within the chatbot, maintaining its own memory and chat history compartmentalized from regular conversations [2][5]. If users start health-related chats outside the Health section, the AI will nudge them to switch over [2]. The feature is currently accessible only through a waitlist, with OpenAI providing access to a small group of early users before expanding to all users across web and iOS over the coming weeks [5]. As approximately 40 million people rely on ChatGPT daily for medical questions, the rollout will be closely watched by health experts and privacy advocates alike [5].