[1]
ChatGPT's AI Health-Care Push Has a Fatal Flaw
Anthropic's chatbot Claude has shown high accuracy in certain medical tasks, but the company couldn't give a complete answer on its accuracy rate for making diagnoses, highlighting the need for more transparency in the industry.

OpenAI and Anthropic have both announced big plans to enter healthcare: a consumer-focused tool called ChatGPT Health, and a version of the chatbot Claude that can help clinicians figure out a diagnosis and write medical notes. Notably absent from this flurry of announcements is Google. Its Gemini chatbot is one of the most popular and capable, so why not jump into the lucrative health market too? Perhaps because Google knows from experience that such an effort can backfire spectacularly. Health advice is where generative artificial intelligence has some of its most exciting potential. But the newer AI companies, perhaps blinded by bravado and hype, face a fate similar to Google's if they're not more transparent about their technology's notorious hallucinations.

OpenAI is slowly rolling out a new feature that lets users ask questions about their health, with a separate memory and links to data from a person's medical records or their wellness apps if they choose to plug them in. The company says ChatGPT Health is more secure and "not intended for diagnosis," but many people already use it to determine ailments. More than 230 million people ask the app for health-related advice every week, the company says. It also announced ChatGPT for Healthcare, a version of the bot for clinicians that's being trialed at several hospitals, including Boston Children's Hospital and Memorial Sloan Kettering Cancer Center.

Anthropic, which has had greater success than OpenAI in selling to businesses, launched a chatbot aimed at doctors. It looks the same as the consumer version of Claude, but is trained on databases of medical data such as diagnostic codes and healthcare providers -- to help it generate authorization documents -- and academic papers from PubMed to help it walk a doctor through a potential diagnosis. The company has given a tantalizing glimpse of how that training can make Claude more accurate. When the consumer version of Claude is asked about the ICD-10 codes doctors use to classify a diagnosis or procedure, the answer is correct 75% of the time, Anthropic's chief product officer, Mike Krieger, said at a launch event earlier this month. But the doctors' version of Claude, trained on those codes, is 99.8% accurate.

What's the accuracy rate when it comes to making a diagnosis, though? That number seems more important. When I asked Anthropic, the company couldn't give a complete answer. It said its most powerful reasoning model, Claude Opus 4.5, achieved 92.3% accuracy on MedCalc, which tests medical calculation accuracy, and 61.3% on MedAgentBench, which measures whether an AI can do clinical tasks in a simulated electronic health-record system. But neither indicates how reliable the AI is with clinical recommendations. The first refers to a test for drug dosing and lab values; the 61.3% stat is, let's face it, a worryingly low score. To its credit, Anthropic's models are more honest -- more likely to admit uncertainty than invent answers -- than those made by OpenAI or Google, according to data compiled by Scale AI, in which Meta Platforms Inc. recently acquired a major stake.
Anthropic played up those numbers during its launch at the JPMorgan Chase Healthcare Conference in San Francisco, but such praise will ring hollow for doctors if the company can't quantify how accurate a diagnostic tool actually is. When I asked OpenAI about ChatGPT's reliability with health facts, a spokeswoman said its models had become more reliable and accurate in health scenarios than previous versions, but she also didn't provide hard numbers showing hallucination rates when giving medical advice.

AI companies have long been silent about how often their chatbots make mistakes, in part because doing so would highlight how difficult a problem this has been to solve. Instead, they provide benchmark data showing, for instance, how well their AI models do on a medical licensing exam. But being more transparent about reliability will be critical to building trust with both clinical professionals and the public.

Alphabet Inc.'s Google learned this the hard way. Between 2008 and 2011, it tried to create a personal health record under the banner "Google Health," which could aggregate a person's medical data from different doctors and hospitals in one place. The effort failed in part because Google faced an enormous technical challenge in collating health data from incompatible systems. The bigger problem: People were creeped out at the idea of uploading their health records to a company that regularly hoovered up personal information for ads.

Public mistrust was so strong that a valiant effort by Google's DeepMind lab to alert hospital doctors to signs of acute kidney failure was shut down in 2018, after it emerged the project had accessed more than a million UK patient records. A year later, the Wall Street Journal revealed another Google effort, known as Project Nightingale, to access the medical records of millions of US patients. Both incidents were deemed scandals, and the lesson was clear: People perceived Google as untrustworthy.

That makes the fate of AI companies in healthcare even more fraught. Google's troubles came down to how it was perceived by the public, not to any errors its systems had made in processing medical records. The cost will be higher if ChatGPT or Claude makes a mistake when helping doctors make life-or-death decisions.

Perhaps it was naivety or blinkered thinking that led Dario Amodei, the chief executive of Anthropic, to address this exact point during his healthcare launch last week, even as his company provided no data to address it. The definition of "safety" was expanding as his company entered new markets like health, he said. "Healthcare is one place you don't want the model making stuff up," he added. "That's bad." But refusing to say how often it happens? That's bad too.
[2]
Claude joins the ward as Anthropic eyes US healthcare data
AI firm promises HIPAA-compliant integrations as chatbot moves into hospital admin

Fresh from watching rival OpenAI stick its nose into patient records, Anthropic has decided now is the perfect moment to march Claude into US healthcare too, promising to fix medicine with yet more AI, APIs, and carefully worded reassurances about privacy.

In a blog post over the weekend, Anthropic trumpeted the launch of Claude for Healthcare alongside expanded life sciences tools, a double-barreled push to make its chatbot not just a research assistant for scientists but an actual cog in the $4 trillion-plus American healthcare machine. If this feels less like healthcare reform and more like an AI land rush toward anything stuffed with data and VC money, you've got the gist.

Anthropic is selling Claude for Healthcare as a HIPAA-compliant way to plug its model into the plumbing of US medicine, from coverage databases and diagnostic codes to provider registries. Once wired up, Claude can help with prior authorization checks, claims appeals, medical coding, and other administrative chores that currently clog up clinicians' inboxes and sanity. "Claude can now connect to industry-standard systems and databases to help clinicians and administrators find the data they need and generate reports more efficiently," Anthropic wrote. "The aim is to make patients' conversations with doctors more productive, and to help users stay well-informed about their health information."

The life sciences side of the announcement adds integrations with Medidata and ClinicalTrials.gov, promising to help with clinical trial planning and regulatory wrangling. Because nothing says "we're a serious AI partner for pharma" quite like rifling through clinical trial registries. There's plenty of lofty talk about helping researchers and saving time, but the underlying logic is the same one driving most AI-for-industry plays - admin drudgery is far easier, and far more profitable, to automate than care itself.

The company is keen to emphasize that Claude won't quietly slurp up your health data to train future models: data sharing is opt-in, connectors are HIPAA-compliant, and "we do not use user health data to train models," Anthropic reassures us. That's the polite way of saying it would let hospitals, insurers, and maybe patients themselves hand over structured medical forms and records as long as lawyers and compliance teams are satisfied.

And yes, patients may get to play too. In beta, Claude can integrate with services like HealthEx, Apple HealthKit, and Android Health Connect so subscribers can ask the bot to explain their lab results or summarize their personal medical history. That'll be handy right up until the inevitable moment when someone discovers that handing a large language model access to health apps brings with it all the usual "AI hallucination" caveats and eyebrow-raising liability questions.

Anthropic's announcement follows hot on the heels of OpenAI's ChatGPT Health ploy, which instantly raised privacy concerns by suggesting clinicians and consumers alike could feed it raw medical records and get back summaries and treatment suggestions. That gambit drew criticism from privacy advocates worried about where all that data might go, a conversation Anthropic's carefully worded language aims to pre-empt.

So here we are: two of the biggest names in "responsible AI" now neck-deep in the US healthcare sector, promising to make sense of everything from coverage policies to clinical trial data.
The claims are big, the caveats are long, and the proof, as ever, will come later. ®
[3]
Anthropic brings Claude to healthcare with HIPAA-ready Enterprise tools
Anthropic is bringing Claude to healthcare, following a similar move by OpenAI with ChatGPT. In a blog post, Anthropic explained that Claude is expanding into healthcare and that it's testing new connectors tailored specifically to healthcare needs. With AI, healthcare organizations can improve their billing and work faster, but there are other ways Claude can help too. For example, Claude can now connect to the CMS Coverage Database to check Medicare coverage rules that vary by location, support prior authorization, and more. The CMS integration can help healthcare organizations with revenue cycle and compliance work. Moreover, Claude can look up ICD-10 codes, which means it can help correct medical coding, reduce billing mistakes, and improve claims processing. Last but not least, when deployed across healthcare organizations, Claude can verify providers, support credentialing, and reduce claim errors.
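The provider-verification step described here fronts data CMS already publishes through the open NPPES NPI Registry API. As a rough illustration of what such a lookup involves -- a sketch against the public API, not Anthropic's connector; the helper name and the placeholder NPI are hypothetical:

```python
import requests

# Public NPPES NPI Registry endpoint published by CMS.
NPPES_API = "https://npiregistry.cms.hhs.gov/api/"

def verify_provider(npi: str) -> dict | None:
    """Look up a provider by NPI number; return a summary or None if not found."""
    resp = requests.get(NPPES_API, params={"version": "2.1", "number": npi}, timeout=10)
    resp.raise_for_status()
    data = resp.json()
    if data.get("result_count", 0) == 0:
        return None  # unknown NPI: a claim citing it should be flagged for review
    basic = data["results"][0]["basic"]
    name = basic.get("organization_name") or f"{basic.get('first_name', '')} {basic.get('last_name', '')}".strip()
    return {"npi": npi, "name": name, "status": basic.get("status")}  # status "A" = active

# Hypothetical usage; 1234567890 is a placeholder, not a real NPI.
print(verify_provider("1234567890"))
```

A connector would wrap the same lookup behind a tool the model can call before a claim goes out the door.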
[4]
Anthropic Launches Claude AI for Healthcare with Secure Health Record Access
Anthropic has become the latest artificial intelligence (AI) company to announce a new suite of features that allows users of its Claude platform to better understand their health information. Under an initiative called Claude for Healthcare, the company said U.S. subscribers of Claude Pro and Max plans can opt to give Claude secure access to their lab results and health records by connecting to HealthEx and Function, with Apple Health and Android Health Connect integrations rolling out later this week via its iOS and Android apps.

"When connected, Claude can summarize users' medical history, explain test results in plain language, detect patterns across fitness and health metrics, and prepare questions for appointments," Anthropic said. "The aim is to make patients' conversations with doctors more productive, and to help users stay well-informed about their health."

The development comes merely days after OpenAI unveiled ChatGPT Health as a dedicated experience for users to securely connect medical records and wellness apps and get personalized responses, lab insights, nutrition advice, and meal ideas. Anthropic also pointed out that its integrations are private by design, and users can explicitly choose the kind of information they want to share with Claude and disconnect or edit Claude's permissions at any time. As with OpenAI, the health data is not used to train its models.

The expansion comes amid growing scrutiny over whether AI systems can avoid offering harmful or dangerous guidance. Recently, Google stepped in to remove some of its AI summaries after they were found providing inaccurate health information. Both OpenAI and Anthropic have emphasized that their AI offerings can make mistakes and are not substitutes for professional healthcare advice.

In its Acceptable Use Policy, Anthropic notes that a qualified professional in the field must review the generated outputs "prior to dissemination or finalization" in high-risk use cases related to healthcare decisions, medical diagnosis, patient care, therapy, mental health, or other medical guidance. "Claude is designed to include contextual disclaimers, acknowledge its uncertainty, and direct users to healthcare professionals for personalized guidance," Anthropic said.
[5]
Anthropic Adds Features for Doctors, Patients in Health Care Push
Anthropic's medical responses are grounded with citations from respected publications, and the company said it will not train its models on health care user data.

Anthropic is making it easier for patients and clinicians to use its artificial intelligence chatbot to access medical information, part of a broader push into the lucrative health care sector. The San Francisco-based company on Sunday said that its Claude product is launching a new health care offering that is compliant with the Health Insurance Portability and Accountability Act, or HIPAA, and can be used by hospitals, medical providers and consumers to field protected health data. Anthropic has also integrated scientific databases into its product and added enhanced capabilities for biological research. On the consumer front, Anthropic is allowing users to export their health data from apps including Apple Health and Function Health, with the goal of helping them gather and share medical records with providers.

Anthropic, which is currently in funding talks at a $350 billion valuation, announced the features days after rival OpenAI unveiled new tools for clinicians to work through cases and for everyday users to review their test results, diets and workout routines. The back-to-back releases highlight Silicon Valley's growing desire to gain ground in health care to bolster sales and prove AI's broad benefits. "When we think about the overall economy and where AI can have the most impact, it is really well primed for that, once you do the right things on the regulatory and data front," said Mike Krieger, Anthropic's chief product officer and a co-founder of Instagram. The new tools are designed for "empowering people to have more knowledge, both from their data, but also in conversation with their providers," Krieger said.

Founded in 2021 by former employees of OpenAI, Anthropic has positioned itself as a reliable, safety-conscious AI developer. Its software has become particularly popular among engineers who use it to automate the coding process. But Anthropic, whose chief executive officer, Dario Amodei, is a biophysicist by training, has also started to see some early traction from medical providers. The company said Banner Health, one of the largest nonprofit health systems in the US, has more than 22,000 clinical providers using Claude, and that 85% report working faster with higher accuracy. Anthropic is also working with customers such as Novo Nordisk A/S and Stanford Health Care.

The Claude maker faces steep competition not just from OpenAI but also from legacy technology providers as well as newer startups, which have tried to apply advances in AI to drug discovery, medical paperwork and analyzing patient records. But these ventures also come with new privacy and safety risks from AI handling sensitive personal data and offering suggestions for high-stakes health matters. Anthropic said its medical responses are grounded with citations from respected publications such as PubMed and the NPI Registry, ensuring that clinicians can have more confidence in the results. Anthropic also said it will not train its models on health care user data.
[6]
Anthropic brings Claude into healthcare -- skip the ChatGPT Health waitlist
Claude is moving into healthcare -- here's what Anthropic's new AI tools can do

Anthropic, the AI lab behind the Claude family of LLMs (large language models), is making a major push into the healthcare space with a new set of tools designed to help patients and clinicians work with medical data more effectively. The announcement, timed with the start of the J.P. Morgan Healthcare Conference in San Francisco, introduces Claude for Healthcare, a suite of capabilities built on Claude's latest models and designed to be compliant with strict U.S. medical privacy rules like HIPAA.

Anthropic's move comes just days after rival OpenAI launched ChatGPT Health, part of its own expansion into health-related AI tools that let users upload medical records and receive personalized health guidance. Together, these announcements show that major AI labs now see healthcare as a frontline battleground for their technology rather than a fringe use case.

Unlike general-purpose chatbots, Claude for Healthcare is tailored for regulated clinical environments and built to connect with trusted medical data sources. According to Anthropic, the system can tap into key healthcare and scientific databases -- giving it the ability to interpret and contextualize complex medical information. The offering also includes tools aimed at life sciences workflows, helping researchers with clinical trial planning, regulatory document support and biomedical literature review.

Patients and clinicians can already use Claude's updated features with Claude Pro and Claude Max subscriptions to gain clearer explanations of health records or test results, and the platform integrates with personal health data systems such as Apple Health and fitness apps so users can ask personalized questions about their own medical information.

Anthropic's broader safety framework, known as constitutional AI, plays into privacy. Instead of relying heavily on human reviewers reading user conversations, Claude is trained to follow a set of internal rules. OpenAI has improved its privacy controls significantly in recent years, including opt-out options and enterprise safeguards. But Anthropic has leaned harder into privacy-first positioning as a core differentiator -- especially for businesses and regulated industries -- and markets Claude as a safer choice for them.

Claude is designed to be useful without learning from you. Conversations aren't used for training by default, enterprise data is locked down, and healthcare workflows are built to keep medical data private -- which helps explain why Anthropic is moving aggressively into regulated spaces like healthcare.

Between OpenAI and Anthropic, it's clear that AI is being integrated into high-stakes sectors like medicine -- and competition may accelerate deployment. The parallel push by two of the leading AI labs highlights how quickly generative AI is being deployed. At the same time, the trend raises fresh questions about data privacy, regulatory compliance and the balance between AI convenience and clinical accuracy -- topics that will likely shape future adoption and oversight. We'll be keeping a close eye on those issues, as well as more of what's to come.
[7]
Claude just joined your healthcare team -- and might be ready to help your doctor help you
Healthcare professionals can use Claude to speed up tasks like prior authorizations and claims appeals

Anthropic's Claude AI chatbot is scrubbing into your medical care. The company has debuted a new initiative called Claude for Healthcare, and is inviting U.S. users to let their digital assistant peek under the hood of their personal health data. The AI can, with permission, look at lab tests, fitness metrics, and doctor appointment notes by connecting with platforms like HealthEx, Function, Apple Health, and Android Health Connect. The timing is notable, as it closely follows OpenAI's unveiling of ChatGPT Health and its somewhat similar provisions.

Essentially, Claude can act as a kind of translator for your bloodwork and medical history, as well as dive into information collected by your smartwatch to give more specific suggestions on improving your health. It will also offer ideas of what to talk to your doctor about at your next visit. It's an opt-in feature for Claude Pro and Max subscribers, and comes with HIPAA-compliant tools for doctors that are supposed to streamline their paperwork for things like prior authorization, claims appeals, and care coordination.

For the average user, it's a question of how much access to give Claude. If you do connect Claude to your health data, it can pull in your records and interpret them like a well-read medical assistant. Your cholesterol numbers get a plain-language explanation. Your last five years of back pain logs become a digestible summary. All of this is explicitly opt-in. Anthropic insists the system is private by design: you choose what data Claude can see, you can revoke access at any time, and your information is not used to train future models.

Anthropic also claims that Claude for Healthcare will provide relief to hospitals and healthcare providers. For instance, when reviewing prior authorization requests, Claude can connect to Medicare's Coverage Database, pull the latest criteria, compare them to a patient's file, and suggest a determination that a human reviewer can approve or refine. That means less time chasing scattered documents and more time getting people what they need. Claude also integrates with the ICD-10 system for diagnosis and billing codes, and with the National Provider Identifier Registry, helping staff verify provider credentials, submit cleaner claims, and navigate the arcane coding labyrinth that fuels the healthcare economy. And on the enterprise side, companies using Claude in HIPAA-compliant environments can hook it into PubMed to pull relevant studies, literature reviews, or clinical research.

Claude for Healthcare also opens doors for startups and developers. On the Claude Developer Platform, new health-focused apps are already in motion: ambient note-taking tools that reduce the documentation burden for clinicians, lightweight triage assistants for patient messages, and even chart review systems that keep tabs on the finer points of clinical guidelines.

This isn't happening in a vacuum, as ChatGPT Health's appearance indicates. But while Claude for Healthcare and ChatGPT Health both aim to make sense of the complex, often opaque world of medical data, they take notably different approaches. Claude is designed as an AI that can not only explain your lab results in plain language, but also plug directly into the machinery of the U.S. healthcare system. While it does offer patient-facing features, Claude's real muscle is in administrative clarity.
ChatGPT Health, on the other hand, lives inside the ChatGPT app, offering a separate space where users can connect apps and work from a more personal angle. Whether Claude for Healthcare succeeds might depend on how well Anthropic keeps its promises on privacy and transparency. But the value of having an AI you trust explain your health simply is obvious, especially if it also helps your doctor get you care more quickly.
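The PubMed hookup mentioned above has an openly documented counterpart in NCBI's E-utilities. A minimal sketch of a literature lookup against the public esearch endpoint, assuming nothing about Claude's actual integration (the query string is an arbitrary example):

```python
import requests

# NCBI E-utilities esearch endpoint (public PubMed search API).
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def search_pubmed(query: str, max_results: int = 5) -> list[str]:
    """Return PubMed IDs (PMIDs) of the top matches for a literature query."""
    resp = requests.get(
        ESEARCH,
        params={"db": "pubmed", "term": query, "retmax": max_results, "retmode": "json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["esearchresult"]["idlist"]

# Arbitrary example query; the returned PMIDs can be passed to efetch for abstracts.
print(search_pubmed("prior authorization clinical guidelines machine learning"))
```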
[8]
Anthropic's Claude will soon help you make sense of your Apple Watch health data
The AI assistant will analyze your wearable metrics and offer clearer health insights.

Anthropic just stepped into the healthcare AI space with the launch of Claude for Healthcare, a new suite of tools designed for providers, payers, and patients. Following in the footsteps of OpenAI's ChatGPT Health, Claude for Healthcare aims to bring AI safely into medical contexts, helping users access and understand their health information more effectively. As part of this push, Anthropic is introducing new integrations that let users connect their health data to Claude. In the US, subscribers on the Claude Pro and Max plans can give the AI assistant secure access to lab results and health records, and unlock features that make that data actionable.

Once connected, Claude can help users summarize their medical history, explain test results in simple language, and even prepare questions for doctor visits. It can also analyze health and fitness data from wearables like the Apple Watch, detecting patterns across metrics to give users a clearer picture of their overall health (a sketch of the idea follows below). Anthropic has already released new HealthEx and Function connectors in beta, which let users give Claude access to their medical data. The Apple Health and Android Health Connect integrations, which allow Claude to pull health and fitness metrics from phones and wearables, will roll out in beta this week through the Claude app for iOS and Android.

Anthropic promises complete privacy and user control

The company has emphasized that privacy and user control are central to these integrations. It notes that users must explicitly opt in to try these capabilities, can control exactly what information they share, and can disconnect or edit Claude's permissions at any time. Anthropic also says that user health data will not be used to train its AI models. To ensure users approach its insights responsibly, Claude will include contextual disclaimers, acknowledge areas of uncertainty, and direct users to healthcare professionals for personalized guidance.
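The pattern detection described above often reduces to simple time-series summaries over exported metrics. A toy illustration, assuming a hypothetical CSV export with date and resting_hr columns and an arbitrary alert threshold:

```python
import pandas as pd

# Hypothetical export: one row per day with a resting heart rate from a watch.
df = pd.read_csv("health_export.csv", parse_dates=["date"])  # columns: date, resting_hr
df = df.sort_values("date").set_index("date")

# A 30-day rolling average smooths out day-to-day noise.
df["hr_30d_avg"] = df["resting_hr"].rolling("30D").mean()

latest = df["hr_30d_avg"].iloc[-1]             # most recent 30-day average
baseline = df["resting_hr"].iloc[:-30].mean()  # everything before the last 30 days

if latest > baseline + 5:  # 5 bpm threshold chosen arbitrarily for illustration
    print(f"Resting HR trending up: {baseline:.0f} -> {latest:.0f} bpm; worth raising with a doctor.")
```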
[9]
Anthropic rolls out new healthcare and life science features for Claude | Fortune
AI lab Anthropic is making a major push into healthcare with the launch of Claude for Healthcare and an expansion of its life sciences offerings. The announcement, timed to coincide with the start of the JPMorgan Healthcare Conference in San Francisco this week, comes just days after OpenAI unveiled ChatGPT Health. That's no coincidence, and it reflects the growing competition among leading AI labs to build specialized products for lucrative industries like healthcare, finance, and coding.

The Claude for Healthcare announcements include a partnership with HealthEx, a startup that allows patients to see all of their electronic medical records in a single place and control access to that data. The partnership includes a way for users to connect their personal medical records to Anthropic's Claude in order to use the chatbot to answer health-related questions. "HealthEx lets people bring their health records into a conversation with Claude and ask important questions in everyday language -- What does this lab result mean? What should I bring up with my doctor? How has this number changed over time? -- and get answers grounded in their own health history," Amol Avasare, product lead at Anthropic, said.

The announcements also include a similar set of connectors for Function Health, a company that helps patients schedule lab tests and interpret the results, as well as integrations with Apple Health and Android Health Connect that will be rolling out to beta testers next week. For now, the connectors to HealthEx and Function Health are available to Claude Pro and Max subscribers in the U.S.

Health-related queries are among the leading consumer use cases of AI chatbots. But so far, Anthropic has been less focused on serving the general consumer market than its rival OpenAI, which boasts more than 800 million weekly users. Anthropic is thought to have far fewer consumer users and has instead concentrated on specialized use cases, such as software coding, that more naturally appeal to enterprise customers. It has pulled ahead of OpenAI in enterprise market share, according to several recent surveys. It has also recently been creating more tailored versions of Claude to serve other industry or professional verticals, such as Claude for Financial Services and Claude for Life Sciences.

Anthropic has said it is interested in serving consumers as well as large organizations, and today's announcements were aimed at both consumers and enterprise customers, such as hospitals, insurers, and pharmaceutical companies. The company said it was adding connectors to industry-standard databases including the Centers for Medicare & Medicaid Services Coverage Database, the International Classification of Diseases (ICD-10), the National Provider Identifier Registry, and PubMed. These connectors are designed to help healthcare providers with tasks like speeding up prior authorization requests, supporting claims appeals, coordinating care, and triaging patient messages.

For life sciences companies, Anthropic is expanding beyond its initial focus on preclinical research to support clinical trial operations and regulatory work. New connectors include Medidata for clinical trial data and ClinicalTrials.gov. It is also launching connectors to bioRxiv and medRxiv -- repositories for medical and biological research papers, usually posted before their findings have been peer reviewed; Open Targets, a database of identified drug targets; and ChEMBL, a database of bioactive compounds that could be used to make drugs.
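Several of these connectors wrap services with public APIs. ClinicalTrials.gov, for instance, documents a v2 REST API, so the kind of trial lookup a connector would perform can be sketched in a few lines. This is an illustrative sketch against the public endpoint, not Anthropic's connector; the condition string is an arbitrary example:

```python
import requests

# Public ClinicalTrials.gov v2 REST API.
CTGOV = "https://clinicaltrials.gov/api/v2/studies"

def find_trials(condition: str, limit: int = 3) -> list[str]:
    """Return brief titles of registered studies matching a condition."""
    resp = requests.get(CTGOV, params={"query.cond": condition, "pageSize": limit}, timeout=10)
    resp.raise_for_status()
    return [
        study["protocolSection"]["identificationModule"]["briefTitle"]
        for study in resp.json().get("studies", [])
    ]

# Arbitrary example condition.
print(find_trials("Parkinson Disease"))
```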
The company is working with major healthcare and pharmaceutical companies including AstraZeneca, Sanofi, Genmab, Banner Health, Flatiron Health, and Veeva, among others. In a video clip Anthropic provided to reporters, it showed how Claude can now help a pharmaceutical company design a protocol for a Phase II clinical trial of a hypothetical drug designed to treat Parkinson's disease. Claude reduced the time it takes to draft the protocol design from many days to about an hour.

Among the centerpieces of the new consumer health offerings is the partnership with HealthEx, which can help patients consolidate medical records from more than 50,000 health systems. Fortune talked exclusively with executives from both companies about the new offering. "Personal health records today are scattered across providers, and it can be difficult to get a complete view," Avasare told Fortune. "HealthEx built a way to use Claude to unify those records with user consent and strong controls. Users decide what to share and can revoke access at any time, and their health data is never used for model training."

Users enable the HealthEx connector inside Claude, verify their identity, and connect their patient portal logins. HealthEx then unifies records across providers. When users ask Claude health-related questions, Claude uses Model Context Protocol (MCP) -- an open standard Anthropic developed for connecting AI to external data sources -- to securely retrieve relevant portions of the record for each specific question. To enhance data privacy, Claude requests only the categories of information most likely to be relevant to a question -- such as medications, allergies, recent lab reports, or doctor notes -- rather than pulling an entire medical record. If relevance isn't obvious, Claude can prompt users to broaden the scope, asking if they want to look further back in their history, Avasare said.

Priyanka Agarwal, cofounder and CEO of HealthEx, said the partnership addresses a fundamental problem in American healthcare: making it easier for consumers to access and understand their own health data. "We're giving every American a safe, private way for them to use their health data with AI," Agarwal told Fortune. "We know that AI based on personal context is going to be more effective at providing support." She said that by connecting medical records to HealthEx and HealthEx to Claude, users will get "responses [that] are grounded in your health history, not generic advice."

According to Anthropic, the healthcare and life sciences announcements are possible because of recent improvements to Claude's underlying capabilities. When tested on simulations of real-world medical and scientific tasks, Claude Opus 4.5, Anthropic's latest model, substantially outperforms earlier releases. The company also said Opus 4.5 with extended thinking shows improvements in producing correct answers on honesty evaluations, reflecting progress on reducing factual hallucinations.
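The scoped retrieval Avasare describes maps naturally onto MCP's tool model: the assistant calls a server-side tool with only the record categories it needs, and the server returns nothing else. A minimal sketch using the open-source MCP Python SDK's FastMCP helper; the tool name, categories, and in-memory record store are hypothetical stand-ins, not HealthEx's actual connector:

```python
from mcp.server.fastmcp import FastMCP

# An MCP server exposing scoped record access as a single tool.
mcp = FastMCP("health-records")

# Hypothetical in-memory stand-in for a unified record store.
RECORDS = {
    "medications": ["lisinopril 10 mg daily"],
    "allergies": ["penicillin"],
    "recent_labs": [{"test": "LDL", "value": 128, "unit": "mg/dL"}],
}

@mcp.tool()
def get_record_sections(categories: list[str]) -> dict:
    """Return only the requested categories of the patient's record.

    The model asks for, say, ["medications", "recent_labs"] rather than the
    full chart -- the privacy-scoping behavior described in the article.
    """
    return {c: RECORDS.get(c, []) for c in categories}

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio to an MCP-capable client
```

The privacy property lives in the tool's signature: because the client must name categories, the narrowest-possible request is the default rather than an afterthought.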
[10]
AI is uncannily good at diagnosis. Its makers just won't say so.
In the past week, two of the biggest AI companies went all-in on that reality. OpenAI launched ChatGPT Health, a dedicated space within its larger chat interface where users can connect their medical records, Apple Health data, and stats from other fitness apps to get personalized responses. (It's currently available to a small group of users, but the company says it will eventually be open to all users.) Just days later, Anthropic announced a similar consumer-facing tool for Claude, alongside a host of others geared toward health care professionals and researchers.
[11]
Anthropic joins OpenAI's push into health care with new Claude tools
Anthropic announced a new suite of health care and life sciences features Sunday, enabling users of its Claude artificial intelligence platform to share access to their health records to better understand their medical information. The launch comes just days after rival OpenAI introduced ChatGPT Health, signaling a broader push by major AI companies into health care, a field seen as both a major opportunity and a sensitive testing ground for generative AI technology.

Both tools allow users to share information from health records and fitness apps, including Apple's iOS Health app, to personalize health-related conversations. At the same time, the expansion comes amid heightened scrutiny over whether AI systems can safely interpret medical information and avoid offering harmful guidance. Users must join a waitlist to access OpenAI's ChatGPT Health tool, while Claude's health care offerings are now available for Pro and Max plan subscribers in the U.S.

Eric Kauderer-Abrams, head of life sciences at Anthropic, one of the world's largest AI companies and newly rumored to be valued at $350 billion, said Sunday's announcement represents a step toward using AI to help people navigate complex health care issues. "When navigating through health systems and health situations, you often have this feeling that you're sort of alone and that you're tying together all this data from all these sources, stuff about your health and your medical records, and you're on the phone all the time," he told NBC News. "I'm really excited about getting to the world where Claude can just take care of all of that."

With the new Claude for Healthcare functions, "you can integrate all of your personal information together with your medical records and your insurance records, and have Claude as the orchestrator and be able to navigate the whole thing and simplify it for you," Kauderer-Abrams said.

When unveiling ChatGPT Health last week, OpenAI said hundreds of millions of people ask wellness- or health-related questions on ChatGPT every week. The company stressed that ChatGPT Health is "not intended for diagnosis or treatment," but is instead meant to help users "navigate everyday questions and understand patterns over time -- not just moments of illness."

AI tools like ChatGPT and Claude can help users understand complex and inscrutable medical reports, double-check doctors' decisions and, for billions of people around the world who lack access to essential medical care, summarize and synthesize medical information that would otherwise be inaccessible.

Like OpenAI, Anthropic emphasized privacy protections around its new offerings. In a blog post accompanying Sunday's launch, the company said health data shared with Claude is excluded from the model's memory and not used for training future systems. In addition, users "can disconnect or edit permissions at any time," Anthropic said.

Anthropic also announced new tools for health care providers and expanded its Claude for Life Sciences offerings that focus on improving scientific discovery. Anthropic said its platform now includes a "HIPAA-ready infrastructure" -- referring to the federal law governing medical privacy -- and can connect to federal health care coverage databases, the official registry of medical providers and other services that will ease physician and health-provider workloads.
These new features could help automate time-consuming tasks such as preparing prior authorization requests for specialist care and supporting insurance appeals by matching clinical guidelines to patient records. Dhruv Parthasarathy, chief technology officer at Commure, which creates AI solutions for medical documentation, said in a statement that Claude's features will help Commure in "saving clinicians millions of hours annually and returning their focus to patient care."

The rollout comes after months of increased scrutiny of AI chatbots' role in dispensing mental health and medical advice. On Thursday, Character.AI and Google agreed to settle a lawsuit alleging their AI tools contributed to worsening mental health among teenagers who died by suicide. Anthropic, OpenAI and other leading AI companies caution that their systems can make mistakes and should not be substitutes for professional judgment. Anthropic's acceptable use policy requires that "a qualified professional ... must review the content or decision prior to dissemination or finalization" when Claude is used for "healthcare decisions, medical diagnosis, patient care, therapy, mental health, or other medical guidance."

"These tools are incredibly powerful, and for many people, they can save you 90% of the time that you spend on something," Anthropic's Kauderer-Abrams said. "But for critical use cases where every detail matters, you should absolutely still check the information. We're not claiming that you can completely remove the human from the loop. We see it as a tool to amplify what the human experts can do."
[12]
The Billion Dollar Battle to Become Your AI Doctor | AIM
OpenAI and Anthropic are pushing AI into healthcare workflows, reviving old privacy fears while reframing the race around trust, accuracy and clinical responsibility.

The competition in healthcare AI is heating up. Just days after OpenAI launched ChatGPT Health, Anthropic has rolled out Claude for Healthcare, accelerating the race to embed generative AI deeper into medical workflows. Unlike ChatGPT Health, which operates as a separate, sandboxed space within ChatGPT, Claude for Healthcare is woven directly into Anthropic's Claude chatbot. According to the company, the new features allow Claude to securely access trusted medical and insurance databases to assist with medical queries and routine healthcare tasks.

For hospitals and insurers, Claude can verify whether a treatment is covered by insurance or assist with preparing documentation when claims are rejected. For patients, it can simplify complex lab reports and medical histories into understandable language. ChatGPT Health, by contrast, offers a dedicated environment for health and wellness queries, where users can optionally connect medical records, fitness trackers or nutrition apps.
[13]
After OpenAI, Anthropic launches Claude for Healthcare
Claude for Healthcare provides 'HIPAA-ready' tools for consumers as well as healthcare providers.

In a strong push into the healthcare sector, enterprise AI giant Anthropic has launched a dedicated set of tools under Claude for medical queries, mirroring rival OpenAI's ChatGPT Health, which launched just days earlier. While OpenAI targets general consumers with its new private health-specific service, 'Claude for Healthcare' provides "HIPAA-ready" tools - referring to the US law on patient privacy - for consumers as well as healthcare providers.

Claude can summarise patient-uploaded health records, explain test results in plain language, detect patterns across fitness and health metrics, and prepare questions for appointments. The data integration is private by design, claims Anthropic. Users must explicitly opt in to enable Claude to access these records. This permission can be rescinded at any time, Anthropic said, adding that users' health data will not be used to train models. Meanwhile, for insurance providers and pharmacies, Claude can help speed up reviews of requests and coordinate care, as well as help healthcare start-ups develop new ideas.

"When navigating through health systems and health situations, you often have this feeling that you're sort of alone and that you're tying together all this data from all these sources," Eric Kauderer-Abrams, the head of life sciences at Anthropic, told NBC News. "I'm really excited about getting to the world where Claude can just take care of all of that."

Claude's new health records functions are available in beta for Pro and Max users in the US, while integrations with Apple Health and Android Health Connect are rolling out in beta for paid subscribers in the coming days. The new service follows on from 'Claude for Life Sciences', which launched last October with the aim of supporting preclinical research and development using the chatbot. Alongside the dedicated health tool, Anthropic has also expanded Claude's life sciences prowess by focusing on clinical trial operations and the regulatory stages of the development chain. Claude can now create draft clinical trial protocols, use trial data to track indicators and prepare regulatory submissions.

While it is generally understood that AI systems don't actually "understand" information, models are increasingly being deployed in sensitive areas such as healthcare for large-scale data analysis. And although marketed as private, experts encourage caution while using these new tools. Speaking to Time Magazine, one such expert said, "the most conservative approach is to assume that any information you upload into these tools, or any information that may be in applications you otherwise link to the tools, will no longer be private."

OpenAI and Anthropic launch their health products at a time of growing scrutiny of the role AI chatbots play in the deterioration of users' mental health. OpenAI, for instance, is involved in a number of lawsuits surrounding such issues, including a California case in which it is alleged that ChatGPT encouraged a man with mental illness to kill his mother and himself.
[14]
Anthropic pushes into healthcare to help patients understand their medical records - SiliconANGLE
Artificial intelligence developer Anthropic PBC debuted new healthcare and life sciences capabilities in its flagship chatbot Claude on Sunday, saying users can now share their medical records with the service to better understand their health. Claude now lets users share information from their official medical records and fitness apps such as Apple Inc.'s iOS Health, so it can engage in more personalized conversations regarding their health. The new features are available now for Claude Pro and Max plan subscribers in the U.S.

The launch comes just days after Anthropic's main rival OpenAI Group PBC debuted ChatGPT Health, and underscores how AI companies view healthcare as a major opportunity for the technology. Anthropic Head of Life Sciences Eric Kauderer-Abrams told NBC News in an interview that the new features build on last October's launch of Claude for Life Sciences, which transformed the chatbot into a proactive research partner for clinicians and scientists that can aid in tasks such as drug discovery. In this case, Anthropic is now targeting actual patients, with the aim of helping them better understand their health.

"When connected, Claude can summarize users' medical history, explain test results in plain language, detect patterns across fitness and health metrics, and prepare questions for appointments," the company wrote in a blog post. "The aim is to make patients' conversations with doctors more productive, and to help users stay well-informed about their health."

When it launched ChatGPT Health last week, OpenAI said it was already seeing hundreds of millions of users ask the standard version of its chatbot health- and wellness-related questions every week, hence the enormous potential it sees in making a more concerted effort at tackling medical issues. However, the company was keen to stress that the app is not intended to be used for diagnosis or to recommend any particular treatment. Rather, it's simply there to help users "navigate everyday questions and understand patterns over time."

Kauderer-Abrams said Claude for Healthcare can help users understand complex medical reports more easily, double-check doctors' decisions and also summarize and synthesize medical information for the billions of people around the world who lack access to it.

As with OpenAI, Anthropic was eager to stress the privacy protections it has built into Claude for Healthcare. It explained that healthcare data shared with the chatbot will not be dumped into its memory and will never be used to train future versions of the model. Users also have the option to disconnect their medical records or edit the chatbot's permissions at any time, the company said.

Besides patients, Anthropic is also targeting healthcare providers, expanding the Claude for Life Sciences offering that's primarily focused on research. That offering now boasts a "HIPAA-ready infrastructure," the company said, referring to the U.S. federal law that governs medical privacy. This means it can connect to federal healthcare coverage databases, the federal registry of medical providers and other services to help make the lives of physicians easier. For instance, the chatbot can help with time-consuming tasks such as preparing prior authorization requests for specialist care, or prepare the ground for insurance appeals by matching patient records with clinical guidelines.
Dhruv Parthasarathy, chief technology officer of Commure Inc., which sells AI tools that aid in the creation of medical documentation, said Claude will help his company save clinicians "millions of hours annually" and return their focus to patient care.

While Anthropic and OpenAI clearly see healthcare as a major opportunity, the launch will likely heighten scrutiny over the suitability of these kinds of tools for dispensing medical advice. To date, the track record has been questionable, with Google LLC and Character Technologies Inc. last week agreeing to settle out of court a lawsuit that alleged their AI chatbots had influenced the mental health of a teenager who later died by suicide.

Anthropic does put out a disclaimer, warning that Claude can make mistakes and should not be used as a substitute for qualified medical advice. "For critical use cases where every detail matters, you should absolutely still check the information," said Kauderer-Abrams. "We're not claiming that you can completely remove the human from the loop. We see it as a tool to amplify what the human experts can do."
[15]
No waitlist: Claude Health arrives for U.S. Pro and Max users
Anthropic announced a suite of health-care and life-sciences features for its Claude AI platform on Sunday, allowing U.S. Pro and Max subscribers to share health records and fitness-app data, including from Apple's iOS Health app, to personalize health conversations. The launch follows OpenAI's introduction of ChatGPT Health days earlier.

The new Claude features let users integrate personal information with medical records and insurance data, with Claude acting as an orchestrator to simplify navigation through health systems. Eric Kauderer-Abrams, head of life sciences at Anthropic, described the challenges users face: "When navigating through health systems and health situations, you often have this feeling that you're sort of alone and that you're tying together all this data from all these sources, stuff about your health and your medical records, and you're on the phone all the time," he told NBC News, adding that he was excited about Claude handling these tasks.

With Claude for Healthcare, users connect disparate data sources into a unified view, processing health records alongside fitness data from apps, with personalization handled through secure sharing mechanisms designed for health-related queries. Availability targets Pro and Max plan subscribers in the United States, with immediate access and no waitlist. OpenAI's ChatGPT Health, unveiled last week, requires users to join a waitlist. OpenAI noted that hundreds of millions of people ask wellness- or health-related questions on ChatGPT every week, and specified that ChatGPT Health is meant to help users "navigate everyday questions and understand patterns over time -- not just moments of illness," while being "not intended for diagnosis or treatment."

Both platforms incorporate data from health records and fitness apps, such as Apple's iOS Health app, for tailored interactions. AI systems like Claude and ChatGPT assist in interpreting complex medical reports and let users review doctors' decisions. For billions worldwide who lack access to essential medical care, these tools can summarize and synthesize otherwise inaccessible medical information. Anthropic positions its expansion in a field viewed as both an opportunity and a sensitive area for generative AI.

Anthropic prioritizes privacy in its offerings. Health data shared with Claude is excluded from the model's memory and is not used to train future systems. Users maintain control through options to disconnect or edit permissions at any time, as stated in Anthropic's blog post accompanying the launch.

Beyond consumer tools, Anthropic introduced features for health-care providers. The platform now supports a HIPAA-ready infrastructure (HIPAA is the federal law governing medical privacy), with connections to federal health-care coverage databases and the official registry of medical providers. These integrations reduce workloads for physicians and health providers by linking to essential services, targeting time-consuming processes in particular: prior authorization requests for specialist care become streamlined, and insurance appeals receive support through matching clinical guidelines directly to patient records. Expanded Claude for Life Sciences offerings focus on improving scientific discovery alongside these provider tools.

Commure, a company developing AI solutions for medical documentation, anticipates benefits. Dhruv Parthasarathy, chief technology officer at Commure, said Claude's features will assist in "saving clinicians millions of hours annually and returning their focus to patient care," addressing administrative burdens in health-care settings.

The announcements come amid increased scrutiny of AI chatbots in medical and mental-health advice. On Thursday, Character.AI and Google settled a lawsuit alleging their AI tools contributed to worsening mental health among teenagers who died by suicide. Leading AI companies, including Anthropic and OpenAI, warn that their systems can err and must not replace professional judgment. Anthropic enforces restrictions via its acceptable-use policy: for uses involving "healthcare decisions, medical diagnosis, patient care, therapy, mental health, or other medical guidance," the policy mandates that "a qualified professional ... must review the content or decision prior to dissemination or finalization," ensuring human oversight in critical applications.

Kauderer-Abrams highlighted efficiency gains: "These tools are incredibly powerful, and for many people, they can save you 90% of the time that you spend on something," he said, while noting that in critical cases where details matter, verification remains essential. Anthropic does not claim to remove humans from the loop entirely; instead, it sees Claude as amplifying human experts' capabilities.

Anthropic ranks among the world's largest AI companies, with rumors placing its valuation at $350 billion. The life-sciences division, led by Kauderer-Abrams, drives these health-care integrations, building on Claude's existing platform to address real-world health navigation challenges through data orchestration and provider support.
[16]
What Doctors Really Think of ChatGPT Health and A.I. Medical Advice
ChatGPT Health and similar tools promise access, but doctors worry about misinformation and how A.I. companies handle sensitive health data.

Each week, more than 230 million people globally ask ChatGPT questions about health and wellness, according to OpenAI. Seeing a vast, untapped demand, OpenAI earlier this month launched ChatGPT Health and made a swift $60 million acquisition of the health care tech startup Torch to turbocharge the effort. Anthropic soon followed suit, announcing Claude for Healthcare last week. The move from general-purpose chatbot to health care advisor is well underway.

For a world rife with health care inequities -- whether skyrocketing insurance costs in the U.S. or care deserts in remote regions around the globe -- democratized information and advice about one's health is, at least in theory, a positive development. But the intricacies of how large A.I. companies operate raise questions that health tech experts are eager to interrogate.

"What I am worried about as a clinician is that there is still a high level of hallucinations and erroneous information that sometimes makes it out of these general-purpose LLMs to the end user," said Saurabh Gombar, a clinical instructor at Stanford Health Care and the chief medical officer and co-founder of Atropos Health, an A.I. clinical decision support platform. "It's one thing if you're asking for a spaghetti recipe and it's telling you to add 10 times the amount [of an ingredient] that you should. But it's a totally different thing if it's fundamentally missing something about the health care of the individual," he told Observer.

For example, a doctor might recognize left shoulder pain as a non-traditional sign of a heart attack in certain patients, whereas a chatbot might only suggest taking an over-the-counter pain medication. The reverse can also happen: if a patient comes to a provider convinced they have a rare disorder based on a simple symptom after chatting with A.I., it can erode trust when a human doctor seeks to rule out more common explanations first.

Google is already under fire for its AI Overviews providing inaccurate and false health information. ChatGPT, Claude and other chatbots have faced similar criticism for hallucinations and misinformation, even as they attempt to limit liability in health-related conversations by noting that they are "not intended for diagnosis or treatment." Gombar argues that A.I. companies must do more to publicly emphasize how often an answer may be hallucinated and to clearly flag when information is poorly grounded in evidence or entirely fabricated. This is particularly important given that extensive chatbot disclaimers serve to prevent legal recourse, whereas human health care models allow individuals to sue for malpractice.

The primary care provider workforce in the U.S. has shrunk by 11 percent annually over the past seven years, especially in rural areas. Gombar suggests that physicians may no longer control how they fit into the global health care landscape.
"If the whole world is moving away from going to physicians first, then physicians are going to be utilized more as an expert second opinion, as opposed to the primary opinion," he said. The inevitable question of data privacy OpenAI and Anthropic have been explicit that their health tools are secure and compliant, including with the Health Insurance Portability and Accountability Act (HIPAA) in the U.S., which protects sensitive patient health information from unauthorized use and disclosure. But for Alexander Tsiaras, founder and CEO of the A.I.-driven medical record platform StoryMD, there is more to consider. "It's not the protection from being hacked. It's the protection of what they will do with [the data] after," Tsiaras told Observer. "In the back end, their encryption algorithms are as good as anyone in HIPAA. But once you have the data, can you trust them? And that's where I think it's going to be a real problem, because I certainly would not trust them." Tsiaras points to the persistent techno-optimism of Silicon Valley elites like OpenAI CEO Sam Altman, arguing that they live in a bubble and have "proven themselves to not care." On a more tangible level, chatbots tend to be overly agreeable. xAI's Grok recently drew criticism for agreeing to generate nearly nude photos of real women and children, though the company blocked this capability this week following public outcry. Chatbots can also reinforce delusions and harmful thought patterns in people with mental illness, triggering crises such as psychosis or even suicide. Andrew Crawford, senior counsel for privacy and data at the nonpartisan think tank Center for Democracy and Technology, said an A.I. company prioritizing profit through personalization over data protection can put sensitive health information at serious risk. "Especially as OpenAI moves to explore advertising as a business model, it's crucial that the separation between this sort of health data and memories that ChatGPT captures from other conversations is airtight," Crawford said in a statement to Observer. Then there is the question of non-protected health data that users voluntarily input. Personal wellness companies such as MyFitnessPal and Oura already pose data privacy risks. "It's amplifying the inherent risk by making that data more available and accessible," Gombar said. For people like Tsiaras, profit-driven A.I. giants have tainted the health tech space. "The trust is eroded so significantly that anyone [else] who builds a system has to go in the opposite direction of spending a lot of time proving that we're there for you and not about abusing what we can get from you," he said. Nasim Afsar, a physician, former chief health officer at Oracle and advisor to the White House and global health agencies, views ChatGPT Health as an early step toward what she calls intelligent health, but far from a complete solution. "A.I. can now explain data and prepare patients for visits," Afsar said in a statement to Observer. "That's meaningful progress. But transformation happens when intelligence drives prevention, coordinated action and measurable health outcomes, not just better answers inside a broken system."
[17]
Anthropic, OpenAI's healthcare push fans the flames of privacy unrest - The Economic Times
Anthropic has announced "Claude for Healthcare and Life Sciences" for payers, providers and pharma companies via cloud and enterprise integrations. Meanwhile, OpenAI has launched 'ChatGPT Health', a set of tools that can input medical records and patient histories and generate summaries, answers and suggested actions for clinicians and consumers.Artificial intelligence (AI) companies Anthropic and OpenAI are making a direct pitch for health-related use cases through new offerings that can handle medical records and patient data. Anthropic has announced "Claude for Healthcare and Life Sciences" for payers, providers and pharma companies via cloud and enterprise integrations. Meanwhile, OpenAI has launched 'ChatGPT Health', a set of tools that can input medical records and patient histories and generate summaries, answers and suggested actions for clinicians and consumers. Both firms have positioned these as privacy‑conscious and "HIPAA‑ready", but the sensitive nature of the data to be used for these products raises significant questions. HIPAA is a US federal law that sets standards for protecting sensitive patient health information. Sensitive data and inference risks The tools released by OpenAI and Anthropic are designed to work not just with simple symptoms, but with full medical records, lab reports, claims data and wellness feeds. This dramatically increases the sensitivity of the health data these bots have been processing. Even when obvious identifiers are blocked, models can infer conditions such as mental health issues, pregnancy, or chronic illness from patterns in medications and test results. Experts said this data might be used as product metadata rather than regulated health information. Also Read: Anthropic plans an IPO as early as 2026 Regulatory grey zones beyond HIPAA In the US, where these offerings are being piloted, much of this AI activity sits outside traditional health‑privacy laws because AI providers are categorised as tech vendors and not healthcare providers. This means that HIPAA may not apply to many consumer and some enterprise uses. So, users and even some hospitals will have to rely on general consumer‑protection laws and terms of service, which are harder to enforce in practice for such sensitive data. Consent and terms of use Both companies have highlighted opt‑in flows and have ensured that users or enterprise customers can control whether health data is shared, but once data enters their systems, it can be logged and analysed for safety, analytics or product improvement. Therein lies the risk that the data may be used for model training. For insurance, consent is usually mentioned deep into the paperwork rather than obtained directly for AI use. So, it may be unclear whether patients understand that their records may be passed through a general‑purpose AI service. Also Read: Anthropic said to be in talks to raise funding at a $350 billion valuation Cross‑border data flows Because these products are distributed through global cloud platforms and integrate with multiple health apps and data intermediaries, health information can move across borders and jurisdictions with different privacy regimes. That creates uncertainty for regulators in regions such as Europe or India over who is responsible when something goes wrong. What has already gone wrong There is already growing concern around users turning to AI for health advice and companionship. 
OpenAI has faced multiple lawsuits alleging that ChatGPT contributed to suicides by mishandling users' mental health crises, including a high-profile case where a California couple claimed the chatbot encouraged their teenage son, Adam Raine, to act on suicidal thoughts in April 2025. Separately, Google's AI Overviews has drawn criticism after a Guardian investigation found the feature delivered inaccurate or dangerous health advice in 44% of medical queries, such as misleading guidance on symptoms or treatments that could lead a user to delay getting care.
[18]
OpenAI and Anthropic Take Divergent Paths Into Healthcare | PYMNTS.com
As OpenAI and Anthropic formalize their healthcare and life sciences divisions, they are doing more than just selling software; they are auditioning to become the foundational "operating system" for a multitrillion-dollar industry. While their goals are identical -- capturing clinical workflows and research cycles -- their strategies reveal a fundamental divergence over how a risk-averse sector will ultimately adopt generative AI. OpenAI is approaching healthcare as an extension of its broader platform strategy. Its recently announced OpenAI for Healthcare initiative is framed as an enterprise AI stack designed to slot into existing health system workflows, helping organizations automate documentation, reduce administrative burden, and standardize care delivery while meeting HIPAA requirements. Rather than offering a single healthcare product, OpenAI is emphasizing APIs, business associate agreements, and integrations that allow hospitals, insurers and software vendors to embed its models into clinical decision support tools, chart summarization, care coordination and analytics. The company has highlighted physician-led benchmarking efforts such as HealthBench to demonstrate that its models can meet clinical expectations for reliability and alignment. At the same time, OpenAI is leveraging a consumer channel that few competitors can match. ChatGPT is already used at massive scale for health-related questions, from interpreting lab results to understanding symptoms and insurance options. That usage is now being formalized through ChatGPT Health, which allows users to securely connect personal health data so responses can be grounded in individual context. On Monday (Jan. 12) OpenAI announced in a post on X that it acquired healthcare startup Torch, which unifies lab results, medications and visit recordings, and will combine it with ChatGPT Health. In its own blog post about the acquisition, Torch said that by bringing together health information that is otherwise scattered, it builds "a medical memory for AI" that helps patients see the whole picture. OpenAI said in its post: "Bringing this together with ChatGPT Health opens up a new way to understand and manage your health." OpenAI is careful to position the consumer experience delivered by ChatGPT Health as informational rather than diagnostic, and it is not regulated under HIPAA. Still, the company has acknowledged that healthcare is already one of ChatGPT's largest use cases, with tens of millions of health-related queries flowing through the system daily, as reported by PYMNTS. That demand effectively serves as a top-of-funnel for enterprise adoption, familiarizing patients and clinicians alike with AI as a default interface for medical information. Anthropic is taking a more targeted approach. Instead of extending a mass-market assistant into healthcare, it is building healthcare and life sciences offerings on top of the Claude model family, with an emphasis on tightly controlled, domain-specific deployments. The company's Claude for Healthcare product is designed for clinicians, insurers and healthcare administrators, with HIPAA-ready infrastructure and direct integrations into authoritative datasets such as ICD-10 coding systems, CMS coverage data, the National Provider Identifier Registry, and PubMed. These connectors enable concrete workflows like prior authorization, report generation, and medical coding interpretation, rather than broad conversational use. Anthropic is also pushing deeper into life sciences.
Through Claude for Life Sciences, the company is positioning its models as research partners embedded in scientific environments, connected to platforms like PubMed, Benchling and ClinicalTrials.gov. The focus is on tasks such as literature synthesis, hypothesis generation, clinical trial planning, and regulatory documentation, placing Claude closer to the core of biomedical research rather than at the patient-facing edge. A central theme in Anthropic's messaging is control. The company emphasizes that data accessed through healthcare and research integrations is not used to train its models, and it highlights customizable agent skills, including FHIR-based tool building, that allow organizations to define how Claude operates within strict boundaries. This reflects a view of healthcare as a market where trust, auditability and predictability matter more than rapid experimentation.
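To make the "FHIR-based tool building" mentioned above more concrete, here is a minimal sketch of fetching a FHIR Patient resource over the standard REST read interaction, the kind of bounded capability an organization might expose to a model. The server URL, patient ID, and helper names are illustrative assumptions, not Anthropic's documented API.

```python
# Minimal sketch, assuming a FHIR R4 server with the standard REST
# interface. The base URL and patient ID are hypothetical placeholders.
import json
import urllib.request

FHIR_BASE = "https://example.org/fhir"  # hypothetical server

def get_patient(patient_id: str) -> dict:
    """Fetch a FHIR Patient resource via the standard read interaction."""
    url = f"{FHIR_BASE}/Patient/{patient_id}"
    req = urllib.request.Request(url, headers={"Accept": "application/fhir+json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def summarize_patient(resource: dict) -> str:
    """Pull a human-readable name and birth date out of a Patient resource."""
    name = resource.get("name", [{}])[0]
    display = " ".join(name.get("given", []) + [name.get("family", "")]).strip()
    return f"{display or 'unknown'} (born {resource.get('birthDate', 'unknown')})"

if __name__ == "__main__":
    patient = get_patient("example-id")  # placeholder ID
    print(summarize_patient(patient))
```

Wrapping narrow, auditable functions like this, rather than granting open-ended record access, is one plausible reading of the "strict boundaries" Anthropic describes.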
[19]
Anthropic Launches Claude for Healthcare post OpenAI's ChatGPT Health
Unlike ChatGPT, which focuses on answering health queries, Anthropic's product aims at automating the non-medical part of healthcare. One may question OpenAI on several aspects of its existence and journey, but there's one thing that generates consensus among its proponents and opponents alike: the sense of FOMO it has brought to the world of Big Tech. How else can one explain Anthropic's Claude for Healthcare launch, barely days after ChatGPT Health came calling? Industry veterans have repeatedly spoken about half-assed efforts propagated in the rush to hit the marquee, but tech giants hate to lose, even if it means bringing out products that aren't ready for interfacing with general users. Users may recall early AI chatbot stories in which users were told to add glue to pizza to keep the cheese from sliding off. For what it is worth, Anthropic took a full five days after ChatGPT launched its health story to provide its own version of tools for healthcare providers, payers and patients. Claude for Healthcare comes three months after the company launched Claude for Life Sciences. This time round, the company appears to have brought in more sophistication than its rivals. In a blog post, Anthropic describes the latest effort as "a complementary set of tools and resources that allow healthcare providers, payers, and consumers to use Claude for medical purposes through HIPAA-ready products." Additionally, it connects Claude to more scientific workflows such as clinical trial management and regulatory operations. "These features build on top of major recent improvements we've made to Claude's general intelligence. These improvements are best captured by evaluations of Claude's agentic performance on detailed simulations of medical and scientific tasks, since this correlates most closely to real-world usefulness." It looks like Anthropic has stolen a march, however temporary, over ChatGPT, whose healthcare product appears focussed on the patient-side chat experience. Maybe this is where Anthropic's agentic AI skills are coming to the fore. Claude's connectors provide AI access to platforms and databases that could speed up research processes and report-generation activities for both payers and providers. The connectors would accelerate prior authorisation reviews, which involve doctors first submitting additional information to an insurance provider to see whether it would cover a medication or treatment. Anthropic CPO Mike Krieger says clinicians often report spending more time on documentation and paperwork than on actually seeing patients. And this is where Anthropic appears to have scored over its rivals: it has automated a series of administrative tasks that do not require specialized training and expertise but are nonetheless critical to the healthcare delivery process. This is quite at variance with what OpenAI has sought to do, claiming that its product caters to the 230 million weekly users who seek health information via ChatGPT. It looks like Anthropic has taken the smarter route to integrating AI into the healthcare sector. We know that ChatGPT is facing lawsuits over instances of self-harm by teen users engaged in open conversations with the chatbot. More recently, the Guardian reported that Google was doling out misinformation on some healthcare queries through its AI summaries. For now, it looks like Google has removed those AI summaries, according to TechCrunch.
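To illustrate the kind of prior-authorisation paperwork being automated, here is a minimal sketch of assembling a prior-auth request from structured inputs. The field names, payload shape, and validation rule are hypothetical assumptions for illustration; real payers use their own schemas (commonly X12 278 transactions or FHIR-based prior-authorization bundles).

```python
# Illustrative sketch of drafting a prior-authorization request payload.
# Field names and the payload shape are hypothetical, not a vendor schema.
from dataclasses import dataclass, asdict

@dataclass
class PriorAuthRequest:
    patient_id: str          # payer's member identifier
    provider_npi: str        # requesting clinician's NPI
    icd10_diagnosis: str     # diagnosis justifying the request
    requested_service: str   # CPT/HCPCS code for the treatment
    clinical_summary: str    # free-text justification, drafted by the model

def build_request(**fields) -> dict:
    """Validate minimal required fields and return a submission-ready dict."""
    req = PriorAuthRequest(**fields)
    if not (req.provider_npi.isdigit() and len(req.provider_npi) == 10):
        raise ValueError("NPI must be a 10-digit number")
    return asdict(req)

if __name__ == "__main__":
    payload = build_request(
        patient_id="M123456",
        provider_npi="1234567890",
        icd10_diagnosis="E11.9",        # type 2 diabetes, illustrative
        requested_service="82947",      # glucose test, illustrative
        clinical_summary="Type 2 diabetes; requesting glucose monitoring coverage.",
    )
    print(payload)
```

Even in a sketch this simple, the structured fields are exactly where a reviewing clinician would verify the model's draft before submission.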
[20]
Anthropic follows OpenAI in rolling out healthcare AI tools By Investing.com
Investing.com-- Anthropic said on Sunday it is launching a new range of healthcare tools under its Claude chatbot, just days after rival OpenAI announced a similar move. Anthropic said its tools were compliant with the Health Insurance Portability and Accountability Act, and could be used by hospitals and doctors to find protected health data. The company also enabled users to export their health data from mobile apps such as Apple Health, which in turn can be shared with healthcare providers. The move is the latest from the world's leading AI companies, with healthcare seen as a potential application for their AI technology. AI technology can in theory be useful in medicine, especially for improving diagnostics, analysing patient data, and even potentially developing new treatments. But the tendency of large-language models to hallucinate information could present potential risks. Anthropic, creator of the Claude AI chatbot, was recently seen in funding talks for a $350 billion valuation. The company is backed by Amazon.com Inc (NASDAQ:AMZN) and Google owner Alphabet Inc (NASDAQ:GOOGL), and is also viewed as a major competitor for OpenAI.
Anthropic launched Claude for Healthcare with HIPAA-compliant integrations for patient records and medical coding. While the company touts 99.8% accuracy for ICD-10 codes, it couldn't provide complete data on diagnostic accuracy. The move follows OpenAI's ChatGPT Health announcement, intensifying competition in the lucrative healthcare AI market despite ongoing concerns about AI hallucinations and patient data privacy.
Anthropic has launched Claude for Healthcare, marking its formal entry into the $4 trillion American healthcare sector just days after rival OpenAI unveiled ChatGPT Health[1][5]. The San Francisco-based company, currently in funding talks at a $350 billion valuation, announced HIPAA compliance and new integrations designed to connect its AI chatbot to industry-standard systems and databases[2]. U.S. subscribers of Claude Pro and Max plans can now opt to give Claude secure health record access by connecting to HealthEx, Function, Apple Health, and Android Health Connect[4]. The timing underscores Silicon Valley's intensifying race to capture ground in healthcare AI, where companies see opportunities to prove artificial intelligence's broad benefits while bolstering sales.
Claude for Healthcare connects to the CMS Coverage Database to check Medicare coverage rules, supports prior authorization, and handles medical coding tasks[3]. The system can look up ICD-10 codes to correct medical coding, reduce billing and claim errors, and improve claims processing[3].
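To make the coding workflow concrete, here is a minimal sketch of an ICD-10 lookup tool of the kind a connector might expose to an assistant. The function name, the tiny hard-coded code table, and the tool-definition shape are assumptions for illustration, not Anthropic's actual connector.

```python
# Illustrative sketch only: a toy ICD-10 lookup tool. The code table is a
# tiny hard-coded sample, not a real coding database.
ICD10_SAMPLE = {
    "E11.9": "Type 2 diabetes mellitus without complications",
    "I10": "Essential (primary) hypertension",
    "J45.909": "Unspecified asthma, uncomplicated",
}

def lookup_icd10(code: str) -> dict:
    """Return the description for an ICD-10-CM code, or flag it as unknown."""
    normalized = code.strip().upper()
    description = ICD10_SAMPLE.get(normalized)
    return {
        "code": normalized,
        "description": description,
        "valid": description is not None,
    }

# A tool definition in the general shape used by tool-calling APIs
# (names and schema here are assumptions for illustration).
LOOKUP_TOOL = {
    "name": "lookup_icd10",
    "description": "Look up the description of an ICD-10-CM diagnosis code.",
    "input_schema": {
        "type": "object",
        "properties": {"code": {"type": "string"}},
        "required": ["code"],
    },
}

if __name__ == "__main__":
    print(lookup_icd10("e11.9"))  # valid code -> description returned
    print(lookup_icd10("Z99.X"))  # unknown code -> flagged invalid
```

Grounding answers in an explicit table lookup rather than free generation is what makes a claimed 99.8% coding accuracy plausible where open-ended generation managed only 75%.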
On the life sciences side, Anthropic added integrations with Medidata and ClinicalTrials.gov to assist with clinical trial planning and regulatory work[2]. Mike Krieger, Anthropic's chief product officer and Instagram co-founder, emphasized the goal of "empowering people to have more knowledge, both from their data, but also in conversation with their providers"[5]. Banner Health, one of the largest nonprofit health systems in the U.S., now has more than 22,000 clinical providers using Claude, with 85% reporting faster work with higher accuracy[5].
While Anthropic showcased impressive numbers for specific tasks, critical questions about AI accuracy in healthcare remain unanswered. When asked about ICD-10 codes, the consumer version of Claude is correct 75% of the time, but the doctors' version trained on those codes achieves 99.8% accuracy, according to Krieger[1]. However, when pressed about diagnostic accuracy rates, Anthropic couldn't provide complete answers. The company cited its Claude Opus 4.5 model achieving 92.3% accuracy on MedCalc for medical calculations and 61.3% on MedAgentBench for clinical tasks in simulated electronic health records -- a worryingly low score that doesn't indicate reliability with clinical recommendations[1]. OpenAI similarly declined to provide hard numbers on hallucination rates when giving medical advice, though a spokeswoman noted models had become more reliable in health scenarios[1]. According to data compiled by Scale, recently purchased by Meta Platforms Inc., Anthropic's models are more honest and likely to admit uncertainty than those made by OpenAI or Google, reducing the risk of AI hallucinations[1].
Both Anthropic and OpenAI have emphasized that patient data privacy protections are built into their healthcare offerings. Anthropic states it will not use healthcare data for model training, with data sharing remaining opt-in and connectors maintaining HIPAA compliance[2][5]. Users can explicitly choose what information to share with Claude and disconnect or edit permissions at any time[4]. Anthropic's medical responses are grounded with citations from respected publications such as PubMed and the NPI Registry, ensuring clinicians can verify results[5]. The careful positioning around data privacy concerns reflects lessons learned from Google Health's failure between 2008 and 2011, when public mistrust over uploading patient records to a company known for collecting personal information for ads contributed to the initiative's shutdown[1].
The lack of transparency about reliability metrics poses risks for both companies as they push deeper into healthcare. AI companies have long remained silent about how often their chatbots make mistakes, partly because doing so would highlight how difficult the problem has been to solve[1]. Instead, they provide benchmark data showing performance on medical licensing exams, which doesn't translate directly to real-world diagnostic accuracy. Both OpenAI and Anthropic emphasize their AI offerings can make mistakes and are not substitutes for professional healthcare advice. In its Acceptable Use Policy, Anthropic notes that qualified professionals must review generated outputs "prior to dissemination or finalization" in high-risk use cases related to healthcare decisions, medical diagnosis, or patient care[4]. As more than 230 million people already ask ChatGPT for health-related advice every week[1], the stakes for accuracy and transparency continue to climb. Building trust with clinical professionals and the public will require more complete disclosure about diagnostic reliability and error rates, particularly as these AI systems move from administrative tasks into clinical decision support.