Anthropic Claude enters healthcare with HIPAA tools, but AI accuracy questions linger

Reviewed by Nidhi Govil

Anthropic launched Claude for Healthcare with HIPAA-compliant integrations for patient records and medical coding. While the company touts 99.8% accuracy for ICD-10 codes, it couldn't provide complete data on diagnostic accuracy. The move follows OpenAI's ChatGPT Health announcement, intensifying competition in the lucrative healthcare AI market despite ongoing concerns about AI hallucinations and patient data privacy.

Anthropic Claude Pushes Into Healthcare Market

Anthropic has launched Claude for Healthcare, marking its formal entry into the $4 trillion American healthcare sector just days after rival OpenAI unveiled ChatGPT Health [1][5]. The San Francisco-based company, currently in funding talks at a $350 billion valuation, announced HIPAA compliance and new integrations designed to connect its AI chatbot to industry-standard systems and databases [2]. U.S. subscribers to Claude Pro and Max plans can now opt to give Claude secure health record access by connecting it to HealthEx, Function, Apple Health, and Android Health Connect [4]. The timing underscores Silicon Valley's intensifying race to capture ground in healthcare AI, where companies see opportunities to prove artificial intelligence's broad benefits while bolstering sales.

Source: Silicon Republic

HIPAA-Compliant Tools Target Administrative Burden

Claude for Healthcare connects to the CMS Coverage Database to check Medicare coverage rules, supports prior authorization, and handles medical coding tasks [3]. The system can look up ICD-10 codes to correct medical coding, reduce billing and claim errors, and improve claims processing [3].
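The article does not document how these coding lookups are exposed programmatically. As a rough illustration only, the sketch below wires a hypothetical `lookup_icd10` tool into Anthropic's public Messages API; the tool name, its schema, and the model identifier are assumptions for demonstration, not the actual Claude for Healthcare integration.

```python
import anthropic

# Minimal sketch: offering an ICD-10 lookup to Claude as a tool.
# The tool definition is hypothetical; the real Claude for Healthcare
# connectors are not documented in this article.
client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

icd10_tool = {
    "name": "lookup_icd10",  # hypothetical tool name
    "description": "Look up the ICD-10-CM code for a free-text diagnosis.",
    "input_schema": {
        "type": "object",
        "properties": {
            "diagnosis": {
                "type": "string",
                "description": "Diagnosis text, e.g. 'essential hypertension'",
            }
        },
        "required": ["diagnosis"],
    },
}

response = client.messages.create(
    model="claude-opus-4-5",  # illustrative; the exact API model ID may differ
    max_tokens=1024,
    tools=[icd10_tool],
    messages=[{
        "role": "user",
        "content": "Which billing code applies to essential (primary) hypertension?",
    }],
)

# If Claude chooses to call the tool, the response contains a tool_use block
# whose input the application would resolve against a real ICD-10 database.
for block in response.content:
    if block.type == "tool_use":
        print(block.name, block.input)
```

In a real deployment the application would execute the lookup itself and send the result back in a tool_result message so the model can finish the coding task; the model never queries the database directly.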

Source: Hacker News

On the life sciences side, Anthropic added integrations with Medidata and ClinicalTrials.gov to assist with clinical trial planning and regulatory work [2]. Mike Krieger, Anthropic's chief product officer and Instagram co-founder, emphasized the goal of "empowering people to have more knowledge, both from their data, but also in conversation with their providers" [5]. Banner Health, one of the largest nonprofit health systems in the U.S., now has more than 22,000 clinical providers using Claude, with 85% reporting faster work with higher accuracy [5].
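The article does not say how the ClinicalTrials.gov integration is wired, but ClinicalTrials.gov does expose a public v2 REST API that any such connector could draw on. A minimal sketch of querying that public endpoint directly follows; the condition and filter values are illustrative, and this shows the underlying data source rather than Anthropic's connector.

```python
import requests

# Query ClinicalTrials.gov's public v2 API for trials that are recruiting.
# Illustrates the public data source only, not Anthropic's integration.
BASE_URL = "https://clinicaltrials.gov/api/v2/studies"

params = {
    "query.cond": "type 2 diabetes",       # condition of interest (illustrative)
    "filter.overallStatus": "RECRUITING",  # only trials currently enrolling
    "pageSize": 5,                         # first five matches
}

resp = requests.get(BASE_URL, params=params, timeout=30)
resp.raise_for_status()

for study in resp.json().get("studies", []):
    ident = study["protocolSection"]["identificationModule"]
    print(ident["nctId"], "-", ident.get("briefTitle", ""))
```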

AI Accuracy in Healthcare Remains Unclear

While Anthropic showcased impressive numbers for specific tasks, critical questions about AI accuracy in healthcare remain unanswered. When asked about ICD-10 codes, the consumer version of Claude is correct 75% of the time, but the doctors' version trained on those codes achieves 99.8% accuracy, according to Krieger [1]. However, when pressed about diagnostic accuracy rates, Anthropic couldn't provide complete answers. The company cited its Claude Opus 4.5 model achieving 92.3% accuracy on MedCalc for medical calculations and 61.3% on MedAgentBench for clinical tasks in simulated electronic health records, a worryingly low score that doesn't indicate reliability in clinical recommendations [1]. OpenAI similarly declined to provide hard numbers on hallucination rates for medical advice, though a spokeswoman noted its models had become more reliable in health scenarios [1]. According to data compiled by Scale, recently purchased by Meta Platforms Inc., Anthropic's models are more honest and more likely to admit uncertainty than those made by OpenAI or Google, reducing the risk of AI hallucinations [1].

Source: Digital Trends

Patient Data Privacy Takes Center Stage

Both Anthropic and OpenAI have emphasized that patient data privacy protections are built into their healthcare offerings. Anthropic states it will not use healthcare data for model training, with data sharing remaining opt-in and connectors maintaining HIPAA compliance [2][5]. Users can explicitly choose what information to share with Claude and can disconnect or edit permissions at any time [4]. Anthropic's medical responses are grounded with citations from respected sources such as PubMed and the NPI Registry, so clinicians can verify results [5]. The careful positioning around data privacy reflects lessons from Google Health's failure between 2008 and 2011, when public mistrust over uploading patient records to a company known for collecting personal information for ads contributed to the initiative's shutdown [1].

Transparency Gap Threatens Trust

The lack of transparency about reliability metrics poses risks for both companies as they push deeper into healthcare. AI companies have long remained silent about how often their chatbots make mistakes, partly because doing so would highlight how difficult the problem has been to solve [1]. Instead, they provide benchmark data showing performance on medical licensing exams, which doesn't translate directly to real-world diagnostic accuracy. Both OpenAI and Anthropic emphasize that their AI offerings can make mistakes and are not substitutes for professional healthcare advice. In its Acceptable Use Policy, Anthropic notes that qualified professionals must review generated outputs "prior to dissemination or finalization" in high-risk use cases related to healthcare decisions, medical diagnosis, or patient care [4]. As more than 230 million people already ask ChatGPT for health-related advice every week [1], the stakes for accuracy and transparency continue to climb. Building trust with clinical professionals and the public will require more complete disclosure of diagnostic reliability and error rates, particularly as these AI systems move from administrative tasks into clinical decision support.
