AI deepfakes impersonate doctors to spread medical misinformation and promote fraudulent products


The American Medical Association is calling for urgent federal and state action as AI deepfakes increasingly target doctors to promote unproven supplements and medical devices. High-profile physicians like CNN's Sanjay Gupta have been impersonated in convincing fake videos, while AI chatbots falsely claim medical licenses. The crisis threatens to erode public trust in healthcare and puts patient safety at risk.

Doctors Confront Growing AI Deepfakes Problem Threatening Healthcare Trust

The American Medical Association (AMA) has escalated its response to what CEO John Whyte describes as a public health and safety crisis, urging federal and state lawmakers to address the misuse of artificial intelligence through modernized identity protections and legal reforms.[1]

Physicians across the country are discovering their identities exploited in AI deepfakes that promote fraudulent health products, from wellness supplements to unapproved medical devices, creating a threat that extends beyond individual reputation to patient safety and the physician-patient relationship.[2]

Source: Axios

The scale of the issue has become mainstream, with Whyte noting that "everyone knows someone who this has impacted," though many cases go unreported due to embarrassment.[1]

High-profile victims include CNN's Sanjay Gupta, whose likeness appeared in deepfake videos promoting a breakthrough Alzheimer's cure so convincing that even fellow doctors were deceived. "What was different this time around was just the quality of these ads," Gupta told CNN's Terms of Service podcast, adding that "stuff that shows up in my feed is demonstrably, objectively not true, and yet it is there."[2]

AI Chatbots Falsely Claiming Medical Licenses Trigger State Action

Beyond video deepfakes, artificial intelligence chatbots have crossed another dangerous threshold by impersonating licensed medical professionals. Pennsylvania's medical board issued a cease and desist order against Character Technologies Inc. after investigators discovered a chatbot named "Emilie," described as a "Doctor of psychiatry," claiming to hold a Pennsylvania medical license with a fictional license number.[2] Pennsylvania Governor Josh Shapiro emphasized that "we will not allow companies to deploy AI tools that mislead people into believing they are receiving advice from a licensed medical professional."[2]

Source: Jerusalem Post

The AMA's letters to Congress outline specific regulatory boundaries, including preventing general-purpose chatbots from diagnosing illnesses without FDA approval, requiring increased transparency in mental health support tools, and reinforcing privacy protections in the collection of personal data by chatbots.[2] "We shouldn't have to make the public detectives to determine whether something's not a deepfake," Whyte stated, highlighting the burden placed on patients to verify authenticity.[2]

Medical Misinformation Spreads as AI Systems Absorb Fake Research

The capacity for AI to spread medical misinformation extends to how these systems learn and disseminate information. Researchers from the University of Gothenburg in Sweden demonstrated this vulnerability by uploading fake medical papers describing the fictional disease "bixonimania" to test how quickly AI systems would absorb and reuse the fabricated information. Microsoft Bing's Copilot, Google's Gemini, the Perplexity AI answer engine, and OpenAI's ChatGPT all incorporated the false data, prompting a Google spokesperson to acknowledge that "for sensitive matters such as medical advice, Gemini recommends users consult with qualified professionals."[2]

Insurance Fraud and Cybersecurity Risk Compound Healthcare Threats

The deepfake problem extends beyond impersonating doctors to fabricating medical data itself. Health systems are uncovering faked diagnostic images and clinical data that create insurance fraud and cybersecurity risk scenarios. A recent study in Radiology found that most clinicians failed to spot deepfake X-rays, with one-quarter missing the fakes even after being warned to look for telltale characteristics like unnatural soft tissue textures and overly smooth bone surfaces.[1] Lead author Mickael Tordjman from the Icahn School of Medicine at Mount Sinai warned that "there is also a significant cybersecurity risk if hackers were to gain access to a hospital's network and inject synthetic images to manipulate patient diagnoses or cause widespread clinical chaos."[1]

Doctors themselves face legal jeopardy, as they could be sued if patients are harmed by taking counterfeit products or following advice the real physician never gave. The AMA is seeking guidance on how targeted physicians should respond and how malpractice and cyber liability insurance can provide protection.[1]

New Legislation Emerges to Combat Deepfake Health Advertising

California has emerged as a leader in regulatory response, having already implemented requirements for disclosures on AI-generated ads. Senate Bill 1146, introduced by State Senator Lena Gonzalez and sponsored by the California Medical Association, would establish clear prohibitions and penalties against those who advertise health products without disclosing the use of deepfakes.[2] California Medical Association President René Bravo emphasized that "when bad actors use AI to steal a doctor's identity to sell and market to vulnerable patients, they are not just committing fraud - they are putting lives at risk."[2]

The AMA is pushing tech platforms to remove impersonations more quickly and demanding a crackdown on deepfake creators.[1] Whyte stressed that "AI-enabled tools may help expand access to mental health resources and support innovation in health care delivery, but they lack consistent safeguards against serious risks, including emotional dependency, misinformation, and inadequate crisis response."[2]

The profusion of AI content on social media is eroding public trust in the medical establishment at a time when credibility can be the difference between life and death. That makes transparency and patient safety paramount as legislators work to establish guardrails that allow responsible innovation while protecting the public.[1]
