Sources
[1]
Doctors' growing AI deepfakes problem
Why it matters: The profusion of AI content on social media platforms could further erode public trust in the medical establishment.
* It could also be used to fuel insurance fraud, steal data and put patients at risk.

Driving the news: The American Medical Association called on federal and state lawmakers last week to close legal gaps and modernize identity protections to address what its CEO John Whyte called a public health and safety crisis.
* The physicians group also wants a crackdown against deepfake creators and rules to force tech platforms to more quickly remove impersonations.
* California has already taken steps like requiring disclosures on AI-generated ads and is debating a measure that would explicitly ban doctor deepfakes.
* Pennsylvania's medical board addressed another form of AI impersonation on Tuesday, demanding that a tech company cease and desist after one of its chatbots posed as a doctor claiming to have a license to practice medicine in the state.

Physicians say they're increasingly discovering instances in which their identities are used to promote wellness and longevity supplements and unapproved medical devices.
* "It's becoming more mainstream. Everyone knows someone who this has impacted," said Whyte. "It's probably occurring more than we hear because people are embarrassed by it."
* Among the victims: CNN's Sanjay Gupta, who said fakes using his likeness to promote items like a breakthrough Alzheimer's cure have gotten so convincing they've even deceived some acquaintances.
* "What was different this time around was just the quality of these ads," Gupta recently told CNN's Terms of Service. "This was really quite stunning."

Threat level: Doctors could be sued if patients are harmed taking counterfeit products or following advice the real physician never actually gave, Whyte said.
* The AMA is seeking guidance on how targeted physicians should respond and how malpractice and cyber liability insurance can help.

The deepfakes aren't limited to people. Health systems are uncovering faked diagnostic images and other clinical data that can wreak havoc internally.
* A recent study in Radiology found most clinicians failed to spot deepfake X-rays. One-quarter missed the fakes even after being warned to look for telltale characteristics like unnatural soft tissue textures and overly smooth bone surfaces.
* The fakes could be used to defraud insurers or stoke litigation, lead author Mickael Tordjman from the Icahn School of Medicine at Mount Sinai said.
* "There is also a significant cybersecurity risk if hackers were to gain access to a hospital's network and inject synthetic images to manipulate patient diagnoses or cause widespread clinical chaos," he said.

The bottom line: AI is undermining trust in a profession where credibility can be the difference between life and death.
[2]
AMA demands further legislation as AI brings risk of medical misinformation, fraud
The American Medical Association (AMA) has written a series of letters urging legislative safeguards to prevent the misuse of artificial intelligence (AI) in the medical and mental health fields. AI has been used to promote medical misinformation, spread fraud, and erode confidence in public health services, including through deepfake videos impersonating medical professionals and through chatbots providing misleading or dangerous health advice.

"We shouldn't have to make the public detectives to determine whether something's not a deepfake," Axios cited AMA CEO John Whyte as saying.

In one case, the scientific journal Nature reported earlier this month that a research team from the University of Gothenburg in Sweden uploaded two fake medical papers describing the fictional disease "bixonimania." The information about the made-up disease was quickly absorbed and reused by AI systems such as Microsoft Bing's Copilot, Google's Gemini, the Perplexity AI answer engine, and OpenAI's ChatGPT.

"We have always been transparent about the limitations of generative AI and provide in-app prompts to encourage users to double-check information," a Google spokesperson said of the experiment. "For sensitive matters such as medical advice, Gemini recommends users consult with qualified professionals."

In another case, Dr. Sanjay Gupta, chief medical correspondent for CNN, had his appearance replicated last year in a deepfake video claiming to sell a cure for Alzheimer's disease.

"What is so striking to me now is that stuff that shows up in my feed is demonstrably, objectively not true, and yet it is there," Gupta said on CNN's Terms of Service podcast, "and it is shared over and over and over again. So nowadays it seems like the currency is clickbait, you know. Putting out things that are demonstrably not true has become very, very normal."

Gupta also discussed how lifelike AI deepfakes have become, saying that even other doctors have been fooled by videos featuring him.

In Pennsylvania, a recent lawsuit against Character Technologies Inc. alleges that Character.AI chatbots have claimed to be licensed medical professionals, including psychiatrists. The lawsuit describes how an investigator created a free account on Character.AI and conducted a discussion with a chatbot named "Emilie," described on the site as "Doctor of psychiatry. You are her patient." During the conversation, the character claimed to be licensed as a doctor in Pennsylvania and listed a fictional license number.

"We will not allow companies to deploy AI tools that mislead people into believing they are receiving advice from a licensed medical professional," said Pennsylvania Governor Josh Shapiro. "My Administration is taking action to protect Pennsylvanians, enforce the law, and make sure new technology is used safely. Pennsylvania will continue leading the way in holding bad actors accountable and setting clear guardrails so people can use new technology responsibly."

The AMA's recommendations to Congress included further regulation of AI chatbots and tools, intended to counter the spread of misinformation and mistrust.
Some of the key issues addressed in the AMA's letters were a requirement for increased transparency in chatbots intended for mental health support; regulatory boundaries preventing general-purpose AI chatbots from diagnosing illnesses without approval from the FDA; discouraging or prohibiting advertisements within AI chatbots; and reinforcing privacy protections in the collection of personal details by chatbots.

"AI-enabled tools may help expand access to mental health resources and support innovation in health care delivery, but they lack consistent safeguards against serious risks, including emotional dependency, misinformation, and inadequate crisis response," Whyte said. "With thoughtful oversight and accountability, policymakers can support innovation and ensure technologies prioritize patient safety, strengthen public trust, and responsibly complement, not replace, clinical care."

Legislators seek to address AI healthcare issues

Some regulatory bodies have already begun examining the prospect of AI legislation to address healthcare-related issues. In February, California State Senator Lena Gonzalez introduced Senate Bill 1146, sponsored by the California Medical Association (CMA), which would establish clear prohibitions and penalties against those who advertise health products without disclosing the use of AI deepfakes.

"The physician-patient relationship is built on a foundation of trust. When bad actors use AI to steal a doctor's identity to sell and market to vulnerable patients, they are not just committing fraud; they are putting lives at risk," said CMA President René Bravo, M.D. "Patients should not have to question whether the medical advice they receive is coming from a real doctor or a fake AI version. SB 1146 is a necessary step to restore integrity to health information online and hold scammers accountable."

According to the National Conference of State Legislatures, 43 states have introduced 263 bills related to AI in healthcare, of which only 17 have been enacted.
The American Medical Association is calling for urgent federal and state action as AI deepfakes increasingly target doctors to promote unproven supplements and medical devices. High-profile physicians like CNN's Sanjay Gupta have been impersonated in convincing fake videos, while AI chatbots falsely claim medical licenses. The crisis threatens to erode public trust in healthcare and puts patient safety at risk.
The American Medical Association (AMA) has escalated its response to what CEO John Whyte describes as a public health and safety crisis, urging federal and state lawmakers to address the misuse of artificial intelligence through modernized identity protections and legal reforms [1]. Physicians across the country are discovering their identities exploited in AI deepfakes that promote fraudulent health products, from wellness supplements to unapproved medical devices, creating a threat that extends beyond individual reputation to patient safety and the physician-patient relationship [2].
The scale of the issue has become mainstream, with Whyte noting that "everyone knows someone who this has impacted," though many cases go unreported due to embarrassment [1]. High-profile victims include CNN's Sanjay Gupta, whose likeness appeared in deepfake videos promoting a breakthrough Alzheimer's cure that were so convincing even fellow doctors were deceived. "What was different this time around was just the quality of these ads," Gupta told CNN's Terms of Service podcast, adding that "stuff that shows up in my feed is demonstrably, objectively not true, and yet it is there" [2].

Beyond video deepfakes, artificial intelligence chatbots have crossed another dangerous threshold by impersonating licensed medical professionals. Pennsylvania's medical board issued a cease and desist order against Character Technologies Inc. after investigators discovered a chatbot named "Emilie," described as a "Doctor of psychiatry," claiming to hold a Pennsylvania medical license with a fictional license number [2]. Pennsylvania Governor Josh Shapiro emphasized that "we will not allow companies to deploy AI tools that mislead people into believing they are receiving advice from a licensed medical professional" [2].
The AMA's letters to Congress outline specific regulatory boundaries, including preventing general-purpose chatbots from diagnosing illnesses without FDA approval, requiring increased transparency in mental health support tools, and reinforcing privacy protections in the collection of personal data by chatbots [2]. "We shouldn't have to make the public detectives to determine whether something's not a deepfake," Whyte stated, highlighting the burden placed on patients to verify authenticity [2].

The capacity for AI to spread medical misinformation extends to how these systems learn and disseminate information. Researchers from the University of Gothenburg in Sweden demonstrated this vulnerability by uploading fake medical papers describing the fictional disease "bixonimania" to test how quickly AI systems would absorb and reuse the fabricated information. Microsoft Bing's Copilot, Google's Gemini, the Perplexity AI answer engine, and OpenAI's ChatGPT all incorporated the false data, prompting a Google spokesperson to acknowledge that "for sensitive matters such as medical advice, Gemini recommends users consult with qualified professionals" [2].
The deepfake problem extends beyond impersonating doctors to fabricating medical data itself. Health systems are uncovering faked diagnostic images and clinical data that create insurance fraud and cybersecurity risks. A recent study in Radiology found that most clinicians failed to spot deepfake X-rays, with one-quarter missing the fakes even after being warned to look for telltale characteristics like unnatural soft tissue textures and overly smooth bone surfaces [1]. Lead author Mickael Tordjman of the Icahn School of Medicine at Mount Sinai warned that "there is also a significant cybersecurity risk if hackers were to gain access to a hospital's network and inject synthetic images to manipulate patient diagnoses or cause widespread clinical chaos" [1].

Doctors themselves face legal jeopardy: they could be sued if patients are harmed by taking counterfeit products or following advice the real physician never gave. The AMA is seeking guidance on how targeted physicians should respond and how malpractice and cyber liability insurance can provide protection [1].

California has emerged as a leader in the regulatory response, having already implemented disclosure requirements for AI-generated ads. Senate Bill 1146, introduced by State Senator Lena Gonzalez and sponsored by the California Medical Association, would establish clear prohibitions and penalties against those who advertise health products without disclosing the use of deepfakes [2]. California Medical Association President René Bravo emphasized that "when bad actors use AI to steal a doctor's identity to sell and market to vulnerable patients, they are not just committing fraud; they are putting lives at risk" [2].

The AMA is also pushing tech platforms to remove impersonations more quickly and demanding a crackdown on deepfake creators [1]. Whyte stressed that "AI-enabled tools may help expand access to mental health resources and support innovation in health care delivery, but they lack consistent safeguards against serious risks, including emotional dependency, misinformation, and inadequate crisis response" [2]. The profusion of AI content on social media platforms is eroding public trust in the medical establishment at a time when credibility can be the difference between life and death, making transparency and patient safety paramount as legislators work to establish guardrails that allow responsible innovation while protecting the public [1].