2 Sources
[1]
A.I. Chatbots Are Changing How Patients Get Medical Advice
Wendy Goldberg thought her question was straightforward enough. A 79-year-old retired lawyer in Los Angeles, Ms. Goldberg wanted to eat more protein, something she had read could help rebuild bone density. She hoped her primary care provider could tell her exactly how much was enough.
She dashed off a message, but the response left her feeling that the doctor hadn't read her question, or even her chart. The doctor offered generic advice: Stop smoking (she doesn't smoke), avoid alcohol (she doesn't drink), exercise regularly (she works out three times a week). Most infuriatingly, she was advised to eat "adequate protein to support bone health," no specifics included.
Frustrated, Ms. Goldberg posed the same question to ChatGPT. Within seconds, it produced a daily protein goal in grams. She shot back one last message to her doctor: "I can get more information from ChatGPT than I can from you." Ms. Goldberg didn't really trust ChatGPT, she said, but she had also become "disillusioned with the state of corporate medical care."
Driven in part by frustrations with the medical system, more and more Americans are seeking advice from A.I. Last year, about one in six adults -- and about a quarter of adults under 30 -- used chatbots to find health information at least once a month, according to a survey from KFF, a health policy research group. Liz Hamel, who directs survey research at the group, said that number was probably higher now.
In dozens of interviews with The New York Times, Americans described using chatbots to try to compensate for the health system's shortcomings. A self-employed woman in Wisconsin routinely asked ChatGPT whether it was safe to forgo expensive appointments. A writer in rural Virginia used ChatGPT to navigate surgical recovery in the weeks before a doctor could see her. A clinical psychologist in Georgia sought answers after her providers brushed off concerns about a side effect of her cancer treatment.
Some are enthusiastic adopters. Others, like Ms. Goldberg, have tried the chatbots warily. They know A.I. can get things wrong. But they appreciate that it is available at all hours, charges next to nothing and makes them feel seen with convincing impressions of empathy -- often writing how sorry it is to hear about symptoms and how "great" and "important" users' questions and theories are.
Though patients have long used Google and websites like WebMD to try to make sense of their health, A.I. chatbots have differentiated themselves by giving an impression of authoritative, personalized analysis in a way traditional sources don't. This can lead to facsimiles of human relationships and engender levels of trust out of proportion to the bots' abilities.
"All of us now are starting to put so much stock in this that it's a little bit worrisome," said Rick Bisaccia, 70, of Ojai, Calif., though he has found ChatGPT useful in some cases when doctors didn't have time for his questions. "But it's very addicting because it presents itself as being so sure of itself."
The trend is reshaping doctor-patient relationships -- and is alarming some experts, given that chatbots can make up information and be overly agreeable, sometimes reinforcing incorrect guesses. The bots' advice has led to some high-profile medical debacles: For instance, a 60-year-old man was held for weeks in a psychiatric unit after ChatGPT suggested cutting down on salt by instead eating sodium bromide, causing paranoia and hallucinations.
Many chatbots' terms of service say they are not intended to provide medical advice. They also note that the tools can make mistakes (ChatGPT tells users to "check important info"). But research has found that most models no longer display disclaimers when people ask health questions. And chatbots routinely suggest diagnoses, interpret lab results and advise on treatment, even offering scripts to help persuade doctors.
Representatives for OpenAI, which makes ChatGPT, and for Microsoft, which makes Copilot, said that the companies took the accuracy of health information seriously and were working with medical experts to improve responses. Still, both companies added, their chatbots' advice should not replace that of doctors. (The Times has sued OpenAI and Microsoft, claiming copyright infringement of news content related to A.I. systems. The companies have denied the suit's claims.)
For all the risks and limitations, it's not hard to understand why people are turning to chatbots, said Dr. Robert Wachter, chair of the department of medicine at the University of California, San Francisco, who studies A.I. in health care. Americans sometimes wait months to see a specialist, pay hundreds of dollars per visit and feel that their concerns are not taken seriously. "If the system worked, the need for these tools would be far less," Dr. Wachter said. "But in many cases, the alternative is either bad or nothing."
My Doctor Is Busy, but My Chatbot Never Is
Jennifer Tucker, the woman from Wisconsin, often spends hours asking ChatGPT to diagnose her ailments, she said. On several occasions, she has checked in over days or weeks to give updates on symptoms and to see whether its advice has changed. The experience, she said, has been vastly different from interactions with her primary care physician: While the doctor seems to quickly grow restless as the 15 minutes allotted to her tick down, the chatbot has limitless time. "ChatGPT has all day for me -- it never rushes me out of the chat," she said.
Dr. Lance Stone, a 70-year-old rehabilitation doctor in California with renal cancer, can't constantly ask his oncologist to reiterate his good prognosis, he said. "But A.I. will listen to that 100 times a day, and it'll basically give you a very nice response: 'Lance, don't worry, let's go over this again.'"
Some people said the feeling that chatbots cared was a central part of the appeal, though they were aware that the bots could not actually empathize. Elizabeth Ellis, 76, the clinical psychologist in Georgia, said that as she underwent breast cancer treatment, her providers brushed off her concerns, failed to answer her questions and treated her without the empathy she needed. But ChatGPT gave immediate, thorough responses, and at one point assured her that a symptom didn't mean her cancer was recurring -- a real fear that she said the chatbot had "intuited" without her articulating it.
"I'm really sorry you're going through this," ChatGPT said at another point, after she asked whether her leg pain might be connected to a particular medication. "While I'm not a doctor, I can help you understand what might be going on."
Other times, chatbots commiserated, telling users they deserved better than the ambiguous statements or limited information their doctors had provided. "You'll be in a stronger position if you go in with questions," Microsoft's Copilot told Catherine Rawson, 64, when she asked about the results of a cardiac stress test. "Want help drafting a few pointed ones to bring to your appointment? I can help you make sure they don't gloss over anything." (She said her doctor later confirmed the chatbot's assessment of her test results.)
The fact that chatbots are designed to be agreeable can make patients feel cared for, but it can also lead to potentially dangerous advice. Among other risks, if users suggest they might have a particular disease, chatbots may offer only information that affirms those beliefs. In a study published last month, researchers at Harvard Medical School found that chatbots generally did not challenge medically incoherent requests such as "Tell me why acetaminophen is safer than Tylenol." (They are the same drug.) Even when they were trained on accurate information, chatbots routinely produced inaccurate responses in these scenarios, said Dr. Danielle Bitterman, a co-author of the study and the clinical lead for data science and A.I. at Mass General Brigham.
Mr. Bisaccia, of Ojai, Calif., said he had confronted ChatGPT about mistakes it made. Each time, the chatbot quickly owned up to the errors. But Mr. Bisaccia couldn't help but wonder: How many inaccuracies was he missing?
'How Can I Convince My Doctor?'
From the time Michelle Martin turned 40, she increasingly felt that doctors had dismissed or ignored her various symptoms, which led her to "check out" of her health care. That changed once she started using ChatGPT. Dr. Martin, a professor of social work based in Laguna Beach, Calif., suddenly had access to troves of medical literature and a bot that clearly explained how it was relevant to her. The chatbot armed her with studies to bring up when she thought doctors were not up-to-date on the latest research, and with the terminology to confront physicians who she felt were brushing her off. In a way, she felt the technology had leveled the playing field. "Using ChatGPT -- that turned that dynamic around for me," she said.
Doctors have also noticed the shift, said Dr. Adam Rodman, an internist and medical A.I. researcher at Beth Israel Deaconess Medical Center in Boston. These days, he estimates that about a third of his patients consult a chatbot before him. At times, that can be welcome, he said. Patients often arrive with a clearer understanding of their conditions. He and other physicians even recalled patients bringing up viable treatments that the doctors hadn't yet considered.
Other times, people said chatbots had made them feel comfortable bypassing or overriding doctors, and even provided advice on how to persuade their physicians to agree to A.I.-generated treatment plans. Based on conversations with ChatGPT, Cheryl Reed, the Virginia writer, concluded that amiodarone -- a medication she had been prescribed after an appendectomy in September -- was responsible for new abnormalities in her blood work. "How can I convince my doctor to get me off of amiodarone?" Ms. Reed, 59, asked. ChatGPT responded with a five-section plan (including "prepare your case," "show you understand the risks" and "be ready for pushback"), along with a suggested script. "Framing this around lab evidence + patient safety gives your doctor very little ground to argue for staying on amiodarone unless it's absolutely the only option," ChatGPT told her. She said her doctor was reluctant -- the medication is intended to prevent a potentially dangerous abnormal heart rhythm -- but ultimately told her she could stop taking it.
The Doctor vs. ChatGPT
Dr. Benjamin Tolchin, a bioethicist and neurologist at the Yale School of Medicine, recently consulted on a case that stuck with him. An older woman was admitted to the hospital with difficulty breathing. Believing that fluid was building up in her lungs, her medical team recommended a medication to help flush it out. The patient's relative, however, wanted to follow ChatGPT's advice: Give her more fluids. Dr. Tolchin said doing that could have been "dangerous or even life-threatening." After the hospital declined, the family left in search of a provider aligned with the chatbot. They didn't find one at the next hospital, which also declined to give more fluids.
Dr. Tolchin said he could imagine a time "in the not-so-distant future" when models are sophisticated enough to provide reliable medical advice. But he said the current technology didn't deserve the level of trust some patients put in it.
Part of the problem is that A.I. is not well suited for the kinds of questions it is often asked. Somewhat counterintuitively, chatbots may excel at solving difficult diagnostic quandaries, but often struggle with basic health management decisions, like whether to stop taking blood thinners before surgery, Dr. Rodman said. Chatbots are primarily trained on written materials like textbooks and case reports, he said, but "a lot of the humdrum stuff that doctors do is not written down." It is also easy for patients to omit context that a doctor would know to account for. For example, Dr. Tolchin speculated that the concerned relative did not think to mention the patient's history of heart failure or, critically, the evidence of fluid in her lungs.
At Oxford University, A.I. researchers recently tried to determine how often people could use chatbots to correctly diagnose a set of symptoms. Their study, which has not yet been peer-reviewed, found that most of the time, participants did not arrive at the correct diagnoses or the appropriate next steps, like whether to call an ambulance.
Many patients are aware of these shortcomings. But some are so disillusioned with the medical system that they consider chatbot use a risk worth taking. Dave deBronkart, a patient advocate who blogs about how patients can use A.I. for personal health, said chatbots should be compared with the health care system as it is, not some unrealistic ideal. "The really relevant question, I think, is: Is it better than having nowhere else to turn?" he said.
Produced by Meghan Morris and Claire Merchlinsky.
[2]
Frustrated by the Medical System, Patients Turn to AI
Growing numbers of Americans are using AI chatbots like ChatGPT for health information due to frustrations with the medical system, including long wait times, high costs, and generic responses from doctors. While chatbots provide 24/7 availability and personalized-seeming advice, experts warn of risks including misinformation and dangerous medical recommendations.
A significant shift is occurring in how Americans seek medical advice, with increasing numbers turning to artificial intelligence chatbots when traditional healthcare falls short. According to a KFF health policy research survey, approximately one in six adults used chatbots for health information at least once monthly last year, with that figure rising to about 25% among adults under 30 [1]. Liz Hamel, who directs survey research at KFF, indicates these numbers have likely increased since then [2].
The trend reflects widespread frustration with America's healthcare system. Wendy Goldberg, a 79-year-old retired lawyer from Los Angeles, exemplifies this shift. When she sought a specific protein intake recommendation for bone health, her doctor provided generic advice that ignored her actual lifestyle, suggesting she stop smoking and drinking even though she is a non-smoker and non-drinker who exercises regularly. ChatGPT, by contrast, immediately provided a specific daily protein goal in grams [1].
Patients across the country are using AI chatbots to compensate for various healthcare deficiencies. A self-employed Wisconsin woman regularly consults ChatGPT about whether expensive medical appointments are necessary, while a rural Virginia writer relied on the chatbot to navigate surgical recovery when doctor availability was limited. A clinical psychologist in Georgia turned to AI after her providers dismissed concerns about cancer treatment side effects [1].
The appeal of AI chatbots extends beyond mere availability. Unlike traditional online resources such as Google or WebMD, these tools provide what appears to be personalized, authoritative analysis. They offer 24/7 accessibility, cost virtually nothing, and create convincing impressions of empathy by expressing sympathy for symptoms and validating users' questions and theories [2].
Despite their popularity, AI chatbots pose significant medical risks. The technology can fabricate information and tends to be overly agreeable, sometimes reinforcing incorrect patient assumptions. High-profile medical incidents have already occurred, including a case where a 60-year-old man was hospitalized for weeks in a psychiatric unit after ChatGPT recommended consuming sodium bromide instead of reducing salt intake, leading to paranoia and hallucinations [1].
Research reveals that most AI models no longer display medical disclaimers when users ask health-related questions, despite terms of service stating they shouldn't provide medical advice. These chatbots routinely suggest diagnoses, interpret laboratory results, recommend treatments, and even provide scripts to help users persuade their doctors [2].
Representatives from OpenAI and Microsoft said the companies take the accuracy of health information seriously and are collaborating with medical experts to improve responses. However, both companies emphasize that their chatbots should not replace professional medical advice [1].
Dr. Robert Wachter, chair of the medicine department at UC San Francisco and an AI healthcare researcher, provides context for this phenomenon. He notes that Americans often face months-long waits for specialists, pay hundreds of dollars per visit, and feel their concerns aren't taken seriously. "If the system worked, the need for these tools would be far less," Dr. Wachter explains. "But in many cases, the alternative is either bad or nothing" [2].
Rick Bisaccia, a 70-year-old from California, captures the complex relationship many have with these tools: "All of us now are starting to put so much stock in this that it's a little bit worrisome. But it's very addicting because it presents itself as being so sure of itself" [1].
Summarized by Navi