2 Sources
[1]
ChatGPT in Your Clinic: Who's the Expert Now?
Patients arriving at appointments with researched information is nothing new, but artificial intelligence (AI) tools such as ChatGPT are changing the dynamics. Their confident presentation can leave physicians feeling that their expertise is being challenged. Kumara Raja Sundar, MD, a family medicine physician at Kaiser Permanente Burien Medical Center in Burien, Washington, highlighted this trend in a recent article published in JAMA.

A patient visited Sundar's clinic reporting dizziness and described her symptoms with unusual precision: "It's not vertigo, but more like a presyncope feeling." She then suggested that a tilt table test might be useful for diagnosis. Occasionally, patient questions reveal a subtle familiarity with medical jargon, which may indicate that they either have relevant training or have studied the subject extensively. Curious, Sundar asked if she worked in the healthcare sector. She replied that she had consulted ChatGPT, which recommended the tilt table test.

For years, patients have brought newspaper clippings, internet research, and advice from friends and relatives to consultations, and suggestions shared in WhatsApp groups have become a regular part of clinical discussions. Sundar noted that this particular encounter was different. The patient's tone and level of detail conveyed competence, and the confidence with which she presented the information subtly challenged his clinical judgment and treatment plans.

It is not surprising that large language models (LLMs) such as ChatGPT are appealing. Recent studies have confirmed their remarkable strengths in logical reasoning and interpersonal communication. However, a direct comparison between LLMs and physicians is unfair. Clinicians often face immense pressure, including constrained consultation times, overflowing inboxes, and a healthcare system that demands productivity and efficiency. Even skilled professionals struggle to perform optimally under adverse conditions, whereas generative AI is functionally limitless. This imbalance creates an unrealistic benchmark; nevertheless, it is today's reality.

Patients want clear answers, but they also want to feel heard, understood, and reassured. "Unfortunately, under the weight of competing demands, that is what often slips for me, not just because of systemic constraints but also because I am merely human," Sundar wrote.

Despite the capabilities of generative AI, patients still visit doctors. Though these tools deliver confidently worded suggestions, they inevitably conclude: "Consult a healthcare professional." The ultimate responsibility for liability, diagnostics, prescriptions, and sick notes remains with physicians. In practice, this means dealing with requests such as a tilt table test for intermittent dizziness, a request that is not uncommon but often inappropriate.

"I find myself explaining concepts such as overdiagnosis, false positives, or other risks of unnecessary testing. At best, the patient understands the ideas, which may not resonate when one is experiencing symptoms. At worst, I sound dismissive. There is no function that tells ChatGPT that clinicians lack routine access to tilt-table testing or that echocardiogram appointments are delayed because of staff shortages. I have to carry those constraints into the examination room while still trying to preserve trust," Sundar emphasized in his article.
"When I speak with medical students, I notice a different kind of paternalism creeping in," Sundar wrote, adding that he has caught it in his own inner monologue even when he does not say it aloud. The old line, "They probably WebMD'd it and think they have cancer," has morphed into the newer, just-as-dismissive line, "They probably ChatGPT'd it and are going to tell us what to order." Such thinking often reflects defensiveness from clinicians rather than genuine engagement and carries an implicit message: We still know best.

"It is an attitude that risks eroding the sacred and fragile trust between clinicians and patients. It reinforces the feeling that we are not 'in it' with our patients and are truly gatekeeping rather than partnering. Ironically, that is often why I hear patients turn to LLMs in the first place," Sundar concluded.

One patient said plainly, "This is how I can advocate for myself better." The word "advocate" struck Sundar, capturing the effort required to persuade someone with more authority. Although clinicians still control access to tests, referrals, and treatment plans, the term conveys a sense of preparing for a fight. When patients feel unheard, gathering knowledge becomes a strategy for being taken seriously. In such situations, the usual approach of explaining false-positive test results, overdiagnosis, and test characteristics is often ineffective. From the patient's perspective, it sounds more like, "I still know more than you, no matter what tool you used, and I'm going to overwhelm you with things you don't understand."

The role of physicians is constantly evolving, and the transition from physician-as-authority to physician-as-advisor is intensifying. Patients increasingly present with expectations shaped by non-evidence-based sources, often misaligned with clinical reality. As Sundar observed, "They arm themselves with knowledge to be heard." This creates a professional duty to respond with understanding rather than resistance. His approach centers on emotional acknowledgment before clinical discussion: "I say, 'We'll discuss diagnostic options together. But first, I want to express my condolences. I can hardly imagine how you feel. I want to tackle this with you and develop a plan.'" He emphasized, "This acknowledgment was the real door opener."

What began as a US trend observed by Sundar has now spread worldwide, with patients increasingly arriving at consultations armed with medical knowledge from tools like ChatGPT rather than just "Dr Google." Clinicians across health systems report that digitally informed patients now make up the majority. In a forum discussion, physicians from various disciplines shared their experiences, noting that patients who arrive pre-informed are now the norm. Inquiries often focus on specific laboratory values, particularly vitamin D or hormone tests.

In gynecologic consultations, internet research on menstrual disorders has become a routine part of patient interactions, with an overwhelming range of answers available online. "Chanice," a Coliquio user and gynecologist, shared, "The answers range from 'It's normal; it can happen' to 'You won't live long.' It's also common to Google medication side effects, and usually, women end up experiencing pretty much every side effect, even though they didn't have them before."

How should doctors respond to this trend? Opinions are clear: openness, education, and transparency are essential, ideally delivered in a structured manner. "Get the patients on board; educate them. In writing!
Each and every one of them. Once it's written down, it's no longer extra work. Invest time in educating patients to correct misleading promises made by health insurance companies and politicians," commented another user, Jörg Christian Nast, a specialist in gynecology and obstetrics.

The presence of digitally informed patients is increasingly seen not only as a challenge but also as an opportunity. Conversations with these patients can be constructive, but they can also generate unrealistic demands or heated debates. A professional, calm, and explanatory approach therefore remains crucial, and at times a dose of humor can help. Another user, a specialist in internal medicine, added, "The term 'online consultation' takes on a whole new meaning."

The full forum discussion, "The Most Frequently Asked 'Dr. Google' Questions," can be found on Coliquio. Find out what young physicians think about AI and the evolving doctor-patient relationship in our interview with Christian Becker, MD, MHBA, University Medical Center Göttingen, Göttingen, Germany, and a spokesperson for the Young German Society for Internal Medicine.
[2]
Patients are bringing AI diagnoses and prescriptions to clinics: What does it mean for doctors?
Artificial intelligence is changing healthcare. Patients now use AI for diagnoses, sometimes challenging doctors. This creates pressure and trust issues. Doctors must address patient concerns and avoid defensiveness. A recent case showed AI giving dangerous advice, leading to hospitalization. Experts call for transparency and patient education. AI offers information but lacks medical judgment.

For years, patients have walked into clinics carrying clippings from newspapers, advice from friends, or the latest findings from WhatsApp groups. Today, they arrive with something far more sophisticated: a neatly packaged diagnosis or even a prescription generated by artificial intelligence. According to a recent Medscape report, this trend is rapidly reshaping the dynamics of clinical practice.

Dr. Kumara Raja Sundar, a family physician at Kaiser Permanente Burien Medical Center in Washington, described one such case in JAMA. A patient presented with dizziness and, with striking medical precision, said, "It's not vertigo, but more like a presyncope feeling." She confidently suggested a tilt table test for diagnosis. Intrigued, Sundar asked if she worked in healthcare. Her reply: she had asked ChatGPT. What stood out was not just the information but the confidence with which it was delivered, subtly challenging the physician's role as the sole authority.

Large language models such as ChatGPT have demonstrated impressive reasoning and communication abilities, but comparing them to doctors is problematic. Physicians juggle limited consultation time, staff shortages, and systemic pressures. AI, by contrast, appears limitless. Sundar observed in his article that this imbalance creates unrealistic expectations: "Unfortunately, under the weight of competing demands, what often slips for me is not accuracy, but making patients feel heard."

The arrival of AI-informed patients brings practical challenges. Requests for advanced or unnecessary tests, such as tilt table examinations or hormone panels, often collide with real-world constraints like delayed appointments or limited access. Sundar wrote that explaining overdiagnosis and false positives can sometimes sound dismissive rather than collaborative, further straining trust. The shift, he warned, risks fostering a new kind of defensiveness among clinicians: the quiet thought that a patient has "ChatGPT'd it" before walking into the room. Such attitudes, he argued, risk eroding fragile doctor-patient trust.

For some patients, AI tools are more than information sources; they are instruments of advocacy. One patient told Sundar, "This is how I can advocate for myself better." The language of advocacy reflects the effort required to be taken seriously in clinical spaces. Doctors, he emphasized, must resist gatekeeping and instead acknowledge patients' concerns before moving to clinical reasoning. His preferred approach is to begin with empathy: "I want to express my condolences. I can hardly imagine how you feel. I want to tackle this with you and develop a plan."

What Sundar has seen in the United States is not unique. The Medscape report highlights that doctors worldwide now face AI-informed patients as the norm rather than the exception. In Germany, gynecologists report women consulting ChatGPT about menstrual disorders, often encountering contradictory or alarming answers. Specialists in internal medicine note that Googling side effects leads patients to experience nearly all of them, even when they had none before.
Clinicians responding in online forums have called for transparency, structured patient education, and even humor as tools for navigating this new reality. One remarked that "online consultation takes on a whole new meaning" when AI walks into the room with the patient.

The blurred line between helpful guidance and hazardous misinformation was recently illustrated in a striking case reported in the Annals of Internal Medicine in August 2025. A 60-year-old man who wanted to cut down on table salt turned to ChatGPT for alternatives. The chatbot recommended sodium bromide, a compound more familiar in swimming pool maintenance than in home kitchens. Trusting the advice, he used the substance for several months until he landed in the hospital with paranoia, hallucinations, and severe electrolyte imbalances. Doctors diagnosed bromism, a condition rarely seen since the early 20th century, when bromide salts were widely prescribed.

Physicians treating the man noted bromide levels more than 200 times the upper limit of the reference range, explaining his psychiatric and neurological decline. After intensive fluid therapy and correction of his electrolytes, he recovered, but only after a three-week hospital stay. The case is a reminder that medical judgment requires not just knowledge but also context and responsibility, qualities AI does not yet possess.
The integration of AI tools like ChatGPT in healthcare is reshaping patient-doctor interactions, presenting both opportunities and challenges for medical professionals.
In recent years, the healthcare landscape has witnessed a significant shift as patients increasingly arrive at medical appointments armed with information from artificial intelligence (AI) tools like ChatGPT. This trend, highlighted by Dr. Kumara Raja Sundar in a JAMA article, is reshaping the traditional doctor-patient dynamic [1].
A notable example involved a patient who described her dizziness symptoms with unusual precision, suggesting a "presyncope feeling" and recommending a tilt table test. When questioned, the patient revealed that ChatGPT had provided this information [1]. This incident underscores the growing influence of AI in patient self-diagnosis and healthcare decision-making.
The integration of AI-generated information into clinical settings presents several challenges for healthcare professionals:
Expertise Challenge: Patients' confident presentation of AI-sourced information can make physicians feel their expertise is being questioned [1].
Unrealistic Expectations: AI tools, unburdened by real-world constraints, can create unrealistic expectations for diagnostic procedures and treatments [1].
Trust Issues: The need to explain concepts like overdiagnosis or false positives in response to AI suggestions can sometimes make doctors appear dismissive, potentially eroding patient trust [1].
The advent of AI in healthcare is accelerating the transition from physician-as-authority to physician-as-advisor. Dr. Sundar notes that patients are increasingly using AI tools as a means of self-advocacy, preparing themselves to be taken seriously in clinical settings [1].
This shift requires a new approach from healthcare professionals. Dr. Sundar suggests starting consultations with emotional acknowledgment before moving to clinical discussions, emphasizing empathy and partnership in tackling health issues [1].
While AI tools can provide valuable information, they lack medical judgment and context. A case reported in the Annals of Internal Medicine illustrates the potential dangers:
A 60-year-old man, following ChatGPT's advice to use sodium bromide as a salt alternative, developed severe bromism, resulting in a three-week hospitalization [2]. This incident highlights the critical importance of professional medical guidance in interpreting and applying AI-generated health information.
The phenomenon of AI-informed patients is not limited to the United States. Doctors worldwide are encountering similar challenges. In Germany, for instance, gynecologists report women consulting ChatGPT for menstrual disorders, often receiving contradictory or alarming information [2].
In response to these challenges, healthcare professionals are calling for transparency, structured patient education, and even a measure of humor in navigating consultations with AI-informed patients [2].
As AI continues to play an increasingly significant role in healthcare information dissemination, the medical community must adapt to maintain the delicate balance between leveraging technological advancements and preserving the essential human elements of healthcare delivery.