2 Sources
[1]
Dr. ChatGPT Will See You Now
Patients and doctors are turning to AI for diagnoses and treatment recommendations, often with stellar results, but problems arise when experts and algorithms disagree.

A poster on Reddit lived with a painful clicking jaw, the result of a boxing injury, for five years. They saw specialists and got MRIs, but no one could offer a fix, until they described the problem to ChatGPT. The AI chatbot suggested that a specific jaw-alignment issue might be the cause and offered a technique involving tongue placement as a treatment. The individual tried it, and the clicking stopped. "After five years of just living with it," they wrote on Reddit in April, "this AI gave me a fix in a minute." The story went viral, with LinkedIn cofounder Reid Hoffman sharing it on X. And it's not a one-off: Similar stories are flooding social media -- of patients purportedly getting accurate assessments of their MRI scans or X-rays from LLMs.

Courtney Hofmann's son has a rare neurological condition. After 17 doctor visits over three years without receiving a diagnosis, she gave all of his medical documents, scans, and notes to ChatGPT. It provided her with an answer -- tethered cord syndrome, in which the spinal cord can't move freely because it's attached to tissue around the spine -- that she says the physicians treating her son had missed. "He had surgery six weeks from when I used ChatGPT, and he is a new kid now," she told a New England Journal of Medicine podcast in November 2024.

Consumer-friendly AI tools are changing how people seek medical advice on both symptoms and diagnoses. The era of "Dr. Google" is giving way to the age of "Dr. ChatGPT." Medical schools, physicians, patient groups, and the chatbots' creators are racing to catch up, trying to determine how accurate these LLMs' medical answers are, how patients and doctors can best use them, and what to do when patients are given false information.
"I'm very confident that this is going to improve health care for patients," says Adam Rodman, a Harvard Medical School instructor and practicing physician. "You can imagine lots of ways people could talk to LLMs that might be connected to their own medical records."

Rodman has already seen patients turn to AI chatbots during his own hospital rounds. On a recent shift, he was juggling care for more than a dozen patients when one woman, frustrated by a long wait, took a screenshot of her medical records and plugged it into an AI chatbot. "She's like, 'I already asked ChatGPT,'" Rodman says, and it gave her the right answer about her condition, a blood disorder.

Rodman wasn't put off by the exchange. As an early adopter of the technology and chair of the group that guides the use of generative AI in the Harvard Medical School curriculum, he sees potential for AI to give physicians and patients better information and improve their interactions. "I treat this as another chance to engage with the patient about what they are worried about," he says.

The key word here is potential. Several studies have shown that AI can, in certain circumstances, provide accurate medical advice and diagnoses, but when these tools are put in people's hands -- whether doctors' or patients' -- accuracy often falls. Users can make mistakes, like not providing all of their symptoms to the AI, or discarding the right information when it is fed back to them.
[2]
ChatGPT as Doctor: When Consumers Rely on AI for Medical Advice | PYMNTS.com
According to the post, which was shared recently on X by OpenAI President Greg Brockman, the user experienced unexplained symptoms for more than 10 years, undergoing spinal MRIs, CT scans, blood tests, and even checks for Lyme disease -- all to no avail. After they entered lab results and symptom history into ChatGPT, the AI flagged a potential connection to the A1298C mutation in the MTHFR gene. A physician later confirmed the diagnosis, and B12 supplementation subsequently "largely resolved" the symptoms. The doctor was "super shocked" that ChatGPT had correctly diagnosed the condition. "Not sure how they didn't think to test me for MTHFR mutation," the post said.

About 1 in 6 adults ask AI chatbots for health information and advice at least once a month, according to the KFF Health Misinformation Tracking Poll. Among adults aged 18 to 29, the share rises to 25%. The next largest group is adults aged 30 to 49, at 19%; those 50 to 64 come in at 15%, and those over 65 at 10%. When it comes to trusting that information, however, 56% of those who use AI are not confident about its accuracy. Adults under 50, as well as Black and Hispanic adults, tend to trust the data more than older white respondents.

Kim Rippy, practice owner and licensed counselor at Keystone Therapy Group, told PYMNTS that clients have used ChatGPT or other AI for "substitute therapy," which is "both helpful and dangerous at the same time." For those with ADHD, ChatGPT can help summarize or organize their thoughts. "You can 'thought dump' into the system and the AI program can return your thoughts to you in a clear, succinct format. This can help you better understand your own thoughts and potentially improve your ability to communicate."
But the dangers are that ChatGPT can never fully understand the patient's experience and "can't pick up on nuances of language, behaviors, nonverbals, tone, syntax and emotion that a human therapist can," Rippy said. "ChatGPT can't challenge unhealthy cognitions, or even pick up on when those may be occurring for someone. ChatGPT can't gauge when someone is at-risk and may push someone past their ability to safely regulate themselves. ... [It] also doesn't hold people accountable." In the end, AI chatbots should be "recognized as a coping tool for organizing thoughts, just as journaling or meditation can," Rippy said.

An AI and mental health survey by Iris Telehealth shared with PYMNTS showed that 65% of patients feel comfortable using AI assessment tools and chatbots before speaking with a human provider. But 70% worry about the privacy and security of their data, and 55% question the accuracy of the chatbot's assessments of their condition.

Dr. Angela Downey, a family physician, told PYMNTS that AI can be helpful in guiding people toward possible diagnoses, especially if they've felt "dismissed or overlooked" in the past. These chatbots work around the clock and process a lot of information quickly. "But there are limits," Downey said. "AI can't examine you or pick up on subtle cues, and it can delay proper care if taken as a substitute for medical advice. It can offer a list of possibilities, but you still need a trained clinician to put the full picture together."

For Gil Spencer, CTO of WitnessAI, however, it was a lifesaver. He told PYMNTS that after he injured his knee skiing, radiologists' readings of his MRI scans were inconclusive, so he turned to ChatGPT, uploading the scans through a multimodal prompt workflow he had created. The AI correctly diagnosed a major meniscus tear and confirmed his ACL was intact. His surgeon later validated the diagnosis.
AI chatbots like ChatGPT are increasingly being used for medical advice and diagnoses, sometimes with remarkable success. However, this trend raises questions about accuracy, patient safety, and the changing dynamics between patients and healthcare providers.
In recent years, there has been a significant shift in how people seek medical advice, with artificial intelligence (AI) chatbots like ChatGPT increasingly being used for diagnoses and treatment recommendations. This trend, dubbed the era of "Dr. ChatGPT," is rapidly changing the landscape of healthcare information access 1 2.
Several anecdotal cases have gone viral on social media, showcasing the potential of AI in medical diagnosis. For instance, a Reddit user reported resolving a five-year-old jaw problem after consulting ChatGPT, which suggested a specific jaw-alignment issue and offered a treatment technique 1. In another case, Courtney Hofmann turned to ChatGPT after 17 inconclusive doctor visits for her son's rare neurological condition. The AI suggested tethered cord syndrome, leading to successful surgery and significant improvement in her son's condition 1.
Source: PYMNTS
The accessibility of AI-powered medical advice is changing how patients interact with healthcare systems. According to a KFF Health Misinformation Tracking Poll, about one in six adults consult AI chatbots for health information at least once a month, with higher usage among younger age groups 2. This trend is reshaping the traditional doctor-patient relationship, as exemplified by Adam Rodman's experience with a patient who had already consulted ChatGPT before their interaction 1.
Proponents of AI in healthcare, like Harvard Medical School instructor Adam Rodman, see significant potential for improving patient care. AI tools can process vast amounts of medical data quickly, potentially catching issues that human doctors might miss 1 2. For mental health applications, Kim Rippy of Keystone Therapy Group notes that AI can help organize thoughts for patients with ADHD, acting as a useful coping tool 2.
However, the integration of AI in healthcare is not without risks. While 65% of patients feel comfortable using AI assessment tools before speaking with a human provider, 70% worry about data privacy and security, and 55% question the accuracy of AI-generated assessments 2. Dr. Angela Downey emphasizes that while AI can guide people towards possible diagnoses, it cannot replace the nuanced assessment of a trained clinician 2.
Source: Wired
Several studies have shown that AI can provide accurate medical advice in certain circumstances. However, the accuracy often decreases when these tools are used by individuals, whether they are doctors or patients 1. Common mistakes include users not providing all relevant symptoms to the AI or disregarding correct information provided by the system.
As AI continues to evolve, medical schools, physicians, patient groups, and AI developers are working to determine the best practices for integrating these tools into healthcare. There's a growing recognition that AI could potentially improve healthcare outcomes, especially when connected to patients' medical records 1.
However, experts stress the importance of using AI as a complementary tool rather than a replacement for professional medical advice. Dr. Downey notes that while AI can offer a list of diagnostic possibilities, a trained clinician is still needed to synthesize the full picture and make accurate diagnoses 2.
As the integration of AI in healthcare continues to advance, it's clear that while it offers promising potential, careful consideration must be given to its limitations and the ethical implications of its use in medical settings.