Curated by THEOUTPOST
On Mon, 9 Sept, 8:00 AM UTC
3 Sources
[1]
ChatGPT outperforms trainee doctors in respiratory disease assessments
European Respiratory Society | Sep 9, 2024

The chatbot ChatGPT performed better than trainee doctors in assessing complex cases of respiratory disease in areas such as cystic fibrosis, asthma and chest infections in a study presented at the European Respiratory Society (ERS) Congress in Vienna, Austria. The study also showed that Google's chatbot Bard performed better than trainees in some aspects and Microsoft's Bing chatbot performed as well as trainees.

The research suggests that these large language models (LLMs) could be used to support trainee doctors, nurses and general practitioners to triage patients more quickly and ease pressure on health services.

The study was presented by Dr Manjith Narayanan, a consultant in paediatric pulmonology at the Royal Hospital for Children and Young People, Edinburgh, and honorary senior clinical lecturer at the University of Edinburgh, UK.

"Large language models, like ChatGPT, have come into prominence in the last year and a half with their ability to seemingly understand natural language and provide responses that can adequately simulate a human-like conversation. These tools have several potential applications in medicine. My motivation to carry out this research was to assess how well LLMs are able to assist clinicians in real life."

Dr Manjith Narayanan, consultant in paediatric pulmonology, Royal Hospital for Children and Young People, Edinburgh

To investigate this, Dr Narayanan used clinical scenarios that occur frequently in paediatric respiratory medicine. The scenarios were provided by six other experts in paediatric respiratory medicine and covered topics like cystic fibrosis, asthma, sleep disordered breathing, breathlessness and chest infections. They were all scenarios where there is no obvious diagnosis, and where there is no published evidence, guidelines or expert consensus that points to a specific diagnosis or plan.

Ten trainee doctors who had less than four months of clinical experience in paediatrics were given an hour where they could use the internet, but not any chatbots, to solve each scenario with a descriptive answer of 200 to 400 words. Each scenario was also presented to the three chatbots.

All the responses were scored by six paediatric respiratory experts for correctness, comprehensiveness, usefulness, plausibility and coherence. They were also asked to say whether they thought each response was human- or chatbot-generated and to give each response an overall score out of nine.

Solutions provided by ChatGPT version 3.5 scored an average of seven out of nine overall and were believed to be more human-like than responses from the other chatbots. Bard scored an average of six out of nine and was scored as more 'coherent' than trainee doctors, but in other respects was no better or worse than trainee doctors. Bing scored an average of four out of nine - the same as trainee doctors overall. Experts reliably identified Bing and Bard responses as non-human.

Dr Narayanan said: "Our study is the first, to our knowledge, to test LLMs against trainee doctors in situations that reflect real-life clinical practice. We did this by allowing the trainee doctors to have full access to resources available on the internet, as they would in real life. This moves the focus away from testing memory, where there is a clear advantage for LLMs. Therefore, this study shows us another way we could be using LLMs and how close we are to regular day-to-day clinical application.

"We have not directly tested how LLMs would work in patient-facing roles. However, it could be used by triage nurses, trainee doctors and primary care physicians, who are often the first to review a patient."

The researchers did not find any obvious instances of 'hallucinations' (seemingly made-up information) with any of the three LLMs. "Even though, in our study, we did not see any instance of hallucination by LLMs, we need to be aware of this possibility and build mitigations against this," Dr Narayanan added.

Answers that were judged to be irrelevant to the context were occasionally given by Bing, Bard and the trainee doctors. Dr Narayanan and his colleagues are now planning to test chatbots against more senior doctors and to look at newer and more advanced LLMs.

Hilary Pinnock is ERS Education Council Chair and Professor of Primary Care Respiratory Medicine at The University of Edinburgh, UK, and was not involved in the research. She says: "This is a fascinating study. It is encouraging, but maybe also a bit scary, to see how a widely available AI tool like ChatGPT can provide solutions to complex cases of respiratory illness in children. It certainly points the way to a brave new world of AI-supported care.

"However, as the researchers point out, before we start to use AI in routine clinical practice, we need to be confident that it will not create errors either through 'hallucinating' fake information or because it has been trained on data that does not equitably represent the population we serve. As the researchers have demonstrated, AI holds out the promise of a new way of working, but we need extensive testing of clinical accuracy and safety, pragmatic assessment of organisational efficiency, and exploration of the societal implications before we embed this technology in routine care."

Source: European Respiratory Society
[2]
ChatGPT outperforms trainee doctors in assessing complex respiratory illness in children
The chatbot ChatGPT performed better than trainee doctors in assessing complex cases of respiratory disease in areas such as cystic fibrosis, asthma and chest infections in a study presented at the European Respiratory Society (ERS) Congress in Vienna, Austria. The study also showed that Google's chatbot Bard performed better than trainees in some aspects and Microsoft's Bing chatbot performed as well as trainees.

The research suggests that these large language models (LLMs) could be used to support trainee doctors, nurses and general practitioners to triage patients more quickly and ease pressure on health services.

The study was presented by Dr. Manjith Narayanan, a consultant in pediatric pulmonology at the Royal Hospital for Children and Young People, Edinburgh and honorary senior clinical lecturer at the University of Edinburgh, UK. He said, "Large language models like ChatGPT have come into prominence in the last year and a half with their ability to seemingly understand natural language and provide responses that can adequately simulate a human-like conversation. These tools have several potential applications in medicine. My motivation to carry out this research was to assess how well LLMs are able to assist clinicians in real life."

To investigate this, Dr. Narayanan used clinical scenarios that occur frequently in pediatric respiratory medicine. The scenarios were provided by six other experts in pediatric respiratory medicine and covered topics like cystic fibrosis, asthma, sleep disordered breathing, breathlessness and chest infections. They were all scenarios where there is no obvious diagnosis, and where there is no published evidence, guidelines or expert consensus that points to a specific diagnosis or plan.

Ten trainee doctors with less than four months of clinical experience in pediatrics were given an hour where they could use the internet, but not any chatbots, to solve each scenario with a descriptive answer of 200 to 400 words. Each scenario was also presented to the three chatbots.

All the responses were scored by six pediatric respiratory experts for correctness, comprehensiveness, usefulness, plausibility, and coherence. They were also asked to say whether they thought each response was human- or chatbot-generated and to give each response an overall score out of nine.

Solutions provided by ChatGPT version 3.5 scored an average of seven out of nine overall and were believed to be more human-like than responses from the other chatbots. Bard scored an average of six out of nine and was scored as more "coherent" than trainee doctors, but in other respects was no better or worse than trainee doctors. Bing scored an average of four out of nine -- the same as trainee doctors overall. Experts reliably identified Bing and Bard responses as non-human.

Dr. Narayanan said, "Our study is the first, to our knowledge, to test LLMs against trainee doctors in situations that reflect real-life clinical practice. We did this by allowing the trainee doctors to have full access to resources available on the internet, as they would in real life. This moves the focus away from testing memory, where there is a clear advantage for LLMs. Therefore, this study shows us another way we could be using LLMs and how close we are to regular day-to-day clinical application.

"We have not directly tested how LLMs would work in patient-facing roles. However, it could be used by triage nurses, trainee doctors and primary care physicians, who are often the first to review a patient."

The researchers did not find any obvious instances of "hallucinations" (seemingly made-up information) with any of the three LLMs. "Even though in our study we did not see any instance of hallucination by LLMs, we need to be aware of this possibility and build mitigations against this," Dr. Narayanan added.

Answers that were judged to be irrelevant to the context were occasionally given by Bing, Bard and the trainee doctors. Dr. Narayanan and his colleagues are now planning to test chatbots against more senior doctors and to look at newer and more advanced LLMs.

Hilary Pinnock is ERS Education Council Chair and Professor of Primary Care Respiratory Medicine at The University of Edinburgh, UK, and was not involved in the research. She says, "This is a fascinating study. It is encouraging, but maybe also a bit scary, to see how a widely available AI tool like ChatGPT can provide solutions to complex cases of respiratory illness in children. It certainly points the way to a brave new world of AI-supported care.

"However, as the researchers point out, before we start to use AI in routine clinical practice, we need to be confident that it will not create errors either through 'hallucinating' fake information or because it has been trained on data that does not equitably represent the population we serve. As the researchers have demonstrated, AI holds out the promise of a new way of working, but we need extensive testing of clinical accuracy and safety, pragmatic assessment of organizational efficiency, and exploration of the societal implications before we embed this technology in routine care."
[3]
ChatGPT better than trainee doctors at diagnosing respiratory diseases in study
Google's Bard and Microsoft's Bing also perform better than or as well as clinicians, showing AI's potential benefits in treatment

ChatGPT is better at diagnosing respiratory diseases than trainee doctors, a new study suggests.

The findings, presented to the European Respiratory Society in Austria, also showed that Google's Bard performed better than humans in some aspects, while Microsoft's Bing performed as well as the trainee doctors. The study indicates that these kinds of large language models could help medical staff to assess patients more efficiently and reduce the significant pressures facing the NHS.

Ten trainee doctors with less than four months of clinical experience in paediatrics were given one hour to use the internet, but not AI chatbots, to solve scenarios created by experts in child respiratory medicine for which there was no obvious diagnosis. The doctors' answers were graded and compared with answers given by the chatbots. ChatGPT version 3.5 scored the highest and was believed to be more human-like in its responses than the other chatbots.

The study was presented by Dr Manjith Narayanan, a consultant in paediatric pulmonology at the Royal Hospital for Children and Young People, in Edinburgh. "These tools have several potential applications in medicine. My motivation to carry out this research was to assess how well large language models (LLMs) are able to assist clinicians in real life," Dr Narayanan said.

'Hallucinations'

The researchers did not find any clear instances of so-called "hallucinations", when language models seemingly make up information, with any of the three chatbots. But Dr Narayanan cautioned that it is important to build mitigations against this, as there is always a possibility of it happening. Bing and Bard did give some answers that were deemed irrelevant to the questions asked, but so too did the trainee doctors.

More than half of the public in the UK and three-quarters of NHS staff said they support the use of artificial intelligence for patient care, according to a survey released in July by the healthcare think tank the Health Foundation. But both the public and NHS staff picked AI systems' inability to show "real empathy" or "kindness" as the biggest disadvantage of using the technology.

Hilary Pinnock, a professor of primary care respiratory medicine at the University of Edinburgh, said: "It is encouraging, but maybe also a bit scary, to see how a widely available AI tool like ChatGPT can provide solutions to complex cases of respiratory illness in children. It certainly points the way to a brave new world of AI-supported care," she added.

Dr Narayanan is now planning to test chatbots against more senior doctors and to look at newer and more advanced large language models.
A recent study reveals that ChatGPT, an AI language model, outperformed trainee doctors in assessing complex cases of respiratory disease in children. The findings highlight the potential of AI in medical diagnostics and its implications for healthcare education and practice.
In a study presented at the European Respiratory Society (ERS) Congress in Vienna, researchers found that ChatGPT, an artificial intelligence language model, outperformed trainee doctors in assessing complex cases of respiratory disease in children. The study, led by Dr Manjith Narayanan of the Royal Hospital for Children and Young People, Edinburgh, compared the chatbot's answers with those of junior doctors on clinical scenarios drawn from pediatric respiratory medicine [1].
The research involved ten trainee doctors, each with less than four months of clinical experience in pediatrics, who were given an hour and full internet access (but no chatbots) to answer scenarios covering cystic fibrosis, asthma, sleep disordered breathing, breathlessness and chest infections. The same scenarios were put to ChatGPT, Bard and Bing, and six pediatric respiratory experts scored every response for correctness, comprehensiveness, usefulness, plausibility and coherence. The results were striking: ChatGPT 3.5 averaged seven out of nine overall and was judged the most human-like, Bard averaged six and was rated more coherent than the trainees, and Bing averaged four, the same as the trainee doctors.
This study raises important questions about the future of medical education and the role of AI in healthcare. Dr Narayanan, the lead researcher, emphasized that the study tested LLMs in conditions reflecting real-life clinical practice rather than memory recall, and suggested the tools should be viewed as support for clinicians such as triage nurses, trainee doctors and primary care physicians rather than as a replacement for them [1].
The integration of AI tools like ChatGPT into medical practice could offer several advantages: faster triage of patients, support for trainee doctors, nurses and general practitioners who are often the first to review a patient, and reduced pressure on stretched health services.
Despite the promising results, the researchers and medical community highlight several important considerations: the risk of "hallucinated" (made-up) information, the possibility that models are trained on data that does not equitably represent the population being served, and the need for extensive testing of clinical accuracy, safety and organizational efficiency before the technology is embedded in routine care.
The study's findings suggest a potential shift in medical education and practice. Future research may focus on testing chatbots against more senior doctors, evaluating newer and more advanced LLMs, and exploring how such tools might perform in patient-facing roles.
As AI continues to advance, its role in healthcare is likely to expand, potentially revolutionizing how medical professionals diagnose and treat diseases. However, the human element in healthcare remains irreplaceable, emphasizing the need for a balanced approach in integrating AI technologies into medical practice.
References
[1] European Respiratory Society. ChatGPT outperforms trainee doctors in respiratory disease assessments. September 9, 2024.
[2] Medical Xpress - Medical and Health News. ChatGPT outperforms trainee doctors in assessing complex respiratory illness in children.