Curated by THEOUTPOST
On Wed, 18 Sept, 8:04 AM UTC
4 Sources
[1]
One in five GPs use AI such as ChatGPT for daily tasks, survey finds
Doctors are using the technology for activities such as suggesting diagnoses and writing letters, according to a survey published in BMJ Health and Care Informatics

A fifth of GPs are using artificial intelligence (AI) tools such as ChatGPT to help with tasks including writing letters for their patients after appointments, according to a survey.

The survey, published in the journal BMJ Health and Care Informatics, polled 1,006 GPs. They were asked whether they had ever used any form of AI chatbot in their clinical practice, such as ChatGPT, Bing AI or Google's Gemini, and were then asked what they used these tools for.

One in five of the respondents said they had used generative AI tools in their clinical practice. Of these, almost a third (29%) said they had used them to generate documentation after patient appointments, while 28% said they had used the tools to suggest a different diagnosis. A quarter of respondents said they had used the AI tools to suggest treatment options for their patients.

These AI tools work by generating a written answer to a question posed to the software.

The researchers said the findings showed that "GPs may derive value from these tools, particularly with administrative tasks and to support clinical reasoning". However, they questioned whether the tools could risk harm and undermine patient privacy, "since it is not clear how the internet companies behind generative AI use the information they gather".

They added: "While these chatbots are increasingly the target of regulatory efforts, it remains unclear how the legislation will intersect in a practical way with these tools in clinical practice."

Dr Ellie Mein, medico-legal adviser at the Medical Defence Union, said the use of AI by GPs could raise issues including inaccuracy and patient confidentiality.

"This is an interesting piece of research and is consistent with our own experience of advising MDU members," Mein said. "It's only natural for healthcare professionals to want to find ways to work smarter with the pressures they face. Along with the uses identified in the BMJ paper, we've found that some doctors are turning to AI programs to help draft complaint responses for them. We have cautioned MDU members about the issues this raises, including inaccuracy and patient confidentiality. There are also data protection considerations."

She added: "When dealing with patient complaints, AI drafted responses may sound plausible but can contain inaccuracies and reference incorrect guidelines which can be hard to spot when woven into very eloquent passages of text. It's vital that doctors use AI in an ethical way and comply with relevant guidance and regulations. Clearly this is an evolving area and we agree with the authors that current and future doctors need greater awareness of the benefits and risks of using AI at work."
[2]
Fifth of family doctors using AI despite lack of guidance or clear work policies, UK survey suggests
A fifth of family doctors (GPs) seem to have readily incorporated AI into their clinical practice, despite a lack of any formal guidance or clear work policies on the use of these tools, suggest the findings of an online UK-wide snapshot survey, published in the open access journal BMJ Health & Care Informatics.

Doctors and medical trainees need to be fully informed about the pros and cons of AI, especially because of the inherent risks of inaccuracies ('hallucinations'), algorithmic biases, and the potential to compromise patient privacy, conclude the researchers.

Following the launch of ChatGPT at the end of 2022, interest in large language model-powered chatbots has soared, and attention has increasingly focused on the clinical potential of these tools, say the researchers.

To gauge current use of chatbots to assist with any aspect of clinical practice in the UK, in February 2024 the researchers distributed an online survey to a randomly chosen sample of GPs registered with the clinician marketing service Doctors.net.uk. The survey had a predetermined sample size of 1,000.

The doctors were asked if they had ever used any of the following in any aspect of their clinical practice: ChatGPT; Bing AI; Google's Bard; or "Other." And they were subsequently asked what they used these tools for.

Some 1,006 GPs completed the survey: just over half the responses came from men (531; 53%) and a similar proportion of respondents (544; 54%) were aged 46 or older.

One in five (205; 20%) respondents reported using generative AI tools in their clinical practice. Of these, more than one in four (29%; 47) reported using these tools to generate documentation after patient appointments and a similar proportion (28%; 45) said they used them to suggest a different diagnosis. One in four (25%; 40) said they used the tools to suggest treatment options.

The researchers acknowledge that the survey respondents may not be representative of all UK GPs, and that those who responded may have been particularly interested in AI -- for good or bad -- potentially introducing a level of bias into the findings. Further research is needed to find out more about how doctors are using generative AI and how best to implement these tools safely and securely into clinical practice, they add.

"These findings signal that GPs may derive value from these tools, particularly with administrative tasks and to support clinical reasoning. However, we caution that these tools have limitations since they can embed subtle errors and biases," they say.

And they point out: "[These tools] may also risk harm and undermine patient privacy since it is not clear how the internet companies behind generative AI use the information they gather.

"While these chatbots are increasingly the target of regulatory efforts, it remains unclear how the legislation will intersect in a practical way with these tools in clinical practice."

They conclude: "The medical community will need to find ways to both educate physicians and trainees about the potential benefits of these tools in summarizing information but also the risks in terms of hallucinations [perception of non-existent patterns or objects], algorithmic biases, and the potential to compromise patient privacy."
[3]
One in five GPs using AI at work despite lack of training - with some even using it in diagnosis
One in five GPs is using generative AI tools to help them do their job, despite the risks that come with it and a lack of training.

GPs are using commercially available tools like ChatGPT to write documentation and even suggest alternative diagnoses for patients, a new study in the journal BMJ Health & Care Informatics shows. It's the largest study of its kind and shows for the first time the extent of AI use in the UK's GP surgeries.

The study's author, Dr Charlotte Blease, associate professor at Sweden's Uppsala University, called the extent of AI use by doctors "surprising" because "doctors haven't received formal training on these tools and they're still very much a regulatory black hole".

Tools like ChatGPT have known issues that could be harmful for patients, including their tendency to "hallucinate", or make things up.

"Perhaps the biggest risk is with patient privacy," said Dr Blease. "Our doctors may unintentionally be gifting patients' highly sensitive information to these tech companies.

"They may [also] embed biases within care so some patients may be at risk of unfair clinical judgements. We don't know if [AI's biases] are worse than what arises in ordinary human health care, but there certainly is a risk of bias."

The team surveyed over 1,000 doctors. Of the one in five who said they use generative AI in their work, 28% said they use it to suggest different diagnoses for their patients, and another 29% said they use AI to generate documentation after patient appointments.

The majority of NHS staff support the use of AI to help with patient care, with 81% also in favour of its use for administrative tasks, a study by the Health Foundation found in July.

The NHS doesn't offer much guidance for doctors around how they should use AI, despite healthcare professionals wanting to use it more. Instead, it asks them to use their "professional judgement" when working with the technology.

Dr Blease, who is also the author of a book on how AI can be used in healthcare, said doctors are "crying out for some concrete advice" on how to use the technology. "There does need to be targeted training and advice being offered to doctors," she said.
[4]
GPs use ChatGPT to help them treat patients, Harvard study warns
A survey of British family doctors found one in five using AI software to suggest treatments or write letters

Michael Searles, Health Correspondent
18 September 2024, 7:01am

GPs have been using ChatGPT to treat patients, a Harvard study has warned.

Researchers at the American university found one in five family doctors in the UK had used artificial intelligence tools while treating patients, despite a lack of regulation.

The survey of 1,006 GPs found dozens were using AI to help diagnose conditions and find treatment options. A quarter of the 205 who admitted using machine-learning tools to help them do their jobs said they had asked the software to suggest treatments. Almost three in 10 said they had used AI to help diagnose a patient.

Others admitted they had used it to write letters, generate documents after an appointment with a patient, or create patient summaries and timelines based on past records.

Experts warned that unregulated use of tools such as ChatGPT, Microsoft's Bing AI or Google's Bard could "risk harm and undermine patient privacy".

The study, which involved disseminating a survey to family doctors through doctors.net.uk in February this year, was the largest of its kind to assess the use of AI in medical practice. ChatGPT was the most commonly used AI tool, with 16 per cent of GPs admitting to using the chatbot, which launched in 2022.

Already in use in NHS

AI is already being used in other NHS settings, for example helping radiologists to interpret scans or building personalised 3D images of tumours, as well as assisting with administrative tasks such as booking in patients.

But the researchers warned there was a "lack of guidance" and "unclear work policies" for AI in general practice. They cautioned doctors about the technology's limitations because it "can embed subtle errors and biases".

The study was conducted by an international team led by Dr Charlotte Blease, a healthcare researcher at Harvard Medical School and associate professor at Uppsala University in Sweden.

"These findings signal that GPs may derive value from these tools, particularly with administrative tasks and to support clinical reasoning. However, we caution that these tools have limitations since they can embed subtle errors and biases," the authors wrote. "They may also risk harm and undermine patient privacy since it is not clear how the internet companies behind generative AI use the information they gather."

The researchers said it was "unclear" how legislation to regulate AI in medical practice would work in reality and called for doctors to be trained about the benefits and risks.

'AI must be regulated'

Prof Kamila Hawthorne, chair of the Royal College of GPs, said AI "must be closely regulated to guarantee patient safety and the security of their data".

She added: "For general practice, AI could help to solve a longstanding problem - high levels of unnecessary bureaucracy and administrative processes are a significant drain on GP time."

Other studies have shown that GPs can spend a quarter of their time on admin, so using AI to alleviate this could free up time for patients, Prof Hawthorne added.

"Technology will always need to work alongside and complement the work of doctors and other healthcare professionals, and it can never be seen as a replacement for the expertise of a qualified medical professional," she said.

The Harvard-led research was published in the journal BMJ Health and Care Informatics.
It comes after a separate study published yesterday revealed that GPs were contracted to work just 26 hours a week on average in 2022, based on analysis of NHS data. The study in the British Journal of General Practice found that family doctors had been reducing their working hours despite a growing number of patients. The average contracted hours of a GP fell 10 per cent between 2015 and 2022, and GPs worked fewer hours in total, despite their number increasing 5 per cent in that period.
A recent survey reveals that 20% of general practitioners are using AI tools such as ChatGPT for a range of tasks, despite a lack of formal guidance. The trend highlights both the potential benefits and the risks of AI in healthcare.
A groundbreaking survey has revealed that one in five general practitioners (GPs) in the UK are now using artificial intelligence (AI) tools, such as ChatGPT, for their daily tasks [1]. This significant adoption rate highlights the growing influence of AI in healthcare, particularly in primary care settings.
GPs are employing AI for a variety of purposes, including:
- generating documentation after patient appointments (29% of those using AI)
- suggesting an alternative diagnosis (28%)
- suggesting treatment options for patients (25%)
- writing letters for patients after appointments
Some doctors have even reported using AI to help explain complex medical conditions to patients in simple terms [3].

Despite the widespread use, there is a concerning lack of formal guidance or training for healthcare professionals on the appropriate use of AI tools. The survey found that 73% of GPs using AI had not received any training on its application in clinical practice [2]. This gap in education raises questions about the potential risks and ethical considerations of AI use in healthcare.

Proponents argue that AI could help alleviate the workload of overstretched GPs and potentially improve patient care. Dr David Wrigley, deputy chair of the British Medical Association's GP committee, stated that AI has the potential to support GPs in their administrative tasks, freeing up more time for patient care [1].

However, concerns have been raised about the reliability and safety of using AI in clinical settings without proper guidelines. Critics worry about the potential for misdiagnosis or inappropriate treatment recommendations if AI tools are not used correctly [3].
Interest in the issue extends beyond the UK: the survey itself was led by researchers at Harvard Medical School in the United States, together with Uppsala University in Sweden, reflecting growing international attention to AI adoption in healthcare [4].
In light of these findings, there is a growing call for clear guidelines and regulations on the use of AI in healthcare. Dr Wrigley emphasized the need for proper governance to ensure patient safety and data protection [1]. Healthcare organizations and policymakers are now faced with the challenge of developing comprehensive frameworks to guide the responsible use of AI in medical practice.
References
[1] The Guardian - One in five GPs use AI such as ChatGPT for daily tasks, survey finds
[2] Medical Xpress - Fifth of family doctors using AI despite lack of guidance or clear work policies, UK survey suggests
[3] Sky News - One in five GPs using AI at work despite lack of training - with some even using it in diagnosis
[4] The Telegraph - GPs use ChatGPT to help them treat patients, Harvard study warns