4 Sources
[1]
AI could dull your doctor's detection skills, study finds
Favorable studies of AI in medicine may be corrupted by the study design. It's important to get a colonoscopy, especially past a certain age, as colorectal cancer is the second-most common cancer in the world after breast cancer. It's also the most common in people over age 55, according to the World Health Organization. Most patients probably don't ask how good their endoscopist's adenoma detection rate (ADR) is, the typical measure of the doctor's skill in detecting potential cancers. It might be worth asking in the future, however, because artificial intelligence could decrease your endoscopist's skill level, according to a new study in the scholarly journal "The Lancet Gastroenterology & Hepatology." Lead author Krzysztof Budzyń and collaborators at the Department of Gastroenterology of Poland's Academy of Silesia and multiple partner institutions describe a phenomenon called "deskilling": the use of AI as a tool in medicine may reduce physicians' competence -- in this case, a reduction in the endoscopist's ADR. "We found that routine exposure to AI in colonoscopy might reduce the ADR of standard, non-AI-assisted colonoscopy," wrote Budzyń and team. "To our knowledge, this is the first study that suggests AI exposure might have a negative impact on patient-relevant endpoints in medicine in general." Budzyń and team started from a simple premise: studies in which endoscopists use AI have shown improvements in ADR, which means more potential cancers detected. The AI is part of a growing trend of using computer-assisted polyp detection systems for colonoscopies. However, it's not known what effects such a tool has on the physician who uses it. "[...] 
Ongoing exposure to AI might change behaviour in different ways," wrote Budzyń and team, "positively, by training clinicians, or negatively, through a deskilling effect, whereby automation use leads to a decay in cognitive skills." To test what those effects might be, Budzyń and team conducted a study at four endoscopy centers in Poland where 1,443 patients were given colonoscopies both before and after AI was introduced into the centers toward the end of 2021. They then looked at how the quality of the colonoscopies changed from before AI started being used to after. As they described it, "We evaluated changes in the quality of all diagnostic, non-AI-assisted colonoscopies between Sept 8, 2021, and March 9, 2022, by comparing two different phases: the period approximately 3 months before AI implementation in clinical practice versus the period 3 months after AI implementation in clinical practice." The AI software was an application called CADe, running on a dedicated endoscopy CAD machine that analyzes the live video feed from the endoscope inside the patient. The software and hardware are made by Olympus of Japan, better known for its digital cameras. CADe "uses artificial intelligence (AI) to suggest the potential presence of lesions," said Olympus, and the company has studies showing that CADe can boost detection rates. From before CADe was put into service to after, Budzyń and team measured the change in ADR in the non-AI-assisted colonoscopies. ADR is defined as "the proportion of colonoscopies in which one or more adenomas [pre-cancerous lesions] are detected," and is "a widely accepted indicator of colonoscopist performance, with a higher ADR associated with a greater cancer prevention effect." They found a noticeable drop in ADR after CADe had been introduced. 
"ADR at standard, non-AI-assisted colonoscopies decreased significantly from 28.4% (226 of 795) before AI exposure to 22.4% (145 of 648) after AI exposure, corresponding to an absolute difference of -6.0% (95% CI -10.5 to -1.6, p=0.0089)," wrote Budzyń and team. Moreover, of the 19 endoscopists who were evaluated, each of whom had performed more than 2,000 procedures, all but four "had a lower ADR when performing standard colonoscopies after AI exposure than before," which the authors interpret as "suggesting a detrimental effect on endoscopist capability." Budzyń and team offer a number of caveats about the findings. They warn that "interpretation of these data is challenging" because of statistical confounders and possible "selection bias." They also concede that there aren't enough colonoscopies per endoscopist in the study to reliably assess each endoscopist's individual ADR; more procedures per doctor would need to be observed, and further studies are needed. But they offer a hypothesis as to what is happening: human abilities are being degraded by reliance on the machine. "We assume that continuous exposure to decision support systems such as AI might lead to the natural human tendency to over-rely on their recommendations," they write, "leading to clinicians becoming less motivated, less focused, and less responsible when making cognitive decisions without AI assistance." There have already been studies suggesting such a degradation, they noted, such as a 2019 study "that showed reduced eye movements during colonoscopy when using AI for polyp detection, indicating a risk of overdependence." A study of AI in breast cancer mammography found that "physicians' detection capability decreased significantly when AI support was expected." The authors also noted that the single Olympus CADe system might not be representative of all AI medical applications. 
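The reported figures can be sanity-checked directly from the raw counts in that quote. Below is a minimal sketch in Python (standard library only); the unpooled normal-approximation interval and pooled two-proportion z-test are assumptions about the kind of calculation involved, not necessarily the paper's exact method:

```python
from math import sqrt, erf

# Raw counts quoted from the study: colonoscopies with >=1 adenoma / total
before_hits, before_n = 226, 795   # before AI exposure
after_hits, after_n = 145, 648     # after AI exposure

p1 = before_hits / before_n        # ~28.4% ADR before
p2 = after_hits / after_n          # ~22.4% ADR after
diff = p2 - p1                     # absolute change in ADR

# 95% CI from the unpooled standard error (normal approximation)
se = sqrt(p1 * (1 - p1) / before_n + p2 * (1 - p2) / after_n)
lo, hi = diff - 1.96 * se, diff + 1.96 * se

# Two-sided p-value from a pooled two-proportion z-test
pooled = (before_hits + after_hits) / (before_n + after_n)
z = diff / sqrt(pooled * (1 - pooled) * (1 / before_n + 1 / after_n))
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

print(f"difference {diff:+.1%}, 95% CI {lo:+.1%} to {hi:+.1%}, p = {p_value:.4f}")
```

Run as-is, this lands on roughly the reported interval of -10.5% to -1.6% and p = 0.0089; the unrounded difference is -6.05%, which the paper states as -6.0% after rounding the two rates to 28.4% and 22.4%.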
However, they add: "Given the general nature of the AI tools and the tendency for humans to over-rely on them, we do not think the study results apply only to this specific AI."
[2]
Are A.I. Tools Making Doctors Worse at Their Jobs?
Physicians are using the technology for diagnoses and more -- but may be losing skills in the process. In the past few years, studies have described the many ways A.I. tools have made doctors better at their jobs: they've aided them in spotting cancer, allowed them to make diagnoses faster and, in some cases, helped them more accurately predict who's at risk of complications. But new research suggests that collaborating with A.I. may have a hidden cost. A study published in The Lancet Gastroenterology & Hepatology found that after just three months of using an A.I. tool designed to help spot precancerous growths during colonoscopies, doctors were significantly worse at finding the growths on their own. This is the first evidence that relying on A.I. tools might erode a doctor's ability to perform fundamental skills without the technology, a phenomenon known as "deskilling." "This is a two-way process," said Dr. Omer Ahmad, a gastroenterologist at University College Hospital London who published an editorial alongside the study. "We give A.I. inputs that affect its output, but it also seems to affect our behavior as well." The study began like many A.I. trials in medicine. Doctors at four endoscopy centers in Poland were given access to an A.I. tool that flagged suspicious growths while they performed a colonoscopy, drawing a box around them in real time. Several other large clinical trials have shown this technology significantly improves doctors' detection rate of precancerous growths, a widely accepted indicator of an endoscopist's performance. Then, unlike in past studies, the researchers measured what happened when the tool was taken away. In the three months before the technology was introduced, the doctors spotted growths in about 28 percent of colonoscopies. Afterward, working without the tool, their detection rate fell to about 22 percent -- well below their baseline. 
This was an observational study, which means it can't answer whether the technology caused the decline in performance. There could be other explanations for the effect: for example, doctors performed about double the number of colonoscopies after the A.I. tool was introduced compared with before, which might have meant they paid less attention to each scan. But experts said the fact that there is a deskilling effect is hardly unexpected. The phenomenon is well documented in other fields: pilots, for instance, undergo special training to brush up on their skills in the age of autopilot. "I think the big question is going to be: So what? Is that important?" said Dr. Robert Wachter, chair of the medicine department at the University of California, San Francisco, and author of "A Giant Leap: How AI Is Transforming Healthcare and What That Means for Our Future." On one hand, Dr. Wachter said, there are plenty of harmless examples of new technology making old skills obsolete. Thanks to the invention of the stethoscope, for example, many doctors would struggle to examine a patient's heart and lungs without one, as was common in the 1700s. But to Dr. Ahmad, A.I. is distinct in that it needs long-term oversight from humans. Algorithms are trained for a specific moment in time, and as the world changes around them, they perform differently -- sometimes for the worse -- and need monitoring and maintenance to make sure they still function as intended. Sometimes unexpected factors, like changes in overhead lighting, can make A.I. results "go completely wrong and haywire," he said. Doctors are supposed to be included in the process to protect patients against those possibilities. "If I lose the skills, how am I going to spot the errors?" Dr. Ahmad asked. Even if the tools were perfect, Dr. Wachter cautioned that deskilling could be dangerous for patients during the current transition period, when A.I. 
tools are not available in every health system and a doctor accustomed to using it might be asked by a new employer to function without it. And while the erosion of skill is obvious to someone looking at data from thousands of procedures, Dr. Wachter said, he doubted that each individual doctor noticed a change in their own ability. It's still not entirely clear why a doctor's skills might decline so quickly while using A.I. One small eye-tracking study found that while using the A.I., doctors tended to look less at the edges of the image, suggesting that some of the muscle memory involved in reviewing a scan was altered by using the tool. Dr. Ahmad said it might also be the case that after months of relying on a helper, the cognitive stamina that's required to carefully evaluate each scan had atrophied. Either way, medical education experts and health care leaders are already considering how to combat the effect. Some health systems, like UC San Diego Health, have recently invested in simulation training, which may be used to help doctors practice procedures without A.I. to keep their skills sharp, said Dr. Chris Longhurst, chief clinical and innovation officer at the health system. Dr. Adam Rodman, director of A.I. programs at Beth Israel Deaconess Medical Center in Boston, said some medical schools have also considered banning A.I. for students' first years of training. If just three months of using an A.I. tool could erode the skills of the experienced physicians included in the study (on average, the doctors had been practicing for about 27 years), what would happen to medical students and residents who are just starting to develop those skills? "We're increasingly calling it never-skilling," Dr. Rodman said.
[3]
AI may revolutionise healthcare, but at the cost of doctors' skills, says Lancet
A recent study in Poland suggests that relying too heavily on AI in colonoscopies may negatively impact doctors' skills. Researchers found a decrease in adenoma detection rates after AI tools were introduced. This raises concerns about the potential for over-reliance on AI to diminish clinicians' focus and responsibility, despite its promise in healthcare. Artificial intelligence has become a trusted ally in modern medicine, helping doctors make quicker and more accurate decisions. From spotting tumours on scans to predicting treatment outcomes, AI has shown remarkable potential. But a new study published in The Lancet Gastroenterology & Hepatology has raised an uncomfortable question: could too much reliance on AI actually weaken doctors' own skills? The study was carried out across four colonoscopy centres in Poland, where AI tools were introduced in late 2021 to detect polyps, small growths in the colon that can develop into cancer. Researchers noticed something surprising. The average detection rate of adenomas (non-cancerous but potentially risky growths) dropped from 28% before AI exposure to 22% after AI exposure, a reduction of 6 percentage points in absolute terms, or about 20% in relative terms, suggesting that doctors who regularly used AI may have become less sharp when performing colonoscopies without it. "To our knowledge, this is the first study to suggest a negative impact of regular AI use on healthcare professionals' ability to complete a patient-relevant task in medicine of any kind," said Dr Marcin Romańczyk of the Academy of Silesia. He warned that with AI rapidly spreading in healthcare, urgent research is needed to understand how it affects doctors' long-term skills. The findings also raised doubts about earlier randomised controlled trials, many of which reported higher adenoma detection rates with AI-assisted colonoscopy. 
According to co-author Yuichi Mori from the University of Oslo, the trials may have overlooked a crucial detail: repeated AI use could subtly dull doctors' performance during standard, non-AI procedures. The researchers argue that overexposure to decision-support systems may encourage a natural human tendency toward over-reliance, which can make clinicians less focused, less motivated, and ultimately less responsible for the outcomes. Not everyone views the findings as a cause for alarm. Dr Vidur Mahajan, founder and CEO of CARPL.AI, argued that the focus should be on lifting average doctors to world-class levels rather than worrying about skill erosion. "Technology is an inevitable part of our lives and we must embrace the advantages of it by enabling the democratisation of it," he said. Drawing an analogy, he added: "Imagine a world without Google Maps, would you trust a driver who does not use it?" The study, funded by the European Commission, the Japan Society for the Promotion of Science, and the Italian Association for Cancer Research, underscores a critical dilemma: while AI promises to make healthcare safer and smarter, it may also carry hidden risks if doctors start trusting machines more than their own judgement. Inputs from TOI
[4]
Is AI making doctors lazy? Study reveals overreliance may be undermining their critical skills
AI in healthcare: As artificial intelligence becomes more common in hospitals and clinics, a new concern is emerging that doctors may be losing critical skills the more they rely on these tools. A recent study published in The Lancet Gastroenterology & Hepatology suggests that AI, while helpful in the moment, might be quietly reshaping how doctors perform, and not always in good ways. The research focused on endoscopists performing colonoscopies and found that their ability to detect abnormalities dropped after they had grown used to AI assistance and then worked without it. A colonoscopy, as the Cleveland Clinic describes it, is an examination of the inside of your large intestine (colon). Dr. Marcin Romańczyk, a gastroenterologist at H-T Medical Center in Tychy, Poland, led the study, and what he found surprised him, according to a Fortune report. The study observed 1,443 patients who underwent colonoscopies before and after the introduction of the AI system, which highlighted possible polyps with a green box on the screen. In standard procedures performed before the AI was introduced, doctors detected abnormalities at a rate of 28.4%, as per Fortune. But when the same doctors performed procedures without the AI after its introduction, their detection rate fell to 22.4%, a roughly 20% relative decrease, according to the report. Romańczyk and his team did not collect data on why this happened, as they hadn't anticipated the decline. But he has a theory. He believes the doctors became too accustomed to relying on the green box; without it, the specialists no longer knew exactly where to pay attention. 
He compared it to how people navigate with GPS today, a shift he calls the "Google Maps effect": drivers have moved from the era of paper maps to that of GPS, and most people now rely on automation to show the most efficient route, when 20 years ago one had to work out that route for oneself, as reported by Fortune. Romańczyk explained: "We were taught medicine from books and from our mentors. We were observing them. They were telling us what to do," adding, "And now there's some artificial object suggesting what we should do, where we should look, and actually we don't know how to behave in that particular case," as quoted in the report. The findings highlight not only the potential laziness developing as a result of an overreliance on AI, but also the changing relationship between medical practitioners and a longstanding tradition of analog training, as reported by Fortune. His study contributes to a growing body of research questioning humans' ability to use AI without compromising their own skill sets, according to the report. While AI is increasingly used in hospitals and doctors' offices, it is also rapidly reshaping workplaces with the hope of enhancing performance, as per Fortune. Last year, Goldman Sachs forecast that the technology could increase productivity by 25%, reported Fortune. However, emerging research has also warned of the challenges of adopting AI tools: a study from Microsoft and Carnegie Mellon University earlier this year found that among surveyed knowledge workers, AI increased work efficiency but reduced critical engagement with content, eroding judgment skills, as reported by Fortune. Romańczyk doesn't suggest avoiding AI in medicine, pointing out that "AI will be, or is, part of our life, whether we like it or not," adding, "We are not trying to say that AI is bad and [to stop using] it. Rather, we are saying we should all try to investigate what's happening inside our brains, how we are affected by it? How can we actually effectively use it?" as quoted in the report. Lynn Wu, associate professor of operations, information, and decisions at the University of Pennsylvania's Wharton School, emphasised: "We have to maintain those critical skills, such that when AI is not working, we know how to take over," as quoted by Fortune. Is AI helping or hurting doctors? AI helps improve efficiency and accuracy, but overreliance may reduce doctors' own decision-making skills. What is the "Google Maps effect"? It's when people become so dependent on tools like GPS (or AI) that they lose their own navigation or thinking skills.
A new study reveals that while AI tools improve medical diagnoses, they may inadvertently lead to a decline in doctors' skills, raising concerns about the long-term impact of AI adoption in healthcare.
Artificial Intelligence (AI) has been making significant strides in the medical field, particularly in enhancing diagnostic capabilities. Studies have shown that AI tools can aid doctors in spotting cancer, making faster diagnoses, and more accurately predicting complications [1][2]. However, a recent study published in The Lancet Gastroenterology & Hepatology has raised concerns about the potential negative impact of AI on doctors' skills [1][2][3].
The study, conducted across four endoscopy centers in Poland, focused on the use of AI in colonoscopies. Researchers observed 1,443 patients undergoing colonoscopies both before and after the introduction of an AI tool called CADe, developed by Olympus [1][3]. The AI system was designed to highlight potential polyps with a green box on the screen during the procedure [4].
The results were surprising: in standard, non-AI-assisted colonoscopies, the doctors' adenoma detection rate fell from 28.4% (226 of 795) before AI was introduced to 22.4% (145 of 648) after [1].
This phenomenon, termed "deskilling," suggests that routine exposure to AI in medical procedures might reduce doctors' competence when performing tasks without AI assistance [1][2]. Dr. Krzysztof Budzyń, the lead author, noted that this is the first study indicating AI exposure might negatively impact patient-relevant endpoints in medicine [1].
Several factors may contribute to this deskilling effect: a natural tendency to over-rely on the system's recommendations, reduced motivation and focus when working without AI, and altered visual search patterns, such as the reduced eye movements observed in earlier eye-tracking studies [1][2].
Dr. Omer Ahmad from University College Hospital London emphasized that this is a two-way process, where AI affects doctors' behavior, and doctors' inputs influence AI outputs [2].
The study's findings have significant implications for medical education and practice: some health systems, such as UC San Diego Health, are investing in simulation training to help doctors keep their skills sharp without AI, and some medical schools have considered restricting AI use during students' first years of training [2].
While the study raises concerns, it's essential to consider the broader context of AI in healthcare: several large clinical trials have shown that AI assistance significantly improves detection rates during use, and some experts, such as Dr Vidur Mahajan, argue the focus should be on using AI to lift average doctors to world-class levels rather than worrying about skill erosion [2][3].
As AI continues to permeate healthcare, several challenges and questions emerge: how to monitor algorithms whose performance can drift as conditions change, how doctors can spot AI errors if their own skills erode, and how to preserve the critical skills needed to take over when AI is unavailable or fails [2][4].
In conclusion, while AI shows great promise in enhancing medical diagnostics and treatment, the potential for skill erosion among healthcare professionals presents a significant challenge. Balancing the benefits of AI with the need to maintain and develop critical medical skills will be crucial as healthcare continues to evolve in the age of artificial intelligence.