5 Sources
[1]
AI could dull your doctor's detection skills, study finds
Favorable studies of AI in medicine may be corrupted by the study design.

It's important to get a colonoscopy, especially past a certain age: colorectal cancer is the second-most common cancer in the world after breast cancer, and the most common in people over age 55, according to the World Health Organization. Most patients probably don't ask their endoscopist how good their adenoma detection rate (ADR) is, the standard measure of the doctor's skill in detecting potential cancers. It might be worth asking in the future, however, because artificial intelligence could decrease your endoscopist's skill level, according to a new study in the scholarly journal The Lancet Gastroenterology & Hepatology.

Lead author Krzysztof Budzyń and collaborators at the Department of Gastroenterology at Poland's Academy of Silesia and multiple partner institutions describe a phenomenon called "deskilling": the possibility that using AI as a tool in medicine reduces physicians' competence, in this case the endoscopist's ADR.

"We found that routine exposure to AI in colonoscopy might reduce the ADR of standard, non-AI-assisted colonoscopy," wrote Budzyń and team. "To our knowledge, this is the first study that suggests AI exposure might have a negative impact on patient-relevant endpoints in medicine in general."

Budzyń and team started from a simple premise: in prior studies, endoscopists using AI achieved higher ADRs, meaning more potential cancers detected. The application of AI is part of a growing trend of using computers for colonoscopies, in tools called computer-assisted polyp detection systems. What has not been known is the effect such tools have on the physician who uses them. "[...] Ongoing exposure to AI might change behaviour in different ways," wrote Budzyń and team, "positively, by training clinicians, or negatively, through a deskilling effect, whereby automation use leads to a decay in cognitive skills."

To test what effects there might be, Budzyń and team analyzed colonoscopies at four endoscopy centers in Poland taking part in a randomized trial, covering 1,443 patients examined both before and after AI was introduced into the centers toward the end of 2021. They then examined how the quality of the colonoscopies changed once AI came into use. As they described it, "We evaluated changes in the quality of all diagnostic, non-AI-assisted colonoscopies between Sept 8, 2021, and March 9, 2022, by comparing two different phases: the period approximately 3 months before AI implementation in clinical practice versus the period 3 months after AI implementation in clinical practice."

The AI software was an application called CADe running on a dedicated endoscopy CAD system, which analyzes the live video feed from the endoscope. The software and hardware are made by Olympus of Japan, better known for its digital cameras. CADe "uses artificial intelligence (AI) to suggest the potential presence of lesions," said Olympus, and the company has studies showing CADe can boost detection rates.
From before CADe was put into service to after, Budzyń and team measured the change in ADR in the non-AI-assisted colonoscopies. ADR is defined as "the proportion of colonoscopies in which one or more adenomas [pre-cancerous lesions] are detected"; a related measure, adenomas per colonoscopy (APC), counts the average number of adenomas found per procedure. ADR is "a widely accepted indicator of colonoscopist performance, with a higher ADR associated with a greater cancer prevention effect."

They found a noticeable drop in ADR after CADe had been introduced. "ADR at standard, non-AI assisted colonoscopies decreased significantly from 28.4% (226 of 795) before AI exposure to 22.4% (145 of 648) after AI exposure, corresponding to an absolute difference of -6.0% (95% CI -10.5 to -1.6, p=0.0089)," wrote Budzyń and team. Moreover, of the 19 endoscopists who were evaluated, each of whom had performed more than 2,000 procedures, all but four "had a lower ADR when performing standard colonoscopies after AI exposure than before," which the authors interpret as "suggesting a detrimental effect on endoscopist capability."

Budzyń and team offer a number of caveats about the findings. They warn that "Interpretation of these data is challenging" because of statistical confounders and possible "selection bias." They also concede that there were not enough colonoscopies per endoscopist in the study to reliably assess each individual's ADR competency; more procedures per doctor would need to be observed, and they caution that further studies are needed. But they offer a hypothesis as to what is happening: human abilities are being degraded by reliance on the machine.

"We assume that continuous exposure to decision support systems such as AI might lead to the natural human tendency to over-rely on their recommendations," they write, "leading to clinicians becoming less motivated, less focused, and less responsible when making cognitive decisions without AI assistance."

There have already been studies suggesting such a degradation, they noted, such as a 2019 study "that showed reduced eye movements during colonoscopy when using AI for polyp detection, indicating a risk of overdependence." A study of AI in breast cancer mammography found that "physicians' detection capability decreased significantly when AI support was expected."

The authors also said that the one Olympus CADe system might not be representative of all AI medical applications. However, they offer that "Given the general nature of the AI tools and the tendency for humans to over-rely on them, we do not think the study results apply only to this specific AI."
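As a quick plausibility check on the figures quoted above, here is a minimal sketch, not from the paper, that recomputes the absolute difference and its confidence interval using a standard unpooled two-proportion interval; the published analysis may use a different or adjusted method, so agreement is approximate.

```python
# Back-of-the-envelope check of the reported ADR drop, using a standard
# unpooled two-proportion 95% interval. Assumption: the paper's exact
# statistical method may differ (e.g., adjusting for endoscopist or center).
import math

detected_before, n_before = 226, 795  # colonoscopies with >=1 adenoma / total, pre-AI
detected_after, n_after = 145, 648    # same, post-AI (non-assisted procedures)

p1 = detected_before / n_before       # ~0.284, the 28.4% ADR before exposure
p2 = detected_after / n_after         # ~0.224, the 22.4% ADR after exposure
diff = p2 - p1                        # absolute change, about -6.0 points

# Unpooled standard error for the difference of two independent proportions
se = math.sqrt(p1 * (1 - p1) / n_before + p2 * (1 - p2) / n_after)
lo, hi = diff - 1.96 * se, diff + 1.96 * se

print(f"ADR: {p1:.1%} -> {p2:.1%}, difference {diff:+.1%}")
print(f"95% CI: {lo:+.1%} to {hi:+.1%}")
# Prints roughly -6.0% with a CI near -10.5% to -1.6%, in line with the paper.
```

That the simple unadjusted interval lands so close to the published one suggests the headline result is not an artifact of an exotic statistical choice.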
[2]
Are A.I. Tools Making Doctors Worse at Their Jobs?
Physicians are using the technology for diagnoses and more, but may be losing skills in the process.

In the past few years, studies have described the many ways A.I. tools have made doctors better at their jobs: they have aided them in spotting cancer, allowed them to make diagnoses faster and, in some cases, helped them more accurately predict who's at risk of complications. But new research suggests that collaborating with A.I. may have a hidden cost.

A study published in The Lancet Gastroenterology & Hepatology found that after just three months of using an A.I. tool designed to help spot precancerous growths during colonoscopies, doctors were significantly worse at finding the growths on their own. This is the first evidence that relying on A.I. tools might erode a doctor's ability to perform fundamental skills without the technology, a phenomenon known as "deskilling."

"This is a two-way process," said Dr. Omer Ahmad, a gastroenterologist at University College Hospital London who published an editorial alongside the study. "We give A.I. inputs that affect its output, but it also seems to affect our behavior as well."

The study began like many A.I. trials in medicine. Doctors at four endoscopy centers in Poland were given access to an A.I. tool that flagged suspicious growths while they performed a colonoscopy, drawing a box around them in real time. Several other large clinical trials have shown this technology significantly improves doctors' detection rate of precancerous growths, a widely accepted indicator of an endoscopist's performance.

Then, unlike in past studies, the researchers measured what happened when the tool was taken away. In the three months before the technology was introduced, the doctors spotted growths in about 28 percent of colonoscopies. Afterward, working without the tool, their detection rate fell to about 22 percent, well below their baseline.

This was an observational study, which means it can't answer whether the technology caused the decline in performance. There could be other explanations for the effect: for example, doctors performed about double the number of colonoscopies after the A.I. tool was introduced, which might have meant they paid less attention to each scan. But experts said the existence of a deskilling effect is hardly unexpected. The phenomenon is well documented in other fields: pilots, for instance, undergo special training to brush up on their skills in the age of autopilot.

"I think the big question is going to be: So what? Is that important?" said Dr. Robert Wachter, chair of the medicine department at the University of California, San Francisco, and author of "A Giant Leap: How AI Is Transforming Healthcare and What That Means for Our Future."

On one hand, Dr. Wachter said, there are plenty of harmless examples of new technology making old skills obsolete. Thanks to the invention of the stethoscope, for example, many doctors would struggle to examine a patient's heart and lungs without one, as was common in the 1700s.

But to Dr. Ahmad, A.I. is distinct in that it needs long-term oversight from humans. Algorithms are trained for a specific moment in time, and as the world changes around them, they perform differently, sometimes for the worse, and need monitoring and maintenance to make sure they still function as intended. Sometimes unexpected factors, like changes in overhead lighting, can make A.I. results "go completely wrong and haywire," he said.
Doctors are supposed to be included in the process to protect patients against those possibilities. "If I lose the skills, how am I going to spot the errors?" Dr. Ahmad asked.

Even if the tools were perfect, Dr. Wachter cautioned, deskilling could be dangerous for patients during the current transition period, when A.I. tools are not available in every health system and a doctor accustomed to using one might be asked by a new employer to function without it. And while the erosion of skill is obvious to someone looking at data from thousands of procedures, Dr. Wachter said, he doubted that each individual doctor noticed a change in their own ability.

It's still not entirely clear why a doctor's skills might decline so quickly while using A.I. One small eye-tracking study found that while using the A.I., doctors tended to look less at the edges of the image, suggesting that some of the muscle memory involved in reviewing a scan was altered by the tool. Dr. Ahmad said it might also be that, after months of relying on a helper, the cognitive stamina required to carefully evaluate each scan had atrophied.

Either way, medical education experts and health care leaders are already considering how to combat the effect. Some health systems, like UC San Diego Health, have recently invested in simulation training, which may be used to help doctors practice procedures without A.I. to keep their skills sharp, said Dr. Chris Longhurst, chief clinical and innovation officer at the health system.

Dr. Adam Rodman, director of A.I. programs at Beth Israel Deaconess Medical Center in Boston, said some medical schools have also considered banning A.I. during students' first years of training. If just three months of using an A.I. tool could erode the skills of the experienced physicians in the study (on average, the doctors had been practicing for about 27 years), what would happen to medical students and residents who are just starting to develop those skills?

"We're increasingly calling it never-skilling," Dr. Rodman said.
[3]
What might an AI de-skilling effect mean for enterprises?
It's often difficult to assess claims that AI will improve the detection rates of problems by medical specialists like radiologists or endoscopists. Vendors are quick to point out how AI does a better job, but the evidence usually turns out to come from carefully controlled circumstances rather than messy real-world situations. Now we can add AI de-skilling to the list of cautions around adopting these tools.

Researchers in Poland recently reported discovering a de-skilling risk across four hospitals taking part in the Artificial Intelligence in Colonoscopy for Cancer Prevention (ACCEPT) trial. The team was looking at the adenoma detection rate (ADR), the percentage of procedures in which an endoscopist discovers a precancerous lesion. The higher the rate, the better, since catching tumors before they turn cancerous can save lives. Initially, the endoscopists had an average ADR of 28.4%, but after using AI, this average dropped to 22.4%.

Indeed, gastroenterologist Marcin Romańczyk at the Academy of Silesia in Poland, who worked on the study, says he was surprised by the findings:

When we looked first at the outcomes, I was like 'whoa.' We did a lot of analysis for one year and discussed it within the team. More or less, everyone was surprised. Even some who were expecting something might happen were surprised by the findings.

Popular media interpretations of AI studies sometimes claim that AI can detect problems far more accurately than medical experts like endoscopists or radiologists. The Polish researchers came across one prior meta-analysis of multiple studies that reported an 8.1% boost in ADR. One explanation for the relative improvement of AI assistance over unassisted performance is that the two were compared after the de-skilling effect had already occurred. In this study, for example, AI-assisted colonoscopies achieved an ADR of 25.3%, which was 2.9 percentage points higher than the post-exposure unassisted rate, yet 3.1 points lower than what the same doctors achieved on their own before ever being exposed to AI (see the sketch below).

In the endoscopy workflow, the analysis goes on in real time, unlike radiology, where it happens after the fact. When a polyp is detected, the endoscopist has the option of either removing it or taking a biopsy if there is a concern that it has turned malignant. When the researchers started, they were not sure what to expect: endoscopists might learn to see more polyps, or might subconsciously become lazier and more reliant on the AI.

Not all endoscopists got worse, though; some were stable or even improved. Romańczyk says further research is required to see how they differ in behavior from the ones that showed the most decline:

We don't know anything about our interactions. We don't know what's happening in our brains, even if it's so simple. It's basically a pointer showing us where we should look, but what's happening inside of us is a bit more complicated. We need to check what the reasons are for that, how our relationship with AI works, how we can measure it, assess it, and then improve it.

The researchers were not able to compare the duration of procedures across the three conditions: before AI was used, with AI assistance, and unassisted after working with the AI. Romańczyk says they hope to investigate this in the future.

Romańczyk is still hopeful about the future of AI despite the de-skilling risks:

We are all a bit excited about new technology. We like new devices, better image quality, better endoscopes and so on. So once AI becomes available, then of course we want to check how it works. I think it's quite reasonable to want to get as much as possible from the AI itself.
Not just as a customer, but also having some input into the research. The most important part is to collect as many new projects and options as possible to assess what is happening. We haven't turned off and shut down AI. I am pretty sure that once you read our article, you will not turn off AI in your browser or on your phone. We know it's the direction we will eventually be leading to. We need to do our best to understand the mechanisms, how we can modify them, and push ourselves to take advantage of technology, and how to select individuals that could be modified in the future, like overreliance on AI. I think this applies to any new technology.

Vendors suggest deskilling risks need to be balanced against better patient outcomes and the development of new skills. Muthu Alagappan, MD, co-founder and CEO of Counsel Health, a medical AI vendor, says we need to proceed cautiously while looking for new up-skilling opportunities:

The concern about de-skilling doctors brought up by the Lancet paper is meaningful. We must accept that there may be inevitable deskilling in current domains, but as a result, there may be up-skilling and empowerment in new domains of medical practice. For example, the advent of Google Maps has made humans less skilled at turn-by-turn directions but has made us more skilled at quickly exploring new cities. Gathering metrics to measure this skill transition in healthcare is critical to ensuring the AI remains a supportive diagnostic tool, not a hindrance.

In terms of de-skilling countermeasures, Alagappan envisions a future where AI and physicians work together to deliver high-quality, high-value care. One form this might take is AI offering recommendations that the physician chooses to accept, reject, or modify, turning physicians into the editors rather than the authors of care. Alagappan argues this keeps the clinician actively engaged in critical thinking while still benefiting from an AI's speed and breadth of knowledge:

The result is that physicians should have far more clinical capacity, and as a result, patients will have better access to care.

The endoscopists' findings point toward broader implications outside of medicine that might be harder to measure and assess than ADR. Indeed, some researchers have theorized that AI might change behavior in different ways and may even prevent users from recognizing these deleterious effects. Romańczyk says many of these same de-skilling effects need to be studied in other fields, like AI-assisted software development:

An important part is that we need to support research. All of these factors would be present for developers. If I were a developer, I would do my best to cover all aspects of the system, like integrations, understanding how people use it, and how they are going to act. I would also have researchers investigate this topic. These deskilling effects might be a disappointment, especially for developers optimistic about AI systems. However, I don't think they need to reconsider whether they need it or not. They need to find a solution to use it as successfully as possible.

There has been plenty written on how AI might cause a great unravelling in the talent development pipeline as it automates away entry-level jobs. In software development, it may also introduce new bugs and security vulnerabilities that outweigh any benefits from vibe coding. But there has been comparatively little discussion of AI deskilling risks.
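To make the baseline point above concrete, here is a small sketch (my own arithmetic, using the ADRs reported in this article) showing how the same AI-assisted performance can look like a gain or a loss depending on which unassisted baseline it is compared against:

```python
# Illustrative arithmetic with the ADRs reported in the ACCEPT analysis.
pre_ai_unassisted = 0.284    # unassisted ADR before any AI exposure
post_ai_unassisted = 0.224   # unassisted ADR after AI exposure
post_ai_assisted = 0.253     # AI-assisted ADR

# Compared with the post-exposure baseline, AI looks like a clear win...
print(f"vs post-exposure baseline: {post_ai_assisted - post_ai_unassisted:+.1%}")  # +2.9%

# ...but compared with what the doctors did before AI arrived, it's a net loss.
print(f"vs pre-exposure baseline:  {post_ai_assisted - pre_ai_unassisted:+.1%}")   # -3.1%
```

The design choice of baseline, in other words, can flip the headline conclusion of an AI-assistance study.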
Some metrics, like ADR, are relatively straightforward to measure and compare. But how do you measure more nuanced, qualitative changes, such as an expert's ability to understand and think through systemic problems? Alagappan's suggestion that AI might turn doctors into editors rather than authors of care is one approach. It would certainly speed up medical processes, reduce backlogs for understaffed hospitals, and make the bean counters happy. But it also has the potential to pressure doctors to go faster and spend less time thinking through problems. We will need to be cautious to adopt these tools in ways that do not erode the skills required to balance AI's limitations.
[4]
Is AI making doctors lazy? Study reveals overreliance may be undermining their critical skills
AI in healthcare: As artificial intelligence becomes more common in hospitals and clinics, a new concern is emerging that doctors may be losing critical skills the more they rely on these tools. A recent study published in The Lancet Gastroenterology & Hepatology suggests that AI, while helpful in the moment, might be quietly reshaping how doctors perform, and not always in good ways. The research focused on endoscopists performing colonoscopies and found that their ability to detect abnormalities dropped after losing access to AI assistance. The Cleveland Clinic describes a colonoscopy as an examination of the inside of the large intestine (colon). Dr. Marcin Romańczyk, a gastroenterologist at H-T Medical Center in Tychy, Poland, led the study, and what he found was unexpected, according to a Fortune report.

The study observed 1,443 patients who underwent colonoscopies with and without the help of AI tools. Before the AI system, which highlighted possible polyps with a green box on the screen, was introduced, doctors detected abnormalities at a rate of 28.4%, as per Fortune. But when the same doctors later performed procedures without the AI, their detection rate fell to 22.4%, a roughly 20% relative decrease in detection rates, according to the report.

Romańczyk and his team did not collect data on why this happened, as they hadn't anticipated the decline. But he has a theory: the doctors became too accustomed to relying on the green box, and without it, the specialists no longer knew exactly where to pay attention. He compared it to how people navigate with GPS today, a shift he calls the "Google Maps effect": drivers have transitioned from the era of paper maps to that of GPS, and most people now rely on automation to show the most efficient route, when 20 years ago one had to work out that route for oneself, as reported by Fortune.

Romańczyk explained, "We were taught medicine from books and from our mentors. We were observing them. They were telling us what to do," adding, "And now there's some artificial object suggesting what we should do, where we should look, and actually we don't know how to behave in that particular case," as quoted in the report.

The findings highlight not only the potential complacency that develops from overreliance on AI, but also the changing relationship between medical practitioners and a longstanding tradition of analog training, as reported by Fortune. The study contributes to a growing body of research questioning humans' ability to use AI without compromising their own skill set, according to the report.

AI is increasingly being used in hospitals and doctors' offices, and it is rapidly reshaping workplaces in the hope of enhancing performance, as per Fortune. Last year, Goldman Sachs forecast that the technology could increase productivity by 25%, reported Fortune. However, emerging research has also warned of the challenges of adopting AI tools: a study from Microsoft and Carnegie Mellon University earlier this year found that among surveyed knowledge workers, AI increased work efficiency but reduced critical engagement with content, eroding judgment skills, as reported by Fortune.
Romańczyk doesn't suggest keeping AI out of medicine, pointing out that "AI will be, or is, part of our life, whether we like it or not," adding, "We are not trying to say that AI is bad and [to stop using] it. Rather, we are saying we should all try to investigate what's happening inside our brains, how we are affected by it? How can we actually effectively use it?" as quoted in the report.

Lynn Wu, associate professor of operations, information, and decisions at the University of Pennsylvania's Wharton School, emphasised that "We have to maintain those critical skills, such that when AI is not working, we know how to take over," as quoted by Fortune.

Is AI helping or hurting doctors? AI helps improve efficiency and accuracy, but overreliance may reduce doctors' own decision-making skills.

What is the "Google Maps effect"? It's when people become so dependent on tools like GPS (or AI) that they lose their own navigation or thinking skills.
[5]
AI may revolutionise healthcare, but at the cost of doctors' skills, says Lancet
A recent study in Poland suggests that relying too heavily on AI in colonoscopies may negatively affect doctors' skills. Researchers found a decrease in adenoma detection rates after AI tools were introduced, raising concerns that over-reliance on AI could diminish clinicians' focus and responsibility despite its promise in healthcare.

Artificial intelligence has become a trusted ally in modern medicine, helping doctors make quicker and more accurate decisions. From spotting tumours on scans to predicting treatment outcomes, AI has shown remarkable potential. But a new study published in The Lancet Gastroenterology & Hepatology has raised an uncomfortable question: could too much reliance on AI actually weaken doctors' own skills?

The study was carried out across four colonoscopy centres in Poland, where AI tools were introduced in late 2021 to detect polyps, small growths in the colon that can develop into cancer. Researchers noticed something surprising: the average detection rate of adenomas (benign growths that can turn cancerous) dropped from 28% before AI exposure to 22% after AI exposure. That is a 20% relative and 6% absolute reduction (the short arithmetic note below reconciles the two figures), suggesting that doctors who regularly used AI may have become less sharp when performing colonoscopies without it.

"To our knowledge, this is the first study to suggest a negative impact of regular AI use on healthcare professionals' ability to complete a patient-relevant task in medicine of any kind," said Dr Marcin Romańczyk of the Academy of Silesia. He warned that with AI rapidly spreading in healthcare, urgent research is needed to understand how it affects doctors' long-term skills.

The findings also raised doubts about earlier randomised controlled trials, many of which reported higher adenoma detection rates with AI-assisted colonoscopy. According to co-author Yuichi Mori of the University of Oslo, those trials may have overlooked a crucial detail: repeated AI use could subtly dull doctors' performance during standard, non-AI procedures. The researchers argue that overexposure to decision-support systems may encourage a natural human tendency: over-reliance. This can make clinicians less focused, less motivated, and ultimately less responsible for outcomes.

Not everyone views the findings as cause for alarm. Dr Vidur Mahajan, founder and CEO of CARPL.AI, argued that the focus should be on lifting average doctors to world-class levels rather than worrying about skill erosion. "Technology is an inevitable part of our lives and we must embrace the advantages of it by enabling the democratisation of it," he said. Drawing an analogy, he added: "Imagine a world without Google Maps, would you trust a driver who does not use it?"

The study, funded by the European Commission, the Japan Society for the Promotion of Science, and the Italian Association for Cancer Research, underscores a critical dilemma: while AI promises to make healthcare safer and smarter, it may also carry hidden risks if doctors start trusting machines more than their own judgement. (Inputs from TOI)
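Since "20%" and "6%" above describe the same drop, a brief note on the arithmetic: the absolute reduction is measured in percentage points, while the relative reduction compares the drop to the starting rate. A minimal sketch (my own, using the study's reported rates):

```python
# Reconciling the "6% absolute" and "20% relative" figures from the study.
adr_before, adr_after = 0.284, 0.224   # ADRs before and after AI exposure

absolute_drop = adr_before - adr_after        # 0.060 -> 6 percentage points
relative_drop = absolute_drop / adr_before    # ~0.21 -> roughly a 20% relative fall

print(f"absolute: {absolute_drop:.1%} points, relative: {relative_drop:.0%}")
```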
A new study suggests that reliance on AI tools in medical procedures like colonoscopies may lead to a decrease in doctors' skills, raising questions about the long-term impact of AI adoption in healthcare.
Artificial Intelligence (AI) has been hailed as a revolutionary force in healthcare, promising improved efficiency and accuracy in medical diagnoses and procedures. However, a recent study published in The Lancet Gastroenterology & Hepatology has raised concerns about a potential downside to this technological advancement: the "deskilling" of medical professionals [1].
The Artificial Intelligence in Colonoscopy for Cancer Prevention (ACCEPT) trial, conducted across four hospitals in Poland, has revealed a surprising trend. After introducing AI tools to assist in colonoscopy procedures, researchers observed a significant decline in the adenoma detection rate (ADR) when doctors later performed procedures without AI assistance [2].

Before AI implementation, endoscopists had an average ADR of 28.4%. However, this rate dropped to 22.4% when performing standard, non-AI-assisted colonoscopies after exposure to AI tools. This represents a 20% relative decrease and a 6% absolute reduction in detection rates [3].
Dr. Marcin Romańczyk, a lead researcher in the study, hypothesizes that this decline may be attributed to a phenomenon he calls the "Google Maps effect." Just as reliance on GPS navigation can erode natural navigation skills, overreliance on AI in medical procedures might lead to a deterioration of critical observational and decision-making abilities in healthcare professionals [4].

This study's findings challenge the prevailing narrative about AI's unequivocal benefits in healthcare. While previous studies have shown improvements in detection rates with AI assistance, the ACCEPT trial suggests that these gains might come at the cost of diminishing human skills over time [5].

The researchers emphasize that their goal is not to discourage AI adoption but to highlight the need for a more nuanced understanding of human-AI interaction in medical settings. Dr. Romańczyk states, "AI will be, or is, part of our life, whether we like it or not. We are not trying to say that AI is bad and [to stop using] it. Rather, we are saying we should all try to investigate what's happening inside our brains, how we are affected by it? How can we actually effectively use it?" [4]
As AI continues to permeate various aspects of healthcare, the medical community faces the challenge of harnessing its benefits while mitigating potential risks. Some proposed solutions include:

- Further research into how clinicians interact with AI, and how skill changes can be measured, assessed, and improved [3]
- Urgent study of AI's long-term effects on healthcare professionals' skills as adoption spreads [5]
- Simulation training to let doctors practice procedures without AI, and limits on AI use during the earliest years of medical training [2]

The ACCEPT trial's findings serve as a crucial reminder that while AI holds immense promise for advancing healthcare, its integration must be approached thoughtfully. As the medical community grapples with these challenges, the ultimate goal remains clear: to leverage AI's capabilities in a way that enhances, rather than diminishes, the irreplaceable human expertise at the heart of patient care.