Curated by THEOUTPOST
On Tue, 7 Jan, 4:06 PM UTC
6 Sources
[1]
AI boosts breast cancer detection rates while cutting radiologist workload
By Tarun Sai Lomte. Reviewed by Susha Cheriyedath, M.Sc. Jan 8 2025

AI-powered tools in mammography screening deliver groundbreaking improvements in cancer detection, helping radiologists catch more cancers early while reducing unnecessary patient recalls.

Study: Nationwide real-world implementation of AI for cancer detection in population-based mammography screening. Image Credit: Gorodenkoff / Shutterstock

In a recent study published in the journal Nature Medicine, researchers examined the impact of artificial intelligence (AI) on cancer detection and recall rates.

Mammography screening contributes to reducing breast cancer-related mortality. Improving the sensitivity and specificity of screening could further lower interval cancer rates and recall rates and enable more effective treatment of breast cancer patients. Screening programs generate considerable volumes of mammograms, which, in most programs, require interpretation by two radiologists; a consensus conference may also be required to achieve high specificity and sensitivity. As such, radiologists' work involves the repetitive task of interpreting a vast number of images every week, and this workload will likely increase as recent guidelines recommend mammography screening for additional age groups.

Incorporating AI into cancer screening programs could mitigate some of these problems. Studies suggest that AI matches, and sometimes exceeds, radiologists in accuracy, and several have observed increases in cancer detection for workflows integrating AI, despite inconsistent results regarding recall rates. Nonetheless, the authors of this study emphasized that smaller samples and limited heterogeneity in radiologists, screening sites, and equipment vendors in these earlier studies restrict their generalizability.
The Study and Findings

Enhanced detection of DCIS: AI integration led to a notable increase in detecting ductal carcinoma in situ (DCIS), with detection rates rising from 0.8 per 1,000 women in the control group to 1.4 per 1,000 in the AI group. This could represent earlier cancer detection but raises concerns about overdiagnosis.

In the present study, researchers assessed the impact of AI on cancer recall and detection rates. The study was conducted within a breast cancer screening program in Germany targeting asymptomatic individuals aged 50-69. Data were collected from multiple screening sites implementing the AI system between July 2021 and February 2023.

In the screening program, four mammograms were acquired for each participant and initially read by two independent radiologists. If either radiologist deemed the case suspicious, a consensus conference was held; if suspicious findings persisted in the conference, the participant was recalled for additional assessments.

Examinations were included in the AI group when at least one radiologist read and submitted the report using the AI-supported viewer; examinations not submitted using the AI-supported viewer formed the control group. Radiologists could use either their existing (non-AI-based) software or the AI-supported viewer. The AI system, Vara MG, provided two critical features: normal triaging, which flagged highly unsuspicious examinations as normal, and a safety net, which highlighted highly suspicious cases and localized the suspicious regions. The safety net aimed to reduce missed diagnoses by prompting radiologists to review unsuspicious findings flagged by AI.

In total, 461,818 females who underwent mammography screening were included, and 119 radiologists interpreted the examinations. Of these, 260,739 were included in the AI group and 201,079 in the control group. Around 42 per 1,000 females had suspicious findings and were recalled for additional assessments.
Around one-fourth of them underwent biopsies, and over six females per 1,000 were diagnosed with breast cancer. The AI system classified 59.4% of examinations in the AI group as normal, significantly reducing the radiologists' workload. The safety net was triggered for 1.5% of examinations in the AI group, leading to 541 recalls and 204 cancer diagnoses. Additionally, 3.1% of AI-group examinations flagged as normal by AI underwent further evaluation by the consensus group, which resulted in 20 additional cancer diagnoses.

The breast cancer detection rate (BCDR) was 6.7 and 5.7 per 1,000 females for the AI and control groups, respectively. The AI group had a statistically higher BCDR and a slightly lower recall rate than the control group. The AI and control groups had positive predictive values (PPVs) of recall of 17.9% and 14.9%, respectively. The AI group had an 8.2% higher biopsy rate than the control group but also a higher PPV of biopsy (64.5% versus 59.2%).

Broader Implications and Future Considerations

AI workload reduction: AI classified 59.4% of mammograms in the AI group as normal, leading to a potential 43% reduction in radiologists' time spent interpreting normal cases and freeing up capacity for more complex analyses.

The study highlighted that integrating AI into screening workflows could increase the detection of ductal carcinoma in situ (DCIS) cases. While this may represent earlier detection, concerns about overdiagnosis and overtreatment of DCIS were noted, as these cases may not always progress to invasive cancer. The long-term impact on interval cancer rates and stage distribution requires further follow-up over two to three years. Additionally, the researchers emphasized that rejected safety net cases represent a crucial area for further analysis, as they may include missed opportunities to detect cancers early or demonstrate the value of reducing unnecessary recalls.
Taken together, the AI approach for mammography screening provided confident suspicious and confident normal predictions. The BCDR in the AI group was 17.6% higher than in the control group. AI use also resulted in a slightly lower recall rate, although the difference was not statistically significant. These findings add to the evidence that AI-assisted mammography screening is safe and feasible and can reduce workload.

Journal reference: Eisemann N, Bunk S, Mukama T, et al. Nationwide real-world implementation of AI for cancer detection in population-based mammography screening. Nature Medicine, 2025. DOI: 10.1038/s41591-024-03408-6. https://www.nature.com/articles/s41591-024-03408-6
[2]
AI improves mammography cancer detection rates in large cohort study
An observational, multicenter, real-world study conducted at 12 screening sites in Germany has reported a 17.6% higher cancer detection rate among women aged 50-69 who received AI-supported double-reading mammography screenings compared to those who received standard double reading. Recall rates remained unchanged.

Mammography screening programs often rely on double reading to identify breast cancer at earlier stages. Radiologists face substantial workloads interpreting mammograms, most of which show no signs of cancer. Screening centers struggle to provide efficient and accurate assessments, a problem made more urgent by a growing shortage of trained radiologists. Many breast cancers elude early detection only to be diagnosed at later stages, reflecting ongoing issues with current screening methods. False positive results burden both participants and health care systems with needless worry and unnecessary follow-up (recall) appointments. Efforts to boost early detection sensitivity and lower unnecessary false positives are top priorities.

In a study titled "Nationwide real-world implementation of AI for cancer detection in population-based mammography screening," published in Nature Medicine, researchers compared screening results of a large cohort with and without AI-assisted prediction software. Investigators enrolled 463,094 women in the German mammography screening program (PRAIM study); after exclusions, participants were divided into an AI group (260,739) and a control group (201,079). AI-based software classified certain examinations as normal and triggered a "safety net" alert for high-suspicion cases.

Results showed a higher breast cancer detection rate of 6.7 per 1,000 in the AI group compared to 5.7 per 1,000 in the control group. The recall rate was 37.4 per 1,000 with AI and 38.3 per 1,000 without AI.
Recall rate measures how many participants return for further tests and includes correct initial detections as well as false positives. Positive predictive value, the proportion of suspicious findings that truly represent cancer, reached 17.9% for AI-supported reading versus 14.9% in the control group. The positive predictive value of follow-ups resulting in biopsy was 64.5% in AI-assisted screening, compared to 59.2% under standard double reading. Although false positives were slightly lower with AI, the researchers considered them comparable to those of existing double-reading methods. That AI-supported mammography detected more cancers without increasing recall rates in a large cohort is the more critical threshold for recommending further development and wider use of the technology.
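The reported positive predictive value follows directly from the per-1,000 figures. A minimal back-of-envelope sketch (not the paper's propensity-weighted model):

```python
# PPV of recall = cancers detected / women recalled, here computed from
# the per-1,000 rates reported for the AI group (rough check only).

recall_rate = 37.4     # recalls per 1,000 women screened (AI group)
detection_rate = 6.7   # cancers detected per 1,000 women screened (AI group)

ppv_recall = detection_rate / recall_rate
print(f"PPV of recall: {ppv_recall:.1%}")  # ~17.9%, matching the reported value
```

The same ratio applied to the control group (5.7 / 38.3) gives roughly 14.9%, consistent with the article's figures.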
[3]
AI boosts breast cancer detection in nationwide screening study in Germany
Breast cancer detection could get a boost from artificial intelligence. When AI helped examine mammograms, doctors caught one more cancer case per 1,000 screened individuals compared with when they didn't use the technology, researchers report January 7 in Nature Medicine. The largest real-world study on AI's potential for breast cancer screening, which included nearly 500,000 women in Germany, suggests that the software could streamline the screening process without affecting the rate of false alarms. "AI in mammography screening is at least as good as a human reader, and our study shows it's even better," says cancer epidemiologist Alexander Katalinic of the University of Lübeck in Germany. Germany's breast cancer screening program requires two radiologists to independently assess each patient's mammograms and look for spots, abnormal masses and other peculiarities. (U.S. clinics mostly rely on one physician.) If at least one doctor suspects cancer in the four X-ray images, which are compared with the patient's previous screening, a third radiologist helps determine if the individual needs more tests. "We have 3 million women participating each year in this project, and 24 million pictures have to be read every year," Katalinic says. "That's a big workload for the radiologists." To see if AI could lighten the load, decision referral software was installed at 12 screening sites across the country. More than 460,000 women ages 50 to 69 took part in the study from July 2021 through late February 2023. AI tagged the mammograms as normal, suspicious or unclassified. The 119 participating radiologists chose to use an AI-supported image viewer, which revealed the software's assessment, for roughly half of the women's screenings. Without AI's assistance, clinicians identified about six breast cancer cases, confirmed via biopsy, per 1,000 patients during screening. 
Doctors found around seven cases with the software's help, leading to a 17.6 percent higher breast cancer detection rate with AI. Compared with patients who received traditional screenings, the group checked with AI's help had slightly fewer false positives -- when a doctor suspects cancer, but further tests provide an all clear. How the AI would best fit into radiologists' workflow remains an open question, but it could replace one of the initial readers, says Stefan Bunk, chief technology officer and cofounder of Vara, the health care technology company in Berlin that developed the AI. "This discussion should now start."
[4]
More breast cancer cases found when AI used in screenings, study finds
First real-world test finds approach has higher detection rate without having a higher rate of false positives

The use of artificial intelligence in breast cancer screening increases the chance of the disease being detected, researchers have found, in what they say is the first real-world test of the approach. Numerous studies have suggested AI could help medical professionals spot cancer, whether it is identifying abnormal growths in CT scans or signs of breast cancer in mammograms. However, many studies are retrospective - meaning AI is not involved at the outset - while trials taking the opposite approach often have small sample sizes. Importantly, larger studies do not necessarily reflect real-world use. Now researchers say they have tested AI in a nationwide screening programme for the first time, revealing it offers benefits in a real-world setting.

Prof Alexander Katalinic, a co-author of the study from the University of Lübeck in Germany, said: "We could improve the detection rate without increasing the harm for the women taking part in breast cancer screening," adding the approach could also reduce the workload of radiologists. Katalinic and his colleagues analysed data from 461,818 women in Germany who underwent breast cancer screening between July 2021 and February 2023 as part of a national programme targeting asymptomatic women aged 50-69. All of the women had their scans independently examined by two radiologists. However, for 260,739 of the women, at least one of the experts used an AI tool to support them.

The AI tool not only visibly labels scans it deems unsuspicious as "normal", but also issues a "safety net" alert when a scan it rates as suspicious has been judged unsuspicious by the radiologist. In such a case the tool also highlights the area of the scan it suggests merits scrutiny. Overall, 2,881 of the women in the study, which is published in the journal Nature Medicine, were diagnosed with breast cancer.
The detection rate was 6.7% higher in the AI group. However, after taking into account factors such as age of the women and the radiologists involved, the researchers found this difference increased, with the rate 17.6% higher for the AI group at 6.70 per 1,000 women compared with 5.70 per 1,000 women for the standard group. In other words, one additional case of cancer was spotted per 1,000 women screened when AI was used. Crucially, the team said the rate at which women were recalled for further investigation as a result of a suspicious scan was approximately the same. "In our study, we had a higher detection rate without having a higher rate of false positives," said Katalinic. "This is a better result, with the same harm." The team said the tool's "safety net" was triggered 3,959 times in the AI group, and led to 204 breast cancer diagnoses. By contrast, 20 breast cancer diagnoses in the AI group would have been missed had clinicians not examined the scans deemed "normal" by AI. Stefan Bunk, another co-author and a co-founder of Vara, the company that built the AI tool, said the technology increased the speed at which radiologists examined scans flagged as "normal", adding calculations showed even if these scans were not reviewed by experts the overall breast cancer detection rate would be higher and the recall rate lower than without the tool. That, he said, meant fewer false positives for women and a reduced workload for radiologists. Stephen Duffy, emeritus professor of cancer screening at Queen Mary University of London, who was not involved in the work, said the results are credible and impressive. "Here in the UK, there is specific interest in whether use of AI plus a single radiologist can safely replace reading by two radiologists. The sooner this is researched definitively the better," he said. 
Dr Kristina Lång, of Lund University, said the study adds to the growing body of evidence supporting the potential benefits of incorporating AI into mammography screening. But she added that the large increase in detected in situ cancers raises concerns, as these cancers are more likely to be slow growing and may contribute to the overdiagnosis burden of screening. "Long-term follow-up is essential to fully understand the clinical implications of integrating AI into mammography screening," she said. "The results are encouraging, but it is essential to ensure that we implement a method capable of detecting clinically relevant cancers at an early stage, where early detection can meaningfully improve patient outcomes." Dr Katharine Halliday, the president of the Royal College of Radiologists, said the organisation's most recent census showed a 29% shortfall of radiologists in the NHS. "Any tools that can boost our accuracy and productivity are welcome. But, while the potential benefits are significant, so are the potential risks," she said. "It is vital that deployment of AI into the NHS is done carefully, with expert oversight."
[5]
AI helps radiologists spot breast cancer in real-world tests
Whether AI can assist in cancer detection has been subject to much debate, but now a real-world test with almost 200 radiologists shows that the technology can improve success rates

Artificial intelligence models really can help spot cancer and reduce doctors' workload, according to the largest study of its kind. Radiologists who chose to use AI were able to identify an extra 1 in 1000 cases of breast cancer. Alexander Katalinic at the University of Lübeck, Germany, and his colleagues worked with almost 200 certified radiologists to test an AI trained to identify signs of breast cancer from mammograms. The radiologists examined 461,818 women across 12 breast cancer screening sites in Germany between July 2021 and February 2023, and for each person could choose whether or not to use AI. This resulted in 260,739 women being checked by AI plus a radiologist, with the remaining 201,079 checked by a radiologist alone.

Those who elected to use AI detected breast cancer at a rate of 6.7 instances in every 1000 scans - 17.6 per cent higher than the 5.7 per 1000 scans among those who chose not to use AI. Similarly, when women underwent biopsies following a suspected diagnosis of cancer, 64.5 per cent of biopsies in the AI group found cancerous cells, compared with 59.2 per cent where AI wasn't used. The scale at which AI improved detection of breast cancer was "extremely positive and exceeded our expectations", said Katalinic in a statement. "We can now demonstrate that AI significantly improves the cancer detection rate in screening for breast cancer."

"The goal was to show non-inferiority," says Stefan Bunk at Vara, an AI company also involved in the study. "If we can show AI is not inferior to radiologists, that's an interesting scenario to save some workload. We were surprised we were able to show superiority."
Over-reliance on AI in medicine has worried some because of the risk it could miss some signs of a condition, or could lead to a two-track system of treatment where those who can pay are afforded the luxury of human interaction. There was some evidence that radiologists spent less time examining scans that AI had already suggested were "normal" - meaning cancer wasn't likely to be present - reviewing them for an average of 16 seconds, compared with 30 seconds on those that the AI couldn't classify. But these latest findings have been welcomed by those specialising in the safe deployment of AI in medicine. "The study offers further evidence for the benefits of AI in breast screening and should be yet another wake-up call for policymakers to accelerate AI adoption," says Ben Glocker at Imperial College London. "Its results confirm what we have been seeing again and again: with the right integration strategy, the use of AI is both safe and effective." He welcomes the way the study allowed radiologists to make their own decisions about when to use AI, and would like to see more tests of AI performed in a similar way. "We cannot easily assess this in the lab or via simulations and instead need to learn from real-world experience," says Glocker. "The technology is ready; we now need the policies to follow."
[6]
Nationwide real-world implementation of AI for cancer detection in population-based mammography screening - Nature Medicine
In a retrospective analysis, Leibig et al.18 demonstrated that the use of AI in a decision referral approach, in which AI confidently predicts normal or highly suspicious examination results and refers uncertain results to the radiologists' expertise, yielded superior metrics than AI or radiologists alone. In the PRAIM (PRospective multicenter observational study of an integrated AI system with live Monitoring) implementation study embedded in the German mammography screening program, we investigated whether the performance metrics achieved by double reading using an AI-supported CE (Conformité Européenne)-certified medical device with a decision referral approach were noninferior to those achieved by double reading without AI support in a real-world setting. Here, we report the impact of AI on cancer detection and recall rates. The study was conducted within Germany's organized breast cancer screening program targeting asymptomatic women aged 50-69 years (Fig. 1). All women participating in the screening program were eligible for study inclusion. Between July 1, 2021, and February 23, 2023, data from screening participants were collected from 12 screening sites that used the AI system (Extended Data Table 1). In the German mammography screening program, which is based on a binding national guideline, four two-dimensional mammograms (craniocaudal and mediolateral oblique views of each breast) are taken for each participating woman. These mammograms are initially read independently by two radiologists (sometimes, a third radiologist supervises). If at least one radiologist deems the case suspicious, a consensus conference is held. The participants of the consensus conference are at least the two initial readers and one head radiologist, but more radiologists of the screening site can participate. 
If the suspicious finding persists in the consensus conference, the woman is recalled for further diagnostic assessments, which can include, among others, ultrasonography, digital breast tomosynthesis, magnification views, contrast-enhanced mammography or magnetic resonance imaging.

For the study, examinations were assigned to the AI group when at least one of the two radiologists read and submitted the report with the AI-supported viewer. All examinations for which neither radiologist submitted the report using the AI-supported viewer formed the control group. The study group assignment was unknown to the women and radiographers as it was not yet assigned at the time of image acquisition. After image acquisition, AI predictions were computed for all women but were displayed only to radiologists using the AI-supported viewer. The radiologists performing the first and second reads were free to use either their existing reporting and viewer software without AI support or the AI-supported viewer. The decision to use the AI-supported viewer was made on a per-examination basis (that is, one radiologist typically delivered examinations for both the AI and control groups). Radiologists in a reader set independently chose whether to use the AI-supported viewer. The AI results were not disclosed to the other radiologist if they did not also choose to use the AI viewer.

The AI system used was Vara MG (from the German company Vara), a CE-certified medical device designed to display mammograms (viewer software) and preclassify screening examinations to assist radiologists in their reporting routine. The performance of previous versions of the AI software has been previously reported. When using the AI-supported viewer, radiologists were supported by two AI-based features (Extended Data Fig. 1): normal triaging, which tags highly unsuspicious examinations as normal, and a safety net, which flags highly suspicious examinations.

Overall, 461,818 women who attended mammography screening at the 12 screening sites participated in the study.
A total of 119 radiologists constituting 547 reader sets interpreted the examinations. Mammography hardware systems from five different vendors were used (Extended Data Table 2). Of all the participating women, 260,739 were screened in the AI group (with the AI-supported viewer being used by only one reader for 152,970 women and by both readers for 107,769 women) and 201,079 were screened in the control group. Table 1 presents the characteristics of the screened women and the detected breast cancers by study group. Of the screened women, 41.9 per 1,000 had suspicious findings and were recalled for further assessment. A quarter of them (10.4 per 1,000) underwent biopsy procedures, and 6.2 per 1,000 were finally diagnosed with breast cancer. Most (79.4%) of the cancers were classified as invasive, and 18.9% were ductal carcinoma in situ (DCIS). AI tagged 56.7% (262,055 of 461,818) of the examinations as normal. This proportion was higher in the AI group (59.4%) than in the control group (53.3%; Table 2) due to an observed reading behavior bias. In the AI group (n = 260,739), the safety net was triggered for 3,959 (1.5%) examinations, shown in 2,233 (0.9%) examinations and accepted in 1,077 (0.4%) examinations, leading to 541 (0.2%) recalls and 204 (0.08%) breast cancer diagnoses. Conversely, 8,032 (3.1%) examinations in the AI group underwent further evaluation by the consensus group despite being tagged as normal by AI, resulting in 1,905 (0.7%) recalls, 82 (0.03%) biopsies and 20 (0.008%) subsequent breast cancer diagnoses. We controlled for the identified confounders (reader set and AI prediction; causal graph presented in Extended Data Fig. 2) through overlap weighting based on propensity scores (Extended Data Fig. 3). The model-based breast cancer detection rate (BCDR) per 1,000 women screened was 6.70 for the AI group and 5.70 for the control group. 
This represents a model-based absolute difference of one additional cancer per 1,000 screened women and a relative increase of 17.6% (95% confidence interval (CI): +5.7%, +30.8%). The BCDR in the AI group was considered noninferior and even statistically superior to that in the control group. The AI group had a lower model-based recall rate (37.4 per 1,000) than the control group (38.3 per 1,000), showing a -2.5% reduction (-6.5%, +1.7%) (Table 3). The positive predictive value (PPV) of recall was 17.9% in the AI group and 14.9% in the control group. The biopsy rate in the AI group was 8.2% higher (-0.4%, +17.6%) than in the control group. Despite this, the AI group demonstrated a statistically significantly higher PPV of biopsy (+9.0% (+2.0%, +16.4%)). Subgroup analyses showed that the BCDR increased in all subgroups by screening round, breast density and age, ranging between +12% and +23% (Table 4). The 95% CIs were completely positive for the subgroups of follow-up screening round, nondense breasts and age 60-69 years. The relative differences in recall rates in the subgroups varied between -5% (age 50-59 years) and +4% (age 60-69 years), but all CIs except for women aged 50-59 years contained zero. We conducted various sensitivity analyses, all of which showed that our analyses were robust to different analytical decisions. In a model that, in addition to AI prediction and reader set, further adjusted for age, screening round, breast density and supervision in the propensity score model, the BCDR remained unchanged at 17.6% (5.7%, 30.8%). Similarly, in the additionally adjusted model, the PPV of recall and biopsy was 18.3% (-7.3%, 30.5%) and 9.3% (0.5%, 18.8%) higher, respectively, for the AI group than the control group (Extended Data Table 3). The results of the subgroup analyses by age group, screening round and breast density did not change meaningfully following additional adjustments. 
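The overlap-weighting adjustment described above can be illustrated with a toy simulation. This is not the authors' code: the confounder, propensity model and effect sizes are invented, and a real analysis would estimate the propensity score (e.g. by logistic regression on reader set and AI prediction) rather than use the true one.

```python
# Toy illustration of overlap weighting for confounder adjustment.
# Treated units are weighted by 1 - e(x), controls by e(x), where e(x)
# is the propensity score; this balances the confounder across groups.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)               # confounder (e.g. reader-set tendency)
e = 1 / (1 + np.exp(-x))             # propensity to end up in the AI group
treated = rng.random(n) < e
# Toy detection probability: +0.001 true AI effect, confounded by x.
outcome = 0.006 + 0.001 * treated + 0.002 * x

w = np.where(treated, 1 - e, e)      # overlap weights

naive = outcome[treated].mean() - outcome[~treated].mean()
adjusted = (np.average(outcome[treated], weights=w[treated])
            - np.average(outcome[~treated], weights=w[~treated]))
print(f"naive difference: {naive:.4f}, overlap-weighted: {adjusted:.4f}")
```

The naive group difference is inflated by the confounder, while the overlap-weighted difference recovers the simulated +0.001 effect, which is the role this adjustment plays in the paper's BCDR comparison.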
Sensitivity analyses in which we adjusted for each reader individually instead of the reader set also provided data similar to the main results: in the AI group, the BCDR was 19.0% (7.4%, 31.8%) higher and the recall rate was -1.5% (-5.4%, 2.6%) lower, indicating that the results were robust to the different parameterization of the reader set variable. The results were robust toward sampling error, as they remained nearly unchanged when the study sample was varied (bootstrapping and 80% random subset selection, each done 1,000 times): the mean BCDR was 17.6% (5.7%, 30.8%) for bootstrapping and 17.4% (11.4%, 23.8%) for the subset selection. A propensity score-based alternative to overlap weighting is inverse propensity score weighting with trimming. After applying various trimming thresholds (Extended Data Table 4), the results remained similar. Another alternative to propensity score weighting as a method for confounder adjustment is stratification. Again, the results of sensitivity analyses including all confounder strata containing a certain minimum sample size (between 0 and 200) in each study group were in line with the main results. We conducted a placebo intervention analysis to check whether the AI effect observed in the main analysis would vanish (as it should) when there is only a placebo intervention while all assumptions of the model are kept (that is, in the presence of residual confounding due to the reading behavior). As expected, the average model-based difference was minimal (0.8% (-9.9%, 11.6%)), indicating no residual confounding. The average reading time per screening examination was measured in the AI group only as it was technically not possible to measure this in the control group. On average, examinations tagged as normal were read more quickly (median reading time, 16 s) than unclassified examinations (median reading time, 30 s) and safety net examinations (median reading time, 99 s). 
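The bootstrap robustness check can be sketched as follows. The group sizes are the study's, but the cancer outcomes are simulated at the reported unadjusted rates, so this ignores the paper's propensity weighting and is an illustration only.

```python
# Bootstrap sketch: resample the cohort 1,000 times and recompute the
# relative difference in breast cancer detection rate (BCDR).
import numpy as np

rng = np.random.default_rng(42)
n_ai, n_ctrl = 260_739, 201_079
cancers_ai = rng.random(n_ai) < 6.7e-3      # simulate ~6.7 cancers per 1,000
cancers_ctrl = rng.random(n_ctrl) < 5.7e-3  # simulate ~5.7 cancers per 1,000
p_ai, p_ctrl = cancers_ai.mean(), cancers_ctrl.mean()

# For a binary outcome, resampling individuals with replacement is
# equivalent to drawing the resampled cancer count from a binomial.
boot_ai = rng.binomial(n_ai, p_ai, size=1000) / n_ai
boot_ctrl = rng.binomial(n_ctrl, p_ctrl, size=1000) / n_ctrl
boot_rel = boot_ai / boot_ctrl - 1.0

point = p_ai / p_ctrl - 1.0
lo, hi = np.percentile(boot_rel, [2.5, 97.5])
print(f"relative BCDR difference: {point:+.1%} (95% CI {lo:+.1%}, {hi:+.1%})")
```

A stable point estimate whose bootstrap interval excludes zero is what the authors mean by the result being robust to sampling error.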
Overall, radiologists spent 43% less time interpreting examinations tagged as normal, with a mean reading time of 39 s for normal examinations compared to 67 s for examinations not tagged as normal (Extended Data Fig. 4). To evaluate the potential of AI integration to reduce reading workload through automation, we analyzed a fictitious scenario in which the screening examinations triaged as normal by AI were not read by radiologists. Rather, after an AI prediction of 'normal', the examination directly received the final classification 'normal', and thus, it would not be possible that any breast cancer signs missed by AI were detected by the radiologists, that a recall was made or that a cancer was detected. The analysis of this scenario showed that, when all normal-tagged examinations (56.7%) were automatically classified as normal, the BCDR was still higher and statistically superior by 16.7% (4.9%, 29.9%), the consensus rate was lower by -19.4% (-21.5%, -17.4%), the recall rate was statistically superior and lower by -15.0% (-18.6%, -11.2%), whereas the biopsy rate was higher by 5.8% (-2.7%, 15.0%) in the AI group than in the control group (Table 5).
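The automation scenario can be roughly cross-checked from the counts reported above: dropping the 20 cancers that consensus review found among AI-"normal" examinations still leaves the AI group ahead. This unweighted back-of-envelope (not the paper's model-based estimate) lands close to the reported +16.7%:

```python
# Rough check of the automation scenario: if exams the AI tags as normal
# were never read, the 20 cancers radiologists found among AI-"normal"
# exams would be lost. Figures are the article's; the calculation ignores
# the paper's propensity weighting, so it only approximates its 16.7%.

n_ai = 260_739                # women screened in the AI group
bcdr_ai = 6.7                 # cancers per 1,000, AI group
bcdr_ctrl = 5.7               # cancers per 1,000, control group
missed_if_automated = 20      # cancers found among AI-normal exams

cancers_ai = bcdr_ai / 1000 * n_ai
bcdr_scenario = (cancers_ai - missed_if_automated) / n_ai * 1000
rel_gain = bcdr_scenario / bcdr_ctrl - 1
print(f"scenario BCDR: {bcdr_scenario:.2f} per 1,000 ({rel_gain:+.1%} vs control)")
```

The small gap between this crude +16% and the paper's model-based +16.7% reflects the confounder adjustment the sketch omits.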
A nationwide study in Germany shows AI-assisted mammography screening significantly improves breast cancer detection rates without increasing false positives, potentially revolutionizing breast cancer screening practices.
A groundbreaking study conducted across 12 screening sites in Germany has demonstrated that artificial intelligence (AI) can significantly improve breast cancer detection rates in mammography screenings. The research, published in Nature Medicine, involved 461,818 women aged 50-69 and compared AI-assisted mammogram interpretation with standard double-reading practices 1.
The study, part of Germany's breast cancer screening program, divided participants into two groups: 260,739 in the AI group and 201,079 in the control group. In the AI group, at least one radiologist used an AI-supported viewer to interpret mammograms. The AI system, developed by Vara, classified examinations as normal, suspicious, or unclassified and provided a "safety net" feature to highlight highly suspicious cases 2.
Improved Detection Rates: The AI-assisted group showed a 17.6% higher breast cancer detection rate compared to the control group (6.7 vs 5.7 cases per 1,000 women) 3.
Maintained Recall Rates: Importantly, the recall rate (patients called back for additional tests) remained unchanged, with 37.4 per 1,000 in the AI group versus 38.3 per 1,000 in the control group 2.
Enhanced Positive Predictive Value: The AI group demonstrated higher positive predictive values for both recalls (17.9% vs 14.9%) and biopsies (64.5% vs 59.2%) 4.
Workload Reduction: The AI system classified 59.4% of examinations in the AI group as normal, potentially reducing radiologists' reading time for normal cases by 43% 1.
The study noted an increase in detecting ductal carcinoma in situ (DCIS) cases with AI integration. While this could represent earlier detection, it also raises concerns about potential overdiagnosis and overtreatment, as not all DCIS cases progress to invasive cancer 1.
This large-scale, real-world study provides strong evidence for the potential of AI in improving breast cancer screening efficiency. Professor Alexander Katalinic from the University of Lübeck, a co-author of the study, emphasized that the AI approach improved detection rates without increasing harm to participants 4.
While the results are promising, experts stress the need for long-term follow-up to fully understand the clinical implications of integrating AI into mammography screening. Dr. Kristina Lång from Lund University highlighted the importance of ensuring that AI implementation detects clinically relevant cancers at an early stage, where early detection can meaningfully improve patient outcomes 4.
The study's findings have sparked discussions about how AI could be integrated into existing screening workflows. Stefan Bunk, co-founder of Vara, suggested that AI could potentially replace one of the initial readers in double-reading systems, which could address radiologist shortages and streamline the screening process 5.
As health systems worldwide grapple with radiologist shortages and increasing screening demands, this study provides compelling evidence for the potential of AI to enhance breast cancer detection while maintaining efficiency and accuracy in large-scale screening programs.
Reference
[1] AI boosts breast cancer detection rates while cutting radiologist workload
[2] Medical Xpress - Medical and Health News | AI improves mammography cancer detection rates in large cohort study
[3] AI boosts breast cancer detection in nationwide screening study in Germany
[4] More breast cancer cases found when AI used in screenings, study finds
[5] AI helps radiologists spot breast cancer in real-world tests