AI-Generated Letters Flood Scientific Journals as 'Prolific Debutante' Authors Exploit Publishing System

Reviewed by Nidhi Govil



A comprehensive investigation reveals that AI chatbots are being used to mass-produce letters to scientific journal editors, with some authors publishing hundreds of letters annually across diverse topics they likely lack expertise in, threatening the integrity of scientific discourse.


The Discovery: A Suspicious Letter Sparks Investigation

The investigation began with an unusual letter to The New England Journal of Medicine that immediately raised red flags for Dr. Carlos Chaccour, a physician-scientist at the University of Navarra. Just 48 hours after publishing a malaria control study, Chaccour received a well-written letter raising "robust objections" to his work [1][2]. However, the letter cited Chaccour's own previous research while misrepresenting its findings, a telltale sign of AI fabrication.

The letter claimed that a "seminal paper" from 2017 showed mosquitoes become resistant to ivermectin, referencing work that Chaccour himself had co-authored. The problem: his paper said no such thing. Similarly, the letter cited an economic model by Chaccour's team, again mischaracterizing its conclusions [2].

Unprecedented Scale of AI-Generated Letters

Chaccour's suspicions led to a comprehensive analysis of over 730,000 letters published across scientific journals since 2005. The findings, detailed in a preprint posted on Research Square, reveal a dramatic surge in suspicious letter-writing activity beginning in 2023, coinciding with the widespread availability of ChatGPT [1].

The research identified nearly 8,000 "prolific debutante" authors who suddenly emerged as top letter writers after 2022. Despite representing only 3% of all active authors from 2023 to 2025, this group contributed 22% of all letters published: nearly 23,000 submissions appearing across 1,930 journals, including 175 in The Lancet and 122 in The New England Journal of Medicine [1].

Extreme Productivity Patterns Expose AI Use

The most striking evidence comes from authors displaying impossible productivity levels. One physician from Qatar, who published no letters in 2024, suddenly produced more than 80 letters in 2025 spanning 58 separate medical topics, an unlikely breadth of expertise for any single scholar [1]. Another author from a Southeast Asian country published 234 letters in 2024 and 243 by October 2025, after publishing none in 2023 [2].

The researchers identified 128 authors who had never published a single letter before suddenly producing at least 10 letters in their debut year. As many as 3,000 authors who had never published letters before 2023 managed to publish at least three [2].
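The screening rule described above, flagging authors whose very first letters appeared in a debut year and who immediately published in volume, can be sketched as a simple filter over a table of letter records. This is a hypothetical illustration of the criterion as reported, not the study's actual code; the data layout and function name are assumptions.

```python
from collections import defaultdict

def find_prolific_debutantes(letters, debut_year=2023, min_letters=10):
    """Flag 'prolific debutante' authors: those whose first-ever letter
    appeared in `debut_year` or later, and who published at least
    `min_letters` letters in that first year.

    `letters` is a list of (author, year) pairs, one per published letter.
    Hypothetical reconstruction of the screening rule described above.
    """
    first_year = {}          # author -> year of first letter
    debut_count = defaultdict(int)  # author -> letters in their first year

    for author, year in letters:
        first_year[author] = min(first_year.get(author, year), year)

    for author, year in letters:
        if year == first_year[author]:
            debut_count[author] += 1

    return {
        author
        for author, fy in first_year.items()
        if fy >= debut_year and debut_count[author] >= min_letters
    }
```

On a toy dataset, an author with ten letters in 2024 and no prior history is flagged, while a long-established letter writer with the same 2024 output is not, since their debut predates the cutoff.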

AI Detection Confirms Suspicions

To validate their findings, Chaccour's team ran 81 letters from the Qatari physician through the Pangram AI text detector, which returned an average score of 80 out of 100, where higher scores indicate a greater likelihood of AI authorship. In contrast, 74 letters from a randomly chosen prolific writer from the late 2000s, before ChatGPT existed, scored zero across all submissions [1].

Why Letters Appeal to AI Exploiters

Letters to the editor present an attractive target for AI exploitation for several reasons. They are typically short, can appear timely and relevant without requiring substantial original research or data, and most journals don't subject them to peer review [1]. Crucially, these letters are indexed in the same databases as full research articles, making them valuable for padding academic CVs.

Dr. Eric Rubin, editor-in-chief of The New England Journal of Medicine, explains the incentive: "For doing a very small amount of work, someone can get an article in The New England Journal of Medicine on their C.V. The incentive to cheat is high" [2].

Impact on Scientific Publishing

This phenomenon represents what Chaccour calls "a zero-sum ecosystem of editorial attention," where AI-generated letters crowd out legitimate scientific discourse [1]. Journal editors report that many AI-written letters lack competent writing and substance, failing to pose useful questions about the research they purport to address.

Dr. Seth Leopold, editor-in-chief of Clinical Orthopaedics and Related Research, notes that his journal received 43 suspicious letters through June 2025, more than in any previous full year, with AI-text detectors flagging 21 of them [1]. Dr. Amy Gelfand of the journal Headache has observed that suspicious letters often arrive within days of a paper's publication, whereas human authors typically take weeks to craft responses [2].


TheOutpost.ai


© 2025 Triveous Technologies Private Limited