2 Sources
[1]
Letters to scientific journals surge as 'prolific debutante' authors likely use AI
Just 2 days after Carlos Chaccour and Matthew Rudd published a paper in The New England Journal of Medicine about controlling malaria, an editor of the journal shared with them a letter he had received raising "robust objections." The letter was well written, thought Chaccour, a physician-scientist at the University of Navarra, and Rudd, a statistician at the University of the South. But something unsettled them: The letter cited some of their previous work that did not support its claims. Because the researchers knew artificial intelligence (AI) can fabricate references, they suspected the letter was written by a machine.

That was the start of Chaccour and Rudd's investigation into more than 730,000 letters published over the past 20 years. They found that from 2023-25, a small group of "prolific debutante" authors suddenly appeared in the top 5% of letter writers. They suspect much of the rise was driven by programs such as ChatGPT, the generative AI chatbot that debuted in 2022, Chaccour, Rudd, and co-authors write in a preprint posted today on the Research Square server. "I was not surprised that it's happening but was surprised by the magnitude and how blatant it is," Chaccour says.

Other studies have documented a rise in the share of research articles that bear signs of AI-written text. But this study appears to be the first to examine the phenomenon among letters to the editor -- a key venue for postpublication review, but also a potential avenue for exploitation by unscrupulous authors aiming to pad their CVs. The nearly 8,000 letter writers who moved from the bottom to the top in productivity had an outsize influence on all letters published: They made up only 3% of all active authors from 2023-25 but contributed 22% of the letters published, nearly 23,000 in all. Their letters appeared in 1,930 journals, including 175 in The Lancet and 122 in The New England Journal of Medicine (NEJM).
That growth has come at the expense of other authors, whose share of published letters has declined, creating what Chaccour calls "a zero-sum ecosystem of editorial attention." Newcomers -- authors who published 10 or more letters in the first year they published any letter -- have been the fastest growing group. The last author of the letter responding to Chaccour and Rudd's NEJM study -- which was not ultimately published -- fell into that category: After publishing no letters in 2024, the author, a physician in Qatar, published more than 80 this year. They appeared in journals covering 58 separate topics, an unlikely breadth of expertise for a single scholar.

That extreme productivity leap can't be explained by broader trends in letter writing. The average annual number of letters written by all authors studied crept up only slightly during the past 20 years, from 1.16 to 1.34 per author, and the number of journals in which the letters appeared has been flat since 2022. Chaccour and Rudd's team concluded it would have been prohibitively difficult to subject the mass of letters they studied to an AI text detector. But as a case study, the team ran 81 letters published by the Qatari author through the Pangram AI text detector. The average score was 80 on a 100-point scale indicating the likelihood of AI use. Chaccour's team also ran the same test on 74 letters published in the late 2000s, before ChatGPT, by another, randomly chosen prolific letter writer. The average score was zero -- no hits on any letter.

Letters may be an especially appealing venue for AI use because they are typically short and can appear timely and relevant without requiring substantial original input or data, Chaccour says. Journals typically do not subject letters to peer review. And as an investigation by Retraction Watch and Science reported last year, letter writers can use them as an easy way to inflate their publication counts, which may impress colleagues who don't take a closer look.
The findings come as journals continue to grapple with how to police AI-written submissions. Many require disclosure of AI use, but authors underreport. (Chaccour's study did not analyze the extent to which the letter writers disclosed AI use, though his case study found that the Qatari author declared AI use in 13 of his 81 letters.) Previous research has also indicated an increase in the number of prolific authors of research articles, possibly because of AI.

Many AI-written letters lack competent writing and substance, says Seth Leopold, a surgeon and researcher at the University of Washington who is editor-in-chief of Clinical Orthopaedics and Related Research. In a recent editorial, Leopold and a colleague lamented a rise in submissions of such letters. Through June of this year, the journal received 43, more than in any previous full year, and an AI-text detector flagged 21 of them. Besides featuring awkward syntax, these letters tended not to pose useful questions about the papers on which they commented; often, the limitations they identified in a study had already been pointed out by the paper itself, he says. "They all have the same paragraph structure, like a middle school essay."

Some of the recently prolific letter writers identified in Chaccour and Rudd's study are from developing countries where English is not the primary language, and Leopold acknowledges that AI programs can help nonnative English speakers with writing. But he also worries about "the mass production of junk." As a result, when potentially AI-written text is flagged in a submission, he and his colleagues have begun asking the author to provide a verifiable quote from each cited source that substantiates the claim being made. It's more work for the small staff of his journal, which is published by a scientific society. But "how can we not if we care about the quality of what we're putting out there?" he says.
"If we lose the confidence of the people who are reading these journals, you've really lost everything, and you aren't going to get it back easily. And so this uncritical acceptance of these [AI] tools, to me, is a problem." AI-generated incorrect references and other errors can mislead readers and damage the reputation of authors who may have dedicated years to their research, Chaccour and his team conclude. "It took me 6 years and $25 million [in grant funding] to put out that [NEJM] paper," Chaccour says. But it may have taken the Qatari author only minutes to draft the letter about it. "I can't compete with that," he adds. "The legitimate discussion risks being drowned by the synthetic noise."
[2]
Chatbot Correspondence Invades the Letters to the Editor Page
The rise of artificial intelligence has produced serial writers to science and medical journals, most likely seeking to boost their publication counts. Letters to the editor from writers using chatbots are flooding the world's scientific journals, according to new research and journal editors. The practice is putting at risk a part of scientific publishing that editors say is needed to sharpen research findings and create new directions for inquiry.

A new study on the problem started with a tropical disease specialist who had a weird experience with a chatbot-written letter. He decided to figure out just what was going on and who was submitting all those letters. The scientist, Dr. Carlos Chaccour, at the Institute for Culture and Society at the University of Navarra in Spain, said his probing began just after he had released a paper in The New England Journal of Medicine, one of the world's most prestigious journals. The paper, published in July, was on controlling malaria infections with ivermectin, and it appeared with a laudatory editorial. Then, 48 hours later, the journal received a strongly worded letter. The editors considered publishing it and, as is customary, sent it to Dr. Chaccour for his reply.

"We want to raise robust objections," the letter began, going on to say that Dr. Chaccour and his colleagues had not referred to a seminal paper published in 2017 showing that mosquitoes become resistant to ivermectin. Dr. Chaccour was in fact well aware of the "seminal paper." He and a colleague had written it, and it did not say that mosquitoes become resistant. The letter then went on to say that an economic model showed the malaria control method would not work. Once again, the reference was to a paper by Dr. Chaccour and colleagues. "Me again? Really?" Dr. Chaccour thought. That paper did not say the method would not work.

"This has to be A.I.," Dr. Chaccour decided. A large language model must have been used to compose the letter, Dr. Chaccour reasoned. He thinks that, searching for references in a niche field where there aren't many, it popped in two of Dr. Chaccour's own papers. He told the journal what he had found. It did not publish the letter.

Dr. Eric Rubin, the journal's editor in chief, said he hadn't thought about the possibility that chatbots might be writing letters until he heard about Dr. Chaccour's experience. There's a reason authors might turn to A.I., Dr. Rubin noted in an interview. Letters to the editor published in scientific journals are listed in databases that also list journal articles, and Dr. Rubin said that "they count as much as an article." "For doing a very small amount of work, someone can get an article in The New England Journal of Medicine on their C.V.," he said. "The incentive to cheat is high," he added.

Dr. Chaccour had to wonder: Who was this person who sent the letter? He discovered that the author was a doctor from a Middle Eastern country who had published no letters to editors of scientific journals until 2024. Suddenly, in 2025, he published 84 letters, on 58 topics. "He's a Leonardo," Dr. Chaccour said. Dr. Chaccour thought he would write about his experience. He wanted to use the doctor's initials to identify him but realized that wouldn't work: His initials are B.S. He didn't think he could write about a Dr. B.S., so he called him Author No. 1 in a report that examined the proliferation of letters to the editor after the end of 2022, when A.I. became widely available.

The study involved an analysis of more than 730,000 letters to journal editors published since 2005. It has been posted online ahead of submission to a peer-reviewed journal. "Something happened in 2023," Dr. Chaccour said. There was a sudden emergence of authors who had published few or no letters before then but who suddenly had letters appearing on a regular basis -- going, Dr. Chaccour said, "from zero to hero."
One author, from a Southeast Asian country, published 234 letters in 2024 and 243 as of Oct. 31 of this year, after having published none in 2023. Dr. Chaccour also identified 128 authors who had never published a single letter; then, in their first year as letter writers, they had at least 10 published letters. As many as 3,000 authors who had never had a published letter before 2023 published at least three, he said. "If someone comes out of the blue and writes three, five or 10 letters in a single year, that raises eyebrows," Dr. Chaccour said. "To write a letter, you need expertise -- you need to be really up-to-date with the literature."

Dr. Amy Gelfand, editor in chief of the journal Headache, has started receiving suspicious letters. One clue, she said, is letters that arrive a couple of days after a paper has been published. Human authors, she said, usually take a few weeks. She has begun looking up the authors of questionable letters in PubMed, a database of scientific publications. The author of one recent letter had published six letters to the editor this month, in six journals, on six topics.

Keith Humphreys, deputy editor in chief of the journal Addiction, received what looked like a reasonable letter to the editor. He sent it to the paper's authors for comment. It turned out that the letter's authors, based in China, were highly productive. Within six months, they had published letters to the editors of journals in cardiology, emergency medicine, endocrinology, gastroenterology, hepatology, immunology and intensive care medicine. "They had mastered every single field," Dr. Humphreys said.

The number of suspicious letters keeps growing, Dr. Chaccour and his colleagues found. In 2023, the share of letters written by prolific authors -- those who had three or more published in a year -- was 6 percent. In 2024, it was 12 percent. This year, the investigators report, it is approaching 22 percent. They're invading journals "like Omicron," Dr. Chaccour said, referring to the Covid variant that quickly became dominant.

The situation "is not good," Dr. Rubin said. But the answer is not to stop publishing letters. "Sometimes letters have critical information," he said. "Good letters ask good questions or raise points the authors don't raise." Without letters, Dr. Gelfand said, "you miss all the value, the new insights and important critiques and discussions of what they mean for science." Another idea is to stop indexing letters, so they do not appear in PubMed. That, too, is not a good solution, Dr. Gelfand said: Links in PubMed from letters are helpful for people doing research. For now, there is no agreement on what to do. Dr. Chaccour said that while his experience with Dr. B.S.'s letter was funny, the bigger picture is not. "It is terrifying," he said.
A comprehensive investigation reveals that AI chatbots are being used to mass-produce letters to scientific journal editors, with some authors publishing hundreds of letters annually across diverse topics they likely lack expertise in, threatening the integrity of scientific discourse.

The investigation began with an unusual letter to The New England Journal of Medicine that immediately raised red flags for Dr. Carlos Chaccour, a physician-scientist at the University of Navarra. Just 48 hours after publishing a malaria control study, Chaccour received a well-written letter raising "robust objections" to his work [1][2]. However, the letter cited Chaccour's own previous research while misrepresenting its findings, a telltale sign of AI fabrication. The letter claimed that a "seminal paper" from 2017 showed mosquitoes become resistant to ivermectin, referencing work that Chaccour himself had co-authored. The problem: his paper said no such thing. Similarly, the letter cited another economic model by Chaccour's team, again mischaracterizing its conclusions [2].

Chaccour's suspicions led to a comprehensive analysis of more than 730,000 letters published across scientific journals since 2005. The findings, detailed in a preprint posted on Research Square, reveal a dramatic surge in suspicious letter-writing activity beginning in 2023, coinciding with the widespread availability of ChatGPT [1].

The research identified nearly 8,000 "prolific debutante" authors who suddenly emerged as top letter writers after 2022. Despite representing only 3% of all active authors from 2023-25, this group contributed 22% of all letters published: nearly 23,000 submissions appearing across 1,930 journals, including 175 in The Lancet and 122 in The New England Journal of Medicine [1].

The most striking evidence comes from authors displaying impossible productivity levels. One physician from Qatar, who published no letters in 2024, suddenly produced more than 80 letters in 2025 spanning 58 separate medical topics, an unlikely breadth of expertise for any single scholar [1]. Another author from a Southeast Asian country published 234 letters in 2024 and 243 by October 2025, after publishing none in 2023 [2]. The researchers also identified 128 authors who had never published a single letter before suddenly producing at least 10 letters in their debut year, and as many as 3,000 authors who had never published letters before 2023 managed to publish at least three [2].

To validate their findings, Chaccour's team subjected 81 letters from the Qatari physician to the Pangram AI text detector, which returned an average score of 80 on a 100-point scale indicating likelihood of AI use. In contrast, 74 letters from a randomly chosen prolific writer from the late 2000s, before ChatGPT existed, scored zero across all submissions [1].

Letters to the editor present an attractive target for AI exploitation for several reasons. They are typically short, can appear timely and relevant without requiring substantial original research or data, and most journals don't subject them to peer review [1]. Crucially, these letters are indexed in the same databases as full research articles, making them valuable for padding academic CVs. Dr. Eric Rubin, editor-in-chief of The New England Journal of Medicine, explains the incentive: "For doing a very small amount of work, someone can get an article in The New England Journal of Medicine on their C.V. The incentive to cheat is high" [2].

This phenomenon represents what Chaccour calls "a zero-sum ecosystem of editorial attention," in which AI-generated letters crowd out legitimate scientific discourse [1]. Journal editors report that many AI-written letters lack competent writing and substance, failing to pose useful questions about the research they purport to address. Dr. Seth Leopold, editor-in-chief of Clinical Orthopaedics and Related Research, notes that his journal received 43 suspicious letters through June 2025, more than in any previous full year, with AI-text detectors flagging 21 of them [1]. Dr. Amy Gelfand of the journal Headache has observed that suspicious letters often arrive within days of a paper's publication, whereas human authors typically take weeks to craft responses [2].
Summarized by Navi