Sources
[1]
ArXiv will ban researchers who upload papers full of AI slop
Last year, ArXiv also updated its policies to reduce AI slop by only allowing computer science review articles and position papers to be published if they have been peer reviewed and have been accepted at a conference or a journal. "The advent of large language models have made this type of content relatively easy to churn out on demand, and the majority of the review articles we receive are little more than annotated bibliographies, with no substantial discussion of open research issues," ArXiv said at the time.
[2]
ArXiv to Ban Researchers for a Year if They Submit AI Slop
The change comes as arXiv and others struggle to manage an influx of AI-generated materials masquerading as rigorous science. ArXiv, the open-access repository of preprint academic research, will ban authors of papers for a year if they submit obviously AI-generated work.

Late Thursday evening, Thomas Dietterich, chair of the computer science section of ArXiv, wrote on X: "If generative AI tools generate inappropriate language, plagiarized content, biased content, errors, mistakes, incorrect references, or misleading content, and that output is included in scientific works, it is the responsibility of the author(s). We have recently clarified our penalties for this. If a submission contains incontrovertible evidence that the authors did not check the results of LLM generation, this means we can't trust anything in the paper."

Examples of incontrovertible evidence, he wrote, include "hallucinated references, meta-comments from the LLM ('here is a 200 word summary; would you like me to make any changes?'; 'the data in this table is illustrative, fill it in with the real numbers from your experiments')."

"The penalty is a 1-year ban from arXiv followed by the requirement that subsequent arXiv submissions must first be accepted at a reputable peer-reviewed venue," Dietterich wrote. Dietterich told me in an email on Friday morning that this is a one-strike rule -- meaning authors caught just once including AI slop in submissions will be banned -- but that decisions will be open to appeal. "I want to emphasize that we only apply this to cases of incontrovertible evidence," he said. "I should also add that our internal process requires first a moderator to document the problem and then for the Section Chair to confirm before imposing the penalty."

In November 2025, arXiv announced it would no longer accept computer science review articles and position papers because it was being "flooded" with AI slop. "Generative AI/large language models have added to this flood by making papers -- especially papers not introducing new research results -- fast and easy to write. While categories across arXiv have all seen a major increase in submissions, it's particularly pronounced in arXiv's CS category," arXiv wrote in a press release about the change at the time. And in January, it announced first-time submitters would need an endorsement from an established author due to a rise in fraudulent submissions.

AI-generated, fabricated citations are a huge problem in research. A recent study by Columbia University researchers examined 2.5 million biomedical papers across three years, and found that one in 277 papers published in the first seven weeks of 2026 contained fabricated references; in 2023, it was one in 2,828, and in 2025, one in 458. AI-generated citations and papers are already straining the peer-review process, and more and more papers are making it through the pipeline with those meta-comments and hallucinated data intact.

ArXiv is managed by Cornell Tech, but this July, it will become an independent nonprofit corporation. Greg Morrisett, dean and vice provost of Cornell Tech, told Science.org that this change will help arXiv raise more money from a wider range of donors, which Morrisett said is needed to deal with the emergence of "AI slop."
ArXiv, the open-access preprint repository, will ban authors for one year if they submit papers containing obvious AI-generated content that wasn't checked by humans. The penalty applies to submissions with hallucinated references or LLM meta-comments, as the platform struggles with a flood of AI slop masquerading as rigorous science.
ArXiv, the widely used preprint academic research repository, has introduced stringent penalties for researchers who submit AI slop without proper human oversight. Thomas Dietterich, chair of the computer science section of ArXiv, announced late Thursday that authors caught submitting papers with incontrovertible evidence that they did not check LLM-generated output will face a one-year ban from the platform [2]. Following the ban, researchers must have their subsequent submissions accepted at a reputable peer-reviewed venue before returning to ArXiv.

The policy operates as a one-strike rule, though decisions remain open to appeal. Dietterich emphasized that the penalty applies only to clear-cut cases where AI-generated material contains obvious errors that authors failed to catch. "If generative AI tools generate inappropriate language, plagiarized content, biased content, errors, mistakes, incorrect references, or misleading content, and that output is included in scientific works, it is the responsibility of the author(s)," he wrote on X [2].
The new enforcement targets specific markers of unreviewed AI-generated content. Hallucinated references, citations to papers that don't exist, serve as primary red flags. Equally damning are meta-comments from large language models that authors failed to remove, such as "here is a 200 word summary; would you like me to make any changes?" or "the data in this table is illustrative, fill it in with the real numbers from your experiments" [2]. These telltale signs reveal that authors simply copied LLM output without verification, undermining the integrity of rigorous science.

Before imposing penalties, ArXiv's internal process requires a moderator to document the problem, followed by confirmation from the Section Chair. This two-step verification aims to ensure fairness while maintaining strict standards against unoriginal content.
The crackdown reflects ArXiv's mounting struggle with AI-generated materials flooding its platform. In November 2025, the repository stopped accepting computer science review articles and position papers unless they had already been peer reviewed and accepted at a conference or journal, after being overwhelmed by AI slop [2]. "The advent of large language models have made this type of content relatively easy to churn out on demand, and the majority of the review articles we receive are little more than annotated bibliographies, with no substantial discussion of open research issues," ArXiv stated [1].

By January, first-time submitters needed endorsements from established authors due to rising fraudulent submissions [2]. The problem extends beyond ArXiv. A Columbia University study examining 2.5 million biomedical papers found that one in 277 papers published in early 2026 contained fabricated references, a sharp increase from one in 2,828 in 2023 and one in 458 in 2025 [2]. AI-generated citations increasingly strain the peer-review process, with more papers containing meta-comments and hallucinated data making it through quality checks.
ArXiv's transition to an independent nonprofit corporation this July may prove crucial for addressing these challenges. Greg Morrisett, dean and vice provost of Cornell Tech, told Science.org that independence will help ArXiv raise funds from diverse donors, resources needed specifically to combat AI slop [2]. The financial demands of moderating submissions and implementing detection systems require substantial investment as LLMs become more sophisticated.

Researchers should watch how other academic platforms respond to ArXiv's strict stance. If the one-year ban proves effective at deterring low-quality submissions, similar policies may spread across scholarly publishing. The short-term impact focuses on cleaning up existing submission pipelines, but longer-term questions remain about how academic institutions will adapt verification processes as AI tools become standard research aids. The challenge lies in distinguishing legitimate AI assistance from lazy copy-pasting that threatens scientific integrity.
Summarized by Navi