ArXiv cracks down on AI slop with one-year bans for researchers submitting unvetted papers


ArXiv, the open-access preprint repository, will ban authors for one year if they submit papers containing obvious AI-generated content that wasn't checked by humans. The penalty applies to submissions with hallucinated references or LLM meta-comments, as the platform struggles with a flood of AI slop masquerading as rigorous science.

ArXiv Implements Strict One-Year Ban for AI Slop Submissions

ArXiv, the widely used preprint academic research repository, has introduced stringent penalties for researchers who submit AI slop without proper human oversight. Thomas Dietterich, chair of the computer science section of ArXiv, announced late Thursday that authors caught submitting papers with incontrovertible evidence of not being reviewed by humans will face a one-year ban from the platform [2]. Following the ban, researchers must have their subsequent submissions accepted at a reputable peer-reviewed venue before returning to ArXiv.

The policy operates as a one-strike rule, though decisions remain open to appeal. Dietterich emphasized that the penalty applies only to clear-cut cases where AI-generated materials contain obvious errors that authors failed to catch. "If generative AI tools generate inappropriate language, plagiarized content, biased content, errors, mistakes, incorrect references, or misleading content, and that output is included in scientific works, it is the responsibility of the author(s)," he wrote on X [2].

Source: The Verge

What Constitutes Incontrovertible Evidence

The new enforcement targets specific markers of unreviewed AI-generated content. Hallucinated references—citations to papers that don't exist—serve as primary red flags. Equally damning are meta-comments from large language models that authors failed to remove, such as "here is a 200 word summary; would you like me to make any changes?" or "the data in this table is illustrative, fill it in with the real numbers from your experiments" [2]. These telltale signs reveal that authors simply copied LLM output without verification, undermining the integrity of rigorous science.

Before imposing penalties, ArXiv's internal process requires a moderator to document the problem, followed by confirmation from the Section Chair. This two-step verification aims to ensure fairness while maintaining strict standards against unoriginal content.

Rising Tide of AI-Generated Materials Strains Academic Publishing

The crackdown reflects ArXiv's mounting struggle with AI-generated materials flooding its platform. In November 2025, the repository stopped accepting peer-reviewed computer science review articles and position papers entirely after being overwhelmed by AI slop [2]. "The advent of large language models have made this type of content relatively easy to churn out on demand, and the majority of the review articles we receive are little more than annotated bibliographies, with no substantial discussion of open research issues," ArXiv stated [1].

By January, first-time submitters needed endorsements from established authors due to rising fraudulent submissions [2]. The problem extends beyond ArXiv. A Columbia University study examining 2.5 million biomedical papers found that one in 277 papers published in early 2026 contained fabricated references—a sharp increase from one in 2,828 in 2023 and one in 458 in 2025 [2]. AI-generated citations increasingly strain the peer-review process, with more papers containing meta-comments and hallucinated data making it through quality checks.

Long-Term Implications for Academic Research

ArXiv's transition to an independent nonprofit corporation this July may prove crucial for addressing these challenges. Greg Morrisett, dean and vice provost of Cornell Tech, told Science.org that independence will help ArXiv raise funds from diverse donors—resources needed specifically to combat AI slop [2]. The financial demands of moderating submissions and implementing detection systems require substantial investment as LLMs become more sophisticated.

Researchers should watch how other academic platforms respond to ArXiv's strict stance. If the one-year ban proves effective at deterring low-quality submissions, similar policies may spread across scholarly publishing. The short-term impact focuses on cleaning up existing submission pipelines, but longer-term questions remain about how academic institutions will adapt verification processes as AI tools become standard research aids. The challenge lies in distinguishing legitimate AI assistance from lazy copy-pasting that threatens scientific integrity.

© 2026 TheOutpost.AI All rights reserved