Wikipedia bans AI-generated content after editors vote 44-2 to prohibit LLM-written articles

Reviewed by Nidhi Govil


Wikipedia has taken a definitive stance against AI-generated content, prohibiting the use of large language models to write or rewrite articles. The policy, passed with overwhelming support from volunteer editors, allows limited exceptions for copyediting and translation but requires strict human oversight to prevent inaccuracies from infiltrating one of the internet's most trusted knowledge sources.

Wikipedia Implements Strict AI Ban After Overwhelming Editor Vote

Wikipedia has officially banned the use of large language models (LLMs) to generate or rewrite articles, marking a decisive shift in how one of the world's most visited websites approaches AI-generated content [1]. The policy, which passed through a Request for Comment with 44 votes in favor and only two opposed, closed on March 20, 2026, and applies specifically to English Wikipedia [5]. The new guidelines cite the fact that AI-generated text "often violates several of Wikipedia's core content policies" as the primary justification for the ban [2].

Source: 404 Media

The decision follows months of heated debate among the volunteer editor community, with administrators reporting an overwhelming surge in LLM-related issues. Wikipedia administrator Chaotic Enby, who authored the final proposal, noted that "in recent months, more and more administrative reports centered on LLM-related issues, and editors were being overwhelmed" [3]. The mood among editors shifted from cautious optimism to genuine worry as the flood of AI-generated material threatened to outpace human moderators' ability to maintain accuracy and quality standards.

Source: MediaNama

Limited Exceptions for Copyediting and Translation

While the policy establishes a firm prohibition, it includes narrow exceptions for copyediting and translation. Editors can use LLMs to suggest basic copyedits to their own writing, but only if the tool "does not introduce content of its own" [1]. The guidelines urge caution, warning that "LLMs can go beyond what you ask of them and change the meaning of the text such that it is not supported by the sources cited" [2]. This requirement for human oversight ensures that editors retain control over their content while still benefiting from AI assistance.

For translation work, editors can use LLMs to produce an initial version of a translated Wikipedia article, but they must be fluent enough in both the source and target languages to verify accuracy and catch errors [4]. This approach prioritizes human judgment over automated translation, acknowledging that while AI tools can accelerate the process, they cannot replace the nuanced understanding required for a faithful and accurate rendering of content across languages.

Enforcement Challenges and Detection Limitations

The policy faces significant practical challenges in enforcement, particularly around AI detection. Wikipedia acknowledges that current AI detection tools are unreliable and that some editors may naturally write in ways similar to LLM output [5]. The guidelines specify that stylistic or linguistic characteristics alone do not justify sanctions against editors. Instead, human moderators must consider whether the text complies with content policies and review the editor's recent editing history to make informed judgments.

This reliance on human moderators highlights a critical asymmetry: generating AI content takes seconds, but verifying and cleaning it up takes hours, placing a disproportionate burden on Wikipedia's volunteer editor community [5]. Pages with less active moderation may be more susceptible to hallucinated text going undetected, creating potential gaps in quality control across the platform.

Broader Implications for Training Data and Platform Policies

The stakes extend far beyond Wikipedia's own content quality. As a primary source of training data for AI models, Wikipedia faces a compounding risk: inaccurate or hallucinated text enters the encyclopedia, gets scraped by AI companies, and re-enters future model training data [5]. This feedback loop threatens to degrade both Wikipedia's reliability and the quality of AI systems trained on its content.

Chaotic Enby framed the policy as part of a broader movement, stating: "The same flood of AI-generated content has been seen from social media to open-source projects, where agents submit pull requests much faster than human reviewers can keep up with" [3]. The administrator expressed hope that the decision would "empower communities on other platforms to decide whether AI should be welcome. On their own terms" [2].

Independent Language Editions Chart Different Courses

The ban applies only to English Wikipedia, as each language edition operates independently with its own rules and editing teams [4]. Spanish Wikipedia has implemented an even stricter approach, fully banning the use of LLMs without the carve-outs for copyediting or translation assistance that the English edition now includes [2][5]. Other language editions may choose to embrace LLMs more openly or establish their own balanced approaches.

This decentralized governance structure reflects Wikipedia's fundamental nature as a collection of independent communities rather than a monolithic platform. The Wikimedia Foundation's AI strategy, published in April 2025, positioned AI tools strictly as support for human editors and excluded AI-generated article content as a primary use case [5]. The new policy, developed by WikiProject AI Cleanup, a group of editors dedicated to finding and removing AI-generated errors, represents the community translating that vision into enforceable guidelines [3].

Source: The Verge
