5 Sources
[1]
Wikipedia bans AI-generated articles
Wikipedia will no longer allow editors to write or rewrite articles using AI. The update, which was added to Wikipedia's guidelines late last week, cites the tendency for AI-written articles to violate "several of Wikipedia's core content policies" as the reason for the ban. The change applies to the English version of Wikipedia and will still allow editors to use AI in certain scenarios. That includes using large language models to "suggest basic copyedits" to their writing, but only if it "does not introduce content of its own." Editors can also use AI to translate articles from another language's Wikipedia into English. However, they still must follow the site's rules on LLM-assisted translations, which require editors to have enough knowledge of the original language to confirm the accuracy of the translation.
[2]
Wikipedia has banned AI-generated articles
Editors can no longer use large language models on English Wikipedia when writing or rewriting articles. The platform says it came to this decision because using AI to whip up copy "often violates several of Wikipedia's core content policies." There are a couple of minor exceptions. Editors can use large language models (LLMs) to refine their own writing, but only if the copy is checked for accuracy. The policy states that this is because LLMs "can go beyond what you ask of them and change the meaning of the text such that it is not supported by the sources cited." Editors can also use LLMs to assist with language translation. However, they must be fluent enough in both languages to catch errors. Once again, the information must be checked for inaccuracies. "My genuine hope is that this can spark a broader change. Empower communities on other platforms, and see this become a grassroots movement of users deciding whether AI should be welcome in their communities, and to what extent," said Wikipedia administrator Chaotic Enby. The administrator also called the policy a "pushback against the forceful push of AI by so many companies in these last few years." There is one thing worth noting: Wikipedia is not a monolith. Each Wikipedia site has its own independent rules and editing teams. Some may decide to embrace LLMs. Others may go even further. Spanish Wikipedia, for instance, has fully banned the use of LLMs. Also, identifying text written by LLMs is not an exact science, so Wikipedia's human moderators could miss some spots of slop every now and again. This is more likely on pages with less frequent moderation.
[3]
Wikipedia Bans AI-Generated Content
"In recent months, more and more administrative reports centered on LLM-related issues, and editors were being overwhelmed." After months of heated debate and previous attempts to restrict the use of large language models on Wikipedia, on March 20 volunteer editors accepted a new policy that prohibits using them to create articles for the online encyclopedia. "Text generated by large language models (LLMs) often violates several of Wikipedia's core content policies," Wikipedia's new policy states. "For this reason, the use of LLMs to generate or rewrite article content is prohibited, save for the exceptions given below." The new policy, which was accepted in an overwhelming 40 to 2 vote among editors, allows editors to use LLMs to suggest basic copyedits to their own writing, which can be incorporated into the article or rewritten after human review if the LLM doesn't generate entirely new content on its own. "Caution is required, because LLMs can go beyond what you ask of them and change the meaning of the text such that it is not supported by the sources cited," the policy states. "The use of LLMs to translate articles from another language's Wikipedia into the English Wikipedia must follow the guidance laid out at Wikipedia:LLM-assisted translation." I previously reported about editors using LLMs to translate Wikipedia articles and introducing errors to those articles in the process. Wikipedia editor, Ilyas Lebleu, who goes by Chaotic Enby on Wikipedia and who proposed the guideline said that it seemed unlikely the policy will last because previously the editor community has been divided on the issue. However, Lebleu said "The mood was shifting, with holdouts of cautious optimism turning to genuine worry." "A few months ago, a much more bare-bones guideline had passed, only banning the creation of brand new articles with LLMs," Lebleu told me in an email. "A follow-up proposal to reword it into something more substantial failed to pass, but was noted to have 'consensus for better guidelines along the lines of and/or in the spirit of this draft.' In recent months, more and more administrative reports centered on LLM-related issues, and editors were being overwhelmed." The policy was written with the help of WikiProject AI Cleanup, a group of Wikipedia editors dedicated to finding and removing AI-generated errors on the site. Editors have been dealing with an increasing number of AI-generated articles or edits lately, and have made some minor adjustments to its guidelines as a result, like streamlining the process for removing AI-generated articles. Editors' position, as well as the position of the Wikimedia Foundation, has been to not make blanket rules against AI because Wikipedia already uses some forms of automation, and because AI tools could assist editors in the future. The new policy doesn't ban the use of other automated tools that are already in use or future implementations, but it does show the Wikipedia community is less optimistic about the benefit of AI-generated content, and taking a stand against it. "In context, this has implications far beyond Wikipedia," Lebleu said. "The same flood of AI-generated content has been seen from social media to open-source projects, where agents submit pull requests much faster than human reviewers can keep up with. 
StackOverflow and the German Wikipedia paved the way in recent months with similar policies, and, as anxiety over the AI bubble grows, I foresee a domino effect, empowering communities on other platforms to decide whether AI should be welcome. On their own terms."
[4]
Wikipedia just banned AI-written articles
This cautious approach prioritizes human oversight while other Wikipedia language versions may establish their own separate AI content rules. After much debate, Wikipedia has now taken a stance on AI-generated content on its platform: "the use of LLMs to generate or rewrite article content is prohibited." So says the internal policy page, although the declaration does come with a few exceptions. Wikipedia editors are permitted to use AI services for basic editing of text they've written themselves. Any AI-altered text must be reviewed by humans, both to ensure that the AI model hasn't added its own material and that the core meaning of the text wasn't changed. Wikipedia is also allowing AI for translations. AI services may be used to produce an initial version of a translated Wikipedia article, but the translating editor must themselves be sufficiently proficient in both (original and translated) languages to be able to check that the translation is accurate and without errors. These new rules apply only to the English-language Wikipedia. Wikipedia editors for other languages may come up with their own rules and guidelines for using AI in their articles.
[5]
Wikipedia bans AI-generated article content after RfC
You can access the source documents here: Wikipedia LLM policy | Request for Comment discussion | LLM-assisted translation guideline

English Wikipedia has banned the use of large language models (LLMs) for generating or rewriting article content. The policy passed a Request for Comment (RfC) with 44 votes in favour and two opposed; it closed on 20 March 2026. Two narrow exceptions apply: editors can use AI to suggest basic copyedits to their own writing and to produce a first-pass translation.

Why it matters: Wikipedia is not only one of the most visited websites in the world, but also a primary source of training data for AI models. LLM-generated content on Wikipedia presents a compounding risk: inaccurate or hallucinated text enters the encyclopedia, gets scraped by AI companies, and re-enters future model training data. The RfC discussion flagged a specific enforcement concern: generating AI content takes seconds, but verifying and cleaning it up takes hours, placing a disproportionate burden on Wikipedia's volunteer editor community. A suspected AI agent named TomWikiAssist -- an autonomous agent that authored and edited multiple articles in early March 2026 -- illustrated this threat.

What the policy allows: Editors can run their text through an LLM for basic copyediting, but must verify the output and ensure it does not introduce its own content. The policy warns that LLMs can change the meaning of text beyond what the editor intended, in ways not supported by cited sources. For translation, LLM-assisted work must follow Wikipedia's separate LLM-assisted translation guideline.

Why did earlier attempts fail? Earlier attempts at a policy repeatedly failed. Wikipedia administrator Chaotic Enby, who authored the final proposal, noted that prior efforts collapsed not because editors disagreed on the need for a policy, but because individuals raised specific objections to wording, finding proposals either too vague or too prescriptive.

How will Wikipedia detect AI-generated content? This is where the policy encounters a core challenge. Wikipedia notes that AI detection tools are currently unreliable, and that some editors may naturally write in ways similar to LLM output. The policy specifies that stylistic or linguistic characteristics alone do not justify sanctions, and that moderators should also consider whether the text complies with content policies and the editor's recent editing history. The policy does not define a technical detection mechanism, so enforcement relies on human moderators. Pages with less active moderation communities may be more susceptible to AI-generated text going undetected.

Does this apply to all Wikipedia editions? The ban covers only the English Wikipedia. Each language edition operates independently. Spanish Wikipedia bans LLMs for creating new articles or expanding existing ones, but without the carve-outs for copyediting or translation assistance that the English edition now includes.

Background: Wikipedia has had repeated friction with AI. In June 2025, the Wikimedia Foundation paused an AI summary experiment after editor backlash over accuracy concerns. The Wikimedia Foundation's AI strategy, published in April 2025, positioned AI tools strictly as support for human editors, prioritising onboarding new editors, reducing moderator workload, and strengthening translation capabilities, while excluding AI-generated article content as a use case.
Wikipedia has taken a definitive stance against AI-generated content, prohibiting the use of large language models to write or rewrite articles. The policy, passed with overwhelming support from volunteer editors, allows limited exceptions for copyediting and translation but requires strict human oversight to prevent inaccuracies from infiltrating one of the internet's most trusted knowledge sources.
Wikipedia has officially banned the use of large language models (LLMs) to generate or rewrite articles, marking a decisive shift in how one of the world's most visited websites approaches AI-generated content [1]. The policy, which passed through a Request for Comment with 44 votes in favor and only two opposed, closed on March 20, 2026, and applies specifically to English Wikipedia [5]. The new guidelines cite the fact that AI-generated text "often violates several of Wikipedia's core content policies" as the primary justification for the ban [2].
The decision follows months of heated debate among the volunteer editor community, with administrators reporting an overwhelming surge in LLM-related issues. Wikipedia administrator Chaotic Enby, who authored the final proposal, noted that "in recent months, more and more administrative reports centered on LLM-related issues, and editors were being overwhelmed" [3]. The mood among editors shifted from cautious optimism to genuine worry as the flood of AI-generated material threatened to outpace human moderators' ability to maintain accuracy and quality standards.
While the policy establishes a firm prohibition, it does include narrow exceptions for copyediting and language translation. Editors can use LLMs to suggest basic copyedits to their own writing, but only if the tool "does not introduce content of its own" [1]. The guidelines emphasize caution, warning that "LLMs can go beyond what you ask of them and change the meaning of the text such that it is not supported by the sources cited" [2]. This requirement for human oversight ensures that editors maintain control over their content while potentially benefiting from AI assistance.

For translation work, editors can use LLMs to produce an initial version of a translated Wikipedia article, but they must be fluent enough in both the original and translated languages to verify accuracy and catch errors [4]. This approach prioritizes human judgment over automated translation, acknowledging that while AI tools can accelerate the translation process, they cannot replace the nuanced understanding required to ensure a faithful and accurate rendering of content across languages.

The policy faces significant practical challenges in enforcement, particularly around AI detection. Wikipedia acknowledges that current AI detection tools are unreliable and that some editors may naturally write in ways similar to LLM output [5]. The guidelines specify that stylistic or linguistic characteristics alone do not justify sanctions against editors. Instead, human moderators must consider whether the text complies with content policies and review the editor's recent editing history to make informed judgments.

This reliance on human moderators highlights a critical asymmetry: generating AI content takes seconds, but verifying and cleaning it up takes hours, placing a disproportionate burden on Wikipedia's volunteer editor community [5]. Pages with less active moderation may be more susceptible to hallucinated text going undetected, creating potential gaps in quality control across the platform.
The stakes extend far beyond Wikipedia's own content quality. As a primary source of training data for AI models, Wikipedia faces a compounding risk: inaccurate or hallucinated text enters the encyclopedia, gets scraped by AI companies, and re-enters future model training data [5]. This feedback loop threatens to degrade both Wikipedia's reliability and the quality of AI systems trained on its content.

Chaotic Enby framed the policy as part of a broader movement, stating: "The same flood of AI-generated content has been seen from social media to open-source projects, where agents submit pull requests much faster than human reviewers can keep up with" [3]. The administrator expressed hope that this decision would "empower communities on other platforms to decide whether AI should be welcome. On their own terms" [2].

The ban applies only to English Wikipedia, as each language edition operates independently with its own rules and editing teams [4]. Spanish Wikipedia has implemented an even stricter approach, fully banning the use of LLMs without the carve-outs for copyediting or translation assistance that the English edition now includes [2][5]. Other language editions may choose to embrace LLMs more openly or establish their own balanced approaches.

This decentralized governance structure reflects Wikipedia's fundamental nature as a collection of independent communities rather than a monolithic platform. The Wikimedia Foundation's AI strategy, published in April 2025, positioned AI tools strictly as support for human editors while excluding AI-generated article content as a primary use case [5]. The policy, developed with the help of WikiProject AI Cleanup, a group of editors dedicated to finding and removing AI-generated errors, represents the community translating that vision into enforceable guidelines [3].
Summarized by Navi