2 Sources
[1]
How Wikipedia is fighting AI slop content
With the rise of AI writing tools, Wikipedia editors have had to deal with an onslaught of AI-generated content filled with false information and phony citations. Already, the community of Wikipedia volunteers has mobilized to fight back against AI slop, something Wikimedia Foundation product director Marshall Miller likens to a sort of "immune system" response. "They are vigilant to make sure that the content stays neutral and reliable," Miller says. "As the internet changes, as things like AI appear, that's the immune system adapting to some kind of new challenge and figuring out how to process it."

One way Wikipedians are sloshing through the muck is with the "speedy deletion" of poorly written articles, as reported earlier by 404 Media. A Wikipedia reviewer who expressed support for the rule says they are "flooded non-stop with horrendous drafts," adding that speedy removal "would greatly help efforts to combat it and save countless hours picking up the junk AI leaves behind." Another says the "lies and fake references" inside AI outputs take "an incredible amount of experienced editor time to clean up."

Typically, articles flagged for removal on Wikipedia enter a seven-day discussion period during which community members determine whether the site should delete the article. The newly adopted rule will allow Wikipedia administrators to circumvent these discussions if an article is clearly AI-generated and wasn't reviewed by the person submitting it. That means looking for three main signs: leftover chatbot prompts or replies addressed to the user, made-up citations, and references to sources that don't exist.

These aren't the only signs of AI writing Wikipedians are looking out for, though. As part of the WikiProject AI Cleanup, which aims to tackle an "increasing problem of unsourced, poorly written AI-generated content," editors put together a list of phrases and formatting characteristics that chatbot-written articles typically exhibit. The list goes beyond calling out the excessive use of em dashes ("—") that have become associated with AI chatbots, and even includes an overuse of certain conjunctions, like "moreover," as well as promotional language, such as describing something as "breathtaking." There are other formatting issues the page advises Wikipedians to look out for, too, including curly quotation marks and apostrophes instead of straight ones. However, Wikipedia's speedy removal page notes that these characteristics "should not, on their own, serve as the sole basis" for determining that something has been written by AI, making it subject to removal.

The speedy deletion policy isn't just for AI-generated slop content, either. The online encyclopedia also allows for the quick removal of pages that harass their subject, contain hoaxes or vandalism, or espouse "incoherent text or gibberish," among other things.

The Wikimedia Foundation, which hosts the encyclopedia but doesn't have a hand in creating policies for the website, hasn't always seen eye to eye with its community of volunteers about AI. In June, the Wikimedia Foundation paused an experiment that put AI-generated summaries at the top of articles after facing backlash from the community. Despite varying viewpoints about AI across the Wikipedia community, the Wikimedia Foundation isn't against using it as long as it results in accurate, high-quality writing. "It's a double-edged sword," Miller says. "It's causing people to be able to generate lower quality content at higher volumes, but AI can also potentially be a tool to help volunteers do their work, if we do it right and work with them to figure out the right ways to apply it."
For example, the Wikimedia Foundation already uses AI to help identify article revisions containing vandalism, and its recently published AI strategy includes supporting editors with AI tools that will help them automate "repetitive tasks" and translation.

The Wikimedia Foundation is also actively developing a non-AI-powered tool called Edit Check that's geared toward helping new contributors fall in line with its policies and writing guidelines. Eventually, it might help ease the burden of unreviewed AI-generated submissions, too. Right now, Edit Check can remind writers to add citations if they've written a large amount of text without one, as well as check their tone to ensure that writers stay neutral. The Wikimedia Foundation is also working on adding a "Paste Check" to the tool, which will ask users who've pasted a large chunk of text into an article whether they've actually written it. Contributors have submitted several ideas to help the Wikimedia Foundation build upon the tool as well, with one user suggesting asking suspected AI authors to specify how much was generated by a chatbot.

"We're following along with our communities on what they do and what they find productive," Miller says. "For now, our focus with using machine learning in the editing context is more on helping people make constructive edits, and also on helping people who are patrolling edits pay attention to the right ones."
[2]
Volunteers fight to keep 'AI slop' off Wikipedia
Hundreds of Wikipedia articles may contain AI-generated errors. Editors are working around the clock to stamp them out.

Check the top of a Wikipedia page before you read, and you might see a new warning: "This article may incorporate text from a large language model." That label has been affixed to hundreds of Wikipedia articles, from "Danish nationalism" to "Natalie Portman," over the past year as the platform's volunteer editors grapple with an internet awash in writing generated by artificial intelligence.

Suspicious edits, and even entirely new articles, with errors, made-up citations and other hallmarks of AI-generated writing keep popping up on the free online encyclopedia. Deep in Wikipedia's message boards and edit logs, the site's stewards are toiling for long hours to find them and stamp them out.

It's a new challenge for one of the world's most popular websites, which has long prided itself on its community and reliability. While Wikipedia does not outright forbid the use of AI in editing, the site built its reputation through the human volunteers who devote their time to writing its millions of articles and ensuring they're up to standard, community members said. A surge of faulty AI-generated writing could undo that. "People really, really trust Wikipedia," said Lucie-Aimée Kaffee, a policy and AI researcher who has written about Wikipedia. "And that's something we shouldn't erode."

Wikipedia, which allows anyone to edit its articles, has fought spam and vandalism since its inception. It relies on its global network of volunteers to monitor changes across articles and vet submissions for new ones. "I like to think of it as like an immune system," said Marshall Miller, the director of product for core experiences at the Wikimedia Foundation, the nonprofit that hosts Wikipedia.

That immune system has been taxed by a new bug since AI tools like ChatGPT became widely available in 2022. Large language models have made it easier than ever to generate convincing writing for Wikipedia, Miller and Kaffee said. It's an enticing shortcut for novice contributors or those with an agenda.

An October study by Princeton University researchers found that around 5 percent of the roughly 3,000 new English-language Wikipedia pages created in August 2024 contained text generated by AI. Examples the researchers identified included seemingly innocuous articles where editors appeared to use AI as a writing aid. In other cases, contributors used AI to write articles promoting businesses or advocating for political interests. Wikipedia moderators identified and deleted many of the offending articles, the study noted.

Wikipedia editors have identified other examples of problematic AI use on the site, including an article that described an Argentine hotel instead of a similarly named village, and a completely fictitious article about an Ottoman fortress that went unflagged by Wikipedia volunteers for almost a year.

In 2023, editors started a team, WikiProject AI Cleanup, dedicated to stamping out AI-generated errors on the site. The project has developed its own guides to help editors spot signs of AI writing and maintains a list of more than 500 articles with suspected AI writing for review. In early August, Wikipedia amended its speedy deletion policy, allowing editors to quickly junk articles with obvious hallmarks of AI-generated writing, like leftover AI prompts in the text, technology news outlet 404 Media reported. Wikipedia also bans the use of AI-generated images.
Public logs of messages between Wikipedia editors over the past several years show how the website's policies on AI were developed through lengthy discussion. Editors debated suspected articles on a case-by-case basis, warned offenders and flagged new signs of AI writing to compile into guides and inform site policy. The messages also demonstrate that the problem continues. "Oh man, I've been finding a LOT of AI slop in the submission queue," an editor complained in June.

Miller, of the Wikimedia Foundation, praised the Wikipedia community's mobilization to develop responses to AI-generated content. He also said he thinks there are ways that generative AI can assist the encyclopedia's editors. The Wikimedia Foundation has considered developing AI tools to help Wikipedia's moderators automate certain tasks, onboard new editors and translate articles, the foundation said in an April news release.

Not all of the Wikimedia Foundation's experiments have been welcome. The foundation scrapped an experiment to add AI-generated summaries to its articles in June after editors protested, 404 Media reported. Miller said that the foundation opted not to proceed with AI summaries after community feedback and that any of the Wikimedia Foundation's AI developments would not replace human oversight. "The way that Wikipedia remains neutral and reliable, and the thing that makes it unique, is that all this content passes through the hands of people," Miller said.

Kaffee, the AI researcher, said watching Wikipedia's community develop strategies to identify and respond to AI content in real time could be instructive for other organizations. "This will be a problem that will be in many and most aspects of our life," Kaffee said. "Asking, 'Where do we want AI? ... What kind of rules do we want to set for AI-generated knowledge?' I think is really important."
Wikipedia's volunteer editors are fighting against an influx of AI-generated content, implementing new policies and tools to maintain the encyclopedia's reliability and neutrality.
Wikipedia, one of the world's most trusted online resources, is facing a new challenge in the form of AI-generated content. With the rise of large language models like ChatGPT, the platform has seen an influx of articles and edits containing false information, phony citations, and poorly written content [1]. This phenomenon, dubbed "AI slop" by the Wikipedia community, has prompted a robust response from the site's volunteer editors.
In response to this growing issue, Wikipedia editors have mobilized to create WikiProject AI Cleanup. This initiative aims to tackle the "increasing problem of unsourced, poorly written AI-generated content" [1]. The project has developed guidelines to help editors identify AI-generated text, including:

- Excessive use of em dashes ("—")
- Overuse of certain conjunctions, such as "moreover"
- Promotional language, such as describing something as "breathtaking"
- Curly quotation marks and apostrophes in place of straight ones
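To make the idea concrete, the sketch below shows how such a phrase-and-formatting scan might be encoded. It is a hypothetical illustration, not WikiProject AI Cleanup's actual tooling: the function name, word list, and thresholds are assumptions drawn only from the signs named above, and, per Wikipedia's own caution, these surface signals cannot by themselves prove AI authorship.

```python
import re

# Illustrative word list drawn from the signs described above; the real
# WikiProject AI Cleanup list is longer and maintained by editors.
OVERUSED_WORDS = {"moreover", "breathtaking"}
CURLY_PUNCTUATION = "\u2018\u2019\u201c\u201d"  # curly quotes and apostrophes

def ai_writing_signals(text: str) -> dict:
    """Count surface features associated with chatbot-written prose.

    These counts are weak signals: Wikipedia's guidance says they
    "should not, on their own, serve as the sole basis" for deciding
    that text is AI-generated.
    """
    words = re.findall(r"[a-z']+", text.lower())
    return {
        "overused_words": sum(word in OVERUSED_WORDS for word in words),
        "em_dashes": text.count("\u2014"),
        "curly_punctuation": sum(text.count(ch) for ch in CURLY_PUNCTUATION),
    }

# Example: a promotional, em-dash-heavy sentence trips several counters.
print(ai_writing_signals("Moreover, the view is breathtaking \u2014 truly \u201cunique\u201d."))
```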
A study by Princeton University researchers found that approximately 5% of new English-language Wikipedia pages created in August 2024 contained AI-generated text [2]. This highlights the scale of the problem and the need for vigilant monitoring.
To combat the influx of AI-generated content, Wikipedia has implemented new policies. In early August, the platform amended its speedy deletion policy, allowing editors to quickly remove articles with obvious hallmarks of AI-generated writing [1]. This bypasses the typical seven-day discussion period for article removal, streamlining the process for clearly problematic content.
Additionally, Wikipedia has introduced a new warning label for articles suspected of incorporating AI-generated text. This label, which reads "This article may incorporate text from a large language model," has been affixed to hundreds of Wikipedia articles [2].
While the Wikimedia Foundation, which hosts Wikipedia, doesn't create policies for the website, it is actively involved in developing tools to address the AI challenge. Marshall Miller, the Foundation's product director, likens the community's response to an "immune system" adapting to a new challenge [1].
The Foundation is developing a non-AI-powered tool called Edit Check, designed to help new contributors adhere to Wikipedia's policies and writing guidelines. Features of Edit Check include:

- Reminding writers to add citations when they've written a large amount of text without one
- Checking tone to help writers stay neutral
- A planned "Paste Check," which will ask users who've pasted a large chunk of text into an article whether they actually wrote it
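As a rough sketch of the idea, and not Edit Check's real implementation, the citation reminder and the planned Paste Check could be approximated as simple threshold checks. The function names and every threshold below are invented for illustration.

```python
def needs_citation_reminder(added_text: str, word_threshold: int = 50) -> bool:
    """Approximate the citation reminder: flag a sizeable addition that
    carries no inline <ref> citation. The 50-word threshold is invented
    for illustration; the real tool's trigger may differ."""
    has_reference = "<ref" in added_text  # wikitext inline-citation marker
    return len(added_text.split()) >= word_threshold and not has_reference

def needs_paste_check(typed_chars: int, pasted_chars: int) -> bool:
    """Approximate the planned Paste Check: ask "did you write this?"
    when most of a large addition arrived in one paste. The 80% ratio
    and 500-character floor are invented for illustration."""
    total = typed_chars + pasted_chars
    return pasted_chars >= 500 and pasted_chars / total > 0.8
```

Under these assumptions, a contribution that is mostly one large paste, or a long uncited passage, would prompt the contributor before the edit is saved, mirroring the behavior described above.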
Despite the challenges, the Wikimedia Foundation sees potential benefits in AI technology. It already uses AI to help identify article revisions containing vandalism and is exploring ways to support editors with AI tools for automating repetitive tasks and translation [1].
However, the Foundation emphasizes that human oversight remains crucial. "The way that Wikipedia remains neutral and reliable, and the thing that makes it unique, is that all this content passes through the hands of people," says Miller [2].
The fight against AI-generated content on Wikipedia is ongoing, with editors working tirelessly to maintain the site's reliability. This struggle raises important questions about the role of AI in knowledge creation and dissemination. As Lucie-Aimée Kaffee, a policy and AI researcher, notes, "This will be a problem that will be in many and most aspects of our life. Asking, 'Where do we want AI? ... What kind of rules do we want to set for AI-generated knowledge?' I think is really important" [2].
As Wikipedia continues to adapt to this new challenge, its experiences may provide valuable insights for other organizations grappling with the impact of AI on content creation and curation.