Wikipedia just banned AI-written articles — 44 to 2
English Wikipedia voted 44-2 to ban AI-generated content. Only basic spell-checking and translation are allowed. Here's what changed and why it matters.
The world's most-read encyclopedia just told AI to stay out. English Wikipedia has officially banned the use of AI tools like ChatGPT to write or rewrite articles — and the vote wasn't even close. 44 editors voted in favor, just 2 against.
The decision, reached on March 20 at the close of a formal Request for Comment (RfC), marks one of the strongest institutional pushbacks against AI-generated content to date. For a site that draws over 1.7 billion unique visitors per month and serves as a primary training source for the very AI models it's now banning, this is a significant moment.
What Exactly Got Banned
The new policy prohibits editors from using large language models (LLMs — the technology behind ChatGPT, Claude, Gemini, and similar tools) to generate or rewrite article content. That means you can't ask an AI to draft a Wikipedia article, expand a stub, or rephrase existing sections.
Wikipedia administrator Ilyas Lebleu, who proposed the guideline, explained the scale of the problem: "An AI agent can just run wild 24 hours per day. It can cause disruption at a scale that is much larger than what a human editor can achieve."
The policy carves out two narrow exceptions:

- Spell-checking your own writing: Editors can run their own text through an AI for grammar fixes, like an advanced spellchecker. But they must verify every change, because AI can "change the meaning of text beyond what the editor intended."
- Translation drafts: If you're fluent in both languages, you can use AI to create a first-pass translation. You're still responsible for checking every sentence.
The Bot That Proved the Point
The vote didn't happen in a vacuum. In early March 2026, a suspected bot called TomWikiAssist was caught authoring and editing multiple Wikipedia articles autonomously. It demonstrated exactly what editors feared: AI-generated content can flood the encyclopedia at machine speed, while human volunteers spend hours cleaning it up.
That asymmetry — seconds to generate, hours to verify — was one of the strongest arguments for the ban. Wikipedia runs entirely on volunteer labor, and editors argued they shouldn't have to spend their time fact-checking AI hallucinations (confident-sounding statements that are actually wrong).
The Enshittification Pushback
Wikipedia administrator Chaotic Enby, who authored the final successful proposal after earlier attempts failed, framed the decision in broader terms. They called it a "pushback against enshittification and the forceful push of AI by so many companies in these last few years."
Previous attempts to pass an AI policy had collapsed — not because editors disagreed on the need, but because the wording was always either too vague or too strict. Chaotic Enby's approach succeeded by keeping the scope focused: ban AI-generated article text, allow two narrow exceptions, and move on.
Hannah Clover, named 2024 Wikimedian of the Year, praised the clarity: "LLM text has been really frowned upon for a while, but it's good to have that officially be the case."
The Detection Problem
Here's the catch: there's no reliable way to detect AI-generated text. The policy explicitly states that "stylistic or linguistic characteristics alone do not justify sanctions" — meaning editors can't just flag someone because their writing sounds too polished.
Enforcement relies on human moderators reviewing content and editor behavior patterns. Pages with less active moderation remain vulnerable. As one editor noted, some humans naturally write in a style similar to AI, making automated detection tools unreliable.
David Lovett, who writes the Edit History newsletter, argued the rules should go even further to keep Wikipedia "clean" of the AI slop flooding the rest of the internet.
A Feedback Loop Nobody Wants
There's a deeper reason this matters beyond Wikipedia itself. AI companies like OpenAI, Google, and Anthropic have all trained their models on Wikipedia content. If AI-generated text enters Wikipedia, it gets scraped and fed back into the next generation of AI training data — creating a feedback loop of compounding errors.
Imagine an AI hallucinating a false fact into a Wikipedia article. That false fact gets included in training data. The next AI model learns it as truth. It generates more content based on that falsehood. The cycle continues.
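The compounding at work here can be made concrete with a toy sketch. This is not a model of real LLM training; it just assumes each model generation injects new falsehoods into a small fraction of the clean facts it ingests, and that its output becomes the next generation's training data:

```python
# Toy illustration of a training-data feedback loop: if each generation
# corrupts a fraction h of the remaining clean facts, and its output is
# fed back as the next generation's training data, the error fraction
# compounds rather than staying flat.

def compounding_error(hallucination_rate: float, generations: int) -> list[float]:
    """Error fraction after each generation: e' = e + (1 - e) * h."""
    error = 0.0
    history = []
    for _ in range(generations):
        # Hallucinations only corrupt facts that are still clean (1 - e).
        error = error + (1.0 - error) * hallucination_rate
        history.append(error)
    return history

if __name__ == "__main__":
    for gen, err in enumerate(compounding_error(0.02, 5), start=1):
        print(f"generation {gen}: {err:.1%} of facts corrupted")
```

Even a modest 2% hallucination rate per generation monotonically ratchets the corrupted share upward, which is the asymmetry the ban is meant to break.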
This ban is, in part, an attempt to keep one of the internet's most important knowledge sources free from that contamination.
Other Wikipedias Are Moving Too
The English edition isn't alone. Spanish Wikipedia already bans AI from creating new articles or expanding existing ones, though it offers similar carve-outs for editing and translation. Each Wikipedia language edition operates independently with its own rules, so adoption will vary.
What This Means in Practice
If you edit Wikipedia: Don't paste AI-generated text into articles. You can use AI as a grammar checker for your own writing, but you're responsible for every word.
If you read Wikipedia: This policy aims to keep the information you rely on accurate and human-verified. But enforcement isn't perfect — always cross-reference important facts.
If you build AI tools: The world's largest open knowledge base just signaled it won't be a free dumping ground for AI output. Other platforms may follow.
The 44-2 vote sends a clear signal: even in the most open, collaborative corner of the internet, there are limits to how much AI people are willing to accept. Wikipedia didn't ban AI because it doesn't work. It banned AI because cleaning up after it costs more than writing from scratch.