AI for Automation
2026-03-21 | Wikipedia | AI policy | AI content | ChatGPT | content moderation

Wikipedia just banned AI-written articles — 44 to 2

Wikipedia editors voted 44-to-2 to ban AI-generated articles. Only spell-checking and translation are still allowed.


The world's largest encyclopedia just drew a hard line against AI. In a community vote that closed on March 20, 2026, Wikipedia editors voted 44-to-2 to formally ban the use of AI tools like ChatGPT, Claude, and Gemini for writing encyclopedia articles. The result was so lopsided that moderators invoked the "snowball rule" — Wikipedia's equivalent of calling a game early when the outcome is beyond doubt.


What the new policy actually says

The new guideline is blunt: "Don't use large language models (LLMs) to generate article content." That means you can't paste a prompt into ChatGPT, get back a paragraph about quantum physics or the history of Rome, and drop it into a Wikipedia article. The community decided this violates Wikipedia's core content policies, because AI tools routinely invent facts, fabricate sources, and produce text that sounds authoritative but isn't backed by real evidence.

The two narrow exceptions:
  • Spell-checking and grammar fixes — You can ask an AI to suggest edits to something you already wrote, but only if you review every change yourself and the AI doesn't add new information.
  • Translation — You can use AI to translate an article from another language's Wikipedia into English, but only under a separate set of strict rules.

Why 44 editors said yes — and only 2 said no

The overwhelming support came down to a practical problem: AI-generated articles were creating more work, not less. Wikipedia's volunteers already spend countless hours checking facts and sourcing claims. When someone dumps an AI-written article into the encyclopedia, it looks polished on the surface, but nearly every sentence needs to be individually verified. The closing statement put it plainly: "The effort required for large-scale disruption is minimal compared to the effort to clean up and verify every sentence of generated text, placing an unfair burden on volunteers."

The two dissenters argued the wording was too strict. One felt it was harsh on people who use AI lightly during drafting. Another argued that since AI tools are becoming unavoidable, Wikipedia should focus on teaching people how to use them responsibly rather than banning them outright.

An entire cleanup crew already exists

This vote didn't happen in a vacuum. Wikipedia has been fighting AI-generated content for over a year through a dedicated volunteer group called WikiProject AI Cleanup. These volunteers actively hunt for articles that read like AI output, spotting telltale signs such as overly confident language with no citations, repetitive sentence structure, and that unmistakable "assistant" tone.

The project even developed a guide for identifying AI-generated writing. Articles that are clearly 100% AI-generated with no human review can now be nominated for immediate deletion.

One important protection for human writers

Here's something the community thought carefully about: some people naturally write in a way that sounds like AI. The new policy explicitly states that "more evidence than just stylistic or linguistic signs is needed to justify sanctions." In other words, you can't accuse someone of using AI just because their writing is too clean or too formal. Editors need to look at whether the content actually follows Wikipedia's sourcing rules and whether the editor's recent history shows a pattern of problematic edits.

Part of a much bigger wave

Wikipedia's vote reflects a pattern sweeping through the internet's most important open platforms. In just the past week:

  • The Django web framework declared that AI-generated code contributions are creating an "open source crisis"
  • 46 Node.js developers petitioned to ban AI-generated code from the project
  • The machine learning conference ICML caught 497 reviewers using AI to write their paper reviews
  • Hacker News updated its guidelines to explicitly warn: "HN is for conversation between humans"

The common thread? Communities built on human expertise and trust are discovering that AI-generated content erodes both. When anyone can produce unlimited text at zero effort, the people who actually check that text — volunteer editors, code reviewers, peer reviewers — get overwhelmed.

What this means if you use Wikipedia

For the 1.7 billion people who visit Wikipedia every month, the practical impact is straightforward: the encyclopedia is doubling down on human-verified knowledge. In an era where AI-generated misinformation is everywhere, Wikipedia is betting that human volunteers checking real sources is still the best way to build a reliable knowledge base.

If you're a Wikipedia editor who has been using ChatGPT to draft articles, this is a clear signal to stop — or at least limit your AI use to the two approved exceptions. And if you're tempted to write an AI-generated article and pass it off as your own work, know that an active community of volunteers is specifically trained to spot exactly that.

The 44-to-2 vote wasn't close. The world's encyclopedia has spoken.

