AI-Written News: 9% of US Articles Are Machine-Generated
UMD researchers scanned 186,000 articles: 9.1% of US news is AI-generated — yet only 5% disclosed it. AI automation is quietly reshaping American journalism.
The next newspaper article you read might be AI-written — and you'd have almost no way of knowing. A new study finds that AI-generated news now makes up 9.1% of American journalism, with almost zero disclosure to readers.
A sweeping new study from the University of Maryland, conducted in partnership with AI detection startup Pangram Labs, scanned 186,000 articles from 1,500 American newspapers published during summer 2025. The result: 9.1% of articles were partially or fully AI-generated. Breaking that down, 5.24% of all articles were entirely machine-written and 3.98% were a blend of human and AI content.
The most alarming finding isn't the percentage itself — it's the silence surrounding it. Out of 100 AI-flagged articles that researchers manually audited, only 5 disclosed any AI involvement. And just 7 of the 1,500 newspapers surveyed had any public AI policy at all.
Local Papers Bear the Heaviest AI-Generated Content Load
The study's most revealing discovery concerns resources and geography. Small local papers — the ones covering city councils, school boards, and county courts — showed a 9.3% AI content rate. Large-circulation papers (those with over 100,000 subscribers) came in at just 1.7%.
That's a 5.5x gap, and it tells a clear story: AI fills the void where journalism budgets are thinnest. Communities that have already lost reporters to layoffs are now getting machine-generated articles instead — often without knowing it.
Boone News Media, a regional publisher, led the pack with a 20.9% AI rate. Advance Publications (a media conglomerate that owns outlets including Condé Nast) followed at 13.4%.
"The overall number, nine percent, is surprising," said Max Spero, co-founder of Pangram Labs (a startup that builds AI detection tools, founded by ex-Tesla and ex-Google engineers). "But what we also found about where, and perhaps why, this was happening ought to be concerning."
Opinion Pages: Where AI-Generated Content Hides in Plain Sight
At the New York Times, Wall Street Journal, and Washington Post, opinion articles were 6.4 times more likely to contain AI-generated content than news articles from the same papers.
The numbers: 4.5% of opinion pieces at major papers contained AI content, compared to just 0.7% of news articles. Researchers identified 219 AI-flagged opinion articles across the three publications — mostly from guest contributors, not regular columnists.
This pattern makes sense. Guest op-ed (opinion editorial) writers face less editorial scrutiny than staff journalists. They're also more likely to be professionals from other fields — lawyers, executives, academics — who routinely use ChatGPT or Claude as writing assistants. And the line between "assistance" and "generation" is thinner than most people think.
The NYT "Modern Love" AI Writing Incident
The study's abstract findings came alive in a concrete case. Kate Gilgan, a guest writer for the New York Times' popular "Modern Love" column (a weekly essay series about relationships and dating), published a piece that Pangram Labs flagged as over 60% likely AI-generated.
Gilgan admitted using five different AI tools — ChatGPT, Claude, Copilot, Gemini, and Perplexity — for what she called "inspiration and guidance and correction." The NYT published it without any AI disclosure label.
"I used AI as a collaborative editor and not as a content generator," Gilgan said. But literary commentator Becky Tuch pushed back: "I don't want to falsely accuse writers of AI-use. But this reads EXACTLY like AI slop."
The irony is sharp. The New York Times — the same organization that sued OpenAI for training on its articles — is now publishing content that AI helped create, with no reader-facing label. An NYT spokesperson responded that journalism there "is inherently a human endeavor," while noting the paper has an 8-person AI team and has trained 1,700 of its 2,000 newsroom members on AI tools.
Weather Coverage Leads AI-Generated News at 28%
Not all news beats (the specific topics a journalist covers regularly) are equally affected. Weather coverage had the highest AI prevalence at 28% — unsurprising, since weather articles follow predictable, template-friendly structures. Technology coverage followed at 16%, then health at 12%.
Politics and sports articles showed the lowest AI rates, likely because they require original reporting, source cultivation, and real-time observation that current AI tools can't replicate. The geographic distribution was also uneven: AI content was concentrated in mid-Atlantic and southern U.S. states.
98% of Readers Want Labels — 5% Get Them
Perhaps the study's starkest contrast: 98% of news consumers surveyed by Trusting News (a nonprofit that studies how audiences relate to journalism) said they want disclosure when AI is used in news content. Yet in practice, only 5% of AI-generated articles include any such label.
Meanwhile, 50% of U.S. adults believe AI will negatively impact news quality, and only 10% expect positive effects. The trust gap is growing wider, not narrower.
"As a reader, you generally don't have a way to know if the news or opinions are coming from a human or from AI," said Mohit Iyyer, Associate Professor at the University of Maryland and co-author of the study.
The AI Detection Problem Nobody Has Solved
There's an important caveat in the study itself: AI detection remains unreliable. When researchers tested the Gilgan essay with multiple AI detection tools (software that analyzes writing patterns to estimate whether text was machine-generated), results varied wildly. Pangram flagged 60%+, two other tools found roughly 30%, and one found 0%.
Pangram Labs claims 99.98% accuracy, validated by the University of Chicago's Becker Friedman Institute (a leading economics research center). But even at that rate, a 0.02% false positive rate (the chance of incorrectly flagging human-written text as AI) applied across 186,000 articles could mean roughly three dozen pieces were wrongly identified.
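The arithmetic behind that caveat is simple to sketch. A minimal back-of-envelope, assuming the 99.98% accuracy figure translates to a uniform 0.02% false-positive rate (the study does not publish this exact breakdown):

```python
def expected_false_positives(total_articles: int, false_positive_rate: float) -> float:
    """Expected number of human-written articles wrongly flagged as AI,
    treating every article as a candidate for a false positive (an upper bound,
    since only the ~91% human-written share can actually be wrongly flagged)."""
    return total_articles * false_positive_rate

# 186,000 articles at a 0.02% (0.0002) false-positive rate:
fp = expected_false_positives(186_000, 0.0002)
print(round(fp, 1))  # roughly 37 articles
```

Even a detector that is right 99.98% of the time produces dozens of misfires at this corpus size, which is why the study's per-article flags should be read as estimates, not verdicts.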
The study is also a preprint (a research paper shared publicly before formal peer review) on arXiv, not yet published in a peer-reviewed journal. And Pangram Labs co-authored it using their own detection tool — a potential conflict of interest worth flagging.
Still, the directional finding is hard to dismiss entirely. As Daniel Trielli, UMD Assistant Professor of Media and Democracy, put it: "What we teach students is transparency with the public." Right now, American newspapers are failing that standard — and the consequences are measured in reader trust.
The real question isn't whether AI automation is being used in newsrooms. It clearly is. The question is whether readers deserve to know — and on that point, 98% of them have already answered.