2026-03-25 · New York Times · AI detection · journalism · AI writing · Modern Love

The New York Times just got accused of publishing AI-written content

A Modern Love essay in The New York Times is under fire for reading 'exactly like AI slop.' The NYT denies it — but can't prove it wasn't AI.


One of the most prestigious essay columns in American journalism just got called out for potentially publishing AI-generated writing — and nobody can definitively prove it either way.

The controversy centers on a "Modern Love" essay published by The New York Times in November 2025, titled "I Was Deemed Unfit to Be a Mother," written by Canadian author Kate Gilgan. The piece describes losing custody of her son due to alcoholism.


"This reads EXACTLY like AI slop"

The alarm was raised by Becky Tuch, editor of Lit Mag News, who posted on X (formerly Twitter): "I don't want to falsely accuse writers of AI-use. But this reads EXACTLY like AI slop."

Tuch pointed out that Modern Love is "notoriously competitive, super hard to break into" — making it an unusual venue for writing that exhibits patterns commonly associated with AI output.

Here's the passage that triggered the most scrutiny:

"Not hate. Not anger. Just the flat finality of a heart too tired to keep trying. That's when I stopped fighting. I didn't give up. I shifted."

Three telltale patterns — or just good writing?

Critics identified three structural patterns that large language models (the technology behind ChatGPT, Claude, and other AI writing tools) tend to produce:

1. Parallelism — "Not X. Not Y. Just Z." This mirroring structure appears repeatedly throughout the essay. AI models lean heavily on this pattern because it's statistically common in training data.

2. Rule of Three — Listing ideas in groups of three is a classic rhetorical technique, but AI uses it so consistently that Wikipedia editors have flagged it as a chatbot tell.

3. Em-dash overuse — The essay uses em-dashes (—) at a rate that some analysts associate with AI text generation.

But here's the problem: these are also just features of polished personal writing. Writer Ann Bauer noted the text "resembled typical Modern Love editorial style." Author Dennis Hogan warned that accusing writers without evidence is "a pretty bad road to go down."

The NYT response — and the bigger problem

The New York Times responded with a statement: "Journalism at the newspaper is inherently a human endeavor, and that will not change." They referenced their policy requiring human oversight and disclosure of any substantial generative AI use.

The author, Kate Gilgan, has not responded to requests for comment. AI detection tools produced inconclusive results.

This isn't the first time a major publication has faced this accusation. The list is growing:

  • CNET — caught publishing AI-generated articles in 2023
  • Sports Illustrated — used fake AI-generated author profiles
  • Wired, Business Insider, Ars Technica — all faced similar controversies
  • Chicago Sun-Times — accused of AI content in reporting

Why this matters even if the essay is human-written

The real story isn't whether Kate Gilgan used AI. It's that we can no longer tell. When a respected publication's most competitive essay column publishes work that credible editors flag as potential AI — and neither side can prove their case — trust in written content has fundamentally changed.

For anyone who writes for a living — bloggers, marketers, journalists, copywriters — this is a warning. Your readers are now scanning your work for AI patterns, whether you used AI or not. Writing that "sounds" AI-generated may face skepticism regardless of its origin.

How to check if something was AI-written

No tool is reliable enough to give a definitive answer. Current AI detectors have high false-positive rates, meaning they regularly flag human writing as AI-generated. The most useful approach is looking for multiple telltale patterns together: excessive parallelism, vague emotional language, lack of specific sensory details, and suspiciously consistent sentence rhythm.
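The pattern-stacking approach described above can be sketched as a simple heuristic scan. The function below is a minimal illustration, not a detector: the pattern names, thresholds, and regexes are my own assumptions for demonstration, and (as the article stresses) every one of these signals also appears in ordinary human writing.

```python
import re

def ai_pattern_flags(text: str) -> dict:
    """Count a few surface patterns often cited as AI 'tells'.

    Heuristics only: human prose triggers them too, so the counts
    should never be read as proof of AI authorship.
    """
    words = max(len(text.split()), 1)

    # 1. "Not X. Not Y. Just Z." negation parallelism
    parallelism = len(re.findall(
        r"\bNot\s+[^.!?]*[.!?]\s+Not\s+[^.!?]*[.!?]\s+Just\b", text))

    # 2. Rule of three: "A, B(,) and C" list constructions
    rule_of_three = len(re.findall(r"\b\w+, \w+,? and \w+\b", text))

    # 3. Em-dash density per 100 words
    em_dash_rate = text.count("\u2014") / words * 100

    return {
        "parallelism": parallelism,
        "rule_of_three": rule_of_three,
        "em_dashes_per_100_words": round(em_dash_rate, 2),
    }

sample = ("Not hate. Not anger. Just the flat finality of a heart "
          "too tired to keep trying.")
print(ai_pattern_flags(sample))
```

Run on the disputed passage, the sketch flags the negation parallelism, which is exactly the point: one flag means nothing on its own. Only several signals appearing together, at unusual density, begin to resemble the profile critics describe.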

But as this controversy shows, even experienced editors disagree on what those patterns prove.
