A top journalist used ChatGPT for quotes — 15 were made up
Mediahuis suspended its senior fellow after discovering 15 of 53 blog posts contained AI-fabricated quotes. Seven people confirmed they never said what was attributed to them.
One of Europe's most senior media executives just got suspended for doing exactly what he warned others not to do: trusting AI-generated text without checking it.
Peter Vandermeersch, the former CEO of Mediahuis Ireland (publisher of the Irish Independent and Sunday Independent), was caught using ChatGPT, Perplexity, and Google Notebook to write his newsletter — and the AI made up quotes from real people. An investigation found that 15 of his 53 blog posts contained fabricated quotes, and seven individuals confirmed they never said what Vandermeersch attributed to them.
How a journalism veteran fell for AI hallucinations
Vandermeersch is no beginner. He ran Mediahuis Ireland from 2022 to 2025, previously served as editor-in-chief of the Dutch newspaper NRC, and was recently appointed as Mediahuis' first "Journalism and Society" fellow — a prestigious role focused on the future of journalism.
His job was to write a newsletter about the state of journalism itself. The irony is devastating: the person hired to think deeply about journalism's future was undone by the very technology reshaping it.
In his public confession on Substack, Vandermeersch explained his process: he used AI tools to summarize reports and research, then wrote his articles from those summaries — trusting they were accurate. The problem? AI tools like ChatGPT sometimes generate convincing but completely fictional quotes — a phenomenon called "hallucination" (when AI confidently presents false information as fact).
"I was not careful enough. I fell into the trap of hallucinations. The AI-generated quotes were so good that they produce irresistible quotes you are tempted to use as an author."
— Peter Vandermeersch, in his public apology
The investigation: how NRC uncovered the fake quotes
The Dutch newspaper NRC — where Vandermeersch himself used to be editor-in-chief — spotted the problem first. Their journalists noticed they could not verify certain quotes in Vandermeersch's newsletter. When they dug deeper, they found dozens of quotes that simply didn't exist in the sources he cited.
The investigation revealed a troubling pattern across his body of work:
- 15 of 53 blog posts (28%) contained unverifiable quotes
- 7 named individuals confirmed they never said what was attributed to them
- 8 articles were removed from independent.ie
- Vandermeersch admitted he knew some quotes were incorrect but didn't correct them immediately
Mediahuis CEO Gert Ysebaert condemned the breach, stating it "runs counter to the standards we uphold" and that "this should never have happened." Vandermeersch was immediately suspended from his fellowship.
The real lesson: AI summaries aren't sources
This isn't just a journalism story. If you use ChatGPT, Perplexity, or any AI assistant at work, the same trap applies to you.
If you write reports or emails: Never copy a quote or statistic from an AI summary without checking the original source. AI tools routinely fabricate quotes that sound perfectly plausible.
If you do research: Treat AI summaries like a starting point, not a source. If ChatGPT says "According to Dr. Smith...," go find what Dr. Smith actually said. The AI might be paraphrasing — or completely inventing.
If you manage a team: This case shows that even experienced professionals fall for AI hallucinations. The more polished the AI output looks, the easier it is to trust. Build verification steps into your workflow, especially when AI is involved in content creation.
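One such verification step can even be partially automated. As a minimal sketch (the function name and sample data are illustrative, not part of the reporting), a script could flag any quote that does not appear verbatim in the cited source text, so a human knows exactly which attributions still need checking against the original:

```python
# Illustrative sketch: flag quotes that can't be found verbatim in the
# cited source text. A flagged quote isn't proof of fabrication (it may
# be a paraphrase), but it tells you which attributions a human must
# verify against the original source.

def find_unverified_quotes(quotes: list[str], source_text: str) -> list[str]:
    """Return the quotes that do not appear verbatim in the source."""
    # Normalize whitespace and case so line breaks don't cause false flags
    normalized_source = " ".join(source_text.lower().split())
    unverified = []
    for quote in quotes:
        normalized_quote = " ".join(quote.lower().split())
        if normalized_quote not in normalized_source:
            unverified.append(quote)
    return unverified

# Hypothetical example data
source = "Dr. Smith said the results were promising but preliminary."
quotes = [
    "the results were promising but preliminary",  # verbatim: passes
    "this changes everything we know",             # not in source: flagged
]
print(find_unverified_quotes(quotes, source))
```

A tool like this only catches the easy cases; paraphrased or loosely translated quotes still demand a human reading the original document.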
AI hallucinations aren't going away, and they're getting harder to spot
What makes this case especially alarming is that Vandermeersch isn't a careless intern. He's a 30-year veteran of European journalism who knew about hallucinations — he had written about the dangers himself. As AI tools get better at generating realistic text, the hallucinations become more convincing, not less.
An Ars Technica reporter was fired earlier this month over the same issue, which suggests this is becoming a pattern rather than an isolated incident. As more professionals rely on AI tools for research and writing, the line between "AI-assisted" and "AI-fabricated" is getting dangerously blurred.