OpenAI Lobbyists Ran a 97% AI-Generated News Outlet
97% of a fake news site's articles were AI-written — and it's tied to OpenAI's DC lobbying operation. Inside synthetic journalism at scale.
A researcher analyzing a self-described "collaborative journalism" outlet called The Wire by Acutus made a striking discovery: of 94 articles reviewed, only 3 were written by humans. That's a 97% AI generation rate — with promotional activity tracing directly to the consulting firm that manages OpenAI's Washington lobbying operation.
This is the first documented case of enterprise-scale synthetic journalism (AI-produced news articles at industrial volume, designed to mimic credible reporting) linked to corporate lobbying infrastructure — and it may be a preview of what coordinated AI influence campaigns look like when they scale.
An AI-Generated 'News Site' With No Real Journalists
The Wire by Acutus launched in late 2025 and has published approximately 100 articles across six categories: tech, energy, media, science, business, and healthcare. In format, it resembles any legitimate outlet — professional layout, topic navigation, an About page invoking "collaborative journalism."
But there is no masthead (the section where publications list editors, reporters, and ownership). No named journalists anywhere on the site. No editorial contact. Ownership is not disclosed.
The About page describes a process where "contributors with relevant, firsthand experience" share perspectives that are then "synthesized and edited into stories." The operative word is synthesized. In natural language processing (the branch of AI that teaches machines to read and write), synthesis is precisely the step where a model like GPT-4 turns input prompts into finished text.
The Numbers: 97% Machine-Written, Just 3 Human Articles
Tyler Johnston, writing for The Midas Project's Model Republic publication, ran 94 of The Wire's articles through Pangram — an AI content detector (software that analyzes writing patterns and statistical signatures to determine whether a human or a machine produced the text) claiming 99.98% accuracy.
The results:
- 65 articles (69%) classified as fully AI-generated
- 26 articles (28%) classified as partially AI-generated
- 3 articles (3%) classified as human-authored
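The reported split checks out arithmetically; a quick verification of the percentages and the machine-to-human ratio from the raw counts:

```python
# Classification counts from the Pangram analysis of 94 articles
counts = {"fully_ai": 65, "partially_ai": 26, "human": 3}
total = sum(counts.values())  # 94 articles reviewed

percentages = {k: round(100 * v / total) for k, v in counts.items()}
# → {'fully_ai': 69, 'partially_ai': 28, 'human': 3}

machine_touched = counts["fully_ai"] + counts["partially_ai"]  # 91
ratio = machine_touched / counts["human"]  # ≈ 30.3 machine-touched articles per human one
```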
That's roughly 30 articles with machine involvement for every one classified as human-authored. The editorial bias is equally revealing. Articles include "Escalating Anti-AI Radicalism," which frames criticism of AI development as extremism, and "Will Republicans Let Blue States Set America's AI Rules?," which frames AI regulation as a partisan threat to innovation.
Every category — tech, energy, science, healthcare — carries the same tilt: AI development is progress, critics are obstacles, policymakers who slow deployment are making mistakes. This is not a news operation. It's a content strategy wearing a newsroom's clothes.
The PR Trail That Leads to OpenAI's Lobbyists
Johnston's investigation did not stop at content analysis. He traced who was actually promoting The Wire's articles online and found a heavily concentrated amplification pattern.
Despite having minimal organic social media presence, 50% of The Wire's engagement on X (Twitter) came from a single account: Patrick Hynes, president of Novus Public Affairs, a public relations and government affairs firm in Washington, D.C. Here is where the connections compound:
- Novus Public Affairs represents Targeted Victory, a Republican consulting firm
- Targeted Victory leads OpenAI's Washington lobbying operations — the paid campaign to shape U.S. Congress and federal agency positions on AI policy
- OpenAI has not publicly commented on The Wire or acknowledged any relationship with these firms
Mashable's own reporting adds a pointed caveat: "If Johnston's reporting is correct and his inferences are accurate, we may have an instance of an AI firm deliberately mischaracterizing its work as 'independent journalism' to lobby on its behalf."
This carries regulatory weight. OpenAI's own usage policies prohibit generating content designed to deceive readers about its origins. If The Wire is running GPT-4 or similar models without disclosure, that could constitute a violation of the same company's terms of service — a contradiction that federal regulators may find worth examining.
The Wider Threat: Synthetic Journalism and AI Automation at Scale
Traditional misinformation (false stories spread by identifiable bad actors) leaves traces — domain registrations, funding sources, known ideological networks. Synthetic journalism (AI-produced news at industrial volume, structured to mimic credible reporting) is harder to trace because it borrows the format of journalism without any of the accountability structures reporters are bound by.
The Wire's playbook is technically replicable by anyone with an OpenAI API key (a credential granting programmatic access to GPT-4 and similar models), a hosting account, and a connected PR contact to amplify output. Here's what that playbook produces:
| Metric | The Wire by Acutus | Typical News Outlet |
|---|---|---|
| AI-generated content | 97% | Under 5% |
| Named editors | None disclosed | Publicly listed |
| Editorial transparency | Zero | Clear ownership + funding |
| Social amplification | 50% from 1 PR executive | Organic readership + press |
This investigation arrives as Ziff Davis, Mashable's parent company, pursues the copyright lawsuit it filed against OpenAI in April 2025, making AI's relationship with the media industry contested at the legal level. The Wire represents a different dimension of that conflict: not what AI does to existing journalism, but what AI can replace journalism with.
How to Spot a Synthetic Outlet Before It Shapes Your View
No specialized tool is required to identify most synthetic news operations. The signals are visible before you read a single article:
- No named editorial staff: Real publications list editors and reporters by name. An About page that describes a "process" but names no people is a structural red flag.
- No bylines on articles: Every legitimate article credits its author. Anonymous content on policy-sensitive topics is anonymous by design.
- Uniform prose tone across all categories: AI text tends to be consistent, smooth, and personality-free. Human newsrooms have varied voices and stylistic fingerprints.
- 100% ideological alignment across every topic: Real newsrooms publish disagreements. When every article across every category pushes the same direction, that's a content strategy — not reporting.
- Low organic following, high-profile amplifiers: Thin follower counts paired with strategic amplification from industry-connected accounts are a classic astroturfing signature (manufactured grassroots-style support using paid networks).
AI detection tools — Pangram, Originality.ai, and GPTZero (software that scores the probability a given text was generated by a machine rather than a human) — can supplement manual review. None are infallible. All can be partially defeated by light human editing passes.
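Pangram's method is proprietary, but the family of statistical signals such detectors build on can be sketched with a toy heuristic. This is purely illustrative, not Pangram's (or any vendor's) actual algorithm: one weak signal is that machine-generated prose often has unusually uniform sentence lengths, while human writing varies more.

```python
import re
import statistics

def sentence_length_cv(text: str) -> float:
    """Coefficient of variation of sentence lengths, in words.

    Lower values mean more uniform sentences, a weak 'machine-like'
    signal under this toy heuristic. Real detectors combine many
    far more sophisticated features.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths) / statistics.mean(lengths)

# Uniform, formulaic prose scores lower than varied prose.
uniform = "The model is fast. The model is safe. The model is good. The model is here."
varied = "Stop. When the regulators finally met last week in Washington, nobody expected a deal. It collapsed."
```

A single signal like this is easy to defeat with light editing, which is exactly why the caveat above matters: detector scores are evidence, not proof.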
The Wire by Acutus may not survive scrutiny now that its methodology is documented. But the infrastructure it demonstrates — AI automation at scale, no disclosure, PR amplification, policy targeting — will be used again by others. Start recognizing the pattern now. The AI tools guide covers how generative AI content works and how to evaluate it critically before it shapes your opinion on the next major policy debate.