Anthropic just called OpenAI the 'tobacco industry' of AI
Anthropic staff compare OpenAI to Big Tobacco. CEO Dario Amodei called OpenAI's Pentagon messaging 'straight up lies.' 2.5 million users pledged to quit ChatGPT. A federal judge blocked the Trump administration's ban.
Inside Anthropic, the company behind Claude, staff have a nickname for their biggest competitor. They call OpenAI the "tobacco industry" of artificial intelligence — a company aggressively marketing a potentially harmful product, according to a new biography of Sam Altman based on 250+ interviews.
The comparison emerged from The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future, written by Wall Street Journal reporter Keach Hagey. But what began as internal culture became a very public war when a Pentagon military deal and a presidential ban turned the two AI rivals into the center of the biggest tech policy fight of 2026.
14 Researchers Walked Out — and Built a $380 Billion Rival
The story starts in December 2020. Dario Amodei, the physicist-turned-AI-researcher who co-led the development of GPT-2 and GPT-3 (two of the most influential language models ever built), walked out of OpenAI. Fourteen colleagues followed — including his sister Daniela, now Anthropic's president.
The official reason was safety. Amodei, who co-invented RLHF (Reinforcement Learning from Human Feedback, the technique that makes chatbots follow instructions instead of producing random text), believed OpenAI was prioritizing growth over caution. One flashpoint: a proposal to sell AGI (artificial general intelligence, AI that matches human-level reasoning) to governments or UN Security Council nations, which Amodei found "completely unacceptable."
But the split wasn't purely ideological. The biography reveals Amodei felt sidelined by co-founder Greg Brockman, excluded from key meetings — including one with President Obama — and locked in a power struggle over who controlled the company's most important language model projects.
In early 2021, they founded Anthropic with $124 million in initial funding. Five years later, the company raised $30 billion in its Series G at a $380 billion valuation, roughly six times what it was worth just 11 months earlier. Annualized revenue hit $19 billion by early March 2026, up from $9 billion at the end of 2025.
The Pentagon Demanded Unrestricted Access — Anthropic Said No
In late February 2026, the confrontation that had been brewing for years finally exploded. The Pentagon demanded unrestricted access to Claude (Anthropic's AI assistant) for military use — no contractual limits on how the technology could be deployed.
Anthropic insisted on two red lines: contractual bans against mass domestic surveillance of American citizens, and a prohibition on fully autonomous weapons (systems that can select and engage targets without human approval). The Pentagon said it didn't intend to use AI that way but refused to put it in writing, requiring instead that AI companies allow their models to be used "for all lawful purposes."
Negotiations collapsed. On February 28, 2026, Defense Secretary Pete Hegseth designated Anthropic a "supply-chain risk" — a national security label normally reserved for foreign adversaries like Huawei or Kaspersky. President Trump ordered all federal agencies to immediately stop using Anthropic technology.
"Straight Up Lies" — The Memo That Shocked Silicon Valley
Within hours of Anthropic's ban, OpenAI announced it had secured the Pentagon deal instead. The timing was unmistakable. Sam Altman later admitted the deal looked "opportunistic and sloppy."
What happened next was extraordinary. In an internal memo obtained by The Information, Amodei unloaded on his former employer:
- He called OpenAI's Pentagon messaging "straight up lies"
- He said Altman was making a "false attempt to present himself as a peacemaker and dealmaker"
- He characterized OpenAI's entire safety approach as "maybe 20% real and 80% safety theater"
- He called Altman "mendacious," citing "a pattern of behavior" he'd repeatedly witnessed during their years working together
The key difference, according to Amodei: Anthropic demanded contractual, legally enforceable bans on surveillance and autonomous weapons. OpenAI relied on vague assurances about "existing laws" and "human responsibility" policies — with no contractual enforcement mechanism.
"We've actually held our red lines with integrity rather than colluding with them," Amodei wrote.
2.5 Million Users Voted With Their Thumbs
The public response was immediate and massive. A grassroots boycott called #QuitGPT swept across social media, urging users to cancel ChatGPT subscriptions and delete the app.
The numbers were staggering:
- 295% day-over-day surge in ChatGPT mobile uninstalls (compared with a typical day-over-day change of about 9%)
- 775% spike in one-star reviews for the ChatGPT app
- 2.5 million people pledged to cancel ChatGPT
- 1.5 million paid subscribers actually cancelled, which works out to roughly $30 million in lost monthly revenue (see the arithmetic below)
- Claude downloads surpassed ChatGPT's for the first time ever, sending Claude to #1 on Apple's App Store
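A quick sanity check on that $30 million figure, assuming (our assumption, not a reported breakdown) that most cancelled plans were the standard $20-per-month ChatGPT Plus tier; higher-priced tiers would push the loss higher:

$$1{,}500{,}000 \text{ subscriptions} \times \$20/\text{month} = \$30{,}000{,}000 \text{ per month}$$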
Meanwhile, OpenAI's own staff were furious. CNN reported internal backlash, and Max Schwarzer, OpenAI's VP of Research, resigned to join Anthropic, saying: "Many of the people I most trust and respect have joined Anthropic over the last couple of years."
A Federal Judge Called It "Arbitrary and Capricious"
On March 26, 2026, U.S. District Judge Rita Lin delivered a sharp rebuke to the Trump administration. She blocked the ban on Anthropic, ruling it was "likely both contrary to law and arbitrary and capricious."
Her language was pointed: the Pentagon "provides no legitimate basis to infer from Anthropic's forthright insistence on usage restrictions that it might become a saboteur." She also noted the designation was taken "without any meaningful notice or pre-deprivation process" — essentially saying the government punished a company for asking for ethical guardrails.
The ruling was a significant win for Anthropic, though the legal battle continues. Behind the scenes, reports suggest the Pentagon and Anthropic may be returning to the negotiating table.
The Uncomfortable Paradox Neither Company Can Escape
Here's where the story gets uncomfortable — for everyone involved.
Just three days before criticizing OpenAI's safety theater, Anthropic quietly loosened its own core safety promise. Its updated Responsible Scaling Policy 3.0 (the company's rulebook for when to slow down or stop developing dangerous AI) eliminated a key requirement: the company no longer needs to halt development of risky models if a competitor has already released something similar.
In other words, if OpenAI ships something potentially dangerous first, Anthropic can now build the same thing without triggering its own safety brakes.
Anthropic's explanation? "The policy environment has shifted toward prioritising AI competitiveness and economic growth, while safety-oriented discussions have yet to gain meaningful traction at the federal level."
Safety researcher Mrinank Sharma left Anthropic in early 2026, partly over concerns about this exact loosening. And Amodei himself acknowledged the central paradox in a Fortune interview: he is "simultaneously most publicly worried about AI dangers" while "running one of the companies most aggressively building it."
Critics call his reasoning — the race is happening regardless, so it's safest to lead with guardrails — "motivated reasoning on a civilizational scale."
Where the Numbers Stand Today
Despite the controversies, both companies continue to grow:
- Anthropic: $19B annualized revenue, $380B valuation, 8 of Fortune 10 as customers, 500+ companies spending $1M+/year, 32% enterprise market share (up from 18% in 2024)
- OpenAI: ~$12B+ annualized revenue (2025 figures), facing subscriber losses and staff departures
- Amodei's net worth: an estimated $7 billion, nearly all in Anthropic stock. He and six fellow co-founders have pledged to donate 80% of it
The "tobacco industry" label may be unfair, or it may be prophetic. What's certain is that the two companies born from one organization now represent fundamentally different bets on how humanity should handle its most powerful technology — even as both quietly loosen the very guardrails they promised would make the difference.