Warren says Pentagon blacklisted Anthropic as 'retaliation'
Senator Elizabeth Warren accuses the Pentagon of retaliating against Anthropic for refusing to build AI surveillance tools. A federal court hearing is set for today.
Senator Elizabeth Warren just sent a letter to Defense Secretary Pete Hegseth accusing the Pentagon of retaliating against Anthropic — the company behind Claude — for refusing to let its AI be used for mass surveillance of Americans or autonomous weapons without human oversight.
The accusation landed one day before a critical federal court hearing today, March 24, where Judge Rita Lin will decide whether to temporarily block the Pentagon's 'supply chain risk' label that has effectively frozen Anthropic out of all government-related business.
What the Pentagon actually did
Earlier this month, the Department of Defense designated Anthropic as a 'supply chain risk' — a label normally reserved for companies suspected of espionage or counterfeiting. The designation doesn't just end the Pentagon's own contract with Anthropic. It forces every company that does business with the U.S. military to certify that it doesn't use Anthropic's products either.
The result: more than 100 enterprise customers contacted Anthropic with concerns about continuing to work with the company. Anthropic's CFO estimates the damage could range from hundreds of millions to billions of dollars in lost 2026 revenue.
Why Warren calls it retaliation
"I am particularly concerned that the DoD is trying to strong-arm American companies into providing the Department with the tools to spy on American citizens and deploy fully autonomous weapons without adequate safeguards," Warren wrote. She argued the Pentagon could have simply ended its contract — instead, it chose a nuclear option designed to punish the company across the entire defense ecosystem.
A court filing made things worse for the Pentagon
A March 20 court filing revealed something striking: Pentagon negotiators told Anthropic the two sides were 'nearly aligned' on contract terms — just one week before the Trump administration declared the relationship over. That timeline suggests the supply chain risk label reflected not genuine security concerns but a political decision made above the negotiators' heads.
The bigger picture: who draws AI safety lines?
This case could set a precedent for every AI company in America. The core question: can the government punish a company for setting safety boundaries on its own technology?
Anthropic refused to build AI for mass surveillance or fully autonomous weapons. The Pentagon responded by effectively blacklisting the company from the entire defense sector. If the court sides with the Pentagon, it sends a message to every AI lab: either build what the military wants, or lose access to the government's massive procurement network.
The tech industry has already rallied behind Anthropic. Former judges filed briefs raising concerns about the Pentagon's use of the supply chain risk label, and industry groups representing hundreds of companies urged the court to pause the designation.
What happens next
Judge Rita Lin hears Anthropic's preliminary injunction request today in San Francisco federal court. If she grants it, the supply chain risk label would be frozen while the full case plays out — potentially restoring Anthropic's ability to work with government contractors.
If she doesn't, Anthropic faces months of legal battle while its enterprise customers decide whether to stick with Claude or switch to competitors who haven't drawn the Pentagon's ire.