AI for Automation
2026-05-12 · Pentagon AI · military AI · Anthropic · AI policy · CSET Georgetown · AI national security · classified AI · AI automation

Pentagon Classified AI: 7 Firms Cleared, Anthropic Blocked

Pentagon's classified AI program cleared Microsoft, Google & xAI — but blocked Anthropic. Georgetown's CSET now vets every military AI model before deployment.


The Pentagon's classified AI program has quietly expanded: the U.S. military has signed classified AI access agreements with seven to eight of the world's biggest tech companies — Microsoft, Google, xAI (Elon Musk's AI company), and others. The terms are unlike anything the Pentagon has demanded before: each company must let government researchers test its AI models before public launch, scanning for national security risks before the world sees them. One major player didn't make the list: Anthropic, the company behind Claude, excluded amid an ongoing contractual dispute. And the unelected team deciding who qualifies? A group of researchers operating out of Georgetown University in Washington, D.C.

CSET Georgetown — Pentagon AI model vetting center for national security risk assessment

The Pentagon's Classified AI Shortlist: Who Made the Cut

In May 2026, the U.S. Department of Defense finalized agreements with seven to eight tech companies to integrate AI into classified military networks — systems physically isolated from the public internet (called "air-gapped" networks) that process some of the most sensitive intelligence in the United States government.

These aren't standard commercial cloud deployments. These AI systems would operate alongside classified intelligence feeds, surveillance analysis pipelines, and military decision-support platforms. CSET Interim Executive Director Helen Toner explained the practical application: "A lot of modern warfare is based on people sitting in command centers behind monitors, making complicated decisions about confusing, fast-moving situations. AI systems can be helpful in terms of summarizing information or looking at surveillance feeds and trying to identify potential targets."

The confirmed companies on the Pentagon's list include:

  • Microsoft — via its Azure Government cloud and OpenAI partnership
  • Google — through Google Cloud's Defense contracts
  • xAI — Elon Musk's AI company, maker of the Grok large language model
  • Four or five additional unnamed technology companies (bringing the total to 7–8 confirmed partners)

The key condition embedded in every agreement: each company must allow the U.S. government to evaluate AI models before public launch — a form of pre-market AI review enforced through contract rather than law. The goal is to catch cybersecurity vulnerabilities and national security risks before models reach the general public.

Meet the Gatekeepers: Georgetown's CSET and Pentagon AI Oversight

The institution sitting at the center of this vetting process is CSET — the Center for Security and Emerging Technology at Georgetown University. Founded to analyze how AI and other emerging technologies affect national security, CSET has quietly become the de facto standards body for U.S. government AI risk assessment.

CSET researchers — not elected officials, not military generals — are embedded in the evaluation pipeline for AI models entering government use. Their threat-assessment framework has effectively become the federal standard for which AI systems can be trusted with military data.

CSET Senior Research Analyst Jessica Ji described the fundamental resource gap facing government AI oversight: "They simply don't have the same amount of resources — like manpower, technical staff and also access to compute (the processing power required to run and test large AI models) — to cull these models, to do rigorous testing."

The "they" in Ji's statement is the U.S. government itself. Major tech companies employ tens of thousands of AI researchers; federal agencies responsible for AI governance employ a fraction of that. Washington has handed CSET enormous evaluative responsibility precisely because it lacks the internal technical capacity to perform it.

What CSET researchers assess in pre-launch model evaluations:

  • Cybersecurity risks — whether an AI model can be manipulated into leaking classified data or producing dangerous outputs under adversarial prompting
  • National security vulnerabilities — whether a model's training data or behavioral tendencies create exploitable weaknesses for foreign intelligence services
  • Technology transfer risks — whether deploying a model on classified networks could expose its model weights (the trained numerical parameters that encode what an AI system has learned) to theft
  • Dual-use potential — whether capabilities built for civilian applications could be repurposed for weapons development or offensive cyber operations
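For readers who think in code, the four assessment areas above can be pictured as a simple evaluation checklist. The sketch below is a hypothetical illustration only — the category names, scoring scale, and clearance threshold are assumptions for demonstration, not CSET's actual framework:

```python
from dataclasses import dataclass, field

# Hypothetical risk categories mirroring the four areas described above.
CATEGORIES = (
    "cybersecurity",        # adversarial prompting, classified-data leakage
    "national_security",    # exploitable training-data or behavioral weaknesses
    "technology_transfer",  # exposure of model weights on classified networks
    "dual_use",             # repurposing for weapons or offensive cyber operations
)

@dataclass
class ModelAssessment:
    """Toy record of a pre-launch model evaluation (illustrative only)."""
    model_name: str
    findings: dict = field(default_factory=dict)  # category -> risk score, 0-10

    def record(self, category: str, score: int) -> None:
        if category not in CATEGORIES:
            raise ValueError(f"unknown category: {category}")
        self.findings[category] = score

    def cleared(self, threshold: int = 3) -> bool:
        # A model is "cleared" only if every category has been assessed
        # and every recorded score falls at or below the threshold.
        return (len(self.findings) == len(CATEGORIES)
                and all(s <= threshold for s in self.findings.values()))

# Example: a model assessed as low-risk across all four categories.
assessment = ModelAssessment("example-model")
for cat in CATEGORIES:
    assessment.record(cat, 2)
print(assessment.cleared())  # True under the toy threshold
```

The key design point the sketch captures: clearance is all-or-nothing across categories, so a single high-risk finding — or a skipped category — blocks deployment.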

If you're new to how these AI systems work, our AI automation basics guide covers concepts like model weights and compute in plain language — the same foundations CSET researchers are evaluating when they vet Pentagon AI contracts.

Anthropic Blocked from Pentagon AI — Here's What's Known

The most commercially significant detail in the Pentagon's new AI partnerships: Anthropic was not included. The company behind Claude — one of the most capable large language models (AI systems trained on billions of text examples to generate human-like responses) currently available — did not make the classified access list. The DOD cited "an ongoing dispute" as the reason for exclusion, without specifying the nature of the disagreement.

The competitive implications are stark. While Microsoft, Google, and xAI now have direct pipelines into the most sensitive government networks in the world, Anthropic's models remain locked out. In an era where federal AI contracts represent hundreds of millions of dollars in annual revenue — and where government-validated deployment creates significant enterprise credibility — exclusion from the DOD's classified AI program is a material competitive disadvantage.

CSET Senior Fellow Lauren Kahn assessed the Pentagon's broader classified AI expansion as "a step in the right direction — it is necessary and, frankly, inevitable." The question for Anthropic is how long the contractual dispute persists before its models can be submitted to the same pre-launch evaluation process that granted its competitors access.

Georgetown CSET researchers evaluating AI national security risks for Pentagon classified AI programs

China Is Already Stealing U.S. Military AI Technology — CSET Warns Congress

The urgency behind the Pentagon's classified AI agreements becomes clear when you look at what CSET Senior Fellow Andrew Lohn told Congress in formal testimony before the U.S.-China Economic and Security Review Commission — one of the most influential bodies overseeing America's technology competition with China.

Lohn's assessment was direct: China is actively acquiring American AI technology through multiple channels, despite U.S. export bans and chip purchase restrictions. "Despite banning the models and restricting chip purchases, they are certainly taking active steps to acquire the technology. That includes activities to acquire expertise, hardware, and models."

The acquisition methods China is reportedly using include:

  • Cyber operations — directly stealing model weights (the trained neural network parameters that encode an AI's capabilities), code repositories, and training datasets through hacking campaigns
  • Talent recruitment — hiring AI researchers and engineers who carry institutional knowledge of model architectures and training methods out of American labs
  • Illicit hardware flows — circumventing U.S. chip export controls through third-party intermediaries in countries not subject to the same restrictions
  • Open-source intelligence — reverse-engineering capabilities from models that have been publicly released by American companies

Lohn's recommendation to Congress was pointed: any U.S. government subsidy for AI infrastructure should require verifiable theft-prevention guarantees as a precondition for receiving support. "Congress, and the American taxpayer, should demand assurances that AI developers can protect the technology as a precondition to receiving our support."

This framing positions CSET not just as an academic research center but as a policy architect — the institution proposing the conditions under which billions in government AI investment should flow, and the standards companies must meet to access that investment.

The Warning Washington Didn't Ask For: Stop Overfunding AI

Here is where CSET's position becomes genuinely unusual. While facilitating the Pentagon's classified AI expansion, CSET researchers are simultaneously warning Congress against the very spending boom fueling that expansion.

CSET Research Fellow Julie George made the case explicitly: "The Defense Department should avoid channeling additional funds into technology areas already heavily saturated with private-sector investment."

The evidence backing George's argument is empirical. CSET's research team conducted a landmark study analyzing 260 million scientific publications, categorized into 90,000 granular research groups — one of the most comprehensive studies of government research funding ever assembled. The finding: government investment produces the highest returns in areas where private capital is absent or insufficient. AI, in 2026, is the opposite of that: it's the best-funded technology sector in history.

The Pentagon's incremental AI spending is therefore largely redundant — funding development that would happen anyway through private venture capital, while leaving genuine defense capability gaps chronically underfunded. George's recommendation: redirect focus toward overlooked areas that private investors won't touch because the commercial returns aren't obvious — biotechnology, space infrastructure, advanced materials science, and electronic warfare.

This creates a striking institutional dynamic. CSET is simultaneously the gatekeeper enabling military AI deployment and the research voice arguing that military AI investment is misallocated. That tension is precisely what makes CSET credible to both sides of the congressional aisle — it is not cheerleading for the technology it is paid to evaluate.

Three Shifts Already Underway for AI Automation Companies and Enterprise Buyers

For developers, enterprise buyers, and AI companies watching this space, the CSET-Pentagon framework has practical implications already taking shape:

  • Pre-launch government evaluation is becoming the baseline for any AI company seeking federal contracts. Microsoft, Google, and xAI agreed to submit unreleased models for review — that precedent will expand to other procurement categories and likely influence enterprise procurement standards as well.
  • The Anthropic exclusion is a case study in institutional trust — not just technical performance — determining access to government AI markets. A contractual dispute locked out one of the world's most capable AI systems from the Pentagon's entire classified network. Technical excellence alone is not sufficient.
  • Anti-theft compliance is becoming a procurement requirement. Lohn's Congressional testimony signals that future government AI contracts will include verifiable security conditions alongside performance benchmarks. Companies that cannot demonstrate protection against technology theft to China may find themselves excluded from federal AI programs regardless of model capability.

CSET's most widely shared publication — "Keeping Top AI Talent in the United States" — reached 125 Hacker News points and generated 105 comments, ranking among CSET's most discussed papers. Immigration and visa policy will remain central to U.S. AI competitiveness debates as the talent competition with China intensifies.

For non-technical professionals: think of CSET as the FDA's safety lab, but for military AI. Before a drug enters the U.S. market, the FDA reviews it for safety. Before an AI model enters classified military networks, CSET researchers review it for national security risk. The parallel isn't perfect — CSET has no formal statutory authority — but the functional role is increasingly similar. And unlike the FDA, CSET is a university research center, not a government agency, which raises a genuine governance question: who is accountable when CSET's judgment call turns out to be wrong?

You can track CSET's public research at cset.georgetown.edu — their work on China's AI acquisition strategies and Pentagon technology priorities represents some of the most detailed publicly available analysis of where U.S. AI policy is actually heading. If you're building AI automation tools for enterprise or government clients, their threat-assessment framework is worth understanding before your first federal RFP lands in your inbox.

