US AI Policy Vacuum: Anthropic-Pentagon Feud Exposes the Gap
Brookings finds that at least four US agencies oversee AI, and none has enforcement power. The Anthropic-Pentagon clash shows how untested and ungoverned 'responsible AI' policy really is.
The United States has no single agency governing artificial intelligence, and the US AI policy vacuum is now impossible to ignore. The Brookings Institution, one of Washington's most influential policy think tanks, is finally saying it out loud. Its latest analysis asks a blunt question: "Who is in charge of those in charge?" The answer appears to be nobody, and a public clash between AI maker Anthropic and the Pentagon is forcing that uncomfortable reality into the open. For teams deploying AI automation tools today, understanding this governance gap is essential.
US AI Governance Vacuum: The Problem Nobody Wants to Name
Brookings researchers gave that analysis a pointed title: "The empty national AI policy framework." That's direct language for an institution known for measured, academic prose, and it signals genuine alarm rather than routine scholarly caution.
The structural problem is this: the U.S. currently has at least four major agencies with overlapping AI mandates, and none has binding authority to enforce AI safety standards across industries:
- FTC (Federal Trade Commission — handles consumer protection and unfair business practices, but holds no specific AI enforcement authority)
- NIST (National Institute of Standards and Technology — publishes voluntary AI guidelines, but "voluntary" means companies can ignore them without penalty)
- White House OSTP (Office of Science and Technology Policy — advises the president on tech issues but cannot pass or enforce legislation)
- DHS (Department of Homeland Security — covers critical infrastructure security, but operates without a broad AI mandate)
When governance is diffused (spread thin across competing bureaucracies with no clear hierarchy), accountability disappears. Companies can — and routinely do — cite one agency's framework while quietly ignoring another's. Brookings isn't calling this a coordination gap that better meetings would fix. It's calling it a structural design failure.
Brookings' Center for Technology Innovation (CTI, a dedicated research unit focused on digital policy and emerging technology governance) has tracked this problem across five major AI policy domains: labor economics, national security, AI ethics, regulatory frameworks, and international competitiveness. In every domain, the same pattern repeats: competing mandates, no lead agency, no enforcement teeth.
Anthropic vs Pentagon: When Responsible AI Policy Hit a Hard Limit
The controversy drawing Brookings researchers' sharpest attention: Anthropic — one of the few AI labs explicitly founded on safety-first principles — has been drawn into conflict with the U.S. Department of Defense over what "responsible AI development" actually obligates a company to do.
Anthropic was built around Constitutional AI (a training methodology that teaches AI models to critique and revise their own outputs based on a defined set of ethical principles, rather than optimizing purely for user approval or commercial utility). The company's founders departed from OpenAI specifically because they believed commercial AI labs needed structural safety constraints baked into the models themselves — not just voluntary commitments that dissolve under government or military pressure.
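To make that methodology concrete, here is a minimal sketch of a critique-and-revise loop in the spirit of Constitutional AI. Everything in it (the principle wording, the prompts, and the `generate` stub) is an illustrative assumption, not Anthropic's actual training pipeline:

```python
# Minimal sketch of a critique-and-revise loop in the spirit of
# Constitutional AI. The principles, prompts, and `generate` stub are
# illustrative assumptions, not Anthropic's real training code.

PRINCIPLES = [
    "Avoid responses that could facilitate harm.",
    "Be transparent about uncertainty instead of speculating confidently.",
]

def generate(prompt: str) -> str:
    """Toy stand-in for a language-model call; a real system would
    query a model here."""
    return f"[model output for: {prompt[:48]}...]"

def constitutional_revision(user_prompt: str) -> str:
    """Draft a response, then critique and revise it once per principle."""
    response = generate(user_prompt)
    for principle in PRINCIPLES:
        # Step 1: the model critiques its own draft against a principle.
        critique = generate(
            f"Critique this response against the principle '{principle}':\n"
            f"{response}"
        )
        # Step 2: the model revises the draft in light of that critique.
        response = generate(
            f"Revise the response to address the critique.\n"
            f"Critique: {critique}\nResponse: {response}"
        )
    return response

print(constitutional_revision("Explain how to secure a home network."))
```

The design point is that revision pressure comes from a written set of principles rather than from user approval, which is exactly what makes the Pentagon conflict so sharp: someone still has to decide whose principles go in that list.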
The Pentagon feud exposes the central tension in that position. When a "responsible AI" company enters national security work — whether through direct contracts, capabilities licensing, or policy advisory roles — the question becomes unavoidable: whose definition of "responsible" actually governs the relationship? The military's (which prioritizes strategic advantage, operational secrecy, and classified use cases) or the company's (which prioritizes harm reduction, transparency, and public accountability)?
Brookings frames this as a litmus test (a critical real-world stress test for principles that look clean on paper but face institutional pressure in practice) for the entire responsible AI movement. Their question — "Does the Anthropic–Pentagon feud mean the end of responsible AI?" — is blunt by think-tank standards. The implied verdict: if even the most principled AI lab bends under institutional military pressure, what does that signal for labs with weaker safety commitments?
Who Gets Hurt When No One Is Governing AI
The AI governance vacuum isn't abstract — it produces concrete, identifiable harm for specific groups. Brookings research explicitly tracks three areas of acute exposure:
Children in AI-Saturated Environments
Brookings is actively studying how AI technologies are reshaping early childhood and education — a domain with essentially zero federal AI-specific regulation in place today. Children encounter AI-driven recommendation systems (algorithms that automatically decide what content, videos, or games to show next, optimized for engagement metrics rather than developmental wellbeing) before they can read. No national framework sets legal limits on what these systems can do when the user is under 10. No agency currently holds the power to compel a platform to adjust its algorithm for child safety.
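The mechanics are simple enough to show in a few lines. The toy sketch below, with invented items and fields, illustrates the gap between an engagement-only objective and the kind of child-safety constraint no U.S. agency can currently mandate:

```python
# Toy illustration of engagement-driven ranking. Items, scores, and
# field names are invented for this example.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    predicted_watch_minutes: float  # engagement proxy the ranker optimizes
    age_appropriate: bool           # the signal no agency can require

catalog = [
    Item("autoplay challenge video", 14.0, False),
    Item("phonics lesson", 6.0, True),
    Item("counting game", 5.0, True),
]

# Engagement-only objective: rank purely by predicted watch time.
engagement_feed = sorted(catalog, key=lambda i: -i.predicted_watch_minutes)

# A child-safety constraint a regulator could require: filter, then rank.
safe_feed = sorted(
    (i for i in catalog if i.age_appropriate),
    key=lambda i: -i.predicted_watch_minutes,
)

print([i.title for i in engagement_feed])  # unsuitable item ranks first
print([i.title for i in safe_feed])        # unsuitable item never appears
```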
Workers and the AI Automation Labor Market Transition
Brookings' Future of Work Initiative operates from an explicitly "people-first vision" — centering workers, not corporations or policy theorists, in AI transition planning. This directly challenges the dominant tech-industry narrative that AI displacement is an inevitable force of nature workers simply must adapt to. The Institute's core argument: well-designed policy can shape how and where AI affects employment, not just clean up economic damage after the fact. Without a governance framework with real teeth, that shaping never happens — companies optimize for efficiency, and labor absorbs the full unmitigated cost.
National Security and Global AI Competition
Researchers including Colin Kahl (former U.S. Under Secretary of Defense for Policy) and Manuel Muñiz (an expert on European-U.S. relations) have identified a geopolitical dimension to the vacuum. The EU AI Act (Europe's landmark binding AI regulation, enacted in 2024 with tiered risk classifications, specific prohibitions, and enforcement timelines) now creates a global compliance standard. U.S. companies operating in Europe must follow it even without a comparable domestic rule at home. That regulatory asymmetry (different binding rules in different jurisdictions that multinationals must navigate simultaneously) erodes the U.S. position as a standard-setter in global AI governance, a role Brookings argues America cannot afford to cede.
What a Functional US AI Policy Framework Would Actually Require
Brookings isn't only cataloguing failures. The CTI and the Strobe Talbott Center for Security, Strategy, and Technology (a Brookings institutional partner focused on where emerging technology intersects with geopolitical competition) have outlined what a credible national AI framework would require at minimum:
- A designated lead agency with statutory authority — enforceable rules with real penalties, not voluntary guidelines companies can selectively apply
- Formal interagency coordination so FTC, NIST, DHS, and OSTP issue coherent guidance rather than competing signals that sophisticated corporate lawyers exploit
- Sector-specific rules for the highest-risk applications: healthcare diagnostics, criminal justice algorithms, employment screening tools, and child-facing content systems
- Public participation requirements — formal mechanisms for citizens (not just industry lobbyists) to shape AI rules that directly affect their daily lives
- International alignment with the EU AI Act to prevent regulatory arbitrage (companies relocating operations to jurisdictions with weaker rules to avoid compliance costs while still serving U.S. consumers)
The Brookings TechTank podcast — a dedicated series hosting policy researchers, technologists, and governance practitioners — has been examining these structural questions in depth, using the Anthropic-Pentagon situation as a live case study in what "institutional accountability" looks like when tested against real power.
If you're deploying AI tools today, whether in a product, an internal workflow, or a government agency, the voluntary NIST AI Risk Management Framework (published January 2023) is currently the closest thing to a U.S. national standard that exists. Auditors, investors, and increasingly insurers use it as the baseline for AI governance maturity, and checking your stack against it is the most credible accountability signal you can offer stakeholders right now; a minimal self-check sketch follows below. Explore practical deployment guidance at AI for Automation's guides or stay current with the latest AI policy developments.
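As a starting point, a team could track its coverage with something as simple as this sketch. The four function names (Govern, Map, Measure, Manage) come from the framework itself; the evidence descriptions and pass/fail logic are our own illustrative assumptions, not NIST's:

```python
# Minimal self-check against the four core functions of the NIST AI RMF
# (Govern, Map, Measure, Manage). Function names are from the framework;
# the evidence descriptions and scoring are illustrative assumptions.

RMF_FUNCTIONS = {
    "govern": "accountability structures and policies exist for AI use",
    "map": "context, intended use, and risks of each system are documented",
    "measure": "systems are tested and monitored against identified risks",
    "manage": "risks are prioritized and mitigations tracked to closure",
}

def rmf_gap_report(evidence: dict) -> list:
    """Return the RMF functions with no supporting evidence on file."""
    return [
        f"{name.upper()}: {description}"
        for name, description in RMF_FUNCTIONS.items()
        if not evidence.get(name, False)
    ]

# Example: a team that documents its systems but has no monitoring yet.
for gap in rmf_gap_report({"govern": True, "map": True, "measure": False}):
    print("Missing:", gap)
```

Even a checklist this crude forces the useful conversation: for each function, what evidence would you actually hand an auditor?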