U.S. AI Policy Vacuum: Nobody's in Charge
Brookings warns U.S. AI regulation is an 'empty framework': no agency leads, global summits deliver nothing binding, and kids are already in the gap.
America's AI policy has no one in charge — and the country's AI industry just proved it by moving faster than its government. A top Washington think tank is now publicly asking who's responsible for this AI regulation gap. The Brookings Institution (one of the most cited, nonpartisan policy research organizations in the United States) just published a series of reports framing the country's AI oversight as an “empty national framework” — a structure that exists on paper but lacks the leadership, coordination, and enforcement to function in practice.
This matters beyond Washington: if nobody is clearly in charge of AI policy, the rules governing the tools you use at work, in school, or in healthcare can shift unpredictably — from any direction, at any time.
U.S. AI Policy: An Empty Framework, Not a Missing One
The sharpest Brookings report is framed as a question: “Who is in charge of those in charge?” The answer is embedded in the report's own title — The empty national AI policy framework — and it is stark: the U.S. has the scaffolding of AI governance but lacks the institutional authority to make it work.
Unlike nuclear energy — overseen by the NRC (Nuclear Regulatory Commission, a dedicated federal body with statutory authority to set and enforce safety rules) — or aviation, governed by the FAA (Federal Aviation Administration, the sole agency responsible for all civil aviation nationwide), artificial intelligence touches dozens of government domains simultaneously: healthcare, finance, national security, labor, and education. No single regulator owns it all.
The result is a fragmented patchwork of AI oversight with no single coordinator:
- The FTC (Federal Trade Commission — the agency responsible for policing consumer fraud and unfair business practices) handles AI in advertising and data collection
- NIST (National Institute of Standards and Technology — a federal lab that publishes voluntary technical guidelines, not binding law) released an AI Risk Management Framework that companies are encouraged but not required to follow
- OSTP (Office of Science and Technology Policy — the White House's in-house technology advisory body) coordinates executive branch positions on AI strategy
- Sector regulators like the SEC, FDA, and CFPB each address AI only within their narrow existing mandates, with no cross-agency coordination authority
When an AI system causes real harm — a biased hiring algorithm screening out qualified candidates, a medical AI delivering incorrect dosage guidance — there is no clear federal authority to investigate, fine, or require remediation. Brookings frames this as a structural failure with serious public accountability consequences, not a minor bureaucratic gap.
Global AI Governance: Three Summits, Zero Binding Rules
Brookings extended the same critique to international governance in a companion piece: “What got lost in the global AI summit circuit?”
Since 2023, the world's major democracies have held at least three high-profile AI governance gatherings: the UK's AI Safety Summit at Bletchley Park (2023), the AI Seoul Summit (2024), and the Paris AI Action Summit (2025). Each produced communiqués, voluntary pledges, and multi-stakeholder declarations. None produced binding international law, an independent oversight body, or enforcement mechanisms with real consequences for violations.
The Brookings critique is pointed: international AI governance has become a summit circuit — a rotating conference schedule that generates diplomatic goodwill without institutional follow-through. For companies operating globally, this means continued compliance complexity: adhere to the EU's AI Act (a binding regulation that classifies AI systems by risk level — from minimal-risk chatbots to high-risk hiring tools — and requires documentation, testing, and audits for high-risk systems) in Europe, while navigating separate, inconsistent standards everywhere else. The gap between summits and enforceable rules is exactly where liability falls through.
AI in K-12 Education: Classrooms First, Rules Never
Perhaps the most urgent finding in Brookings' current output is a report titled Generation AI Starts Early. The core argument: AI tools are already shaping how young children learn to read, solve problems, and engage with information — and policy governing those tools has not kept pace.
Personalized learning platforms (software that dynamically adapts lesson difficulty to each student's performance and pace), AI-curated content feeds in educational apps, and voice-activated classroom assistants are being deployed at scale in K-12 schools. The critical gap Brookings identifies: standards for what data these tools can collect, how long they retain it, and how the underlying AI models (the trained software systems that generate responses or make predictions based on patterns learned from large datasets) should be tested for accuracy and bias in educational settings — those standards are still being written by product teams, not policymakers.
Experts including Brookings researcher Shriya Methkupally argue this creates a problem with generational stakes: the norms embedded in today's educational AI will shape how an entire cohort of children understands learning, authority, and knowledge itself. Getting those norms wrong — or letting them be set entirely by commercial incentives — carries costs that will take decades to fully measure.
AI and Jobs: The Policy That Doesn't Exist Yet
Alongside education, Brookings economist Mark Muro and colleague Sam Manning examine how AI is reshaping career pathways and job quality — not as a future forecast, but as a present reality. Automation (the use of AI or software to perform tasks that previously required human judgment, skilled labor, or specialized domain expertise) is accelerating faster than workforce retraining programs can respond.
The institutional analysis mirrors the governance gap: there is no coordinated federal strategy identifying which workers are most at risk, what large-scale retraining looks like in practice, or how displaced workers should be supported during transitions. Brookings frames this not as a technology problem but as a policy coordination failure — and an urgent one, since workforce transitions of this scale typically require a decade or more to navigate even with strong institutional support already in place.
The AI Regulation Vacuum Has a Price — and You're in It
If you're building with AI, deploying it in your organization, or simply using AI-powered tools at work, the Brookings analysis has three concrete implications right now:
- Regulatory whiplash is real. At least four U.S. states — California, Texas, Colorado, and Illinois — have advanced or passed AI-related legislation in the absence of federal standards. These state laws do not agree with each other, and that patchwork is your compliance burden, not Washington's problem to solve first.
- Liability remains unresolved. No clear federal framework means no clear answer to who is responsible when AI causes harm — a growing legal exposure problem for any organization using AI in hiring, lending, healthcare, or customer-facing services.
- Education and workforce expectations are shifting. The next generation of workers is being educated by AI systems operating under no common standard. The skills and expectations they bring to workplaces will be shaped by those tools in ways current hiring practices have not yet accounted for.
Brookings is ringing an alarm that most mainstream tech coverage ignores in favor of benchmark announcements and product launches. The alarm: the governance infrastructure for the most consequential technology in modern history is not ready — and the longer the vacuum persists, the harder it becomes to fill. Watch state-level AI legislation closely in 2026; it is the real leading indicator of where federal rules are heading. If you're building tools for schools or deploying AI across your workforce, the standards governing your use case are about to get much more specific. You can track Brookings' ongoing analysis at Brookings' AI research hub, or explore practical AI implementation frameworks at our AI automation guides library.