AI Policy Crisis: 3 Threats Your Government Isn't Ready For
Brookings' AI fellows expose 3 urgent policy crises: power grid overload, AI warfare, and teen safety gaps. Government regulation is 3–5 years behind.
America's most influential policy think tank just turned its research firepower on artificial intelligence — and what it found should concern anyone who pays an electricity bill, works in defense, or has a teenager at home. The Brookings Institution, a 110-year-old Washington, DC policy center with deep ties to Congress and the White House, has quietly transformed into one of the world's most active AI governance research hubs.
The shift matters because Brookings doesn't just observe policy — it shapes it. What its fellows publish today tends to appear in Congressional testimony within 12 months and in actual regulation within 3 years. Its AI research feed is, in effect, your preview of the rulebook coming for every company, school, and government agency that uses AI.
The Research Bench Shaping Washington's AI Policy Playbook
The Brookings Center for Technology Innovation (CTI) now houses 6+ dedicated senior fellows — full-time researchers whose careers are entirely focused on AI governance (the study of rules, accountability frameworks, and rights protections for AI systems). That is not a working group. It is a permanent institutional commitment to governing AI at scale.
The team's composition reveals exactly how seriously Brookings is taking this problem:
- Nicol Turner Lee — CTI Director, leads Washington's AI governance agenda
- Colin Kahl — Former Under Secretary of Defense for Policy, now mapping AI's military dimension
- Sorelle Friedler — Expert in algorithmic fairness (whether AI systems produce equal outcomes regardless of race, gender, or income)
- Courtney C. Radsch — Applies a human rights framework to every AI deployment question
- Kevin C. Desouza — Studies how government institutions restructure themselves around AI
- Molly Kinder (Brookings Metro) — Tracks AI's downstream impact on cities and regional workforces
This cross-disciplinary bench — spanning defense policy, civil rights, energy economics, child development, and urban planning — signals that AI has escalated from a technology question into a systems-level emergency requiring coordinated, multi-domain AI policy responses.
AI Energy Crisis: Your Power Grid Wasn't Built for This
The first pressure point Brookings has identified: AI's electricity consumption is colliding with regulatory frameworks designed for a completely different era of technology. Training a single large language model (an AI system that learns patterns from billions of text examples to generate human-like responses) can consume more electricity than 100 American homes use in an entire year. And that's just training — every time you send a message to ChatGPT, Gemini, or any AI assistant, a data center somewhere runs that computation and draws real power from the live grid.
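For a sense of scale, here is a back-of-envelope version of that comparison. Both inputs are public ballpark figures used purely for illustration (a GPT-3-class training run at roughly 1,300 MWh, and average US household consumption of about 10.5 MWh per year); neither comes from the Brookings research itself.

```python
# Back-of-envelope check of the "100 homes" comparison.
# Both inputs are assumed public ballpark figures, not Brookings data.
TRAINING_RUN_MWH = 1_300        # assumed energy for one GPT-3-class training run
HOUSEHOLD_MWH_PER_YEAR = 10.5   # assumed average annual US household consumption

homes_equivalent = TRAINING_RUN_MWH / HOUSEHOLD_MWH_PER_YEAR
print(f"One training run is about {homes_equivalent:.0f} US households for a year")
# Prints roughly 124, consistent with "more than 100 American homes"
```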
The scale is becoming impossible for grid planners to ignore:
- US data center electricity demand is projected to reach 6–12% of total national consumption by 2028, up from roughly 2% in 2020
- Individual AI facilities now request 1–2 gigawatts of dedicated grid capacity — equivalent to a full nuclear reactor's output
- Virginia, Texas, and Georgia grid operators are already reporting multi-year queues of AI campuses waiting for power connections
- Existing utility regulations, written for hospitals and factories, have no mechanism for facilities whose power demand can double within 18 months
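A rough sanity check shows how those numbers fit together. The sketch below assumes total US electricity consumption of roughly 4,000 TWh per year (a round public figure, not a Brookings number) and a hypothetical campus drawing a steady 1 GW:

```python
# Rough scale math behind the bullets above; all inputs are assumptions
# drawn from public ballpark figures, not from the Brookings research.
US_ANNUAL_TWH = 4_000                # approx. total US electricity use per year
LOW_SHARE, HIGH_SHARE = 0.06, 0.12   # projected data center share by 2028
HOURS_PER_YEAR = 8_760

low_twh = LOW_SHARE * US_ANNUAL_TWH
high_twh = HIGH_SHARE * US_ANNUAL_TWH
print(f"Projected 2028 data center demand: {low_twh:.0f}-{high_twh:.0f} TWh/year")

# A single 1 GW campus running around the clock:
campus_twh = 1 * HOURS_PER_YEAR / 1_000   # GW x hours -> GWh -> TWh
print(f"One 1 GW campus: about {campus_twh:.2f} TWh/year")
# ~8.76 TWh/year, on the order of one nuclear reactor's annual output,
# so the projected range implies dozens of reactor-equivalents of new demand.
```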
Brookings researchers describe this as "global energy demands within the AI regulatory landscape" — a measured way of saying the rulebook hasn't caught up to reality. What Brookings produces here will likely inform how the Federal Energy Regulatory Commission (FERC) — the agency overseeing US electricity markets — updates its frameworks for AI-era infrastructure. Your electricity rates, your state's grid reliability, and the pace at which AI tools improve are all downstream of this policy question.
AI National Security Crisis: Generative AI Is Already a Military Asset
The second crisis is receiving far less mainstream coverage but carries the highest stakes: generative AI (software that produces text, images, video, or synthetic audio on demand) is already deployed as a strategic tool in active geopolitical conflicts.
Brookings researchers — led in part by former Defense official Colin Kahl — are studying documented cases including Iran's use of generative AI, framing it as an example of AI functioning as a force multiplier (a capability that dramatically amplifies the effectiveness of existing military or intelligence resources without proportionally increasing costs or personnel requirements). This moves the AI policy conversation from "how do we regulate hiring algorithms" to "how do we prevent AI from destabilizing international security."
The policy vacuum is genuinely unprecedented. Analysts who previously focused on nuclear treaties, conventional weapons export controls, and economic sanctions are confronting entirely new questions:
- At what point does AI-generated disinformation constitute an act of aggression under international law?
- Should AI foundation models capable of military application face the same export controls as weapons systems?
- How should democracies regulate AI in defense contexts without creating asymmetric disadvantages against authoritarian actors who face no such constraints?
When a think tank with Kahl's credentials — he served in the Biden administration's Department of Defense — publishes on AI weaponization, the National Security Council reads it. This research will feed directly into export control frameworks, international treaty negotiations, and military doctrine updates, with first drafts likely circulating within the next 24 months.
Teen AI Safety Crisis: Teens Use AI, Just Not How Policy Assumes
The third crisis is the most pervasive and the least technically complex — yet it may produce the most sweeping AI regulations of the three.
A Brookings paper titled "Teens are using AI — but not how we think" flags a fundamental mismatch between how adults (parents, school board members, state legislators) assume teenagers engage with AI and what's actually happening in practice. The title alone signals the core problem: current AI policy frameworks are being designed around faulty behavioral models — a flaw that produces rules that either over-restrict beneficial technology or completely miss the actual harms.
Based on patterns emerging in adjacent research, the gaps most likely include:
- Schools focused on banning AI for homework while students use it heavily for emotional support, social coaching, and mental health — areas existing bans don't address at all
- Policy attention concentrated on high-income demographics while lower-income students face different, under-researched risk and benefit profiles
- Media panic about cheating obscuring the reality that teens heavily use AI for creative work, job applications, and language learning
- Blanket school bans creating a digital literacy divide between students whose families teach responsible AI use and those who receive zero guidance
The stakes are concrete and immediate. Dozens of US states introduced AI-in-education legislation in 2025–2026. The EU's AI Act (the European Union's comprehensive AI regulation framework, which applies to any product serving European users) includes specific provisions for AI systems used with minors. If the foundational behavioral research policymakers are citing is wrong, the resulting regulations will cause real harm at scale — either by restricting tools that help students who need them most, or by leaving genuinely dangerous usage patterns entirely unaddressed.
The AI Regulation Lag: Why the 3–5 Year Policy Gap Keeps Growing
We are 30+ months into mainstream AI adoption, measured from early 2023, when ChatGPT surpassed 100 million users roughly two months after launch. By historical precedent, we are still at least 2–3 years away from comprehensive regulatory frameworks in any of these three areas — and that lag is not an accident. It is the structural reality of how democratic societies govern new technology.
The pattern repeats: the commercial internet arrived in the mid-1990s; meaningful online privacy law didn't appear until the early 2000s. Social media platforms launched in the mid-2000s; regulatory responses began in earnest a decade later — after the harms were already embedded in society. Each cycle, the delay costs more in economic disruption, security incidents, and social damage. AI's velocity makes the stakes proportionally higher than any previous technology wave.
Brookings' multi-disciplinary approach is a deliberate attempt to compress that lag by producing policy-ready research faster than academic journals typically allow. Their 6+ AI fellows bridge the gap between what engineers understand technically and what legislators can actually act on — and they do it on Washington's timeline, not academia's.
The practical implication: if your work touches energy infrastructure, defense or government contracting, or users under 18, you are operating in the three highest-priority AI regulation target zones identified by the institution most likely to influence what those regulations look like. Building your AI compliance knowledge now — before the rules arrive — is not optional planning. It is a competitive advantage. The organizations that engage with this research today will shape the rules. The ones waiting for final text will be scrambling to comply with frameworks they had no hand in designing.