AI Policy: Brookings Ranks 4 Global AI Threats
Brookings' 15+ AI policy experts track 4 crises your tech feed ignores: AI warfare, teen safety, failed summits, and job displacement. All of it is free to read.
AI policy is finally getting the serious treatment it deserves — from an unlikely source. While your tech feed tracks every model release, a century-old Washington think tank is covering something nobody else does: what happens after AI ships. Brookings Institution — one of the world's most-cited policy research organizations — just formalized its AI research operation, and the 4 threat areas on its radar should change how you think about the tools you use every day.
The AI Policy Story Your Feed Keeps Skipping
Most AI coverage focuses on benchmarks (standardized tests that measure how well a model performs on tasks like coding or reasoning) and product launches. Brookings focuses on consequences. Its Center for Technology Innovation (CTI) — a dedicated policy lab studying how technology reshapes society — runs an active research feed covering 4 AI threat areas at once.
That's unusual. Many think tanks publish a major AI report once or twice a year. Brookings runs 15+ active contributors publishing across warfare, child development, international governance, and economic disruption — all simultaneously, all publicly accessible at no cost. And unlike product-focused tech publications, Brookings has no vendor relationships to protect.
The 4 frontlines where AI is already changing real lives
1. AI as a weapon of war
Brookings national security researchers — including Valerie Wirtschafter — are actively tracking AI's documented role in military conflict, including its deployment in Iran-related geopolitical operations. This isn't speculative: AI-powered targeting (algorithms that identify military objectives autonomously), real-time surveillance systems, and AI-generated disinformation campaigns have already been used in active conflicts. The unanswered policy question is who regulates military AI — and how fast legal frameworks can catch up to weapons already in the field.
2. Teenagers and AI — the experiment nobody consented to
Rebecca Winthrop, Co-Director of Brookings' Center for Universal Education, is tracking something striking: teenagers are adopting AI tools at a rate that outpaces any existing safety research about developmental impact. This isn't a screen-time debate. It's about how AI-mediated learning (using AI to write essays, solve homework, and study) reshapes cognitive development (the brain's ability to form independent reasoning skills) during the most formative years — with zero policy framework currently governing how these tools are deployed in schools or homes for minors.
3. What 4 global AI summits failed to produce
In under two years, governments staged at least 4 major international AI summits: Bletchley Park (UK, 2023), Seoul (South Korea, 2024), San Francisco (US, 2024), and Paris (France, 2025). Tom Wheeler — the former FCC (Federal Communications Commission — the U.S. agency that regulates broadcast, telecom, and internet industries) chairman turned Brookings senior fellow — has catalogued what all 4 summits failed to produce:
- No binding enforcement mechanisms (rules with actual legal consequences for AI violations)
- No liability frameworks (no clarity on who's legally responsible when AI causes real-world harm)
- No cross-border data governance (no agreed rules on where user data moves and who controls it)
- No safety floor standards that all participating countries must meet
The summits produced press releases and declarations. They didn't produce enforceable law.
4. AI automation and economic displacement — city by city, wage band by wage band
Mark Muro, a senior fellow at Brookings, translates AI automation risk into specific, actionable numbers: which job categories face the highest displacement probability, which U.S. cities are most economically exposed, and which wage levels are most at risk over the next 5 years. His research goes far beyond generic 'AI will take jobs' headlines — it identifies exactly which workers, in which metros, face the most pressure, and what targeted policy responses could realistically help them.
The 15+ person team you probably haven't heard of
Brookings has assembled a genuinely cross-disciplinary AI research team — not just technologists, but economists, educators, and national security experts working in parallel:
- Nicol Turner Lee — Director of CTI; leads overall AI governance strategy and coordinates the institution's multi-front research agenda
- Elham Tabassi — Heads the AI and Emerging Technology Initiative; formerly a senior scientist at NIST (National Institute of Standards and Technology — the U.S. body that sets voluntary AI safety standards referenced globally)
- Tom Wheeler — Former FCC chairman; analyzes AI regulation through the lens of how prior tech industries (telecom, broadband) were left ungoverned too long
- Mark Muro — Senior fellow; tracks automation's economic impact on specific workers and regions using granular employment and wage data
- Rebecca Winthrop — Co-Director, Center for Universal Education; focuses on AI in learning environments for children and teenagers
- Kevin Desouza — AI governance scholar; studies how public institutions responsibly adopt AI tools at the city and federal level
- Valerie Wirtschafter — Covers AI and national security, disinformation operations, and geopolitical risk across multiple active conflict zones
- Punya Mishra — Cross-domain analyst connecting learning science with AI's practical implications in classroom settings globally
The team publishes via written reports, policy briefs, and the TechTank Podcast — translating dense research into 20-30 minute audio episodes for people who don't have time to read 80-page white papers (long, citation-heavy technical documents that most non-specialists never finish).
Why this matters even if you've never lobbied for anything in your life
Every AI product you use right now — ChatGPT, Claude, Copilot, Gemini — currently operates inside a regulatory vacuum (a legal space with no formal rules governing AI behavior, liability, or safety standards). That vacuum is exactly what Brookings is working to fill. The policy arguments being drafted in CTI reports today will directly shape:
- What U.S. Congress hears in AI-related committee hearings — and what legislation gets drafted
- How school districts decide which AI tools to allow or ban for students under 18
- What 'responsible AI' means in corporate procurement decisions (when companies evaluate AI tools for their teams and need to justify the risk to leadership)
- International treaty negotiations that will eventually determine what AI products can and can't do — including data privacy rules, algorithmic transparency requirements, and safety floors
If you care about whether AI gets regulated in ways that help or hurt your work, knowing who's writing the foundational arguments matters. The Brookings AI feed is entirely free — no subscription required — and the AI automation guides on this site connect that policy context directly to the tools you're already using today.