All 50 states tried AI laws in 2025 -- 88% failed
All 50 U.S. states introduced AI bills in 2025 -- a historic first. Only 12% became law. Here is why, and what comes next.
Every State Tried. Almost Nothing Stuck.
For the first time in American history, all 50 U.S. states introduced at least one artificial intelligence bill during a single legislative year. According to Brookings Institution research tracking 386 AI bills across all 50 states as of October 20, 2025, the national legislative response to AI reached a genuine milestone in 2025 -- every state capitol, from Sacramento to Tallahassee, had lawmakers drafting rules for artificial intelligence.
The final tally is striking: of 1,208 AI-related bills introduced across all 50 states during 2025, only 145 were enacted into law. That is a passage rate of just 12%. Put another way, 88% of all AI bills introduced in the United States last year died somewhere between introduction and the governor's desk. The National Conference of State Legislatures documented the full scope of this legislative surge, and the picture is simultaneously impressive and sobering: America's statehouses are paying attention to AI, but translating that attention into enforceable law has proven extraordinarily difficult.
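The headline percentages follow directly from the raw counts. As a quick check of the arithmetic, using only the figures quoted above:

```python
# Arithmetic behind the headline figures cited above (NCSL/Brookings, 2025).
introduced = 1208  # AI-related bills introduced across all 50 states
enacted = 145      # bills signed into law

passage_rate = enacted / introduced
failure_rate = 1 - passage_rate

print(f"passed: {passage_rate:.1%}")  # 12.0%
print(f"failed: {failure_rate:.1%}")  # 88.0%
```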
The story of 2025's AI legislation is not simply one of inaction or indifference. It is a story about complexity, competing interests, political alignment, and the structural realities of how laws get made -- and killed -- across 50 different systems operating under 50 different sets of rules.
Breaking Down the Numbers: Which Bills Passed and Which Did Not
Not all AI bills are created equal, and the data reveals dramatic differences in passage rates depending on what a bill actually tries to do. Researchers at Brookings categorized the 1,208 bills into several thematic groups, each with its own passage dynamics.
Responsible Governance bills (a category covering bills that require government agencies to adopt internal AI policies, conduct impact assessments, create AI advisory committees, or establish oversight frameworks for how public-sector AI systems are deployed) had the highest passage rate of any category at 38.6%, despite being the smallest group with only 114 total bills introduced. These bills succeeded for a simple reason: they ask governments to regulate themselves. There are fewer external stakeholders to antagonize, and lawmakers on both sides of the aisle tend to agree that the government should know what AI tools it is using.
Transparency and Trust bills -- legislation requiring disclosure when content is AI-generated, mandating watermarking of synthetic media, or compelling companies to explain AI-driven decisions -- achieved a 15.5% passage rate. Notably, 80% of bills in this category are still in active legislative consideration as of early 2026, meaning the story is not over. These bills have broad conceptual support but frequently stall over definitional disputes: What counts as AI-generated content? What disclosure is sufficient? At what threshold does a system become regulated?
Protection of the Individual bills were the largest category by volume, covering consumer privacy rights in AI systems, algorithmic discrimination protections, AI use in hiring and credit decisions, and, critically, NCII- and CSAM-focused legislation. (NCII stands for Non-Consensual Intimate Imagery, the legal term for sexually explicit images distributed without the subject's consent; CSAM stands for Child Sexual Abuse Material. Both represent some of the most viscerally harmful applications of AI image generation.) Despite having the highest volume of introductions, this category had the lowest passage rates. Remarkably, none of the NCII- or CSAM-focused bills had become law at the time of the Brookings analysis, despite representing some of the most urgent and bipartisan areas of public concern.
Employment-focused AI bills -- requiring disclosure when AI is used in hiring, banning fully automated termination decisions, or mandating human review of AI-generated performance evaluations -- were the only substantive category whose elevated passage rate was statistically significant in 2025. Several states, particularly those with large unionized workforces, moved these bills faster than other categories, reflecting organized labor's effective lobbying presence in those capitols.
The deepfake picture deserves its own examination. AEI's analysis of the full legislative corpus found that 301 out of 1,080 tracked bills directly targeted deepfakes. Of those 301, only 68 were enacted -- and the enacted laws were almost entirely focused on sexual deepfakes, addressed through either criminal penalties or civil liability provisions. Election-integrity deepfake bills and broader synthetic media transparency bills faced much harder paths to enactment.
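The deepfake counts above imply an enactment rate well above the overall average. This comparison is derived from numbers already cited in this article, not a figure stated in the sources:

```python
# Enactment rate for deepfake-targeted bills, from the AEI counts quoted above,
# compared against the overall 2025 passage rate cited earlier in this article.
tracked_deepfake_bills = 301
enacted_deepfake_bills = 68
overall_rate = 145 / 1208  # all AI bills, all 50 states, 2025

deepfake_rate = enacted_deepfake_bills / tracked_deepfake_bills
print(f"deepfake bills enacted: {deepfake_rate:.1%}")  # ~22.6%
print(f"all AI bills enacted:   {overall_rate:.1%}")   # ~12.0%
```

Deepfake bills passed at nearly double the overall rate, which is consistent with the broader pattern in the data: narrow, harm-specific bills fared best.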
The Wealthy States Paradox: More Money, More Bills, More Failures
One of the most counterintuitive findings in the Brookings data is what researchers have termed the wealthy states paradox. Across the dataset, states with higher per capita income introduced significantly more AI bills than lower-income states. Yet those same high-income states had meaningfully higher failure rates for the bills they introduced.
The mechanism behind this pattern is not mysterious once you examine it. Wealthy states tend to have larger, more professionalized lobbying ecosystems. When a state like California, New York, or Massachusetts introduces an AI bill, it immediately attracts a dense cloud of stakeholders -- technology companies, civil liberties organizations, industry trade groups, academic institutions, labor unions, and advocacy nonprofits -- each with the resources and sophistication to engage the legislative process. This stakeholder density does not kill bills through outright opposition alone; it kills them through fragmentation. Each group wants amendments. Each amendment satisfies one constituency and alienates another. The bill grows more complex, more contested, and ultimately more likely to die in committee or fail a floor vote.
States with higher passage rates share three observable traits according to the data: strong and organized business ecosystems (meaning business interests and legislative goals are relatively aligned), demonstrated records of successfully reducing poverty levels (which correlates with less legislative gridlock over distributive conflicts), and fewer competing legislative priorities during the same session. A state legislature consumed by budget crises, healthcare fights, or education funding battles has less bandwidth for nuanced AI policy -- and nuanced AI policy requires bandwidth.
Highly educated states introduce more ambitious legislation -- bills that attempt to address algorithmic bias across multiple industries simultaneously, or that create comprehensive AI liability frameworks covering both civil and criminal dimensions. These technically complex bills fragment stakeholder coalitions and create more surface area for opposition. The result: more ambition, more failure.
The Democratic Advantage in AI Regulation
The Brookings analysis found a measurable partisan dimension to AI bill success rates. Democratic governors and Democratic-leaning state legislatures showed statistically stronger momentum toward actually passing AI regulation compared to their Republican counterparts. This does not mean Republican states introduced fewer bills -- the data shows bill introductions were relatively evenly distributed across the partisan spectrum -- but it does mean that bills introduced in Democratic-controlled states were more likely to advance through committee, receive floor votes, and reach the governor's desk.
This pattern reflects broader ideological differences about the appropriate role of government in regulating private technology companies. It also reflects differences in the political coalitions that support AI regulation: consumer advocates, civil rights organizations, and labor unions -- all of whom are more organizationally active in Democratic-leaning states -- have been the most consistent advocates for AI legislation at the state level.
2026: A Larger Wave Is Already Building
If 2025 felt like a surge, 2026 looks like a flood. As of March 2026, 45 states have already introduced 1,561 new AI bills -- with the full legislative session still underway. The IAPP State AI Governance Legislation Tracker is updating in real time as new bills land daily.
One of the most significant 2026 developments is the proliferation of AI chatbot safety bills. According to AI2Work's tracking, 78 AI chatbot safety bills are currently moving through legislatures in 27 states. These bills generally require AI chatbot providers to implement safety guardrails for minors, disclose AI identity in conversations, and maintain liability for harms caused by chatbot outputs.
But there is a structural complication carrying over from 2025 into 2026 that will quietly kill hundreds of bills before they ever receive a vote. Of the 44 states currently in the second year of a two-year legislative session, only 23 allow bill carryover. (Carryover is the procedural rule that lets a bill introduced in year one of a two-year session remain active and eligible for consideration in year two, rather than expiring and requiring reintroduction from scratch. Without carryover, every bill from year one is automatically dead at the end of that year.) In the other 21 states, bills that did not pass in 2025 are legally dead and must be reintroduced in 2026, consuming legislative bandwidth and staff resources while resetting whatever momentum the bill had previously built. IAPP's 2026 legislative trends analysis identifies this carryover asymmetry as one of the underappreciated structural barriers to building cumulative AI regulatory progress at the state level.
The Federal Preemption Threat That Could Erase Everything
Every state effort described in this article faces a single existential risk: federal preemption. Preemption occurs when federal law supersedes -- and thereby invalidates -- state law on the same subject matter. Under the Supremacy Clause of the U.S. Constitution, when Congress passes a law or when a federal regulatory agency promulgates rules that occupy a regulatory field, state laws covering the same ground become unenforceable, regardless of how carefully crafted or democratically adopted they were.
In March 2026, the Trump administration released a federal AI framework outlining the administration's approach to AI governance at the national level. If that framework is codified into enforceable federal law or regulation -- and if it includes explicit preemption language displacing state AI rules -- the 78 AI chatbot safety bills currently moving through 27 state legislatures could be wiped out overnight. Every state that spent 2025 and 2026 building AI consumer protection frameworks would see those frameworks rendered void wherever they conflict with the federal approach.
This is not a hypothetical concern. Federal preemption has happened before in adjacent technology policy areas -- the CAN-SPAM Act of 2003 preempted stronger state anti-spam laws, and Section 230 of the Communications Decency Act has repeatedly been used to argue against state-level platform liability rules. The AI regulatory landscape is ripe for the same dynamic, particularly as large technology companies -- many of which face a patchwork of 50 different potential state requirements -- have strong incentives to support a single federal standard that they help shape.
The irony is substantial. States rushed to fill a federal vacuum on AI regulation because Congress moved slowly. Now that federal action is materializing, it may not complement state efforts -- it may eliminate them.
What AI Laws Actually Affect You Today
For ordinary people trying to understand what any of this means in practice, the honest answer is: most of the 145 enacted laws are either administrative in nature or narrowly targeted at specific harms.
The laws most likely to affect you directly today are: state laws criminalizing or creating civil liability for non-consensual sexual deepfakes (if you are a victim of NCII, more than 30 states now provide legal remedies that did not exist five years ago); disclosure requirements in a handful of states requiring AI-generated political advertisements to be labeled; and in a small number of states, requirements that employers notify job applicants when AI is used to screen their resumes or make hiring decisions.
What is not yet affecting most people, but is actively working its way through legislatures: comprehensive consumer AI rights frameworks, algorithmic discrimination protections in credit and housing, and AI chatbot safety requirements for platforms used by minors.
The gap between what is in the news and what is actually on the books is significant. Legislators talk about AI regulation constantly. Enacted law on the subject remains thin, targeted, and highly variable by state.
The Bottom Line
The 2025 AI legislative year proved that American democracy is paying attention to artificial intelligence. It also proved that paying attention and making law are very different things. Twelve percent of bills passed. Eighty-eight percent failed. The bills that succeeded tended to be narrow, bipartisan, and low-friction. The bills that failed tended to be ambitious, complex, and contested.
As 1,561 new bills pile into statehouses in 2026, the structural dynamics have not fundamentally changed -- except that a federal framework now looms over everything, promising either to anchor state efforts or to render them moot. The most important AI regulatory story of 2026 may not be any individual state bill. It may be whether the federal government decides to leave states room to regulate, or decides to own the field entirely.
Sources: Brookings Institution -- Analyzing the Passage of State-Level AI Bills | NCSL 2025 AI Legislation | IAPP State AI Governance Tracker | IAPP 2026 Trends | AI2Work -- 78 Chatbot Safety Bills | AEI -- What States Are Actually Doing with AI