2026-03-28 · AI policy · Trump · federal preemption · state laws · AI regulation · White House · California AI · Colorado AI

Trump's new framework aims to wipe out every state AI law

Trump's White House released a national AI framework on March 20, 2026, pushing to override all state AI regulations with a single federal standard.


The Federal Government Just Declared War on State AI Laws

On March 20, 2026, the Trump White House released what it calls a National Policy Framework for Artificial Intelligence — a four-page document that, if enacted by Congress, would eliminate the authority of all 50 states to regulate AI development and replace it with a single federal standard.

The document is framed as a legislative recommendation to Congress, not an executive order with immediate legal force. But it builds on an executive order President Trump signed in December 2025 that already set enforcement machinery in motion — including a task force inside the Department of Justice with a mandate to sue states whose AI laws the administration considers too restrictive.

In plain English: The White House wants Congress to pass a law that overrides (legally called preemption — when federal law cancels out state laws) any state rule that tells AI companies how to build or deploy their products. States could still pass child safety laws and consumer protection laws, but they would lose the ability to set their own safety requirements for AI systems themselves.

The White House's argument: having 50 different state AI laws creates a "fragmented patchwork" that makes it impossible for American AI companies to operate efficiently — and hands an advantage to China, which operates under one national policy.

The White House, where the National AI Policy Framework was released on March 20, 2026

What the Framework Actually Does — and What It Doesn't

The March 20 framework is the legislative wish list. The December 2025 executive order is already the enforcement mechanism. Together, they form a two-part strategy:

What the executive order (already active) does:

  • AI Litigation Task Force: A unit inside the Department of Justice, active since January 10, 2026, whose job is to identify state AI laws and challenge them in federal court — arguing they unconstitutionally burden business across state lines.
  • Commerce Department review: The Secretary of Commerce was required by March 11, 2026, to publish a list of state AI laws the administration considers overly restrictive, flagging which ones to hand to the litigation task force.
  • $42.5 billion in funding leverage: The order conditions federal broadband infrastructure grants (the BEAD program — a congressionally approved fund for expanding internet access) on states repealing AI regulations the administration opposes. States that keep their laws risk losing their share.
  • FTC directive: The Federal Trade Commission (the agency that polices unfair business practices) was directed to declare that state-mandated bias testing (laws requiring companies to check whether their AI treats different groups of people differently) is itself an unfair and deceptive practice.

What the new legislative framework adds:

  • Copyright position: The administration states that "training AI on copyrighted material does not violate copyright laws" — a position that directly conflicts with ongoing court cases brought by authors, newspapers, and visual artists.
  • Regulatory sandboxes: Companies would be able to apply for waivers from federal AI rules for up to 10 years to experiment with new products — a significant shield from liability during development.
  • No new federal AI agency: Rather than creating a dedicated AI regulator, the framework proposes leaving oversight to existing agencies in each industry sector (financial regulators for finance, healthcare regulators for medicine, etc.).

Which State Laws Are in the Crosshairs

The administration has named specific state laws it wants blocked or overturned. The most prominent targets:

Colorado's AI Act — set to take effect in the summer of 2026 — requires developers of "high-risk" AI systems to show "reasonable care" to protect users from discriminatory outcomes. The White House specifically criticized Colorado's use of a disparate impact standard (a legal test that checks whether a rule or system produces unequal outcomes for different racial or demographic groups, even without intent to discriminate), calling it an embedding of "ideological bias" in law.

California's SB 53 requires large AI developers to publish documents explaining how their systems work and to file catastrophic risk assessments (reports identifying scenarios where an AI could cause large-scale harm). California State Senator Scott Wiener, who authored the bill, said: "If the Trump Administration tries to enforce this ridiculous order, we will see them in court."

California's AB 2013 requires disclosures about what data was used to train AI systems.

Colorado Attorney General Phil Weiser said the state plans to challenge the order in court. A coalition of 36 state attorneys general from both parties had already issued a joint letter urging Congress to reject any federal law that would ban state AI regulations.

White House AI framework document released March 20, 2026

Who Supports It, Who Opposes It — and Why It May Stall

House Speaker Mike Johnson and Majority Leader Steve Scalise offered immediate support for the framework. The tech industry, through groups like NetChoice, praised it as creating "a light-touch regulatory environment" necessary for innovation.

But the political path is complicated. Democratic opposition is near-universal. More surprising: some Republicans are also resistant. Senator Marsha Blackburn (R-Tennessee) has drafted competing legislation that includes a "duty of care" requirement for AI companies — directly contradicting the framework's emphasis on limiting liability. The preemption priority has already failed to make it into two major pieces of legislation this Congress: a budget bill and the annual defense policy bill.

Public polling presents another obstacle: 80% of Americans say they support maintaining AI safety rules, even as the administration argues those rules harm innovation. More than 280 state lawmakers from both parties signed a letter opposing any federal ban on state AI regulation.

Legal experts have also raised fundamental constitutional questions about the executive order itself. The ACLU called it "a hodgepodge of faulty legal theories," specifically noting that the BEAD broadband funding was authorized by Congress — not the executive branch — making the administration's attempt to attach conditions to it legally questionable. Constitutional law scholars broadly agree that preemption requires an act of Congress, not just an executive order, to have permanent legal force.

The net result is a high-stakes standoff: the White House pushing hard to consolidate AI governance at the federal level, individual states preparing lawsuits, and Congress — the only body that can actually settle the question — caught between industry pressure and constituent demand for accountability. The outcome will determine whether the next generation of AI products faces one national standard or 50 different sets of rules.

