AI for Automation
2026-03-20 · Tags: AI regulation, White House, David Sacks, AI policy, child safety

The White House just dropped its AI rules — the 'Four C's'

David Sacks unveiled the White House AI framework covering child safety, communities, creators, and censorship. Federal rules will override most state AI laws, with a carve-out for child safety.


The White House unveiled its long-awaited federal AI framework on Friday, March 20 — a sweeping set of guidelines that will shape how AI companies operate across the United States. White House AI czar David Sacks structured the framework around what he calls the "Four C's": child safety, communities, creators, and censorship.

(Image: The White House, where the new federal AI framework was announced)

This is arguably the most significant U.S. government move on AI regulation since President Trump's executive order on AI last year. Unlike the bill Senator Blackburn introduced earlier this week, this framework comes directly from the executive branch and is designed to override conflicting state AI laws, creating a single national standard.

The Four C's — explained in plain English

1. Child Safety — AI companies must reduce harms to minors. However, the framework deliberately does not override state-level child safety laws, meaning states like California can still enforce their own stricter rules on AI and children. This was a key concession to both parties in Congress.

2. Communities — Rules addressing how AI-generated content and automated decision-making affect specific groups and communities. Think: hiring algorithms (programs that screen job applicants automatically), lending decisions, or housing recommendations powered by AI.

3. Creators — Obligations around AI's impact on content creators — artists, writers, musicians, journalists. This addresses the central question of whether AI companies can train on copyrighted work and what compensation or attribution creators deserve.

4. Censorship — Boundaries on what AI platforms can restrict, remove, or amplify. This is framed as protecting free speech online, preventing AI systems from unfairly suppressing certain viewpoints or content.

Federal rules vs. state laws — who wins?

The most contentious part of the framework is preemption (when federal rules override state laws). Under this framework, federal AI standards are intended to "limit or displace overlapping state regulation" — meaning a single set of national rules instead of a patchwork of 50 different state laws.

This is what the tech industry wanted. Companies like OpenAI, Google, and Meta have argued that complying with different AI rules in every state is impractical and expensive. The framework gives them one national rulebook instead.

But there's a critical exception: state child safety laws survive. The framework explicitly preserves states' ability to protect minors from AI harms, even if the federal standard differs. This means states that have passed strict age-verification or child-protection AI laws won't see those overturned.

How this differs from Blackburn's AI bill

Senator Marsha Blackburn introduced a separate AI bill in the Senate earlier this week. While both address similar topics, they're fundamentally different instruments:

Blackburn's bill = legislation that must pass through Congress. It's a specific, detailed law with enforcement mechanisms. It focuses heavily on copyright and Section 230 (the law that protects websites from being sued for user content).

White House framework = executive branch guidance that shapes how existing agencies enforce AI rules right now. It's broader, covering four major areas, and it establishes the principles that future legislation should follow, in coordination with the House Energy and Commerce and Senate Commerce committees.

What this means if you work with AI

If you're a creator (artist, writer, designer) — the framework signals that Washington is taking AI's impact on creative work seriously. Expect more concrete rules about how AI companies must handle copyrighted training data. The UK just reversed its AI copyright plan after artist protests, and the U.S. appears to be moving in a similar direction.

If you run a business using AI — the shift to federal standards is a net positive. Instead of tracking AI laws in every state, you'll eventually have one set of rules to follow. But child safety compliance remains state-by-state.

If you're a parent — your state's child safety protections for AI aren't going anywhere. The framework explicitly preserves them, which was one of the biggest concerns heading into today's announcement.

If you're a developer — watch for enforcement details. The framework establishes principles, but the specific rules and penalties will come through agency guidance and Congressional legislation over the coming months.
