GUARD Act: AI Age Verification That IDs Every Adult
The GUARD Act would require every American adult to show government ID before using any AI tool. Framed as child safety legislation, it would cover nearly all AI services online.
The GUARD Act — Congress's AI regulation bill framed as child safety legislation — would require every American user, adults included, to submit government identification before accessing virtually any AI-powered service online. The Electronic Frontier Foundation (EFF), the nonprofit digital rights organization that has defended online freedoms since 1990, warns that the bill's scope extends far beyond dangerous AI companions and covers the everyday AI automation tools millions rely on. A vote is scheduled this week.
That means a high school student asking an AI tutoring tool for help with algebra could be blocked entirely. A teenager trying to return a package through a retailer's AI customer service chat could be denied access. And every adult who wants to continue using the tools they rely on daily would first need to prove their age to a private company — or a third-party verification service that stores their identity permanently.
A Child-Safety Bill That Covers All AI Tools
The GUARD Act targets "AI companions" — chatbots designed to mimic human-like interaction with vulnerable users. On paper, that sounds like a narrow, defensible category. The legal definition in the bill is something else entirely.
Under the bill, any system that produces responses that are not fully pre-written (that is, where the AI generates answers dynamically rather than selecting from a fixed menu of scripted replies) qualifies as a regulated "AI chatbot." That sweeps in virtually every AI-powered tool on the internet today (see the sketch after this list):
- Homework help tools — AI tutoring platforms that adapt explanations to each student's level
- Customer service bots — Retailer and airline chatbots that answer real-time product and booking questions
- AI search engines — Search tools that generate conversational summaries or direct answers to queries
- Educational apps — Language learning, SAT prep, and AI writing coach applications
- Health information tools — Symptom checkers and insurance navigation assistants
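To make concrete how low that bar is, here is a minimal illustrative sketch in Python. Everything in it is hypothetical (neither bot is a real product, and `model.generate` stands in for any generative AI backend): the first function picks from a fixed menu of scripted replies and would fall outside the bill's definition, while the second generates its answer dynamically and would count as a regulated "AI chatbot," even if all it does is handle package returns.

```python
# Illustrative sketch only: neither bot is a real product, and "model" is a
# hypothetical stand-in for any generative AI backend.

SCRIPTED_REPLIES = {
    "track my order": "You can check your order status at /orders.",
    "return policy": "Items can be returned within 30 days of delivery.",
}

def scripted_bot(message: str) -> str:
    """Selects from a fixed menu of pre-written replies: outside the bill's definition."""
    return SCRIPTED_REPLIES.get(
        message.lower(),
        "Sorry, I can only answer the questions listed in the help menu.",
    )

def generative_bot(message: str, model) -> str:
    """Generates the answer dynamically: a regulated "AI chatbot" under the bill,
    regardless of how mundane the task is."""
    return model.generate(f"Answer this customer's question: {message}")
```

The regulatory line, in other words, turns on how the reply is produced, not on whether the task poses any risk to minors.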
The bill also flags any chatbot with "emotional interaction" capabilities as a regulated AI companion. Since that term is left undefined in the legislation, nearly any conversational tool that responds warmly or adjusts its tone to the user could be swept in. The EFF writes: "If enacted, the GUARD Act won't just target a narrow category of risky chatbots. It would require companies to verify the age of every user — then block anyone under 18 from interacting with a huge range of online systems."
$100,000 Per Violation — How AI Regulation Penalizes Small Developers
Companies that fail to comply with this AI regulation face penalties of up to $100,000 per violation. That number reshapes the economics of every decision a company makes about its AI features — and the effect is predictable.
For large platforms — Google, Apple, Meta — compliance is expensive but manageable. They can build age-verification pipelines (systems that check a user's age before granting access to a service) and retain legal teams to navigate the bill's ambiguous language. For smaller developers — a two-person education startup, an independent mental health app, a nonprofit tutoring service — that math does not work. The legal exposure alone forces a stark choice: shut down the AI feature entirely, restrict it to pre-scripted responses just to escape the law's scope, or block all users under 18 without trying to distinguish harmful chatbots from homework helpers.
The EFF calls this a structural market-consolidation mechanism (a built-in competitive advantage for large companies that smaller competitors cannot afford to replicate): "Faced with steep penalties and unclear boundaries, companies are unlikely to take chances on letting young people use their online tools. They'll block minors entirely or strip their tools down to something less useful for everyone."
The practical outcome: the bill's stated goal of protecting vulnerable youth from harmful AI companions gets replaced by a blanket ban on all minors across all AI-powered services, enforced by legal risk rather than intentional design. The losers are not just teenagers. Adults without current government ID face exclusion too.
The Government ID Problem: Age Verification at Internet Scale
"Reasonable age verification" under the GUARD Act likely means one of three things: submitting a government-issued ID, using a third-party age-checking service, or biometric verification (facial age estimation or similar technology that reads physical features to confirm identity). Each option creates serious risks for ordinary users.
Government IDs are not a clean solution. The EFF notes that millions of Americans carry outdated information on their government documents — old addresses, expired licenses, names changed through marriage or other life events. These users would face verification failures and access denials on services they currently use without restriction. There is no mechanism in the bill to address this gap.
Third-party age-verification databases have proven to be high-value targets for data breaches (incidents in which private personal information is stolen and publicly exposed). A centralized database linking government IDs to online service usage is enormously valuable to malicious actors — and enormously dangerous if compromised. Past age-verification deployments in the UK and Australia were breached within months of going live.
This is the core structural concern the EFF is raising: to block minors from a narrow category of genuinely risky chatbots, the GUARD Act builds the technological and legal framework for mandatory identity verification across the entire internet. Once that infrastructure exists, it can be repurposed by government agencies, acquired by data brokers (companies that purchase and resell personal information for profit), or infiltrated by adversaries. A child-safety rationale creates a mass surveillance architecture as a side effect — and the side effect may outlast the original rationale by decades.
If you want to understand how AI automation tools work before legislation changes access to them, the AI automation learning guides at aiforautomation.io cover the core concepts.
Section 702 — The Surveillance Bill Moving in Parallel
The GUARD Act is not the only digital rights concern advancing through Congress this week. The EFF is simultaneously raising alarms about Speaker Johnson's Foreign Intelligence Accountability Act, a proposed reform of Section 702 of FISA (the Foreign Intelligence Surveillance Act, the law governing how U.S. intelligence agencies conduct surveillance for foreign intelligence purposes). Section 702 authorizes the FBI to access communications collected "incidentally" (as a side effect of surveilling foreign targets), which in practice means accessing Americans' private messages without obtaining a warrant.
Civil liberties advocates have spent years demanding one specific reform: require the FBI to obtain a warrant before querying the database of Americans' communications. The new bill provides no such requirement. Instead, it mandates that a civil liberties officer at the Office of the Director of National Intelligence (ODNI) review FBI queries after the surveillance has already occurred. The search happens first. The review — conducted internally by the intelligence community itself, with no independent judicial oversight — comes later.
"It's bad enough to let the intelligence community police itself, and what's more, the assessment for illegality would be made after a U.S. person has already been spied on."
— Electronic Frontier Foundation on the Foreign Intelligence Accountability Act
Current law already formally prohibits targeting Americans under Section 702. The operational loophole is "incidental collection" — the legal framework permits surveillance of foreign subjects while allowing simultaneous capture of any American's communications with those subjects. The new reform bill acknowledges this practice exists but proposes no mechanism to stop it: no warrant gate, no new transparency requirements, and no new rights for Americans whose private messages are swept up and stored in government databases.
Two AI and Surveillance Bills, One Pattern — What You Can Do Before the Vote
What connects the GUARD Act and the Section 702 reform is a recurring pattern in federal legislation: using a legitimate concern — child safety, national security — as the stated rationale for surveillance infrastructure that extends far beyond the original problem. The GUARD Act uses protecting minors as the entry point for mandatory identity verification at internet scale. The Section 702 reform uses intelligence accountability as the framing for reauthorizing warrantless access to Americans' communications without meaningful new limits.
In both cases, the EFF argues, Congress is choosing the broadest available mechanism when narrower, more targeted reforms would address the actual problem. Targeted enforcement of existing consumer protection laws, transparent design standards for AI products aimed at minors, and specific safeguards against documented bad actors could protect children without building a national identity verification layer for the entire internet. The EFF is direct: "Young people — and all people — deserve protection from genuinely harmful products. But this bill doesn't do that."
The GUARD Act vote is expected this week. If you use AI tools for work, school, or daily tasks, the EFF's action center lets you contact your representative directly before the bill advances. Read the full EFF analysis of the GUARD Act and their critique of the Section 702 reform for the complete picture. The window to shape both bills is narrow.