Grok made 3M deepfakes — and the new US law can't stop it
Congress's new deepfake law is likely unconstitutional. A 135-year-old legal doctrine may be the real solution — and platforms should be worried.
In the summer of 2024, Taylor Swift was confronted with a video of herself endorsing Donald Trump for president — a video she never made, never authorized, and never would have agreed to. The clip spread across social media before her team could respond. By the time she publicly denied it, millions of people had already seen it.
In Baltimore, a school administrator nearly lost their career after an audio deepfake — a synthetic recording generated by AI — circulated with racist slurs inserted into their voice. The recording was fake. The damage to their reputation was real.
In Maine, the state's governor was targeted by a deepfake video that falsely depicted her administering hormones to minors. It was fabricated. It went viral anyway.
These are not isolated edge cases. They are symptoms of an industrial-scale crisis. Grok, the AI model built by Elon Musk's xAI, generated more than 3 million nonconsensual sexualized images — including tens of thousands depicting children. Three million. Not three thousand. Three million.
Congress responded in April 2025 with near-unanimous passage of the TAKE IT DOWN Act, which President Trump signed into law on May 19, 2025. The law is the first federal statute specifically targeting nonconsensual intimate deepfakes. And according to a growing chorus of legal scholars, digital-rights organizations, and constitutional experts, it will probably not survive a First Amendment challenge.
But there is an alternative — one that predates the internet, predates television, and predates film. A legal doctrine born in the early days of photography may be the most powerful tool we have against the AI deepfake crisis.
The Law Congress Passed — and Why It May Already Be Broken
The TAKE IT DOWN Act — which stands for Tools to Address Known Exploitation by Immobilizing Technological Deepfakes On Websites and Networks — does two specific things. First, it criminalizes the publication of NCII (nonconsensual intimate images — that is, sexual or nude images of a real person published without their consent, whether real or AI-generated). Offenders can face up to 2 years in federal prison, or up to 3 years when the victim is a minor. Second, it requires covered platforms (social media sites, video platforms, and similar services) to remove flagged NCII content within 48 hours of receiving a valid complaint.
The platform compliance deadline is May 19, 2026 — exactly one year after the Act was signed. Platforms that fail to meet this deadline will face federal liability.
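To make the compliance mechanics concrete, here is a minimal sketch of the deadline arithmetic a takedown pipeline has to track. The complaint record and its field names are hypothetical illustrations, not anything prescribed by the statute or drawn from a real platform's system.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Statutory removal window: 48 hours from receipt of a valid complaint.
REMOVAL_WINDOW = timedelta(hours=48)

@dataclass
class TakedownComplaint:
    """Hypothetical complaint record; field names are illustrative."""
    content_id: str
    received_at: datetime               # when the valid complaint arrived
    removed_at: datetime | None = None  # set once the content comes down

    @property
    def deadline(self) -> datetime:
        # The 48-hour clock starts at receipt of the complaint.
        return self.received_at + REMOVAL_WINDOW

    def is_overdue(self, now: datetime) -> bool:
        # Out of compliance: the window has elapsed and the content is still up.
        return self.removed_at is None and now > self.deadline

# A complaint received 50 hours ago, still unresolved, is overdue.
now = datetime.now(timezone.utc)
c = TakedownComplaint("upload-123", received_at=now - timedelta(hours=50))
print(c.deadline.isoformat(), c.is_overdue(now))  # prints the deadline, True
```

The deadline math is trivial. The hard parts are validating complaints and acting at platform scale — and, as critics argue below, doing so without over-removing lawful speech.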
On the surface, this sounds like exactly what victims need. But legal analysts from Skadden and Hogan Lovells note that the Act's scope is extremely narrow. It covers only intimate images. It does not cover political deepfakes like the Taylor Swift election video. It does not cover defamatory audio deepfakes like the one that targeted the Baltimore educator. It does not cover the Maine governor's fabricated video. Three of the most widely reported deepfake harms of the past two years fall entirely outside the law's reach.
More critically, organizations including the Electronic Frontier Foundation (EFF), the Center for Democracy and Technology (CDT), and the Authors Guild have warned that the Act is likely unconstitutional. Their core argument: the law is overbroad, meaning it sweeps in protected speech alongside unprotected speech — a fatal flaw under the First Amendment, the constitutional guarantee of free speech and a free press. The Act contains no meaningful carve-out for satire, commentary, journalism, or artistic expression — categories that courts have consistently treated as constitutionally protected even when they involve real people's likenesses.
As Fisher Phillips notes, the 48-hour takedown window also creates practical pressure on platforms to over-remove content — including legitimate speech — to avoid liability, a chilling effect that courts have repeatedly found constitutionally problematic.
A 135-Year-Old Camera Law That Could Actually Work
In 1890, two Boston lawyers named Samuel Warren and Louis Brandeis published a paper in the Harvard Law Review titled "The Right to Privacy." It is one of the most cited legal articles in American history. What prompted it? The portable camera — specifically, the Kodak box camera, introduced in 1888 — had just made it possible for ordinary people to photograph other ordinary people without their consent, in public, and sell or publish those images. Warren and Brandeis argued that existing law was inadequate for this new technological threat and that courts needed to recognize a legal right to control one's own image.
Sound familiar?
That paper seeded what became the right of publicity — a legal doctrine that gives every person the exclusive right to control commercial or harmful uses of their name, image, likeness, and voice. Think of it as intellectual property (IP) for your own identity: just as a musician can sue over unauthorized use of a song, you can sue over unauthorized use of your likeness. Today, more than 30 US states recognize some form of publicity rights, and the doctrine has been used to protect everyone from Elvis Presley's estate to anonymous private citizens.
Legal scholar Michael Goodyear, writing for Lawfare, argues that the right of publicity is not just a viable alternative to the TAKE IT DOWN Act — it is a superior one, for three interconnected reasons.
First, it covers everything the TAKE IT DOWN Act does not. Publicity rights apply to all unauthorized uses of a person's likeness: sexual, political, commercial, and defamatory. Taylor Swift's election deepfake? Covered. The Baltimore audio fabrication? Covered. The Maine governor video? Covered. The 3 million Grok-generated sexualized images? Covered. The TAKE IT DOWN Act addresses only one of these categories.
Second, it creates platform liability in a way the TAKE IT DOWN Act cannot. This requires understanding Section 230 — the federal law (specifically, Section 230 of the Communications Decency Act of 1996) that generally shields social media companies and internet platforms from lawsuits over content posted by their users. Under Section 230, if a user posts a defamatory deepfake of you on a platform, you generally cannot sue the platform — only the user. This has historically made platforms nearly litigation-proof for user-generated harms. However — and this is the key legal insight — Section 230 contains an explicit exception: it does not apply to intellectual property (IP) claims. Because the right of publicity is classified as an IP protection, platforms that host deepfakes of real people could lose their Section 230 shield entirely. As Bloomberg Law reports, this IP carve-out is already creating significant legal exposure for platforms hosting AI-generated celebrity likenesses.
Third, it is constitutionally durable. Unlike the TAKE IT DOWN Act's blanket criminalization approach, the right of publicity has First Amendment defenses baked directly into its existing framework: courts applying publicity rights already know how to balance the right against free expression, and they treat satire, parody, commentary, and journalism involving real people as constitutionally protected even against publicity claims. A Saturday Night Live parody of a politician is protected. A deepfake pornographic video of that same politician is not. The doctrine draws this line through 135 years of case law. The TAKE IT DOWN Act, by contrast, tries to draw the line through a criminal statute with no history of constitutional testing.
The Platform Liability Gap: What Happens If the Law Falls
Here is the practical problem. The TAKE IT DOWN Act's platform compliance deadline is May 19, 2026. Right now, platforms are building content moderation systems, complaint intake processes, and 48-hour removal pipelines to meet that deadline. If a federal court strikes down the Act — which legal experts consider a genuine possibility, not a remote one — those platforms will have no functioning federal enforcement mechanism for deepfake content. Victims will have no federal remedy. The harm will continue at industrial scale.
The right of publicity, by contrast, does not depend on any single federal statute surviving a constitutional challenge. It exists in state law across more than 30 jurisdictions. It has survived legal challenges for over a century. And critically, it creates incentives that the TAKE IT DOWN Act does not: because platforms could face direct IP liability for hosting deepfake content (bypassing Section 230), they have a financial reason to act proactively rather than reactively.
This is the same economic logic that has made copyright enforcement so effective online. Platforms invest heavily in systems like YouTube's Content ID not because they are legally required to, but because the alternative — direct copyright liability — is financially catastrophic. Extending IP-style liability to deepfakes through publicity rights would create identical incentives.
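To illustrate the kind of proactive screening those incentives buy, here is a minimal sketch of perceptual-hash matching, the rough idea behind fingerprinting systems like Content ID. The registry, hash values, and distance threshold below are all hypothetical; production systems use far more sophisticated audio and video fingerprints.

```python
# Minimal sketch of Content-ID-style matching: compare a new upload's
# 64-bit perceptual hash against a registry of hashes for content
# already flagged as nonconsensual. All values here are hypothetical.

FLAGGED_HASHES = {
    0xD1C4E9A07B3F5512: "complaint-0042",
    0x8F00AA13C77E2B90: "complaint-0107",
}

MAX_HAMMING_DISTANCE = 6  # tolerance for re-encodes, crops, watermarks

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two 64-bit hashes."""
    return (a ^ b).bit_count()

def find_match(upload_hash: int) -> str | None:
    """Return the complaint ID of the closest flagged hash,
    if any is within the distance threshold."""
    best = min(FLAGGED_HASHES, key=lambda h: hamming(h, upload_hash))
    if hamming(best, upload_hash) <= MAX_HAMMING_DISTANCE:
        return FLAGGED_HASHES[best]
    return None

# A near-duplicate (two bits flipped) still matches its source complaint.
variant = 0xD1C4E9A07B3F5512 ^ 0b101
print(find_match(variant))  # -> "complaint-0042"
```

The economic point survives the simplification: once hosting a match creates direct liability, scanning at upload time is cheaper than litigating afterward.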
What Would a Right-of-Publicity Framework Actually Look Like?
Goodyear and other scholars argue that Congress should enact a federal right of publicity statute — one that standardizes protections currently scattered across more than 30 different state laws, closes gaps for states that do not yet recognize the right, and explicitly extends coverage to AI-generated likenesses. Such a law would:
- Give every American a federally recognized right to control AI-generated uses of their name, image, likeness, and voice
- Create a private right of action (meaning individuals could sue directly, without waiting for criminal prosecution) against both the creator of a deepfake and the platform that hosts it
- Strip Section 230 immunity from platforms that knowingly host nonconsensual deepfakes, using the existing IP exception
- Preserve robust First Amendment defenses for satire, journalism, commentary, and artistic expression — the categories that legitimate creative work depends on
- Apply retroactively to existing deepfakes, not just future ones
The Warren-Brandeis parallel is instructive here. When Warren and Brandeis published their 1890 paper, courts initially resisted their arguments. It took decades for privacy law to develop the robust protections we now take for granted. But the doctrinal foundation they laid — the idea that technology-enabled violations of personal dignity require legal remedies — eventually reshaped American law. The AI moment we are living through is, as Goodyear argues, an equivalent watershed. The question is whether we wait decades again, or move faster this time.
The Stakes Are Not Abstract
Taylor Swift had the platform, the lawyers, and the public profile to fight back against her deepfake. Most victims do not. The Baltimore educator whose voice was stolen by AI had none of those resources. The Maine governor, as a public figure, had some — but the video still spread to hundreds of thousands of people before it was debunked.
Grok's 3 million nonconsensual images did not target celebrities. The vast majority targeted ordinary private citizens — people with no legal team, no press office, and no way to reach the audiences that saw the fabricated images of them. For these victims, the TAKE IT DOWN Act offers a narrow remedy that may never survive a court challenge. The right of publicity, properly updated and federalized, offers something more durable: a legal framework that treats your face, your voice, and your likeness as yours — not as raw material for AI systems to exploit.
In 1888, Louis Brandeis watched the portable camera arrive and recognized that the law had not caught up to the technology. He and Samuel Warren wrote the paper that eventually changed that. We are watching the same moment unfold now, at a scale Brandeis could never have imagined: not a few unauthorized photographs, but 3 million synthetic images generated by a single AI system in a matter of weeks.
The law Congress passed is narrow, probably unconstitutional, and already under fire. The doctrine that could actually work is 135 years old. The race between harm and remedy is already underway.
Sources:
- Lawfare — Kodak to Deepfakes: Publicity Rights and Abuse of Our Likenesses
- Wikipedia — TAKE IT DOWN Act
- Skadden — TAKE IT DOWN Act Legal Analysis
- Hogan Lovells — TAKE IT DOWN Act Overview
- Fisher Phillips — New Federal AI Deepfake Law
- Bloomberg Law — AI Celebrity Deepfakes and State Publicity Laws