Teens sue Elon Musk's xAI after Grok generated abuse images of them
Three Tennessee teenagers filed a class-action lawsuit against xAI, alleging Grok's AI technology was used to create 23,000+ sexualized deepfake images of minors in just 11 days.
Three teenage girls from Tennessee just sued Elon Musk's AI company xAI, claiming its chatbot Grok powered an app that turned their school yearbook photos into sexually explicit deepfake images. The class-action lawsuit, filed in a California federal court, alleges that xAI knowingly licensed its AI technology to third-party developers — some operating outside the United States — without safeguards to prevent child sexual abuse material (CSAM) from being created.
The numbers behind this case are staggering. According to a study by the Center for Countering Digital Hate, Grok generated an estimated 23,338 sexualized images of children in just 11 days — roughly one every 41 seconds. A separate New York Times review found 4.4 million images were generated in nine days, of which 1.8 million were sexualized depictions of women.
How Yearbook Photos Became Deepfake Abuse Material
The lawsuit describes a chilling chain of events. A man in Tennessee collected photos of at least 21 underage girls — the three plaintiffs plus 18 others — from yearbooks and social media accounts. He then used an unnamed third-party app powered by xAI's Grok technology to digitally remove the girls' clothing and generate realistic nude content.
One video allegedly depicted a plaintiff "undressing until she was entirely nude." The deepfakes carried no AI-generation labels or watermarks and were realistic enough that victims initially believed the images were authentic. The material was then traded across Discord servers and Telegram channels, where one plaintiff recognized at least 18 other girls from her school.
The perpetrator was eventually arrested — but by then, the images had already spread across multiple platforms.
Why xAI Is Being Held Responsible
Unlike competitors such as Google and OpenAI, which add visible watermarks to AI-generated images, xAI has not adopted this practice. The lawsuit argues that Grok's permissive approach to explicit content — marketed through its "Spicy Mode," a setting that lets users make requests other chatbots refuse — was a deliberate business strategy to drive user growth.
The legal argument: A system designed to generate sexualized adult content cannot reliably prevent the creation of child sexual abuse material. By licensing its technology to developers with minimal oversight, xAI created a pipeline for abuse.
The plaintiffs filed 13 counts against xAI, including:
- Violations of Masha's Law — a federal statute that lets victims of child pornography sue for at least $150,000 per violation
- Violations of the Trafficking Victims Protection Act
- Violations of California's Unfair Competition Law
- Claims of negligence, intentional infliction of emotional distress, and public nuisance
The plaintiffs are also seeking disgorgement of xAI's revenues, punitive damages, and a permanent injunction blocking the company from continuing the practices described in the suit.
A Growing Wave of Legal Action
This isn't the first lawsuit xAI has faced over Grok-generated images. The timeline tells its own story:
- December 20, 2025: Musk announced Grok could generate and edit images on X
- January 14, 2026: Musk claimed he was "not aware of any naked underage images"
- January 15: First lawsuit filed (by content creator Ashley St. Clair)
- January 23: Second class-action suit filed
- March 16: Third class-action lawsuit — the Tennessee teens case
xAI now faces investigations from California's Attorney General, 35 state attorneys general, and European regulators. In response, Musk announced company layoffs, saying xAI requires a "rebuild from the foundations."
The Broader Crisis Nobody's Solving
Attorney Vanessa Baehr-Jones, representing the plaintiffs, said the goal is to make generating this kind of content "a business decision that does not make any business sense anymore."
The problem extends far beyond xAI. AI-generated deepfake abuse material is growing exponentially, and there's no universal system to prevent it. As Imran Ahmed of the Center for Countering Digital Hate put it: "We have no mechanisms for holding accountable platforms that are incredibly resistant to taking responsibility when their platforms cause harm."
For anyone with children online, this case raises an uncomfortable question: if a yearbook photo or an Instagram post can be turned into realistic abuse material in seconds, who is responsible for stopping it?
xAI has not responded to requests for comment.