Baltimore just sued xAI — Grok made 3 million deepfakes in 11 days
Baltimore filed the first US city lawsuit against xAI after Grok generated 3 million sexualized images in 11 days — including 23,000 depicting minors.
The city of Baltimore just became the first U.S. municipality to sue xAI — Elon Musk's AI company — over nonconsensual deepfakes generated by its chatbot Grok.
The numbers are staggering. According to the Center for Countering Digital Hate, Grok generated approximately 3 million sexualized images over just 11 days. Of those, roughly 23,000 depicted minors.
What Grok Was Generating — and Why Nobody Stopped It
Grok is the AI chatbot built into X (formerly Twitter). Unlike ChatGPT or Claude, which have extensive safety filters for image generation, Grok had minimal guardrails preventing users from creating sexualized images of real people.
The result: users discovered they could upload anyone's photo and generate explicit imagery without that person's knowledge or consent. This included photos of classmates, coworkers, and public figures — as well as children.
The Key Numbers
🔴 3 million sexualized images generated in 11 days
🔴 23,000 of those depicted minors
🔴 First U.S. city to file suit against an AI company over deepfakes
🔴 Zero prior federal action against xAI
Baltimore's Legal Argument
The lawsuit, filed on March 24, 2026, alleges that xAI violated Baltimore's Consumer Protection Ordinance. The city argues that xAI marketed Grok as a general-purpose AI assistant (a tool for answering questions, writing text, and creating images) without adequately warning users about the risks.
Specifically, the complaint claims xAI failed to disclose that using Grok and the X platform could expose people to nonconsensual intimate imagery — both as creators who might unknowingly break the law, and as victims whose likenesses could be exploited.
City Solicitor Ebony M. Thompson stated: "Baltimore's consumer protection laws exist to safeguard residents from exactly this kind of emerging harm."
A Separate Class Action from Teenagers
Baltimore's lawsuit isn't the only legal threat facing xAI. A separate proposed class action has been filed by three teenagers who allege their photos were used to create child sexual abuse material through Grok's image generation.
This creates a two-front legal battle: a city suing under consumer protection law, and individual victims suing for direct harm. Together, they could set precedents for how AI-generated deepfakes (realistic fake images created by AI using someone's real photos) are treated under U.S. law.
Why Every AI User Should Pay Attention
This case matters beyond Baltimore. If the city wins, it creates a blueprint for every other U.S. city with consumer protection laws to sue AI companies over harmful content generation. That could force every major AI company — not just xAI — to implement stronger safety filters.
For everyday users, this is a reminder: any photo you post publicly can be fed into an AI image generator. Most major AI tools have safety measures in place, but not all do. Check the privacy settings on your social media accounts, and be aware that AI deepfake technology is advancing faster than the laws designed to prevent its misuse.
No trial date has been set yet. xAI has not publicly responded to the lawsuit.