A judge just ruled your AI chats aren't private
A federal judge has ruled that conversations with AI assistants like ChatGPT and Claude can be seized and used in court — and pasting privileged information into them may permanently waive your legal protections.
If you've ever pasted sensitive information into ChatGPT or Claude, a new federal court ruling says that conversation isn't private — and it could be used against you in court.
In United States v. Heppner, Judge Jed Rakoff of the Southern District of New York ruled that 31 documents a defendant created using Anthropic's Claude had no legal protection whatsoever. The ruling is the first of its kind — and its implications reach far beyond one fraud case.
The case that changed AI privacy
Bradley Heppner, a Dallas financial executive charged with securities and wire fraud, received a grand jury subpoena and learned he was under investigation. Before hiring a lawyer, he turned to Claude — Anthropic's AI assistant — to analyze his legal exposure and build a defense strategy.
He typed in case details, including privileged information he later received from his defense counsel, generated 31 documents of prompts and responses, and handed those documents to his lawyers. The FBI seized them during a search. Heppner argued they were protected by attorney-client privilege (the legal rule that keeps conversations between you and your lawyer secret).
Judge Rakoff disagreed — on every count.
The three reasons your AI chats aren't protected
1. AI isn't a lawyer. Claude has no law license, owes no duty of loyalty, and can't form an attorney-client relationship. Claude itself states: "I'm not a lawyer and can't provide formal legal advice."
2. There's no confidentiality. Anthropic's privacy policy says it collects your inputs and outputs, can use them to train the model, and may share data with "governmental regulatory authorities."
3. You can't get legal advice from a tool that says it doesn't give legal advice. Anthropic's own materials say Claude follows the principle of choosing "the response that least gives the impression of giving specific legal advice."
The scariest part: privilege waiver
Here's what should alarm everyone who uses AI at work. The judge ruled that inputting already-privileged information into a consumer AI tool may destroy the privilege over the original documents too.
In other words: if your lawyer tells you something confidential and you paste it into ChatGPT to "think it through," you may have just made that confidential communication fair game for the other side in a lawsuit. As Judge Rakoff wrote, sharing information with Claude is "just as if he had shared it with any other third party."
But another judge said the opposite
Here's where it gets complicated. Just one week earlier, in Warner v. Gilbarco, a federal judge in Michigan ruled the opposite — that documents a person created using AI to prepare for a discrimination lawsuit were protected as "work product" (materials prepared in anticipation of litigation).
So right now in America, two federal courts have issued directly contradictory rulings on whether your AI conversations are legally protected. Which rule applies may depend on which court hears your case.
What this means if you use AI at work
If you're an employee: Never paste attorney-client communications, legal strategies, or sensitive internal documents into free AI tools. What you type into ChatGPT, Claude, or Gemini could be discovered in litigation.
If you run a business: Update your AI usage policies immediately. Establish clear rules about what employees can and cannot input into AI tools, especially during investigations or litigation.
If you're already using AI for legal questions: Enterprise AI platforms (like Claude for Enterprise or ChatGPT Enterprise) offer contractual confidentiality guarantees and don't use your data for training. Judge Rakoff hinted this distinction might matter — but no court has tested it yet.
Where this is heading
Legal commentators at major law firms are calling this a watershed moment. With two courts split, the issue will almost certainly reach an appeals court — and possibly the Supreme Court.
Until then, the safest assumption is simple: anything you type into a free AI tool is not private. Treat ChatGPT, Claude, and Gemini like you'd treat a conversation in a crowded coffee shop — assume someone is listening.
As one Jones Walker analysis put it: "Your AI Conversations Are Not Privileged."