ChatGPT Lawsuits: OpenAI's Trusted Contact Won't Read Your Chats
Three families have sued OpenAI for wrongful death. Now ChatGPT has Trusted Contact, a feature that alerts someone you choose but never shows them your chats. Here's what it misses.
OpenAI this week launched a new safety feature for ChatGPT — and the timing reveals everything. The company currently faces multiple wrongful-death lawsuits (civil court cases where families allege ChatGPT contributed to the deaths of their loved ones) and a formal state investigation in Florida examining the chatbot's "links to criminal behavior," including alleged encouragement of suicide and self-harm. OpenAI's answer — a feature called Trusted Contact — notifies someone you designate when a crisis is flagged, but experts say it doesn't go far enough.
The feature lets you designate one adult (someone 18 or older) to receive an alert if ChatGPT flags a conversation involving a mental health crisis. The detail that matters most: the contact never sees what you wrote. No transcripts. No summaries. Just a notification that you may need support.
ChatGPT Wrongful-Death Lawsuits: The Legal Pressure Behind This Launch
OpenAI's Trusted Contact didn't emerge from a product roadmap. It came from courtrooms and the offices of state attorneys general.
At least three families have filed wrongful-death lawsuits against OpenAI, submitting detailed conversation logs as evidence. The families allege that ChatGPT — either through explicit encouragement or by failing to redirect users toward crisis resources — played a role in their loved ones' deaths. These cases are ongoing and have drawn significant media and regulatory attention in the first half of 2026.
Simultaneously, Florida's attorney general launched a formal investigation into ChatGPT for what the state describes as its "links to criminal behavior," with suicide and self-harm listed specifically among the stated concerns. The investigation carries real legal weight: Florida can subpoena OpenAI's internal documents, training data decisions, and response policy records.
OpenAI's decision to partner with the American Psychological Association (APA — the largest organization of professional psychologists in the United States, representing over 146,000 members) and its own Expert Council on Well-Being and AI signals the company is treating this as both a reputational and legal crisis, not just a product gap.
The legal context extends further: Character.AI and other chatbot platforms have faced similar wrongful-death suits. Courts across the U.S. are now actively deciding whether AI companies bear legal responsibility for the words their models generate — a question with no settled answer as of 2026.
What Trusted Contact Does — and Deliberately Does Not Do
Here's how the feature works, from setup to alert:
- Open ChatGPT Settings and find the Trusted Contact form
- Enter the email address or phone number of an adult (must be 18 or older)
- They receive an invitation, which expires after one week if it isn't accepted
- OpenAI's specialized review team (staff trained specifically to evaluate sensitive conversations) monitors flagged exchanges
- If a conversation triggers a flag, the contact receives a notification — not a transcript, just an alert that you may need support
What the feature deliberately does not share with the designated contact:
- Chat transcripts or any portion of your conversation
- Which specific message or response triggered the alert
- Details about what ChatGPT said in its reply
- Any severity rating, diagnosis, or clinical assessment
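To make that privacy boundary concrete, here is a minimal sketch of what an alert payload along these lines could contain. OpenAI has not published its implementation, so the structure and field names below are hypothetical; the point is simply that the notification carries a signal, never content.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical payload, illustrating the privacy boundary OpenAI describes.
# The alert tells the trusted contact that support may be needed; it
# deliberately carries no transcript, trigger message, or severity score.

@dataclass(frozen=True)
class TrustedContactAlert:
    user_display_name: str  # who may need support
    sent_at: datetime       # when the alert went out
    message: str            # fixed, generic wording

def build_alert(user_display_name: str) -> TrustedContactAlert:
    """Build the only data a trusted contact would ever receive.

    Note what is absent: no chat transcript, no excerpt of the message
    that triggered the flag, no model reply, no clinical assessment.
    """
    return TrustedContactAlert(
        user_display_name=user_display_name,
        sent_at=datetime.now(timezone.utc),
        message=(
            f"{user_display_name} may need support right now. "
            "Consider reaching out to them."
        ),
    )

if __name__ == "__main__":
    print(build_alert("Alex"))
```

Keeping the payload this small is the design choice: the contact learns that something happened, not what was said.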
Dr. Arthur Evans, CEO of the APA, described the intent: "Helping people identify a trusted person in advance, while preserving their choice and autonomy, can make it easier to reach out to real-world support when it matters most."
OpenAI added: "Our goal is to ensure that AI systems do not exist in isolation. Instead they should help connect people to the real-world care, relationships, and resources that matter most."
Why Mental Health Experts Say It's an Incomplete Fix
The Trusted Contact feature adds a real-world human layer to a digital conversation — that's genuinely useful. But four structural problems remain unaddressed.
It doesn't change what ChatGPT says. The wrongful-death lawsuits allege the chatbot gave harmful, specific responses when it shouldn't have. Trusted Contact activates after the conversation occurs. It can notify someone that a concerning exchange happened — but it doesn't prevent the exchange in the first place.
The opt-in problem is real. People in active mental health crisis are the least likely to have configured a contact proactively. The feature requires deliberate setup during a calm, stable moment — the exact opposite of when vulnerable users actually need the protection.
Detection accuracy is unverified. OpenAI has not disclosed the size, training methodology, or accuracy rate of its "specially trained" human review team. The rate of false negatives (crisis conversations the system fails to flag) is unknown — and each missed case means no notification goes out.
One contact, no escalation path. The system notifies one designated person and sends one alert. If that contact doesn't respond, there's no fallback — no automatic connection to 988 (the U.S. Suicide and Crisis Lifeline, a free 24/7 crisis service), and no integration with emergency services.
For comparison, dedicated mental health platforms such as Crisis Text Line operate under multi-tier escalation protocols, with licensed clinicians in the loop and mandatory reporting requirements in certain cases. ChatGPT is used as an emotional outlet by tens of millions of users but currently lacks those safeguards by design.
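To see what a fuller escalation path could look like, here is a minimal sketch of tiered escalation logic. It does not describe how Crisis Text Line or OpenAI actually operate; the tiers and handlers are assumptions, included only to show the fallback structure that Trusted Contact currently lacks.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical escalation ladder: each tier is tried in order until one
# succeeds. Trusted Contact today implements only the first rung, with
# no fallback if the contact never responds.

@dataclass
class Tier:
    name: str
    notify: Callable[[], bool]  # returns True if this tier handled the alert

def escalate(tiers: list[Tier]) -> Optional[str]:
    """Walk the ladder top to bottom; return the name of the tier that responded."""
    for tier in tiers:
        if tier.notify():
            return tier.name
    return None  # every rung failed: the gap critics point to

def notify_trusted_contact() -> bool:
    # Tier 1: the single designated adult (the current feature).
    return False  # simulate a contact who never sees the alert

def notify_on_call_clinician() -> bool:
    # Tier 2 (hypothetical): a licensed clinician reviews the flag.
    return False

def refer_to_988() -> bool:
    # Tier 3 (hypothetical): surface 988, the U.S. Suicide and
    # Crisis Lifeline, directly to the user.
    return True

if __name__ == "__main__":
    handled_by = escalate([
        Tier("trusted contact", notify_trusted_contact),
        Tier("on-call clinician", notify_on_call_clinician),
        Tier("988 referral", refer_to_988),
    ])
    print(f"Alert handled by: {handled_by}")
```

In this sketch, an unanswered alert falls through to the next rung instead of disappearing, which is exactly the behavior a single-contact, single-alert system cannot provide.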
AI Liability: Who Is Responsible When ChatGPT Causes Harm?
The lawsuits against OpenAI are part of a broader legal reckoning forming across the AI industry. Courts are working through whether AI companies can be held liable under product liability law (the framework that holds manufacturers responsible when their products cause injury) or under negligence. A third question is Section 230 of the Communications Decency Act, the 1996 U.S. law that has traditionally shielded internet platforms from liability for user-generated content; courts are beginning to question whether it covers AI systems at all, since AI generates content rather than merely hosting it.
Florida's criminal investigation raises the exposure further. If the state finds that ChatGPT's responses constitute criminal behavior — or that OpenAI knowingly deployed a system with documented harmful outputs — the consequences extend well beyond civil damages.
OpenAI's APA collaboration creates a documented good-faith safety record, which matters for litigation defense. Whether courts, investigators, and grieving families find it sufficient is a question 2026's AI liability landscape has not yet answered.
If you use ChatGPT regularly — especially as a journaling tool, emotional support outlet, or mental health resource — you can activate Trusted Contact right now in under 2 minutes: Settings → Trusted Contact → enter an adult's email or phone number. It won't change what the model says to you. But it may mean someone who cares about you gets a call before things reach a point of no return.
For guidance on using AI tools responsibly and choosing the right platform for sensitive work, explore the AI for Automation learning guides — written for non-technical readers navigating the expanding AI landscape.