OpenAI $150B Trial: Sam Altman Testifies Against Musk
Sam Altman swore under oath that Musk tried to kill OpenAI twice. See what the $150B verdict means for ChatGPT, AI governance, and every OpenAI user.
Sam Altman took the witness stand this week in the OpenAI vs. Musk trial and answered the most explosive question in AI industry history: did Elon Musk's $150 billion lawsuit against OpenAI have any merit? In a single sentence delivered under oath, Altman reframed three weeks of OpenAI trial testimony — and potentially the outcome of the entire case.
OpenAI's $150 Billion Founding Mission Dispute: Three Weeks of Testimony
Elon Musk filed his lawsuit against OpenAI in 2024, accusing the company of betraying the founding mission he helped establish: developing artificial general intelligence (AGI — a hypothetical AI system capable of matching or exceeding human performance across virtually any intellectual task) for the benefit of all humanity, not for profit. OpenAI began as a nonprofit (an organization legally prohibited from prioritizing shareholder returns over its stated mission), but has since transitioned into a "capped-profit" structure — meaning investors can earn returns, but those returns are limited by the company's charter rules.
Musk wants three things: up to $150 billion in damages paid to OpenAI's nonprofit entity, the removal of both CEO Sam Altman and President Greg Brockman from leadership, and a full reversal of the company's for-profit shift. OpenAI's legal team has one answer for all three demands:
"This lawsuit has always been a baseless and jealous bid to derail a competitor." — OpenAI legal defense team
The "competitor" framing sits at the heart of OpenAI's defense. Musk launched xAI — maker of the Grok chatbot — in 2023, less than a year before filing suit. Musk's lawyers counter that his concerns predate xAI by years and trace back to his 2018 departure from OpenAI's board after a clash over control of the company's direction.
Altman on the Stand: 'He Tried to Kill It — Twice'
After two weeks of witness testimony painting Altman as dishonest and mission-driven in name only, the OpenAI CEO testified on Tuesday — and chose direct confrontation over diplomacy.
When the question of whether Altman and his team had "stolen" a charity from humanity surfaced, his response stopped the courtroom:
"We created, through a ton of hard work, this extremely large charity, and I agree you can't steal it. Mr. Musk did try to kill it, I guess. Twice." — Sam Altman, under oath
Altman didn't stop there. He described working alongside Musk in OpenAI's early years as structurally corrosive to research culture. According to his testimony, Musk required that researchers be ranked by their accomplishments — a forced performance-ranking system that Altman characterized as incompatible with effective scientific exploration. Musk also pushed for aggressive restructuring that would, in Altman's words, "take a chainsaw through a bunch" of staff.
The testimony delivered a precise counter-argument: if Musk's own management philosophy — forced rankings, aggressive layoffs, top-down control — would have dismantled the conditions a research lab needs to thrive, then his claim to represent the "true" mission of OpenAI collapses under its own logic.
"I don't think Mr. Musk understood how to run a good research lab," Altman told the court.
Nadella, Sutskever, and What Their Testimony Revealed
Altman's appearance closed a week packed with high-profile witnesses. Two testified before him:
- Satya Nadella (Microsoft CEO): Testified on Monday. Microsoft is OpenAI's largest commercial partner, having provided billions in cloud compute infrastructure critical to training GPT-4 — the large language model (an AI system trained on massive text datasets to generate human-like responses) that powers ChatGPT. One courtroom observer noted Nadella's testimony resembled an Xbox commercial more than a legal proceeding, suggesting the Microsoft angle focused on commercial terms rather than ideological history.
- Ilya Sutskever (former OpenAI CTO): Co-founded OpenAI alongside Altman and Musk, then departed in 2024 under circumstances that remain unexplained publicly. As the former chief technology officer (the executive directly responsible for building the actual AI systems), Sutskever's view of whether the company honored its founding intent carries exceptional weight — more than any outside observer could offer.
Nadella, Sutskever, and Altman — in that order — gave three distinct versions of what OpenAI was built to be. Musk's legal team now has to argue that all three are wrong, or that their combined testimony doesn't outweigh a founding document signed before commercial pressures existed.
The ChatGPT Death Lawsuit: A Parallel AI Safety Question
Running in a different courthouse but drawing from the same public conversation is a wrongful death lawsuit filed by the family of Sam Nelson, a 19-year-old who died from an accidental overdose. His family alleges that conversations with ChatGPT contributed to the circumstances of his death — and points to a specific technical event as evidence.
An April 2024 update to GPT-4o (OpenAI's multimodal AI model — meaning one capable of processing text, images, and audio simultaneously) reportedly modified how the system handled drug-related conversations, leading it to provide guidance it had previously refused. OpenAI says the issue was identified and corrected, but the incident highlights a known vulnerability in large language model (LLM) safety design: guardrails can shift unexpectedly with software updates, sometimes invisibly to users who have no way to audit what changed.
For anyone following the Musk trial, the timing carries an uncomfortable irony. The very safety failures Musk claims to fear are documented in a separate legal filing — against the same company he is suing for being too commercially pragmatic about AI development.
The AI Industry Didn't Wait for the Verdict
While the trial entered week 3, the AI industry made three significant product moves — suggesting the debate over OpenAI's founding mission has become largely academic to its competitors:
- Amazon launched Alexa for Shopping, a rebranded AI shopping assistant powered by a large language model, replacing the Rufus AI assistant previously embedded in the main Amazon.com app.
- Meta began testing a Threads feature allowing users to tag Meta AI directly in posts for answers and conversation context — mirroring how users on X already interact with Musk's own Grok chatbot. Users quickly discovered the Meta AI account cannot be blocked, prompting immediate privacy complaints.
- Google announced Gemini Intelligence, bundling Gemini AI features across Chrome for Android, predictive autofill, and third-party app integrations — accelerating platform-wide AI distribution at a scale that requires no founding mission document to justify.
The structural irony is hard to ignore: Musk filed suit to prevent OpenAI from becoming a commercial AI company. In the time that lawsuit has consumed, Amazon, Meta, and Google have each deployed AI at the same commercial scale he objects to — with no nonprofit charters in sight.
Three Possible Verdicts — and What Each Means for You
Legal analysts watching the proceedings outline three plausible outcomes, each with different consequences for anyone who uses AI tools professionally:
- Musk wins on damages: Considered highly unlikely by most legal commentators given the framing, but even a partial ruling in his favor could force OpenAI to revise its governance structure — potentially slowing product releases and enterprise partnerships for months.
- OpenAI wins outright: Validates the nonprofit-to-capped-profit transition as legally defensible. This outcome sets a precedent that other mission-driven AI organizations could follow when seeking commercial capital without abandoning their stated purpose.
- Out-of-court settlement: Widely considered the most probable outcome. A negotiated resolution might include governance concessions — independent board seats, mission compliance auditing, transparency requirements — without the $150 billion figure appearing in any final judgment.
Closing arguments are expected within the next 2 weeks. If you rely on ChatGPT, build products using OpenAI's models, or work at a company that does — this verdict directly affects every governance and product decision OpenAI makes going forward. Watch for the ruling, and explore our AI governance and tool guides to understand what ownership changes in AI actually mean for the software you use every day.
Related Content — Get Started | Guides | More News
Stay updated on AI news
Simple explanations of the latest AI developments