Musk Called Altman a Liar — $134B Trial Starts Now
Musk's $134B trial vs. OpenAI starts this week. Outcome could oust Altman, block the $850B IPO. AI agents failed 480+ real workplace tasks in new study.
The trial everyone in AI has been watching starts this week in a Northern California federal courthouse. Elon Musk is suing OpenAI and Microsoft for $134 billion, alleging that Sam Altman and Greg Brockman deceived him about OpenAI's nonprofit mission — then quietly turned the company into an $850-billion-plus commercial empire. The same week, two independent research studies are delivering uncomfortable news about what AI actually does in professional workplaces.
Three storylines converge at once: a courtroom fight that could unwind the world's most valuable AI company, research showing AI agents fail most real professional tasks, and an enterprise data crisis that blocks deployment before it starts. The AI industry's credibility gap has rarely been this visible — or this well-documented.
How a $38 Million Donation Became a $134 Billion Lawsuit
Musk co-founded OpenAI in 2015 with a $38 million donation. The promise, he alleges, was explicit: a nonprofit (an organization legally prohibited from distributing profits to owners or investors) dedicated to open-source AI development for humanity's benefit — not a product company competing for market share.
By 2017, Altman and Brockman had proposed converting OpenAI into a "capped-profit" for-profit subsidiary — a structure that allows investors to earn returns up to a defined multiple, with excess profits flowing back to the nonprofit parent. Musk left the board in 2018 after a power struggle over control and strategic direction. He says he was never informed this pivot was coming.
Today, OpenAI's valuation sits north of $850 billion in pre-IPO funding rounds, with a public offering planned for the end of 2026. Musk's central claim: he was deceived into donating his credibility and capital to seed what became a for-profit competitor to his own AI company, xAI, which together with SpaceX carries a combined $1.25 trillion valuation.
The legal standing question is actively contested. Northwestern Law professor Jill Horwitz notes: "The idea that Elon Musk can sue because he was a donor or used to be on the board is pretty puzzling." Nonprofit law, unlike trust law, typically does not grant donors the right to enforce how charitable organizations spend their money. OpenAI argues Musk has no standing; his team argues trust law applies. The judge allowed the case to proceed — a result that surprised several legal scholars.
Who Takes the Stand
Nine jurors will hear an advisory verdict — meaning their ruling guides but does not legally bind the judge — from a witness list that includes Musk, Altman, Brockman, Ilya Sutskever (OpenAI's former chief scientist, who left in 2024 to co-found rival lab Safe Superintelligence), Mira Murati (OpenAI's former CTO, who resigned in 2024), and Microsoft CEO Satya Nadella. Internal texts, board memos, and strategy documents from OpenAI's earliest years are expected to enter the public record — a rare transparency window into one of tech's most secretive companies.
Four Ways the Trial Could End — and What Each Means
- Musk wins damages: $134 billion redirected — potentially toward a new public-benefit AI organization
- Structural reversal: Courts order OpenAI to return to nonprofit status, unwinding billions in investor commitments
- Leadership removal: Altman and/or Brockman ousted from management positions
- IPO blocked: OpenAI's planned end-of-2026 public offering halted or indefinitely delayed
California and Delaware attorneys general approved OpenAI's for-profit restructuring in October 2025, with conditions requiring ongoing nonprofit safety oversight. UCLA School of Law philanthropy director Rose Chan Loui flagged the core unresolved question: "how much they can enforce it and how much transparency they get into OpenAI's work." The Musk trial could force a harder answer than regulators planned for.
OpenAI's spokesperson called the lawsuit "a baseless and jealous bid to derail a competitor." Musk responded on X that Altman "lies as easily as he breathes." Neither statement addresses the underlying legal question: what rights, if any, did a $38 million founding donation carry — and what happens when the mission it funded quietly disappears?
AI Agents Tested on 480+ Real Professional Tasks — Every Agent Failed Most of Them
While legal teams prepare arguments, Mercor — an AI-powered recruiting platform — published findings from one of the most rigorous independent evaluations of AI agents (autonomous software programs that take multi-step actions and decisions, not just generate text responses) conducted to date.
Researchers assigned 480-plus real workplace tasks to leading AI agents, drawn from three industries where vendors have made their strongest automation claims: banking, consulting, and law. The result was stark: every agent tested failed to complete the majority of its assigned tasks.
MIT Technology Review's analysis frames this as the "Step 2" problem — named after the Underpants Gnomes episode of South Park, where the business plan reads: Phase 1: collect underpants → Phase 2: ? → Phase 3: profit. AI companies have solved the technology (Phase 1) and promised economic transformation (Phase 3). The messy middle — workflow integration, exception handling, and judgment under ambiguity — remains largely unsolved.
The failure pattern follows a consistent logic. LLMs (large language models — the AI systems behind tools like ChatGPT, Claude, and Gemini) excel at well-scoped tasks with clear outputs: writing, coding, translation, summarization. They struggle with:
- Strategic judgment calls requiring institutional or relational context
- Multi-step workflows with ambiguous decision points mid-process
- Tasks where errors are not immediately visible and compound over time
- Any workflow requiring access to internal enterprise data — more on why, next
Anthropic published a separate study predicting which jobs LLMs will most disrupt: managers, architects, and media roles face the highest displacement risk. Groundskeepers, construction workers, and hospitality professionals are largely unaffected — their work requires physical presence, real-time environmental judgment, and embodied skills that text-based AI cannot replicate. The disruption pattern is consistent: AI transforms information work, not physical work. And even within information work, it disrupts tasks reducible to defined rules — not tasks requiring genuine judgment.
For a practical starting point on where AI tools actually deliver value today, see our AI automation beginner's guide.
The Enterprise Data Crisis Blocking AI Before It Starts
Even if AI agents eventually close the performance gap, most large organizations face a structural blocker that exists independently of AI capability: their data is fragmented across systems that do not communicate with each other.
Bavesh Patel, SVP at Databricks (a data infrastructure company that helps organizations build unified data pipelines — centralized systems where all organizational data flows in consistent, queryable formats that AI can actually use), stated the dependency directly: "the quality of that AI and how effective that AI is, is really dependent on information in your organization." Without that foundation, he warns, organizations produce "terrible AI" — tools that generate confident-sounding wrong answers because they are working from incomplete or outdated inputs.
Most large enterprises share the same fragmentation problem:
- Legacy databases and ERP systems (Enterprise Resource Planning — company-wide management software built decades before AI existed) that cannot export cleanly to modern tools
- Modern SaaS platforms — Salesforce, Workday, Slack, ServiceNow — each maintaining siloed, non-standardized data
- Unstructured files: PDFs, emails, spreadsheets, scanned documents spread across shared drives
- No unified schema (a standard data format defining how information is organized and connected) linking these sources together
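What "unified schema" means in practice can be shown in a few lines. The sketch below is purely illustrative — the source systems, field names, and records are hypothetical, not drawn from any vendor's actual formats — but it captures the core move: normalize each silo's records into one shared shape before any AI tool ever sees them.

```python
# Minimal sketch of schema unification. All system names and field
# names below are hypothetical, for illustration only.

def from_crm(record: dict) -> dict:
    """Normalize a CRM-style record into the shared schema."""
    return {
        "customer": record["acct_name"],
        "revenue_usd": float(record["rev_usd"]),
        "source": "crm",
    }

def from_erp(record: dict) -> dict:
    """Normalize a legacy ERP-style record into the same schema."""
    return {
        "customer": record["CUSTOMER_NM"].title(),
        "revenue_usd": record["REVENUE_CENTS"] / 100,
        "source": "erp",
    }

unified = [
    from_crm({"acct_name": "Acme Corp", "rev_usd": "1200.50"}),
    from_erp({"CUSTOMER_NM": "ACME CORP", "REVENUE_CENTS": 98_000}),
]

# One schema means one query works across every source system.
total = sum(row["revenue_usd"] for row in unified)
print(total)  # 2180.5
```

Real pipelines add deduplication, timestamps, and access controls on top of this, but the principle is the same: an AI agent can only query what has first been mapped into a consistent format.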
Rajan Padmanabhan of Infosys (a global IT services firm with over 300,000 employees, serving Fortune 500 clients across industries) frames enterprise AI evolution in three stages: systems of record (storing data), systems of engagement (retrieving it on request), and systems of action (autonomously initiating tasks based on it). The third stage — AI that proactively does work — requires unified data as a prerequisite. Most organizations are still building the prerequisite.
Patel's competitive framing is the sharpest argument for urgency: "the big competitive differentiator for most organizations is their own data." Companies that unify their data infrastructure now will deploy AI that works meaningfully better than competitors — not because they chose a superior model, but because their model has better, cleaner information to work from. That gap widens over time.
The $850 Billion Credibility Gap
Hold all three storylines together and a single picture emerges: the AI industry is under simultaneous pressure from three independent directions.
- OpenAI is valued at $850-billion-plus — and its co-founder is in federal court this week trying to dismantle it before it reaches Wall Street
- AI vendors promise to transform banking, law, and consulting — and an independent study shows agents failing those exact tasks at scale
- Enterprise leaders publicly commit to AI adoption — and privately admit their data infrastructure cannot support it yet
Investors betting on the AI sector are effectively making three simultaneous gambles: that the technology works as promised, that the legal structure of leading companies holds, and that organizations can actually deploy it at scale. All three of those bets are openly in question this week.
The most actionable signal from this convergence is uncomfortable but clarifying: vendor benchmarks and curated demos are not reliable evaluation criteria. The Mercor findings — 480-plus tasks, every agent failing most — came from real workflows, not cherry-picked showcases. OpenAI's research director Jakub Pachocki describes the company's goal as building "economically transformative technology." This week's evidence suggests the transformation timeline is longer, messier, and more legally contested than trillion-dollar valuations imply.
Watch the trial unfold. Read the independent research. And if you're evaluating AI tools for your work or organization, test them against your actual workflows — with your actual data, your actual edge cases, and your actual definition of success. That is the only standard that separates real transformation from a very expensive Step 2.
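That testing advice can be made concrete with a tiny evaluation harness. This is a minimal sketch, not a standard tool: `run_agent` is a placeholder for whatever AI product you are evaluating, and the tasks and pass criteria are hypothetical stand-ins for your own.

```python
# Minimal sketch of a workflow-grounded evaluation harness: score an
# AI tool against YOUR tasks and YOUR success criteria, not vendor
# benchmarks. `run_agent` is a placeholder for the tool under test.

def run_agent(task: str) -> str:
    # Replace this stub with a real call to the tool being evaluated.
    return "draft summary of " + task

# Each entry pairs a real task with a concrete, checkable pass test.
tasks = [
    ("summarize Q3 variance report", lambda out: "variance" in out),
    ("draft client follow-up email", lambda out: len(out) > 0),
]

passed = sum(check(run_agent(task)) for task, check in tasks)
print(f"{passed}/{len(tasks)} tasks passed")  # 2/2 tasks passed
```

The point of the exercise is the task list itself: if you cannot write a checkable pass condition for a workflow, you cannot know whether an agent handled it — which is exactly the gap the Mercor findings expose.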