Musk Demands $150B from OpenAI — Nadella Testifies May 11
Musk sues OpenAI for $150B, claiming Altman broke the nonprofit founding mission. Nadella testifies May 11 — his private emails are now court evidence.
When OpenAI was founded, its pitch was unusual for Silicon Valley: a nonprofit developing artificial general intelligence for the benefit of all humanity, with no profit motive. Elon Musk donated over $100 million and joined its board on that promise. Today, that promise is the central exhibit in one of the biggest AI governance lawsuits in history — and Microsoft CEO Satya Nadella is about to take the stand.
Musk filed suit against OpenAI in 2024, demanding up to $150 billion in damages and the removal of Sam Altman and Greg Brockman (co-founder and president). The trial, presided over by Judge Yvonne Gonzalez Rogers, hits a pivotal moment with Nadella scheduled to testify on May 11, 2026 — his private emails from a 9-year partnership already entered as court evidence.
The Founding Promise — and How OpenAI's Nonprofit Mission Broke
The OpenAI-Microsoft partnership began in summer 2017, just days after OpenAI's Dota 2 bot (a reinforcement learning system — AI that trained by playing millions of games against itself, with no human instruction) defeated a professional esports player in a landmark public match. That demonstration of raw AI capability was exactly what Microsoft wanted to invest in.
Musk's lawsuit argues that from the very start, Altman and Brockman were building toward commercial dominance, not scientific openness. His core claims:
- The founders "tricked" him into providing millions in funding under false nonprofit pretenses
- The 2019 shift to a "capped profit" structure (a hybrid model where investors can profit up to a fixed multiple of their investment) was a fundamental betrayal of OpenAI's founding charter
- The commercial enterprise his donations helped build now competes directly against his own ventures — meaning he was misled into funding a rival
OpenAI's response has been sharp: "This lawsuit has always been a baseless and jealous bid to derail a competitor." Their counterargument: Musk left the board voluntarily in 2018, was aware of the commercial pivot discussions, and is now weaponizing litigation while building his own AI company — xAI, maker of the Grok chatbot.
"The Blip": The OpenAI CEO Firing That Became Court Exhibit A
The trial is also surfacing new details about "The Blip" — the insider nickname for the chaotic four-day stretch in Thanksgiving week 2023 when OpenAI's board fired Sam Altman, over 700 employees threatened mass resignation, and Altman was reinstated days later with an entirely new board installed.
The board's official reason for the firing: Altman was "not consistently candid in his communications with the board." Court proceedings are now revealing far more complex dynamics behind that deliberately vague statement.
Three key witnesses define the trial's testimony schedule:
- Shivon Zilis — Former OpenAI board member who shares multiple children with Musk; has already testified
- Satya Nadella — Microsoft CEO who handled emergency calls about OpenAI's collapse during The Blip; testifying May 11
- Ilya Sutskever — Former OpenAI chief scientist (the researcher who led development of the GPT model series — the technology behind ChatGPT); scheduled to testify after Nadella
Nadella's testimony carries the most legal weight. Microsoft had invested approximately $13 billion in OpenAI by the time of The Blip. His private emails could reveal whether Microsoft was a passive capital investor — or an active architect of OpenAI's departure from its nonprofit founding structure. That distinction is the heart of the $150 billion case.
$55 Billion: Musk Bets on Making His Own AI Chips
While fighting OpenAI in court, Musk is simultaneously making one of the largest private infrastructure bets in tech history. SpaceX is planning a chip manufacturing facility called Terafab in Austin, Texas — initial investment of $55 billion, potentially expanding to $119 billion total across future build phases.
The production target: enough chips each year to power AI systems drawing up to 200 gigawatts (GW) of electricity. For scale, 200 GW is roughly equivalent to the entire electrical generating capacity of the United Kingdom — an almost incomprehensible output for a single private facility.
Terafab represents a strategic escalation beyond simply buying AI chips. Rather than purchasing hardware from NVIDIA or AMD, SpaceX would manufacture chips internally — a vertical integration play giving it a degree of supply chain control that few companies have attempted in AI hardware at this scale. The global data center arms race driving this investment is intensifying on every front:
- Power grid access (AI data centers now consume an estimated 1–3% of global electricity, and that share is growing fast)
- Water rights for server cooling infrastructure
- Physical land near low-cost energy sources
- And in some audacious proposals: orbital space — companies exploring data centers launched into orbit to escape terrestrial power constraints entirely
Consumer AI: Cameras in Your Ears, AI in Your Games
Away from the courtroom and chip fabs, consumer AI integration is advancing on multiple product fronts — with results ranging from cautious optimism to outright developer resistance.
Apple's AirPods May Soon Have Eyes
Apple is reportedly testing AirPods with built-in cameras, currently in the DVT (design validation test) stage — the phase where hardware works but mass-production qualification has not yet been passed. The camera captures low-resolution visual data (enough to identify objects, not to read text) to support contextual Siri queries such as "what can I cook with these ingredients?" — using your actual physical surroundings as context, not just your words.
The privacy implications of always-on wearable cameras remain publicly unaddressed. No confirmed launch date exists. DVT is still 1–2 stages away from retail availability.
Sony: "Human Creativity Isn't Optional"
Sony is actively integrating AI tools into PlayStation game development while drawing a firm public line on human irreplaceability: "The vision, the design, and the emotional impact of our games will always come from the talent of our studios and performers. AI is meant to augment their capabilities, not to replace them."
This statement matters because it directly addresses the anxiety driving widespread rejection of AI among independent game developers. Sony's model — AI for production efficiency (asset generation, QA automation), humans for creative direction — may become the template major studios quietly adopt regardless of what the indie developer community wants.
Nanoleaf Pauses Product Launches to Chase AI Identity
Smart lighting company Nanoleaf has released just 1–2 new products over two years compared to competitors Govee and Philips Hue, which maintain rapid, continuous release schedules. CEO Gimmy Chu explained the deliberate slowdown: "The smart home is getting kind of boring. Our brand needs to evolve." The pivot targets wellness technology, robotics, and AI integration — categories requiring significantly more R&D investment than new LED strip patterns. The competitive risk: while Nanoleaf repositions, faster competitors are capturing the retailer relationships, shelf space, and consumer mindshare it is leaving behind.
OpenAI's Quiet Safety Feature: Emergency Contacts for Crisis Moments
Amid all the legal drama, OpenAI also launched Trusted Contact — a feature allowing adult ChatGPT users to designate emergency contacts who receive notifications if the user discusses self-harm or suicide during a conversation.
OpenAI's rationale: "When someone may be in crisis, connecting with someone they know and trust can make a meaningful difference." The feature is built on expert-validated crisis intervention research — specifically, the finding that social connection (peer support, family contact) is a primary protective factor (a measurable element that statistically reduces harm risk) against self-harm outcomes.
It marks a real shift in how AI companies define their responsibilities: chat interfaces are now being treated as genuine mental health touchpoints, not just information retrieval tools. Learn how AI products are reshaping everyday life at AI for Automation Guides.
May 11 — What Nadella's Inbox Could Decide for AI Governance
The entire legal fight converges on May 11, 2026. When Satya Nadella takes the witness stand, the court will be looking at three specific questions:
- What exactly did OpenAI promise Microsoft during the 2017 partnership formation — and were those promises consistent with a nonprofit-forever mission?
- Did Microsoft leadership know about — or actively support — OpenAI's commercial pivot before it was announced?
- What do Nadella's private emails reveal about OpenAI's internal board dynamics during The Blip of November 2023?
If Nadella's emails show Microsoft was surprised by the commercial shift, Musk's "mission betrayal" narrative gains major legal support. If they show Microsoft was a co-architect of the pivot — explicitly aware and approving — the $150 billion claim loses its core premise. Ilya Sutskever's testimony, scheduled shortly after, adds a second data point: what did OpenAI's own chief scientist believe the company's mission actually was?
Follow the ongoing trial and AI governance developments at AI for Automation News. The next 72 hours of testimony may redefine the legal standards that govern the most powerful AI organizations on earth — and determine whether founding promises made in Silicon Valley boardrooms carry any binding weight at all.