2026-05-05 · Tags: OpenAI, Elon Musk, xAI, Anthropic, model distillation, AI automation, Sam Altman, OpenAI lawsuit

Musk Admits xAI Copied OpenAI — Pentagon Hired Him Anyway

Musk testified xAI used model distillation on OpenAI's outputs. The Pentagon labeled Anthropic a 'supply-chain risk' — then signed with Musk's xAI instead.


Elon Musk built his lawsuit against OpenAI on one argument: the company betrayed its mission to benefit humanity. But on the witness stand in federal court, he admitted his own startup, xAI, used OpenAI's models through a process called model distillation — a technique where a larger AI model teaches a smaller, competing model by having it study the bigger model's outputs, effectively copying its behavior without touching its raw training data. The courtroom irony was immediate. A ruling in this case could reshape competitive AI automation practices industry-wide.

The case is the largest AI legal confrontation in history. Musk is demanding $150 billion in damages and the removal of CEO Sam Altman and President Greg Brockman. But an equally significant development emerged outside the courthouse: the Pentagon signed new classified AI contracts with seven companies — and explicitly excluded Anthropic, declaring it a "supply-chain risk." Musk's xAI made the cut. Anthropic didn't.

Elon Musk, founder of xAI, testifying in the OpenAI lawsuit over model distillation and AI training practices — federal court, April 2026

OpenAI Lawsuit: The Courtroom Admission Nobody Expected

Musk filed his lawsuit in 2024, accusing co-founders Sam Altman and Greg Brockman of diverting OpenAI from its nonprofit roots — an organization Musk claims he heavily shaped. He argues the transition to a for-profit structure primarily benefits Microsoft and early investors, not humanity. To support this, his legal team presented early internal emails, including evidence that Nvidia CEO Jensen Huang gifted an in-demand supercomputer to OpenAI — a corporate donation that complicates the "pure public mission" framing.

Then Musk's own cross-examination arrived. He confirmed that xAI practiced model distillation using OpenAI's systems. In competitive AI development, distillation is used two ways: first, legitimately compressing a large model into a smaller, faster version of itself; and second, more controversially, having a competitor's model generate outputs that your own model studies, absorbing the larger model's "knowledge" without licensed access to its underlying data or weights (the numerical values that define how an AI model reasons and responds). It's the second interpretation that OpenAI's legal team is pressing.
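To make the second, contested mode concrete, here is a minimal toy sketch (a hypothetical illustration, not xAI's actual pipeline): a "student" model learns to mimic a "teacher" purely from the teacher's query/response pairs, without ever seeing the teacher's weights or training data.

```python
# Toy output-based distillation: the student only sees the teacher's answers.

def teacher(x: float) -> float:
    """Stand-in for a large proprietary model's behavior."""
    return 3.0 * x + 1.0  # internal parameters the student never sees

def distill(queries, steps=2000, lr=0.01):
    """Fit a linear student (w, b) to the teacher's outputs via gradient descent."""
    labels = [teacher(x) for x in queries]  # only query/response pairs are used
    w, b = 0.0, 0.0
    n = len(queries)
    for _ in range(steps):
        # mean-squared-error gradients with respect to w and b
        dw = sum(2 * (w * x + b - y) * x for x, y in zip(queries, labels)) / n
        db = sum(2 * (w * x + b - y) for x, y in zip(queries, labels)) / n
        w -= lr * dw
        b -= lr * db
    return w, b

w, b = distill([-2.0, -1.0, 0.0, 1.0, 2.0])
# The student converges toward the teacher's behavior (w ≈ 3, b ≈ 1)
# despite never having access to those parameters directly.
```

The legal question the trial raises maps onto this sketch: the student never "touches" the teacher's internals, yet it ends up reproducing the teacher's behavior, and courts have not yet decided whether that counts as copying.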

OpenAI's official statement leaves no ambiguity: "This lawsuit has always been a baseless and jealous bid to derail a competitor."

Key trial facts so far:

  • April 27, 2026: Jury selection began. Musk took the stand as the first witness.
  • 3 consecutive days: Musk testified Tuesday through Thursday — the longest single-witness stretch in a major tech trial this year.
  • Jared Birchall — Musk's financial manager and Neuralink CEO — followed with documentation supporting Musk's financial claims.
  • Jensen Huang GPU gift: Early OpenAI emails reveal Nvidia's CEO donated a supercomputer to the company — evidence Musk uses to argue OpenAI was commercially entangled from day one.
  • $150 billion in damages demanded — plus the forced removal of Altman and Brockman from leadership.

The distillation admission matters well beyond this trial. Competitive distillation is practiced across the AI industry — by startups benchmarking against GPT-4, by research teams studying model behavior, and by companies replicating capabilities they haven't independently built. A court ruling that competitive distillation constitutes IP theft would force a restructuring of how AI models are trained everywhere. That affects everyone in AI automation — from indie developers to enterprise labs.

Pentagon Drops Anthropic — Then Approves Musk's xAI

The same week Musk's trial began, the U.S. Department of Defense awarded new classified AI contracts to seven companies: OpenAI, Google, Microsoft, Amazon, Nvidia, xAI, and startup Reflection. One significant name was missing.

The Pentagon building — U.S. Department of Defense awarded classified AI automation contracts to xAI while excluding Anthropic as a supply-chain risk

Anthropic — the AI safety company co-founded by Dario and Daniela Amodei after they left OpenAI — was explicitly excluded after the Defense Department designated it a "supply-chain risk." Supply-chain risk is a formal national security classification meaning a vendor's systems are considered potentially vulnerable to infiltration, disruption, or foreign influence — the same label applied to telecom hardware from companies like Huawei in the previous decade.

What makes this exclusion remarkable:

  • Anthropic previously operated in classified government contexts before this new designation — it was not new to Department of Defense work.
  • Claude, Anthropic's flagship model, is widely cited in policy and security circles for its safety-first design philosophy — the opposite of what a "supply-chain risk" label implies.
  • The Pentagon simultaneously approved xAI — a company in active federal litigation over competitive training practices, whose founder just admitted to distilling OpenAI's models on the stand.
  • No public technical justification for the supply-chain risk classification has been released.
  • The seven approved vendors span cloud hyperscalers (Google, Microsoft, Amazon), AI labs (OpenAI), chipmakers (Nvidia), and now Musk's xAI alongside startup Reflection — a deliberately wide portfolio hedge.

The exclusion is the clearest signal yet that U.S. government AI procurement is shaped by factors beyond model capability or safety track record. Whether Anthropic's classification reflects technical concerns, competitive dynamics, or political considerations remains publicly unknown — and the silence itself is a story.

$150 Billion and a CEO Removal: What the Verdict Would Actually Mean

Musk's $150 billion damages figure has no precedent in technology litigation. Legal analysts treat it as a negotiating anchor rather than a realistic expectation — but the executive removal demand is equally without precedent. Forcing Sam Altman and Greg Brockman out of OpenAI via court order would be the most consequential leadership action in Silicon Valley history, with downstream effects on OpenAI's valuation, its Microsoft partnership, and every company depending on its models.

OpenAI's defense rests on three core arguments:

  • Musk voluntarily left the company before its commercial pivot, forfeiting standing to dictate its future direction.
  • The nonprofit mission was always compatible with building commercially viable AI — frontier model development costs require corporate partnerships at scale.
  • Musk's own conduct — distillation from OpenAI's models, recruiting from OpenAI's talent pool, and building a direct competitor — undercuts his standing as a plaintiff claiming mission betrayal.

The trial is ongoing with no verdict timeline set. But each day produces new admissions from both sides. The distillation question, once resolved by this jury, becomes a legal baseline for how competitive AI training is evaluated across the entire industry — shaping what any company can legally do when building AI that competes with an existing system. That is a ruling with consequences far beyond one lawsuit between two billionaires.

Microsoft AI Automation Tool Drops Into Word as AI Law Gets Complicated

Alongside the trial, Microsoft this week launched Legal Agent — an AI system embedded directly in Microsoft Word, designed exclusively for legal teams. It automates three tasks: contract review, negotiation history tracking, and clause-by-clause analysis against internal playbooks (pre-defined legal guides that specify which contract terms a company accepts, rejects, or flags for senior counsel review).

Microsoft VP Sumit Chauhan explained the design approach: "Instead of relying on general AI models to interpret commands, the agent follows structured workflows shaped by real legal practice." The distinction matters: generic AI writing assistants respond to freeform natural language prompts. Legal Agent runs deterministic structured workflows — applying consistent legal logic across documents rather than interpreting each request independently.
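The difference can be sketched in a few lines. The following is a hypothetical illustration of a playbook-style deterministic pass (names and rules invented for the example; this is not Microsoft's actual Legal Agent API): each rule encodes a position the organization takes, and every clause is checked against every rule the same way, every time.

```python
# Hypothetical playbook: (rule name, trigger phrase, disposition).
PLAYBOOK = [
    ("auto-renewal", "automatically renew", "reject"),
    ("unlimited liability", "unlimited liability", "escalate"),
    ("30-day termination", "thirty (30) days", "accept"),
]

def review(clauses):
    """Return a per-clause (rule, disposition) list by matching playbook triggers."""
    results = []
    for clause in clauses:
        hit = next(
            ((name, action) for name, trigger, action in PLAYBOOK
             if trigger in clause.lower()),
            ("no-rule", "flag for counsel"),  # default: route to a human
        )
        results.append(hit)
    return results
```

Unlike a freeform prompt, this pass produces the same disposition for the same clause on every run — the consistency property that structured legal workflows are built around.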

For legal professionals monitoring the OpenAI v. Musk trial, a Word-embedded AI that analyzes contract terms and tracks negotiation history is immediately relevant. The trial itself centers on disputes over early internal emails, corporate governance commitments, and multi-year funding agreements — exactly the document types Legal Agent is built to process. If your organization handles AI vendor agreements or government contracts, watch how Legal Agent's capabilities expand in the coming weeks. Our AI tools guide will cover it in detail as public rollout begins.


