OpenAI Insiders Call Altman 'Sociopath' — Microsoft May Sue
OpenAI board member calls Sam Altman 'sociopathic.' Microsoft may sue after he signed a $50B Amazon deal the same day he pledged exclusivity to them.
A bombshell New Yorker investigation has surfaced testimony from multiple tech insiders — including a sitting OpenAI board member and the founder of Anthropic — painting Sam Altman as a figure who uses artificial intelligence safety as a recruitment tool, then quietly abandons those commitments when they no longer serve him. The timing of the report is particularly loaded: in 2026, on the same day Altman reaffirmed Microsoft as OpenAI's exclusive cloud partner, he announced a $50 billion exclusive reseller deal with Amazon. Microsoft is now signaling it may sue.
The Portrait of a Persuader
The New Yorker piece assembles a gallery of sources who describe Altman not as a technical visionary, but as an extraordinarily effective salesman — one who makes every skeptic in the room feel, individually, that their concerns are his top priority.
"He's unbelievably persuasive," one unnamed tech executive told the magazine. "Like, Jedi mind tricks. He's just next level."
That persuasive ability, sources say, is precisely what makes him dangerous. An OpenAI board member described two traits that rarely coexist in the same person:
"The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone."
The word "sociopath" appears more than once in the reporting — and notably, it was used years ago by Aaron Swartz, the legendary programmer and internet-freedom activist who was in the inaugural 2005 Y Combinator class alongside Altman. Before his death by suicide in 2013, Swartz warned friends directly: "You need to understand that Sam can never be trusted. He is a sociopath. He would do anything."
The Microsoft Blindside — $50 Billion, Same Day
The most immediately consequential section of the investigation involves OpenAI's relationship with Microsoft. In 2019, Microsoft committed a landmark $1 billion investment in OpenAI. During those negotiations, Altman agreed to a ranked list of safety demands. A provision was later inserted that, in effect, nullified the top safety requirement. When Dario Amodei — then a senior researcher at OpenAI, now founder and CEO of Anthropic — read the provision aloud directly to Altman in a meeting, Altman denied it existed.
This pattern — agreement, then reversal, then denial — reportedly continued across multiple Microsoft executives over subsequent years. One described it plainly: "He has misrepresented, distorted, renegotiated, reneged on agreements." The strained dynamic reportedly extended to Microsoft CEO Satya Nadella himself.
The tensions reached a breaking point in 2026. On the same day Altman reaffirmed Microsoft as OpenAI's exclusive provider for memoryless AI models (models that don't retain memory between separate conversations), he announced a $50 billion deal naming Amazon as an exclusive reseller. Microsoft has since signaled willingness to take Altman to court over what it views as an outright breach of contract.
Why Anthropic's Dario Amodei Left OpenAI — and What He Built Instead
Perhaps the most structurally significant revelation in the investigation is what it tells us about Anthropic's origins. Dario Amodei — whose company now develops Claude, one of OpenAI's most formidable competitors — did not leave because of a technical disagreement. He left specifically because of Altman's approach to safety commitments.
According to the reporting, Altman used "AI safety" as a bargaining chip: a recruiting tool designed to attract engineers and researchers who were genuinely alarmed about the risks of powerful AI. Once those recruits were inside the organization, the commitments would quietly erode. Amodei and others who couldn't operate under those terms eventually departed to found Anthropic, positioning it explicitly as the safety-first alternative that OpenAI had promised to be.
This makes Anthropic's rise not just a business story — it's a direct structural consequence of the trust failures documented by the New Yorker.
Sociopath or True Believer? The Debate Inside the Room
Not every former OpenAI insider accepts the "sociopath" framing. Sue Yoon, a former OpenAI board member, offered a notably different interpretation:
"Not this Machiavellian villain... he's too caught up in his own self-belief. So he does things that, if you live in the real world, make no sense. But he doesn't live in the real world."
Yoon's framing is, in some ways, more troubling than deliberate manipulation. A calculated deceiver knows where the line is. Someone operating inside a sealed self-narrative — who genuinely believes whatever version of events currently serves him — cannot be corrected by evidence, confronted with contradictions, or held to past agreements he no longer remembers making.
The distinction matters enormously when that person is building what Altman himself has called a trillion-dollar AI empire.
An Allegation the Investigation Doesn't Avoid
The New Yorker piece also references a civil lawsuit filed by Altman's sister, alleging sexual abuse that she says began in childhood. Altman, his mother, and his brothers all deny the allegations. No court has made any legal determination on the matter. We note it here because it is part of the published investigation, and omitting it would misrepresent the full scope of what the reporting covers.
The $1 Trillion AI Governance Question
The investigation's key sources are not anonymous bystanders. They include a sitting board member who used the word "sociopath," the CEO of a major AI company who walked out rather than continue working with Altman, and multiple Microsoft executives with documented, specific grievances. The pattern they describe — charm, commitment, reversal, denial — spans more than a decade and multiple high-stakes partnerships.
OpenAI's models are increasingly embedded in tools used for medical information, financial decisions, legal guidance, and everyday productivity. What the New Yorker investigation ultimately raises is not a psychological question about one executive's character. It's a structural one: what governance, what accountability, what checks exist when the person at the center of a trillion-dollar AI ecosystem is, by multiple credible accounts, someone who treats agreements as optional?
As of publication, Sam Altman has not issued a public response to the New Yorker investigation.