Sam Altman Called 'Pathological Liar' by OpenAI Board
Ronan Farrow's New Yorker investigation: OpenAI board members called Sam Altman a 'pathological liar' and 'sociopath.' 18 months, ~100 sources, now on record.
When Ronan Farrow (the Pulitzer Prize-winning journalist who broke the Harvey Weinstein story) turned his attention to Sam Altman, it took 18 months. The resulting investigation — based on nearly 100 sources and published in The New Yorker — breaks a silence OpenAI worked hard to maintain. Board members who voted to fire Altman in November 2023 are now, for the first time, on record. The words they used: "pathological liar." "Sociopath."
These aren't background whispers. They're attribution-ready characterizations, attached to real people who held real governance authority over the most powerful AI company in the world. That's what makes this report different from everything that came before it.
OpenAI's Sam Altman: 18 Months, 100 Sources, One Very Specific Word
At 17,000+ words, Farrow's investigation runs longer than most academic papers. The reporting timeline — 18 months from start to publication — is notable even by long-form investigative standards (the kind that involves deep document review, cultivating reluctant sources, and extensive legal review before a single word is published).
The headline finding isn't subtle. Multiple board members who participated in the November 2023 decision to fire Altman described him to Farrow using two specific terms:
- "Pathological liar" — someone who lies habitually, compulsively, and often without clear motivation or apparent benefit to themselves
- "Sociopath" — a clinical-adjacent term suggesting a persistent pattern of disregard for the feelings, rights, or wellbeing of others
These terms did not stay in the room. They're now in print, attributed, in one of the most credible investigative outlets in American journalism. That's a significant departure from the vague, legalistic language — "communication failures," "lack of candor" — that dominated the public record for the past two years.
The OpenAI–WilmerHale Review With No Written Record
Perhaps the single most structurally revealing detail in the entire report: the independent review of the 2023 crisis conducted by WilmerHale (one of the largest and most prestigious law firms in the United States, with deep ties to Silicon Valley, Washington D.C., and big tech litigation) was kept entirely oral.
No written record. No discoverable document. No published findings beyond a brief public summary that described "communication failures" and cleared Altman of "malfeasance."
This is not standard practice for a review of this scale and consequence. Independent investigations commissioned in the wake of corporate governance crises typically produce written reports — because written reports create accountability. Oral reviews do the opposite. Whatever WilmerHale's investigators were told, whatever they found, whatever conclusions they reached: all of it exists only as memories in the minds of the people in those rooms.
The strategic consequence is significant: there is no document that Farrow — or any journalist, lawyer, regulator, or future board member — can obtain, cite, or rebut. The most inconvenient details of the November 2023 crisis were deliberately kept out of documented form.
Why OpenAI Board Sources Went on Record Now — Not in 2023
Farrow notes a telling pattern in how his reporting evolved over those 18 months. Sources who initially refused to go on record became progressively more willing to do so as the investigation continued. By the end, some were willing to attach their names to specific, damning characterizations of Altman.
Several factors likely drove this shift:
- Time and distance: The immediate professional risk of speaking — when Altman returned triumphantly in late November 2023 with near-unanimous employee support and major investors firmly behind him — had diminished considerably by 2025 and 2026
- Altman's expanding public footprint: His Congressional testimony, TIME "CEO of the Year" designation, and publicly stated trillion-dollar valuation ambitions made the gap between the official narrative and private characterizations harder to sustain
- Changed board composition: The board that voted to fire Altman was largely replaced afterward. Those who remained had different institutional incentives than those who had departed with damaged reputations
- Farrow's track record: Sources considering going on record with a journalist who successfully published the Weinstein investigation — and survived the subsequent legal and reputational pressure — understand the report will be thorough, legally reviewed, and resistant to suppression
The 2023 OpenAI Board Crisis: What Happened vs. the Official Record
To understand why this report lands with such weight, some context is needed. On November 17, 2023, OpenAI's nonprofit board voted to fire Sam Altman, effective immediately. The stated reason, that Altman had "not been consistently candid" with the board, was vague enough to invite interpretation, but not specific enough to explain the extraordinary speed and secrecy of the action.
What followed was one of the most compressed corporate crises in tech history. Microsoft (OpenAI's largest outside investor, with approximately $13 billion committed) announced it would hire Altman to lead a new AI division. Roughly 700 of OpenAI's approximately 770 employees signed an open letter threatening to quit and follow him there. Within 96 hours, the board reversed course. Altman was reinstated as CEO. Three of the four board members who had voted to fire him were removed from the board within weeks.
WilmerHale's subsequent oral-only review concluded that no "malfeasance" had occurred. The crisis was officially characterized as a governance failure — not a character failure. The public record said: the board made a mistake, Altman was wrongly removed, normal operations resumed.
Farrow's sources now say something different. The words "pathological liar" and "sociopath" were not in any official summary. They are now in print.
What This Means If You Rely on OpenAI Every Day
For the roughly 400 million weekly ChatGPT users (a figure Altman cited in early 2025), this investigation doesn't change the product. ChatGPT still works, and the underlying models run the same way they did yesterday. But for developers, startups, and enterprises building applications on OpenAI's platform via its API (the technical interface that connects outside apps to OpenAI's models), this report is a governance risk signal.
The November 2023 crisis demonstrated that a single board vote could threaten to dismantle the company overnight. The oral-only WilmerHale structure means the root causes were deliberately left unresolved in documented form. That combination — demonstrated instability and intentional non-documentation — is relevant to anyone calculating platform dependency risk.
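One practical way to reduce the single-platform dependency described above is to route model calls through a thin abstraction layer with fallbacks, so no application code depends directly on one vendor's SDK. The sketch below is hypothetical and uses stub handlers in place of real vendor clients; all names (`CompletionRouter`, `flaky_primary`, etc.) are illustrative, not part of any actual API.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional

@dataclass
class CompletionRequest:
    """Vendor-neutral request shape used by all providers."""
    prompt: str
    max_tokens: int = 256

class CompletionRouter:
    """Tries registered providers in order, falling back on failure."""

    def __init__(self) -> None:
        self._providers: Dict[str, Callable[[CompletionRequest], str]] = {}
        self._order: List[str] = []

    def register(self, name: str, handler: Callable[[CompletionRequest], str]) -> None:
        self._providers[name] = handler
        self._order.append(name)

    def complete(self, request: CompletionRequest) -> str:
        last_error: Optional[Exception] = None
        for name in self._order:
            try:
                # In a real system, each handler would wrap a vendor SDK call.
                return self._providers[name](request)
            except Exception as exc:
                last_error = exc  # remember the failure, try the next provider
        raise RuntimeError("all providers failed") from last_error

# Stub handlers standing in for real vendor SDK calls.
def flaky_primary(req: CompletionRequest) -> str:
    raise TimeoutError("primary provider unavailable")

def stable_fallback(req: CompletionRequest) -> str:
    return f"echo: {req.prompt}"

router = CompletionRouter()
router.register("primary", flaky_primary)
router.register("fallback", stable_fallback)

print(router.complete(CompletionRequest("hello")))  # falls back to the second provider
```

The design choice here is that failover policy lives in one place; swapping or adding a vendor touches only the registration step, not every call site.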
You can explore AI tools that reduce single-platform dependency in our guides, and follow AI governance and company news to stay ahead of the next shift. Watch for OpenAI's official response to Farrow's report — and whether any named current board members or leadership team address the specific characterizations in writing. The absence of a WilmerHale written document means there is no counter-record to produce. The asymmetry of oral silence versus printed attribution is now the permanent historical record of November 2023.