AI for Automation
April 12, 2026 · Sam Altman · OpenAI · Anthropic · Claude · OpenClaw · AI automation · AI agent · developer tools

Sam Altman Attacked: OpenClaw Faces 50x Claude Cost Hike

Molotov cocktail at Sam Altman's home. OpenClaw's 247K-star AI automation tool faces 10-50x price hike on Claude. What developers must know now.


Two events, 48 hours apart, put the AI automation industry on edge this week. OpenAI CEO Sam Altman woke at 3:45 AM to a Molotov cocktail thrown at his house, while in the same news cycle Anthropic banned and then un-banned the creator of OpenClaw, a 247,000-star Claude-powered tool used by 135,000 developers.

The 3:45 AM Wake-Up

On the morning of April 11, 2026, Sam Altman posted to his personal blog. Someone had thrown a Molotov cocktail (a fire-bomb improvised from a glass bottle filled with flammable liquid) at his home at 3:45 AM. No one was hurt. He shared a family photo: "in the hopes that it might dissuade the next person."

The timing was impossible to ignore. Days before the attack, The New Yorker published a lengthy profile of Altman that he called "incendiary" — a word that, in retrospect, carries obvious irony. The profile raised questions about Altman's trustworthiness as CEO of the world's most prominent AI lab, revisiting events the industry knows well: in November 2023, Altman was briefly fired by OpenAI's board over "candor" concerns, then reinstated five days later after a near-revolt by staff.

An associate had warned Altman the New Yorker article "was coming at a time of great anxiety about AI and that it made things more dangerous for me."

Why a Prestigious Profile Can Cost More Than PR

Altman's response reads like genuine sleeplessness: "I am awake in the middle of the night and pissed, and thinking that I have underestimated the power of words and narratives." He acknowledged something Silicon Valley CEOs rarely say publicly — that media coverage shapes real-world physical risk.

The New Yorker is not a tech blog. It reaches millions of readers who include federal regulators, senators, foundation heads, and institutional investors. A profile questioning the reliability of the man steering the world's most powerful AI development effort lands differently than a social media post. It circulates in policy circles. It informs congressional hearings. It enters the permanent record of how history will assess this period of AI development.

Altman's closing line struck an unusual note of restraint: "We should de-escalate the rhetoric and tactics and try to have fewer explosions in fewer homes — figuratively and literally."

Image: Anthropic restricts the OpenClaw AI automation tool from Claude Pro and Max subscriptions, April 2026

OpenClaw: 247,000 Stars, 135,000 Instances, One Pricing Change

While Altman's post spread on April 11, the day before had already opened a separate wound in developer trust. The story starts with Austrian developer Peter Steinberger, who built OpenClaw in November 2025.

OpenClaw is an open-source AI agent framework — a tool that wraps around a model like Claude and adds capabilities the base model lacks: persistent memory (so it remembers past conversations), tool access (so it can browse the web, write files, and run code), and messaging integrations (so it can reply via WhatsApp and Telegram). Think of it as a productivity layer that sits between you and Claude, turning single-question chat into a full workflow assistant.

It grew at a remarkable pace. By March 2, 2026 — roughly 4 months after launch — OpenClaw had accumulated:

  • 247,000 GitHub stars — GitHub stars function like community endorsements; 247,000 places OpenClaw among the most widely supported open-source AI tools ever launched
  • 47,700 forks — forks are copies made by developers who build their own versions on top of the original code
  • 135,000 active instances running worldwide at the time of Anthropic's policy announcement

Then, on February 14, 2026, Steinberger announced he was leaving OpenClaw to join OpenAI — Anthropic's main competitor. He handed the project to the broader open-source community.

The Claude Pricing Change That Blindsided 135,000 Developers

Six weeks after Steinberger departed, Anthropic changed the rules. Effective April 4, 2026, Claude Pro and Claude Max subscriptions (Anthropic's flat-rate monthly plans, the AI equivalent of an "all-you-can-eat" tier) no longer cover usage via third-party agent harnesses (tools like OpenClaw that wrap around Claude to extend what it can do). That usage now bills through a separate, pay-per-use system.

The real-world cost impact hit hard:

  • Before April 4: flat monthly subscription with effectively unlimited use
  • After April 4: $0.50 to $2.00 per interaction via extra usage billing
  • Via direct API (the paid technical connection developers use to build apps): $3 per million input tokens and $15 per million output tokens for Claude Sonnet 4.6 — a token being roughly three-quarters of a single word
  • Net change for heavy users: 10x to 50x higher monthly costs
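The figures above can be turned into a rough back-of-envelope estimate. In the sketch below, only the prices come from the announcement; the workload numbers (15 interactions a day, roughly 20,000 input and 2,000 output tokens each) are illustrative assumptions, not figures from this story.

```python
# Back-of-envelope monthly cost under Anthropic's April 2026 pricing.
# Prices are from the announcement; the workload is an assumed example.

FLAT_RATE = 20.00               # old Claude Pro subscription, USD/month
PER_INTERACTION = (0.50, 2.00)  # new extra-usage billing range, USD/interaction
API_INPUT_PER_M = 3.00          # Claude Sonnet 4.6: USD per million input tokens
API_OUTPUT_PER_M = 15.00        # USD per million output tokens

def api_cost(input_tokens: int, output_tokens: int) -> float:
    """Direct-API cost in USD for a single interaction."""
    return (input_tokens / 1e6) * API_INPUT_PER_M \
         + (output_tokens / 1e6) * API_OUTPUT_PER_M

# Assumed agent workload: 15 interactions a day, each carrying roughly
# 20,000 input tokens (prompt + memory + tool context) and 2,000 output tokens.
interactions = 15 * 30
low = interactions * PER_INTERACTION[0]    # 225.00 USD
high = interactions * PER_INTERACTION[1]   # 900.00 USD
api = interactions * api_cost(20_000, 2_000)

print(f"old flat rate:  ${FLAT_RATE:.2f}/month")
print(f"extra usage:    ${low:.2f} - ${high:.2f}/month "
      f"({low / FLAT_RATE:.0f}x - {high / FLAT_RATE:.0f}x the flat rate)")
print(f"direct API:     ${api:.2f}/month")
```

Under these assumptions the extra-usage path lands at 11x to 45x the old flat rate, consistent with the 10-50x range cited above; heavier agentic loops, which can make thousands of calls per hour, push the multiple higher still.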

Anthropic's Boris Cherny, Head of Claude Code, defended the shift: "Anthropic's subscriptions weren't built for the usage patterns of these third-party tools. Capacity is a resource we manage thoughtfully." The technical argument is fair — agentic tools that loop through thousands of Claude calls per hour consume dramatically more compute than a casual conversation. But for the 135,000 people who built their workflows on the flat-rate assumption, the announcement felt sudden.

Anthropic offered partial mitigation: a one-time credit equal to one month's subscription cost (redeemable until April 17, 2026) and 30% discounts on pre-purchased usage bundles. For someone paying $20/month who now faces $200/month in usage fees, a one-time $20 credit is a limited cushion.

Steinberger pointed to a notable coincidence. Weeks before the pricing change, Anthropic had added similar features to Cowork — Anthropic's own in-house agent product (think of Cowork as Anthropic's internal version of OpenClaw). Specifically, a feature called Claude Dispatch launched a few weeks before the policy change. His interpretation was blunt: "First they copy some popular features into their closed harness, then they lock out open source."

Image: OpenClaw founder Peter Steinberger faces an Anthropic Claude access ban, April 2026

The Ban, the Viral Post, and the Same-Day Reversal

On April 10 — six days after the new policy took effect — Steinberger posted that his personal Anthropic account had been suspended for "suspicious" activity. He said he was operating within the new rules, using the paid API correctly, but had been cut off regardless.

The post spread quickly through developer communities. The subtext was obvious: the creator of OpenClaw — a tool Anthropic had arguably borrowed features from — now working for OpenAI, banned from Claude while trying to maintain a project with 247,000 community supporters.

Within a few hours, an Anthropic engineer appeared in the comments to clarify that Anthropic had "never banned anyone for using OpenClaw" and offered to resolve the issue directly. Steinberger's access was reinstated the same day.

Anthropic did not publicly specify whether the suspension was an automated system misfire, an overzealous manual enforcement action, or something else. But the speed of the walk-back, same-day and under community pressure, suggests the company recognized how bad the optics were at an already tense moment for its developer community.

What Platform Dependence Actually Costs

Taken separately, both stories are dramatic but contained. Taken together, they sketch a clear pattern: the AI industry's disputes — commercial, technical, and personal — are now fully public events with real consequences for people caught in the middle.

For developers and teams using AI tools day-to-day, the OpenClaw story is the more immediately actionable one. If your AI automation setup depends on a third-party tool built on top of Claude — anything that adds memory, messaging, or multi-step automation to Anthropic's model — review your billing arrangement now. The OpenClaw precedent shows that pricing can shift dramatically with 30 days' notice. A 10-50x cost increase is not a rounding error; for a solo practitioner or small team, it is a budget decision.

Steinberger's parting observation carries weight regardless of the ban outcome: "It's gonna be harder in the future to ensure OpenClaw still works with Anthropic models." That is not bitterness from someone who lost a platform fight. It is a read on the structural risk every developer building on a single AI provider's infrastructure now has to factor into their roadmap — before, not after, the pricing changes.

