Claude Code Just Hit $30B — Your AI Spend Is Next
Anthropic's annualized revenue surged 30x in four months, from $1 billion to $30 billion. Claude Code, built by a self-taught programmer, is driving the global AI automation boom.
In roughly four months, Anthropic's annualized revenue jumped 30-fold, from $1 billion in December 2025 to $30 billion today. The engine behind this AI automation explosion isn't a new research breakthrough. It's a coding assistant built by one self-taught programmer who briefly left the company to work at a competitor.
That tool is Claude Code. And according to The Information, it has fundamentally reshaped who controls the AI industry's trajectory — and how much your organization might end up spending to stay competitive.
Claude Code Revenue: From $1B to $30B in Four Months
Anthropic's revenue over the past four months reads like fiction:
- December 2025: $1 billion annualized (annualized = the yearly run-rate calculated by multiplying a single month's revenue by 12)
- February 2026: $2.5 billion annualized — a 150% jump in just 60 days
- April 2026: $30 billion annualized — another 12x leap in roughly 8 weeks
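The "annualized" convention in the list above is simple multiplication. A minimal sketch; the implied monthly figures are back-solved from the reported run rates, not disclosed numbers:

```python
def annualized_run_rate(monthly_revenue: float) -> float:
    """Yearly run rate: one month's revenue extrapolated across 12 months."""
    return monthly_revenue * 12

# Implied monthly revenue, back-solved from the reported run rates
dec_2025_monthly = 1_000_000_000 / 12   # ~$83M/month -> $1B annualized
apr_2026_monthly = 30_000_000_000 / 12  # ~$2.5B/month -> $30B annualized

growth = annualized_run_rate(apr_2026_monthly) / annualized_run_rate(dec_2025_monthly)
print(f"{growth:.0f}x growth")  # 30x growth
```

A run rate assumes the most recent month repeats for a full year, which is why fast-growing companies quote it: it flatters momentum but says nothing about durability.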
For context: OpenAI generated $13 billion in actual 2025 revenue across the entire year. Anthropic is now running at more than double that rate on a projected basis. The driver isn't Claude's benchmark scores or research papers. It's Claude Code — a developer tool that lets programmers describe what they want in plain English and receive working, multi-file code changes in return — and its adoption has been near-vertical.
OpenAI, for its part, projects revenue of $284 billion by 2030, up from $13 billion in 2025 (a 21x increase). Industry analyst Jim Chanos described these projections as "just guesses." Martin Peers at The Information put it more directly: "Stop issuing long-range revenue forecasts to investors... How can anyone take seriously forecasts for revenue reaching as far out as 2030?"
The Self-Taught Programmer Who Built Anthropic's $30B Engine
Boris Cherny doesn't fit the standard AI executive profile. He's largely self-taught — no Stanford PhD, no deep learning research publications. He built something that worked, left to join a competitor, then came back.
Cherny had joined Anthropic, then departed to work at Cursor (a popular AI-powered code editor that gained serious developer traction through 2024–2025). He returned to Anthropic and led the development of Claude Code, which The Information now credits as the company's "megahit" and primary revenue driver behind the $30B annualized trajectory.
Claude Code works differently from tools like GitHub Copilot (Microsoft's AI code-completion service, starting at $10/month per developer). Rather than suggesting the next few lines as you type, Claude Code understands entire codebases and takes instructions in plain English, a workflow developers now call vibe coding. Describe what you want ("refactor the authentication module," "add rate limiting to the payment API") and Claude handles multi-file changes, writes tests, and explains every decision. For teams shipping software at scale, this level of AI automation assistance has proven worth substantial per-seat spending.
The story behind Cherny's departure and return epitomizes the AI talent war. Cursor was gaining users fast — but Claude Code, built inside a company with Anthropic's model quality and infrastructure, became something Cursor couldn't replicate. The growth vindicates Cherny's return and raises a pointed question: how much of Anthropic's $30B trajectory depends on this single product category versus a diversified platform?
Inside Meta's AI Automation 'Claudeonomics' Status War
If you want proof of how normalized AI spending has become inside major tech companies, look at Meta's internal "Claudeonomics" leaderboard, which ranks employees by Claude token consumption (a token is roughly three-quarters of a word, the unit AI services use to measure and bill text processing). It's not a cost-control dashboard. It's a status board.
The current leader consumed 328.5 billion tokens in a single 30-day period. At public API pricing rates, that usage level runs approximately $2 million per month. At Meta, this isn't being flagged as a budget problem — it's treated as a mark of distinction.
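The ~$2 million figure is straightforward to sanity-check. A hedged sketch, assuming illustrative prices of $3 per million input tokens and $15 per million output tokens with a 75/25 input/output split; the article does not disclose the actual model tier or traffic mix:

```python
def monthly_token_cost(total_tokens: int, input_share: float,
                       input_price_per_m: float, output_price_per_m: float) -> float:
    """Estimate monthly spend from token volume and per-million-token prices."""
    input_tokens = total_tokens * input_share
    output_tokens = total_tokens * (1 - input_share)
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

# 328.5 billion tokens at the assumed prices and split
cost = monthly_token_cost(328_500_000_000, 0.75, 3.00, 15.00)
print(f"${cost:,.0f}")  # $1,971,000 -- consistent with the ~$2M/month figure
```

Different assumptions shift the total, but any plausible mix of current public prices lands in the low millions per month at that volume.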
This is a meaningful cultural signal. Inside some of the world's most competitive tech organizations, AI usage volume is becoming a proxy for productivity, ambition, and technical credibility. The employee who isn't maxing out Claude isn't just leaving tools on the table — they may be falling behind a new internal hierarchy that rewards aggressive AI adoption over traditional output metrics.
For team leads and executives outside Big Tech: your organization is probably 12–18 months behind this curve. But AI usage norms are forming right now, organically, whether you've set guidance or not. The question isn't whether your team will develop an AI usage culture — it's whether that culture will have guardrails when it does. You can build that foundation today with the AI automation tools guide.
OpenAI's Infrastructure Brain Drain
OpenAI's most ambitious initiative just lost its founding team to a direct rival.
Three senior executives who built the Stargate program — Peter Hoeschele, Shamez Hemani, and Anuj Saharan — have departed or are departing OpenAI. Their destination: Meta's newly formed Compute Unit, a dedicated organization Meta assembled to compete for AI infrastructure dominance at scale.
Stargate was announced in early 2026 with $500 billion in committed infrastructure spending, backed by SoftBank and Oracle — one of the largest single infrastructure bets in tech history. The departure of its three founding architects to a direct competitor raises real execution continuity questions, and signals that OpenAI's internal culture may be struggling to retain the people responsible for its most capital-intensive bets.
Anthropic is meanwhile pursuing two parallel infrastructure strategies: a multi-year cloud computing contract with CoreWeave, a cloud provider specializing in GPUs (graphics processing units, the chips AI models train and run on), with dedicated servers expected online later in 2026; and early-stage exploration of in-house chip design. The company views cloud partnerships as the faster route to solving near-term compute constraints.
The Memory Chip Play Wall Street Is Ignoring
Every investor tracks Nvidia. Fewer are watching the companies that supply the memory AI chips desperately need — and their growth projections are striking:
- SK Hynix: Sales grew ~50% in 2025; analysts forecast +159% growth in 2026
- Micron: Sales grew ~50% in 2025; analysts forecast +191% growth in 2026
- Nvidia 2026 forecast: +71% — strong, but 2–3x below the memory chip growth trajectory
- Valuation gap: Both memory stocks still trade at lower multiples than Nvidia despite superior growth projections
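The growth gap in the list above is easy to verify. A quick sketch using the forecast percentages as reported:

```python
# 2026 analyst growth forecasts cited in the article (percent, year over year)
forecasts = {"SK Hynix": 159, "Micron": 191, "Nvidia": 71}

for name in ("SK Hynix", "Micron"):
    ratio = forecasts[name] / forecasts["Nvidia"]
    print(f"{name}: {ratio:.1f}x Nvidia's forecast growth rate")
# SK Hynix comes out to ~2.2x and Micron to ~2.7x, the 2-3x gap cited above
```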
The reason: HBM chips (High Bandwidth Memory — specialized chips physically stacked alongside AI processors to feed them data at speeds that would otherwise bottleneck the entire system) have become the most constrained component in AI infrastructure. Every AI training run and every inference (real-time AI response generation) request requires enormous memory bandwidth. SK Hynix and Micron supply that critical memory almost exclusively — and demand is accelerating faster than supply can scale.
The Information's Anita Ramaswamy described these companies as "overlooked" — a label that may not survive the next wave of AI infrastructure spending reports.
The Security Alarm Regulators Aren't Hiding
Amid the growth story, a parallel and more uncomfortable narrative is taking shape. Federal Reserve Chair Jerome Powell and Treasury Secretary Scott Bessent recently convened leaders from Citigroup, Bank of America, and Wells Fargo specifically to discuss cybersecurity risks from Anthropic's Claude Mythos model (Mythos is the internal code name for Anthropic's most advanced — and most restricted — AI system, not yet broadly available to the public).
The concern is grounded in real data: research from Buzz, a Sequoia-backed cybersecurity startup, demonstrates that existing publicly available AI models can already autonomously execute sophisticated cyberattacks within minutes, with no specialized instruction and no expert hacker required. Anthropic's response has been to restrict Mythos access to a small number of top-tier technology companies rather than release it broadly.
In parallel, Cisco is in advanced acquisition talks for AI security startup Astrix, valued at $250–350 million, a premium of 25–75% over Astrix's most recent $200 million valuation. Astrix specializes in monitoring and securing AI agents (software programs that take autonomous actions on your behalf: scheduling meetings, writing emails, executing code, browsing the web). If the acquisition closes, Cisco gains enterprise-grade AI agent security capability it can push through its existing global customer network immediately.
The practical implication is direct: if federal regulators are alarmed enough to summon bank CEOs over an unreleased AI model, the window to establish your organization's AI security baseline is narrowing. The conversation is better started now than after your first incident. Watch for this space to move fast — Cisco's Astrix deal alone could redefine enterprise security buying in the second half of 2026.
Sources
- The Information — Feed (Apr 9–11, 2026)
- The Information: How a Self-Taught Programmer Became the Father of Claude Code
- The Information: OpenAI Stargate Leaders Depart in Latest Shakeup
- The Information: SK Hynix Is the Overlooked Memory Chip Maker
- The Information: xAI Spending Pushed SpaceX to a Nearly $5 Billion Loss