Claude Code Wiped Uber's AI Budget in 90 Days
Claude Code burned Uber's entire AI budget in 90 days. Microsoft abandoned 9 GW of data center power — Google moved in. What enterprise AI teams must do now.
Uber's CTO Praveen Neppalli Naga put it plainly this week: "I'm back to the drawing board because the budget I thought I would need is blown away already." The culprit wasn't headcount growth or runaway cloud bills — it was Claude Code, the AI coding assistant from Anthropic, spreading through Uber's engineering org so aggressively that the company's entire annual AI budget evaporated months before year-end.
That same week, a separate reckoning surfaced at Microsoft. The company's aggressive data center power strategy — once the envy of every tech executive — has quietly collapsed, handing Google a structural advantage in AI infrastructure that could take years to reverse.
The AI Budget That Vanished Before Summer
The speed of Uber's AI spending spiral is a case study in how quickly enterprise AI costs can overtake any budget model built before 2025. Claude Code (an AI-powered coding assistant that helps software engineers write, review, and debug code directly inside their development environment) didn't just spread at Uber; it blew past every cost forecast.
To accelerate adoption, Uber built internal "leaderboards": ranking dashboards that score engineers by how heavily they use AI tools relative to peers. The goal was cultural: make AI tool adoption measurable and competitive. The outcome blindsided finance.
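How might such a leaderboard be computed? A minimal sketch, assuming a simple per-engineer usage log (the schema, names, and scoring here are hypothetical, not Uber's actual system):

```python
from statistics import median

# Hypothetical usage log: engineer -> AI-assisted events this week
# (e.g., accepted completions or agent sessions). Not Uber's real schema.
usage = {"alice": 340, "bob": 95, "carol": 210, "dmitri": 20}

baseline = median(usage.values())  # peer baseline for the team

# Score each engineer relative to the team median, then rank descending.
leaderboard = sorted(
    ((name, count / baseline) for name, count in usage.items()),
    key=lambda row: row[1],
    reverse=True,
)

for rank, (name, score) in enumerate(leaderboard, start=1):
    print(f"{rank}. {name}: {score:.2f}x team median")
```

Note what the metric rewards: raw usage relative to peers, with no notion of cost. That is exactly how an adoption dashboard and a budget forecast drift apart.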
Within the first few months of 2026, what Uber had modeled as a full 12-month AI budget line was gone. The CTO is now rebuilding forecasting models from zero. Key data points (a minimal run-rate sketch follows the list):
- Claude Code adoption spread faster than any prior enterprise software rollout at Uber
- Full-year AI budget consumed in roughly 90 days — not 12 months
- Internal leaderboards drove adoption far beyond any projected utilization rate
- CTO Naga's current status: "back to the drawing board"
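The arithmetic behind "gone in 90 days" is blunt. A minimal run-rate check, with illustrative numbers only (the article does not disclose Uber's actual budget or spend):

```python
# Illustrative figures only: Uber's actual budget and spend are not public.
# Assume a $12M annual AI budget, planned as a steady $1M/month.
annual_budget = 12_000_000
observed_monthly_spend = 4_000_000  # burn rate once adoption took off

months_to_exhaustion = annual_budget / observed_monthly_spend
run_rate_multiple = (observed_monthly_spend * 12) / annual_budget

print(f"Budget exhausted in {months_to_exhaustion:.0f} months (~90 days)")
print(f"Annualized run rate: {run_rate_multiple:.0f}x the original plan")
```

A model reviewed quarterly would catch this overrun only as the budget ran out.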
Uber's situation isn't unique — it's simply the most candid public admission yet from a Fortune 500-tier company. Finance teams that built AI budgets in 2025 based on projected per-seat usage are discovering that actual consumption can run 3x to 5x higher when tools embed deeply into daily engineering workflows. If your organization hasn't audited its AI tool spend this quarter, this warning is worth taking seriously.
Microsoft's 9-Gigawatt War Chest — and the Retreat Nobody Announced
In the early AI boom, Microsoft assembled what insiders describe as a 9-gigawatt data center power "war chest" — 9 gigawatts being roughly equivalent to the output of 9 nuclear reactors, enough electricity to power tens of millions of homes simultaneously. This was the most aggressive infrastructure bet in modern tech history, designed to anchor OpenAI's compute needs and fuel a massive Azure AI expansion.
Then Microsoft CFO Amy Hood made a decision in late 2024 and early 2025 that will shape the company's AI trajectory for years: she curbed the spending. Microsoft's energy team was instructed to walk away from multiple data center deals across the US and Europe. Bobby Hollis, the company's top energy executive and the architect of much of this strategy, departed on March 31, 2026 — a quiet but significant organizational signal.
"Microsoft is going to fall behind Google on AI compute capacity. Google has an amazing team. They continue to push."
— Infrastructure manager familiar with the projects, per The Information
Microsoft's official response from Azure General Manager Alistair Speirs: "Microsoft's global infrastructure approach is built on flexibility and optionality, based on the near-term and long-term demand signals we see from customers." In infrastructure strategy, "flexibility and optionality" typically translates as: the previous plan no longer holds.
Google and Oracle Capture the Ground Microsoft Left Behind
In infrastructure, abandoned capacity doesn't stay empty for long. Google moved decisively to fill the power gaps Microsoft walked away from, expanding its own grid-connected data center footprint in the US and Europe. Oracle secured the critical power allocation for OpenAI's Wisconsin data center — the facility that was originally built around Microsoft infrastructure.
The competitive landscape as of April 2026:
| Factor | Microsoft | Google |
|---|---|---|
| Infrastructure momentum | Retreating, pivoting to off-grid | Expanding grid capacity |
| OpenAI infrastructure | Weakening (Oracle filling gap) | Growing via new partnerships |
| Executive stability | Top energy exec departed March 31 | Stable |
The Off-Grid Pivot: Gas Turbines, Oilfields, and $30 Billion in the UK
Microsoft hasn't abandoned scale — it has changed strategies. Rather than competing for grid-connected capacity (data centers that draw from public electricity networks, where connection wait times can run 3 to 7 years in the US and Europe), the company is pursuing off-grid deals: dedicated gas-powered generation built adjacent to or within energy production sites.
Confirmed deals currently in Microsoft's pipeline:
- Crusoe, Abilene, Texas — 900 MW (megawatts) gas-fired data center, positioned immediately adjacent to OpenAI and Oracle's existing Abilene campus
- Chevron + Engine No. 1, Permian Basin — Preliminary agreement for 2.5 GW (2,500 megawatts — roughly the combined output of 2 to 3 nuclear reactors) sourced from oilfield gas infrastructure
- Nscale, West Virginia — 1.35 GW facility running Nvidia Vera Rubin chips (Nvidia's next-generation AI accelerator, the successor to its Blackwell generation) and Caterpillar industrial gas turbines
- United Kingdom — $30 billion committed to new UK data centers, described by Microsoft as the largest private infrastructure investment in British history
The off-grid pivot solves one problem — bypassing grid delays — while introducing another: gas turbines produce significant carbon emissions, adding environmental exposure at a time when hyperscalers (giant tech companies that operate massive cloud infrastructure) face mounting regulatory scrutiny over their climate footprints. The irony is sharp: Microsoft spent years promising net-zero infrastructure and is now inking deals in oil fields.
The Quiet $1 Billion Winner: Handshake's Invisible AI Explosion
While the Microsoft-Google infrastructure battle commanded the most attention, the most remarkable financial transformation in this week's reporting may belong to Handshake — the 12-year-old college recruiting platform that most people associate with "LinkedIn for students."
Handshake's AI training division, which employs domain experts (specialists including lawyers, medical doctors, and PhD researchers who evaluate and grade AI model outputs to improve their accuracy and reasoning), has grown at a pace that defies conventional revenue forecasting:
- One year ago: $5–10 million gross annualized revenue
- January 2026: $550 million gross annualized
- April 2026: ~$1 billion gross annualized
That's a 100x to 200x revenue increase in roughly 12 months. The growth reflects insatiable demand from AI labs for high-quality human feedback data (structured evaluations where human experts score AI outputs, helping models learn to produce more accurate and reliable responses). As models become more capable, the tasks used to test and refine them require increasingly specialized expertise — exactly the professional talent Handshake's network can supply at scale.
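To make "high-quality human feedback data" concrete, here is a minimal sketch of what a single expert evaluation record might look like (the schema and rubric are hypothetical, not Handshake's actual format):

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class ExpertEvaluation:
    """One graded AI output, scored by a domain expert against a rubric."""
    task_id: str
    expert_domain: str   # e.g., "law", "medicine", "mathematics"
    model_output: str
    accuracy: int        # 1-5: is the answer factually correct?
    reasoning: int       # 1-5: are the intermediate steps sound?
    rationale: str       # free-text justification, often the most valuable part

    def overall(self) -> float:
        return mean([self.accuracy, self.reasoning])

# Hypothetical example record
ev = ExpertEvaluation(
    task_id="tax-law-0042",
    expert_domain="law",
    model_output="Under IRC §1031, the exchange qualifies because...",
    accuracy=2,
    reasoning=4,
    rationale="Cites the right statute but misses the limitation to real property.",
)
print(ev.overall())  # 3.0 -> labs pay for exactly this kind of graded signal
```

Labs aggregate thousands of such graded records to test and refine models, which is why domain expertise, not just labeling throughput, now commands a premium.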
A structural caveat: data labeling startups (companies that organize human annotation and evaluation of AI outputs) lack defensible moats (durable competitive advantages that prevent rivals from replicating the business). Individual expert annotators often work with multiple AI labs simultaneously. Contracts shift between providers within a single quarter. The $1B figure reflects "gross annualized" revenue — which may include forward projections rather than confirmed bookings. Still, the directional signal is clear: the AI training supply chain is now a billion-dollar market segment that barely existed 18 months ago.
Three AI Budget Actions Before Your CFO Calls
Uber's experience converts an abstract budget risk into a concrete case study. Three immediate steps organizations should take:
- Track AI tool costs at the team level, not company-wide. A single AI coding tool at $200/month per engineer, multiplied across 1,000 engineers, equals $200,000/month — $2.4 million per year. Uber's mistake was treating AI as a single budget line. Per-team caps catch runaway usage before it reaches the CFO's desk (see the sketch after this list).
- Revisit your cloud provider assumptions for AI workloads. The Microsoft vs. Google infrastructure balance has shifted meaningfully since mid-2024. If your organization is evaluating long-term AI compute contracts or cloud agreements, this is no longer the same market it was 18 months ago.
- Watch Handshake's trajectory as a bellwether. Domain-expert AI training is a $1 billion business that grew from near-zero in 12 months. Professions that feed AI models — legal, medical, academic — are becoming unexpected AI revenue centers. The companies that identify this early have a head start.
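For the first action, per-team tracking does not need to be sophisticated to be useful. A minimal sketch (team names, seat counts, and spend figures are illustrative assumptions):

```python
# Illustrative per-team tracking: flag teams whose monthly AI spend
# exceeds expectations before it rolls up into one company-wide line item.
SEAT_PRICE = 200  # $/engineer/month, as in the example above

teams = {
    # team: (engineers, observed monthly spend in $)
    "payments": (120, 61_000),
    "maps": (80, 15_500),
    "rider-app": (200, 44_000),
}

for team, (seats, spend) in teams.items():
    budget = seats * SEAT_PRICE  # expected spend at per-seat list price
    if spend > budget:
        overrun = spend / budget
        print(f"ALERT {team}: ${spend:,} vs ${budget:,} budget ({overrun:.1f}x)")
```

The point is granularity: a 2.5x overrun on one team is visible here, but invisible inside a single company-wide line item.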
The AI economy is maturing faster than most budget cycles can track. Uber's CTO is back at the drawing board. You don't have to be — but only if you start measuring now.