xAI Colossus 2: 200MW AI Datacenter Built in 6 Months vs. 15
xAI built its Colossus 2 AI datacenter in 6 months — 2.5x faster than OpenAI. Controls 67% of its power partner's orderbook. Near-zero revenue, $200B valuation.
In August 2025, xAI powered on 200 megawatts of cooling at Colossus 2 — a datacenter that didn't exist six months earlier. OpenAI, Oracle, and Crusoe each needed 15 months to reach the same scale. xAI did it in six. This is the story of the fastest major AI infrastructure build in history — and the uncomfortable financial reality behind it.
Six Months to 200MW: How xAI Built the Fastest AI Datacenter in History
Building a hyperscale datacenter (a warehouse-scale facility housing tens of thousands of specialized AI chips) typically takes at least 15 months. Power permits, cooling infrastructure, grid connections, and supply chains all move at institutional pace. xAI ignored that playbook entirely.
When xAI acquired the Southaven, Mississippi site on March 7, 2025, it was empty land. By August 22, 2025, 119 industrial air-cooled chillers, each roughly the size of a shipping container, were running at full capacity. That's 200 megawatts of cooling, enough to support approximately 110,000 Nvidia GB200 GPUs deployed in NVL72 racks (the NVL72 is Nvidia's 72-GPU rack-scale system, its most powerful AI training configuration, capable of performing quadrillions of calculations per second).
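A quick sanity check on that cooling figure: dividing 200MW across ~110,000 GPUs gives the thermal budget available per chip. The per-GPU interpretation below is an illustrative assumption, not a figure from the article:

```python
# Back-of-envelope check: does 200 MW of cooling plausibly
# cover ~110,000 GB200-class GPUs?
cooling_mw = 200
gpus = 110_000

# Cooling budget per GPU, including its share of rack overhead
# (CPUs, networking, power conversion) -- an assumed framing.
kw_per_gpu = cooling_mw * 1000 / gpus
print(f"{kw_per_gpu:.2f} kW of cooling per GPU")  # ~1.82 kW
```

At roughly 1.8 kW per GPU inclusive of overhead, the figure is in the plausible range for GB200-class hardware, which supports the article's pairing of 200MW with ~110,000 chips.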
The timeline comparison, according to SemiAnalysis (a semiconductor research firm whose proprietary datacenter model predicted Oracle's infrastructure deals months before public announcement):
- xAI Colossus 2: 6 months from empty land to 200MW operational
- Oracle's comparable facility: 15 months
- Crusoe Energy's build: 15 months
- OpenAI's infrastructure projects: 15 months average
That's a 2.5x speed advantage — achieved not through algorithmic breakthroughs, but through aggressive regulatory navigation and supply-chain leverage.
xAI's Power Strategy: Gas Turbines and the Mississippi-to-Memphis Grid
The real audacity wasn't the construction speed; it was the power strategy. Colossus 2 needs an enormous amount of electricity, but local permitting in Southaven takes months. xAI's solution: build the power generation in Mississippi and pipe it across the state border to the Memphis, Tennessee datacenter campus.
Seven 35-megawatt gas turbines (industrial generators — each powerful enough to supply a small town) now operate in Southaven, producing 245 megawatts total. Mississippi regulators issued temporary permits valid for only 12 months while the permanent permitting process continues. Tesla Megapacks (industrial-scale battery storage units that buffer power between generation and consumption) serve as the physical bridge between the Mississippi generation site and the Memphis datacenter.
The broader power infrastructure runs through Solaris Energy Infrastructure, a NYSE-listed company that owns 600 megawatts of gas turbines globally. The numbers are extraordinary:
- ~400MW of Solaris's global fleet currently serves xAI operations directly
- xAI controls 1.1 gigawatts of power ordering through Solaris — 67% of Solaris's entire 1,700MW orderbook
- A joint venture (50.1% Solaris / 49.9% xAI) owns 900MW of turbine capacity outright
- The joint venture spent $112 million in capital expenditure in Q2 2025 alone
- Solaris's Q2 2025 revenues jumped over 50% quarter-over-quarter, driven almost entirely by the xAI deal
Solaris targets 1.1 gigawatts fully operational for xAI by Q2 2027. The aspirational ceiling: 1.5+ gigawatts of gross power — the output of a large nuclear power plant, dedicated entirely to running AI chips.
Inside Colossus 1: The World's Largest Single AI Compute Cluster
While Colossus 2 races toward completion, its predecessor — Colossus 1 in Memphis — is already the largest fully operational single-coherent AI training cluster on Earth. (Google runs larger AI setups, but spread across multiple geographically separate locations rather than one connected facility.)
Colossus 1 current specs:
- ~200,000 H100 and H200 GPUs (Nvidia's previous flagship AI chips, each valued at approximately $25,000–$30,000 on the market)
- ~30,000 GB200 GPUs deployed in NVL72 racks (Nvidia's newest generation, roughly 3–5x more computationally powerful per chip than H100s)
- ~300MW total power consumption — equivalent to running 225,000 average American homes simultaneously
- Originally built in just 122 days
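The "225,000 homes" comparison in the specs above is simple division. Assuming the commonly cited figure of about 1.33 kW average continuous draw per US household (an assumption added here, not a number from the article):

```python
# Check the '225,000 average American homes' equivalence for 300 MW.
# Assumption (illustrative): a US home averages ~1.33 kW of
# continuous draw (roughly 11,600 kWh per year / 8,760 hours).
facility_mw = 300
kw_per_home = 1.33

homes = facility_mw * 1000 / kw_per_home
print(f"Equivalent households: {homes:,.0f}")  # ~225,564
```

The result lands almost exactly on the article's 225,000 figure, so the comparison holds under standard household-consumption assumptions.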
Colossus 2 will add capacity for approximately 110,000 additional GB200 GPUs at full buildout. The combined facility footprint begins at 1 million square feet and can expand to 2 million square feet using a two-story ultra-high-density configuration, a design that stacks far more compute per floor than standard single-story datacenter layouts allow.
Zero Revenue, $200 Billion Valuation: xAI's Uncomfortable Funding Math
Here is the tension at the center of this infrastructure sprint: xAI has not generated any meaningful external revenue. According to SemiAnalysis, most of xAI's reported income is inter-company transfers — money flowing from Elon Musk's X platform to xAI for AI services. As SemiAnalysis describes it: "Elon's right pocket paying his left pocket."
Despite this, xAI is reportedly pursuing a $40+ billion funding round at a rumored $200 billion valuation. SemiAnalysis notes it is "hard for most investors to justify xAI as having a valuation higher than Anthropic" — a company that has genuine external commercial revenue from enterprise Claude contracts.
The financial picture in full:
- Capital requirements for Colossus 2 completion: described as "tens of billions of dollars"
- Grok app revenue has "flattened out in recent months" despite new feature launches
- Key investors: Kingdom Holding ($800M xAI stake), Vy Capital ($700M Twitter investment), Qatar's QIA ($375M Twitter stake)
- Middle East sovereign wealth funds are the primary reported targets for the new $40B+ round
SemiAnalysis's read on the motivations: "No one truly knows how levered Elon is already, but it is widely understood he can always sell and unlock a lot more of his dry powder into xAI. Elon will do everything he can to not lose to Sam Altman." The infrastructure race isn't purely technical — it's personal.
xAI's 1.5 Gigawatt AI Infrastructure Roadmap — and What It Means for AI Users
xAI's final target is 1.5+ gigawatts of total gross power, dedicated entirely to AI computation. To reach it, the roadmap runs:
- 460MW installed or under active construction as of mid-2025
- 425MW of additional capacity available to contract immediately
- 1.1GW targeted fully operational by Q2 2027
- 1.5+ GW aspirational total at full buildout
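The roadmap milestones above can be tallied in a quick sketch. The stage figures come from the article; the near-term sum and percentage-of-aspiration calculations are added here for illustration:

```python
# xAI power roadmap, per the figures reported in the article (MW).
roadmap_mw = {
    "installed_or_under_construction": 460,
    "available_to_contract": 425,
    "target_q2_2027": 1100,
    "aspirational_full_buildout": 1500,
}

# Capacity xAI can reach without waiting on new turbine orders.
near_term = (roadmap_mw["installed_or_under_construction"]
             + roadmap_mw["available_to_contract"])
print(f"Near-term reachable capacity: {near_term} MW")  # 885 MW

# How far along the build is against the aspirational ceiling.
progress = roadmap_mw["installed_or_under_construction"] / roadmap_mw["aspirational_full_buildout"]
print(f"Built or in progress vs. 1.5 GW goal: {progress:.0%}")  # 31%
```

In other words, even counting everything under construction plus immediately contractable capacity, xAI sits at 885MW, well under the 1.1GW it has committed to deliver by Q2 2027.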
Nvidia GPU allocations for large-scale model training are already secured for early 2026 — meaning the next generation of Grok models will train on this infrastructure within months of it going fully live. SemiAnalysis notes that when complete, "the datacenter capacity will be ready for the GPUs to be moved in to create the largest single datacenter in the world, yet again."
Whether you use Grok, Claude, or ChatGPT, the underlying compute race shapes what's possible in the AI tools you rely on every day. The company that builds the most compute the fastest will likely set the tempo of AI capabilities for the next five years. Right now, xAI is betting it can outbuild the entire industry on speed alone, and that Elon Musk's leverage will outlast a revenue gap that would sink any conventional startup.