AI for Automation
2026-04-07 · nvidia · amd · cuda · gpu · ai-infrastructure · openai · ai-chips · claude-code

Nvidia Blocked AMD's Anti-CUDA Summit — OpenAI Took Note

Nvidia pre-booked every San Jose venue for years to shut out AMD's anti-CUDA event. Now OpenAI and Meta quietly train on AMD chips. The GPU war is on.


Jeff Tatarchuk had a simple plan: rent a venue in San Jose, fill it with hundreds of AI engineers, and spend a day talking about Nvidia alternatives. Simple — except Nvidia had already booked every major venue in the city. Not for next month. For the next several years. The incident crystallizes a central tension in modern AI infrastructure: Nvidia's CUDA dominance now extends into physical venue control, while AMD quietly gains ground in real AI training workloads.

So Tatarchuk did what founders do when blocked: he adapted. He renamed the event from "Beyond CUDA" to "Beyond Summit," moved it to San Francisco, and scheduled it for weeks after Nvidia's GTC conference — so as not to directly antagonize the chip giant his attendees all still depend on.

[Image: GPU server racks inside a modern AI data center used for Nvidia CUDA and AMD AI training workloads]

The AI Venue War Nobody Wanted to Talk About

TensorWave, the company Tatarchuk runs, is an AMD-backed GPU cloud provider — a company that rents out AMD's MI300X chips (Nvidia's biggest rival AI accelerator) to teams who want an alternative to Nvidia's H100s. Hosting "Beyond CUDA" was meant to be a clear statement of intent.

CUDA (short for Compute Unified Device Architecture) is Nvidia's proprietary software layer — the glue that makes its chips work with AI frameworks like PyTorch and TensorFlow. It has been the industry standard since 2006 and is the single biggest reason Nvidia can charge premium prices even as AMD's raw chip performance closes the gap. Companies don't just buy Nvidia hardware; they buy 20 years of optimized code libraries and developer muscle memory. Switching away from CUDA means rewriting significant portions of your AI software stack.
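To make the lock-in (and the narrowing escape hatch) concrete, here is a minimal sketch of how a PyTorch user might check which GPU backend their install targets. It relies on one detail of AMD's ROCm build of PyTorch: it reuses the `torch.cuda` namespace, which is a large part of why much (though not all) PyTorch code can run on MI300X-class GPUs without source changes. The function name is our own; treat this as an illustrative probe, not an official API pattern.

```python
def gpu_backend():
    """Best-effort report of which GPU backend a local PyTorch build targets.

    Illustrative sketch only. AMD's ROCm build of PyTorch reuses the
    torch.cuda namespace, so torch.cuda.is_available() is True on AMD
    GPUs too; torch.version.hip distinguishes the two builds.
    """
    try:
        import torch
    except ImportError:
        return "no-pytorch"
    if torch.cuda.is_available():
        # torch.version.hip is a version string on ROCm builds, None on CUDA builds.
        return "rocm" if torch.version.hip else "cuda"
    return "cpu-only"

print(gpu_backend())
```

Code that sticks to this shared namespace ports relatively easily; code calling CUDA-specific kernels or libraries is where the real rewrite cost lives.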

The venue problem revealed something deeper. Multiple sponsors grew nervous about being seen publicly opposing Nvidia. The event's name was softened; its location moved north. "There is stuff being built outside of the walled gardens of Nvidia," Tatarchuk told attendees — but even he conceded that "many companies still depend on Nvidia hardware." Even at an event about Nvidia alternatives, explicitly naming Nvidia as the adversary was considered too risky.

The Quiet AMD Migration Already Happening Behind Closed Doors

Despite the public silence, the shift is already underway. Both OpenAI and Meta Platforms have recently announced large deals with AMD for AI compute. These aren't test pilots — they're large-scale training runs (the most compute-intensive workloads in AI, where firms previously relied almost exclusively on Nvidia).

"AI labs are starting to do large-scale training on AMD, which wasn't really talked about too much before," Tatarchuk said. The key phrase is "wasn't really talked about" — the shift is happening, but companies are deliberately quiet about it.

Why the secrecy? Nvidia's power extends far beyond chips. It shapes ecosystem relationships, conference access, enterprise contracts, and potentially future hardware allocation. Publicly declaring a move to AMD while still needing Nvidia for the bulk of your workloads is a political calculation as much as a technical one. As Tatarchuk put it: "There are so many sophisticated companies that don't need CUDA" — but even his own sponsors hesitated to say so on the record.

[Image: AMD's MI300X GPU, the chip quietly winning large-scale AI training contracts from OpenAI and Meta as a CUDA alternative to Nvidia]

Stanford's Sold-Out AI Course — and Why Students Call It "Compute Coachella"

At Stanford, an AI infrastructure course taught by early Anthropic investor Anjney Midha is so oversubscribed that students have nicknamed it "Compute Coachella." The speaker lineup for a single 10-week undergraduate course includes:

  • Jensen Huang — Nvidia CEO
  • Lisa Su — AMD CEO
  • Sam Altman — OpenAI CEO
  • Satya Nadella — Microsoft CEO
  • Andrej Karpathy — OpenAI co-founder and former Tesla AI director

The student project is actual frontier AI research conducted under tight computational constraints — real research, not toy demos. The fact that both Jensen Huang and Lisa Su are speaking to the same undergraduate cohort signals that academia already treats the Nvidia-vs-AMD debate as a genuine strategic question. The next generation of AI engineers is being trained to think beyond CUDA from day one.

Sam Altman Wants OpenAI Public by Q4 2026 — His Own CFO Disagrees

All of this infrastructure competition plays out against a backdrop of extraordinary financial pressure. Altman has committed to $600 billion in spending over 5 years and is privately pushing for an OpenAI IPO (initial public offering — when a private company first sells shares to the public) by Q4 2026.

The problem: OpenAI's own CFO, Sarah Friar, has privately expressed concerns that the company won't be ready. OpenAI's projected cash burn — the rate at which it spends more than it earns — is expected to exceed $200 billion before reaching profitability. That's not a typo. Two hundred billion dollars in cumulative losses before the business turns cash-positive, against a CEO who wants to IPO in under 12 months.

The Altman-Friar rift matters because CFOs at this level don't express concerns about IPO readiness casually. If Friar is right and the company rushes to market under CEO pressure, investors could end up holding shares in a company burning cash at a rate that makes even the most aggressive tech growth stories look conservative.

The AI Infrastructure Risks Nobody Is Pricing In

Compounding the financial pressure are real-world vulnerabilities that standard valuation models struggle to quantify:

  • Geopolitical threat: Iran's Islamic Revolutionary Guard issued a direct threat of "complete and utter annihilation" against OpenAI's $30 billion Stargate data center in Abu Dhabi — explicitly citing U.S. tech leaders' 2025 announcement of the facility.
  • Collapsed mega-deal: A planned 2-gigawatt data center partnership between CoreWeave and AI startup Poolside for a West Texas facility fell apart entirely. Poolside is now searching for new customers to anchor the project.
  • Energy scramble: Microsoft is in exclusive talks to acquire a natural gas-fired Texas power plant specifically to fuel data center expansion — a signal that electricity supply has become a strategic asset, not just an operating expense.
  • Starlink revenue gap: T-Mobile's Starlink Mobile deal is generating roughly $100 million in milestone payments — described internally as "tiny" — despite SpaceX executives projecting "hundreds of millions of customers globally."

Inside Meta: 85,000 Workers Competing for the Title of "Token Legend"

While the industry debates infrastructure, inside Meta's offices a quiet AI arms race is accelerating. The company runs an internal leaderboard tracking Claude usage (measured in tokens — the units of text an AI model reads and writes, roughly equivalent to 3/4 of a word) across more than 85,000 employees. The top 250 power users earn titles like "Session Immortal" and "Token Legend."
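For readers who want the token arithmetic behind a leaderboard like this, the ~¾-word-per-token figure inverts to roughly 4/3 tokens per word. A back-of-envelope estimator (our own sketch; real counts depend entirely on the model's tokenizer) looks like:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the common ~4/3 tokens-per-word heuristic.

    Real counts depend on the specific model's tokenizer; this is only
    useful for back-of-envelope usage accounting.
    """
    words = len(text.split())
    return round(words * 4 / 3)

# 9 words -> roughly 12 tokens under the heuristic
print(estimate_tokens("The quick brown fox jumps over the lazy dog"))
```

At leaderboard scale, even this crude estimate is enough to rank heavy users against light ones.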

The program — internally dubbed "Claudeonomics" — deliberately gamifies AI adoption at workforce scale. It's a calculated strategy to normalize heavy AI tool use across the company, and it's running at a firm that also builds its own AI models (the openly released Llama series). The irony is notable: Meta is training its workforce on a competitor's product.

On a related signal, Apple's App Store saw an 84% jump in new app submissions during a recent quarter. Analysts attribute a meaningful portion of this to the "vibe coding" effect — non-programmers using AI automation tools to design and ship real applications without traditional software development skills. The implication: AI coding tools aren't just accelerating existing developers; they are actively creating new ones.

Three AI Automation Actions Worth Taking This Week

1. Run an AMD benchmark for your AI workloads. TensorWave offers AMD MI300X cloud access at prices that undercut Nvidia's H100 market rates. If your team uses PyTorch (the most widely used deep learning framework) and doesn't rely on CUDA-specific libraries, a cost-comparison benchmark is worth the few hours it takes. OpenAI and Meta running real training runs on AMD means the software compatibility story has materially improved.

2. Test the Claude Code + Codex integration. OpenAI released a plugin that lets its Codex AI system (specialized for understanding and generating code) run inside Claude Code for code-review tasks. "We've seen Claude Code users bring in Codex for code review... so we thought: why not make that easier?" said Romain Huet, OpenAI's Head of Developer Experience. The practical result: primary development in Claude Code, with Codex review passes — inside a single environment, no switching required.

3. Watch the 2026 IPO pipeline. SpaceX, OpenAI, and Anthropic are all targeting public debuts this year. When multiple large private companies go public simultaneously, capital currently locked in private funds needs to move — and existing tech investors often sell listed holdings to fund allocations to the new offerings. That creates public market volatility entirely independent of underlying business performance.
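The benchmark math from step 1 reduces to one number: dollars per million training tokens at your measured throughput. A minimal sketch — every price and throughput below is hypothetical and purely for illustration; substitute your own quotes and measurements:

```python
def cost_per_million_tokens(hourly_rate_usd: float, tokens_per_second: float) -> float:
    """Cloud cost to process one million training tokens at a measured throughput."""
    tokens_per_hour = tokens_per_second * 3600
    return hourly_rate_usd / tokens_per_hour * 1_000_000

# Hypothetical rates and throughputs -- measure your own workload before deciding.
h100 = cost_per_million_tokens(hourly_rate_usd=4.00, tokens_per_second=12_000)
mi300x = cost_per_million_tokens(hourly_rate_usd=3.00, tokens_per_second=10_000)
print(f"H100: ${h100:.3f}/Mtok   MI300X: ${mi300x:.3f}/Mtok")
```

The point of the exercise: a cheaper hourly rate only wins if throughput doesn't drop proportionally more, which is exactly what a few hours of benchmarking tells you.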
