2026-04-10 · Tags: OpenAI, Claude, Anthropic, AI pricing, Claude Cowork, Gemini, subscription

OpenAI just halved your $200/month Pro subscription

OpenAI cut Pro from $200 to $100/month. Anthropic expanded Claude Cowork to all paid tiers. The AI subscription price war just hit your wallet.


OpenAI just cut its Pro plan from $200 to $100 per month — a 50% price reduction aimed at heavy Codex users (developers who rely on AI-powered code completion for dozens to hundreds of tasks per day). The announcement landed simultaneously with Anthropic expanding Claude Cowork to every paid plan, Google adding interactive charts to Gemini, and Meta shipping its first closed-source frontier model. Four major AI platforms moved within 48 hours. This is what the subscription pricing war looks like in real time.

If you're paying for any AI tool right now, what your money buys just changed significantly.

OpenAI's $200 Plan Gets Cut in Half

OpenAI Pro plan price reduced from $200 to $100 per month

The old $200/month Pro plan was OpenAI's premium tier for developers who needed maximum access to GPT-4o and Codex (OpenAI's code-specialized AI model, trained specifically for writing and reviewing software code). At $200, it was a tough sell against Anthropic's Claude and Google's Gemini at competitive price points.

The new $100/month plan includes the same capabilities, plus "significantly more Codex usage" — phrasing that signals OpenAI anticipates developers running coding agents (AI systems that automatically write, test, and fix code without constant human oversight) at much higher volumes in 2026.

What stays exactly the same:

  • GPT-4o access — same flagship model as before
  • Full API access — no reduction in rate limits or features
  • Advanced voice and canvas features remain included

For existing Pro subscribers, the price drop is automatic: no plan change or action required. Your next invoice reflects the new $100 rate, which works out to $1,200 saved per year.

Anthropic Fires Back With Features, Not Price Cuts

Claude Cowork expanded to all paid plans on macOS, Windows, and Zoom

Anthropic didn't lower its prices. Instead, it expanded what paid subscribers receive. Claude Cowork — a real-time AI collaboration workspace (a shared environment where Claude actively participates in your team's documents, conversations, and decisions in real time) — is now available to all paid Claude subscribers, not just high-tier enterprise accounts.

The Cowork expansion delivers three meaningful upgrades:

  • Cross-platform availability: Now works on both macOS and Windows, removing the Mac-only limitation for teams with mixed operating systems
  • Organizational controls: Team administrators can set usage policies, visibility settings, and permissions across the entire workspace — previously available only through enterprise contracts
  • Zoom integration: Claude joins video calls as an active participant, summarizing discussions, drafting follow-ups, and answering questions during live meetings

Simultaneously, Anthropic announced Claude Managed Agents — a hosted platform (Anthropic manages all the servers and infrastructure, so your engineering team doesn't have to) for autonomous agents (AI systems that complete multi-step workflows without human supervision at each step — research, draft, publish, repeat). Notion and Rakuten are already using it in early beta. Current access is for select API partners.

Google, Meta, and Zhipu: Everyone Moved at Once

OpenAI and Anthropic weren't the only ones moving. The same 48-hour window compressed what normally spreads across weeks into a single competitive sprint:

Google Gemini: live interactive charts inside chat

Gemini can now generate editable charts and graphs directly inside the conversation interface — and let you modify them without ever leaving the chat. Ask it to filter by date range, switch from bar to line chart, or rescale the Y-axis, and it updates in place. For analysts and marketers who currently export data to separate tools just to build basic visualizations, this removes a significant workflow friction point that competing platforms haven't yet matched natively.

Meta Muse Spark: the first closed-source Meta frontier model

Meta launched Muse Spark without releasing model weights (meaning you cannot download, run locally, or fine-tune it — unlike the Llama series, which anyone can use freely). This marks Meta's strategic pivot into the closed frontier model market alongside OpenAI, Anthropic, and Google. Early benchmarks show Muse Spark "closing the gap" to the top three platforms, though exact performance scores haven't been publicly disclosed at launch.

Zhipu GLM-5.1: free, MIT-licensed, runs entirely on your machine

Chinese AI lab Zhipu released GLM-5.1 under the MIT license (fully open, commercially usable, no royalties required). Its standout capability: iterative code strategy refinement across hundreds of retry loops — making it unusually persistent on hard engineering problems compared to standard models. It's live on GitHub today. No subscription, no API key, no cloud account. You can run it entirely on your own machine at zero cost.

What Stanford's New Study Means for Your Agent Setup

Stanford researchers published a finding this week that reframes how engineering teams should evaluate AI infrastructure. Multi-agent systems (setups where multiple AI instances run in parallel — one researching, one writing, one reviewing, all coordinating automatically) do show performance advantages over single-model approaches, but the advantage comes primarily from extra compute, not from the coordination logic being architecturally clever.

In plain terms: if you're planning to build a complex multi-agent pipeline because you believe the "teamwork" between AI instances is the core magic, the data suggests it mostly isn't. The performance gains come primarily from spending more inference compute (more model passes over the problem), not from the coordination logic. A single powerful model running multiple passes might match a multi-agent system at lower engineering complexity and cost.
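The single-model alternative the study points toward can be sketched as a best-of-n loop: run one model several times on the same task and keep the strongest draft. This spends roughly the same extra compute a multi-agent pipeline would, with no orchestration layer. A minimal sketch, assuming a hypothetical `call_model` stub standing in for your actual API call (the stub and its scoring are illustrative, not any vendor's API):

```python
import random

def call_model(prompt, seed):
    """Stand-in for one inference pass (hypothetical; replace with a
    real API call). Returns an (answer, quality_score) pair, where the
    score would come from a grader or self-evaluation in practice."""
    rng = random.Random(hash((prompt, seed)))
    return f"draft-{seed}", rng.random()

def best_of_n(prompt, n=5):
    """Single model, n independent passes: keep the highest-scoring
    draft. Same extra compute as a small multi-agent setup, but no
    coordination logic to build or maintain."""
    drafts = [call_model(prompt, seed) for seed in range(n)]
    return max(drafts, key=lambda d: d[1])

answer, score = best_of_n("Summarize the Stanford finding", n=5)
print(answer, round(score, 2))
```

If this simple baseline matches your multi-agent pipeline on your own evaluation set, the study suggests the pipeline's advantage was the compute, not the architecture.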

Before committing months to custom orchestration, explore simpler automation patterns in our AI automation guides — many teams achieve 80% of the results with a fraction of the pipeline complexity.

Five Things You Can Act On Right Now

With four major platforms moving simultaneously, here's what's immediately actionable for each user type:

  • OpenAI Pro subscribers: Your plan drops to $100/month automatically — verify your next billing statement at platform.openai.com to confirm the change
  • Claude paid users: Open your Claude dashboard on macOS or Windows and look for the Cowork workspace tab — it's now included in your existing plan at no extra charge
  • Developers wanting free local AI: GLM-5.1 is on GitHub under MIT license — clone it and run it entirely on your own machine, zero subscription required
  • Data and analytics teams on Gemini: Paste a dataset into Gemini chat and ask for an interactive chart — then modify it live to experience the new feature firsthand
  • Enterprise Claude users: Ask your Anthropic account team about the Claude Managed Agents beta — Notion and Rakuten are already using it in early beta

OpenAI's $100 signal is clear: they're cutting price to hold market share against Anthropic's feature momentum. Anthropic's signal is equally clear: they're competing on integrated capabilities, not price. You're watching the most competitive AI product window since 2023 — and this time, the competition is working in your favor. Your monthly bill just dropped by $100. Your existing plan gained features you didn't pay for. A free local alternative got more capable. Watch the next 30 days closely: second waves almost always follow sprints this fast.

