AI for Automation
2026-04-14 · Anthropic · Claude Mythos · Claude Sonnet 4.5 · AI security · enterprise AI · AI automation · GitHub Copilot CLI · Model Context Protocol

Claude Mythos: Anthropic Locks Top AI to 15 Companies

Anthropic's Claude Mythos Preview finds critical security flaws — but only 15 Project Glasswing partners can access it. Here's what your team can use today.


Anthropic's most capable AI model quietly went live this month — and the overwhelming majority of developers will never get to use it. Claude Mythos Preview, which demonstrated the ability to discover critical security vulnerabilities in internal tests, is restricted to a small consortium of partner companies under a program called Project Glasswing. If you're not in that group, you're working with Claude Sonnet 4.5.

This isn't just one product update. April 2026 has exposed a structural split forming inside the AI industry: open-source models expanding access at the bottom, while the most capable models get locked behind enterprise-only gates at the top. Here's exactly what shipped, who can access it, and what your team should do about it.

Claude Mythos Preview: The AI Your Team Can't Touch Yet

Claude Mythos Preview is Anthropic's newest model, purpose-built for advanced reasoning (the ability to work through complex, multi-step problems), coding, and cybersecurity. In internal evaluations, it demonstrated the ability to identify critical security flaws — the kind of vulnerabilities that typically take experienced human security engineers hours to find.

Claude Mythos Preview — Anthropic's restricted enterprise AI model for advanced reasoning and cybersecurity

Access flows through Project Glasswing, a closed consortium of approximately 10–15 partner companies. No public waitlist has been announced. No pricing has been disclosed. Anthropic hasn't stated when or whether Mythos will be made broadly available. The model targets advanced reasoning, coding, and cybersecurity — three categories where the capability gap between Mythos and publicly available models appears to be widest.

Meanwhile, Claude Sonnet 4.5 — the publicly accessible version — is being actively studied by Anthropic's own interpretability team (researchers who examine what's happening inside AI models to understand why they behave as they do). A recent Anthropic paper examined how Sonnet 4.5 internally represents concepts related to emotions and how those representations influence model behavior. The research is public. The model is broadly available. The contrast with Mythos is deliberate.

How Anthropic Split the AI Automation Market in Two

The bifurcation (splitting into two distinct access tiers) isn't unique to Anthropic — but the capability gap between the tiers is sharper here than at most vendors:

  • Public tier: Claude Sonnet 4.5 — broadly available, subject to published interpretability research, accessible via standard API
  • Enterprise tier: Claude Mythos Preview — advanced reasoning and cybersecurity capabilities, limited to Project Glasswing consortium partners only

The practical effect: security teams outside the consortium cannot use Mythos to probe their own infrastructure. Engineering teams building compliance and risk systems are working with Sonnet 4.5 or third-party alternatives. The model that Anthropic's own internal tests credit with discovering critical security flaws isn't on the open market.

This structure favors organizations with existing Anthropic enterprise relationships, cloud platforms (AWS, Azure) that bundle AI access into existing contracts, and companies already inside Project Glasswing. Independent developers and small-to-mid-size teams are operating from a fundamentally different toolbox than the consortium's members — even when running identical security workloads.

What AI Tools Are Actually Open Right Now

GitHub Copilot CLI general availability — AI automation for terminal with GPT-5.4 agentic workflows

While Mythos stays locked, three significant tools reached general availability (meaning: fully released and production-supported, not just in preview) this month:

GitHub Copilot CLI — Terminal AI Automation Goes Live

GitHub's terminal-based AI assistant (a coding tool that operates inside your command line, not just a code editor) hit general availability with GPT-5.4 integration and a new "Autopilot" mode for multi-step autonomous workflows. Enterprise teams gain built-in telemetry (usage tracking data) for monitoring team-level activity. The CLI competes directly with Claude Code and Cursor Composer for context window management and Model Context Protocol integrations.

Google Gemma 4 — On-Device AI That Runs on Your Phone

Google's Gemma 4 (an open-weight model — meaning the underlying parameters are freely downloadable and inspectable) launched with a focus on local-first inference: running AI processing directly on Android devices without sending data to a remote server. Available through Android Studio and Google AI frameworks, Gemma 4 targets app developers who need on-device AI (processing that happens on the hardware itself) for use cases where data privacy or connectivity constraints matter.

Google Colab MCP Server — Cloud Compute for AI Agents

Google open-sourced a Colab MCP Server, which allows AI agents (automated systems that take multi-step actions) to offload compute-intensive tasks to cloud-based Colab notebooks via the Model Context Protocol (MCP — a standardized communication format that lets AI tools connect to external services). A coding assistant running locally can now spin up a cloud compute job without any manual configuration by the user.
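To make the protocol concrete: MCP messages are JSON-RPC 2.0, and a tool invocation uses the `tools/call` method. The sketch below builds such a request in plain Python. The tool name `run_notebook_cell` and its arguments are assumptions for illustration — the Colab MCP Server's actual tool names may differ.

```python
import json


def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build an MCP tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })


# Hypothetical tool name; shown only to illustrate the message shape.
msg = make_tool_call(1, "run_notebook_cell", {"code": "print(2 + 2)"})
parsed = json.loads(msg)
```

Because every MCP server speaks this same envelope, an agent that can emit `tools/call` messages can target a Colab notebook, a database, or any other MCP-wrapped service without bespoke integration code.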

The MCP ecosystem is consolidating faster than most observers expected. The MCP Dev Summit North America 2026 — held April 2–3 at the New York Marriott Marquis — drew 1,200 developers to discuss protocol hardening, gRPC integration (a high-performance remote procedure call framework for software-to-software communication), and production deployment standards. Amazon and Uber are both actively implementing MCP for enterprise AI agent workflows, signaling that the protocol is moving from experimental to standard infrastructure.

MCP Dev Summit North America 2026 — 1,200 AI automation developers discuss Model Context Protocol enterprise standards in New York City

Real Companies Already Shipping AI Automation

Beyond model announcements, April's most instructive stories came from production deployments at companies that have long since moved past "should we try AI?" to "here's what it did for us."

Lyft: AI Automation Cuts Translation from Weeks to Minutes

Lyft's engineering team published results from a dual-path localization pipeline (a system that routes content through two separate tracks based on complexity — AI-handled for standard cases, human-reviewed for complex edge cases). The system now processes the majority of app content translations in minutes. International release cycles previously bottlenecked by manual workflows — spanning regional idioms, legal messaging, and brand consistency enforcement — now move at software speed.

The result changes what's operationally possible for a global consumer app. The team's framing of "minutes vs. previous manual timeline" suggests a step-change in throughput, not incremental improvement. Exact prior timelines weren't disclosed, but the shift is significant enough that Lyft's international release velocity changed materially as a direct result.
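The core of a dual-path pipeline is a routing function that decides which track a string takes. Lyft hasn't published its routing rules, so the heuristics below (length threshold, keyword markers for legal or promotional copy) are purely illustrative — a minimal sketch of the pattern, not Lyft's implementation.

```python
def route_translation(content: str) -> str:
    """Route a localization job: 'machine' for standard strings,
    'human' for flagged edge cases.

    The length cutoff and marker list are hypothetical heuristics,
    not Lyft's actual rules.
    """
    risky_markers = ("terms of service", "refund", "liability", "promo")
    text = content.lower()
    if len(content) > 280 or any(marker in text for marker in risky_markers):
        return "human"
    return "machine"


jobs = [
    "Your driver is arriving now",
    "Refund policy update for EU riders",
]
routes = {job: route_translation(job) for job in jobs}
```

The design point is that the router is cheap and conservative: anything it isn't sure about falls through to human review, so speeding up the common case doesn't put legal messaging or brand-sensitive copy at risk.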

Etsy: 425 TB, 1,000 Shards, One Clean Migration

Etsy completed a migration of its MySQL sharding infrastructure (a database architecture that splits data across hundreds of servers to handle high-traffic loads) to Vitess (an open-source database clustering system originally developed at YouTube to handle YouTube-scale data). Migration scope: 1,000 individual database shards carrying 425 TB of total data. Migrations at this scale typically consume years of engineering capacity — completing this frees Etsy's team to redirect effort toward product development rather than database maintenance.

Etsy Vitess migration scope:

  • Source: MySQL sharding, 1,000 shards
  • Volume: 425 TB total data
  • Target: Vitess clustering platform
  • Impact: Engineering capacity freed for product work

The Tier You're Actually In — and What to Do About It

The two-tier market is real and it's accelerating. But the open tier is meaningfully more capable than it was 12 months ago. Here's where to focus based on your actual situation:

  • Security teams: Mythos isn't accessible to most, but Sonnet 4.5's security capabilities are solid for the majority of real-world workflows. Test it against your actual vulnerability discovery process before assuming you need Glasswing access.
  • Developer tool teams: MCP is the infrastructure bet with the clearest momentum right now — Amazon, Uber, and 1,200 summit attendees are all converging on it. Building MCP-compatible agents now positions your team ahead of competitors waiting for a clear standard winner.
  • Localization and content teams: Lyft's result — minutes instead of weeks — is achievable today with publicly available models. The dual-path architecture (AI handles standard content, humans handle complex edge cases) is the pattern worth replicating.
  • Android developers: Gemma 4's on-device focus is worth evaluating now if your app handles sensitive data that shouldn't leave the user's device — no cloud dependency required.
  • Database and infrastructure engineers: Etsy's Vitess result demonstrates that modern tooling maturity is enabling migrations that were previously too expensive or risky to attempt at all.

You can start with the open tools today. Google Colab MCP Server requires a Python environment with MCP protocol support. GitHub Copilot CLI integrates via the standard gh command-line tool with a Copilot subscription. Gemma 4 is available through Google AI frameworks and Android Studio. Explore what's available to your team right now — the gap between public and restricted tiers is wide, but the public tier has more real production capability than most teams have actually deployed yet.

