Anthropic Just Leaked a Model Above Claude Opus
An accidental exposure at Anthropic leaked Claude Mythos — a new tier above Opus with dramatically higher coding and cybersecurity scores. Cybersecurity stocks fell 4–6% on the news.
On March 26, 2026, a CMS (content management system — the software companies use to publish websites and blogs) misconfiguration at Anthropic accidentally exposed roughly 3,000 unpublished internal documents to the public internet. Cybersecurity researchers found and archived the cache before Anthropic could remove access. Inside: draft blog posts describing a model that doesn't officially exist yet — one Anthropic is calling Claude Mythos (also internally nicknamed Capybara).
Anthropic confirmed the leak was genuine. Within hours, the draft posts were circulating across X and Hacker News. By March 27, cybersecurity stocks were falling and Bitcoin had slid alongside software equities — all because of what a few internal documents revealed.
How 3,000 Documents Got Exposed
The leak originated from what appears to be a misconfigured CMS bucket — essentially a cloud folder that was accidentally set to "public" instead of "private." Cybersecurity researchers discovered the exposed cache and captured its contents before Anthropic secured the bucket. The exposed files were described as early-stage publication drafts, not production code or model weights.
What was exposed: announcement drafts, benchmark summaries, rollout strategy notes, and internal risk assessments. What was not exposed: model weights (the actual AI), source code, or customer data. The information leak was about plans, not capability.
Anthropic's statement confirmed the model's existence: "We're developing a general purpose model with meaningful advances in reasoning, coding, and cybersecurity" — and called it "a step change and the most capable we've built to date."
A New Tier Above Claude Opus — What That Actually Means
Right now, Anthropic offers three tiers of Claude. Think of them like car models: Haiku is the economy car (fast, affordable, handles everyday tasks), Sonnet is the mid-range (balanced performance and cost), and Opus is the flagship (most capable, most expensive). Mythos/Capybara would sit above Opus — a new category Anthropic has never publicly offered before.
Haiku → Sonnet → Opus → Mythos / Capybara (NEW — above all)
The leaked documents describe Capybara as "larger and more intelligent" than the Opus models — which were, until now, Anthropic's most powerful.
The leaked draft attributed "dramatically higher scores" to Mythos in three specific categories:
- Software coding — writing, reviewing, and debugging code at levels beyond Opus 4.6
- Academic reasoning — solving multi-step logic, mathematics, and science problems
- Cybersecurity — analyzing software for vulnerabilities and crafting exploitation strategies
No specific benchmark numbers (like SWE-bench scores or MMLU percentages) appeared in the leaked drafts — only qualitative descriptions. But "step change" is unusually strong language from Anthropic, a company known for conservative, measured communication.
Cost will be significant. The documents note the model is "very expensive for us to serve, and will be very expensive for our customers to use." Anthropic is actively working to reduce inference costs before broader release.
The Cybersecurity Warning That Spooked Wall Street
The section that generated the most alarm was the internal risk assessment. The leaked notes stated Mythos is "currently far ahead of any other AI model in cyber capabilities" — and that it "presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders."
In plain English: Anthropic is building an AI that can find security holes in software faster than security teams can patch them. That creates an arms race — attackers with access to Mythos could outmaneuver defenders who don't have it.
Anthropic's response to this risk is deliberate: early access will be restricted to cybersecurity defense organizations only. The idea is to give defenders a head start — time to harden their systems before offensive actors get access.
Markets didn't wait for nuance. On March 27, cybersecurity company stocks fell sharply:
- Palo Alto Networks (PANW): down 4–6%
- CrowdStrike (CRWD): down 4–6%
- Fortinet (FTNT): down 4–6%
The reasoning: if an AI can autonomously find and exploit vulnerabilities, the human-hours these companies sell — threat hunting, penetration testing, incident response — may shrink in demand. That's the fear, anyway. Reality is more complicated.
Who Gets Access — And When
There is no public launch date. Based on the leaked documents, the rollout plan has three phases:
- Phase 1 (now): Small group of early-access customers evaluating cybersecurity applications — specifically defenders tasked with hardening systems
- Phase 2: Gradual API (application programming interface — the technical connection businesses use to plug Claude into their own software) expansion to enterprise customers
- Phase 3: General availability, likely at a price point above Opus
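For readers unfamiliar with what "API access" looks like in practice, the sketch below builds the kind of HTTP request an enterprise app would send to Anthropic's Messages endpoint. The endpoint, headers, and request shape follow Anthropic's public API; the model identifier "claude-mythos" is purely hypothetical — real identifiers would come from Anthropic's documentation at release.

```python
import json

# What "plugging Claude into your own software" means concretely:
# constructing a request for Anthropic's Messages API endpoint.
ENDPOINT = "https://api.anthropic.com/v1/messages"

payload = {
    "model": "claude-mythos",  # hypothetical identifier for illustration
    "max_tokens": 1024,
    "messages": [
        {
            "role": "user",
            "content": "Review this function for security vulnerabilities: ...",
        }
    ],
}

headers = {
    "x-api-key": "YOUR_API_KEY",        # issued per account by Anthropic
    "anthropic-version": "2023-06-01",  # API version header
    "content-type": "application/json",
}

# An application would now POST this payload to ENDPOINT with the headers
# above (e.g., via an HTTP client) and read the model's reply from the
# JSON response. Here we just show the request being assembled.
print(json.dumps(payload, indent=2))
```

The point for non-developers: "API expansion" in Phase 2 means more companies get credentials to send requests like this one from their own systems, rather than typing into claude.ai.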
For most Claude users on claude.ai, Mythos will not be immediately accessible. It will almost certainly require a paid enterprise plan and carry a significantly higher per-token cost than Opus.
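To make "per-token cost" concrete, here is a small arithmetic sketch. The Opus rates below are illustrative figures roughly in line with Anthropic's published Opus pricing, and the Mythos multiplier is a pure assumption — no Mythos pricing appeared in the leaked documents.

```python
# Illustrative per-request cost math. Rates are in USD per million tokens;
# the Mythos multiplier is an assumption, not a leaked figure.
OPUS_INPUT_PER_MTOK = 15.00
OPUS_OUTPUT_PER_MTOK = 75.00
MYTHOS_MULTIPLIER = 3.0  # hypothetical: "significantly higher" than Opus

def request_cost(input_tokens: int, output_tokens: int,
                 in_rate: float, out_rate: float) -> float:
    """Cost in USD of one request at the given per-million-token rates."""
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate

# A typical long coding request: 20k tokens in, 5k tokens out.
opus = request_cost(20_000, 5_000, OPUS_INPUT_PER_MTOK, OPUS_OUTPUT_PER_MTOK)
mythos = request_cost(20_000, 5_000,
                      OPUS_INPUT_PER_MTOK * MYTHOS_MULTIPLIER,
                      OPUS_OUTPUT_PER_MTOK * MYTHOS_MULTIPLIER)
print(f"Opus-class request:   ${opus:.2f}")
print(f"Mythos-class request: ${mythos:.2f}")
```

Even a modest multiplier adds up fast at enterprise volume — thousands of such requests per day — which is why the leaked notes stress reducing inference costs before broader release.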
What This Signals About the AI Race
The most important takeaway from this leak isn't about Anthropic specifically — it's about the pace of development across the industry. If Anthropic's "above Opus" model is already in customer trials, it's safe to assume OpenAI, Google, and Meta have their own fourth-tier models in advanced development.
The public AI landscape in 2025 featured a race to release models that were broadly capable at a reasonable cost. What's emerging in 2026 is a second race: who can build the most powerful model, period, regardless of cost — because the enterprises willing to pay for frontier capabilities will pay a premium.
Anthropic being forced to acknowledge Mythos earlier than planned may actually accelerate this dynamic. With the model's existence public knowledge, competitors now know the bar that needs to be cleared.