Anthropic has built what may be the most capable AI model ever documented — and deliberately chose not to release it to the public.

Claude Mythos Preview, revealed in early April 2026, is capable of finding and exploiting zero-day security vulnerabilities across every major operating system and web browser. During internal testing, it autonomously identified thousands of critical flaws — and Washington is scrambling to understand what that means for national cybersecurity.

ℹ️ Note: Anthropic is not releasing Claude Mythos to the general public. Access is restricted to a curated group of 40+ security organizations under "Project Glasswing."

What Claude Mythos Can Actually Do

The capabilities are staggering. During testing, Mythos Preview autonomously found and exploited a 17-year-old remote code execution vulnerability in FreeBSD — catalogued as CVE-2026-4747 — that gives any attacker complete root access to a server from anywhere on the internet, no authentication required.

That was just one of thousands of zero-day vulnerabilities the model identified across:

  • Every major operating system (Windows, macOS, Linux, FreeBSD)
  • Every major web browser (Chrome, Safari, Firefox, Edge)
  • Critical open-source software infrastructure
  • Enterprise systems used by Fortune 500 companies

As Anthropic put it directly: AI models have reached a level of coding capability at which they surpass all but the most skilled humans at finding and exploiting software vulnerabilities. Mythos is that inflection point.

Key Facts
  • Mythos found thousands of zero-days across major OS and browsers
  • Exploited a 17-year-old FreeBSD vulnerability autonomously (CVE-2026-4747)
  • Anthropic committed $100M in usage credits for defensive security work
  • $4M in direct donations to open-source security organizations
  • Access limited to ~40 vetted organizations under Project Glasswing

Why Anthropic Refused to Release It

The decision not to publish Mythos publicly is itself remarkable in an AI industry that typically races to ship.

The broader AI race continues: GPT-5 and GPT-6 represent OpenAI's competing approach of maximum capability and public release. Anthropic's choice here is a deliberate rejection of that model.

Anthropic's reasoning is straightforward: a model this capable in the hands of malicious actors — ransomware groups, nation-state hackers, or even script kiddies with bad intentions — could enable a new era of cyberattacks at scale. The offensive capabilities are simply too powerful without new safeguards in place.

Instead of a public launch, Anthropic announced Project Glasswing — a controlled access program that restricts Mythos Preview to organizations doing defensive security work.

Project Glasswing's founding partners include:

  • Amazon Web Services
  • Apple
  • Broadcom
  • Cisco
  • CrowdStrike
  • Google
  • JPMorganChase
  • Linux Foundation
  • Microsoft
  • NVIDIA
  • Palo Alto Networks

These organizations are using Mythos to scan and patch their own systems — turning the model's hacking capabilities toward defense rather than offense.

Pros
  • Finds vulnerabilities before real attackers do
  • Defends critical infrastructure at AI speed
  • Controlled rollout prevents mass exploitation
  • Transparent disclosure builds trust
Cons
  • Restricted access creates an information asymmetry
  • Nation-states may have equivalent models already
  • Oversight frameworks don't yet exist for this capability class
  • Debate over who decides what's "too dangerous" to release

Washington's Alarm: Treasury, Wall Street, and the Federal Push

The US government's reaction has been swift — and revealing about how unprepared federal agencies are for AI at this capability level.

Sam Corcos, the US Treasury Department's Chief Information Officer, directed his team to gain access to Mythos "as soon as possible" after the model's existence became public. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convened an urgent meeting with top Wall Street CEOs, warning them that Mythos-class AI could usher in a new era of heightened cyber risk for the financial sector.

The concern isn't hypothetical. If Mythos can find thousands of zero-days in major operating systems, so could a less scrupulous actor with a comparable model. The question isn't whether this level of AI capability exists — it's who controls it.

By the Numbers
  • Thousands: zero-day vulnerabilities found by Mythos in major systems
  • 40+: organizations granted restricted access under Project Glasswing
  • $100M: Anthropic usage credits committed to defensive security
  • 17 years: age of the FreeBSD vulnerability Mythos autonomously exploited
  • 12: major companies and organizations in Project Glasswing's founding coalition

The Manhattan Project Debate

Some policy analysts and commentators are pushing for something more drastic: placing frontier AI models like Mythos under a Manhattan Project-style federal authority, with the government controlling access, development, and deployment.

The argument follows a familiar logic — when a technology's destructive potential is high enough, governments nationalize it. Nuclear weapons. Bioweapons. The question is whether AI has crossed that threshold.

The counterargument, advanced by most of the tech industry, is that centralized federal control would slow defensive innovation, concentrate power dangerously, and likely fail anyway since comparable research is happening in labs globally.

Anthropic's own approach — voluntary restricted access, industry coalitions, and transparent disclosure — represents a middle path. Whether that's sufficient given Mythos's capabilities is a debate just beginning in Washington.

What Happens Next

Anthropic says it "eventually" wants to safely deploy Mythos-class models at scale once new safeguards are in place. That timeline is vague — and intentionally so. The company is threading a needle: demonstrating the model's value for defense while building the governance frameworks needed to prevent misuse.

For now, Mythos remains one of the most powerful AI systems ever built — accessible to fewer than 50 organizations in the world, audited, controlled, and watched closely by every government with a cybersecurity interest.

That window won't last forever. The real race isn't who releases Mythos first. It's whether the defensive security work it's enabling can outpace the equivalent offensive capabilities already developing elsewhere.

Claude Mythos marks the moment AI capability crossed from productivity tool to national security concern — and Anthropic's decision not to release it may be remembered as one of the most consequential choices in tech history.