
Anthropic’s Mythos AI Is Too Dangerous to Release — So It’s Keeping It Secret

Marcus Webb
AI · April 11, 2026


At a glance: $100M usage credits · $4M open-source grants · no public release · Project Glasswing launch

Anthropic’s Mythos AI model represents an unprecedented step in artificial intelligence capability — and, for the first time, a major AI lab is withholding its most powerful model from the public entirely. The decision, driven by alarming cybersecurity risk assessments, marks a turning point in how the industry handles models that cross new capability thresholds. Rather than a phased rollout, Anthropic launched Project Glasswing to grant tightly controlled access to select defensive-use partners while committing $100M in compute credits to ensure the technology is used responsibly.

What Is Anthropic’s Mythos AI Model?

Anthropic’s Mythos Preview model represents the lab’s most capable — and most restricted — AI release to date.

Mythos Preview is Anthropic’s latest frontier model, internally classified as its most capable system to date. Unlike Claude 3.7 Sonnet or previous public releases, Mythos was developed with a sharper focus on reasoning across complex, multi-step problems — including those with real-world operational implications. Early internal evaluations showed the model could execute tasks in cybersecurity domains that previous models could not.

The model’s System Card — Anthropic’s transparency document outlining a model’s capabilities and risk profile — flagged something the company had never encountered before: credible potential for full automation of offensive cyber operations. That assessment triggered an internal review that ultimately led to the decision to withhold the model from standard deployment channels.

Key Insight
A Model That Crossed a New Threshold
Mythos is the first Anthropic model whose System Card explicitly identified capabilities that could enable autonomous offensive cyberattacks — a threshold the company had previously used as a red line for public deployment.

Why Mythos Is Too Dangerous to Release

The risks identified in Mythos’s System Card center on its ability to dramatically lower the barrier for sophisticated cyberattacks.

The core concern isn’t that Mythos can answer cybersecurity questions — many existing models can do that. The issue is the degree to which Mythos can plan, sequence, and execute offensive operations with minimal human oversight. According to the System Card findings reported by NBC News, the model demonstrated the ability to identify vulnerabilities, generate working exploit code, and iterate on attack chains in ways that could accelerate real-world cyberattacks by months.

This isn’t theoretical. Security researchers who reviewed the System Card noted that existing red-team defenses were developed before models of this capability class existed. Releasing Mythos publicly — even with standard API guardrails — would, in Anthropic’s assessment, provide a meaningful uplift to malicious actors that current defensive infrastructure isn’t equipped to absorb.

Key Insight
The Uplift Problem
Security professionals describe “uplift” as the degree to which a tool accelerates an attacker’s capabilities beyond what they could achieve alone. Mythos scored high enough on uplift metrics that Anthropic concluded the risk outweighed any commercial benefit from a public release.

Project Glasswing: Restricted Partner Access

Project Glasswing channels Mythos exclusively to vetted cybersecurity defenders — CrowdStrike and Microsoft among the first partners.

Project Glasswing is Anthropic’s structured response to the Mythos dilemma: rather than shelving the model entirely, the company is routing access exclusively through vetted defensive cybersecurity organizations. The initial partner cohort includes CrowdStrike and Microsoft, both of which operate large-scale threat intelligence and incident response operations where Mythos’s capabilities could be applied to detect and neutralize attacks rather than enable them.

Access under Glasswing is not a standard API key. Partners undergo vetting, agree to usage constraints, and operate within monitoring frameworks that allow Anthropic to audit how the model is being used. The program is explicitly defense-only — partners cannot use Mythos for offensive testing or red-team exercises without explicit case-by-case approval.

Key Insight
Defense Before Deployment
Glasswing represents a new model for AI deployment at capability frontiers: build up defensive infrastructure using the model before any broader access, so that defenders are ahead of potential misuse rather than playing catch-up.

The $100M Compute Fund and Open-Source Grants

Anthropic’s $100M in usage credits and $4M in open-source grants are designed to ensure Mythos-level capabilities strengthen defense, not offense.

Alongside the Glasswing partner program, Anthropic announced a $100M usage credits fund for qualifying cybersecurity organizations. The credits allow eligible defenders — including non-profits, government agencies, and academic security research teams — to access Anthropic’s broader API suite for threat detection, vulnerability research, and defensive tooling development. This is separate from Mythos access, which remains under stricter Glasswing controls.

Complementing the compute fund, Anthropic committed $4M in open-source security grants to support community-led projects that build detection and defense capabilities using publicly available models. The goal is to raise the floor of defensive capability across the ecosystem, even for organizations that will never have Glasswing-level access to Mythos itself.

Key Insight
Funding Defense at Scale
The combination of $100M in compute credits and $4M in open-source grants signals that Anthropic views the cybersecurity risk from frontier AI as a systemic problem — one that requires funding the entire defensive ecosystem, not just locking down one model.

Frequently Asked Questions

What is Anthropic’s Mythos AI model?

Mythos Preview is Anthropic’s most capable AI model to date, designed for complex multi-step reasoning. Its System Card flagged the ability to automate offensive cyber operations, leading Anthropic to withhold it from public release.

Why isn’t Mythos available to the public?

Anthropic determined that Mythos provides significant “uplift” to potential attackers — meaning it could dramatically accelerate sophisticated cyberattacks. The company concluded the risk to public safety outweighed the benefits of an open release.

What is Project Glasswing?

Project Glasswing is Anthropic’s restricted partner program that gives vetted defensive cybersecurity organizations — starting with CrowdStrike and Microsoft — controlled access to the Mythos model for defensive purposes only.

How can organizations access the $100M in compute credits?

The $100M usage credits fund is available to qualifying cybersecurity organizations including non-profits, government agencies, and academic security teams. Applications are managed through Anthropic’s Glasswing program page.

Will Mythos ever be publicly released?

Anthropic has not committed to a timeline for public release. The company’s position is that Mythos will remain restricted until defensive infrastructure across the ecosystem is sufficiently mature to absorb the risks it introduces.


Maya Chen
https://networkcraft.net/author/maya-chen/
AI & Technology Analyst at Networkcraft. I write for the reader who wants to understand, not just be impressed. Formerly at MIT Technology Review. I cover artificial intelligence, machine learning, and the long-term implications of frontier tech.