Anthropic on April 7 announced Claude Mythos Preview, a model the company said demonstrated autonomous vulnerability discovery and exploit construction across major operating systems and web browsers during internal testing. The company is not releasing the model broadly. It is instead limiting access through a new initiative called Project Glasswing.
Anthropic said Glasswing will give early access to a restricted set of organizations that maintain or secure critical software. The company named Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks among the initial participants, and said more than 40 additional organizations are involved. Anthropic also said it will commit up to $100 million in Mythos Preview usage credits and $4 million in direct support for open-source security groups.
In public materials released alongside the announcement, Anthropic said Mythos Preview’s cyber performance emerged from general gains in coding, reasoning, and autonomy rather than from narrow exploit-specific training. The company contrasted the model with Opus 4.6, which it said had near-zero autonomous exploit-development success in comparable evaluations. Anthropic said Mythos Preview repeatedly succeeded where earlier systems did not.
The deployment decision, more than the model itself, is the central policy fact.
Anthropic is treating the model as a controlled capability rather than as a conventional product launch. The company said most of the vulnerabilities it found remain unpatched, and that it published cryptographic commitments to some undisclosed findings (fingerprints that can later prove what it knew and when, without revealing the findings now) rather than releasing them. Anthropic also said it has briefed U.S. government officials on the risks these systems may pose and on potential defensive coordination.
Project Glasswing places a frontier model inside a limited-access defensive framework built around patching, disclosure, and institutional trust. That makes the announcement relevant beyond Anthropic’s product line. The practical question is no longer only whether a model is open or closed, but who receives access, under what verification rules, with what monitoring, and with what responsibility for downstream misuse or disclosure failure.
Anthropic said the initiative is intended to help software maintainers identify and fix vulnerabilities before similar capabilities become more widely available. If the company’s public claims are accurate, that would place frontier AI development deeper inside the governance problems of cybersecurity, critical infrastructure, and national security. It would also suggest that offensive cyber capacity may increasingly emerge as a byproduct of general-purpose model progress.
Glasswing's significance is structural: it is a restricted-release framework for a model Anthropic said crossed a deployment threshold, and an early public example of a frontier lab treating access to cyber-capable systems as a governance problem rather than a standard commercial rollout.