Unauthorized group gains access to Anthropic's Mythos AI cybersecurity tool, raising security concerns

Reviewed by Nidhi Govil


A small group of unauthorized users has reportedly accessed Anthropic's powerful Mythos AI model through a third-party vendor environment. The AI cybersecurity tool, designed for defensive purposes but capable of identifying vulnerabilities in major operating systems, was released exclusively to select companies under Project Glasswing to prevent misuse by malicious actors.

Unauthorized Access to Mythos Raises Security Questions

Anthropic is investigating reports that unauthorized users have gained access to Mythos, its powerful AI cybersecurity tool, which the company deliberately restricted over concerns about potential misuse. According to Bloomberg, a small group accessed Claude Mythos Preview on the same day Anthropic announced its limited release to select partners, using a combination of tactics: access obtained through a third-party contractor and internet-sleuthing techniques commonly employed by cybersecurity researchers.[2] The incident highlights significant challenges for the $380 billion AI lab in controlling access to its most sensitive AI technology.[4]

Source: FT


An Anthropic spokesperson confirmed the company is "investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," though the company stated it currently has no evidence that the activity impacted Anthropic's systems directly.[1] The group, which operates through a private online forum on Discord, has been using Mythos regularly since gaining access, though its members claim not for cybersecurity purposes.[2]

How the Breach Occurred Through Third-Party Vendor

The unauthorized group relied on multiple methods to access the restricted model. A person interviewed by Bloomberg, who works as a third-party contractor for Anthropic, used their existing permissions to access Anthropic models and software related to evaluating the AI technology.[2] The group made an educated guess about the model's online location based on knowledge of the format Anthropic has used for other models, with details reportedly revealed in a recent data breach at Mercor, an AI training startup that works with several top developers.[2]

Source: Reuters


Members of the Discord channel actively hunt for information about unreleased models, using bots to scour unsecured websites such as GitHub for details that Anthropic and other companies have inadvertently posted.[2] The source told Bloomberg the group is "interested in playing around with new models, not wreaking havoc with them," and has avoided running cybersecurity-related prompts, instead performing tasks like building simple websites to evade detection.[2] The person also revealed that the group has access to other unreleased Anthropic AI models.[2]

Project Glasswing and Enterprise Security Concerns

Anthropic released Mythos exclusively to select organizations through Project Glasswing, a controlled initiative designed to let trusted partners test and safeguard their systems against potential cyberattacks ahead of any wider release.[3] The company has described Mythos as capable of identifying and exploiting security vulnerabilities "in every major operating system and every major web browser when directed by a user to do so."[2]

Anthropic has granted access to major technology companies including Apple, Amazon, Microsoft, Cisco, and CrowdStrike for limited testing purposes.[4] Amazon also offers Mythos through its Bedrock platform to a restricted list of approved organizations.[2] In recent days, growing numbers of financial institutions and government agencies on both sides of the Atlantic have sought to join the list of early testers to protect their systems against malicious actors.[2]

Implications for AI Security and Regulatory Concerns

The unauthorized access incident raises critical questions about whether Anthropic can prevent its most powerful and potentially dangerous technology from spreading beyond approved partners.[2] Security experts have cautioned that in the wrong hands, the AI cybersecurity tool could enable hackers to exploit bugs faster than organizations can fix them.[4] Although designed for defensive cybersecurity and enterprise security, the tool could be weaponized against corporate systems rather than bolstering them.[1]

Anthropic's security processes have faced intense scrutiny recently. Descriptions of the model, including its name, were discovered in a publicly accessible data cache in March, which the AI lab attributed to human error.[4] Earlier this month, internal source code for the company's coding assistant, Claude Code, was also made public in a second incident.[4] The risk of unauthorized access has sent shockwaves through markets and prompted high-level discussions among financial institutions and global regulators about such powerful AI technology.[4] If confirmed, the breach could spell trouble for Anthropic, which implemented the exclusive release specifically to address enterprise security concerns and keep the tool out of the hands of bad actors.[1]

Source: Bloomberg



© 2026 Triveous Technologies Private Limited