6 Sources
[1]
Unauthorized group has gained access to Anthropic's exclusive cyber tool Mythos, report claims
A group of unauthorized users has reportedly gained access to Mythos, the cybersecurity tool recently announced by Anthropic. Much has been made of Mythos and its purported power -- an AI product designed for enterprise security that, in the wrong hands, could become a potent hacking tool, according to the company. Now, Bloomberg has reported that a "private online forum," the members of which have not been publicly identified, has managed to gain access to the tool through a third-party vendor. "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," an Anthropic spokesperson told TechCrunch. The company said that, so far, it has found no evidence that the supposedly unauthorized activity impacted Anthropic's systems at all. The unauthorized group tried a number of different strategies to gain access to the model, including using "access" enjoyed by the person who was interviewed by Bloomberg. That person is currently employed at a third-party contractor that works for Anthropic, the outlet reported. Members of the group are part of a Discord channel that seeks out information about unreleased AI models, the outlet reported. The group has been using Mythos regularly since gaining access to it, and provided evidence to Bloomberg in the form of screenshots and a live demonstration of the software. Bloomberg reports that the group, which supposedly gained access to the tool on the very same day it was publicly announced, "made an educated guess about the model's online location based on knowledge about the format Anthropic has used for other models." The group in question is "interested in playing around with new models, not wreaking havoc with them," the source told the outlet. Mythos was released to a select number of vendors, including big names like Apple, as part of an initiative called Project Glasswing. The limited release of the model was designed to stop its usage by bad actors.
The tool could be weaponized against corporate security instead of bolstering it, Anthropic said. If true, unauthorized use of Mythos could spell trouble for Anthropic, which restricted the release precisely to address enterprise security concerns.
[2]
Anthropic's Mythos Model Is Being Accessed by Unauthorized Users
Anthropic is investigating the report of unauthorized access, and the company says it currently has no evidence that the access is impacting any of its systems. A small group of unauthorized users has accessed Anthropic PBC's new Mythos AI model, a technology that the company says is so powerful it can enable dangerous cyberattacks, according to a person familiar with the matter and documentation viewed by Bloomberg News. A handful of users in a private online forum gained access to Mythos on the same day that Anthropic first announced a plan to release the model to a limited number of companies for testing purposes, said the person, who asked not to be named for fear of reprisal. The group has been using Mythos regularly since then, though not for cybersecurity purposes, said the person, who corroborated the account with screenshots and a live demonstration of the model. Anthropic has said Mythos is capable of identifying and exploiting vulnerabilities "in every major operating system and every major web browser when directed by a user to do so." As a result, the company has taken pains to ensure that the technology is only available to a select batch of software providers through an initiative called Project Glasswing, with the goal of allowing those firms to test and safeguard their own systems from potential cyberattacks. The unauthorized access, which has not previously been reported, highlights the challenge Anthropic faces in fully preventing its most powerful -- and potentially dangerous -- technology from spreading beyond approved partners. It also raises questions about whether anyone else may be using Mythos without permission, and for what purpose. The users relied on a mix of tactics to get into Mythos. These included using access the person had as a worker at a third-party contractor for Anthropic and trying commonly used internet sleuthing tools often employed by cybersecurity researchers, the person said.
The users are part of a private Discord channel that focuses on hunting for information about unreleased models, including by using bots to scour for details that Anthropic and others have posted on unsecured websites such as GitHub. "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," a spokesperson for Anthropic said in a statement. The company said it currently has no evidence that the access reported by Bloomberg went beyond a third-party vendor's environment or that it is impacting any of Anthropic's systems. Anthropic has so far let Apple Inc., Amazon.com Inc., Cisco Systems Inc. and dozens of other organizations begin testing out Mythos. Amazon, a key Anthropic partner and backer, also offers Mythos through its Bedrock platform to a limited list of approved organizations. In recent days, a growing number of financial institutions and government agencies on both sides of the Atlantic have been seeking to be added to the list of early testers to safeguard their own systems against malicious actors. To access Mythos, the group of users made an educated guess about the model's online location based on knowledge about the format Anthropic has used for other models, the person said, adding that such details were revealed in a recent data breach from Mercor, an AI training startup that works with a number of top developers. Crucially, the person also has permission to access Anthropic models and software related to evaluating the technology for the startup. They gained this access from a company for which they have performed contract work evaluating Anthropic's AI models. Bloomberg is not naming the company for security reasons. The group is interested in playing around with new models, not wreaking havoc with them, the person said. 
The group has not run cybersecurity-related prompts on the Mythos model, the person said, preferring instead to try tasks like building simple websites in an attempt to avoid detection by Anthropic. The person said the group also has access to a slew of other unreleased Anthropic AI models.
[3]
Anthropic's Mythos model accessed by unauthorized users, Bloomberg News reports
April 21 (Reuters) - A small group of unauthorized users has accessed Anthropic's new Mythos AI model, Bloomberg News reported on Tuesday, citing documentation and a person familiar with the matter. A handful of users in a private online forum gained access to Mythos on the same day that Anthropic first announced a plan to release the model to a limited number of companies for testing purposes, the report said. The group has been using Mythos regularly since then, though not for cybersecurity purposes, according to the report. "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," an Anthropic spokesperson said. Announced on April 7, Mythos is being deployed as part of Anthropic's "Project Glasswing," a controlled initiative under which select organizations are permitted to use the unreleased Claude Mythos Preview model for defensive cybersecurity. Mythos is a powerful AI model that has sparked concerns among regulators about its unprecedented ability to identify digital security vulnerabilities and potential for misuse. Reporting by Zaheer Kachwala in Bengaluru; Editing by Pooja Desai
[4]
Anthropic investigating unauthorised access of powerful Mythos AI model
Anthropic is investigating whether a group of users gained unauthorised access to its Claude Mythos model, which was only released to a handful of trusted companies because of its advanced cyber security capabilities. The AI lab on Tuesday said it was looking into reports that a group of people had accessed the model through a system set up for third-party companies doing work for Anthropic. The company said: "We're investigating a report claiming unauthorised access to Claude Mythos Preview through one of our third-party vendor environments." The incident raises concerns about whether the $380bn AI lab can keep the technology it develops out of the hands of bad actors. Anthropic limited the release of Claude Mythos Preview to a small group of trusted tech companies, citing the risk of people misusing the model to conduct cyber attacks at a scale and speed beyond human capabilities. The risk of unauthorised access will add to anxiety about Mythos, which has sent shockwaves through the markets and prompted high-level discussions among financial institutions and global regulators. One of the people who gained unauthorised access was able to use their permissions as a contractor for Anthropic to tap into Mythos, according to Bloomberg, which first reported the incident. Anthropic said it had no evidence of activity extending beyond the "vendor environment", which third parties use to access systems for model development. AI labs commonly use third-party contractors for tasks such as model testing, although it was not clear which vendor was involved in the incident. Anthropic launched Mythos earlier this month to companies including Amazon, Microsoft, Apple, Cisco and CrowdStrike. The San Francisco-based company said these partners would be able to detect and secure cyber vulnerabilities using Mythos's advanced capabilities before the model was released to the public.
Security experts have cautioned that, in the wrong hands, hackers could exploit bugs faster than organisations can fix them. Anthropic's security processes have been under intense scrutiny after descriptions of the model, including its name, were discovered in a publicly accessible data cache in March. The AI lab blamed human error. Earlier this month, internal source code for the company's coding assistant Claude Code was also made public in a second incident.
[5]
Some Unknown Group Is Reportedly Using Claude Mythos Without Permission
In a very cagily written story from Bloomberg, Anthropic confirmed Tuesday that it has received a report that an unauthorized mystery group is accessing Claude Mythos -- the model it says is too dangerous to release. "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," says an Anthropic spokesperson's statement to Bloomberg. Bloomberg apparently confirmed the apparent breach by looking at a live demo and screenshots sent over by a member of the group responsible for the unauthorized access. In understandably obfuscatory language, Bloomberg explains that an anonymous source says they are a member of an unnamed group that has abused their access "as a worker at a third-party contractor for Anthropic" and employed "commonly used internet sleuthing tools often employed by cybersecurity researchers" to gain some form of access to the model. But don't worry, this secret group that apparently has access to the most feared piece of technology in the world is "interested in playing around with new models, not wreaking havoc with them," the source apparently explained to Bloomberg. So to recap: Anthropic says it has the scariest AI model in the world, and for what it's worth, a whole lot of powerful institutions seem to believe it. If we take Anthropic at its word, we're all trusting it not to abuse this power that it and only it controls. However, some unknown entity has accessed this scary AI model, but if we take them at their word, they just used it for some vibe coding tests and they swear they're not doing anything evil with it.
[6]
Anthropic investigates alleged unauthorised access to its Mythos AI model: Here is what happened
The group has been using Mythos since the day it was publicly announced. Anthropic recently introduced a cybersecurity-focused AI model called Mythos, describing it as a powerful tool designed to help enterprises detect and respond to digital threats. However, the company is now investigating reports that an unauthorised group has gained access to the system through a third-party environment. According to a report by Bloomberg, a private online forum managed to access Mythos shortly after it was announced. The members of this group have not been publicly identified, but they reportedly accessed the model through a third-party vendor that works with Anthropic. The company is looking into the matter. 'We're investigating a report claiming unauthorised access to Claude Mythos Preview through one of our third-party vendor environments,' an Anthropic spokesperson told TechCrunch. The company also said that it has so far found no evidence that the alleged activity has affected its systems. The report claims that members of the group attempted different methods to gain access to the model, including using access privileges associated with a person who is said to work for a contractor partnered with Anthropic. Bloomberg also reported that the members of the group are part of a Discord channel focused on discovering and experimenting with unreleased AI models. According to the outlet, the group has been using Mythos since the day it was publicly announced and even shared screenshots and a live demonstration of the model to verify their claims. The group reportedly guessed the online location of the model based on Anthropic's previous patterns. Mythos was originally released to a limited group of vendors as part of Anthropic's Project Glasswing initiative.
The restricted rollout included major partners such as Apple and was intended to ensure that the powerful cybersecurity tool did not fall into the wrong hands.
A small group of unauthorized users has reportedly accessed Anthropic's powerful Mythos AI model through a third-party vendor environment. The AI cybersecurity tool, designed for defensive purposes but capable of identifying vulnerabilities in major operating systems, was released exclusively to select companies under Project Glasswing to prevent misuse by malicious actors.
Anthropic is investigating reports that unauthorized users have gained access to Mythos, its powerful AI cybersecurity tool that the company deliberately restricted due to concerns about potential for misuse. According to Bloomberg, a small group accessed Claude Mythos Preview on the same day Anthropic announced its limited release to select partners, using a combination of tactics including access obtained through a third-party contractor and internet sleuthing techniques commonly employed by cybersecurity researchers [2]. The incident highlights significant challenges for the $380 billion AI lab in controlling access to its most sensitive AI technology [4].
Source: FT
An Anthropic spokesperson confirmed the company is "investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," though the company stated it currently has no evidence that the activity impacted Anthropic's systems directly [1]. The group, which operates through a private online forum on Discord, has been using Mythos regularly since gaining access, though not, they claim, for cybersecurity purposes [2].

The unauthorized group relied on multiple methods to access the restricted model. A person interviewed by Bloomberg, who works as a third-party contractor for Anthropic, used their existing permissions to access Anthropic models and software related to evaluating the AI technology [2]. The group made an educated guess about the model's online location based on knowledge about the format Anthropic has used for other models, with details reportedly revealed in a recent data breach from Mercor, an AI training startup that works with several top developers [2].
Source: Reuters
Members of this Discord channel actively hunt for information about unreleased models, using bots to scour unsecured websites such as GitHub for details that Anthropic and other companies have inadvertently posted [2]. The source told Bloomberg the group is "interested in playing around with new models, not wreaking havoc with them," and has avoided running cybersecurity-related prompts, instead performing tasks like building simple websites to evade detection [2]. The person also revealed the group has access to other unreleased Anthropic AI models [2].

Anthropic released Mythos exclusively to select organizations through Project Glasswing, a controlled initiative designed to allow trusted partners to test and safeguard their systems from potential cyberattacks before wider release [3]. The company has described Mythos as capable of identifying and exploiting security vulnerabilities "in every major operating system and every major web browser when directed by a user to do so" [2].

Anthropic has granted access to major technology companies including Apple, Amazon, Microsoft, Cisco, and CrowdStrike for limited testing purposes [4]. Amazon also offers Mythos through its Bedrock platform to a restricted list of approved organizations [2]. In recent days, growing numbers of financial institutions and government agencies on both sides of the Atlantic have been seeking to join the list of early testers to protect their systems against malicious actors [2].
The unauthorized access incident raises critical questions about whether Anthropic can prevent its most powerful and potentially dangerous technology from spreading beyond approved partners [2]. Security experts have cautioned that in the wrong hands, the AI cybersecurity tool could enable hackers to exploit bugs faster than organizations can fix them [4]. The tool, designed for defensive cybersecurity and enterprise security, could be weaponized as a hacking tool against corporate security instead of bolstering it [1].

Anthropic's security processes have faced intense scrutiny recently. Descriptions of the model, including its name, were discovered in a publicly accessible data cache in March, which the AI lab attributed to human error [4]. Earlier this month, internal source code for the company's coding assistant Claude Code was also made public in a second incident [4]. The risk of unauthorized access has sent shockwaves through markets and prompted high-level discussions among financial institutions and global regulators about such powerful AI technology [4]. If confirmed, this breach could spell trouble for Anthropic, which implemented the exclusive release specifically to address enterprise security concerns and prevent access by bad actors [1].
Source: Bloomberg
Summarized by
Navi