37 Sources
[1]
Unauthorized group has gained access to Anthropic's exclusive cyber tool Mythos, report claims
A group of unauthorized users has reportedly gained access to Mythos, the cybersecurity tool recently announced by Anthropic. Much has been made of Mythos and its purported power -- an AI product designed for enterprise security that, in the wrong hands, could become a potent hacking tool, according to the company. Now, Bloomberg has reported that a "private online forum," the members of which have not been publicly identified, has managed to gain access to the tool through a third-party vendor. "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," an Anthropic spokesperson told TechCrunch. The company said that, so far, it has found no evidence that the supposedly unauthorized activity impacted Anthropic's systems at all. The unauthorized group tried a number of different strategies to gain access to the model, including using "access" enjoyed by the person who was interviewed by Bloomberg. That person is currently employed at a third-party contractor that works for Anthropic, the outlet reported. Members of the group are part of a Discord channel that seek out information about unreleased AI models, the outlet reported. The group has been using Mythos regularly since gaining access to it, and provided evidence to Bloomberg in the form of screenshots and a live demonstration of the software. Bloomberg reports that the group, which supposedly gained access to the tool on the very same day it was publicly announced, "made an educated guess about the model's online location based on knowledge about the format Anthropic has used for other models." The group in question is "interested in playing around with new models, not wreaking havoc with them," the source told the outlet. Mythos was released to a select number of vendors, including big names like Apple, as part of an initiative called Project Glasswing. The limited release of the model was designed to stop its usage by bad actors. The tool could be weaponized against corporate security instead of bolstering it, Anthropic said. If true, unauthorized use of Mythos could spell trouble for Anthropic, which provided the exclusive release to allay the company's concern for enterprise security.
[2]
Do you need to worry about Mythos, Anthropic's computer-hacking AI?
A powerful AI kept from public access because of its ability to hack computers with impunity is making headlines around the world. But what is Mythos, does it really represent a risk and might it even be used to improve cybersecurity? The past few weeks have brought apparently alarming news of Mythos, an AI that can identify cybersecurity flaws in a matter of moments, leaving operating systems and software vulnerable to hackers. The cybersecurity community is now beginning to get a better sense of how Mythos may change the face of cybersecurity - and not necessarily for the worse. What is Mythos and why are people concerned by it? Mythos is an AI created by Anthropic. Its existence was accidentally revealed last month when people unearthed content on the company's website, not due for publication, which had been left unsecured for anyone to see. According to Anthropic, there's a good reason the model had been kept behind closed doors: it is - by accident rather than design - extremely good at hacking. It can allegedly discover flaws in virtually any software, if asked, that would allow the user to break in. The company says that Mythos found thousands of high- and critical-severity vulnerabilities in operating systems and other software. Anthropic did not respond to New Scientist's request for comment, but the company said on its website that "the fallout -- for economies, public safety, and national security -- could be severe." The company says it took the responsible step of keeping it hidden. So nobody at all is able to use it? Not quite. Anthropic has decided to make it available to a select group of technology and finance giants like Amazon Web Services, Apple, Google, JPMorganChase, Microsoft and NVIDIA under something called Project Glasswing so that they can uncover any bugs in their own software before someone else does. Members of a private online forum have also managed to gain unauthorised access to the trial. Reports suggest that they simply made an "educated guess" about where the model would be hosted online - the same sort of issue that led to the revelation of the existence of Mythos in the first place. Perhaps a company so concerned about cybersecurity should pay more attention to their own. While the model was initially due to be kept under wraps and out of use, it's now gaining huge attention and being tested by some of the world's best cybersecurity experts. Many of those companies are also Anthropic's largest potential customers, of course - and hype about the power of Mythos will certainly do Anthropic no harm. Security expert Davi Ottenheimer summed up the situation in a blog post as "a legitimate technological capability, reframed as civilisational threat, by a party that benefits from the reframing". Kevin Curran at Ulster University, UK, says that the revelation of Mythos and what it might be able to do "triggered alarm across the security industry", although researchers were divided on how serious the threat actually was. "What happens when a machine can do in seconds what a skilled human hacker takes months to accomplish?" he wonders. But there are indications that it isn't time to panic yet. Bobby Holley at Firefox - one of those organisations being given access to Mythos - wrote in a blog post that the model helped his team find 271 vulnerabilities in the web browser, which is certainly quite a haul, but that none were so ingenious, impenetrably complex or sophisticated that a human couldn't have dug them out. "Just one such bug would have been red-alert in 2025, and so many at once makes you stop to wonder whether it's even possible to keep up," wrote Holley. "Encouragingly, we also haven't seen any bugs that couldn't have been found by an elite human researcher." The AI Security Institute (AISI) - set up under then-UK Prime Minister Rishi Sunak after the UK's AI Summit in 2023 - has also investigated Mythos. In tests, it was found to be capable of attacking only "small, weakly defended and vulnerable enterprise systems" and there was no indication that a really secure bit of software or network would be at risk, although it was a step up in ability from previous models. And AISI did warn that these things are improving fast. AISI did not comment when asked by New Scientist to discuss the threat. Alan Woodward at the University of Surrey, UK, has a pragmatic view of the threat posed by Mythos - and all other AI models in general, which also have the ability to spot cyber vulnerabilities to varying degrees. "The AI is not necessarily capable of finding vulnerabilities that a human wouldn't, but it's just so much faster, thorough and relentless. Hence it's finding vulnerabilities that humans have missed," he says. "AI, as demonstrated by Mythos, is making the attacker's job more efficient and giving them a speed and agility that make defence harder, but not impossible." So it seems that while Mythos can find flaws at scale and speed, it isn't finding anything devastatingly dangerous yet. And there are even reasons to believe that it could actually be a good thing. "The defects are finite, and we are entering a world where we can finally find them all," wrote Holley. In essence, if you make or maintain software then you can also use Mythos to pick apart your own code and patch it - perhaps even before it's released. AI will almost certainly get more capable of finding flaws and malicious attackers will almost certainly benefit from this to some extent. But this will also help software-makers - although those who maintain ageing, clunky government software written decades ago may find keeping up challenging. Even Anthropic believes that hacking AIs will eventually benefit defenders more than attackers - but then again, saying the opposite would make it hard to justify making them. Essentially, AI is making - and will continue to make - both hacking and defending from hackers easier, but those who ignore the technology will find themselves at a big disadvantage. "Treat Mythos as the warning shot it is," says Curran. "And assume that within 18 months, comparable capabilities will be in the hands of adversaries. The window to get ahead of this is open, but it is closing fast."
[3]
Anthropic's Mythos breach was humiliating
Anthropic's tightly controlled rollout of Claude Mythos has taken an awkward turn. After spending weeks insisting the AI model is so capable at cybersecurity that it is too dangerous to release publicly, it appears the model fell into the wrong hands anyway. According to Bloomberg, a "small group of unauthorized users" has had access to Mythos -- whose existence was first revealed in a leak -- since the day Anthropic announced plans to offer it to a select group of companies for testing. Anthropic says it is investigating. That's a rough look for a company that has built its brand on taking AI safety seriously while touting the cybersecurity prowess of its latest model. From a technological standpoint, the Mythos breach is embarrassingly unsophisticated. Bloomberg reports the group accessed Mythos by making "an educated guess about the model's online location," using information about Anthropic's other models exposed in the breach of Mercor -- a company that makes AI training data -- along with access one member had through contract work evaluating Anthropic models. The group got unauthorized access to Mythos through a combination of insider knowledge and a lucky guess, not some sophisticated technological exploit or wholesale theft of the model. Security vulnerabilities are inevitable, and it was Mercor, not Anthropic, that revealed the information the hackers used to guess Mythos' location. Pia Hüsch, a research fellow at the British think tank Royal United Services Institute (RUSI), told me that no company is ever completely secure and humans are often the weakest link, though it "does initially seem a bit lucky" that there were no serious consequences. But it's not entirely bad luck. These kinds of educated guesses are a very standard hacking technique, and the Mercor breach was already known before Mythos' release. Security researcher Lukasz Olejnik described it to me as an "entirely imaginable" kind of failure that the cybersecurity industry has been routinely dealing with for the last 20 years. So Anthropic should have anticipated it and should have prepared accordingly, particularly knowing that its information had been compromised. Anthropic also appears to have had the means to spot the breach. The company is able to "log and track model use," Olejnik said, which should make it possible to stop unauthorized or malicious access, especially since the Mythos rollout was supposed to be highly limited. Evidently, Anthropic wasn't monitoring closely enough -- and given how dangerous it says the model is, it's reasonable to ask why. By Bloomberg's account, the group was not using Mythos for cybersecurity tasks, partly because they just wanted to mess around with the new model and partly because doing so could have tipped Anthropic off. If Anthropic's messaging surrounding Mythos is to be taken seriously, that is a lucky break. The company has framed Mythos as a "watershed moment for security," claiming it found vulnerabilities in "every major operating system and web browser," and said its release must be coordinated to allow time to "reinforce the world's cyber defenses." Anthropic has a habit of using dramatic, alarming-sounding language that can be tough to interrogate cleanly, including flirting with the idea that its Claude model might be conscious. Even so, early reports from parties with access suggest Mythos is particularly adept in cybersecurity. Mozilla CTO Bobby Holley said it found hundreds of bugs in Firefox 150 and may finally give defenders a chance at complete victory over attackers. Unsurprisingly, governments and financial institutions around the world have been eager to get their hands on it. The NSA and other US agencies reportedly have access despite Anthropic's designation as a supply chain risk, though the rollout appears to have bypassed the US cybersecurity agency, CISA, so far. The fact that the breach was uncovered by a reporter rather than Anthropic also raises the obvious question of whether it is an isolated incident. It "really illustrates how wide the circle of people who may be able to do this is, even if they don't have super technically sophisticated means," Hüsch said. Anthropic will likely comb through its supply chain to see how this happened and plug gaps, but she said there is a wide range of actors who would want access to a model like this, some of them with a great deal of money behind them. There is no reason to assume anyone else who gained access would be as restrained as the group Bloomberg reported on. Anthropic has, to some extent, shot itself in its own foot. The company has built its identity around taking AI safety more seriously than its rivals, creating sky-high expectations for model security that jar with its apparent carelessness; the fact that Mythos was exposed through such a basic and predictable failure only underscores that. Worse still, by hyping Mythos as an unusually powerful tool too dangerous for public release, Anthropic turned it into an obvious target, whether for malicious actors or hackers simply looking for a challenge. This isn't even the first awkward security incident around Mythos. The model's existence was accidentally revealed before release through an "unsecured data trove" on a central system containing content for its website. Now, that model has been secretly accessed via a wholly predictable vulnerability Anthropic didn't think to patch. Perfection is impossible, but for a company that has anointed itself the vanguard of AI safety, such a basic misstep is hard to justify, even with some of the bad luck it's had. To Hüsch, the whole episode can be summed up in one word: humiliation. "Anthropic claims to be at the absolute forefront of all these technologies, but also positions itself as the responsible actor in all of this," she said. "The fact that this has now been accessed through unauthorized means so quickly, and through such an unsophisticated attempt, is really a humiliation for them."
[4]
Anthropic's New Mythos Model Reportedly Accessed By Unauthorized Users
In early April, Anthropic announced its latest Mythos model, saying it would remain exclusive to select tech companies for cybersecurity purposes. Anthropic has now confirmed it's actively investigating an incident where a group claims to have unauthorized access to Mythos. A Bloomberg report, citing anonymous sources, documentation, and examples of Mythos up and running, alleges that a group of users accessed the Mythos model without Anthropic's authorization. Mythos is said to be capable of exploiting vulnerabilities in "every major operating system and every major web browser," if the user intends to do so, according to Anthropic. At launch, Anthropic claimed to have found "thousands of high-severity vulnerabilities" in everyday software. Yesterday, Mozilla claimed to have found 271 vulnerabilities within Firefox through its use of Mythos. Anthropic previously said it would restrict access to the model to 11 tech companies through its Project Glasswing program. Restricting users means software makers can fix any identified software issues before bad actors gain access to similar AI models. However, that exclusivity may not have been as strong as first thought, with this group of users who talk in a private Discord group claiming to have had access since day one. If true, they've had access to the software for over two weeks. The group told Bloomberg that it accessed the tool through a member's third-party contractor status with Anthropic. It also used tools typically employed by cybersecurity researchers, along with knowledge of where Anthropic hosts other models, to better predict where Mythos would sit within its systems. A spokesperson for Anthropic told Bloomberg, "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments." It says there's currently no evidence that access went beyond the vendor's own tools. Speaking with Bloomberg, the group says it's not intending to cause any damage with its access to Mythos. That may not be true for other groups who may be trying to gain access to Mythos themselves.
[5]
Anthropic Mythos shaping up as nothingburger
And that unauthorized access? 'A nothing burger,' hacking startup CEO tells El Reg Anthropic's Mythos model is purportedly so good at finding vulnerabilities that the Claude-maker is afraid to make it available to the general public for fear that criminals will take advantage. But early analysis shows that Mythos may not be as dangerous as some would have you believe. Anthropic made Mythos available in preview to a select but ever-growing number of organizations under the title of Project Glasswing so they could find and fix vulnerabilities in their environment before criminals got hold of the purported zero-day machine and caused mayhem. That plan didn't quite work as intended. On Wednesday, an Anthropic spokesperson confirmed to The Register that some non-Glasswing partners may have accessed the model - but not through Anthropic's production API. "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," the spokesperson told us. The AI biz declined to name the third-party vendor, but said that it's a company Anthropic works with on model development. There's no evidence that unauthorized activity extended beyond the third-party vendor's environment or that Anthropic systems are affected, we're told. Bloomberg, which originally reported the unauthorized access, said that "a handful" of people gained access to Mythos by making "an educated guess about the model's online location" based on Anthropic's previous models, and that these details were revealed in the recent Mercor data breach. Mercor is an AI staffing startup that supplies specialized contractors to major AI labs, including Anthropic. Earlier this month, Mercor said that it was "one of thousands of companies" affected by the LiteLLM supply-chain attack. This group of unauthorized users reportedly belongs to a private Discord channel and gained access to Mythos on the same day that Anthropic announced Project Glasswing. Since then, it's been "playing around" with the bug-hunting machine, and doesn't have any interest in using the model for evil, according to Bloomberg. Regardless of what the group is doing with Mythos, their access illustrates a couple of key points. First: it's really hard to keep code under wraps (as also evidenced by Anthropic's earlier Claude Code source leak), especially when the folks who want to kick the tires on the new model are cybersecurity and engineering types - and they didn't even need to hack into any network or database to do it. Insider and supply-chain threats are the real deal. "The Mythos breach didn't require a sophisticated attack," Ram Varadarajan, CEO at Acalvio, a deception-tech firm, told The Register. "It just required a contractor, a URL pattern, and a day-one guess, which means the 'controlled release' model failed at its weakest link before the model's capabilities were ever the issue." Additionally, considering all the hype Anthropic spun around its new model, we shouldn't be surprised the genie is out of the lamp. Anthropic's marketing message for Mythos was effectively a challenge, not dissimilar to a capture-the-flag exercise "Anthropic's marketing message for Mythos was effectively a challenge, not dissimilar to a capture-the-flag exercise, where success includes claims of unauthorized access to Mythos," Tim Mackey, head of risk strategy at supply chain security shop Black Duck, told The Register. That marketing may have outstripped reality. Early reports from Mythos preview users including AWS and Mozilla indicate that while the model is very good and very fast at finding vulnerabilities, and requires less hands-on guidance from security engineers - making it a welcome time-saver for the human teams - it has yet to eclipse human security researchers. "So far we've found no category or complexity of vulnerability that humans can find that this model can't," Mozilla CTO Bobby Holley said, after revealing that Mythos found 271 vulnerabilities in Firefox 150. Then he added: "We also haven't seen any bugs that couldn't have been found by an elite human researcher." In other words, it's like adding an automated security researcher to your team. Not a zero-day machine that's too dangerous for the world. It's a nothingburger. The adversary doesn't need Mythos to hack you Anthropic, in announcing the new model, claimed Mythos identified "thousands of additional high- and critical-severity vulnerabilities." VulnCheck researcher Patrick Garrity, however, put the count as of last week at maybe 40. Or maybe none at all. Another engineer, Devansh, scoured the Mythos-related CVE advisories and Anthropic's exploit code, 44-prompt transcript, and 244-page system card, along with Glasswing partner agreements, red-team writeups. He also looked at Aisle's replication study, which tested Mythos' showcase vulnerabilities on small, cheap, open-weights models and found they produced much of the same analysis. Devansh ultimately concluded that while the bugs it found are real, the true Mythos story is "one of misinformation and hype." For example, the Anthropic-claimed 181 Firefox exploits ran with the browser sandbox turned off and the FreeBSD exploit transcript "shows substantial human guidance, not autonomy." Additionally, the "'thousands of severe vulnerabilities' extrapolates from 198 manually reviewed reports. The Linux kernel bug was found by Opus 4.6, the public model, not Mythos," Devansh said. Another researcher, Davi Ottenheimer, pointed out that the security section (Section 3, pages 47-53) of Anthropic's 244-page documentation "contains no count of zero-days at all. With no CVE list, no CVSS distribution, no severity bucket, no disclosure timeline, no vendor-confirmed-novel table, no false-positive rate." Ottenheimer likens it to "the ending of the Wizard of Oz, a sorry disappointment about a model weaponizing two bugs that a different model found, in software the vendor had already patched, in a test environment with the browser sandbox and defense-in-depth mitigations stripped out." Snehal Antani, co-founder and CEO of offensive AI hacking company Horizon3.ai, told The Register, "attackers didn't need Mythos to accelerate vulnerability research, 4.6 and open source models have already been accelerating the vulnerability process." When asked if the security community should be concerned about unauthorized Mythos access, Antani said no. "In my honest opinion, it's a nothingburger," he told us. "The adversary doesn't need Mythos to hack you." ®
[6]
Why Anthropic's Mythos Is Sparking Global Alarm
Anthropic PBC has said its new artificial intelligence tool, Mythos, is too powerful to release to the general public. The AI giant has described the model as so good at finding vulnerabilities in software and computer systems that it will only be released to a limited number of carefully chosen parties. If tools like Mythos fall into the wrong hands, Anthropic says, it could provide attackers with a powerful new weapon to steal data or disrupt critical infrastructure. That risk was underscored when a small group of unauthorized users in a private online forum gained access to Mythos, according to a person familiar with the matter and documentation viewed by Bloomberg News. The group gained access on the same day that Anthropic first announced its plan to release the model to a handful of companies for testing purposes. For the last several years, cybersecurity companies have promised that artificial intelligence will speed up and automate some of the work of preventing digital breaches. But hackers and cyberspies have discovered the advantages of AI too. The advent of Mythos and models like it that can exploit well-hidden flaws in popular software without human supervision points to a faster-moving, less predictable phase of the cyber arms race. What is Mythos? Claude Mythos Preview is a general purpose AI model that Anthropic says significantly outperforms prior offerings on a range of benchmarks, including for coding and reasoning. The company explained that some AI models have reached a level of coding capability that allows them to beat all but the most skilled humans at finding and exploiting software vulnerabilities. According to Anthropic, Mythos Preview has already found thousands of "zero-day" vulnerabilities during testing, including in every major operating system and every major web browser. "Zero days" are flaws that were previously unknown to the software's developers -- the name implying they have zero days to come up with a patch to resolve the problem. These often represent a gold mine for hackers because they offer a window of free rein inside vulnerable systems. Mythos was able to identify these with even less human intervention than past models, Anthropic said. "Mythos Preview demonstrates a leap in these cyber skills -- the vulnerabilities it has spotted have in some cases survived decades of human review and millions of automated security tests," the company said. In the hands of a ransomware gang or hostile governments, such a tool could lead to more devastating and frequent cyberattacks. Researchers say they have not been given access to independently verify Anthropic's claims about Mythos's performance. Gang Wang, an associate professor of computer science at the University of Illinois, said it's hard to assess the significance of Mythos Preview without more hands-on testing. Who will be given access to it? Anthropic is calling its plan to grant access to a limited group of vetted partners Project Glasswing, after a type of butterfly with transparent wings that allow it to hide in plain sight. The participants include Amazon.com Inc., Apple Inc., Alphabet Inc.'s Google, Microsoft Corp., Nvidia Corp., Palo Alto Networks Inc., CrowdStrike Holdings Inc., Broadcom Inc., Cisco Systems Inc., JPMorganChase and the Linux Foundation, a nonprofit that supports open-source software projects. Anthropic described the project as "an urgent attempt to put these capabilities to work for defensive purposes." These organizations will use Mythos as part of their defensive security work, and Anthropic plans to share the findings of the project so others can benefit. Many companies already use so-called penetration exercises, in which they hire specialists to probe their systems for bugs so they can fix them before hackers get in. Mythos could allow companies to turbocharge that process, allowing them to find more flaws more quickly and narrow the opportunities for potential attacks. Why does Anthropic consider the release of Mythos a "watershed moment"? Anthropic described Mythos Preview as "a watershed moment for security." By their nature, zero-day vulnerabilities are difficult to find, and a small and murky industry has been built around finding them and selling them to government intelligence agencies, often for millions of dollars. According to Anthropic, the vulnerabilities Mythos Preview found were often "subtle and difficult to detect" and included a 27-year-old flaw in OpenBSD, an operating system that Anthropic says has a reputation as one of the most security-hardened in the world. Mythos was also allegedly able to turn vulnerabilities that are known but not widely patched into "exploits" that hackers could use to infiltrate computer networks. For instance, it found and chained together several flaws in the Linux kernel -- the core of the operating system and software that runs most of the world's internet servers -- to allow an attacker to take complete control of the machine. Non-experts also asked Mythos Preview to find ways to remotely take control of computers overnight and came back the next morning to a complete, working exploit, Anthropic said. Mythos is one of several new AI tools able to find zero days or build exploits. OpenAI's Codex Security and Google's "Big Sleep agent" have been developed to find vulnerabilities. OpenAI is also finalizing a product with advanced cybersecurity capabilities that it intends to release to select partners, Axios reported. Researchers at an Israeli cybersecurity startup called Buzz, meanwhile, say they have built an autonomous tool combining five AI agents that has a 98% success rate in exploiting known flaws. What safeguards are in place? The safeguards are a work in progress, according to Anthropic. "We have seen it reach unprecedented levels of reliability and alignment," Anthropic wrote, meaning it aligns with what humans want. "However, on rare occasions when it does fail or act strangely, we have seen it take actions that we find quite concerning." In one instance, a researcher urged an early version of Mythos to try to escape a secured, isolated "sandbox" computer and then find a way to send a message to that person. The tool succeeded but then continued to take "additional, more concerning actions," developing a multistep exploit to gain internet access. Anthropic said it doesn't plan to make Mythos Preview generally available, given its potential for misuse. Still, the company ultimately hopes to enable users to deploy "Mythos-class models" at scale for cybersecurity purposes and other uses. "To do so, we need to make progress in developing cybersecurity (and other) safeguards that detect and block the model's most dangerous outputs," it said. For the highest severity bugs found by Mythos, humans are involved: Specialists validate those discoveries before sending the information on to the people who maintain the code, according to Anthropic. It's a necessary but time-consuming process, but one that may eventually be eliminated as the model improves, the University of Illinois' Wang said. Does Mythos give cybersecurity defenders an advantage over hackers? Maybe, but it might take a while. Anthropic's process for disclosing flaws to the people who maintain the software or computer systems can be lengthy. So far, less than 1% of the potential vulnerabilities Mythos Preview has uncovered have been fully patched, the company said. At the same time, hackers are using AI to dramatically speed up how quickly they find and exploit vulnerabilities once they are disclosed. (Vendors are encouraged, and in some cases required, to publicly disclose vulnerabilities once they are discovered, and ideally provide a fix.) This gives cyber professionals less and less time to patch their networks. In a March 30 blog post, Palo Alto Networks Chief Executive Officer Nikesh Arora warned that the barrier for sophisticated attacks will continue to diminish over the next six months. "A single bad actor will now be able to run campaigns that required entire teams," he wrote. Yair Saban, chief executive officer of Buzz and a veteran of Israel's Unit 8200 cyber unit, said it took six engineers three weeks to build their AI-powered hacking tool. Others, including nation-state cyber spies and criminal hackers, can surely do the same, he said. Anthropic maintains that Mythos Preview and other AI tools like it will ultimately favor defenders. "In the long run, we expect that defense capabilities will dominate: that the world will emerge more secure, with software better hardened -- in large part by code written by these models," the company's Frontier Red Team said in an April 7 blog. "But the transitional period will be fraught."
[7]
Anthropic's Mythos model accessed by unauthorized users, Bloomberg News reports
April 21 (Reuters) - A small group of unauthorized users has accessed Anthropic's new Mythos AI model, Bloomberg News reported on Tuesday, citing documentation and a person familiar with the matter. A handful of users in a private online forum gained access to Mythos on the same day that Anthropic first announced a plan to release the model to a limited number of companies for testing purposes, the report said. The group has been using Mythos regularly since then, though not for cybersecurity purposes, according to the report. "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," an Anthropic spokesperson said. Announced on April 7, Mythos is being deployed as part of Anthropic's "Project Glasswing," a controlled initiative under which select organizations are permitted to use the unreleased Claude Mythos Preview model for defensive cybersecurity. Mythos is a powerful AI model that has sparked concerns among regulators about its unprecedented ability to identify digital security vulnerabilities and potential for misuse. Reporting by Zaheer Kachwala in Bengaluru; Editing by Pooja Desai Our Standards: The Thomson Reuters Trust Principles., opens new tab
[8]
Anthropic investigating unauthorised access of powerful Mythos AI model
Anthropic is investigating whether a group of users gained unauthorised access to its Claude Mythos model, which was only released to a handful of trusted companies because of its advanced cyber security capabilities. The AI lab on Tuesday said it was looking into reports that a group of people had accessed the model through a system set for third-party companies doing work for Anthropic. The company said: "We're investigating a report claiming unauthorised access to Claude Mythos Preview through one of our third-party vendor environments." The incident raises concerns about whether the $380bn AI lab can keep the technology it develops out of the hands of bad actors. Anthropic limited the release of Claude Mythos Preview to a small group of trusted tech companies, citing the risk of people misusing the model to conduct cyber attacks at a scale and speed beyond human capabilities. The risk of unauthorised access will add to anxiety about Mythos, which has sent shockwaves through the markets and prompted high-level discussions among financial institutions and global regulators. One of the people who gained unauthorised access was able to use their permissions as a contractor for Anthropic to tap into Mythos, according to Bloomberg, which first reported the incident. Anthropic said it had no evidence of activity extending beyond the "vendor environment", which third parties use to access systems for model development. AI labs commonly use third-party contractors for tasks such as model testing, although it was not clear which vendor was involved in the incident. Anthropic launched Mythos earlier this month to companies including Amazon, Microsoft, Apple, Cisco and CrowdStrike. The San Francisco-based company said these partners would be able to detect and secure cyber vulnerabilities using Mythos's advanced capabilities before the model was released to the public. Security experts have cautioned that, in the wrong hands, hackers could exploit bugs faster than organisations can fix them. Anthropic's security processes have been under intense scrutiny after descriptions of the model, including its name, were discovered in a publicly accessible data cache in March. The AI lab blamed human error. Earlier this month, internal source code for the company's coding assistant Claude Code was also made public in a second incident.
[9]
Claude Mythos explained: Is Anthropic's most powerful AI model really too dangerous to release to the public?
Anthropic's Mythos AI is being kept behind closed doors as governments assess what faster, AI-driven vulnerability discovery means for cybersecurity. Anthropic's unveiling of its Claude Mythos Preview model alongside Project Glasswing is prompting widespread scrutiny as experts warn that the artificial intelligence (AI) system's capabilities could accelerate the discovery and exploitation of software vulnerabilities. Anthropic is keeping Mythos locked inside Project Glasswing -- the company's attempt to contain and direct the model -- thus limiting access to a small group of big tech companies focused on cybersecurity. Anthropic's decision not to release Mythos publicly has quickly fueled claims that the model is "too powerful" for wider use. However, that containment has already come under pressure. Anthropic is investigating reports that a small group of users gained unauthorized access to the model through a third-party environment, raising fresh questions about how tightly systems like this can be controlled. "Anthropic's Mythos Preview is a warning shot for the whole industry -- and the fact that Anthropic themselves chose not to release it publicly tells you everything about the capability threshold we have now crossed," Camellia Chan, CEO and co-founder of X-PHY, a hardware-based cybersecurity company, told Live Science. But what is Mythos really capable of, and can it be reined in? What is Mythos, and what is it capable of? Mythos is, by Anthropic's own description, its most capable model to date, with unusually strong performance in coding and long-context reasoning. In testing, that capability translated into real output -- the model identified thousands of serious vulnerabilities across major operating systems and browsers, including flaws that had gone unnoticed for decades. Mythos sits at the top of Anthropic's Claude models, but calling it an "update"' undersells its capabilities. Based on the information Anthropic representatives have shared and the details that have surfaced through leaks, the system is built to handle large, messy codebases without losing the thread halfway through. Unlike earlier models, which often drop off mid-task, Mythos can read through software, flag the gaps, and turn those gaps into something usable. According to Anthropic representatives, Mythos can turn both newly discovered flaws and already-known vulnerabilities into working exploits, including against software for which the source code is unavailable. The difference between Mythos and earlier models is that the new one doesn't stop. Whereas earlier AI models tend to stall or need a nudge, Mythos keeps working through the problem, testing and adjusting until it lands on an exploitation that works. Anthropic has not shared much about how Mythos is built or its underlying architecture.. But what's clear is that the AI is not just producing answers to questions. It can work with code, run checks and then use those results to decide what to do next. That puts it closer to actually testing systems, rather than just analyzing them. It marks a key shift from how earlier models behave. Instead of pointing out where something might break, it can try things, see what happens, and change its approach if it needs to. It also seems able to carry work across multiple steps without resetting each time; it picks up where it left off instead of starting from scratch. That doesn't mean it is acting independently, but it does indicate it can get further through a task before a human needs to step in. Anthropic said the model performed so strongly on existing cybersecurity benchmarks that those benchmarks became less useful, prompting evaluation in more realistic, real-world scenarios. How did scientists test Mythos? In Anthropic scientists' own testing, the model identified vulnerabilities in modern browser environments and chained multiple flaws into working exploits, including attacks that escaped both browser and operating system sandboxes. In practice, that means linking smaller weaknesses that might be harmless on their own into something that can reach deeper into a system. Sandboxes are meant to keep software contained; breaking out of them lets code access parts of the system it shouldn't. "In one case, Mythos Preview wrote a web browser exploit that chained together four vulnerabilities, writing a complex JIT heap spray [a trick attackers use to smuggle malicious code into memory and then make the system run it] that escaped both renderer and OS sandboxes," the scientists said in the report released April 7. "It autonomously obtained local privilege escalation exploits on Linux and other operating systems by exploiting subtle race conditions and KASLR-bypasses. And it autonomously wrote a remote code execution exploit on FreeBSD's NFS server that granted full root access to unauthenticated users by splitting a 20-gadget ROP chain over multiple packets." In addition, Mythos could turn both newly discovered flaws and already-known vulnerabilities into working exploits, often on the first try, Anthropic representatives said. In some cases, human engineers without formal security training could use the model to produce those exploits. The most worrying aspect of Mythos' capabilities, Chan said, is how earlier versions are said to have breached their sandbox and accessed external systems -- raising doubts about how well the system can be contained. Chan pointed directly to those concerns, telling Live Science that Mythos demonstrated "unsanctioned autonomous behavior." "Once AI can produce working zero-day exploits at speed, organizations lose the breathing space they have traditionally relied on to detect, patch, and recover," Chan said. Anthropic representatives said they could publicly describe only a fraction of the vulnerabilities in widely used software that the model had found, as most remained unpatched -- making independent verification difficult. What is Project Glasswing, and what does it mean for Mythos? Project Glasswing is Anthropic's attempt to contain and direct Mythos' capabilities. Rather than releasing Mythos as a general-purpose model, the company is providing access through a controlled framework that brings together technology companies and security organizations. The stated aim is to use the model to identify and fix vulnerabilities in widely used software before they can be exploited. This is not a one-off. AI companies are starting to hold back their most capable models and limit who gets access, especially where misuse is a real concern. David Warburton, director of F5 Labs Threat Research, said this kind of collaboration is a positive step, but he cautioned that it sits within a wider landscape where state-backed cybercriminals are already investing heavily in offensive and defensive capabilities. "What is changing meaningfully is the pace," he told Live Science, noting that advances in AI are accelerating both vulnerability discovery and exploitation. Software vulnerabilities sit at the foundation of much of today's digital infrastructure, and the ability to find and exploit them quickly has always been a decisive advantage. Ilkka Turunen, field chief technology officer at software company Sonatype, added that the industry has already been moving in that direction, with AI contributing to a rise in both code production and adversarial activity. "It's not uncommon now to see AI-generated malware," he said, adding that many current security findings are likely already AI-assisted. What systems like Mythos appear to do is compress the timeline further. Vulnerabilities can be identified, tested and weaponized more quickly, thus reducing the window between discovery and exploitation. Turunen said this means that "timelines to exploitation will continue to compress, new vulnerabilities will be discovered and spread faster, and attacks will continue to be completely autonomous." Is Mythos really "too powerful to release"? The idea that Mythos is "too powerful" to release caught on quickly following its launch, but it's not that simple, the experts who Live Science consulted said. There are obvious risks. A system that can generate working exploits at speed lowers the barrier to attackers and makes it easier to exploit vulnerabilities at scale. That risk is not theoretical. Anthropic's own testing suggests the model can already do this reliably and at volume. The pieces themselves are not new. What stands out is that they are all in one place, working together. That makes the whole process faster and easier to run in an end-to-end fashion. Chan argued that focusing on software-based controls alone will not be enough to address that shift. "The industry keeps making the same mistake: relying on software layers to solve problems created within the software layer," she said, adding that stronger protections at the hardware level are needed to prevent systems from being fully compromised. The longer-term impact of Mythos is likely to depend less on the model itself and more on how quickly similar capabilities become widely available. Warburton warned that the risk is not a single dramatic incident but a gradual change in how digital systems are trusted and used. "We're already seeing early signs of an internet increasingly shaped by automation," he said, pointing to a growing volume of machine-generated content and activity. If systems like Mythos accelerate that trend, the result could be an environment where both legitimate activity and malicious behavior are increasingly driven by automated processes, making it harder to distinguish the two, Warburton warned. At the same time, the abundance of vulnerabilities being discovered in key systems we use every day may outpace the ability to fix them, especially if we start to see similar AI models becoming more widely available. Anthropic's decision to keep Mythos within the confines of Glasswing places it in a controlled setting. Whether that remains the case will depend on how quickly comparable systems emerge elsewhere and how effectively the cybersecurity industry adapts to a world in which the time between a vulnerability's emergence and exploitation continues to shrink.
[10]
Anthropic's New Mythos A.I. Model Sets Off Global Alarms
Paul Mozur, based in Taipei, Taiwan, and Adam Satariano, in London, cover global technology issues. When Anthropic told the world this month that it had built an artificial intelligence model so powerful that it was too dangerous to release widely, the company named 11 organizations as partners to help mount a defense. All were from the United States. Within two weeks, the model, called Mythos, had set off a global scramble unlike anything yet seen in the A.I. era. Mythos, which Anthropic has said is uncannily capable of finding and exploiting hidden flaws in the software that runs the world's banks, power grids and governments, had become a geopolitical chip -- and a U.S. company held it. World leaders have struggled to figure out the scale of the security risks and how to fix them, with Anthropic sharing Mythos with only Britain outside the United States. The Bank of England governor warned publicly that Anthropic may have found a way to "crack the whole cyber-risk world open." The European Central Bank began quietly questioning banks about their defenses. Canada's finance minister compared the threat to the closure of the Strait of Hormuz. For U.S. rivals like China and Russia, Mythos underscored the security consequences of falling behind in the A.I. race. One Russian pro-Kremlin outlet called the model "worse than a nuclear bomb." The responses illustrated a reality that A.I. researchers have long warned about mostly in theoretical terms: Whoever leads in building the most powerful A.I. models will gain outsize geopolitical advantages. Major A.I. breakthroughs are beginning to function less like product launches and more like weapons tests, and most nations want to understand how the technologies work and what protections are needed. As foundational A.I. "models become more consequential, access becomes more geopolitical," said Eduardo Levy Yeyati, a former chief economist at the Central Bank of Argentina and a regional adviser on growth and A.I. at the Inter-American Development Bank. "I would take this episode as a policy wake-up call. Governments can no longer ignore the issue." Even the U.S. government, which has been embroiled in a clash with Anthropic over the use of A.I. in warfare, has taken notice of Mythos. On Friday, Dario Amodei, Anthropic's chief executive, met with White House officials after some in the Trump administration noted the potential for the new model to wreak havoc on computer systems. Anthropic, which is based in San Francisco, told The New York Times that it was keeping access to Mythos small because of safety and security concerns. It has focused on sharing the model with more than 40 organizations that provide technology used in maintaining critical global infrastructure like the internet or electricity grids. Anthropic named 11 of the organizations, including Amazon, Apple and Microsoft, that pledged to help develop security fixes for vulnerabilities identified by the model. The company said that it had no immediate timeline for widely expanding access, but that it would work with the U.S. government and industry partners to determine next steps. It said that it had been bombarded by calls from governments, companies and other organizations seeking access and information, but that these organizations could have varying levels of expertise to safely evaluate such a powerful A.I. model. Sign up for Your Places: Global Update. All the latest news for any part of the world you select. Get it sent to your inbox. Anthropic added that it expected other groups to release A.I. models with similar cyber capabilities more widely within at least 18 months, giving organizations limited time to make the necessary security fixes. On Tuesday, Anthropic said it was investigating a report that unauthorized users gained access to a version of Mythos. The scramble over Mythos comes at a moment of minimal international cooperation on A.I. Governments are viewing one another with suspicion as corporations race to outpace rivals. There is no equivalent of the Nuclear Nonproliferation Treaty, no shared inspections and no agreed-upon rules for how to handle something like Mythos. When Anthropic announced the model, many experts praised the company's caution in limiting who gets to try the model, but expressed concerns about the lack of international coordination to deal with the risk. Britain was the only other nation to gain access. Its A.I. Security Institute, a government-backed organization, tested Mythos and published an independent evaluation last week, confirming that it could carry out complex cyberattacks that no previous A.I. model had completed. "This represents a step up in A.I. cyber capabilities," Kanishka Narayan, Britain's A.I. minister, said last week on social media, saying the country was taking steps to protect "critical national infrastructure." Others got less information. The European Commission, the executive branch of the 27-nation European Union, has met with Anthropic at least three times since the Mythos release, an E.U. official said. But the company has not provided access to the model because the two sides have not agreed on how to share it with the commission, the official said. In a statement, the commission said it was "assessing possible implications" of Mythos, which "exhibits unprecedented cyber capabilities." Claudia Plattner, the president of Germany's cybersecurity agency, known as B.S.I., said it had not received access to Mythos, but she met with Anthropic employees in San Francisco recently for "meaningful insight" into how it works. The capabilities point to "a paradigm change in the nature of cyber threats," Ms. Plattner said in a statement. Among U.S. rivals, the response has been more muted. Despite Anthropic's recent clash with the Trump administration, Mr. Amodei has made clear that A.I. should be used to defend the United States and other democracies and defeat autocratic adversaries. Neither Beijing nor Moscow has made a major public statement on Mythos. Inside China, researchers and the broader A.I. community have been watching closely, according to analysts studying the country's tech community. Many of the country's banks, energy companies and government agencies run on the same software in which Mythos found vulnerabilities -- but for now, they have no seat at the table. "For China I think this is the second wake-up call after ChatGPT," said Matt Sheehan, a senior fellow at the Carnegie Endowment for International Peace. He added that a U.S. policy to prevent China from obtaining the most sophisticated semiconductors for building advanced A.I. systems was helping to extend the U.S. lead. Some A.I. researchers in China have privately expressed concern that the country could fall further behind, missing out on advantages that come with building a foundational model first, said Jeffrey Ding, a professor of political science at George Washington University. Liu Pengyu, a spokesman for the Chinese Embassy in Washington, said China was not familiar with the specifics of Mythos but supported a peaceful, secure and open cyberspace. Mythos is the latest sign of a growing global A.I. divide. Nations without powerful computing infrastructure and A.I. models risk being left dependent on companies like Anthropic, Google and OpenAI while having little sway over how their products are designed and safeguarded, Mr. Yeyati said. "The idea that access to frontier A.I. is something a company can unilaterally restrict, using criteria that are opaque and unappealable, should be a real concern," he said.
[11]
Anthropic's Mythos Model Is Being Accessed by Unauthorized Users
Anthropic is investigating the report of unauthorized access, and the company says it currently has no evidence that the access is impacting any of its systems. A small group of unauthorized users have accessed Anthropic PBC's new Mythos AI model, a technology that the company says is so powerful it can enable dangerous cyberattacks, according to a person familiar with the matter and documentation viewed by Bloomberg News. A handful of users in a private online forum gained access to Mythos on the same day that Anthropic first announced a plan to release the model to a limited number of companies for testing purposes, said the person, who asked not to be named for fear of reprisal. The group has been using Mythos regularly since then, though not for cybersecurity purposes, said the person, who corroborated the account with screenshots and a live demonstration of the model. Anthropic has said Mythos is capable of identifying and exploiting vulnerabilities "in every major operating system and every major web browser when directed by a user to do so." As a result, the company has taken pains to ensure that the technology is only available to a select batch of software providers through an initiative called Project Glasswing, with the goal of allowing those firms to test and safeguard their own systems from potential cyberattacks. The unauthorized access, which has not previously been reported, highlights the challenge Anthropic faces in fully preventing its most powerful -- and potentially dangerous -- technology from spreading beyond approved partners. It also raises questions about whether anyone else may be using Mythos without permission, and for what purpose. The users relied on a mix of tactics to get into Mythos. These included using access the person had as a worker at a third-party contractor for Anthropic and trying commonly used internet sleuthing tools often employed by cybersecurity researchers, the person said. The users are part of a private Discord channel that focuses on hunting for information about unreleased models, including by using bots to scour for details that Anthropic and others have posted on unsecured websites such as GitHub. "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," a spokesperson for Anthropic said in a statement. The company said it currently has no evidence that the access reported by Bloomberg went beyond a third-party vendor's environment or that it is impacting any of Anthropic's systems. Anthropic has so far let Apple Inc., Amazon.com Inc., Cisco Systems Inc. and dozens of other organizations begin testing out Mythos. Amazon, a key Anthropic partner and backer, also offers Mythos through its Bedrock platform to a limited list of approved organizations. In recent days, a growing number of financial institutions and government agencies on both sides of the Atlantic have been seeking to be added to the list of early testers to safeguard their own systems against malicious actors. To access Mythos, the group of users made an educated guess about the model's online location based on knowledge about the format Anthropic has used for other models, the person said, adding that such details were revealed in a recent data breach from Mercor, an AI training startup that works with a number of top developers. Crucially, the person also has permission to access Anthropic models and software related to evaluating the technology for the startup. They gained this access from a company for which they have performed contract work evaluating Anthropic's AI models. Bloomberg is not naming the company for security reasons. The group is interested in playing around with new models, not wreaking havoc with them, the person said. The group has not run cybersecurity-related prompts on the Mythos model, the person said, preferring instead to try tasks like building simple websites in an attempt to avoid detection by Anthropic. The person said the group also has access to a slew of other unreleased Anthropic AI models.
[12]
Unauthorized users gained access to Anthropic's restricted Mythos AI model
A small group communicating via a private Discord channel accessed Claude Mythos Preview by guessing the model's URL on the same day Anthropic announced Project Glasswing. Anthropic says it is investigating and has found no evidence of impact to its core systems. The breach highlights the risks of restricting access to frontier AI capabilities through vendor environments rather than technical controls. A small group of unauthorised users gained access to Claude Mythos Preview, Anthropic's closely restricted cybersecurity AI model, on the same day the company publicly announced the model's existence, apparently by guessing the model's URL based on familiarity with Anthropic's URL formatting conventions for other models, according to a Bloomberg News report published on 21 April. The group, whose members communicate via a private Discord channel dedicated to gathering intelligence on unreleased AI models, has been using Mythos regularly since gaining access and provided Bloomberg with proof in the form of screenshots and a live demonstration. Anthropic confirmed it is investigating the claims: "We're investigating a report claiming unauthorised access to Claude Mythos Preview through one of our third-party vendor environments." The company said there is currently no evidence that the access has impacted Anthropic's core systems or extended beyond the vendor environment in question. An individual currently employed at a third-party contractor working with Anthropic appears to have been involved, at least in part, in facilitating the group's access, the outlet reported. The significance of the breach is inseparable from the nature of the model. Anthropic announced Mythos Preview and the accompanying Project Glasswing initiative on 7 April 2026. The company withheld the model from general release specifically because of its offensive cyber capabilities: in testing, Mythos autonomously discovered thousands of previously unknown zero-day vulnerabilities across every major operating system and every major web browser, and wrote working exploits, including chaining together four vulnerabilities in a browser to escape both renderer and operating system sandboxes, a feat that would typically require months of expert work. Anthropic engineers with no formal security training asked the model to find remote code execution vulnerabilities overnight and woke to complete, working exploits. The company said it was withholding the model because the same capabilities that make it powerful for defence could be devastating in the wrong hands. Project Glasswing was designed to navigate that tension: rather than a public release, Anthropic extended Mythos access to 12 named launch partners, Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, Nvidia, and Palo Alto Networks, plus Anthropic itself, for defensive security work, with around 40 additional organisations also granted access. The initiative also included $100 million in usage credits and $4 million in direct donations to open-source security organisations. The restricted rollout was Anthropic's explicit attempt to give defenders a head start over attackers before a model with these capabilities proliferated. The unauthorised access undermines that logic without entirely defeating it: the group in question reportedly described its intentions as curiosity-driven, but intent is not a reliable safeguard when the tool in question can autonomously produce weaponisable exploits. The breach also carries political weight, arriving the day after President Trump said on CNBC that a Pentagon deal with Anthropic was "possible" and that the company was "shaping up." Anthropic is simultaneously suing the Department of Defense over its blacklisting as a supply chain risk, with that dispute centred specifically on the question of how safely its AI can be controlled. An unauthorised access incident, even one apparently routed through a third-party vendor environment rather than Anthropic's own infrastructure, gives ammunition to those in the administration who have argued that Anthropic cannot reliably govern access to its own tools. It also complicates the company's case in court, which rests in part on its argument that it applies rigorous safety and access controls to its most capable models. The mechanism of access, an educated guess about the model's URL, enabled by knowledge of Anthropic's conventions for other model endpoints, points to a specific failure mode that is distinct from a conventional data breach or intrusion. The group did not bypass Anthropic's security architecture so much as exploit the gap between Anthropic's controls on its own systems and those of a third-party vendor with access credentials. That distinction matters for the investigation and for how the incident should be read by the wider AI industry: it is a vendor security failure as much as a model governance failure. But the result is the same.
[13]
Some Unknown Group Is Reportedly Using Claude Mythos Without Permission
In a very cagily-written story from Bloomberg, Anthropic confirmed Tuesday that it has received a report that an unauthorized mystery group is accessing Claude Mythosâ€"the model it says is too dangerous to release. “We’re investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments,â€Â says an Anthropic spokesperson’s statement to Bloomberg. Bloomberg apparently confirmed the apparent breach by looking at a live demo and screenshots sent over by a member of the group responsible for the unauthorized access. In understandably obfuscatory language, Bloomberg explains that an anonymous source says they are a member of an unnamed group that has abused their access “as a worker at a third-party contractor for Anthropic†and employed “commonly used internet sleuthing tools often employed by cybersecurity researchers,†to gain some form of access to the model. But don’t worry, this secret group that apparently has access to the most feared piece of technology in the world is “interested in playing around with new models, not wreaking havoc with them,†the source apparently explained to Bloomberg. The sequence of events in the apparent breach looks something like this: So to recap: Anthropic says it has the scariest AI model in the world, and for what it’s worth, a whole lot of powerful institutions seem to believe it. If we take Anthropic at its word, we’re all trusting it not to abuse this power that it and only it controls. However, some unknown entity has accessed this scary AI model, but if we take them at their word, they just used it for some vibe coding tests and they swear they’re not doing anything evil with it.
[14]
Anthropic reportedly 'lost control' of its most dangerous AI model -- and that should worry everyone
Just last week, Anthropic launched Claude Opus 4.7, described as a safer public-facing version of Claude Mythos, a model reportedly considered too dangerous for broad release. Now, the company is facing uncomfortable questions after reports claimed an unauthorized group gained access to Claude Mythos, a highly restricted internal model built for advanced cybersecurity tasks. If accurate, this may be one of the clearest examples yet that the biggest risk in AI isn't the model, but those who can access it. According to a Bloomberg report, Claude Mythos was designed to identify and exploit vulnerabilities in software systems, making it far more sensitive than a standard chatbot. Access was reportedly limited to select partners under a private security initiative, not the general public. Yet...an outside group now claims it found a way in. What allegedly happened Reports say the group may have accessed Mythos through a third-party contractor environment rather than Anthropic's main internal systems. Anthropic has reportedly said it is investigating and has no evidence that its core systems were breached. To be clear, this does not appear to be a case of rogue AI behavior or some dramatic sci-fi scenario of a bot escaping from its maker. Instead, the problem is far more familiar in the tech world, such as credentials, vendor access, weak boundaries and security gaps. In other words, this is a very human problem with a potentially dangerous AI. Why this story is troubling Besides the issue of a powerful model getting into the wrong hands, this alleged breach emphasizes what has been a topic of public conversation around AI for years: frontier AI models are becoming high-value assets, and valuable assets attract attackers. This concern is the immediate issue, but AI anxieties such as job displacement, misinformation at scale, autonomous misuse and Superintelligent systems as a whole still weigh heavily on the public. If big tech companies are building models powerful enough to influence cybersecurity, finance or defense, they also need to secure them as they would critical infrastructure. This means strong vendor oversight, tight identity controls, compartmentalized access, real-time monitoring and fast incident response. It doesn't take a rocket scientist to understand that building a powerful model is only half the challenge, and protecting it is the other half. Why Claude Mythos stands out What makes this report especially concerning is that Claude Mythos was reportedly treated as sensitive enough to keep behind closed doors. That creates a difficult optics problem. If a company signals a model is too powerful for public release, but outsiders can allegedly reach it anyway, we've got to wonder whether AI governance is keeping pace with AI development. And that reminds me of a bigger trend that absolutely no one is talking about: AI labs are entering a new era where they are no longer just software companies. They are becoming responsible for protecting systems that are valuable and important to governments, businesses and society. Obviously, that means the security expectations should start to resemble those placed on banks, cloud providers and critical infrastructure operators. The public debate over whether AI is getting too smart is clearly being overshadowed by the question of whether AI companies are secure enough. Obviously, they aren't. The takeaway If these reports are accurate, the Claude Mythos incident should serve as a warning to other AI companies to strengthen their security practices. Humans are building extraordinary tools faster than they can fully protect them -- and that may become the defining AI risk of this decade. Follow Tom's Guide on Google News and add us as a preferred source to get our up-to-date news, analysis, and reviews in your feeds.
[15]
The Guardian view on Anthropic's Claude Mythos: when AI finds every flaw, who controls the internet? | Editorial
Tech can scale cyber-attacks and defences alike, raising questions about private power, public risk and the future of a shared internet Anthropic announced its latest AI model, Claude Mythos, this month but said it would not be released publicly, because it turns computers into crime scenes. The company claimed that it could find previously unknown "zero-day" flaws, exploit them and, in principle, link these weaknesses in order to take over major operating systems and web browsers. Mythos did so autonomously, writing code and obtaining privileges. The implications are significant. It's like a burglar being able to target any building, get inside, unlock every door and empty every safe. The Silicon Valley company has so far named 40 organisations as partners under Project Glasswing to help mount a defence - asking them to "patch" vulnerabilities before hackers get a chance to exploit them. All are American, sitting at the heart of the US-led digital system. Anthropic shared Mythos with only Britain outside the US, allowing the AI Security Institute to test frontier models. After seeing it up close, British ministers warned: AI is about to make cyber-attacks much easier and faster, and most businesses are not ready. Banks in Europe are likely to test it next. This may not be a moment too soon. Reports of unauthorised access surfaced this week - raising the question whether any private company can be trusted with a capability like this. Mythos doesn't necessarily create a new kind of cyber threat. It turns a latent weakness into a systemic risk. Hacking has traditionally been hard and time-consuming, requiring skills that few people have. But AI tools are spreading fast, putting system breaches within reach of many - not just experts. A poacher can also be turned into a gamekeeper. Mozilla tested Mythos on its Firefox browser: it found 10 times more flaws than before - and fixed them. Crucially, none were ones a human couldn't spot. What changes is that AI discovers "cyber vulnerabilities" quickly, cheaply and at scale. The US government's embrace of Anthropic marks a shift. In February, the Pentagon deemed the company a "security risk" and cut it off from lucrative deals after it refused to allow its technology to be used for mass surveillance or autonomous weapons. OpenAI got the contract instead. Anthropic, with its Claude chatbot, has long pitched itself as the ethical alternative among its competitors - though its image was dented by a $1.5bn piracy settlement last year. Mythos is powerful, but Anthropic's PR has shaped the narrative as much as the technology. There is also a question of how advanced Mythos really is. Researchers have shown that smaller, cheaper models deployed at scale can do similar feats. What seems a breakthrough may reflect a broader shift across the field. The White House thinks that Anthropic has strategic value - inviting it back into the fold and signalling a shift from treating AI firms as contractors to partners. That raises a deeper concern: whether private firms' control of critical infrastructure risk is wise - especially if less responsible actors gain technical leverage. Clearly, whoever - state or firm - creates the most powerful AI models will gain geopolitical advantages over friends and foes alike. Without a framework for international coordination over cybersecurity, however, there risks being not one secure internet, but a number of competing ones - each "patching" its own system and fully trusting none of the others. It would no longer be a global commons. Instead, the web would be carved into security alliances, guarded more closely, even as something wider slips quietly away.
[16]
Discord group says it accessed Anthropic's unreleased Claude Mythos
An anonymous group of Discord users says it hacked its way into accessing Claude Mythos Preview, the new AI model Anthropic claims is too powerful for a public release. Anthropic says Claude Mythos "is capable of identifying and then exploiting zero-day vulnerabilities in every major operating system and every major web browser," and has granted access to the model to a select group of partners via an initiative called Project Glasswing. The AI company said this invite-only approach would let tech leaders "secure the world's most critical software." But it might need to pay more attention to its own software security. As Bloomberg reports, the Discord users didn't gain access through a sophisticated hack, but by guessing the online location for the model based on past Anthropic naming conventions -- as found in the recent data breach at Mercor, an AI startup, earlier this month. Once they identified where to access Claude Mythos, the group had to employ additional tactics. One member of the group already had privileged access as a worker at a third-party contractor for Anthropic, Bloomberg reports. The group was part of a private Discord channel that focuses on hunting information about unreleased models. A member of the group told Bloomberg that they were not using Claude Mythos for nefarious purposes, but for tasks like building simple websites. However, they also claimed to have access to even more unreleased Anthropic models. The group provided enough evidence to convince Bloomberg they had indeed breached Anthropic's security. Anthropic confirmed in a statement to Bloomberg it was aware of the claim and investigating. At this time, there is no indication that Claude Mythos has been breached by other unauthorized parties. Still, given that Anthropic described Claude Mythos as a paradigm-shifting security threat that could "reshape cybersecurity" as we know it, any unauthorized access is -- to say the least -- concerning.
[17]
Mythos accessed by unauthorized users as Anthropic says 'We're investigating' -- Cracks may be showing in Project Glasswing as unknown users access model via third parties
* Unauthorized users claim to have access to Anthropic's Claude Mythos * The users gained access with guesswork and third-party access * The model is capable of exploiting software vulnerabilities at scale Anthropic's Mythos model, which is capable of spotting hundreds of zero-day vulnerabilities in software, has been accessed by unauthorized users. A Bloomberg report, citing documentation and a person familiar with the matter, says that the model is being used regularly by unauthorized users. Mythos' capabilities are so dangerous that Anthropic has restricted access to the model to a select handful of companies to harden their defenses as part of Project Glasswing, which may be starting to show cracks. Cracks are showing in Project Glasswing Anthropic has previously said that the Mythos model is capable of spotting critical vulnerabilities "in every major operating system and every major web browser when directed by a user to do so." To put this in perspective, Mozilla CTO Bobby Holley recently revealed that Mythos was able to find 271 vulnerabilities in the latest build of Firefox. That is why Mythos would be so dangerous in the wrong hands. The software would allow a threat actor to immediately identify the most vulnerable cracks and either exploit them themselves or sell them to other nefarious actors. Bloomberg says that the users belong to a group with an interest in unreleased AI models who have previously accessed other unreleased Anthropic models. To access Mythos in particular, the users relied on the expertise of one person who has been given permission to access Anthropic models and software for evaluation purposes on behalf of a third-party company. The group also relied on details from a data breach that hit AI-recruitment startup Mercor. The details allowed the group to guess the whereabouts of the model's online location, while also using expertise gathered from the format of other Anthropic models. While the group has apparently said it has no interest in using Mythos for malicious purposes - and instead is interested purely in testing the model - it has raised serious questions about the security of Mythos. "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," a spokesperson for Anthropic said in a statement, adding that the company has no evidence that the access has extended beyond a third-party vendor's environment. Anthropic recently detected exploit attempts and hidden evaluation awareness within the Mythos model, which it dubbed as 'strategic manipulation' features. Follow TechRadar on Google News and add us as a preferred source to get our expert news, reviews, and opinion in your feeds.
[18]
Discord users breach access controls to reach Anthropic's Mythos model
This AI security breach shows why your data still needs protection A recent security incident involving Anthropic has highlighted just how fragile the safeguards around advanced AI systems can be. A Wired report suggests that a small group of users, operating through private Discord channels, managed to gain unauthorized access to the company's highly restricted Mythos AI model - an experimental system designed for cybersecurity applications. A Breach That Exposes Bigger Risks Around AI Control The incident appears to have occurred almost immediately after Mythos was made available to a limited group of trusted partners. According to multiple reports, the unauthorized users gained access through a third-party vendor environment, rather than directly breaching Anthropic's core systems. Recommended Videos Some accounts suggest that members of a private Discord community were able to exploit access permissions or identify entry points using publicly exposed information, effectively bypassing restrictions placed on the model. Importantly, there is no confirmed evidence that the system was used for malicious activity. In fact, reports indicate that the users interacted with the model in relatively limited ways. Still, the fact that access was obtained at all is the real story. Mythos itself is not just another AI model. It is designed to identify vulnerabilities in software systems and simulate cyberattacks - making it one of the most sensitive AI tools currently under development. That dual-use capability is precisely why access was tightly restricted in the first place. Why This Incident Matters Beyond One Breach At a glance, this might seem like a contained security lapse. In reality, it underscores a broader issue facing the AI industry: control is becoming harder than capability. AI models like Mythos are built to find weaknesses in systems, which means that in the wrong hands, they could accelerate cyberattacks rather than prevent them. Researchers and officials have already warned that such tools could pose significant risks if misused, given their ability to automate complex attack chains. What makes this case particularly notable is how the breach happened. It wasn't a sophisticated hack targeting core infrastructure. Instead, it appears to have leveraged gaps in the surrounding ecosystem -- contractors, permissions, and access management. That distinction matters. It suggests that securing advanced AI isn't just about the model itself, but the entire environment around it. Why It Should Matter To You For everyday users, this incident may feel distant, but its implications are closer than they seem. AI systems like Mythos are being developed to secure everything from browsers to financial systems. If those same tools are exposed prematurely or improperly controlled, the risk shifts from defensive to potentially offensive. Even without malicious intent, unauthorized access introduces uncertainty. It raises questions about how well companies can protect technologies that are increasingly critical to digital infrastructure. In simpler terms, if AI is being built to protect the internet, it needs to be protected first. What Happens Next For Anthropic And AI Security Anthropic has already launched an investigation into the incident and has stated that the breach was limited to a third-party environment, with no evidence of broader system compromise. However, the timing of the breach - coinciding with the model's early rollout - will likely intensify scrutiny around how such systems are tested and shared. Regulators and industry bodies are already paying close attention to high-risk AI models, and incidents like this only add urgency to those discussions. Going forward, expect stricter access controls, tighter vendor oversight, and potentially new frameworks for handling sensitive AI tools. Because if this episode proves anything, it's that the challenge is no longer just building powerful AI - it's keeping it contained.
[19]
Mythos access by Discord group reveals real danger of AI-powered hacking | Fortune
A Discord group's unauthorized access to Anthropic AI's powerful Mythos model is doing more than raising questions about the guardrails around powerful AI cybersecurity tools. It's exposing a bigger problem for the cybersecurity industry: AI can now find flaws and exploit them so quickly that defenders may be the ones left truly exposed. A group of AI-fueled Discord info-seekers - one of them linked to a third-party vendor of the AI startup - managed to access the highly gatekept cybersecurity defense system in February, the same day of its debut. Using a mixed bag of insider access, web-scouring bots, and some raw ingenuity, the breach is triggering a fresh wave of alarm across an already spooked industry. Ironically, as the Discord incident was unfolding, the Cloud Security Alliance - in a rapid-response briefing published days after Mythos was unveiled - warned that AI was accelerating vulnerability discovery faster than organizations could keep up, creating the perfect storm for defenders. Finding thousands of flaws and zero days across hundreds of software systems, the introduction of Mythos has effectively shrunk the patch window defenders have relied on for years - from days to just a few hours. If released in the wild and adopted by hackers, security teams will inevitably be tasked with building an entirely new playbook to help decide how to prioritize and fix what matters - and there's still no guarantee they can stem the cyber bleeding. More than 250 security leaders helped shape the briefing, which argues the challenge is no longer just finding flaws, but deciding which ones actually pose real risk - and fixing them before they can be turned into working exploits. It's a shift some security experts say the industry is still underestimating. The problem is no longer discovery alone. It is remediation, accountability, and whether defenders can keep up as AI moves from identifying vulnerabilities to showing how they can be exploited in the real world. The Mythos moment may ultimately be less about a single powerful cybersecurity model and more about what happens in the shrinking window between finding a flaw and weaponizing it. Anthropic's answer, for now, is Project Glasswing - a tightly controlled effort to use Mythos to help secure critical software before comparable models become more widely available. But even that highlights the larger issue at hand: the industry knows what is coming and is still scrambling to build that much-needed playbook in time to defend against larger threats, such as nation-state or ransomware attackers. If a group of AI nerds could get into Mythos - allegedly without malicious intent - imagine the fallout if the next ones to slide through that door were actual criminals.
[20]
Rogue Group Gains Access to Anthropic's Dangerous New Mythos AI
Remember Claude Mythos, Anthropic's new AI model that it hyped as being so powerful that it was too dangerous to release to the public? Well, it's already been broken into, according to new reporting from Bloomberg. A small group of Discord users gained access to a preview version of Mythos, a source told the outlet, on the same day Anthropic announced it would be exclusively releasing the model to a select ring of companies. "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," a spokesperson for Anthropic told Bloomberg in a statement. The company added that it hasn't found any evidence of unauthorized access to Mythos. The group supposedly doesn't have any nefarious intentions. It has been regularly using Mythos since gaining access to it, according to Bloomberg, though only for non-cybersecurity related purposes. The source described the group as being interested in "playing around" with new models, rather than wreaking havoc. But their alleged feat does raise the alarming possibility that other less scrupulous actors could have gotten their hands on Mythos without Anthropic knowing. According to Bloomberg's source -- described only as a person familiar with the matter -- the users are part of a private Discord server dedicated to digging up information on unreleased AI models. They gained access by making an educated guess about where Mythos was stored online based on how Anthropic has stored its other models, some of the details of which were revealed in a recent data breach from an AI startup that works with large AI companies. The source also claimed to have permission to access Anthropic tech used to evaluate its models through another company that did contract work for Anthropic. No serious harm seems to have come from the breach, but it's a bad look for Anthropic, which earned brownie points for holding off from unleashing Mythos to the public. It instead chose to give access to around forty organizations, including tech giants like Apple, Microsoft, and Amazon. The Dario Amodei-led company has described Mythos in terms of being a cybersecurity skeleton key cum digital WMD that can break into "in every major operating system and every major web browser when directed by a user to do so." In tests, Anthropic said Mythos was even able to break out of its sandbox computing environment and then use an exploit to gain access to the internet so it could message a researcher about its accomplishment, which it did. Whether the Mythos's formidable reputation is warranted, it's put world governments on watch; leaders from the European Union, which does not have access to the model, have met with Anthropic at least three times since Mythos was released, the New York Times reported, while the UK's AI minister felt compelled to address its capabilities by vowing the country would take steps to protect "critical national infrastructure."
[21]
Anthropic investigating possible breach of its Mythos AI model
Mary Cunningham is a reporter for CBS MoneyWatch. She previously worked at "60 Minutes," CBSNews.com and CBS News 24/7 as part of the CBS News Associate Program. Anthropic is investigating a possible breach of Mythos, a new model the artificial intelligence company rolled out to a small pool of companies earlier this month to help them detect software vulnerabilities. The AI company behind the chatbot Claude is looking into a report of unauthorized access to Mythos from one of its third-party vendor environments, an Anthropic spokesperson told CBS News in an email. Anthropic works with a small number of third-party vendors to develop its AI models. So far, the company has not detected any breaches outside of its vendor environment or any compromises to the Anthropic systems. Anthropic confirmed its investigation into the possible Mythos breach on Wednesday, a day after Bloomberg reported that a small group of unauthorized users had gained access to the tool, citing a person familiar with the matter. Anthropic released Mythos to a limited group in April as part of an effort called Project Glasswing, billing the new model as more effective than competing AI systems at detecting software vulnerabilities. At the time, Anthropic only shared the tool with a small group of major companies, including Amazon, Apple, Cisco, JPMorgan Chase and Nvidia, amid concerns that the new model could be exploited by hackers. The goal was to help these companies harden their defenses before bad actors can gain access to Mythos or similar AI models. Federal officials, security experts and leaders at global institutions like the International Monetary Fund have all raised concerns about what might happen if Mythos falls into the wrong hands. While Project Glasswing is intended to help companies insulate themselves from cybersecurity threats, some experts are concerned that Mythos could also be used to exploit IT infrastructure at banks, hospitals, government systems and other organizations. "We need to prepare ourselves, because we couldn't keep up with the bad guys when it was humans hacking into our networks," Alissa Valentina Knight, CEO of cybersecurity AI company Assail, previously told CBS News." We certainly can't keep up now if they're using AI because it's so much devastatingly faster and more capable.
[22]
Unauthorised users hack Anthropic's 'too dangerous to release' AI
Hackers have gained access to Anthropic's Mythos, an AI the company considers too cybersecurity-sensitive to release publicly. A group of unauthorised users reportedly gained access to Anthropic's new product, which the artificial intelligence company says is too powerful to release to the public as it "poses unprecedented cybersecurity risks". Anthropic's new AI technology, Mythos, is designed for enterprise security and is being tested by a few technology and cybersecurity firms. A "private online forum" has managed to gain access to Mythos through a third-party vendor, according to Bloomberg. The company said it was investigating the Bloomberg report, an Anthropic spokesperson told TechCrunch, adding that there was so far no evidence that the reported activity had impacted Anthropic's systems. Members of the unauthorised group are part of a Discord channel that seeks out information about unreleased AI models, Bloomberg reported. Citing a person employed by a third-party contractor that works for Anthropic, Bloomberg added that the group tried several strategies to gain access to the model. The outlet also reported that the unauthorised group had been regularly using Mythos once it gained access. Euronews Next has reached out to Anthropic for comment but did not receive a reply at the time of publication. Anthropic said it would limit the release of its new AI model to a few tech and cybersecurity firms as part of its so-called Project Glasswing. The list includes Amazon, Apple and JP Morgan Chase. Goldman Sachs, Citigroup, Bank of America and Morgan Stanley are reportedly testing the Anthropic model too, according to reports. Treasury Secretary Scott Bessent convened a meeting of senior American bankers in Washington in April to discuss the Mythos model. The meeting encouraged the banking executive to use Antropic's Mythos model to detect vulnerabilities, according to Bloomberg. Goldman Sachs, Citigroup, Bank of America and Morgan Stanley are reportedly testing the Anthropic model too, according to reports.
[23]
Anthropic probes unauthorized access to Mythos AI model
San Francisco (United States) (AFP) - American AI developer Anthropic said Tuesday it was investigating unauthorized access to Mythos, its powerful model which the company itself worries could be a boon for hackers. Anthropic said earlier this month it restricted the release of Mythos to 40 major tech firms to give them a head start in fixing cybersecurity vulnerabilities before they could be exploited by attackers. According to Bloomberg, which first reported the probe, a small group of users in a private, online forum gained access to the model via the computer system reserved for Anthropic's external vendors. "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," an Anthropic spokesperson told AFP. The users got hold of Mythos by various means, including using access one of them had as a worker at a contractor for Anthropic, Bloomberg reported. Anthropic works with a small number of third-party vendors who help with model development. The firm has delayed a general release of Mythos, which it says can spot undiscovered security holes that have existed for decades, in systems tested by both human experts and automated tools. It shared Mythos first with a few dozen key US tech and financial services players -- such as Nvidia, Amazon and JP Morgan Chase -- to allow them to improve their security infrastructure. But the company has also been accused of overhyping the powers of a technology which is its stock in trade, and the subject of fierce competition with rival OpenAI.
[24]
What is Mythos AI and why could it be a threat to global cybersecurity?
Anthropic's decision to restrict access to its powerful new model increases fears about the advanced technology Anthropic has ruled out releasing its latest AI model, Mythos, to the public because of the threat it poses to global cybersecurity. However, the US tech startup behind the Claude chatbot confirmed on Wednesday it was investigating a report that a group of people had gained unauthorised access to Mythos. The alleged incident has raised concerns over the pace of development and the ability of tech companies to keep their riskiest products out of the public domain. Here, we examine Mythos and its potential impact.
[25]
Anthropic investigates unauthorized access to restricted Claude Mythos AI model - SiliconANGLE
Anthropic investigates unauthorized access to restricted Claude Mythos AI model Anthropic PBC is investigating a report that unauthorized users accessed Claude Mythos, the next-level artificial intelligence model the company says is powerful enough to enable dangerous cyberattacks. A small group of users in a private online forum gained access to Mythos on the same day Anthropic announced a limited testing release of the model, Bloomberg first reported Tuesday, citing a person familiar with the matter and documentation it had viewed. The group has been using the model regularly since, though not for cybersecurity purposes, the person said. The account was corroborated with screenshots and a live demonstration. "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," an Anthropic spokesperson said. The company said there is no indication the activity extended beyond the vendor or that its own systems were affected. The users reportedly gained entry through the credentials of a member of the forum who works for a third-party contractor that evaluates Anthropic models. The group combined those credentials with details from a data breach at artificial intelligence recruiting and training startup Mercor Inc. to locate the model. Bloomberg's source also claimed that the group has access to other unreleased Anthropic models. Anthropic has previously described Mythos as having a level of coding ability that can "surpass all but the most skilled humans at finding and exploiting software vulnerabilities." The company has restricted distribution to Project Glasswing, with a preview version that has been offered to Apple Inc., Amazon.com Inc., Cisco Systems Inc., CrowdStrike Holdings Inc., Google LLC, JPMorgan Chase & Co., Microsoft Corp. and Nvidia Corp., along with about 40 other organizations, so they can test and secure their own systems. Access to the model has also become a point of contention across the U.S. government. The National Security Agency and the Commerce Department's Center for AI Standards and Innovation already have access, according to reports and the Treasury Department is seeking it. The group using Mythos has so far avoided offensive tasks, reportedly to evade detection. Discussing the reports, Ram Varadarajan, chief executive officer at cyber deception technology company Acalvio Technologies Inc., told SiliconANGLE via email that "the Mythos breach didn't require a sophisticated attack." "It just required a contractor, a URL pattern and a Day-One guess, which means the 'controlled release' model failed at its weakest link before the model's capabilities were ever the issue," explains Varadarajan. "This is the supply chain problem that perimeter-centric security has always underestimated: access controls are a policy, not an architecture and policies fail." Tim Mackey, head of software supply chain risk strategy at application security firm Black Duck Software Inc., noted that "Anthropic's marketing message for Mythos was effectively a challenge, not dissimilar to a capture the flag exercise, where success includes claims of unauthorized access to Mythos." "The unfortunate reality is that while it's great to hear that novel cybersecurity models are being provided to select researchers to evaluate, if your team is on the outside looking in, waiting for the final report might not be top of mind," said Mackey. "For defenders, even the specter of unauthorized access to an adversarial model as powerful as Mythos is purported to be only increases anxiety levels." "What's clear is that security leaders in organizations of all sizes should take this claim as a call to action focused on the role AI-enabled cybersecurity plays in their operations and how best to scale those efforts to deal with AI-enabled adversaries," added Mackey.
[26]
Anthropic probing reported Mythos leak on Discord
Bloomberg reports that users gained access to Mythos the same day Anthropic announced its limited release. A private Discord group has reportedly gained access to Anthropic's powerful new AI model Mythos, raising sharp concerns around the company's ability to keep the model on a short leash. Mythos, unveiled in a limited launch earlier this month, vastly outperforms other AI models in vulnerability detection and exploitation. Anthropic has only given access to the model to a closed but growing group of companies and financial institutions, including Apple, Google, Microsoft, Nvidia and JP Morgan Chase to bolster their cybersecurity. UK financial institutions are set to start using Mythos this week, while Japan and Canada are in discussions with its biggest banks. Bank of Ireland told SiliconRepublic.com that it is keeping the matter under review. Last week, National Cyber Security Centre's director Richard Browne told an Oireachtas Joint Committee that the technology would be in the hands of bad actors within months. However, a source has now told Bloomberg that a handful of users gained access to Mythos weeks earlier, on the same day Anthropic announced its limited launch. The group has been using Mythos regularly since, but hasn't for malicious purposes, the source added. "We're investigating a report claiming unauthorised access to Claude Mythos Preview through one of our third-party vendor environments," Anthropic told news publications in a statement. Mythos sent shocks through the industry, which is scrambling to bolster its security systems in light of the powerful AI model. Soon after its launch, US authorities told Wall Street leaders to take the matter seriously. Although not all have taken an equally serious approach, with Deutsche Bank commenting that Germany's financial institutions are well-prepared for cyber risks posed by the model. "Naturally everyone is trying to get access, but I think it's entirely appropriate that this access remains restricted for the time being," said Deutsche Bank CEO Christian Sewing earlier this week. Don't miss out on the knowledge you need to succeed. Sign up for the Daily Brief, Silicon Republic's digest of need-to-know sci-tech news.
[27]
A group of users leaked Anthropic's AI model Mythos by reportedly guessing where it was located | Fortune
The AI model that Anthropic billed as too dangerous to release has reportedly been accessed by an unauthorized third party, and the incident raises concerns about the future of cybersecurity. The Mythos model was reportedly accessed by a handful of users in a private Discord chat on the day it was announced publicly, Bloomberg reported. Earlier this month, the group was able to access the program in part because one of the members of the group is a third party contractor for Anthropic, according to Bloomberg. Using this access, the group was able to guess where the model was located based on previously leaked knowledge by another group about Anthropic's past practices, that hackers obtained from AI training startup Mercor. Although the group that accessed it has not been using the model for cyberattacks, it has been using the program continuously since its release and still has access, the outlet reported. Anthropic did not immediately respond to Fortune's request for comment. A spokesperson from Anthropic told Bloomberg the company was "investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments." The fact that the model was leaked so quickly doesn't surprise David Lindner, the chief information security officer at Contrast Security and a 25-year industry veteran. Even though Anthropic intentionally limited the model to a small group of 40 companies -- including Microsoft, Apple, and Google -- to beef up their security ahead of a wider release, thousands of people likely had access to the program across these companies, which makes a leak nearly inevitable, he said. "It was bound to happen," Lindner said. "The more they add to this elite group, the more likely it was to get released to someone who shouldn't probably have access to it." Anthropic claims its Mythos model is more adept at finding cybersecurity vulnerabilities than previous versions. The company was able to use the program, which has not been widely released, to find a 27-year-old security vulnerability in OpenBSD, an operating system known for its security. Mozilla on Tuesday also said it used a preview of the model to identify and patch 271 vulnerabilities in its Firefox web browser. And yet, Mythos' release has been plagued by security breaches from the start. Fortune was the first to report on the model's existence thanks to a security lapse that exposed details about the large language model in a publicly accessible database. For Lindner, this most recent unauthorized access shows it's likely U.S. adversaries already have access to this tech which could put U.S. companies and other systems at risk of attacks. "If some group -- some random Discord online forum, got access to it. it's already been breached by China," Lindner told Fortune. Although Lindner is still unsure how much of Mythos' supposed danger is real or just marketing hype -- OpenAI's Sam Altman this week called Anthropic's promotion of Mythos "fear-based marketing" -- it's clear cybersecurity professionals, or defenders, need to be ready for a new world of AI attacks. "The real thing is there's a real compression of timelines here for defenders," he said. AI is unique in its abilities to execute cyberattacks because it never gets tired, said Lindner. It can relentlessly tackle a weak spot in a company's security system, whereas a human may eventually give up. It also empowers less experienced developers to commit cyberattacks partly by drawing on the myriad documentation available on the web about previous exploits and using it to inform an AI model and adjust its attacks for specific situations. "It's the folks that have some sort of [developer] background or some sort of technical background that may have had some limitations in the past of getting over things or taking too long to do stuff, it makes this stuff way easier now," he said. Lindner said the fact that the program was reportedly accessed by third-party contractors means that, even more than before, companies need to limit who has access to its most vital systems. The rapid rise of AI as a tool for cyberattacks could disproportionately affect smaller companies, who may not be able to keep up with the increasing complexity of AI-fueled attacks, said Lindner. Those that refuse to even touch AI and continue on as before are even more at risk, he said. "AI is not a golden ticket, but if you're not taking advantage of it on the defender side, there is no chance, none, that you are going to be able to keep up with the offensive side," he said.
[28]
Anthropic investigates report of rogue access to hack-enabling Mythos AI
'Handful' of people allegedly gain unauthorised access to model adept at detecting cybersecurity vulnerabilities The AI developer Anthropic has confirmed it is investigating a report that unauthorised users have gained access to its Mythos model, which it has warned poses risks to cybersecurity. The US startup made the statement after Bloomberg reported on Wednesday that a small group of people had accessed the model, which has not been released to the public because of its ability to enable cyber-attacks. "We're investigating a report claiming unauthorised access to Claude Mythos Preview through one of our third-party vendor environments," said Anthropic. Bloomberg said a "handful" of users in a private online forum gained access to Mythos on the same day Anthropic said it was being released to a small number of companies including Apple and Goldman Sachs for testing purposes. It reported that the unnamed users got to Mythos through access that one of them had as a worker at a third-party contractor for Anthropic and by deploying methods used by cybersecurity researchers. The group has not run cybersecurity prompts on the model and is more interested in "playing around" with the technology than causing trouble, according to Bloomberg, which corroborated the claims via screenshots and a live demonstration of the model. Nonetheless, news of the potential breach will alarm authorities who have raised concerns about Mythos's potential to wreak havoc and will raise questions about how potentially damaging technology can be kept out of the wrong hands. Kanishka Narayan, the UK's AI minister, has said UK businesses "should be worried" about the model's ability to spot flaws in IT systems - which hackers could then act upon. The model has been vetted by the world's leading safety authority for the technology, the UK's AI Security Institute (AISI), which warned last week that Mythos was a "step up" over previous models in terms of the cyber threat it posed. AISI said Mythos could carry out attacks that required multiple actions and discover weaknesses in IT systems without human intervention. It said these tasks would normally take human professionals days to carry out. Mythos was the first AI model to successfully complete a 32-step simulation of a cyber-attack created by AISI, solving the challenge in three out of its 10 attempts.
[29]
Former U.S. Cyber Director Sounds the Alarm on Anthropic's 'Too Powerful' AI Model
At the center of those concerns is the model's ability to identify and exploit software vulnerabilities at a level that rivals -- or exceeds -- top human experts. Anthropic itself has framed Mythos as part of a broader shift, noting that "AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities." That capability is already showing up in practice. According to the company, "Mythos Preview has already found thousands of high-severity vulnerabilities, including some in every major operating system and web browser," a scale of discovery that signals both defensive promise and offensive risk. Anthropic added that such tools could soon spread "beyond actors who are committed to deploying them safely." For Kemba Walden, former U.S. National Cyber Director, that dual-use nature is exactly the problem. The technology represents "a leap in defensive AI capabilities," but it also, she warned in an opinion piece for Fortune, introduces "inherent risks that expose vulnerabilities in our critical infrastructure and systems."
[30]
2,000 flaws in 7 weeks? Anthropic's Mythos raises security alarms
Mythos AI cybersecurity risks explained: Anthropic's new AI model, Mythos, is a cybersecurity tool that has found thousands of software flaws. Its speed and ability to find vulnerabilities independently are changing the field. This powerful AI is not yet public due to its potential impact. The model's capabilities raise concerns about data security for individuals and companies alike. Mythos AI cybersecurity risks explained: There's a reason Anthropic isn't rushing to release its new AI model, Mythos, to the public. Built for defensive cybersecurity research, the system proved so effective during testing that access has been limited to a small group of trusted partners like Microsoft and Google. That decision alone signals caution, and once you see what Mythos actually uncovered, it starts to feel justified. In just seven weeks, Mythos identified more than 2,000 previously unknown software vulnerabilities, as per a report. That number isn't just large, it changes the scale of what's possible. According to Virtru CEO John Ackerly, that single effort represents roughly 30% of the world's annual output of discovered zero-day vulnerabilities before AI entered the picture, as reported by Fox News, citing CyberGuy Report. What stands out isn't just the volume, but the speed and autonomy. Mythos isn't simply assisting researchers, it can independently discover vulnerabilities and even generate working exploits far faster than traditional human-led processes. Tasks that once took weeks can now be compressed into hours, or even minutes. That shift is forcing a rethink across cybersecurity. For decades, the industry has relied heavily on perimeter defense, building strong digital walls through firewalls, endpoint security, and monitoring systems. But as Ackerly points out, that model is starting to break down. When vulnerabilities can be discovered and exploited at this scale, simply strengthening the outer defenses may no longer be enough. Another concern is accessibility. Previously, exploiting complex vulnerabilities required deep technical expertise. Tools like Mythos could lower that barrier significantly, potentially allowing individuals with limited knowledge to identify and exploit weaknesses, as per the CyberGuy Report. The gap that once offered some level of protection is narrowing. Even the playing field is shifting. Both defenders and attackers may eventually have access to similar AI capabilities. While that creates balance in theory, the reality is uneven, attackers only need to succeed once, while defenders must succeed every time, as per the report. The impact extends beyond companies. Personal data, bank accounts, medical records, emails, all exists within the same systems now being tested by tools like Mythos. If those systems become easier to probe, the risk to individuals increases as well. Ackerly warns that breaches and scams could become more frequent, more targeted, and harder to detect, as per the CyberGuy Report. Anthropic's decision to hold back Mythos from a wider release stands out in an industry that typically moves fast to deploy new technology. Calling it "unprecedented," Ackerly described the move as responsible, especially given the potential risks tied to widespread access, as per the report. Mythos hasn't created new vulnerabilities, it has exposed how many already exist, and how quickly they can be found. That visibility is what's raising alarms. The long-standing belief that strong enough defenses can protect data is now being tested by a system that operates at machine speed. What is Anthropic's Mythos AI model? It is an AI model built for defensive cybersecurity research. Why isn't Mythos publicly available? Because its capabilities were considered too powerful for wide release.
[31]
Anthropic Investigates 'Unauthorized Access' to Its Ultra-Powerful Claude Mythos AI Model
A third party has reportedly already gained access to Anthropic's guarded Claude Mythos model. Claude Mythos is an advanced AI model developed by Anthropic that the company says has a level of coding ability that can "surpass all but the most skilled humans at finding and exploiting software vulnerabilities." Anthropic has been adamant about keeping Mythos under lock and key because of its purported abilities. Even so, the unauthorized access reportedly occurred in early April, on the same day that Anthropic announced its plans to grant a small group of users a preview version of Mythos for testing. Bloomberg first reported the breach. "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," an Anthropic spokesperson said. According to Bloomberg, several users in a private Discord channel have been regularly using Mythos since Anthropic announced its plan, although they reportedly have not been using it for cybersecurity purposes. The group reportedly accessed the model through an individual who works for a third-party contractor that evaluates Anthropic's models and software. The group also reportedly used some common cybersecurity research techniques, and leveraged details released in the breach of AI recruiting and training startup Mercor to make some guesses about where the model could be found. The individual told Bloomberg that the group also has access to other unreleased models.
[32]
Another Anthropic blunder? Did hackers get into its secret Mythos AI system -- raising fresh fears about deeper security gaps and powerful cyberattack risks worldwide
Another Anthropic blunder? Reports say a small unauthorized group accessed the secret Mythos AI cybersecurity tool. That tool is built to detect vulnerabilities and simulate advanced cyberattacks. Access was not meant for public use. It was restricted and tightly controlled. Early findings point to a third-party vendor route, not a full system breach. No confirmed attacks yet. Still, the risk is serious. AI cybersecurity threats are rising fast. Even limited access can expose critical weaknesses. This incident raises big questions. Are AI safety controls strong enough? And can powerful cyber tools stay contained as global AI risks grow?
[33]
Uninvited Users Access Anthropic's Mythos AI Model | PYMNTS.com
By completing this form, you agree to receive marketing communications from PYMNTS and to the sharing of your information with our sponsor, if applicable, in accordance with our Privacy Policy and Terms and Conditions. That's according to a report Wednesday (April 22) by Bloomberg News, which notes that the company has described Mythos as powerful enough to initiate serious cyberattacks. The report cites a source familiar with the situation, who said a small number of users in a private online forum gained access to Mythos just as Anthropic was announcing plans to release the model to a handful of companies for testing. Since then, this group has continued to use Mythos, but not for cybersecurity reasons, the source said, backing their statement with screenshots and a live demonstration of the model. PYMNTS has reached out to Anthropic for comment but has not yet gotten a reply. Anthropic, the Bloomberg report continued, has said Mythos can spot and exploit vulnerabilities "in every major operating system and every major web browser when directed by a user to do so." With that in mind, the company has been cautious about releasing the model, giving it to a small group of software companies as part of a program known as Project Glasswing. The goal is to allow those companies to test and safeguard their own systems from possible cyberthreats. Bloomberg argues the unpermitted access to Mythos underscores the challenge facing Anthropic in preventing potentially dangerous tech from making its way beyond the company's approved partners. Mythos has sparked cybersecurity concerns among regulators around the world, including from a group of central banks in the Asia-Pacific region earlier this week, following similar warnings in Europe, Great Britain and the U.S. As PYMNTS wrote last week, statements like these demonstrate the "split-screen reality" around Anthropic following Mythos' release. "The company is gaining traction fast in the enterprise market even as regulators and banks scramble to understand the risks that come with more powerful AI tools," that report said. In a separate report, PYMNTS wrote about a recent evaluation of Mythos by the U.K. Government's AI Security Institute (AISI). The most important takeaway from those findings, the report said, is not that AI can already carry out flawless cyberattacks. In fact, the AISI report noted that the success rate is limited. "But systems that can plan and execute multistage intrusions, even inconsistently, represent a baseline that will improve," PYMNTS added. "More compute, better orchestration, and tighter integration with external tools will incrementally close the gap between partial and reliable capability."
[34]
Mythos AI accessed without approval via third-party vendor route
What Anthropic Has Said: Anthropic confirmed it is looking into the incident. "We're investigating a report claiming unauthorised access to Claude Mythos Preview through one of our third-party vendor environments," a company spokesperson said. The firm added that it has "no evidence" so far that the breach has affected its core systems or extended beyond a vendor environment. The access appears to have happened on the same day Anthropic announced limited testing of Mythos under its Project Glasswing initiative. The model is being selectively shared with companies to help them identify and fix vulnerabilities in their own systems. Anthropic has said Mythos can identify and exploit weaknesses "in every major operating system and every major web browser when directed by a user to do so." How the Access Happened: According to the Bloomberg report, the users gained entry through a mix of methods. One individual had legitimate access through contract work linked to a third-party vendor connected to Anthropic. This access was combined with basic online investigation techniques, including scanning publicly available information and unsecured code repositories to locate the model's endpoint. Details exposed in a separate data breach at an AI startup, Mercor, may have also helped them guess the system's location. The group, operating through a private Discord channel focused on tracking unreleased AI models, has reportedly been using Mythos since gaining access. However, there is no evidence that they used it for cybersecurity exploits. Instead, they ran low-risk tasks like building simple websites, likely to avoid detection. Anthropic has officially granted access to a limited set of organisations, including Apple Inc., Amazon, and Cisco Systems, while Amazon is also offering the model through its Bedrock platform to approved users. At the same time, financial institutions and government agencies in the US and Europe are seeking early access to test their defenses. The incident highlights a key challenge: even tightly controlled releases of advanced AI systems can leak through indirect access points such as vendors, exposed data, or predictable infrastructure patterns. It also leaves open questions about whether others may have accessed the model without authorisation and how such risks can be contained.
[35]
Outsiders breached Anthropic's Claude Mythos same day 'potentially dangerous' AI model was revealed: report
A handful of users managed to gain unauthorized access to Anthropic's Claude Mythos - the model the company claims to be so dangerous that it would cause a wave of devastating cyberattacks if made available to the public. The breach occurred on April 8 - the same day that Anthropic and its CEO Dario Amodei revealed that Mythos was only available to about 40 handpicked corporate clients as part of "Project Glasswing." Anthropic said Mythos had found major cybersecurity flaws in "every major operating system and web browser" during internal testing. The unauthorized users belong to a private online forum dedicated to cracking unreleased AI models on Discord, a popular messaging app. Since gaining access, they have been using Mythos "regularly" but not for cybersecurity purposes, according to Bloomberg, which obtained screenshots and was shown a live demonstration of the users accessing the model. The sleuths broke into Mythos through a variety of tactics, including by guessing the model's online address based on the naming conventions Anthropic has used in previous model releases, the report said. One of the unauthorized users reportedly had some level of access to Anthropic's systems due to working as a third-party contractor for the firm. "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," an Anthropic spokesperson said in a statement. The company added that it has no evidence the group's unauthorized access had expanded beyond the third-party vendor's environment or impacted any of its other systems. One person in the Discord group - members of which were not named - told Bloomberg that they want to test new models rather than use them to cause chaos. Still, the incident raises concerns about the extent of Anthropic's ability to maintain oversight of a tool that they claim could be used to wreak havoc on critical infrastructure like electric grids, power plants and hospitals if it fell into the wrong hands. Earlier this month, AI safety researcher Roman Yampolskiy told The Post that some "leakage" of the model was inevitable despite Anthropic's attempts to restrict access. Anthropic said it shared Mythos with corporate partners -- including Amazon, Google, Apple, Nvidia, CrowdStrike and JPMorgan Chase -- so they could plug their own cybersecurity vulnerabilities. Prior to the rollout, Mythos broke out of a secure "sandbox" meant to restrict internet access - with a researcher only finding out "by receiving an unexpected email from the model while eating a sandwich in a park." Anthropic described the much-publicized incident as "demonstrating a potentially dangerous capability for circumventing our safeguards." Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell recently held a closed-door meeting in which they urged top bank officials to ensure their systems were ready for the risks purportedly posed by Mythos.
[36]
Mythos Breach Shows Need for AI Framework, Magaziner
A small group of unauthorized users have accessed Anthropic PBC's new Mythos AI model, a technology that the company says is so powerful it can enable dangerous cyberattacks, according to a person familiar with the matter and documentation viewed by Bloomberg News. A handful of users in a private online forum gained access to Mythos on the same day that Anthropic first announced a plan to release the model to a limited number of companies for testing purposes, said the person, who asked not to be named for fear of reprisal. The group has been using Mythos regularly since then, though not for cybersecurity purposes, said the person.
[37]
Anthropic investigates alleged unauthorised access to its Mythos AI model: Here is what happened
The group has been using Mythos since the day it was publicly announced. Anthropic recently introduced a cybersecurity-focused AI model called Mythos, describing it as a powerful tool designed to help enterprises detect and respond to digital threats. However, the company is now investigating reports that an unauthorised group has gained access to the system through a third-party environment. According to a report by Bloomberg, a private online forum managed to access Mythos shortly after it was announced. The members of this group have not been publicly identified, but they reportedly accessed the model through a third-party vendor that works with Anthropic. The company is looking into the matter. 'We're investigating a report claiming unauthorised access to Claude Mythos Preview through one of our third-party vendor environments,' an Anthropic spokesperson told TechCrunch. The company also claimed that it has not found any evidence so far that the alleged activity has affected its systems. Also read: OpenAI CEO Sam Altman takes dig at Anthropic Mythos AI, calls it fear-based marketing The report claims that members of the group attempted different methods to get access with the model, including using access privileges associated with a person, who is said to work for a contractor partnered with Anthropic. Bloomberg also reported that the members of the group are part of a Discord channel focused on discovering and experimenting with unreleased AI models. According to the outlet, the group has been using Mythos since the day it was publicly announced and even shared screenshots and a live demonstration of the model to verify their claims. The group reportedly guessed the location of the model online based on Anthropic's previous patterns. Also read: ChatGPT Images 2.0 is here with improved photorealism, better Hindi text rendering and more Mythos was originally released to a limited group of vendors as part of Anthropic's Project Glasswing project. The restricted rollout included major partners such as Apple and was intended to ensure that the powerful cybersecurity tool did not fall into the wrong hands. Also read: 'Legend': Sam Altman and other leaders react as Tim Cook steps down as Apple CEO
Share
Copy Link
Anthropic is investigating reports that unauthorized users gained access to Mythos, its restricted cybersecurity AI model, through a third-party vendor on the day of its announcement. The incident highlights vulnerabilities in controlled AI releases and raises questions about the model's actual capabilities versus the company's dramatic safety messaging.
Anthropic is investigating a security breach involving Mythos, its highly restricted cybersecurity AI model designed exclusively for enterprise security applications. According to Bloomberg, a group of unauthorized users gained access to the AI model through a third-party vendor environment on the same day Anthropic publicly announced Project Glasswing, the controlled release program
1
. The company confirmed it is examining claims of unauthorized access but stated that no evidence suggests the activity impacted Anthropic's core systems4
.
Source: Market Screener
An Anthropic spokesperson told media outlets: "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments"
5
. The incident represents an awkward turn for a company that has built its brand on taking AI safety seriously while positioning Mythos as too dangerous for public release.The unauthorized group accessed Mythos through a combination of insider knowledge and predictable security patterns rather than sophisticated technical exploits. Members of a private Discord channel made "an educated guess about the model's online location based on knowledge about the format Anthropic has used for other models," Bloomberg reported
1
. The group also leveraged access through a contractor who works for a third-party vendor evaluating Anthropic models.
Source: New York Post
Details about Anthropic's model hosting patterns were reportedly exposed in the recent Mercor data breach. Mercor, an AI staffing startup that supplies specialized contractors to major AI labs including Anthropic, was affected by a LiteLLM supply-chain attack earlier this month
5
. Security researcher Lukasz Olejnik described this type of failure as "entirely imaginable" and something the cybersecurity industry has routinely dealt with for the last 20 years3
.The group provided evidence to Bloomberg in the form of screenshots and a live demonstration of the software, claiming they were "interested in playing around with new models, not wreaking havoc with them"
1
.Mythos was released to a select number of vendors through Project Glasswing, including major technology companies like Apple, Amazon Web Services, Google, Microsoft, NVIDIA, and financial institutions like JPMorganChase
2
. The limited release aimed to prevent weaponization by bad actors while allowing organizations to identify cybersecurity vulnerabilities in their own systems before criminals could exploit them.
Source: Mashable
According to Anthropic, Mythos can discover flaws in virtually any software and allegedly found "thousands of high- and critical-severity vulnerabilities" in operating systems and other software
2
. The company warned that "the fallout -- for economies, public safety, and national security -- could be severe" if the model fell into the wrong hands.Mozilla reported that Mythos helped its team find 271 vulnerabilities in Firefox 150, though Mozilla CTO Bobby Holley noted that "we also haven't seen any bugs that couldn't have been found by an elite human researcher"
2
. This assessment suggests the AI model excels at speed and thoroughness rather than discovering entirely novel vulnerability types.Related Stories
The breach highlights critical vulnerabilities in controlled AI releases and supply-chain security. Security researcher Pia Hüsch from the Royal United Services Institute noted that while no company is ever completely secure and humans are often the weakest link, the incident "really illustrates how wide the circle of people who may be able to do this is, even if they don't have super technically sophisticated means"
3
.Ram Varadarajan, CEO at deception-tech firm Acalvio, observed: "The Mythos breach didn't require a sophisticated attack. It just required a contractor, a URL pattern, and a day-one guess, which means the 'controlled release' model failed at its weakest link before the model's capabilities were ever the issue"
5
.The fact that the breach was uncovered by a reporter rather than Anthropic raises questions about the company's monitoring capabilities. Security experts note that Anthropic should have the means to log and track model use, which should make it possible to stop unauthorized or malicious access, especially for a highly limited rollout
3
.Early analysis from organizations with authorized access suggests Mythos may not be as revolutionary as Anthropic's messaging implied. Alan Woodward at the University of Surrey explained: "The AI is not necessarily capable of finding vulnerabilities that a human wouldn't, but it's just so much faster, thorough and relentless. Hence it's finding vulnerabilities that humans have missed"
2
.The UK's AI Security Institute tested Mythos and found it capable of attacking only "small, weakly defended and vulnerable enterprise systems" with no indication that truly secure software or networks would be at significant risk, though the institute warned these capabilities are improving rapidly
2
.Security expert Davi Ottenheimer characterized the situation as "a legitimate technological capability, reframed as civilisational threat, by a party that benefits from the reframing"
2
. Tim Mackey from Black Duck noted that "Anthropic's marketing message for Mythos was effectively a challenge, not dissimilar to a capture-the-flag exercise, where success includes claims of unauthorized access to Mythos"5
.The incident underscores broader challenges in balancing AI development transparency with security concerns, particularly as organizations navigate vulnerability research in an era where AI models accelerate both offensive and defensive capabilities.
Summarized by
Navi
[1]
[2]
[3]
[5]
30 Apr 2026•Technology

14 Apr 2026•Technology

15 Apr 2026•Policy and Regulation

1
Technology

2
Policy and Regulation

3
Policy and Regulation
