Anthropic Mythos discovers 271 software vulnerabilities in Firefox 150 before release

Reviewed by Nidhi Govil


Mozilla's Firefox 150 includes fixes for 271 security flaws discovered by Anthropic Mythos Preview, a new AI model with advanced bug-finding capabilities. The AI identified vulnerabilities by analyzing unreleased source code, a task that would have required months of human effort. Firefox CTO Bobby Holley says defenders now have a decisive advantage in cybersecurity.

Mozilla Leverages Anthropic Mythos to Uncover Record Security Flaws

Mozilla revealed that early access to Anthropic Mythos Preview helped pre-identify 271 software vulnerabilities in Firefox 150, released this week [1]. The discovery represents a significant leap in AI bug-finding capabilities, with Firefox CTO Bobby Holley declaring that "defenders finally have a chance to win, decisively" in the ongoing battle between cyberattackers and defenders [1]. The AI model analyzed unreleased source code to detect critical software vulnerabilities that would have required months of concentrated human effort to uncover through traditional methods.

Source: TweakTown

The scale of this achievement becomes clear when compared to previous efforts. Anthropic's Opus 4.6 model found only 22 security-sensitive bugs when analyzing Firefox 148 last month [1]. While these vulnerabilities could theoretically be discovered through automated fuzzing or by elite security researchers reasoning through complex source code, using Anthropic Mythos eliminated the need to concentrate costly human resources on finding individual bugs [1].

AI in Cybersecurity Shifts the Balance Toward Defenders

Holley emphasized that AI-aided security analysis fundamentally changes the economics of vulnerability discovery. When automated vulnerability hunting becomes cheaper for both attackers and defenders, defenders benefit because they can proactively patch security bugs before exploitation [1]. "Computers were completely incapable of doing this a few months ago, and now they excel at it," Holley wrote, adding that Mythos Preview is "every bit as capable" as the world's best security researchers [1].

Source: Hacker News

In an interview with Wired, Holley stated that every piece of software will need to engage with this type of semantic bug detection "because every piece of software has a lot of bugs buried underneath the surface that are now discoverable" [3]. He expressed confidence that Firefox has "rounded the curve" on this transition, even as future models may become more advanced [3].

Project Glasswing Brings AI Security to Major Tech Players

Anthropic launched Project Glasswing to help thwart AI-driven cyberattacks, with launch partners including Amazon Web Services, Apple, Google, Microsoft, and Nvidia [2]. These companies will use Mythos Preview to scan and secure their software. The model has identified thousands of high- and critical-severity vulnerabilities across major operating systems and web browsers, despite not being explicitly trained for this purpose [2].

Among the discoveries are a 27-year-old bug in OpenBSD enabling remote attackers to crash machines, web browser exploits allowing cross-domain data theft, and weaknesses in cryptography libraries that could let hackers decrypt encrypted communications [2]. The model can even chain together separate vulnerabilities to form step-by-step exploits granting root access to the Linux kernel [2].

Open-Source Software Faces Critical Security Moment

The implications for open-source software are particularly significant. Mozilla Firefox and other open-source projects could be especially impacted by zero-day vulnerability discovery capabilities, since their public codebases are easier for AI systems to explore and many rely on volunteer maintenance with wildly insufficient security resources [1]. Mozilla CTO Raffi Krikorian argued in a New York Times essay that "the programmer who gave 20 years of his life to maintain code that runs inside products used by billions of people" should have access to Mythos [1].

Nayan Goel, principal application security engineer at Upgrade, noted that speed and semantics set AI models apart from traditional tools. They can pinpoint vulnerabilities faster than humans, and their ability to reason about code semantics and follow data flows across abstraction layers exceeds the pattern-matching capabilities of static analysis tools [2]. "That's the kind of cross-component reasoning that is structurally beyond what rule-based tools can do," Goel explained [2].
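To make the cross-layer reasoning Goel describes concrete, here is a hypothetical sketch (every function and table name is invented for illustration, not taken from any of the audited projects): attacker-controlled input passes through a parsing layer and a query-building layer before reaching a database sink, so no single function looks suspicious in isolation.

```python
# Hypothetical illustration: a taint path spanning three abstraction layers.
# A tool that pattern-matches each function in isolation sees nothing
# alarming; spotting the flaw requires following attacker-controlled data
# from the parser all the way to the SQL sink.
import sqlite3

def parse_request(raw: str) -> dict:
    """Layer 1: parse a query string; the value is attacker-controlled."""
    key, _, value = raw.partition("=")
    return {key: value}

def build_query(fields: dict) -> str:
    """Layer 2: build SQL by string interpolation -- the actual flaw,
    exploitable only because layer 1 never sanitizes the value."""
    return f"SELECT id FROM users WHERE name = '{fields['name']}'"

def lookup(conn: sqlite3.Connection, raw_request: str) -> list:
    """Layer 3: the sink, where the tainted value reaches execute()."""
    return conn.execute(build_query(parse_request(raw_request))).fetchall()

conn = sqlite3.connect(":memory:")
conn.executescript(
    "CREATE TABLE users (id INTEGER, name TEXT);"
    "INSERT INTO users VALUES (1, 'alice');"
    "INSERT INTO users VALUES (2, 'bob');"
)

print(lookup(conn, "name=alice"))        # benign input: [(1,)]
print(lookup(conn, "name=' OR '1'='1"))  # injection: [(1,), (2,)] -- every row
```

A grep-style rule that flags string concatenation near `execute()` would miss this, because the concatenation and the sink live in different functions; following the data flow end to end is exactly the semantic reasoning being credited to the model.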

Dual-Use AI Concerns and the Rise of Script Kiddies

While Anthropic Mythos offers powerful defensive capabilities, cybersecurity experts warn about dual-use AI risks. The same capabilities that enable vulnerability disclosure can be weaponized for exploit development [4]. Security researchers fear that AI could put advanced hacking skills into the hands of script kiddies, unskilled hackers who previously relied on copying exploits without understanding them [4].

Source: The Verge

"There's a tidal wave coming. You can see it. We can all see it," said Dan Guido, CEO of Trail of Bits

4

. Tim Becker, senior security researcher at Theori, noted that "you can use AI tools and with very minimal human guidance, and in some cases no human guidance, find a zero day in widely used software"

4

.

Anthropic is attempting to prevent misuse by limiting the initial release to critical industry partners and building safeguards into Claude Opus 4.7 to block malicious cybersecurity requests [4]. Security professionals seeking defensive access can apply to the company's Cyber Verification Program.

Challenges in Patch Management and False Positives

Despite their promise, large language models remain prone to generating false positives in red-teaming scenarios, incorrectly flagging bugs as security vulnerabilities or overstating their severity [2]. Jeremy Katz, vice president of code security at Sonar, reported "a drastic uptick in the number of things being reported" to open-source maintainers, many of them real bugs but not actual security vulnerabilities [2]. The volume creates significant triage challenges for volunteers under pressure to provide prompt fixes.

Tools like Claude Code Security and Google's CodeMender conduct adversarial self-review passes, challenging their own results before presenting them, which reduces false positives and adds a verification layer [2]. This additional scrutiny helps maintain the cybersecurity balance while managing the firehose of discovered bugs.
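The self-review pattern described above can be sketched as a simple filter loop. This is a minimal, hypothetical illustration, not the actual implementation of any named tool: `ask_model` is an invented stand-in for a real language-model call, here replaced by a trivial rule so the sketch runs on its own.

```python
# Hypothetical sketch of an adversarial self-review pass: a second "critic"
# step tries to refute each candidate finding before it is reported.
# ask_model() is a placeholder for a real LLM call, not a real API.

def ask_model(prompt: str) -> str:
    # Placeholder critic: here we "refute" any finding whose snippet never
    # involves attacker-controlled (tainted) data. A real critic would
    # re-examine the code and argue against exploitability.
    return "CONFIRMED" if "tainted" in prompt else "REFUTED"

def self_review(findings: list) -> list:
    """Report only findings that survive the adversarial second pass."""
    confirmed = []
    for finding in findings:
        verdict = ask_model(
            f"Argue that this is NOT exploitable: {finding['snippet']}"
        )
        if verdict == "CONFIRMED":
            confirmed.append(finding)
    return confirmed

candidates = [
    {"id": 1, "snippet": "tainted length used in memcpy"},
    {"id": 2, "snippet": "constant-size buffer copy"},  # likely false positive
]
print([f["id"] for f in self_review(candidates)])  # [1]
```

The design trade-off is the one the article names: each finding costs a second model pass, but maintainers downstream receive a far smaller, higher-confidence report queue.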

© 2026 TheOutpost.AI All rights reserved