OpenAI releases GPT-5.5-Cyber to vetted cybersecurity teams, a month after Anthropic's Mythos


OpenAI announced GPT-5.5-Cyber, a specialized AI model for cybersecurity teams to identify vulnerabilities and analyze malware. The limited preview follows Anthropic's Mythos release, with both models showing advanced capabilities in finding software bugs. OpenAI's approach offers broader access through its Trusted Access for Cyber program compared to Anthropic's restrictive rollout.

OpenAI Launches GPT-5.5-Cyber for Vetted Security Teams

OpenAI on Thursday announced a limited preview of GPT-5.5-Cyber, a specialized variation of its latest AI model designed specifically for vetted cybersecurity teams [1]. The announcement comes roughly one month after rival Anthropic captivated investors and government officials with its Claude Mythos Preview model. OpenAI is making the model available to defenders responsible for securing critical infrastructure through its Trusted Access for Cyber program [2].

Source: Axios

Reduced Safeguards Enable Advanced Security Workflows

The GPT-5.5-Cyber preview is not intended as a major step up in raw cyber capability; rather, the model is trained to be more permissive on security-related tasks than the generally available GPT-5.5 [1]. With the cyber-specific version, vetted teams will have an easier time using OpenAI's latest model for workflows like vulnerability identification and triage, patch validation, and malware analysis, tasks that the safeguards built into the generally available model would have made more difficult. Cyber defenders will be able to use it to hunt for bugs, study malware, and reverse engineer attacks, though they will still be blocked from certain tasks such as credential theft and writing malware [2].

Performance Rivals Anthropic's Mythos in Security Testing

Recent security testing suggests that the GPT-5.5 model is nearly as good at finding and exploiting software bugs as Anthropic's Mythos Preview [2]. A source familiar with GPT-5.5-Cyber's abilities told Axios that the two models were roughly on par, with one major recent test putting Mythos narrowly ahead. The U.K. AI Security Institute said last week that GPT-5.5 completed a 32-step simulated corporate cyberattack in 2 out of 10 test runs, while Mythos did so in 3 out of 10. Before Mythos, no AI model had ever completed that test, highlighting how quickly AI capabilities for identifying software vulnerabilities are advancing.

Contrasting Approaches to Responsible Deployment of Advanced AI

OpenAI and Anthropic are pursuing different approaches to rolling out their cyber-capable models as both try to keep the technology out of the hands of malicious actors and adversarial governments [2]. Anthropic has taken the more restrictive path, granting approximately 40 organizations access to Mythos through its new cybersecurity initiative, Project Glasswing, whose members share information about how they are testing the model. Anthropic CEO Dario Amodei met with senior members of the Trump administration to discuss the model and its potential power [1]. OpenAI is taking a more open approach: it releases one version of its advanced models with stricter guardrails while offering a version with fewer safeguards to companies that apply for access through its Trusted Access for Cyber program.

White House Weighs Federal Oversight of AI Security Models

The capabilities of these new models have sparked an urgent debate in Silicon Valley and the White House about how to keep them out of the hands of bad actors [2]. Federal Reserve Chairman Jerome Powell and Treasury Secretary Scott Bessent met with major U.S. bank CEOs to discuss Mythos last month, and Vice President JD Vance and Bessent held a call with leading tech CEOs ahead of the model's release [1]. The White House is actively discussing a slate of executive actions that could change how the federal government is involved in future model rollouts, signaling potential regulatory frameworks for the responsible deployment of advanced AI systems capable of identifying and exploiting security flaws. Organizations and cyber defenders should watch these policy discussions closely, as they could reshape access to powerful AI security tools in the coming months.
