2 Sources
[1]
OpenAI rolls out new model for cybersecurity teams a month after Anthropic's Mythos debut
OpenAI CEO Sam Altman speaks during the BlackRock Infrastructure Summit on March 11, 2026, in Washington.

OpenAI on Thursday announced that GPT-5.5-Cyber, a variation of its latest artificial intelligence model, is rolling out in a limited preview capacity to vetted cybersecurity teams, a month after rival Anthropic captivated investors and government officials with Claude Mythos Preview. The preview of GPT-5.5-Cyber is not intended to be a major step up in terms of cyber capability, but is instead trained to be more permissive on security-related tasks, OpenAI said in a blog post. OpenAI announced GPT-5.5 late last month. With the cyber-specific version, vetted teams will have an easier time using OpenAI's latest model for workflows like vulnerability identification and triage, patch validation and malware analysis, the company said. The safeguards built into the generally available GPT-5.5 model would have made that more challenging. "GPT‑5.5‑Cyber lets a smaller set of partners study advanced workflows where specialized access behavior may matter," OpenAI said in the blog post.

In rolling out Mythos last month, Anthropic decided to limit access to a select group of companies as part of a new cybersecurity initiative called Project Glasswing. Anthropic CEO Dario Amodei met with senior members of the Trump administration to talk about the model and its potential power, even after the company had been blacklisted by the Pentagon just weeks earlier. Federal Reserve Chairman Jerome Powell and Treasury Secretary Scott Bessent met with major U.S. bank CEOs to discuss Mythos last month, and Vice President JD Vance and Bessent held a call with leading tech CEOs ahead of the model's release.
[2]
OpenAI makes its rival to Anthropic's Mythos more widely available to cyber defenders
Why it matters: Recent security testing suggests that the GPT-5.5 model is nearly as good at finding and exploiting software bugs as Anthropic's Mythos Preview. The capabilities of the new models have sparked an urgent debate in Silicon Valley and the White House about how to keep them out of the hands of bad actors.

Driving the news: OpenAI is opening a limited preview of GPT-5.5-Cyber to vetted cyber defenders who are "responsible for securing critical infrastructure," per a press release.

* A source familiar with GPT-5.5-Cyber's abilities told Axios that they were roughly on par with Mythos. One major recent test put Mythos narrowly ahead.
* Cyber defenders who are vetted and approved for the highest tier of OpenAI's Trusted Access for Cyber program will receive a version of GPT-5.5 that has fewer guardrails than the publicly available model. They will be able to use it to hunt for bugs, study malware and reverse engineer attacks.
* Defenders will still be blocked from certain tasks like credential theft and writing malware, but the new abilities are designed to help them automate popular cybersecurity workflows, OpenAI noted in the press release.

Zoom in: The new GPT-5.5-Cyber model is specifically designed to help defenders write proofs of concept for the bugs they find or run simulations to test their organization's security posture.

* Meanwhile, OpenAI has also made another version of GPT-5.5 available to other members of its Trusted Access for Cyber program that can help with understanding unfamiliar code, mapping affected surfaces or reviewing patches for software flaws.

The big picture: Advanced AI models are getting scary good at finding and exploiting flaws in technology, from operating systems to web browsers.

* The U.K. AI Security Institute said last week that GPT-5.5 was able to complete a 32-step simulated corporate cyberattack in 2 out of 10 test runs. Mythos did the same in 3 out of 10 runs.
* Before Mythos, no AI model had ever successfully completed that test.

Between the lines: OpenAI and Anthropic are pursuing two different approaches to rolling out their cyber-capable models as they both try to keep the technology out of the hands of malicious hackers and adversarial governments.

* Anthropic has taken a more restrictive view, allowing approximately 40 organizations to access Mythos. Some of those companies are also part of the company's new Project Glasswing, where members are trading information about how they're testing the model.
* OpenAI is taking a more open approach. It's releasing one version of its advanced models with stricter guardrails, while also creating a version with fewer safeguards for companies that apply for access.

What to watch: The White House is actively discussing a slate of executive actions that could change how the federal government is involved in future model rollouts.
OpenAI announced GPT-5.5-Cyber, a specialized AI model for cybersecurity teams to identify vulnerabilities and analyze malware. The limited preview follows Anthropic's Mythos release, with both models showing advanced capabilities in finding software bugs. OpenAI's approach offers broader access through its Trusted Access for Cyber program compared to Anthropic's restrictive rollout.
OpenAI on Thursday announced the limited preview of GPT-5.5-Cyber, a specialized variation of its latest AI model designed specifically for vetted cybersecurity teams [1]. The announcement comes approximately one month after rival Anthropic captivated investors and government officials with its Claude Mythos Preview model. Sam Altman's company is making the AI model for cybersecurity teams available to defenders responsible for securing critical infrastructure through its Trusted Access for Cyber program [2].
The preview of GPT-5.5-Cyber is not intended to be a major step up in terms of cyber capability, but is instead trained to be more permissive on security-related tasks compared to the generally available GPT-5.5 model [1]. With the cyber-specific version, vetted teams will have an easier time using OpenAI's latest model for workflows like vulnerability identification and triage, patch validation and malware analysis. The safeguards built into the generally available GPT model would have made these specialized tasks more challenging. Cyber defenders will be able to use it to hunt for bugs, study malware and reverse engineer attacks, though they will still be blocked from certain tasks like credential theft and writing malware [2].

Recent security testing suggests that the GPT-5.5 model is nearly as good at finding and exploiting software bugs as Anthropic's Mythos Preview [2]. A source familiar with GPT-5.5-Cyber's abilities told Axios that they were roughly on par with Mythos, with one major recent test putting Mythos narrowly ahead. The U.K. AI Security Institute said last week that GPT-5.5 was able to complete a 32-step simulated corporate cyberattack in 2 out of 10 test runs, while Mythos did the same in 3 out of 10 runs. Before Mythos, no AI model had ever successfully completed that test, highlighting the rapid advancement in AI capabilities for identifying software vulnerabilities.
OpenAI and Anthropic are pursuing two different approaches to rolling out their cyber-capable models as both try to keep the technology out of the hands of malicious actors and adversarial governments [2]. Anthropic has taken a more restrictive view, allowing approximately 40 organizations to access Mythos through its new cybersecurity initiative, Project Glasswing, where members are trading information about how they're testing the model. Anthropic CEO Dario Amodei met with senior members of the Trump administration to discuss the model and its potential power [1]. OpenAI is taking a more open approach, releasing one version of its advanced models with stricter guardrails while creating a version with fewer safeguards for companies that apply for access through its Trusted Access for Cyber program.

The capabilities of these new models have sparked an urgent debate in Silicon Valley and the White House about how to keep them out of the hands of bad actors [2]. Federal Reserve Chairman Jerome Powell and Treasury Secretary Scott Bessent met with major U.S. bank CEOs to discuss Mythos last month, and Vice President JD Vance and Bessent held a call with leading tech CEOs ahead of the model's release [1]. The White House is actively discussing a slate of executive actions that could change how the federal government is involved in future model rollouts, signaling potential regulatory frameworks for the responsible deployment of advanced AI systems capable of identifying and exploiting security flaws. Organizations and cyber defenders should watch closely, as these policy discussions could reshape access to powerful AI security tools in the coming months.
22 Apr 2026•Technology

