OpenAI launches GPT-5.4-Cyber model week after Anthropic's Mythos rattles cybersecurity world

Reviewed by Nidhi Govil

OpenAI unveiled GPT-5.4-Cyber, an AI model designed to find software security vulnerabilities, just one week after rival Anthropic released its controversial Mythos model. The new model is being released to a limited group of verified cybersecurity professionals through OpenAI's Trusted Access for Cyber program, reflecting growing concerns about AI-powered hacking and the potential for misuse of increasingly capable models.

OpenAI Unveils GPT-5.4-Cyber in Response to Anthropic's Mythos

OpenAI announced GPT-5.4-Cyber on Tuesday, a specialized AI model for cybersecurity designed to identify software security vulnerabilities before malicious actors can exploit them.[1] The announcement comes exactly one week after competitor Anthropic revealed its Claude Mythos Preview model, which the company claimed has already detected thousands of severe vulnerabilities in "every major operating system and web browser".[5] Rather than an entirely new architecture, GPT-5.4-Cyber is a fine-tuned version of OpenAI's existing GPT-5.4 large language model, adjusted specifically for defensive cybersecurity work with lowered guardrails for security tasks.[2]

Source: Axios


Limited Release Through Trusted Access for Cyber Program

The model will be available exclusively through OpenAI's Trusted Access for Cyber (TAC) program, which launched in February to provide verified cybersecurity professionals early access to advanced models.[3] Initially, hundreds of users will test the new model, with plans to expand to thousands in the coming weeks.[3] OpenAI is also introducing new tiers to the TAC program, with higher verification levels unlocking more powerful capabilities.[4] Users approved for the highest tier will gain access to GPT-5.4-Cyber, which has fewer restrictions on sensitive tasks such as vulnerability research and analysis.[4]

Source: CNET


Striking a Different Tone on AI Safety

While Anthropic's Mythos announcement emphasized catastrophic risks and led to the formation of Project Glasswing, a controlled initiative restricting access to select organizations including Amazon, Apple, and Microsoft,[3] OpenAI adopted a more measured approach. The company stated that "the class of safeguards in use today sufficiently reduce cyber risk enough to support broad deployment of current models".[1] However, OpenAI acknowledged that models explicitly trained for cybersecurity work require more restrictive deployments and appropriate controls, and that long-term AI safety in cybersecurity will demand more expansive defenses.[1]

Three-Pillar Cybersecurity Strategy

OpenAI outlined three core pillars for its cybersecurity approach. The first involves "know your customer" validation systems designed to democratize access while preventing arbitrary gatekeeping.[1] The second focuses on iterative deployment, carefully releasing and refining new capabilities to improve resilience to jailbreaks and other adversarial attacks while enhancing digital defense capabilities.[1] The third centers on investments supporting application security as generative AI proliferates.[1]

Escalating AI Cyber Security Arms Race

The rapid succession of these announcements highlights the intensifying AI cybersecurity arms race between OpenAI and Anthropic. Both companies have been competing throughout the year to prove their models are the most capable, particularly in pursuit of government agency and enterprise contracts.[2] Concern about AI-powered hacking has reached the highest levels of government: last week, US Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell summoned Wall Street leaders to urge them to take the Mythos model seriously.[3] The Treasury Department's technology team is seeking access to Mythos to hunt for vulnerabilities, while financial institutions and financial regulators have held discussions about the model's implications.[5]

Source: SiliconANGLE


Proven Track Record and Broader Security Efforts

OpenAI positioned GPT-5.4-Cyber within its broader security ecosystem, pointing to Codex Security, an application security AI agent launched in March that has contributed to fixing more than 3,000 critical and high-priority vulnerabilities.[5] The company also referenced its cybersecurity grants program, which began in 2023, a recent donation to the Linux Foundation supporting open source security, and its Preparedness Framework, designed to assess and defend against severe harm from frontier AI capabilities.[1]

Divided Expert Opinion on the Threat Level

Anthropic's claims that more capable AI models necessitate a cybersecurity reckoning have sparked controversy among security experts. Some argue the concern is overstated and could fuel anti-hacker sentiment while consolidating power with tech giants.[1] Others counter that well-known vulnerabilities and shortcomings in current security defenses could indeed be exploited with new speed and intensity by a broader range of bad actors, including state-sponsored hackers, in the age of agentic AI.[1] OpenAI acknowledged this reality, stating that "cyber risk is already here and accelerating" and noting that "digital infrastructure has already been vulnerable for years, before advanced AI even came along".[5] The potential for misuse remains a central concern as both companies balance enabling defenders against preventing exploitation by malicious actors. That balance makes the limited-release approach critical for understanding how the model handles software bugs, and for testing it against jailbreaks, before wider deployment.


TheOutpost.ai


© 2026 Triveous Technologies Private Limited