3 Sources
[1]
OpenAI unveils new measures as frontier AI grows cyber-powerful
The company expects future models could reach "High" capability levels under its Preparedness Framework. That means models powerful enough to develop working zero-day exploits or assist with sophisticated enterprise intrusions. In anticipation, OpenAI says it is preparing safeguards as if every new model could reach that threshold, ensuring progress is paired with strong risk controls. OpenAI is expanding investments in models designed to support defensive workflows, from auditing code to patching vulnerabilities at scale. The company says its aim is to give defenders an edge in a landscape where they are often "outnumbered and under-resourced." Because offensive and defensive cyber tasks rely on the same knowledge, OpenAI says it is adopting a defense-in-depth approach rather than depending on any single safeguard. The company emphasizes shaping "how capabilities are accessed, guided, and applied" to ensure AI strengthens cybersecurity rather than lowering barriers to misuse.
[2]
Exclusive: Future OpenAI models likely to pose "high" cybersecurity risk, it says
Why it matters: The models' growing capabilities could significantly expand the number of people able to carry out cyberattacks.
Driving the news: OpenAI said it has already seen a significant increase in capabilities in recent releases, particularly as models become able to operate autonomously for longer, paving the way for brute-force attacks.
* The company notes that GPT-5 scored 27% on a capture-the-flag exercise in August, while GPT-5.1-Codex-Max scored 76% last month.
* "We expect that upcoming AI models will continue on this trajectory," the company says in the report. "In preparation, we are planning and evaluating as though each new model could reach 'high' levels of cybersecurity capability as measured by our Preparedness Framework."
Catch up quick: OpenAI issued a similar warning about bioweapons risk in June, then released ChatGPT Agent in July, which did indeed rate "high" on its risk levels.
* "High" is the second-highest level, below the "critical" level at which models are unsafe to release publicly.
Yes, but: The company didn't say exactly when to expect the first models rated "high" for cybersecurity risk, or which types of future models could pose such a risk.
What they're saying: "What I would explicitly call out as the forcing function for this is the model's ability to work for extended periods of time," OpenAI's Fouad Matin told Axios in an exclusive interview.
* Brute-force attacks that rely on this extended runtime are more easily defended against, Matin says.
* "In any defended environment this would be caught pretty easily," he added.
The big picture: Leading models are getting better at finding security vulnerabilities -- and not just models from OpenAI.
[3]
OpenAI warns new models pose 'high' cybersecurity risk
OpenAI warned that its upcoming models could create serious cyber risks, including helping generate zero-day exploits or aiding sophisticated attacks. The company says it is boosting defensive uses of AI, such as code audits and vulnerability fixes. It is also tightening controls and monitoring to reduce misuse. OpenAI said on Wednesday the cyber capabilities of its artificial intelligence models are increasing and warned that upcoming models are likely to pose a "high" cybersecurity risk. The AI models might either develop working zero-day remote exploits against well-defended systems or assist with complex enterprise or industrial intrusion operations aimed at real-world effects, the ChatGPT maker said in a blog post. As capabilities advance, OpenAI said it is "investing in strengthening models for defensive cybersecurity tasks and creating tools that enable defenders to more easily perform workflows such as auditing code and patching vulnerabilities". To counter cybersecurity risks, OpenAI said it is relying on a mix of access controls, infrastructure hardening, egress controls and monitoring.
OpenAI has issued a warning that its next-generation AI models are expected to reach "high" cybersecurity risk levels under its Preparedness Framework. The company revealed dramatic capability gains, with a recent model scoring 76% on a capture-the-flag exercise compared to 27% just months earlier. OpenAI is now preparing safeguards and investing in defensive AI tools to help security teams audit code and patch vulnerabilities at scale.
OpenAI has issued a stark warning that its upcoming AI models will likely pose a "high" cybersecurity risk, marking a significant escalation in the dual-use capabilities of advanced AI models [1][2]. Under the company's Preparedness Framework, this rating means future models could develop working zero-day exploits or assist with sophisticated enterprise and industrial intrusion operations [3]. The "high" designation sits just below the "critical" threshold at which models would be deemed unsafe for public release [2].
The warning comes amid evidence of rapid capability growth in recent releases. GPT-5 scored just 27% on a capture-the-flag cybersecurity exercise in August, but GPT-5.1-Codex-Max achieved a striking 76% success rate on the same test just months later [2]. This nearly threefold improvement demonstrates how quickly these systems are advancing. According to OpenAI's Fouad Matin, the key forcing function behind this escalation is the models' growing autonomy: specifically, their ability to work for extended periods without human intervention, enabling brute-force attacks that require sustained effort over time [2].
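Matin's later point, that long-running automated attacks stand out to defenders, can be made concrete with a toy example. The sketch below is purely illustrative (it is not anything OpenAI has published; the thresholds and function names are invented): a defender counts failed authentication attempts per source within a sliding time window and flags any source that exceeds a limit, which is exactly the signature that sustained brute-force activity produces.

```python
from collections import defaultdict, deque
import time

# Hypothetical thresholds: flag a source with more than MAX_FAILURES
# failed attempts inside WINDOW_SECONDS. Real systems tune these values.
WINDOW_SECONDS = 60
MAX_FAILURES = 20

# Per-source timestamps of recent failed attempts.
_failures: dict[str, deque] = defaultdict(deque)

def record_failed_login(source_ip: str, now: float | None = None) -> bool:
    """Record a failed login; return True if the source now looks like
    a brute-force attacker (too many failures inside the window)."""
    now = time.time() if now is None else now
    window = _failures[source_ip]
    window.append(now)
    # Drop attempts that have aged out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > MAX_FAILURES

# Example: a script hammering one endpoint trips the detector quickly,
# which is why sustained automated attacks are visible in monitored logs.
if __name__ == "__main__":
    for i in range(25):
        if record_failed_login("203.0.113.7", now=1000.0 + i):
            print(f"attempt {i + 1}: source flagged for brute-force behavior")
            break
```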
In response to these advancing cyber capabilities, OpenAI says it is preparing safeguards as if every new model could reach the high-risk threshold, ensuring progress is paired with strong risk controls [1]. The company is expanding investments in models designed to support defensive workflows, from code auditing to vulnerability patching at scale [1][3]. OpenAI emphasizes giving defenders an edge in a landscape where security teams are "outnumbered and under-resourced" [1].
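The blog post does not describe what these defensive tools look like internally, but "auditing code at scale" generally means mechanically flagging risky patterns for human or model review. Here is a minimal sketch of that idea; the pattern list and function names are my own illustration, not OpenAI tooling:

```python
import ast
import sys

# Call names that commonly warrant a closer look in Python code review.
# This list is illustrative; real audit tooling uses far richer rules.
SUSPICIOUS_CALLS = {"eval", "exec", "pickle.loads", "os.system"}

def _call_name(func: ast.expr) -> str:
    # Resolve plain names (eval) and simple dotted names (os.system).
    if isinstance(func, ast.Name):
        return func.id
    if isinstance(func, ast.Attribute) and isinstance(func.value, ast.Name):
        return f"{func.value.id}.{func.attr}"
    return ""

def audit_source(path: str) -> list[tuple[int, str]]:
    """Parse a Python file and return (line, call) pairs for calls
    matching the suspicious-pattern list."""
    with open(path, encoding="utf-8") as f:
        tree = ast.parse(f.read(), filename=path)
    findings = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            name = _call_name(node.func)
            if name in SUSPICIOUS_CALLS:
                findings.append((node.lineno, name))
    return findings

if __name__ == "__main__":
    for filename in sys.argv[1:]:
        for lineno, call in audit_source(filename):
            print(f"{filename}:{lineno}: suspicious call to {call}()")
```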
Because offensive and defensive cyber tasks rely on the same knowledge base, OpenAI is adopting a defense-in-depth strategy rather than depending on any single safeguard [1]. The company is implementing a mix of access controls, infrastructure hardening, egress controls, and monitoring to counter potential misuse [3]. The focus is on shaping "how capabilities are accessed, guided, and applied" to ensure AI strengthens cybersecurity rather than lowering barriers to attacks [1].
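Of the layers named above, egress control is the easiest to picture: a model's sandbox permits outbound traffic only to an approved set of destinations. A toy version of that check, with an invented allowlist (OpenAI's actual controls are not public), could look like this:

```python
from urllib.parse import urlparse

# Hypothetical allowlist: destinations an agent sandbox may contact.
# Real deployments would manage this centrally and log every decision.
ALLOWED_HOSTS = {"api.github.com", "pypi.org", "files.pythonhosted.org"}

def egress_permitted(url: str) -> bool:
    """Return True only if the URL targets an allowlisted host over HTTPS."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in ALLOWED_HOSTS

# Example: package downloads pass; an arbitrary exfiltration target does not.
assert egress_permitted("https://pypi.org/simple/requests/")
assert not egress_permitted("https://attacker.example.com/upload")
assert not egress_permitted("http://pypi.org/simple/requests/")  # plain HTTP blocked
```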
The implications extend beyond OpenAI alone, as leading models across the industry are getting better at finding security vulnerabilities [2]. The growing capabilities could significantly expand the number of people able to carry out cyberattacks, democratizing access to sophisticated techniques previously limited to skilled threat actors [2]. However, Matin noted that brute-force attacks relying on extended autonomous operation are more easily defended against, stating that "in any defended environment this would be caught pretty easily" [2].
This warning follows a similar alert OpenAI issued regarding bioweapons risk in June, before releasing ChatGPT Agent in July, which did indeed rate "high" on its risk levels [2]. The company has not specified exactly when to expect the first models rated high for cybersecurity risk, or which model types could pose such threats [2]. As AI capabilities continue their upward trajectory, the race between offensive potential and defensive AI capabilities will define the security posture of organizations worldwide.