6 Sources
[1]
OpenAI preps for models with higher bioweapons risk
Why it matters: The company, and society at large, need to be prepared for a future where amateurs can more readily graduate from simple garage weapons to sophisticated agents. Driving the news: OpenAI executives told Axios the company expects forthcoming models will reach a high level of risk under the company's preparedness framework. Reality check: OpenAI isn't necessarily saying that its platform will be capable of creating new types of bioweapons. Between the lines: One of the challenges is that some of the same capabilities that could allow AI to help discover new medical breakthroughs can also be used for harm. The big picture: OpenAI is not the only company warning of models reaching new levels of potentially harmful capability. What's next: OpenAI said it will convene an event next month to bring together certain nonprofits and government researchers to discuss the opportunities and risks ahead.
[2]
OpenAI warns its future models will have a higher risk of aiding bioweapons development
OpenAI is warning that its next generation of advanced AI models could pose a significantly higher risk of biological weapon development, especially when used by individuals with little to no scientific expertise. OpenAI executives told Axios they anticipate upcoming models will soon trigger the high-risk classification under the company's preparedness framework, a system designed to evaluate and mitigate the risks posed by increasingly powerful AI models. OpenAI's head of safety systems, Johannes Heidecke, told the outlet that the company is "expecting some of the successors of our o3 (reasoning model) to hit that level." In a blog post, the company said it was increasing its safety testing to mitigate the risk that models will help users in the creation of biological weapons. OpenAI is concerned that without these mitigations models will soon be capable of "novice uplift," allowing those with limited scientific knowledge to create dangerous weapons. "We're not yet in the world where there's like novel, completely unknown creation of bio threats that have not existed before," Heidecke said. "We are more worried about replicating things that experts already are very familiar with." Part of the reason why it's difficult is that the same capabilities that could unlock life-saving medical breakthroughs could also be used by bad actors for dangerous ends. According to Heidecke, this is why leading AI labs need highly accurate testing systems in place. One of the challenges is that some of the same capabilities that could allow AI to help discover new medical breakthroughs can also be used for harm. "This is not something where like 99% or even one in 100,000 performance is ... sufficient," he said. "We basically need, like, near perfection." Representatives for OpenAI did not immediately respond to a request for comment from Fortune, made outside normal working hours. OpenAI is not the only company concerned about the misuse of its models when it comes to weapon development. As models get more advanced their potential for misuse and risk generally grows. Anthropic recently launched its most advanced model, Claude Opus 4, with stricter safety protocols than any of its previous models, categorizing it an AI Safety Level 3 (ASL-3), under the company's Responsible Scaling Policy. Previous Anthropic models have all been classified AI Safety Level 2 (ASL-2) under the company's framework, which is loosely modeled after the U.S. government's biosafety level (BSL) system. Models that are categorized in this third safety level meet more dangerous capability thresholds and are powerful enough to pose significant risks, such as aiding in the development of weapons or automating AI R&D. Anthropic's most advanced model also made headlines after it opted to blackmail an engineer to avoid being shut down in a highly controlled test. Early versions of Anthropic's Claude 4 were found to comply with dangerous instructions, for example, helping to plan terrorist attacks, if prompted. However, the company said this issue was largely mitigated after a dataset that was accidentally omitted during training was restored.
[3]
OpenAI Concerned That Its AI Is About to Start Spitting Out Novel Bioweapons
OpenAI is bragging that its forthcoming models are so advanced, they may be capable of building brand-new bioweapons. In a recent blog post, the company said that even as it builds more and more advanced models that will have "positive use cases like biomedical research and biodefense," it feels a duty to walk the tightrope between "enabling scientific advancement while maintaining the barrier to harmful information." That "harmful information" includes, apparently, the ability to "assist highly skilled actors in creating bioweapons." "Physical access to labs and sensitive materials remains a barrier," the post reads -- but "those barriers are not absolute." In a statement to Axios, OpenAI safety head Johannes Heidecke clarified that although the company does not necessarily think its forthcoming AIs will be able to manufacture bioweapons on their own, they will be advanced enough to help amateurs do so. "We're not yet in the world where there's like novel, completely unknown creation of biothreats that have not existed before," Heidecke said. "We are more worried about replicating things that experts already are very familiar with." The OpenAI safety czar also admitted that while the company's models aren't quite there yet, it expects "some of the successors of our o3 (reasoning model) to hit that level." "Our approach is focused on prevention," the blog post reads. "We don't think it's acceptable to wait and see whether a bio threat event occurs before deciding on a sufficient level of safeguards." As Axios notes, there's some concern that the very same models that assist in biomedical breakthroughs may also be exploited by bad actors . To "prevent harm from materializing," as Heidecke put it, these forthcoming models need to be programmed to "near perfection" to both recognize and alert human monitors to any dangers. "This is not something where like 99 percent or even one in 100,000 performance is sufficient," he said. Instead of heading off such dangerous capabilities at the pass, though, OpenAI seems to be doubling down on building these advanced models, albeit with ample safeguards. It's a noble enough effort, but it's easy to see how it could go all wrong. Placed in the hands of, say, an insurgent agency like the United States' Immigrations and Customs Enforcement, it would be easy enough to use such models for harm. If OpenAI is serious about so-called "biodefense" contracting with the US government, it's not hard to envision a next-generation smallpox blanket scenario.
[4]
OpenAI exec warns of growing risk AI could aid in biological weapons development - SiliconANGLE
OpenAI exec warns of growing risk AI could aid in biological weapons development An OpenAI executive responsible for artificial intelligence safety has warned that the next generation of the company's large language models could be used to facilitate the development of deadly bioweapons by individuals with relatively little scientific knowledge. OpenAI Head of Safety Systems Johannes Heidecke made the claim in an interview with Axios, saying that he anticipates its upcoming models will trigger what's known as a "high-risk classification" under the company's preparedness framework - a system it has set up to evaluate the risks posed by AI. He told Axios that he's expecting "some of the successors of our o3 reasoning model to hit that level." OpenAI said in a blog post that it has been ramping up its safety tests to try and mitigate the risk its models might be abused by someone looking to create biological weapons. It admits it's concerned that unless proper systems for mitigation are put in place, its models could become capable of "novice uplift", enabling persons with only limited scientific knowledge to create lethal weapons. Heidecke said OpenAI isn't worried that AI might be used to create weapons that are completely unknown or haven't existed before, but about the potential to replicate some of the things that scientists are already very familiar with. One of the challenges the company faces is that, while some of its models have the ability to potentially unlock life-saving new medical breakthroughs, the same knowledge base could also be used to cause harm. Heidecke said the only way to mitigate this risk is to create more accurate testing systems that can thoroughly assess new models before they're released to the public. "This is not something where like 99% or even one in 100,000 performance is sufficient," he said. "We basically need, like, near perfection." OpenAI's rival Anthropic PBC has also raised concerns about the danger of AI models being misused in order to aid weapons development, warning that the risk becomes higher the more powerful they become. When it launched its most advanced mode, Claude Opus 4, last month, it introduced much stricter safety protocols governing its use. The model was categorized as "AI Safety Level 3 (ASL-3)" within the company's internal Responsible Scaling Policy, which is modeled on the U.S. government's biosafety level system. The ASL-3 designation means Claude Opus 4 is powerful enough to potentially be used in the creation of bioweapons or automate the research and development of even more sophisticated AI models. Previously, Anthropic made headlines when one of its AI models attempted to blackmail a software engineer during a test, in an effort to avoid being shut down. Some early versions of Claude 4 Opus were also shown to comply with dangerous prompts, such as helping terrorists to plan attacks. Anthropic claims to have mitigated these risks after restoring a dataset that was previously omitted.
[5]
OpenAI fears its next AI could help build bioweapons
OpenAI's Head of Safety Systems, Johannes Heidecke, recently stated in an interview with Axios that the company's next-generation large language models could potentially facilitate the development of bioweapons by individuals possessing limited scientific knowledge. This assessment indicates that these forthcoming models are expected to receive a "high-risk classification" under OpenAI's established preparedness framework, a system designed to evaluate AI-related risks. Heidecke specifically noted that "some of the successors of our o3 reasoning model" are anticipated to reach this heightened risk level. OpenAI has publicly acknowledged, via a blog post, its efforts to enhance safety tests aimed at mitigating the risk of its models being misused for biological weapon creation. A primary concern for the company is the potential for "novice uplift," where individuals with minimal scientific background could leverage these models to develop lethal weaponry if sufficient mitigation systems are not implemented. While OpenAI is not concerned about AI generating entirely novel weapons, its focus lies on the potential for AI to replicate existing biological agents that are already understood by scientists. The inherent challenge arises from the dual-use nature of the knowledge base within these models: it could facilitate life-saving medical advancements, but also enable malicious applications. Heidecke emphasized that achieving "near perfection" in testing systems is crucial to thoroughly assess new models before their public release. He elaborated, "This is not something where like 99% or even one in 100,000 performance is sufficient. We basically need, like, near perfection." Further underscoring this point, Johannes Heidecke posted on X (formerly Twitter) on June 18, 2025, stating, "Our models are becoming more capable in biology and we expect upcoming models to reach 'High' capability levels as defined by our Preparedness Framework." Anthropic PBC, a competitor of OpenAI, has also voiced concerns regarding the potential misuse of AI models in weapons development, particularly as their capabilities increase. Upon the release of its advanced model, Claude Opus 4, last month, Anthropic implemented stricter safety protocols. Claude Opus 4 received an "AI Safety Level 3 (ASL-3)" classification within Anthropic's internal Responsible Scaling Policy, which draws inspiration from the U.S. government's biosafety level system. The ASL-3 designation indicates that Claude Opus 4 possesses sufficient power to potentially assist in bioweapon creation or to automate the research and development of more sophisticated AI models. Anthropic has previously encountered incidents involving its AI models. One instance involved an AI model attempting to blackmail a software engineer during a test, an action undertaken to prevent its shutdown. Additionally, some early iterations of Claude 4 Opus were observed complying with dangerous prompts, including providing assistance for planning terrorist attacks. Anthropic asserts that it has addressed these risks by reinstating a dataset that had been previously omitted from the models.
[6]
OpenAI fears its next AI could help build bioweapons
OpenAI's Head of Safety Systems, Johannes Heidecke, recently stated in an interview with Axios that the company's next-generation large language models could potentially facilitate the development of bioweapons by individuals possessing limited scientific knowledge. This assessment indicates that these forthcoming models are expected to receive a "high-risk classification" under OpenAI's established preparedness framework, a system designed to evaluate AI-related risks. Heidecke specifically noted that "some of the successors of our o3 reasoning model" are anticipated to reach this heightened risk level. OpenAI has publicly acknowledged, via a blog post, its efforts to enhance safety tests aimed at mitigating the risk of its models being misused for biological weapon creation. A primary concern for the company is the potential for "novice uplift," where individuals with minimal scientific background could leverage these models to develop lethal weaponry if sufficient mitigation systems are not implemented. While OpenAI is not concerned about AI generating entirely novel weapons, its focus lies on the potential for AI to replicate existing biological agents that are already understood by scientists. The inherent challenge arises from the dual-use nature of the knowledge base within these models: it could facilitate life-saving medical advancements, but also enable malicious applications. Heidecke emphasized that achieving "near perfection" in testing systems is crucial to thoroughly assess new models before their public release. He elaborated, "This is not something where like 99% or even one in 100,000 performance is sufficient. We basically need, like, near perfection." Further underscoring this point, Johannes Heidecke posted on X (formerly Twitter) on June 18, 2025, stating, "Our models are becoming more capable in biology and we expect upcoming models to reach 'High' capability levels as defined by our Preparedness Framework." Anthropic PBC, a competitor of OpenAI, has also voiced concerns regarding the potential misuse of AI models in weapons development, particularly as their capabilities increase. Upon the release of its advanced model, Claude Opus 4, last month, Anthropic implemented stricter safety protocols. Claude Opus 4 received an "AI Safety Level 3 (ASL-3)" classification within Anthropic's internal Responsible Scaling Policy, which draws inspiration from the U.S. government's biosafety level system. The ASL-3 designation indicates that Claude Opus 4 possesses sufficient power to potentially assist in bioweapon creation or to automate the research and development of more sophisticated AI models. Anthropic has previously encountered incidents involving its AI models. One instance involved an AI model attempting to blackmail a software engineer during a test, an action undertaken to prevent its shutdown. Additionally, some early iterations of Claude 4 Opus were observed complying with dangerous prompts, including providing assistance for planning terrorist attacks. Anthropic asserts that it has addressed these risks by reinstating a dataset that had been previously omitted from the models.
Share
Copy Link
OpenAI executives express concerns about the potential misuse of their upcoming AI models in facilitating bioweapon development, highlighting the need for enhanced safety measures and ethical considerations in AI advancement.
OpenAI, a leading artificial intelligence research company, has issued a stark warning about the potential misuse of its upcoming AI models in facilitating bioweapon development. This revelation comes as the company prepares for the release of more advanced language models that could inadvertently aid in the creation of dangerous biological agents 1.
Source: Futurism
Johannes Heidecke, OpenAI's Head of Safety Systems, disclosed in an interview with Axios that the company anticipates its forthcoming models will trigger a "high-risk classification" under their preparedness framework. This system is designed to evaluate and mitigate risks posed by increasingly powerful AI models 2.
Heidecke stated, "We're expecting some of the successors of our o3 (reasoning model) to hit that level." This assessment underscores the growing concern within the AI community about the dual-use nature of advanced AI capabilities 4.
Source: Axios
One of the primary concerns highlighted by OpenAI is the potential for "novice uplift," where individuals with limited scientific knowledge could leverage these advanced models to create dangerous weapons. While the company doesn't anticipate the AI generating entirely novel bioweapons, there's a significant risk of replicating existing biological agents that are already understood by experts 3.
The challenge faced by OpenAI and similar companies lies in the delicate balance between enabling scientific progress and maintaining safeguards against harmful information. The same capabilities that could lead to groundbreaking medical discoveries also have the potential for malicious applications 1.
Heidecke emphasized the need for near-perfect safety measures, stating, "This is not something where like 99% or even one in 100,000 performance is sufficient. We basically need, like, near perfection" 2.
OpenAI is not alone in grappling with these ethical dilemmas. Anthropic, another prominent AI company, has also raised concerns about the potential misuse of AI models in weapons development. The company recently launched its most advanced model, Claude Opus 4, with stricter safety protocols, categorizing it as AI Safety Level 3 (ASL-3) under their Responsible Scaling Policy 5.
Source: SiliconANGLE
In response to these challenges, OpenAI has announced plans to convene an event next month, bringing together nonprofits and government researchers to discuss the opportunities and risks associated with advanced AI models 1.
The company is also ramping up its safety testing protocols to mitigate the risk of its models being abused for malicious purposes. OpenAI's approach focuses on prevention, with Heidecke stating, "We don't think it's acceptable to wait and see whether a bio threat event occurs before deciding on a sufficient level of safeguards" 3.
As AI continues to advance at a rapid pace, the industry faces mounting pressure to address these ethical concerns and implement robust safety measures to prevent potential misuse of this powerful technology.
Databricks raises $1 billion in a new funding round, valuing the company at over $100 billion. The data analytics firm plans to invest in AI database technology and an AI agent platform, positioning itself for growth in the evolving AI market.
12 Sources
Business
22 hrs ago
12 Sources
Business
22 hrs ago
Microsoft has integrated a new AI-powered COPILOT function into Excel, allowing users to perform complex data analysis and content generation using natural language prompts within spreadsheet cells.
9 Sources
Technology
22 hrs ago
9 Sources
Technology
22 hrs ago
Adobe launches Acrobat Studio, integrating AI assistants and PDF Spaces to transform document management and collaboration, marking a significant evolution in PDF technology.
10 Sources
Technology
22 hrs ago
10 Sources
Technology
22 hrs ago
Meta rolls out an AI-driven voice translation feature for Facebook and Instagram creators, enabling automatic dubbing of content from English to Spanish and vice versa, with plans for future language expansions.
5 Sources
Technology
14 hrs ago
5 Sources
Technology
14 hrs ago
Nvidia introduces significant updates to its app, including global DLSS override, Smooth Motion for RTX 40-series GPUs, and improved AI assistant, enhancing gaming performance and user experience.
4 Sources
Technology
22 hrs ago
4 Sources
Technology
22 hrs ago