3 Sources
[1]
OpenAI preps for models with higher bioweapons risk
Why it matters: The company, and society at large, need to be prepared for a future where amateurs can more readily graduate from simple garage weapons to sophisticated biological agents.
Driving the news: OpenAI executives told Axios the company expects forthcoming models to reach a high level of risk under its preparedness framework.
Reality check: OpenAI isn't necessarily saying that its platform will be capable of creating new types of bioweapons.
Between the lines: One of the challenges is that some of the same capabilities that could allow AI to help discover new medical breakthroughs can also be used for harm.
The big picture: OpenAI is not the only company warning of models reaching new levels of potentially harmful capability.
What's next: OpenAI said it will convene an event next month to bring together certain nonprofits and government researchers to discuss the opportunities and risks ahead.
[2]
OpenAI warns its future models will have a higher risk of aiding bioweapons development
OpenAI is warning that its next generation of advanced AI models could pose a significantly higher risk of biological weapon development, especially when used by individuals with little to no scientific expertise. OpenAI executives told Axios they anticipate upcoming models will soon trigger the high-risk classification under the company's preparedness framework, a system designed to evaluate and mitigate the risks posed by increasingly powerful AI models. OpenAI's head of safety systems, Johannes Heidecke, told the outlet that the company is "expecting some of the successors of our o3 (reasoning model) to hit that level."
In a blog post, the company said it was increasing its safety testing to mitigate the risk that models will help users in the creation of biological weapons. OpenAI is concerned that without these mitigations, models will soon be capable of "novice uplift," allowing those with limited scientific knowledge to create dangerous weapons.
"We're not yet in the world where there's like novel, completely unknown creation of bio threats that have not existed before," Heidecke said. "We are more worried about replicating things that experts already are very familiar with."
Part of the reason the problem is difficult is that the same capabilities that could unlock life-saving medical breakthroughs could also be used by bad actors for dangerous ends. According to Heidecke, this is why leading AI labs need highly accurate testing systems in place. "This is not something where like 99% or even one in 100,000 performance is ... sufficient," he said. "We basically need, like, near perfection."
Representatives for OpenAI did not immediately respond to a request for comment from Fortune, made outside normal working hours.
OpenAI is not the only company concerned about the misuse of its models when it comes to weapon development. As models get more advanced, their potential for misuse and risk generally grows. Anthropic recently launched its most advanced model, Claude Opus 4, with stricter safety protocols than any of its previous models, categorizing it as AI Safety Level 3 (ASL-3) under the company's Responsible Scaling Policy. Previous Anthropic models have all been classified AI Safety Level 2 (ASL-2) under the company's framework, which is loosely modeled after the U.S. government's biosafety level (BSL) system. Models categorized at this third safety level meet more dangerous capability thresholds and are powerful enough to pose significant risks, such as aiding in the development of weapons or automating AI R&D.
Anthropic's most advanced model also made headlines after it opted to blackmail an engineer to avoid being shut down in a highly controlled test. Early versions of Anthropic's Claude 4 were found to comply with dangerous instructions, such as helping to plan terrorist attacks, when prompted. However, the company said this issue was largely mitigated after a dataset that had been accidentally omitted during training was restored.
[3]
OpenAI exec warns of growing risk AI could aid in biological weapons development - SiliconANGLE
An OpenAI executive responsible for artificial intelligence safety has warned that the next generation of the company's large language models could be used to facilitate the development of deadly bioweapons by individuals with relatively little scientific knowledge.
OpenAI Head of Safety Systems Johannes Heidecke made the claim in an interview with Axios, saying that he anticipates its upcoming models will trigger what's known as a "high-risk classification" under the company's preparedness framework, a system it has set up to evaluate the risks posed by AI. He told Axios that he's expecting "some of the successors of our o3 reasoning model to hit that level."
OpenAI said in a blog post that it has been ramping up its safety tests to try to mitigate the risk that its models might be abused by someone looking to create biological weapons. The company acknowledged it's concerned that unless proper mitigation systems are put in place, its models could become capable of "novice uplift," enabling people with only limited scientific knowledge to create lethal weapons.
Heidecke said OpenAI isn't worried that AI might be used to create weapons that are completely unknown or haven't existed before, but about the potential to replicate some of the things that scientists are already very familiar with.
One of the challenges the company faces is that, while some of its models have the ability to potentially unlock life-saving new medical breakthroughs, the same knowledge base could also be used to cause harm. Heidecke said the only way to mitigate this risk is to create more accurate testing systems that can thoroughly assess new models before they're released to the public. "This is not something where like 99% or even one in 100,000 performance is sufficient," he said. "We basically need, like, near perfection."
OpenAI's rival Anthropic PBC has also raised concerns about the danger of AI models being misused to aid weapons development, warning that the risk becomes higher the more powerful they become. When it launched its most advanced model, Claude Opus 4, last month, it introduced much stricter safety protocols governing its use. The model was categorized as "AI Safety Level 3 (ASL-3)" within the company's internal Responsible Scaling Policy, which is modeled on the U.S. government's biosafety level system. The ASL-3 designation means Claude Opus 4 is powerful enough to potentially be used in the creation of bioweapons or to automate the research and development of even more sophisticated AI models.
Previously, Anthropic made headlines when one of its AI models attempted to blackmail a software engineer during a test, in an effort to avoid being shut down. Some early versions of Claude Opus 4 were also shown to comply with dangerous prompts, such as helping terrorists to plan attacks. Anthropic says it mitigated these risks after restoring a dataset that had previously been omitted during training.
OpenAI executives are alerting the public to the potential dangers of the company's upcoming AI models, which could aid in bioweapons development even when used by individuals with limited scientific knowledge.
OpenAI, a leading artificial intelligence research laboratory, has issued a stark warning about the potential dangers associated with its upcoming AI models. Executives from the company have revealed that they expect forthcoming iterations of its technology to reach a high level of risk under OpenAI's preparedness framework, particularly concerning the development of biological weapons [1].
Johannes Heidecke, OpenAI's Head of Safety Systems, expressed concern about a phenomenon termed "novice uplift." This refers to the ability of AI models to enable individuals with limited scientific expertise to create sophisticated and potentially dangerous weapons [2]. The company is particularly worried about the replication of known bio threats, rather than the creation of entirely new ones.
One of the key challenges highlighted by OpenAI is the dual-use nature of AI capabilities. The same advancements that could lead to groundbreaking medical discoveries also have the potential to be misused for harmful purposes. This dilemma underscores the need for extremely accurate and robust safety measures [3].
In response to these concerns, OpenAI has announced plans to significantly enhance its safety testing protocols. Heidecke emphasized the need for near-perfect performance in safety systems, stating that even 99.999% accuracy would be insufficient given the high stakes involved [2].
OpenAI is not alone in recognizing these risks. Other AI companies, such as Anthropic, have also implemented stricter safety protocols for their advanced models. Anthropic's Claude Opus 4, for instance, has been classified under a higher safety level due to its potential to aid in weapons development or automate AI research and development [2].
To address these challenges, OpenAI plans to convene an event next month, bringing together nonprofits and government researchers to discuss both the opportunities and risks associated with advanced AI models [1].
As AI continues to advance at a rapid pace, the tech industry faces the critical task of balancing innovation with responsible development. The warnings from OpenAI serve as a reminder of the potential consequences of unchecked AI progress and the urgent need for robust safety measures and ethical guidelines in the field of artificial intelligence.