2 Sources
[1]
A call for built-in biosecurity safeguards for generative AI tools - Nature Biotechnology
Generative AI is changing biotechnology research, accelerating drug discovery, protein design and synthetic biology. It also enhances biomedical imaging, personalized medicine and laboratory automation, enabling faster and more efficient scientific advances. However, these breakthroughs have also raised biosecurity concerns, prompting policy and community discussions. The power of generative AI lies in its ability to generalize from known data to the unknown. Deep generative models can predict novel biological molecules that may not resemble existing genome sequences or proteins. This capability introduces dual-use risks and serious biosecurity threats: such models could potentially bypass the established safety screening mechanisms used by nucleic acid synthesis providers, which presently rely on database matching to identify sequences of concern. AI-driven tools could be misused to engineer pathogens, toxins or destabilizing biomolecules, and AI science agents could amplify risks by automating experimental design.
[2]
Built-in safeguards might stop AI from designing bioweapons - Science
Artificial intelligence (AI) is fast-tracking the design of new proteins that could serve as drugs, vaccines, and other therapies. But that promise comes with fears that the same tools could be put to work designing the building blocks of biological weapons or harmful toxins. Now, scientists are proposing a range of protective measures that could be built into the AI tools themselves, to either block malicious uses or make it possible to trace a novel bioweapon to its AI creator.

"Getting this kind of framework right will be key for ... harnessing the great potential of this technology while preventing the emergence of very serious risks," Thomas Inglesby, epidemiologist and director of the Johns Hopkins University Center for Health Security, wrote in an email to Science. Inglesby was not an author of the new article, published today in Nature Biotechnology, but he has previously shared concerns about the misuse of AI in biological settings.

In recent years, scientists have demonstrated that AI models can not only predict protein structures from their amino acid sequences, but also generate never-before-seen protein sequences with novel functions, and in record time. Recent AI models, such as RFdiffusion and ProGen, can custom design proteins in a matter of seconds. Few question their promise for basic science and medicine. But Mengdi Wang, a computer scientist at Princeton University and an author of the new paper, notes that their power and ease of use are worrisome. "AI has become so easy and accessible. Someone doesn't have to have a Ph.D. to be able to generate a toxic compound or a virus sequence," she says.

Kevin Esvelt, a biologist at the Massachusetts Institute of Technology Media Lab who has testified before the U.S. Congress in support of stricter controls on research into risky viruses and DNA production, notes the concern remains theoretical. "There's no laboratory evidence indicating that the models are good enough to actually let you cause a new pandemic today," he says. Still, a group of 130 protein researchers, including Inglesby, signed a pledge last year to use AI safely in their work.

Now, Wang and her colleagues go beyond voluntary measures by outlining safeguards that could be built into AI models themselves. One guardrail, known as FoldMark, was developed in Wang's lab. It borrows its concept from existing tools such as Google DeepMind's SynthID, which embed digital patterns into AI-generated content without changing its quality. In FoldMark's case, a code that serves as a unique identifier is inserted into a protein structure without changing the protein's function. If a novel toxin were detected, the code could be used to trace it to its source. This kind of intervention is "both feasible and of great potential value in reducing risks," Inglesby says.

Wang and her colleagues also suggest ways to modify AI models so they are less likely to do harm. Protein prediction models are trained on existing proteins, including toxins and pathogenic proteins, and an approach called "unlearning" would strip away some of that training, making it harder for the model to propose dangerous new proteins. Their paper also suggests "antijailbreaking," which systematically trains AI models to recognize and reject potentially malicious prompts. And it urges developers to adopt external safeguards such as autonomous agents that can monitor how an AI is being used and alert a safety officer when someone attempts to produce hazardous biological materials.
Alvaro Velasquez, a program manager overseeing AI projects at the Defense Advanced Research Projects Agency and a co-author of the paper, concedes that implementing these safeguards will not be straightforward. "Having a regulating body or some level of oversight would be a starting point," Velasquez says.

James Zou, a computational biologist at Stanford University, thinks that instead of requiring the AI models themselves to incorporate guardrails, regulators could focus on service facilities or organizations that can turn AI-generated protein designs into large-scale production. "I think the place where it makes sense to have more guardrails and regulations is at the level of where the AI meets the real world," he says. Production facilities could ask about the origin of new molecules and what their intended use is. "And maybe even run some tests to see if these molecules are potentially dangerous," he adds. But Zou agrees the new focus on safeguards is healthy. "People have not given as much thought [to AI and biosecurity] as they have for other areas like misinformation or deep fake [technology]. I'm glad that researchers are starting to pay attention to this."
Researchers propose built-in safeguards for AI tools in biotechnology to mitigate biosecurity risks while preserving the technology's potential for scientific advancement.
Generative AI is revolutionizing biotechnology research, accelerating advances in drug discovery, protein design, and synthetic biology [1]. These AI-driven tools are enhancing many aspects of scientific research, including biomedical imaging, personalized medicine, and laboratory automation. However, the rapid progress in AI capabilities has also raised significant biosecurity concerns, prompting discussions among scientists and policymakers [1][2].
The power of generative AI lies in its ability to predict novel biological molecules that may not resemble existing genome sequences or proteins. This capability introduces dual-use risks and serious biosecurity threats. There are concerns that AI models could potentially bypass established safety screening mechanisms used by nucleic acid synthesis providers, which currently rely on database matching to identify sequences of concern [1].
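To make the screening mechanism concrete, here is a minimal sketch of database matching: an order is flagged only if it shares a short exact subsequence with a known sequence of concern. The database entries, the 12-base window, and the exact-match criterion are illustrative assumptions, not any provider's actual pipeline.

```python
# Toy illustration of database-matching biosecurity screening.
# The "database of concern" entries and the 12-mer window size are
# hypothetical; real providers use curated databases and alignment
# tools with far more sophisticated criteria.

CONCERN_DB = {
    "toxin_x": "ATGGCTAAGCTTGGCACTGAAGTCGATCGATTAG",
    "agent_y": "ATGCCCGGGAAATTTCAGGATCACGTGCTAGCTA",
}

K = 12  # window size for exact-match screening (illustrative)

def kmers(seq: str, k: int = K) -> set[str]:
    """All overlapping k-length substrings of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def screen_order(order_seq: str) -> list[str]:
    """Return names of database entries sharing any k-mer with the order."""
    order_kmers = kmers(order_seq.upper())
    return [name for name, ref in CONCERN_DB.items()
            if order_kmers & kmers(ref)]

if __name__ == "__main__":
    hits = screen_order("ATGGCTAAGCTTGGCACTGAA")  # overlaps toxin_x
    print("flagged:" if hits else "clear", hits)
```

The worry raised above is exactly the gap this toy exposes: a generative model can emit a functional sequence that shares no window with any database entry, so exact matching reports it as clear.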
To address these concerns, scientists are proposing a range of protective measures that could be built into AI tools themselves. These safeguards aim to either block malicious uses or make it possible to trace a novel bioweapon to its AI creator [2].
FoldMark: Developed in Mengdi Wang's lab at Princeton University, this guardrail embeds a unique identifier code into protein structures without altering their function, so that a potentially harmful novel toxin could be traced back to its source [2] (a toy watermarking sketch follows this list).
Unlearning: This approach strips away some of a model's training on existing toxins and pathogenic proteins, making it harder for the model to propose dangerous new proteins [2] (see the unlearning sketch below).
Antijailbreaking: This method systematically trains AI models to recognize and reject potentially malicious prompts [2] (see the prompt-gating sketch below).
External safeguards: Autonomous agents monitor how an AI is being used and alert a safety officer when someone attempts to produce hazardous biological materials [2] (see the monitoring sketch below).
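FoldMark's actual embedding scheme is not spelled out in the coverage above, so the following is only a toy stand-in for the watermark-then-trace idea: identifier bits are hidden in sub-angstrom coordinate nudges and read back out later. The coordinates, bit string, and parity trick are all hypothetical.

```python
# Toy structural watermark: hide identifier bits in tiny coordinate
# perturbations. This is NOT FoldMark's algorithm, only an illustration
# of the embed-then-trace concept; a real watermark must survive noise
# and provably preserve function, which this toy does not attempt.

def embed(coords: list[float], bits: str) -> list[float]:
    """Nudge coordinates so the parity of the 3rd decimal encodes each bit."""
    out = list(coords)
    for i, b in enumerate(bits):
        d = int(round(out[i] * 1000))   # work at 0.001 resolution
        if d % 2 != int(b):             # wrong parity: nudge by 0.001
            d += 1
        out[i] = d / 1000
    return out

def extract(coords: list[float], n_bits: int) -> str:
    """Read the parity of each coordinate's 3rd decimal digit."""
    return "".join(str(int(round(x * 1000)) % 2) for x in coords[:n_bits])

if __name__ == "__main__":
    backbone = [12.345, 7.891, 3.1415, 9.876]  # hypothetical CA coordinates
    tag = "1011"                               # lab/model identifier bits
    marked = embed(backbone, tag)
    assert extract(marked, len(tag)) == tag
    print(marked)
```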
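Unlearning has several formulations; one common generic recipe, sketched below in PyTorch, raises the loss on a "forget" set (e.g., toxin data) while holding it down on a "retain" set. The stand-in linear model, random tensors, and the 0.1 trade-off weight are placeholder assumptions, not the paper's specific method.

```python
# Minimal unlearning sketch: push a model's loss UP on a "forget" set
# while keeping it low on a "retain" set. The small weight on the
# ascent term keeps the objective from diverging in this toy setup.
import torch
import torch.nn as nn

model = nn.Linear(16, 2)                 # stand-in for a sequence model
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

forget_x, forget_y = torch.randn(8, 16), torch.randint(0, 2, (8,))
retain_x, retain_y = torch.randn(32, 16), torch.randint(0, 2, (32,))

for step in range(100):
    opt.zero_grad()
    # descend on the retain set, ascend (negative sign) on the forget set
    loss = loss_fn(model(retain_x), retain_y) \
         - 0.1 * loss_fn(model(forget_x), forget_y)
    loss.backward()
    opt.step()
```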
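True antijailbreaking fine-tunes the model itself on adversarial prompts paired with refusals; as a minimal illustration of the reject-malicious-prompts behavior, the gate below screens prompts before generation. The red-flag phrase list and refusal text are invented for the example, and a fixed list like this would be trivial to evade in practice.

```python
# Sketch of the reject-malicious-prompts idea as a pre-generation gate.
# A deployed system would use a trained classifier robust to paraphrase,
# not a fixed phrase list.

REFUSAL = "This request may involve hazardous biology and was declined."

# Hypothetical red-flag phrases, for illustration only.
RED_FLAGS = ("enhance transmissibility", "evade screening", "toxin variant")

def guarded_generate(prompt: str, generate) -> str:
    lowered = prompt.lower()
    if any(flag in lowered for flag in RED_FLAGS):
        return REFUSAL
    return generate(prompt)

if __name__ == "__main__":
    fake_model = lambda p: f"<design for: {p}>"
    print(guarded_generate("a stable fluorescent protein", fake_model))
    print(guarded_generate("a toxin variant that can evade screening", fake_model))
```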
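For the external-safeguard idea, a monitoring agent can sit outside the design model, screen each output, and alert a safety officer on a hit. The sketch below assumes a `screen` callable like the toy matcher shown earlier; the alert channel and the withhold-on-hit policy are placeholders.

```python
# Sketch of an external monitoring agent: it wraps the design model,
# screens every generated sequence, and alerts a safety officer on a
# hit. In a real deployment the alert might page an on-call reviewer.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("bio-monitor")

def alert_safety_officer(user: str, hits: list[str]) -> None:
    log.warning("ALERT user=%s matched=%s", user, hits)

def monitored_design(user: str, prompt: str, generate, screen) -> str | None:
    seq = generate(prompt)
    hits = screen(seq)
    if hits:
        alert_safety_officer(user, hits)
        return None          # withhold the flagged design
    log.info("released design to %s", user)
    return seq

if __name__ == "__main__":
    fake_generate = lambda p: "ATGGCTAAGCTTGGCACTGAA"
    fake_screen = lambda s: ["toxin_x"]   # pretend the matcher fired
    monitored_design("user42", "design a binder", fake_generate, fake_screen)
```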
Implementing these safeguards presents challenges, and some experts suggest alternative approaches. James Zou, a computational biologist at Stanford University, proposes focusing regulations on service facilities or organizations that can turn AI-generated protein designs into large-scale production. This approach would involve scrutinizing the origin and intended use of new molecules at the production level [2].
Recent AI models, such as RFdiffusion and ProGen, have demonstrated the ability to custom design proteins in a matter of seconds. While these advancements hold great promise for basic science and medicine, their power and accessibility raise concerns. As Mengdi Wang notes, "AI has become so easy and accessible. Someone doesn't have to have a Ph.D. to be able to generate a toxic compound or a virus sequence" [2].
Thomas Inglesby, director of the Johns Hopkins University Center for Health Security, emphasizes the importance of developing a framework to harness the potential of AI technology while preventing serious risks [2]. Kevin Esvelt from MIT Media Lab notes that while concerns remain theoretical, proactive measures are necessary [2].
As the field of AI in biotechnology continues to evolve, the focus on safeguards and responsible development is gaining traction. Researchers and policymakers alike recognize the need to balance innovation with security, ensuring that the transformative potential of AI in biotechnology can be realized without compromising public safety.