8 Sources
[1]
Biothreat hunters catch dangerous DNA before it gets made
DNA-synthesis firms routinely use biosecurity-screening software to ensure that they don't inadvertently create dangerous sequences. But a paper published in Science on 2 October describes a potential vulnerability in this workflow. It details how protein-design strategies aided by artificial intelligence (AI) could circumvent the screening software that many DNA-synthesis firms use to ensure that they avoid unintentionally producing sequences encoding harmful proteins or pathogens.

The researchers used an approach from the cybersecurity world: 'red teaming', in which one team attempts to break through another's defences (with their knowledge). They found that some screening tools were unprepared to catch AI-generated protein sequences that recreate the structure, but not the sequence, of known biothreats, says Eric Horvitz, chief scientific officer at Microsoft in Redmond, Washington. This is a type of zero-day vulnerability -- one that, in the cybersecurity world, blindsides software developers and users. "The diversified proteins essentially flew through the screening techniques" that were tested, Horvitz says. After the developers patched their software to address the new threat, the tools performed much better, flagging all but about 3% of malicious sequences in a larger second attempt.

The impetus for the study is researchers' rapidly growing ability to create new, custom proteins. Armed with AI-powered tools such as RFdiffusion and ProteinMPNN, researchers can now invent proteins to attack tumours, defend against viruses and break down pollutants. David Baker, a biochemist at the University of Washington in Seattle, whose team developed both RFdiffusion and ProteinMPNN, won a share of the 2024 Nobel Prize in Chemistry for his pioneering work in this area.

But biodesign tools could have other uses -- not all of them noble. Someone might intentionally or accidentally create a toxic compound or pathogen, putting many people at risk. The Microsoft-led project aims to prevent that possibility, focusing on a key checkpoint: synthesizing the DNA strands that encode these proteins. Researchers identified gaps in the screening of risky sequences and helped DNA-synthesis providers to close them. But as AI for protein design advances, defences, too, must evolve.

Horvitz has long recognized that AI, like all technologies, has both good and bad applications. In 2023, motivated by concerns about potential misuse of AI-based protein design, he, Baker and others organized a workshop at the University of Washington to hammer out responsible practices. Horvitz asked Bruce Wittmann, an applied scientist at Microsoft, to create a concrete example of the threat.

Proteins, built of amino acids, are the workhorses of the cell. They are first written in the language of DNA -- a string of nucleotides, denoted by A, C, G and T, whose order defines the sequence of amino acids. To create a protein, researchers specify the underlying nucleotide sequence, which they send to a DNA-synthesis company. The provider uses biosecurity screening software (BSS) to look for similarities between the new sequence and known sequences of concern -- genes that encode, say, a toxin. If nothing is flagged, the provider creates the requested DNA and mails it back.

Horvitz and Wittmann wanted to see how porous such screening was. So, Wittmann adapted open-source AI protein-design software to alter the amino-acid sequence of a protein of concern while retaining its folded, 3D shape -- and, potentially, its function.
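To make the screening step described above concrete, here is a minimal, illustrative sketch of a similarity screen of the general kind the article describes: it translates an ordered DNA sequence in all six reading frames and compares each translation against a database of protein sequences of concern. Everything here is a toy stand-in, not any vendor's actual tool -- the demo peptide (a benign amyloid-beta fragment), the hard-coded "database" and the 60% similarity threshold are all invented for illustration.

```python
import difflib

# Standard genetic code: DNA codon -> one-letter amino acid ('*' = stop).
CODON_TABLE = {
    'TTT':'F','TTC':'F','TTA':'L','TTG':'L','CTT':'L','CTC':'L','CTA':'L','CTG':'L',
    'ATT':'I','ATC':'I','ATA':'I','ATG':'M','GTT':'V','GTC':'V','GTA':'V','GTG':'V',
    'TCT':'S','TCC':'S','TCA':'S','TCG':'S','CCT':'P','CCC':'P','CCA':'P','CCG':'P',
    'ACT':'T','ACC':'T','ACA':'T','ACG':'T','GCT':'A','GCC':'A','GCA':'A','GCG':'A',
    'TAT':'Y','TAC':'Y','TAA':'*','TAG':'*','CAT':'H','CAC':'H','CAA':'Q','CAG':'Q',
    'AAT':'N','AAC':'N','AAA':'K','AAG':'K','GAT':'D','GAC':'D','GAA':'E','GAG':'E',
    'TGT':'C','TGC':'C','TGA':'*','TGG':'W','CGT':'R','CGC':'R','CGA':'R','CGG':'R',
    'AGT':'S','AGC':'S','AGA':'R','AGG':'R','GGT':'G','GGC':'G','GGA':'G','GGG':'G',
}
COMPLEMENT = str.maketrans('ACGT', 'TGCA')

def translate(dna):
    """Translate a DNA string codon-by-codon into a protein string."""
    return ''.join(CODON_TABLE.get(dna[i:i + 3], 'X')
                   for i in range(0, len(dna) - 2, 3))

def six_frames(dna):
    """Yield the protein translation of all six reading frames."""
    reverse_complement = dna.translate(COMPLEMENT)[::-1]
    for strand in (dna, reverse_complement):
        for offset in (0, 1, 2):
            yield translate(strand[offset:])

# Invented stand-in for a curated database of sequences of concern.
SEQUENCES_OF_CONCERN = ['MKLVFFAEDVGSNKGAIIGLMVGGVV']

def screen_order(dna, threshold=0.6):
    """Flag an order if any reading frame resembles a sequence of concern."""
    return any(
        difflib.SequenceMatcher(None, frame, hit).ratio() >= threshold
        for frame in six_frames(dna)
        for hit in SEQUENCES_OF_CONCERN
    )

# This order encodes the database entry exactly, so it is flagged.
order = ('ATGAAACTGGTGTTTTTTGCCGAAGATGTGGGC'
         'AGCAACAAAGGCGCGATTATTGGCCTGATGGTGGGCGGCGTGGTG')
print(screen_order(order))  # True
```

Real screens use far more sensitive homology search against curated databases than this simple ratio, which is precisely why structure-preserving but sequence-divergent AI designs posed a problem.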
It's the protein-design equivalent of paraphrasing a sentence, Wittmann says. The AI designed thousands of variants. Horvitz and Wittmann then reached out to two synthesis providers and asked them to use their BSS tools to test the sequences. One was Twist Bioscience in San Francisco, California, which used ThreatSeq from Battelle in Columbus, Ohio; the other was Integrated DNA Technologies (IDT) in Coralville, Iowa, which used FAST-NA Scanner from RTX BBN Technologies in Cambridge, Massachusetts. The result: the tools were porous, indeed. Jacob Beal, a computer scientist at BBN, recalls a "moment of panic" looking at one of the tools: "Oh my goodness, this just goes straight through everything, like butter."

Because the findings could have been dangerous in the wrong hands, the team began by sharing them with a small circle of people, including select workshop attendees; US government biosecurity officials; and James Diggans, the chair of the International Gene Synthesis Consortium (IGSC), a coalition of synthesis providers, formed in 2009 to create and share standards for screening both sequences and customers. "The results of the framing study were not a huge surprise," says Nicole Wheeler, a microbiologist then at the University of Birmingham, UK, and a co-author of the report. But "the study gave a clear indication of the scale of the problem today and data we could use to start testing and improving our screening tools".

Horvitz and Wittmann then conducted a larger study. They started with 72 proteins of concern -- both toxins and viral proteins -- and generated tens of thousands of variants of the amino-acid sequences. As before, they ran the design software in two modes, one of which kept amino acids untouched at key locations. This mode increased the chance not only that the proteins would retain the functionality of the unaltered template proteins that they were emulating, but also that they'd be flagged by the BSS. Then, they reverse-translated the amino-acid sequences into DNA, which they sent to four BSS providers who were in on the exercise.

The team also scored the variant proteins for predicted risk. Proteins that exceeded a threshold on two measures were deemed dangerous. First, the proteins needed to be structurally similar (on the basis of computer simulations) to the template proteins. Second, the software needed to have high confidence in the predicted structure, indicating that the protein was likely to fold up properly and be functional.

The researchers never actually made the toxic proteins, but in work posted to the preprint server bioRxiv in May, they synthesized some benign ones generated through their design method. They found that their metrics accurately predicted when a protein variant would maintain functionality, suggesting that at least some of the dangerous protein variants would have been functional, too. (But perhaps not many; most of the synthesized variants of benign proteins were inactive.) Overall, of the proteins that Horvitz and Wittmann deemed most dangerous, the patched BSS caught 97%, while keeping the false-positive rate under 2.5%.

Diggans, who is also the head of biosecurity at Twist, says that the BSS tools that they use were patched in different ways during the Science study. In one case, developers used Wittmann's sequences to fine-tune a machine-learning model; in others, they lowered the statistical-significance threshold for similarity to cast "a wider net", now that they knew the degree to which AI could change sequences.
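The two-measure risk triage described a few paragraphs above lends itself to a compact sketch. This is an illustrative reconstruction, not the authors' withheld code: the variant scores, the score definitions (a TM-score-like structural similarity and a pLDDT-like fold confidence) and the 0.7/0.8 cut-offs are hypothetical placeholders.

```python
from dataclasses import dataclass

@dataclass
class Variant:
    name: str
    structural_similarity: float  # e.g. a TM-score-like similarity to the template, 0-1
    fold_confidence: float        # e.g. a pLDDT-like structure confidence, scaled to 0-1

def deemed_dangerous(variant, sim_threshold=0.7, conf_threshold=0.8):
    """A variant is treated as dangerous only if BOTH measures clear their
    thresholds: it must resemble the template protein's structure AND be
    predicted to fold reliably enough to be functional."""
    return (variant.structural_similarity >= sim_threshold
            and variant.fold_confidence >= conf_threshold)

variants = [
    Variant('v001', structural_similarity=0.92, fold_confidence=0.88),  # both high
    Variant('v002', structural_similarity=0.95, fold_confidence=0.41),  # unlikely to fold
    Variant('v003', structural_similarity=0.33, fold_confidence=0.90),  # different shape
]
print([v.name for v in variants if deemed_dangerous(v)])  # ['v001']
```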
Beal, at BBN, says that FAST-NA Scanner works differently. Before the red-teaming exercise, it looked for exact matches between short substrings of nucleotides and the sequences of genes encoding proteins of concern. After being patched, it scans for exact matches only at locations known to be important to a protein's functionality, allowing for harmless variation elsewhere. The company uses machine learning to generate diverse new sequences of concern, then identifies the important parts of their structures on the basis of similarities between those sequences. Some of the providers have since made further patches on the basis of this work.

Horvitz and Wittmann teamed up with co-authors, including Wheeler, Diggans and Beal, to write up and share the results. Some colleagues felt the authors should provide every detail, whereas others said they should share nothing. "Our first reaction was, 'Anybody in the field would know how to do this kind of thing, wouldn't they?'" Horvitz says. "And even senior folks said, 'Well, that's not exactly true.' And so that went back and forth."

In the end, they posted a version of their white paper on the preprint server bioRxiv in December, with key details removed. It doesn't describe the proteins they modified (the Science version of the paper lists them), the design tools they used or how they used them. It also omits a section on common BSS failures and glosses over obfuscation techniques -- ways to design sequences that won't raise flags but that produce DNA strands that can easily be modified after synthesis to become more dangerous.

For the published version, the authors worked with journal editors to create a tiered system for data access. Parties must apply through the International Biosecurity and Biosafety Initiative for Science (IBBIS) in Geneva. (The highest-risk data tier includes the study's code.) "I'm really excited about this," Tessa Alexanian, a technical lead at IBBIS, said in a press briefing on 30 September. "This managed-access programme is an experiment, and we're very eager to evolve our approach."

"There are two communities, which each have very well-grounded principles that both apply here and are in opposition to one another," Beal says. In the cybersecurity world, people often share vulnerabilities, so they can be patched widely; in biosecurity, threats are potentially deadly and difficult to counter, so people prefer to keep them under wraps. "Now we're in a place where these two worlds overlap."

Even if screening tools work perfectly, bad actors could still design and build dangerous proteins. There are no laws requiring DNA-synthesis providers to screen orders, for instance. "That's a scary situation," says Jaime Yassif, who runs the global biosecurity programme at the Nuclear Threat Initiative (NTI), a non-profit organization in Washington DC. "Not only is screening not required, but the cost of DNA synthesis has been plummeting exponentially for years, and the cost of biosecurity has been basically fixed, so the profit margins on DNA synthesis are pretty thin." To maximize profit, companies could skimp on screening.

In 2020, the NTI and the World Economic Forum organized a working group to make DNA-synthesis screening more accessible to synthesis firms. The NTI began building a BSS tool called the Common Mechanism, and last year it spun off IBBIS, which now manages the tool. (Wheeler was the technical lead who developed it.)
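Beal's before-and-after description of FAST-NA Scanner earlier in this section can be sketched roughly as follows. All specifics are invented: real FAST-NA signatures are proprietary, and the 12-base k-mer length, the toy gene and the "critical window" coordinates are placeholders. The sketch also omits how the production system pools signatures across machine-generated variant sequences to cover diversity within the critical regions.

```python
K = 12  # k-mer (short-substring) length -- an invented parameter

def kmers(seq, k=K):
    """All length-k substrings of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

# A made-up gene of concern, with two made-up functionally critical windows.
GENE = 'ATGGCTAGCAAAGGCGAA' 'GAACTGTTTACCGGCGTG' 'GTGCCGATTCTGGTGGAACTG'
CRITICAL_WINDOWS = [(0, 18), (36, 57)]  # regions essential to function

# Before the patch: signatures drawn from the whole gene. Paraphrasing the
# DNA everywhere (e.g. via synonymous codons) can break every signature.
old_signatures = kmers(GENE)

# After the patch: signatures drawn only from the critical windows, so
# harmless variation elsewhere no longer helps an evader.
new_signatures = set()
for start, end in CRITICAL_WINDOWS:
    new_signatures |= kmers(GENE[start:end])

def flagged(order, signatures):
    """Exact-match screen: flag if the order shares any k-mer with a signature."""
    return not kmers(order).isdisjoint(signatures)

# An order that keeps the critical windows intact but rewrites the middle:
order = GENE[:18] + 'GAGTTATTCACAGGTGTC' + GENE[36:]
print(flagged(order, new_signatures))  # True: the critical windows still match
```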
The Common Mechanism is free, open-source software that includes a database of concerning sequences and an algorithm that detects similarities between those sequences and submitted ones. Users can integrate more databases and analysis modules as they become available.

Still, some scientists think that regulations are necessary. In 2010, the US Department of Health and Human Services issued guidelines recommending that providers of synthetic double-stranded DNAs screen both sequences and customers, but screening was voluntary. In 2023, former US president Joe Biden issued an executive order on AI safety that, among other things, required researchers who receive federal funding and order synthetic DNA to get it only from providers that screen the orders. The aim wasn't to stop federally funded researchers from becoming terrorists, Yassif says; it was to add another intervention point to safeguard well-intentioned research that might result in a lab leak or lead to published work that informs terrorists.

In any case, President Donald Trump rescinded the order when he took office in January. (An executive order issued on 5 May, which halts 'gain of function' pathogen research, also directs the Director of the Office of Science and Technology Policy to "revise or replace" the 2024 Framework for Nucleic Acid Synthesis Screening -- a product of the 2023 executive order -- to ensure that it "effectively encourages providers of synthetic nucleic acid sequences to implement comprehensive, scalable, and verifiable synthetic nucleic acid procurement screening mechanisms to minimize the risk of misuse".)

Beyond DNA sequences themselves, Yassif says that regulators should look at protecting protein-design software and other biological AI models against misuse. "It's so important to get this right, and DNA-synthesis screening can't be the single point of failure." In 2023, the NTI released a report on AI in the life sciences, based on interviews with 30 specialists. It floated several ideas, including having protein-design AI models screen user requests, restricting the training data, requiring the evaluation of model safety and controlling model access. A correspondence in Nature Biotechnology earlier this year recommended similar safeguards.

But regulations to protect biological AI models against misuse could be difficult to iron out, Yassif says, because people disagree on risks and rewards. Participants at the University of Washington workshop had a hard time agreeing on a community statement, notes Ian Haydon, the head of AI policy at the university's Institute for Protein Design (which Baker directs). "It's a document that's signed by scores of professors who famously can be a bit stubborn," Haydon says. "It's a bit like herding cats." As a result, its commitments are vague. The biggest area of contention, Haydon says, involved open-source software. "We had people unwilling to sign the language that we arrived at for opposing reasons," he says: some thought it was too supportive of openness, and others thought it was not supportive enough.

The risks of sharing design tools are obvious. Sharing screening tools is also risky, because people who want to synthesize dangerous sequences might work out where the blind spots are and potentially exploit them. The databases in IBBIS's Common Mechanism include well-known proteins of concern, but not some of the more obscure ones. One idea is to send a list of those proteins to approved recipients, but "invariably things will leak", Yassif says.
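On the technical side, the Common Mechanism's plug-in design described at the start of this section -- a core similarity screen plus databases and analysis modules that users can bolt on -- suggests an architecture along the lines sketched below. The module interface, the names and the exact-containment matching are a simplification, not IBBIS's actual API; the real tool uses similarity search against curated databases.

```python
from typing import Callable, List

# A screening module takes a DNA order and returns human-readable flags
# (an empty list means that module found nothing of concern).
ScreenModule = Callable[[str], List[str]]

def make_database_screen(name: str, database: List[str]) -> ScreenModule:
    """Build a module that flags exact containment of any database entry.
    (A stand-in: the real tool detects similarity, not just containment.)"""
    def screen(order: str) -> List[str]:
        return [f'{name}: hit on entry {i}'
                for i, seq in enumerate(database) if seq in order]
    return screen

class ScreeningPipeline:
    """Run an order through every registered module and pool the flags."""
    def __init__(self) -> None:
        self.modules: List[ScreenModule] = []

    def register(self, module: ScreenModule) -> None:
        # New databases and analysis modules can be integrated as they
        # become available, without touching the core pipeline.
        self.modules.append(module)

    def screen_order(self, order: str) -> List[str]:
        flags: List[str] = []
        for module in self.modules:
            flags.extend(module(order))
        return flags

pipeline = ScreeningPipeline()
pipeline.register(make_database_screen('core-db', ['ATGGCTAGCAAAGGC']))
print(pipeline.screen_order('AAATGGCTAGCAAAGGCTTT'))  # ['core-db: hit on entry 0']
```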
"The challenge this community is facing is: how do we deal with the extra threats beyond the baseline that's publicly known in a way that doesn't create an info hazard?" she says. "That's an unsolved problem." Even if all synthesis providers did screening, there's a potential workaround: would-be bioterrorists can buy a synthesis device, although benchtop versions are error-prone and make relatively short segments of DNA (called oligonucleotides) that need to be pieced together. "Oligo synthesis is a bit of an art," Diggans says. But state-of-the-art technology is changing rapidly. In 2023, the NTI issued a report warning that benchtop synthesizers might be able to build complete viral genomes in as little as a decade. The report recommended regulation. One idea is to require benchtop machines to implement screening internally or over the cloud. But "if there's hardware and software, it can be hacked", says James Demmitt, chief executive of Biolytic Lab Performance, a biotech company in Fremont, California, that makes DNA-synthesis hardware. That said, defences don't need to be perfect to be effective. "I'm not aware of any solution that is 100% bulletproof," Yassif says. The aim is to "make it harder to exploit this technology to cause harm, shrink the number of people that have the capacity to actually exploit this and do something really dangerous, increase the odds of getting caught, increase the odds of failure. That's what success looks like." According to Demmitt, "biosecurity screening does a good job stopping accidental or casual misuse. By forcing folks to go through bigger, pricier hoops, it prevents many would-be dabblers from drifting into dangerous territory." And there are more technical hurdles facing bad actors. Rarely is DNA itself a danger; people need to engineer sequences into cells or viruses to manufacture toxins or produce self-replicating pathogens. That requires biological know-how and equipment beyond many people's means. Even for specialists, there's a huge gap between designing a protein or virus and knowing its effect in people. That's why pandemic prediction is so difficult. Scientists find viruses in the wild that seem dangerous, but few infect people, fewer spread between them and even fewer make them sick. Whatever the chances of someone designing something deadly, specialists say we should remain vigilant, just as in cybersecurity -- but the cat-and-mouse games are different in one regard. "In the cyber world, you have a lot of people looking to exploit these systems," Diggans says, "from the 'script kiddie' teenagers looking to do it for fun, all the way to the multinational crime syndicates." He continues: "It is vanishingly rare to have anyone who wants to exploit biotechnology for nefarious purposes. That is both good -- because we don't want people exploiting biotech -- but it is also hard, because it gives us very few signals against which to build defences." In March, the US National Academies of Sciences, Engineering, and Medicine recommended that more research into methodologies for nucleic-acid-synthesis screening is needed. As that happens, the field can continue to take lessons from cybersecurity specialists, who have been going toe-to-toe with bad actors for decades. "What stands out to me" in the new paper, Haydon says, "is the way they wove in practices and precedents from cybersecurity", such as letting providers build patches before publicizing their findings. As the field develops, providers will need to keep upping their game. 
As a "Microsoft person", Horvitz is reminded of the Windows update model. "This will never be ending," he says.
[2]
Made-to-order bioweapon? AI-designed toxins slip through safety checks used by companies selling genes
Microsoft bioengineer Bruce Wittmann normally uses artificial intelligence (AI) to design proteins that could help fight disease or grow food. But last year, he used AI tools like a would-be bioterrorist: creating digital blueprints for proteins that could mimic deadly poisons and toxins such as ricin, botulinum, and Shiga. Wittmann and his Microsoft colleagues wanted to know what would happen if they ordered the DNA sequences that code for these proteins from companies that synthesize nucleic acids. Borrowing a military term, the researchers called it a "red team" exercise, looking for weaknesses in biosecurity practices in the protein engineering pipeline.

The effort grew into a collaboration with many biosecurity experts, and according to their new paper, published today in Science, one key guardrail failed. DNA vendors typically use screening software to flag sequences that might be used to cause harm. But the researchers report that this software failed to catch many of their AI-designed genes -- one tool missed more than 75% of the potential toxins. Scientists involved in the exercise kept these vulnerabilities secret until the screening software was upgraded -- but even now, it's not foolproof, they warn.

Jaime Yassif, vice president for global biological policy and programs at the Nuclear Threat Initiative, says the study is a model for the future. "It's just the beginning," she says. "AI capabilities are going to evolve and be able to design more and more complex living systems, and our DNA synthesis screening capabilities are going to have to continue to evolve to keep up with that."

In their experiment, the Microsoft researchers selected 72 different proteins that are subject to legal controls, such as ricin, a plant toxin already used in several terrorist attacks. Traces of ricin were detected on letters sent to top U.S. officials in 2013 and 2018. Using specialized AI protein design tools, Wittmann came up with more than 70,000 DNA sequences that would generate variant forms of these proteins. Computer models suggested that at least some of these alternatives would also be toxic. "The knowledge that I had access to, and stewardship over these proteins was, on a human level, a notable burden," Wittmann says.

Wittmann didn't actually create the proteins or test them for toxicity; that would have required ordering the genes from DNA vendors and inserting them into bacteria or yeast to produce the proteins of interest. And doing so could be considered a violation of the Biological Weapons Convention, which bans development or production of such weapons. Instead, he asked four suppliers of biosecurity screening systems used by DNA synthesis labs to run these sequences through their software.

The tools failed to flag many of these sequences as problematic, and their performance varied widely. One tool flagged just 23% of the sequences; another flagged 70%, and its developer chose not to make any changes to improve the software. (A sensitive screen that catches every potentially hazardous sequence would likely also flag innocuous ones, creating headaches and raising costs.) The other software suppliers rolled out upgrades. The whole process took "a few months," Wittmann says. "We were all very quiet about it," Yassif says. "It was a good example of the community being very responsible."
After the upgrades, the systems flagged 72% of Wittmann's AI-generated sequences, on average, including 97% of the sequences that models rated most likely to generate toxins. The study's authors, with the consent of Science, are withholding some details about the AI-generated DNA sequences and the industry's software screening fixes. There will be "managed access" to this information, says Tessa Alexanian of the International Biosecurity and Biosafety Initiative for Science (IBBIS), a nonprofit group that developed one of the four screening software systems. Its experts will review requests for access to the information.

Screening software isn't the only biosecurity guardrail that needs strengthening, Yassif says. Some DNA vendors, accounting for perhaps 20% of the market, don't screen their orders at all, she notes. She also argues that additional safeguards should be built into AI protein design tools themselves.

There's little indication, so far, that rogue actors are trying to acquire illicit synthetic DNA. "I've been doing this for 10 years, and the number of times we've had to refer an issue to law enforcement -- I have more fingers on one hand," says James Diggans, vice president of policy and biosecurity at Twist Bioscience, a DNA synthesis company. "The real number of people who are trying to create misuse may be very close to zero."

Drew Endy, a synthetic biology researcher at Stanford University, says improving the screening software is fine, but it consumes too much attention compared with a much bigger biosecurity risk: nations' possible operation of clandestine bioweapons programs. "I wish people would wake up a little bit," he says. "Today, nations are accusing one another of having offensive bioweapons programs. We accuse Russia and North Korea. China and Russia accuse the United States. This is the historical pattern that happened 100 years ago that led to actual bioweapons programs. We have to de-escalate this."
[3]
Microsoft says AI can create "zero day" threats in biology
"The patch is incomplete, and the state of the art is changing. But this isn't a one-and-done thing. It's the start of even more testing," says Adam Clore, director of technology R&D at Integrated DNA Technologies, a large manufacturer of DNA, who is a coauthor on the Microsoft report. "We're in something of an arms race." To make sure nobody misuses the research, the researchers say, they're not disclosing some of their code and didn't reveal what toxic proteins they asked the AI to redesign. However, some dangerous proteins are well known, like ricin -- a poison found in castor beans -- and the infectious prions that are the cause of mad-cow disease. "This finding, combined with rapid advances in AI-enabled biological modeling, demonstrates the clear and urgent need for enhanced nucleic acid synthesis screening procedures coupled with a reliable enforcement and verification mechanism," says Dean Ball, a fellow at the Foundation for American Innovation, a think tank in San Francisco. Ball notes that the US government already considers screening of DNA orders a key line of security. Last May, in an executive order on biological research safety, President Trump called for an overall revamp of that system, although so far the White House hasn't released new recommendations. Others doubt that commercial DNA synthesis is the best point of defense against bad actors. Michael Cohen, an AI-safety researcher at the University of California, Berkeley, believes there will always be ways to disguise sequences and that Microsoft could have made its test harder. "The challenge appears weak, and their patched tools fail a lot," says Cohen. "There seems to be an unwillingness to admit that sometime soon, we're going to have to retreat from this supposed choke point, so we should start looking around for ground that we can actually hold." Cohen says biosecurity should probably be built into the AI systems themselves -- either directly or via controls over what information they give. But Clore says monitoring gene synthesis is still a practical approach to detecting biothreats, since the manufacture of DNA in the US is dominated by a few companies that work closely with the government. By contrast, the technology used to build and train AI models is more widespread. "You can't put that genie back in the bottle," says Clore. "If you have the resources to try to trick us into making a DNA sequence, you can probably train a large language model."
[4]
AI-designed proteins test biosecurity safeguards
New fixes to monitoring software boost its ability to catch AI-altered toxic proteins

New patches to biosecurity screening software can make it harder to produce potentially harmful proteins using artificial intelligence. Around the world, this software monitors processes to artificially make proteins, ensuring that people with bad intentions aren't producing dangerous proteins, such as toxins. Making slight tweaks with AI to known toxins or viral proteins can bypass the safeguards, researchers report in the Oct. 2 Science. But reinforcing gaps in screening can boost the programs' ability to flag risky AI-designed proteins.

"AI advances are fueling breakthroughs in biology and medicine," Eric Horvitz, chief scientific officer at Microsoft in Redmond, Wash., said at a Sept. 30 news briefing. "Yet with new power comes responsibility for vigilance and thoughtful risk management."

Proteins are the workhorses of biology. The molecules perform cellular tasks such as assembling cells and transporting cargo throughout the body. With AI, researchers are unlocking ways to fine-tune existing proteins to carry out specific tasks, to design new proteins or to generate new organisms. AI can generate digital blueprints for proteins by determining the amino acids needed to make them, but the technology can't construct physical proteins from thin air. DNA manufacturers string together the appropriate genetic letters and ship the synthetic genes to research labs. Computer programs screen the orders to make sure that the genes don't make hazardous proteins.

Horvitz and colleagues simulated tests for biosecurity screening models to find weaknesses that could let AI-generated proteins slip by filters. The team generated roughly 76,000 blueprints for 72 harmful proteins, including ricin, botulinum neurotoxin and ones that help viruses infect people. While the biosecurity screens flagged the DNA for nearly all proteins in their original forms, many AI-adjusted versions snuck through. Software patches helped, even picking up genes after they'd been broken down into fragments. The models failed to flag about 3 percent of variants.

The work was done entirely on computers, meaning that the team did not make physical proteins in the lab, and it's unclear if the AI-generated variants retained their function. In reality, biosecurity screens flagging orders for concerning proteins "is an incredibly rare thing," James Diggans, vice president of policy and biosecurity at Twist Bioscience, a DNA synthesis company based in San Francisco, said at the news briefing. While cybersecurity threats happen all the time, "close to zero" people have tried to produce malicious proteins, Diggans said. "These systems are an important bulwark against [threats], but we should all find comfort in the fact that this is not a common scenario."
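The step where a DNA manufacturer "strings together the appropriate genetic letters", described above, is reverse translation: choosing a codon for each amino acid in the designed protein. A minimal sketch follows, with an abridged, invented codon map covering only the demo peptide; real tools typically optimize codon choice for the expression organism, and because most amino acids have several synonymous codons, one protein has a vast number of possible DNA encodings.

```python
# Abridged amino-acid -> codon map (one codon per residue, chosen arbitrarily;
# the full genetic code offers up to six synonymous codons per amino acid).
PREFERRED_CODON = {
    'M': 'ATG', 'K': 'AAA', 'L': 'CTG', 'V': 'GTG', 'F': 'TTT',
    'A': 'GCC', 'E': 'GAA', 'D': 'GAT', 'G': 'GGC', 'S': 'AGC',
}

def reverse_translate(protein):
    """Turn a designed amino-acid sequence into one possible DNA encoding."""
    return ''.join(PREFERRED_CODON[aa] for aa in protein)

peptide = 'MKLVFF'  # a harmless demo peptide, not a protein of concern
print(reverse_translate(peptide))  # ATGAAACTGGTGTTTTTT
```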
[5]
AI bioweapon risk laid bare by protein security screening flaw
Bioterrorism threats are rising because of advances in artificial intelligence and synthetic biology, scientists have warned, after researchers found a "striking vulnerability" in software that guards access to genetic material for making deadly proteins. An international team rolled out patches to close the loophole but said it was the first "zero day" of AI and biosecurity -- a term used in cyber hacking to describe a blind spot unknown to the software developer.

The news highlights the growing urgency to deal with potential threats unleashed by the use of AI as it helps deepen and accelerate the understanding of living systems and how to change them. Experts are seeking to prevent the creation of bioweapons and synthetic organisms that could threaten life on Earth. "AI-powered protein design is one of the most exciting frontiers of science [and] we're already seeing advances in medicine and public health," said Eric Horvitz, Microsoft's chief scientific officer and senior author of the latest research, published in Science on Thursday. "Yet, like many powerful technologies, these same tools can also be misused."

The researchers behind the Science paper carried out a test on biosecurity software used to screen customer orders by companies that sell synthetic nucleic acids. These are deployed by the clients to build DNA that instructs the manufacture of desired proteins, the building blocks of life. The biosecurity screening is designed to block the sale of materials that could be used to make harmful proteins. The researchers used open-source AI protein design software to generate computational renderings of more than 75,000 variants of dangerous proteins with structural tweaks -- a kind of biochemical disguise.

While the screening tools worked well for flagging naturally occurring proteins of concern, they did not spot some of the altered ones, the scientists found. Even after all but one of the companies applied the software patches, about 3 per cent of the protein variants most likely to retain hazardous functionality still passed the monitoring undetected. The scientists worked with organisations including the International Gene Synthesis Consortium and US authorities to address the problem.

The research comes after some leading scientists have called for a systematic assessment of biosecurity screening software and improved global governance of AI-boosted protein synthesis. High-profile biologists are also pushing for an international agreement to prevent the creation of potentially deadly manufactured "mirror" microbes, should it become technologically possible to make them.

Horvitz said there had been an "intensity of reflection, study and methodology" about the prospect that large language models could be used to further "malevolent actions with biology". Microsoft had incorporated such possibilities in its product safety reviews and had a "growing set of practices" about "red-teaming", or searching for potential vulnerabilities.

The Science study highlighted a "pressing issue in protein engineering and biosafety", said Francesco Aprile, associate professor in biological chemistry at Imperial College London. "By introducing targeted improvements to existing software, the authors significantly enhance detection and flagging," Aprile said. "This work provides a practical, timely safeguard that strengthens current DNA synthesis screening, and establishes a solid foundation for continued optimisation."
Those defences must be strengthened soon because of the fast pace of technical improvements in the field, said Natalio Krasnogor, professor of computing science and synthetic biology at Newcastle University. While the aspiring bioterrorists of today would need significant expertise, time and money to actually make harmful proteins, those barriers were likely to shrink. "We do need, as a society, to take this seriously now," Krasnogor said, "before additional advances in AI make the validation and experimental production of viable synthetic toxins much easier and cheaper to deploy than it is today."
[6]
How AI is making it easier to design new toxins without being detected
In October 2023, two scientists at Microsoft discovered a startling vulnerability in a safety net intended to prevent bad actors from using artificial intelligence tools to concoct hazardous proteins for warfare or terrorism. Those gaping security holes and how they were discovered were kept confidential until Thursday, when a report in the journal Science detailed how researchers generated thousands of AI-engineered versions of 72 toxins that escaped detection.

The research team, a group of leading industry scientists and biosecurity experts, designed a patch to fix this problem found in four different screening methods. But they warn that experts will have to keep searching for future breaches in this safety net. "This is like a Windows update model for the planet. We will continue to stay on it and send out patches as needed, and also define the research processes and best practices moving forward to stay ahead of the curve as best we can," Eric Horvitz, chief scientific officer of Microsoft and one of the leaders of the work, said at a press briefing. The team considered the incident the first AI and biosecurity "zero day" -- borrowing a term from the cyber-world for defense gaps software developers don't know about that leave them susceptible to an attack.

In recent years, researchers have been using AI to design bespoke proteins. The work has opened up vast potential across many fields of science. With these tools, scientists can create proteins to degrade plastic pollution, fight disease or make crops more resilient. But with possibility comes risk. That's why in October 2023, the Microsoft scientists embarked on an initial "adversarial" pilot study, in advance of a protein engineering biosecurity conference. The researchers never manufactured any of the proteins but created digital versions as part of the study.

Outside biosecurity experts applauded the study and the patch, but said that this is not an area where one single approach to biosecurity is sufficient. "What's happening with AI-related science is that the front edge of the technology is accelerating much faster than the back end ... in managing the risks," said David Relman, a microbiologist at Stanford University School of Medicine. "It's not just that we have a gap -- we have a rapidly widening gap, as we speak. Every minute we sit here talking about what we need to do about the things that were just released, we're already getting further behind."

How toxic ricin was a test case for detection

Proteins are the building blocks of life -- strings of amino acids that perform crucial functions in cells. They can build muscles, fend off pathogens and carry out chemical reactions necessary for life. Proteins can be spelled out as a sequence of letters, but they fold and twist into 3D shapes. Their form is key to their function. Predicting the structure of proteins was, for decades, a major challenge in science. The winners of last year's chemistry Nobel Prize shared the award for work that allowed scientists to predict protein structure and use AI to custom design proteins with different shapes and functions. Those functions can be positive -- biosensors to detect environmental toxins or used to diagnose a disease. They can also be harmful.

As their test case, Horvitz and his collaborator Bruce Wittmann used AI tools to initially "paraphrase" parts of the code of ricin, a deadly poison naturally found in castor beans.
In digital form, they created tens of thousands of AI-generated proteins that were spelled differently than the original, but would probably still be toxic. Translating these digital concepts into real-life proteins relies on DNA synthesis companies, which create strands of DNA that scientists can study in the lab and use to generate the protein of interest. The industry standard is for DNA synthesis companies to deploy biosecurity software designed to guard against nefarious activity by flagging proteins of concern -- for example, known toxins or components of pathogens.

When the researchers tested two major companies' biosecurity screening techniques, they found that "up to 100 percent" of the AI-generated ricin-like proteins evaded detection. Because the new proteins no longer looked like ricin, they were not flagged. Once they discovered this vulnerability, Horvitz and Wittmann brought in more collaborators and expanded their research to dozens of toxins and components of viruses. Again, they used AI techniques to "paraphrase" parts of their code while retaining their harmful structure, creating more than 70,000 synthetic versions. Screening programs were good at screening out the original toxins, but let thousands of the new versions slip by. Once the researchers discovered the scale of the problem, they devised a patch.

"This is a really valuable study, in that it shows there is a problem -- and that it shows AI is going to change the nature of the problem. But it's not an insoluble problem," said Tom Inglesby, director of the Johns Hopkins Center for Health Security at the Bloomberg School of Public Health, who was not involved in the work.

Evolving regulatory landscape

Under a federal framework that is being updated, researchers who receive federal funding are required to place orders with DNA synthesis companies that use biosecurity screening software. What worries many biosecurity experts is how the system still largely relies on voluntary compliance, and many gaps could allow people to make AI-designed toxins without anyone noticing. Not only can the screening software itself be tricked, as shown in the new study, but not all companies deploy the software. Another challenge is that not all synthesis occurs at large companies. Benchtop devices can be used to synthesize short strands of DNA, and these could be patched together to create proteins. And more fundamentally, while some proteins are toxic because they are similar to existing ones, people could also design entirely new kinds of toxins that could escape notice.

A different approach, biosecurity experts say, is to ensure AI software itself is imbued with safeguards before digital ideas are at the cusp of being brought into labs for research and experimentation. Tessa Alexanian, a biosecurity expert at the International Biosecurity and Biosafety Initiative for Science, a Swiss nonprofit, said that 180 AI developers signed a series of commitments last year, including a vow to support the development of new strategies to add biosecurity screening earlier in the process, before proteins are being made.

Some think the clearest path forward is a registry to deter bad actors. "The only surefire way to avoid problems is to log all DNA synthesis, so if there is a worrisome new virus or other biological agent, the sequence can be cross-referenced with the logged DNA database to see where it came from," David Baker, who shared the Nobel Prize in chemistry for his work on proteins, said in an email.
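Baker's proposed registry amounts to an append-only log of synthesized sequences that investigators could later search. Here is a toy sketch under that reading; the window length and hashing scheme are invented, and the sketch sidesteps hard real-world questions such as customer privacy, fragment reassembly and fuzzy matching of near-identical sequences.

```python
import hashlib
from collections import defaultdict

WINDOW = 30  # index-window length -- an invented parameter

def hashed_windows(seq):
    """SHA-256 hashes of every length-WINDOW substring of a sequence."""
    n = max(len(seq) - WINDOW + 1, 1)  # short sequences yield one window
    for i in range(n):
        yield hashlib.sha256(seq[i:i + WINDOW].encode()).hexdigest()

class SynthesisLog:
    """Append-only registry: index every synthesized order by hashed windows
    so that a worrisome sequence found later can be traced to past orders."""
    def __init__(self):
        self.index = defaultdict(set)  # window hash -> set of order IDs

    def log_order(self, order_id, seq):
        for h in hashed_windows(seq):
            self.index[h].add(order_id)

    def trace(self, worrisome_seq):
        """Return the IDs of logged orders sharing a window with this sequence."""
        hits = set()
        for h in hashed_windows(worrisome_seq):
            hits |= self.index.get(h, set())
        return hits

log = SynthesisLog()
log.log_order('order-0001', 'ATGGCTAGCAAAGGCGAAGAACTGTTTACCGGCGTGGTGCCG')
print(log.trace('GCTAGCAAAGGCGAAGAACTGTTTACCGGCGTG'))  # {'order-0001'}
```

Hashing the windows means the registry operator need not store raw customer sequences, one possible answer to the leak worries quoted elsewhere in this digest; whether that trade-off suffices in practice is an open policy question.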
[7]
AI designs for dangerous DNA can slip past biosecurity measures, study shows
Major biotech companies that churn out made-to-order DNA for scientists have protections in place to keep dangerous biological material out of the hands of would-be evil-doers. They screen their orders to catch anyone trying to buy, say, smallpox or anthrax genes. But now, a new study in the journal Science has demonstrated how AI could be used to easily circumvent those biosafety processes.

A team of AI researchers found that protein-design tools could be used to "paraphrase" the DNA codes of toxic proteins, "re-writing them in ways that could preserve their structure, and potentially their function," says Eric Horvitz, Microsoft's chief scientific officer. The computer scientists used an AI program to generate DNA codes for more than 75,000 variants of hazardous proteins -- and the firewalls used by DNA manufacturers weren't consistently able to catch them. "To our concern," says Horvitz, "these reformulated sequences slipped past the biosecurity screening systems used worldwide by DNA synthesis companies to flag dangerous orders."

A fix was quickly written and slapped onto the biosecurity screening software. But it's not perfect -- it still wasn't able to detect a small fraction of the variants. And it's just the latest episode showing how AI is revving up long-standing concerns about the potential misuse of powerful biological tools. "AI-powered protein design is one of the most exciting frontiers in science. We're already seeing advances in medicine and public health," says Horvitz. "Yet like many powerful technologies, these same tools can often be misused."

For years, biologists have worried that their ever-improving DNA tools might be harnessed to design potent biothreats, like more virulent viruses or easy-to-spread toxins. They've even debated whether it's really wise to openly publish certain experimental results, even though open discussion and independent replication have been the lifeblood of science. The researchers, and the journal that published this new study, decided to hold some of their information back, and will restrict who gets access to their data and software. They enlisted a third party, a non-profit called the International Biosecurity and Biosafety Initiative for Science, to make decisions about who has a legitimate need to know. "This is the first time such a model has been employed to manage risks of sharing hazardous information in a scientific publication," says Horvitz.

Scientists who have been worried about future biosecurity threats for some time praised this work. "My overall reaction was favorable," says Arturo Casadevall, a microbiologist and immunologist at Johns Hopkins University. "Here we have a system in which we are identifying vulnerabilities. And what you're seeing is an attempt to correct the known vulnerabilities." The trouble is, says Casadevall, "what vulnerabilities don't we know about that will require future corrections?"

He notes that this team did not do any lab work to actually generate any of the proteins designed by AI, to see if they would truly mimic the activity of the original biological threats. Such work would be an important reality check as society grapples with this kind of emerging threat from AI, says Casadevall, but would be tricky to do, as it might be precluded by international treaties prohibiting the development of biological weapons.

This isn't the first time scientists have explored the potential for malevolent use of AI in a biological setting.
For example, a few years ago, another team wondered if AI could be used to generate novel molecules that would have the same properties as nerve agents. In less than six hours, the AI tool dutifully concocted 40,000 molecules that met the requested criteria. It not only came up with known chemical warfare agents like the notorious one called VX, but also designed many unknown molecules that looked plausible and were predicted to be more toxic. "We had transformed our innocuous generative model from a helpful tool of medicine to a generator of likely deadly molecules," the researchers wrote.

That team also didn't openly publish the chemical structures that the AI tool had devised, or create them in the lab, "because they thought they were way too dangerous," points out David Relman, a researcher at Stanford University. "They simply said, we're telling you all about this as a warning." Relman thinks this latest study, showing how AI could be used to evade security screening and finding a way to address that, is laudable. At the same time, he says, it just illustrates that there's an enormous problem brewing. "I think it leaves us dangling and wondering, 'Well, what exactly are we supposed to do?'" he says. "How do we get ahead of a freight train that is just evermore accelerating and racing down the tracks, in danger of careening off the tracks?"

Despite concerns like these, some biosecurity experts see reasons to be reassured. Twist Bioscience is a major provider of made-to-order DNA, and in the past ten years, it's had to refer orders to law enforcement fewer than five times, says James Diggans, the head of policy and biosecurity at Twist Bioscience and chair of the board of directors at the International Gene Synthesis Consortium, an industry group. "This is an incredibly rare thing," he says. "In the cybersecurity world, you have a host of actors that are trying to access systems. That is not the case in biotech. The real number of people who are really trying to create misuse may be very close to zero. And so I think these systems are an important bulwark against that, but we should all find comfort in the fact that this is not a common scenario."
[8]
Microsoft on AI in Biology: Understanding the risks of zero-day threats
Zero-day vulnerabilities show urgent need for stronger AI biosecurity safeguards

When Microsoft's chief scientific officer Eric Horvitz and his team describe a "zero-day" in biology, they are deliberately borrowing a term from cybersecurity. A zero-day vulnerability refers to an unknown flaw in software that hackers can exploit before anyone has time to patch it. But here, the flaw isn't in computer code -- it's in the global biosecurity systems that are supposed to detect and prevent the misuse of synthetic DNA. And the exploit, as Microsoft researchers discovered, comes from AI.

In a new study, Microsoft scientists revealed that artificial intelligence can help generate genetic sequences that evade current screening software. These systems, widely used by DNA synthesis companies and research labs, compare incoming orders against a database of known pathogens and toxins. The idea is simple: if someone tries to order a dangerous sequence -- say, a segment of anthrax or a toxic protein -- the system raises a red flag. But with the help of generative AI, the researchers showed that harmful designs could be rewritten in ways that still function biologically but no longer look suspicious to the software.

The finding is being described as the first real "zero-day" in biosecurity. Much like cybercriminals who use new malware to slip past firewalls, AI was able to paraphrase dangerous code in protein form, creating sequences that existing screening methods failed to recognize. According to the researchers, this breakthrough isn't just theoretical: it demonstrates a fundamental weakness in how the world currently guards against biological misuse. While the Microsoft team quickly developed patches and proposed improvements to strengthen defenses, the deeper message is clear. As AI models become more powerful and more accessible, defensive systems will have to keep evolving just as quickly. What was once an unlikely scenario -- AI accelerating the design of harmful biological agents -- is now a tangible risk.

For decades, biosecurity experts have relied on the assumption that creating bioweapons requires both advanced expertise and specialized equipment. The tacit knowledge needed to turn genetic code into a functional threat has acted as a natural barrier. But large AI models are starting to erode that barrier by guiding even non-specialists through steps that once demanded years of training. At the same time, DNA synthesis is becoming faster, cheaper, and more distributed globally. If AI can help generate malicious code that evades standard filters, the result could be a dangerous widening of access to biothreat capabilities. This is especially concerning given that existing international safeguards remain voluntary and unevenly enforced.

None of this means AI in biology is inherently bad. In fact, many of the same tools that can help design harmful sequences are revolutionizing drug discovery, protein engineering, and vaccine development. AI can speed up the search for cancer treatments, optimize enzymes for clean energy, and even predict the structure of proteins that were previously unsolvable puzzles. But the dual-use nature of the technology, equally capable of breakthroughs and biothreats, makes it uniquely challenging to regulate.
What Microsoft's zero-day demonstration shows is that ignoring the problem is not an option. The tools are too powerful, and the stakes too high. Microsoft's researchers have urged a "defense-in-depth" strategy: not just relying on sequence matching, but combining multiple approaches such as functional prediction, structure analysis, and even AI red-teaming to identify hidden threats. They also argue for stronger international coordination, noting that pathogens do not respect borders and neither do AI models.

Governments and research institutions are beginning to take note. Discussions are underway on whether access to powerful biological design models should be gated, whether DNA synthesis should come with stricter oversight, and how to build rapid-response systems capable of spotting new threats. Just as the internet forced the world to invent cybersecurity, the rise of AI-assisted biology is pushing us toward a new field: bio-AI security.

The Microsoft team's discovery may have closed one loophole, but it also underscored how many more may be waiting. The challenge now is not simply to react to each new exploit, but to build systems resilient enough to anticipate them. That means recognizing AI as both a catalyst for progress and a potential amplifier of risk. And it means preparing for a world where the next "zero-day" may not be in a line of computer code, but in the blueprint of life itself.
A Microsoft-led study reveals how AI-generated protein variants can bypass existing biosecurity screening software, highlighting potential bioterrorism risks and prompting urgent updates to safety measures in synthetic biology.
A groundbreaking study led by Microsoft researchers has uncovered a significant vulnerability in the biosecurity screening process used by DNA synthesis companies. The research, published in Science on October 2, 2025, demonstrates how artificial intelligence (AI) can be used to design protein variants that mimic dangerous toxins while evading detection by current screening software [1][2].
Microsoft's chief scientific officer, Eric Horvitz, and applied scientist Bruce Wittmann spearheaded this 'red team' exercise, borrowing a concept from cybersecurity. They used AI tools to generate over 70,000 DNA sequences encoding variants of 72 controlled proteins, including deadly toxins like ricin and botulinum [2].
The team tested these AI-generated sequences against four biosecurity screening systems used by DNA synthesis companies. The results were alarming: one tool missed more than 75% of the potential toxins, while others showed varying degrees of vulnerability [2][3].
Upon discovering these vulnerabilities, the researchers worked closely with biosecurity experts, government officials, and the International Gene Synthesis Consortium (IGSC) to address the issue. Most software providers quickly rolled out upgrades, significantly improving their detection capabilities [1][4]. After the upgrades, the screening systems' performance improved dramatically, flagging an average of 72% of the AI-generated sequences, including 97% of those most likely to generate toxins [2][4].
This study highlights the urgent need for enhanced nucleic acid synthesis screening procedures and reliable enforcement mechanisms. It underscores the potential risks associated with AI-enabled biological modeling and the importance of staying ahead in what some experts describe as an 'arms race' [3][5]. While the current threat level appears low, with few attempts to acquire illicit synthetic DNA reported, experts emphasize the need for ongoing vigilance. Some argue that biosecurity measures should be built into AI systems themselves, as the technology for building and training AI models becomes more widespread [3][5].
The study has prompted calls for improved global governance of AI-boosted protein synthesis. Some experts advocate for an international agreement to prevent the creation of potentially deadly manufactured microbes. The U.S. government has already recognized the importance of DNA order screening, with President Trump calling for a revamp of the system in a recent executive order [3][5].