2 Sources
[1]
Experts Alarmed That AI Is Now Producing Functional Viruses
In real-world experiments, a team of Stanford researchers demonstrated that a virus with AI-written DNA could target and kill specific bacteria, they announced in a study last week. The result opens up a world of possibilities in which artificial viruses could be used to cure diseases and fight infections. But experts say it also opens a Pandora's box: bad actors could just as easily use AI to crank out novel bioweapons, keeping doctors and governments on the back foot with the outrageous pace at which these viruses can be designed, warn Tal Feldman, a Yale Law School student who formerly built AI models for the federal government, and Jonathan Feldman, a computer science and biology researcher at Georgia Tech (no word on whether the two are related).

"There is no sugarcoating the risks," the pair warned in a piece for the Washington Post. "We're nowhere near ready for a world in which artificial intelligence can create a working virus, but we need to be -- because that's the world we're now living in."

In the study, the Stanford researchers used an AI model called Evo to invent DNA for a bacteriophage, a virus that infects bacteria. Unlike a general-purpose large language model like ChatGPT, which is trained on written language, Evo was trained exclusively on millions of bacteriophage genomes. They focused on an extensively studied phage called phiX174, which is known to infect strains of the bacterium E. coli.

Using the Evo model, the team came up with 302 candidate genomes based on phiX174 and put them to the test by using the designs to chemically assemble new viruses. Sixteen of them worked, infecting and killing the E. coli strains. Some were even deadlier than the natural form of the virus.

But "while the Stanford team played it safe, what's to stop others from using open data on human pathogens to build their own models?" the two Feldmans warned. "If AI collapses the timeline for designing biological weapons, the United States will have to reduce the timeline for responding to them. We can't stop novel AI-generated threats. The real challenge is to outpace them."

That means using the same AI tech to design antibodies, antivirals, and vaccines. This work is already being done to some extent, but the vast amount of data needed to accelerate such pioneering research "is siloed in private labs, locked up in proprietary datasets or missing entirely."

"The federal government should make building these high-quality datasets a priority," the duo opined. From there, the federal government would need to build the infrastructure to manufacture these AI-designed medicines, since the "private sector cannot justify the expense of building that capacity for emergencies that may never arrive," they argue.

Finally, the Food and Drug Administration's sluggish and creaking regulatory framework would need an overhaul. (Perhaps in a monkey's paw of such an overhaul, the FDA said it's using AI to speed-run the approval of medications.) "Needed are new fast-tracking authorities that allow provisional deployment of AI-generated countermeasures and clinical trials, coupled with rigorous monitoring and safety measures," they said.

The serious risks posed by AI virus generation shouldn't be taken lightly. Still, it's worth noting that the study in question hasn't yet made it through peer review, and we don't have a full picture of how readily someone could replicate the scientists' work.
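To make the underlying technique a little more concrete, here is a deliberately toy sketch of what a "genome language model" does at its core: learn next-nucleotide statistics from training genomes, then sample new sequences autoregressively. A simple k-mer Markov chain stands in for the model; Evo itself is a deep neural network trained on millions of genomes, and nothing below reflects its actual code or training data.

```python
# Toy illustration only: a k-mer Markov chain standing in for a genome
# language model like Evo. The real model is a deep neural network
# trained on millions of phage genomes; this sketch just shows the core
# idea of learning next-nucleotide statistics and sampling new sequences.
import random
from collections import defaultdict, Counter

K = 4  # context length; real genome models use vastly longer contexts

def train(genomes):
    """Count which nucleotide follows each K-length context."""
    counts = defaultdict(Counter)
    for g in genomes:
        for i in range(len(g) - K):
            counts[g[i:i + K]][g[i + K]] += 1
    return counts

def sample(counts, seed, length):
    """Autoregressively extend a seed sequence one base at a time."""
    seq = list(seed)
    while len(seq) < length:
        dist = counts.get("".join(seq[-K:]))
        if not dist:                       # unseen context: uniform fallback
            seq.append(random.choice("ACGT"))
            continue
        bases, weights = zip(*dist.items())
        seq.append(random.choices(bases, weights=weights)[0])
    return "".join(seq)

# Tiny stand-in "training set"; the study's training data was millions
# of real bacteriophage genomes, which this string does not represent.
model = train(["ATGCGTACGTTAGCATGCGTACGTAGC" * 10])
print(sample(model, seed="ATGC", length=60))
```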
Zooming back out: with agencies like the Centers for Disease Control and Prevention being gutted, and vaccines and other medical interventions under attack from a health-crank-riddled administration, there's no denying that the country's medical policy and infrastructure are in a bad place. That said, when you consider that the administration is finding any excuse to rapidly deploy AI in every corner of the government, it's worth treading lightly when we ask for more.
[2]
AI Has Designed Living Genomes -- And They Worked in the Lab
The work hints at custom-designed phage therapies -- and raises urgent questions about governing generative biology.

This week, while some headlines focused on the unsettling claim that an AI system had designed a working virus, a quieter preprint out of Stanford and the Arc Institute hinted at something even more momentous -- and, depending on your outlook, more alarming. Researchers there reported the first generative design of entire living genomes: 16 synthetic bacteriophages -- viruses that infect bacteria -- dreamed up by artificial intelligence, built in the lab, and shown to replicate, evolve, and outcompete their natural ancestor.

The team used "genome language models" named Evo 1 and Evo 2, cousins to the large language models behind ChatGPT but trained on billions of base pairs of viral DNA instead of words. These systems didn't merely mutate existing viruses; they composed new genomes from scratch, balancing thousands of interdependent genes, promoters, and regulatory motifs -- tasks that have long defied human bioengineers. Of 302 AI-generated genomes tested, 16 came to life, producing functional phages capable of infecting E. coli and, in some cases, outperforming the wild-type ΦX174 virus that inspired them.

Why it matters

The achievement, if replicated, represents a milestone in synthetic biology on par with Craig Venter's 2010 creation of a minimal bacterial cell. Until now, AI tools could design individual proteins or short genetic circuits; composing an entire, viable genome had remained out of reach. This study demonstrates that machine learning can capture the grammar of life at genome scale -- assembling sequences complex enough to fold, self-organize, and reproduce.

Practically, that could transform phage therapy, a century-old antibacterial strategy now resurging amid the antibiotic resistance crisis. The researchers mixed their sixteen AI-built phages into a "cocktail" that swiftly overcame resistance in E. coli strains that had defeated the natural ΦX174. In principle, the same approach could yield custom viral treatments for drug-resistant infections, or tailor phages to target pathogens in agriculture, aquaculture, or wastewater.

Beyond medicine, genome-scale generative design might open new industrial frontiers: phages that program microbiomes, microbes that manufacture green chemicals, or viruses that act as nanoscale couriers inside living tissues. Every application once constrained by evolutionary happenstance could, in theory, be authored like code.

Context and caution

That promise is inseparable from peril. The Washington Post's report -- that an AI had autonomously generated a working pathogen -- captured public unease that tools capable of designing life might design the wrong kind. The Stanford-Arc study, though carefully contained, shows how close we are to that threshold. Its authors emphasize safety: they worked only with non-pathogenic E. coli at approved biosafety levels, fine-tuned models on limited viral families, and built filters to block human-virus sequences. Still, the line between could and should is narrowing.

The experiments also underscore how unpredictable biology remains. Most AI-generated genomes were duds; others survived by accidents of molecular compatibility. Even the successful ones evolved unexpected traits -- like a structural-gene swap previously thought lethal -- suggesting that AI can navigate evolutionary shortcuts humans don't yet understand.
That creative unpredictability is both the source of innovation and the seed of risk.

The bigger picture

In less than a decade, language models have gone from writing essays to writing evolution itself. The leap from text to test tube collapses the distance between simulation and creation, forcing regulators and researchers to confront a new reality: AI no longer just predicts biology -- it invents it. As antibiotic pipelines dry up and pandemics loom, designing beneficial viruses may be one of humanity's best tools, and greatest temptations. What this paper suggests is not simply that AI can build life, but that it can out-evolve it. Whether society can keep pace is now the more pressing experiment.
Stanford researchers successfully create functional viruses using AI-generated DNA, opening new possibilities for medical treatments while raising alarm about potential misuse and biosecurity risks.
In a groundbreaking study, Stanford researchers have created functional viruses using AI-generated DNA, marking a significant milestone in synthetic biology [1][2]. The team used an AI model called Evo, trained on millions of bacteriophage genomes, to design DNA for viruses capable of infecting and killing specific bacteria.

The researchers focused on the well-studied phage phiX174, known to infect E. coli strains. Using Evo, they generated 302 candidate genomes based on phiX174. When these designs were chemically assembled into new viruses, 16 of them successfully infected and killed E. coli strains, with some proving even deadlier than the natural form of the virus [1].
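As a rough illustration of the funnel the study describes (302 generated designs, 16 viable after lab testing), here is a hypothetical pre-synthesis triage step. The specific checks below (length window, GC content, homopolymer runs) are generic sanity filters of the kind synthesis workflows commonly apply, not the authors' actual selection criteria.

```python
# Hypothetical pre-synthesis triage, loosely mirroring the funnel the
# study reports (302 generated designs, 16 viable after lab testing).
# These checks (length window, GC content, homopolymer runs) are generic
# sanity filters, NOT the authors' actual selection criteria.

def gc_content(seq):
    """Fraction of bases that are G or C."""
    return (seq.count("G") + seq.count("C")) / len(seq)

def passes_triage(seq, min_len=5000, max_len=6000,
                  gc_lo=0.40, gc_hi=0.55, max_homopolymer=8):
    """Cheap in-silico checks before committing to chemical synthesis."""
    if not (min_len <= len(seq) <= max_len):     # phiX174 is ~5,386 bp
        return False
    if not (gc_lo <= gc_content(seq) <= gc_hi):  # extreme GC is suspect
        return False
    for base in "ACGT":                          # long runs synthesize poorly
        if base * (max_homopolymer + 1) in seq:
            return False
    return True

candidates = ["ATGC" * 1340,               # toy stand-in: passes all checks
              "ATGC" * 1340 + "G" * 12]    # fails: 12-base homopolymer run
keep = [s for s in candidates if passes_triage(s)]
print(f"{len(keep)} of {len(candidates)} candidates pass triage")
```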
This achievement represents a significant leap in synthetic biology, comparable to Craig Venter's 2010 creation of a minimal bacterial cell. The AI-designed genomes demonstrated the ability to fold, self-organize, and reproduce, showcasing the potential of machine learning to capture the complex 'grammar of life' at genome scale [2].
The success of AI-designed viruses opens up exciting possibilities in various fields:

- Phage Therapy: The researchers created a 'cocktail' of AI-built phages that quickly overcame resistance in E. coli strains, suggesting potential applications in treating drug-resistant infections [2].
- Agriculture and Environmental Management: Custom-designed phages could target pathogens in agriculture, aquaculture, or wastewater treatment [2].
- Industrial Applications: The technology could lead to phages that program microbiomes, microbes for green chemical manufacturing, or viruses acting as nanoscale couriers in living tissues [2].
While the potential benefits are significant, experts have raised alarms about the biosecurity implications of this technology:

- Bioweapon Risks: Bad actors could use similar AI technology to design novel bioweapons, potentially outpacing current medical responses [1]; a simplified screening sketch follows this list.
- Regulatory Challenges: The rapid pace of AI-generated biological innovations may require an overhaul of existing regulatory frameworks, such as the FDA's approval process for new medications [1].
- Unpredictability: The study revealed that AI can navigate evolutionary shortcuts not yet understood by humans, highlighting the unpredictable nature of this technology [2].
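To show what sequence screening can look like in principle, here is a minimal sketch that flags a candidate design sharing too many k-mers with a blocklisted sequence. Real biosecurity screening, such as that run by gene-synthesis providers, relies on curated pathogen databases and far more sophisticated matching; every sequence, name, and threshold below is an illustrative stand-in.

```python
# Minimal sketch of "sequence of concern" screening: flag a design that
# shares too many k-mers with a blocklisted sequence. Real biosecurity
# screening (e.g. at gene-synthesis providers) uses curated pathogen
# databases and far more sophisticated matching; every sequence and
# threshold here is an illustrative stand-in.

K = 20  # window size; production screens typically use much longer windows

def kmers(seq, k=K):
    """All length-k substrings of seq, as a set."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def flag_sequence(candidate, blocklist, threshold=5):
    """True if the candidate shares >= threshold k-mers with any entry."""
    cand = kmers(candidate)
    return any(len(cand & kmers(entry)) >= threshold for entry in blocklist)

# Toy example: candidate and blocklist entry share a 30 bp stretch,
# which yields more than enough overlapping 20-mers to trip the flag.
SHARED = "ATGCGTACGTTAGCATGCGTACGTAGCATG"
blocklist = ["CCAA" * 5 + SHARED + "TTGG" * 5]
design = "T" * 25 + SHARED + "G" * 25
print(flag_sequence(design, blocklist))  # True
```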
Experts emphasize the need for proactive measures to address potential risks:

- Data Sharing: Building high-quality datasets for AI-driven medical research should be a priority for the federal government [1].
- Infrastructure Development: Infrastructure is needed to manufacture AI-designed medicines rapidly in case of emergencies [1].
- Regulatory Reform: New fast-tracking authorities for AI-generated countermeasures, coupled with rigorous monitoring and safety measures, are suggested [1].
As AI continues to advance in the field of synthetic biology, the scientific community and policymakers face the challenge of harnessing its potential while mitigating risks. The ability of AI not just to predict but to invent biology necessitates a new approach to regulation and ethical considerations in this rapidly evolving field.
Summarized by Navi