3 Sources
[1]
AI generated its first working genome: a tiny bacteria killer
AI-designed bacteriophages could one day be used in phage therapy.

Artificial intelligence can dash off more than routine emails. It has now written tiny working genomes. Two AI models designed the blueprints for 16 viruses capable of attacking Escherichia coli in lab dishes, researchers report September 17 in a paper posted to bioRxiv.org. A mixture of these AI-generated bacteriophages stopped virus-resistant E. coli strains from growing, suggesting that the technique could help scientists design therapies capable of taking on tough-to-treat microbial infections. The work has not yet been peer-reviewed.

It's the first time that AI has successfully generated an entire genome, says Brian Hie, a computational biologist at Stanford University and the Arc Institute in Palo Alto, Calif. And while it's debatable whether viruses are alive, the work is a step toward using the technology to design living organisms. AI models have already been used to devise individual genes and proteins. Creating an entire genetic blueprint from scratch, however, adds an extra layer of complexity because numerous genes and proteins need to work together, Hie says.

Hie and colleagues turned to two of their own AI models, called Evo 1 and Evo 2, to see if they could create genomes for bacteria-killing viruses. The models were trained on billions of pairs of the genetic alphabet's basic units (A, C, G and T) from phage genomes, much as ChatGPT was trained on novels and internet posts. The team used a bacteriophage called ΦX174, which in 1977 became the first DNA-based genome ever sequenced, as a guide to help the AI design a similar genome. Because ΦX174 has been so well studied, "if the AI was making novel mutations to the phage, we would be able to see how novel they are," Hie says. What's more, bacteriophages don't infect people, so the viruses were safe to work with in the lab. Out of concern that the AI might design viruses that could harm people, the team did not train the models on any examples of viral pathogens that infect people.

Evo 1 and Evo 2 generated roughly 300 potential phage genomes. Of those, 16 produced viable viruses that could infect E. coli. Some of the phages even killed E. coli more quickly than ΦX174 did. And although ΦX174 couldn't kill three phage-resistant strains of E. coli on its own, cocktails of AI-generated phages rapidly evolved to overcome the bacteria's resistance to infection.

The findings suggest that AI could help researchers develop viruses to use in phage therapy, a potential option to treat antibiotic-resistant bacterial infections. In such cases, "the need to find a phage that targets the bacterial strain would be very urgent," says Kimberly Davis, a microbiologist at the Johns Hopkins Bloomberg School of Public Health who wasn't involved in the work. "Utilizing AI could be a powerful way of rapidly generating a phage match to treat patients."

Davis notes that "the use of AI-generated phages would need to be tightly controlled." For instance, extensive testing could make sure that such phages don't interact with or harm other microbes. AI-generated phages would ideally kill only the targeted harmful bacteria while sparing the good bacteria that keep people healthy, Hie says, and might also evolve in ways that keep pace with virus-resistant bacteria. Using AI to design entire organisms could also speed up microbial manufacturing processes, such as antibiotic production, or help create microbes that degrade plastic.
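To make the language-model analogy concrete, here is a deliberately tiny, self-contained sketch of the same train-then-generate loop: treat DNA as text over the four-letter alphabet A, C, G, T, learn which base tends to follow each short context, then sample new candidate sequences. This is a toy Markov-chain illustration with made-up training strings, not the Evo 1 or Evo 2 architecture and not code from the study.

```python
# Toy illustration only: a character-level "language model" over DNA.
# The real Evo models are large neural genome language models; this sketch just
# shows the train-on-sequences / sample-new-candidates loop in miniature.
import random
from collections import defaultdict, Counter

ALPHABET = "ACGT"

def train_next_base_model(sequences, k=3):
    """Count which base tends to follow each k-length context in the training sequences."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for i in range(len(seq) - k):
            counts[seq[i:i + k]][seq[i + k]] += 1
    return counts

def sample_genome(counts, k=3, length=60, seed=None):
    """Autoregressively sample a new sequence, one base at a time."""
    rng = random.Random(seed)
    seq = rng.choice(list(counts.keys()))      # start from a context seen in training
    while len(seq) < length:
        options = counts.get(seq[-k:])
        if not options:                        # unseen context: fall back to uniform choice
            seq += rng.choice(ALPHABET)
            continue
        bases, weights = zip(*options.items())
        seq += rng.choices(bases, weights=weights)[0]
    return seq

if __name__ == "__main__":
    # Hypothetical stand-ins for training genomes (real phage genomes run to thousands of bases).
    training = ["ATGCGTACGTTAGCATGCGTACGTAGC" * 5, "ATGAAACGTTGCATGCGTAACGTTGCA" * 5]
    model = train_next_base_model(training, k=3)
    for s in range(3):
        print(sample_genome(model, k=3, length=60, seed=s))
```

The real models are large neural networks trained on vast genome collections, and their outputs still had to be chemically synthesized and tested in the lab; the sketch only shows the shape of the generate-candidates step.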
And AI has the potential to help researchers make sense of genomes that are even more complex and develop new treatments for complicated diseases, Hie says. The human genome is more than half a million times the size of ΦX174's genome, "so there's a lot of work to go."
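As a rough check of that scale comparison, using the commonly cited approximate sizes of about 5,400 bases for ΦX174 and about 3.1 billion base pairs for the human genome: 3,100,000,000 / 5,400 ≈ 570,000, comfortably more than half a million times larger.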
[2]
Experts Alarmed That AI Is Now Producing Functional Viruses
In real-world experiments, a team of Stanford researchers demonstrated that a virus with AI-written DNA could target and kill specific bacteria, they announced in a study last week. It opened up a world of possibilities in which artificial viruses could be used to cure diseases and fight infections.

But experts say it also opened a Pandora's box. Bad actors could just as easily use AI to crank out novel bioweapons, keeping doctors and governments on the back foot with the outrageous pace at which these viruses can be designed, warn Tal Feldman, a Yale Law School student who formerly built AI models for the federal government, and Jonathan Feldman, a computer science and biology researcher at Georgia Tech (no word on whether the two are related).

"There is no sugarcoating the risks," the pair warned in a piece for the Washington Post. "We're nowhere near ready for a world in which artificial intelligence can create a working virus, but we need to be -- because that's the world we're now living in."

In the study, the Stanford researchers used an AI model called Evo to invent DNA for a bacteriophage, a virus that infects bacteria. Unlike a general-purpose large language model like ChatGPT, which is trained on written language, Evo was trained exclusively on millions of bacteriophage genomes. The researchers focused on an extensively studied phage called phiX174, which is known to infect strains of the bacterium E. coli. Using the Evo model, the team came up with 302 candidate genomes based on phiX174 and put them to the test by using the designs to chemically assemble new viruses. Sixteen of them worked, infecting and killing the E. coli strains. Some of them were even deadlier than the natural form of the virus.

But "while the Stanford team played it safe, what's to stop others from using open data on human pathogens to build their own models?" the two Feldmans warned. "If AI collapses the timeline for designing biological weapons, the United States will have to reduce the timeline for responding to them. We can't stop novel AI-generated threats. The real challenge is to outpace them."

That means using the same AI tech to design antibodies, antivirals, and vaccines. This work is already being done to some extent, but the vast amounts of data needed to accelerate such pioneering research "is siloed in private labs, locked up in proprietary datasets or missing entirely." "The federal government should make building these high-quality datasets a priority," the duo opined. From there, the federal government would need to build the infrastructure necessary to manufacture these AI-designed medicines, since the "private sector cannot justify the expense of building that capacity for emergencies that may never arrive," they argue. Finally, the Food and Drug Administration's sluggish and creaking regulatory framework would need an overhaul. (Perhaps in a monkey's paw of such an overhaul, the FDA has said it's using AI to speed-run the approval of medications.) "Needed are new fast-tracking authorities that allow provisional deployment of AI-generated countermeasures and clinical trials, coupled with rigorous monitoring and safety measures," they said.

The serious risks posed by AI virus generation shouldn't be taken lightly. Yet it's worth noting that the study in question hasn't been through peer review yet, and we still don't have a full picture of how readily someone could replicate the scientists' work.
But with agencies like the Centers for Disease Control and Prevention being gutted, and vaccines and other medical interventions under attack from a health-crank-riddled administration, there's no denying that the country's medical policy and infrastructure are in a bad place. That said, when you consider that the administration is finding any excuse to rapidly deploy AI in every corner of the government, it's worth treading lightly when we ask for more.
[3]
AI Has Designed Living Genomes -- And They Worked in the Lab
The work hints at custom-designed phage therapies -- and raises urgent questions about governing generative biology.

This week, while some headlines focused on the unsettling claim that an AI system had designed a working virus, a quieter preprint out of Stanford and the Arc Institute hinted at something even more momentous -- and, depending on your outlook, more alarming. Researchers there reported the first generative design of entire living genomes: 16 synthetic bacteriophages -- viruses that infect bacteria -- dreamed up by artificial intelligence, built in the lab, and shown to replicate, evolve, and outcompete their natural ancestor.

The team used "genome language models" named Evo 1 and Evo 2, cousins to the large language models behind ChatGPT but trained on billions of base pairs of viral DNA instead of words. These systems didn't merely mutate existing viruses; they composed new genomes from scratch, balancing interdependent genes, promoters, and regulatory motifs -- a task that has long defied human bioengineers. Of 302 AI-generated genomes tested, 16 came to life, producing functional phages capable of infecting E. coli and, in some cases, outperforming the wild-type ΦX174 virus that inspired them.

Why it matters

The achievement, if replicated, represents a milestone in synthetic biology on par with Craig Venter's 2010 creation of a minimal bacterial cell. Until now, AI tools could design individual proteins or short genetic circuits; composing an entire, viable genome had remained out of reach. This study demonstrates that machine learning can capture the grammar of life at genome scale -- assembling sequences complex enough to fold, self-organize, and reproduce.

Practically, that could transform phage therapy, a century-old antibacterial strategy now resurging amid the antibiotic resistance crisis. The researchers mixed their sixteen AI-built phages into a "cocktail" that swiftly overcame resistance in E. coli strains that had defeated the natural ΦX174. In principle, the same approach could yield custom viral treatments for drug-resistant infections, or tailor phages to target pathogens in agriculture, aquaculture, or wastewater. Beyond medicine, genome-scale generative design might open new industrial frontiers: phages that program microbiomes, microbes that manufacture green chemicals, or viruses that act as nanoscale couriers inside living tissues. Every application once constrained by evolutionary happenstance could, in theory, be authored like code.

Context and caution

That promise is inseparable from peril. The Washington Post's warning -- that AI can now generate a working virus -- captured public unease that tools capable of designing life might design the wrong kind. The Stanford-Arc study, though carefully contained, shows how close we are to that threshold. Its authors emphasize safety: they worked only with non-pathogenic E. coli at approved biosafety levels, fine-tuned the models on limited viral families, and built filters to block human-virus sequences. Still, the line between could and should is narrowing.

The experiments also underscore how unpredictable biology remains. Most AI-generated genomes were duds; others survived by accidents of molecular compatibility. Even the successful ones evolved unexpected traits, such as swapping a structural gene in a way previously thought lethal, suggesting that AI can navigate evolutionary shortcuts humans don't yet understand. That creative unpredictability is both the source of innovation and the seed of risk.

The bigger picture

In less than a decade, language models have gone from writing essays to writing evolution itself. The leap from text to test tube collapses the distance between simulation and creation, forcing regulators and researchers to confront a new reality: AI no longer just predicts biology -- it invents it. As antibiotic pipelines dry up and pandemics loom, designing beneficial viruses may be one of humanity's best tools, and greatest temptations. What this paper suggests is not simply that AI can build life, but that it can out-evolve it. Whether society can keep pace is now the more pressing experiment.
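The "filters to block human-virus sequences" mentioned above are described only at a high level; as a purely hypothetical illustration of what that kind of guardrail can look like, the sketch below rejects any generated sequence that shares a long exact subsequence (a k-mer) with a blocklist of pathogen genomes. The blocklist contents, the k-mer length of 31, and all sequences are placeholder assumptions, not the authors' actual screening pipeline.

```python
# Hypothetical illustration of sequence screening, not the study's actual safety filter.
# Idea: index k-mers from a blocklist of pathogen genomes; reject any generated
# sequence that contains an exact k-mer match (a crude proxy for "too similar").

def kmers(seq, k):
    """Return the set of every length-k substring of seq."""
    seq = seq.upper()
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def build_blocklist_index(blocklist_seqs, k=31):
    """Collect all k-mers appearing in any blocklisted (e.g. human-virus) genome."""
    index = set()
    for seq in blocklist_seqs:
        index |= kmers(seq, k)
    return index

def passes_screen(candidate, index, k=31):
    """Return True if the candidate shares no k-mer with the blocklist index."""
    return index.isdisjoint(kmers(candidate, k))

if __name__ == "__main__":
    # Placeholder sequences; real screening would use curated pathogen databases.
    blocklist = ["ACGT" * 40]                 # stand-in for a blocklisted genome
    index = build_blocklist_index(blocklist, k=31)
    ok_candidate = "ATGC" * 40                # shares no 31-mer with the blocklist
    bad_candidate = "TTTT" + "ACGT" * 20      # contains blocklisted 31-mers
    print(passes_screen(ok_candidate, index))   # True
    print(passes_screen(bad_candidate, index))  # False
```

Real biosecurity screening is far more involved, combining approximate matching, protein-level comparison, curated databases, and human review, but checking candidate sequences against known dangerous ones before synthesis is the general pattern such filters follow.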
Stanford researchers have successfully used AI to design and create functional bacteriophage genomes, marking a significant milestone in synthetic biology. This achievement opens new possibilities for phage therapy and raises important biosafety concerns.
In a groundbreaking development, researchers from Stanford University and the Arc Institute have successfully used artificial intelligence to design and create functional bacteriophage genomes. This achievement marks the first time AI has generated entire working genomes, representing a significant milestone in the field of synthetic biology [1][3].

The research team employed two AI models, Evo 1 and Evo 2, which were trained on billions of base pairs of viral DNA. These models generated approximately 300 potential phage genomes, of which 16 produced viable viruses capable of infecting Escherichia coli (E. coli) bacteria [1].

One of the most promising applications of this technology is in the field of phage therapy, which could offer a solution to the growing problem of antibiotic-resistant bacterial infections. The AI-generated phages demonstrated the ability to rapidly evolve and overcome bacterial resistance, suggesting potential use in treating challenging infections [1][2].

Kimberly Davis, a microbiologist at the Johns Hopkins Bloomberg School of Public Health, highlighted the potential of AI to rapidly generate phage matches for urgent patient treatment. However, she also emphasized the need for tight control and extensive testing of AI-generated phages [1].

This achievement is being compared to Craig Venter's 2010 creation of a minimal bacterial cell in terms of its significance for synthetic biology. The ability of AI to compose new genomes from scratch, balancing interdependent genes and regulatory elements, opens up new possibilities for designing custom viral treatments and programming microbiomes [3].

While the potential benefits are significant, experts have raised concerns about the biosafety implications of this technology. The ability to generate functional viruses using AI could potentially be misused to create biological weapons, necessitating careful regulation and oversight [2].

Tal Feldman and Jonathan Feldman, writing in the Washington Post, emphasized the need for proactive measures to address potential risks. They suggested prioritizing the development of high-quality datasets, building infrastructure for manufacturing AI-designed medicines, and overhauling regulatory frameworks to allow faster deployment of AI-generated countermeasures [2].

The research team envisions potential applications beyond medicine, including microbial manufacturing processes and the cultivation of microbes for environmental purposes such as plastic degradation. However, they acknowledge that significant work remains, particularly in scaling up to more complex genomes like the human genome [1][3].

As AI continues to advance in its ability to design and create biological entities, society faces the challenge of keeping pace with these developments. The leap from text to test tube collapses the distance between simulation and creation, forcing researchers and regulators to confront a new reality where AI not only predicts biology but invents it [3].