AWS's Custom AI Chips Help Metagenomi Slash Costs in Gene-Editing Research


Biotech startup Metagenomi has leveraged AWS's Inferentia 2 accelerators to significantly reduce costs and accelerate gene-editing research. The collaboration highlights the potential of custom AI chips in biotechnology applications.


AWS's Inferentia 2 Accelerators Boost Gene-Editing Research

Gene-editing startup Metagenomi has made significant strides in its research by leveraging Amazon Web Services' (AWS) custom AI chips, the Inferentia 2 accelerators. The collaboration has yielded a 56% reduction in costs compared with traditional Nvidia GPUs for the company's AI-driven gene therapy discovery process [1].

The Science Behind Metagenomi's Approach

Metagenomi, founded in 2018, is using the Nobel Prize-winning CRISPR technology to develop targeted gene-editing therapies. Its approach aims to treat diseases at the genetic level rather than merely addressing symptoms. The key to the research lies in identifying specific enzymes that can bind to RNA sequences, cut DNA at precise locations, and fit within the chosen delivery mechanism [1].

AI-Powered Protein Generation

To accelerate its research, Metagenomi employs a class of generative AI known as protein language models (PLMs), specifically Progen2. This model, developed in 2022 by researchers at Salesforce, Johns Hopkins, and Columbia, is capable of synthesizing novel protein sequences. With approximately 800 million parameters, Progen2 is relatively small compared with modern large language models, making it well suited to running on specialized hardware like AWS's Inferentia 2 [1].
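The article does not describe Metagenomi's actual inference code, and the sketch below is not Progen2 itself. It is only a toy illustration of the autoregressive loop a protein language model uses to synthesize sequences: at each step the model assigns probabilities to the next amino acid given the residues generated so far, and one is sampled. A uniform stand-in replaces the learned probabilities so the loop itself is visible.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

def toy_next_token_probs(prefix):
    """Stand-in for a trained PLM: a real model would condition on the
    prefix and return learned probabilities; this returns uniform ones."""
    p = 1.0 / len(AMINO_ACIDS)
    return {aa: p for aa in AMINO_ACIDS}

def sample_sequence(length, seed=0):
    """Autoregressively sample a protein sequence of the given length."""
    rng = random.Random(seed)
    seq = []
    for _ in range(length):
        probs = toy_next_token_probs("".join(seq))
        residues, weights = zip(*probs.items())
        seq.append(rng.choices(residues, weights=weights, k=1)[0])
    return "".join(seq)

protein = sample_sequence(50)
print(protein)
```

Because each candidate sequence is generated token by token in a single pass, the workload is a natural fit for cheap, non-interactive batch inference rather than latency-sensitive serving.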

AWS Inferentia 2 vs. Nvidia L40S

The trial conducted by Metagenomi compared AWS's Inferentia 2 accelerator with Nvidia's L40S GPU. While the L40S boasts higher raw performance figures, the Inferentia 2 proved more cost-effective for Metagenomi's specific workload. Launched in 2023, the Inferentia 2 features 32 GB of HBM, 820 GB/s of memory bandwidth, and 190 teraFLOPS of 16-bit performance [1].
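A back-of-envelope estimate shows why a model of Progen2's size sits comfortably within these specs. Assuming FP16 weights and single-stream autoregressive decoding in which every generated token streams all weights from memory once (a simplified roofline view that ignores the KV cache, batching, and compute limits), memory bandwidth sets an upper bound on token throughput:

```python
# Roofline-style upper bound for single-stream decode on Inferentia 2,
# using the figures quoted in the article.
params = 800e6               # ~800M parameters (Progen2)
bytes_per_param = 2          # FP16 weights
weight_bytes = params * bytes_per_param   # 1.6 GB of weights

mem_bandwidth = 820e9        # bytes/s (Inferentia 2, per the article)

# Each generated token streams all weights once, so:
tokens_per_sec = mem_bandwidth / weight_bytes
print(f"~{tokens_per_sec:.0f} tokens/s upper bound")
```

The 1.6 GB of weights also fits many times over in the chip's 32 GB of HBM, leaving headroom to batch many candidate sequences at once, which is where real throughput gains come from.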

Cost Savings and Increased Productivity

Metagenomi's pipeline combines AWS Batch, Amazon's managed batch-computing service, with spot instances to achieve significant cost reductions. The lower interruption rate of Inferentia 2 spot instances (5%, versus 20% for the L40S) contributes to greater availability and efficiency. Chris Brown, VP of discovery at Metagenomi, emphasized that this cost-effectiveness translates directly into more science, enabling his team to run multiple experiments daily or weekly instead of one project per year [1] [2].
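The interruption rates above are from the article; the hourly spot prices and throughput numbers below are hypothetical, chosen only to illustrate how interruption-adjusted cost per unit of work might be compared. The sketch assumes interrupted work is simply lost and redone, a simplification that checkpointing would soften:

```python
def effective_cost_per_unit(spot_price_per_hr, units_per_hr, interruption_rate):
    """Cost per completed unit of work on spot capacity, assuming work
    in progress at interruption is lost and must be redone."""
    completed_fraction = 1.0 - interruption_rate
    return spot_price_per_hr / (units_per_hr * completed_fraction)

# Prices and throughputs here are placeholders, not published figures.
inf2 = effective_cost_per_unit(spot_price_per_hr=0.5,
                               units_per_hr=100, interruption_rate=0.05)
l40s = effective_cost_per_unit(spot_price_per_hr=1.2,
                               units_per_hr=120, interruption_rate=0.20)
savings = 1 - inf2 / l40s
print(f"Inferentia 2: ${inf2:.4f}/unit, L40S: ${l40s:.4f}/unit, "
      f"savings {savings:.0%}")
```

The point of the model is that a chip with lower raw throughput can still win on cost per completed unit once price and interruption losses are factored in.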

Broader Implications for AI in Biotechnology

This collaboration between Metagenomi and AWS represents one of the first major applications of Amazon's custom AI chips beyond chatbots and large language models. It demonstrates the potential of specialized AI accelerators in biotechnology research, particularly for tasks that don't require interactive performance but benefit from cost-effective batch processing [2].

Future Prospects

The success of this partnership opens up new possibilities for the use of custom AI chips in various scientific fields. As biotechnology companies continue to harness the power of AI for drug discovery and gene therapy development, the demand for cost-effective and efficient computing solutions is likely to grow. This trend could reshape the landscape of AI hardware in scientific research, potentially challenging the dominance of traditional GPU manufacturers in certain niche applications.

TheOutpost.ai

© 2025 Triveous Technologies Private Limited