3 Sources
[1]
US, European regulators set principles for 'good AI practice' in drug development
Jan 14 (Reuters) - The U.S. Food and Drug Administration and the European Medicines Agency jointly issued principles for safe and responsible use of artificial intelligence in developing medicines, aiming to speed up innovation while safeguarding patients.

The principles, issued on Wednesday, offer broad guidance on how AI should be used to generate and monitor evidence across a drug's lifecycle, from early research and clinical trials to manufacturing and safety surveillance.

The move comes as regulators push to expand the use of AI in drug discovery and development to shorten timelines and reduce animal testing. The FDA has a generative AI tool, Elsa, aimed at improving efficiency across its operations, including scientific reviews.

European guideline work is already under way, building on the EMA's 2024 AI reflection paper, and aligns with the agency's mission to promote safe and responsible use of AI. The joint initiative follows an FDA-EU bilateral meeting in 2024.

"The guiding principles of good AI practice in drug development are a first step of a renewed EU-US cooperation... to preserve our leading role in the global innovation race, while ensuring the highest level of patient safety," European Commissioner for Health and Animal Welfare Oliver Varhelyi said.

Several drugmakers are increasingly relying on sophisticated AI models to design and discover new treatments and are striking deals to gain the technical know-how. Earlier this week, AstraZeneca (AZN.L) agreed to buy Boston-based Modella AI to accelerate oncology drug research, while AI chip giant Nvidia (NVDA.O) and Eli Lilly (LLY.N) said they would spend $1 billion building a new joint research lab in the San Francisco Bay area over five years.

Reporting by Puyaan Singh in Bengaluru; Editing by Shilpi Majumdar
[2]
US, European Regulators Set Principles for 'Good AI Practice' in Drug Development
[3]
US, European regulators set principles for 'good AI practice' in drug development
The U.S. Food and Drug Administration and European Medicines Agency have jointly issued principles for Good AI Practice in drug development. These regulatory principles aim to guide the safe and responsible use of AI across the entire drug lifecycle while speeding up innovation and reducing animal testing.
The U.S. Food and Drug Administration (FDA) and the European Medicines Agency (EMA) have jointly issued principles establishing a framework for Good AI Practice in pharmaceutical innovation [1]. Released on Wednesday, these regulatory principles mark a significant step in EU-US cooperation to guide the safe and responsible use of AI in developing medicines while maintaining patient safety standards [2].
Source: Reuters
The principles offer broad guidance on how artificial intelligence should be deployed to generate and monitor evidence across the entire drug lifecycle, spanning early research, clinical trials, manufacturing, and safety surveillance [3]. This comprehensive approach aims to streamline the drug lifecycle from discovery through post-market monitoring.

Regulators are pushing to expand AI integration in medicine to shorten development timelines and reduce animal testing requirements [1]. The FDA has already implemented a generative AI tool called Elsa, designed to improve efficiency across its operations, including scientific reviews [2]. Meanwhile, European guideline work builds on the EMA's 2024 AI reflection paper, aligning with the agency's mission to promote responsible AI adoption [3].

"The guiding principles of good AI practice in drug development are a first step of a renewed EU-US cooperation... to preserve our leading role in the global innovation race, while ensuring the highest level of patient safety," said European Commissioner for Health and Animal Welfare Oliver Varhelyi [1].
The joint initiative follows an FDA-EU bilateral meeting in 2024 and comes as drugmakers increasingly rely on sophisticated AI models to design and discover new treatments [2]. Major pharmaceutical players are striking deals to gain technical know-how in drug discovery and accelerate medicine innovation.

Earlier this week, AstraZeneca agreed to acquire Boston-based Modella AI to accelerate oncology drug research [3]. In another major development, AI chip giant Nvidia and Eli Lilly announced they would spend $1 billion building a new joint research lab in the San Francisco Bay area over five years [1]. These investments signal the industry's commitment to AI integration in medicine and suggest that regulatory clarity will likely accelerate adoption across the pharmaceutical sector. As these principles take shape, stakeholders should watch for detailed implementation guidelines and how they impact the pace of AI-driven drug development approvals.

Summarized by Navi