Curated by THEOUTPOST
On Thu, 8 May, 12:06 AM UTC
8 Sources
[1]
OpenAI and the FDA are reportedly discussing AI for drug evaluations
OpenAI has met with officials from the U.S. Food and Drug Administration to discuss the agency's use of AI to speed up drug evaluations, Wired reported on Wednesday. According to the report, OpenAI and the FDA have discussed a project called cderGPT, which appears to be an AI tool for the Center for Drug Evaluation and Research (CDER), the FDA center that regulates over-the-counter and prescription drugs in the U.S. Associates from Elon Musk's DOGE have reportedly been part of the talks as well. It's not uncommon for drug development processes to take more than a decade to complete. OpenAI's work with the FDA aims to accelerate a small portion of that timeline, toward the end, per Wired. AI has long been touted as a potential accelerant that could be used throughout drug development, making some notoriously slow steps more efficient. That said, there are unanswered questions about how to control for the unreliability of AI models.
[2]
OpenAI and the FDA Are Holding Talks About Using AI In Drug Evaluation
High-ranking OpenAI employees have met with the FDA multiple times in recent weeks to discuss AI and a project called cderGPT.
The Food and Drug Administration (FDA) has been meeting with OpenAI to discuss the agency's use of AI, according to sources with knowledge of the meetings. The meetings appear to be part of a broader effort at the FDA to use this technology to speed up the drug approval process. "Why does it take over 10 years for a new drug to come to market?" wrote FDA commissioner Marty Makary on X on Wednesday. "Why are we not modernized with AI and other things? We've just completed our first AI-assisted scientific review for a product and that's just the beginning." The remarks followed an annual meeting of the American Hospital Association earlier this week, where Makary spoke about AI's potential to aid in the approval of new treatments for diabetes and certain types of cancer. Makary did not specify that OpenAI was part of this initiative. But sources close to the project say a small team from OpenAI has met with the FDA and two associates of Elon Musk's so-called Department of Government Efficiency multiple times in recent weeks. The group has discussed a project called cderGPT, a name that likely stands for Center for Drug Evaluation and Research GPT, after the FDA center that regulates over-the-counter and prescription drugs in the US. Jeremy Walsh, who was recently named as the FDA's first-ever AI officer, has led the discussions. So far, no contract has been signed. OpenAI declined to comment. Walsh has also met with Peter Bowman-Davis, an undergraduate student on leave from Yale who currently serves as the acting chief AI officer at the Department of Health and Human Services, to discuss the FDA's AI ambitions. Politico first reported the appointment of Bowman-Davis, who is part of Andreessen Horowitz's American Dynamism team.
When reached via email on Wednesday, Robert Califf, who served as FDA commissioner from 2016 to 2017 and again from 2022 through January, said the agency's review teams have been using AI for several years now. "It will be interesting to hear the details of which parts of the review were 'AI assisted' and what that means," he says. "There has always been a quest to shorten review times and a broad consensus that AI could help." Before Califf departed the agency, he said the FDA was considering the various ways AI could be used in internal operations. "Final reviews for approval are only one part of a much larger opportunity," he says. To be clear, using AI to assist in final drug reviews would represent a chance to compress just a small part of the notoriously long drug development timeline. The vast majority of drugs fail before ever coming up for FDA review. Rafael Rosengarten, CEO of Genialis, a precision oncology company, and a cofounder and board member of the Alliance for AI in Healthcare, says he's in favor of automating certain tasks related to the drug review process but says there should be policy guidance around what kind of data is used to train AI models and what kind of model performance is considered acceptable. "These machines are incredibly adept at learning information, but they have to be trained in a way so they're learning what we want them to learn," he says.
[3]
OpenAI and US FDA hold talks about using AI in drug evaluation, Wired reports
May 7 (Reuters) - OpenAI and the U.S. Food and Drug Administration have been meeting to discuss the health regulator's use of AI, technology news platform Wired reported on Wednesday, citing sources with knowledge of the meetings. Sources close to the project say a small team from OpenAI has met with the FDA and two associates of Elon Musk's so-called Department of Government Efficiency multiple times in recent weeks, according to the report. Reporting by Christy Santhosh in Bengaluru; Editing by Leroy Leo
[4]
FDA's plan to roll out AI agencywide raises questions
Why it matters: The plan raises urgent questions about what's being done to secure the vast amount of proprietary company data that's part of the process and whether sufficient guardrails are in place.
Driving the news: The FDA is racing to roll out generative AI across all its centers to augment employees' work following a successful pilot, officials said.
The big picture: Trump's overhaul of federal AI policy -- ditching Biden-era guardrails in favor of speed and dominance -- has turned the government into a tech testing ground. Several experts told Axios the integration of AI at the FDA is a good move, but the speed of the rollout and lack of specifics raise multiple questions.
Zoom in: Another key question is which models are being used to train the AI, and what inputs are being provided for specialized fine tuning, Eric Topol, founder of the Scripps Research Translational Institute, told Axios. Last week, Wired reported the FDA was in discussions with OpenAI about a project called cderGPT, which it said seems to be an AI tool for the Center for Drug Evaluation and Research (CDER).
The bottom line: As the Trump administration turns federal agencies into AI proving grounds, the FDA's rapid deployment will be an early test of whether innovation can be balanced with risks.
[5]
The FDA Will Use AI to Accelerate Approving Drugs
The Food and Drug Administration just announced that it will immediately start using AI across all of its centers, after completing a new generative AI pilot for scientific reviewers. Supposedly, the AI tool will speed up the FDA's drug review process by reducing the time its scientists have to spend doing tedious, repetitive tasks -- though, given AI's track record of constantly hallucinating, these claims warrant plenty of scrutiny. "This is a game-changer technology that has enabled me to perform scientific review tasks in minutes that used to take three days," said Jinzhong Liu, a deputy director in the FDA's Center for Drug Evaluation and Research (CDER), in a statement. FDA commissioner Martin Makary has directed that all FDA centers should achieve full AI integration by June 30, a questionably aggressive timeline. "By that date, all centers will be operating on a common, secure generative AI system integrated with FDA's internal data platforms," the agency said in its announcement. The announcement comes just a day after Wired reported that the FDA and OpenAI were holding talks to discuss the agency's use of AI. Notably, the FDA's new statement makes no mention of OpenAI or its potential involvement. Behind the scenes, however, Wired sources say that a team from the ChatGPT maker met with the FDA and two associates from Elon Musk's so-called Department of Government Efficiency multiple times in recent weeks, to discuss a project called "cderGPT." The name is almost certainly a reference to the FDA's abovementioned CDER, which regulates drugs sold in the US. This may have been a long time coming. Wired notes that the FDA sponsored a fellowship in 2023 to develop large language models for internal use. And according to Robert Califf, who served as FDA commissioner between 2016 and 2017, the agency's review teams have already been experimenting with AI for several years.
"It will be interesting to hear the details of which parts of the review were 'AI assisted' and what that means," Califf told Wired. "There has always been a quest to shorten review times and a broad consensus that AI could help." The agency was considering using AI in other aspects of its operations, too. "Final reviews for approval are only one part of a much larger opportunity," Califf added. Makary, who was appointed commissioner by President Donald Trump, has frequently expressed his enthusiasm for the technology. "Why does it take over ten years for a new drug to come to market?" he tweeted on Wednesday. "Why are we not modernized with AI and other things?" The FDA news parallels a broader trend of AI adoption in federal agencies during the Trump administration. In March, OpenAI announced a version of its chatbot called ChatGPT Gov designed to be secure enough to process sensitive government information. Musk has pushed to fast-track the development of another AI chatbot for the US General Services Administration, while using the technology to try to rewrite the Social Security computer system. Yet, the risks of using the technology in a medical context are concerning, to say the least. Speaking to Wired, an ex-FDA staffer who has tested ChatGPT as a clinical tool pointed out the chatbot's proclivity for making up convincing-sounding lies -- a problem that won't go away anytime soon. "Who knows how robust the platform will be for these reviewers' tasks," the former FDA employee told the magazine.
[6]
OpenAI and DOGE talking AI drug evaluation with FDA
OpenAI has met with officials from the U.S. Food and Drug Administration (FDA) to discuss using AI to speed up drug evaluations, according to a report by Wired. The discussions centered on a project called cderGPT, an AI tool designed for the Center for Drug Evaluation and Research (CDER), which regulates over-the-counter and prescription drugs in the U.S. Associates from Elon Musk's so-called Department of Government Efficiency (DOGE) were also reportedly part of the talks. OpenAI's collaboration with the FDA aims to accelerate the drug development process, which often takes over a decade to complete. AI is seen as a potential tool to make some of the slower steps in drug development more efficient, particularly toward the end of the timeline. The use of AI in drug development has long been touted as a potential accelerant, but questions remain about how to control for the unreliability of AI models. The cderGPT project is part of a broader effort to leverage AI to streamline the notoriously slow drug evaluation process.
[7]
FDA Explores AI for Drug Approvals, Faces Oversight Questions
The US Food and Drug Administration (FDA) is exploring how artificial intelligence could accelerate drug approvals. A recent WIRED report revealed that the agency met multiple times with OpenAI to discuss this possibility, including a project referred to as cderGPT, a likely reference to the Center for Drug Evaluation and Research. FDA Commissioner Marty Makary recently announced that the agency had completed its first "AI-assisted scientific review." He didn't explain which tools were involved, how the AI influenced the decision, or what kind of safeguards were in place. OpenAI declined to comment, and the agency hasn't signed any formal agreement yet. Even so, these discussions point to growing interest in using generative AI in regulatory work. India has already begun confronting similar questions. The Indian Council of Medical Research (ICMR) has issued ethical guidelines for deploying AI in healthcare. Separately, a recent report from the Centre for Internet and Society (CIS) flags deep concerns about how healthcare AI is being built, validated, and audited in practice. Together, these two frameworks help assess what's at stake as the FDA moves ahead. FDA officials appear to be treating this AI collaboration as an internal research project. Jeremy Walsh, the agency's first AI officer, has led the effort. He's worked closely with Peter Bowman-Davis, the acting AI officer at the Department of Health and Human Services. The meetings with OpenAI explored how AI could assist in reviewing treatments for cancer, diabetes, and other diseases. Former FDA Commissioner Robert Califf acknowledged that the agency has experimented with AI in the past, but said the current use case needs clarification. What does "AI-assisted" actually mean in this context? That question also came up in conversations with the industry. 
Rafael Rosengarten, CEO of precision oncology company Genialis, said AI might help with basic administrative tasks, but warned that regulators need to test models rigorously and train them on relevant data before they play a role in critical reviews. Others raised similar concerns. A former FDA staffer pointed out that tools like ChatGPT often generate confident but incorrect answers, a tendency that becomes more dangerous when agencies start using them in scientific evaluations. The FDA has not said which model it used, how it trained or validated it, or whether it relied on proprietary or public data. India's ethical guidelines already set expectations for how AI should function in medical settings. The ICMR calls for transparency around how AI models work, informed consent from patients, and thorough testing to make sure AI systems are accurate and fair. It also recommends monitoring AI tools even after deployment to catch unforeseen issues. These principles aim to build trust in how AI is used in public health. By contrast, the FDA's pilot appears to be moving forward without public guidelines or visible oversight. The ICMR's guidelines offer a strong ethical baseline, but India still lacks the regulatory tools to make them effective. The CIS report shows that most AI audits in India are voluntary and follow no standard format. It also warns that developers often use datasets sourced from outside the country, which can introduce bias when those AI models are applied in local contexts. While India has put ethical guidance in place, it hasn't created the compliance infrastructure needed to support it. The US, meanwhile, is pushing forward on AI deployment without establishing those principles at all. The FDA has begun using AI in drug reviews but has not clarified what role the AI model played, how it tested the system, or whether it notified people affected by the decision.
It has held repeated meetings with OpenAI, yet hasn't released any public roadmap for oversight. India has laid out what responsible AI in healthcare should look like. It calls for transparency, consent, validation, and long-term monitoring. But in practice, most AI tools in Indian healthcare aren't held to these standards. Audits are informal. Datasets are imported. Regulations are still catching up. Both countries are responding to the same question: how does one govern AI in life-or-death decisions? The FDA is piloting new AI tools behind closed doors. India has published ethical rules but isn't enforcing them. Each approach reveals a different side of the same problem.
[8]
OpenAI and US FDA hold talks about using AI in drug evaluation, Wired reports
(Reuters) -OpenAI and the U.S. Food and Drug Administration have been meeting to discuss the health regulator's use of AI, technology news platform Wired reported on Wednesday, citing sources with knowledge of the meetings. Sources close to the project say a small team from OpenAI has met with the FDA and two associates of Elon Musk's so-called Department of Government Efficiency multiple times in recent weeks, according to the report. (Reporting by Christy Santhosh in Bengaluru; Editing by Leroy Leo)
The FDA is in talks with OpenAI to implement AI in drug evaluation processes, aiming to speed up approvals. This move raises questions about data security and the reliability of AI in critical healthcare decisions.
The U.S. Food and Drug Administration (FDA) is reportedly in discussions with OpenAI to integrate artificial intelligence into its drug evaluation processes. This collaboration aims to accelerate the notoriously lengthy drug approval timeline, which can often exceed a decade [1][2].
At the center of these talks is a project called cderGPT, likely referring to an AI tool for the Center for Drug Evaluation and Research (CDER), which regulates over-the-counter and prescription drugs in the U.S. [1][2]. The discussions have involved a small team from OpenAI, FDA officials, and associates from Elon Musk's Department of Government Efficiency [2].
Jeremy Walsh, the FDA's first-ever AI officer, has been leading these discussions. Additionally, Peter Bowman-Davis, an undergraduate student on leave from Yale serving as the acting chief AI officer at the Department of Health and Human Services, has been involved in conversations about the FDA's AI ambitions [2].
FDA Commissioner Marty Makary has expressed enthusiasm for modernizing the drug approval process with AI, stating that the agency has already completed its first AI-assisted scientific review for a product [2]. The FDA aims to achieve full AI integration across all its centers by June 30, an ambitious timeline that has raised some concerns [5].
While the integration of AI in drug evaluation processes holds promise, it also raises several questions:
Data Security: There are concerns about securing the vast amount of proprietary company data involved in the drug approval process [4].
AI Reliability: Given the critical nature of drug approvals, the potential for AI hallucinations and errors is a significant concern [5].
Training Data: Questions remain about which models are being used to train the AI and what inputs are being provided for specialized fine-tuning [4].
Regulatory Framework: There is a need for policy guidance on acceptable AI model performance and the types of data used for training [2].
The FDA's AI initiative is part of a larger trend of AI adoption in federal agencies during the Trump administration. This includes OpenAI's ChatGPT Gov, designed to process sensitive government information, and efforts to use AI in other departments such as the General Services Administration and the Social Security Administration [5].
Rafael Rosengarten, CEO of Genialis and a board member of the Alliance for AI in Healthcare, supports automating certain tasks in the drug review process but emphasizes the need for careful implementation [2]. Former FDA Commissioner Robert Califf noted that the agency has been experimenting with AI for several years and sees broader opportunities beyond final reviews [2][5].
As the FDA moves forward with its AI integration plans, balancing innovation with potential risks will be crucial. The outcome of this initiative could set a precedent for AI use in critical government functions and healthcare decision-making processes.
© 2025 TheOutpost.AI All rights reserved