3 Sources
[1]
AI and human scientists collaborate to discover new cancer drug combinations
University of Cambridge | June 4, 2025

An 'AI scientist', working in collaboration with human scientists, has found that combinations of cheap and safe drugs - used to treat conditions such as high cholesterol and alcohol dependence - could also be effective at treating cancer, a promising new approach to drug discovery. The research team, led by the University of Cambridge, used the GPT-4 large language model (LLM) to find hidden patterns buried in the mountains of scientific literature and identify potential new cancer drugs.

To test their approach, the researchers prompted GPT-4 to identify potential new drug combinations that could have a significant impact on a breast cancer cell line commonly used in medical research. They instructed it to avoid standard cancer drugs, identify drugs that would attack cancer cells while not harming healthy cells, and prioritise drugs that were affordable and approved by regulators. The drug combinations suggested by GPT-4 were then tested by human scientists, both in combination and individually, to measure their effectiveness against breast cancer cells.

In the first lab-based test, three of the 12 drug combinations suggested by GPT-4 worked better than current breast cancer drugs. The LLM then learned from these tests and suggested a further four combinations, three of which also showed promising results. The results, reported in the Journal of the Royal Society Interface, represent the first instance of a closed-loop system where experimental results guided an LLM, and LLM outputs - interpreted by human scientists - guided further experiments.

The researchers say that tools such as LLMs are not replacements for scientists, but could instead be supervised AI researchers, with the ability to originate, adapt and accelerate discovery in areas like cancer research. Often, LLMs such as GPT-4 return results that aren't true, known as hallucinations. But in scientific research, hallucinations can sometimes be a benefit, if they lead to new ideas that are worth testing.

"Supervised LLMs offer a scalable, imaginative layer of scientific exploration, and can help us as human scientists explore new paths that we hadn't thought of before," said Professor Ross King from Cambridge's Department of Chemical Engineering and Biotechnology, who led the research. "This can be useful in areas such as drug discovery, where there are many thousands of compounds to search through."

Based on the prompts provided by the human scientists, GPT-4 selected drugs based on the interplay between biological reasoning and hidden patterns in the scientific literature.

"This is not automation replacing scientists, but a new kind of collaboration," said Dr. Hector Zenil, co-author from King's College London. "Guided by expert prompts and experimental feedback, the AI functioned like a tireless research partner, rapidly navigating an immense hypothesis space and proposing ideas that would take humans alone far longer to reach."

The hallucinations - normally viewed as flaws - became a feature, generating unconventional combinations worth testing and validating in the lab. The human scientists inspected the mechanistic reasons the LLM found to suggest these combinations in the first place, feeding the system back and forth in multiple iterations. By exploring subtle synergies and overlooked pathways, GPT-4 helped identify six promising drug pairs, all tested through lab experiments.
Among the combinations, simvastatin (commonly used to lower cholesterol) and disulfiram (used in alcohol dependence) stood out against breast cancer cells. Some of these combinations show potential for further research in therapeutic repurposing. These drugs, while not traditionally associated with cancer care, could be potential cancer treatments, although they would first have to go through extensive clinical trials.

"This study demonstrates how AI can be woven directly into the iterative loop of scientific discovery, enabling adaptive, data-informed hypothesis generation and validation in real time," said Zenil.

"The capacity of supervised LLMs to propose hypotheses across disciplines, incorporate prior results, and collaborate across iterations marks a new frontier in scientific research," said King. "An AI scientist is no longer a metaphor without experimental validation: it can now be a collaborator in the scientific process."

The research was supported in part by the Knut and Alice Wallenberg Foundation and the UK Engineering and Physical Sciences Research Council (EPSRC).

Source: University of Cambridge

Journal reference: Abdel-Rehim, A., et al. (2025) Scientific Hypothesis Generation by Large Language Models: Laboratory Validation in Breast Cancer Treatment. Journal of The Royal Society Interface. doi.org/10.1098/rsif.2024.0674
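To make the closed-loop workflow described above more concrete, here is a minimal sketch in Python of how such a cycle could be organised. It is an illustration only, not the authors' implementation: `query_llm`, `run_lab_assay` and the prompt wording are hypothetical stand-ins for the GPT-4 calls and wet-lab assays, stubbed so the example runs on its own.

```python
import random

# Hypothetical stand-ins for the real components. An actual system would call a
# GPT-4 API and run wet-lab assays here; these stubs only make the sketch runnable.
def query_llm(prompt: str) -> list[tuple[str, str]]:
    """Pretend LLM call: returns candidate drug pairs (stubbed, not real model output)."""
    return [("simvastatin", "disulfiram"), ("dipyridamole", "mebendazole")]

def run_lab_assay(pair: tuple[str, str]) -> float:
    """Pretend wet-lab assay: returns a synergy score (random placeholder value)."""
    return random.uniform(-2.0, 12.0)

def build_prompt(previous_results: dict) -> str:
    """Compose the constrained hypothesis-generation prompt (wording invented for the example)."""
    prompt = (
        "Propose drug pairs likely to kill MCF7 breast cancer cells while sparing "
        "healthy MCF10A cells. Avoid standard cancer drugs and prefer cheap, "
        "regulator-approved compounds. Give a mechanistic rationale for each pair."
    )
    if previous_results:
        summary = "; ".join(f"{a}+{b}: {s:.1f}" for (a, b), s in previous_results.items())
        prompt += f" Previous lab results: {summary}. Refine your proposals accordingly."
    return prompt

def closed_loop(rounds: int = 2) -> dict:
    """Alternate LLM hypothesis generation with (simulated) experimental feedback."""
    results: dict[tuple[str, str], float] = {}
    for _ in range(rounds):
        for pair in query_llm(build_prompt(results)):
            results[pair] = run_lab_assay(pair)
    return results

if __name__ == "__main__":
    print(closed_loop())
```

The essential design point is that each round's measured results are folded back into the next prompt, which is what turns one-shot hypothesis generation into the iterative, closed-loop process the study describes.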
[2]
'AI scientist' suggests combinations of widely available non-cancer drugs can kill cancer cells
An "AI scientist," working in collaboration with human scientists, has found that combinations of cheap and safe drugs -- used to treat conditions such as high cholesterol and alcohol dependence -- could also be effective at treating cancer, a promising new approach to drug discovery. The research team, led by the University of Cambridge, used the GPT-4 large language model (LLM) to identify hidden patterns buried in the mountains of scientific literature to identify potential new cancer drugs. To test their approach, the researchers prompted GPT-4 to identify potential new drug combinations that could have a significant impact on a breast cancer cell line commonly used in medical research. They instructed it to avoid standard cancer drugs, identify drugs that would attack cancer cells while not harming healthy cells, and prioritize drugs that were affordable and approved by regulators. The drug combinations suggested by GPT-4 were then tested by human scientists, both in combination and individually, to measure their effectiveness against breast cancer cells. In the first lab-based test, three of the 12 drug combinations suggested by GPT-4 worked better than current breast cancer drugs. The LLM then learned from these tests and suggested a further four combinations, three of which also showed promising results. The results, reported in the Journal of the Royal Society Interface, represent the first instance of a closed-loop system where experimental results guided an LLM, and LLM outputs -- interpreted by human scientists -- guided further experiments. The researchers say that tools such as LLMs are not replacements for scientists, but could instead be supervised AI researchers, with the ability to originate, adapt and accelerate discovery in areas like cancer research. Often, LLMs such as GPT-4 return results that aren't true, known as hallucinations. But in scientific research, hallucinations can sometimes be a benefit, if they lead to new ideas that are worth testing. "Supervised LLMs offer a scalable, imaginative layer of scientific exploration, and can help us as human scientists explore new paths that we hadn't thought of before," said Professor Ross King from Cambridge's Department of Chemical Engineering and Biotechnology, who led the research. "This can be useful in areas such as drug discovery, where there are many thousands of compounds to search through." Based on the prompts provided by the human scientists, GPT-4 selected drugs based on the interplay between biological reasoning and hidden patterns in the scientific literature. "This is not automation replacing scientists, but a new kind of collaboration," said co-author Dr. Hector Zenil from King's College London. "Guided by expert prompts and experimental feedback, the AI functioned like a tireless research partner -- rapidly navigating an immense hypothesis space and proposing ideas that would take humans alone far longer to reach." The hallucinations -- normally viewed as flaws -- became a feature, generating unconventional combinations worth testing and validating in the lab. The human scientists inspected the mechanistic reasons the LLM found to suggest these combinations in the first place, feeding the system back and forth in multiple iterations. By exploring subtle synergies and overlooked pathways, GPT-4 helped identify six promising drug pairs, all tested through lab experiments. 
Among the combinations, simvastatin (commonly used to lower cholesterol) and disulfiram (used in alcohol dependence) stood out against breast cancer cells. Some of these combinations show potential for further research in therapeutic repurposing. These drugs, while not traditionally associated with cancer care, could be potential cancer treatments, although they would first have to go through extensive clinical trials. "This study demonstrates how AI can be woven directly into the iterative loop of scientific discovery, enabling adaptive, data-informed hypothesis generation and validation in real time," said Zenil. "The capacity of supervised LLMs to propose hypotheses across disciplines, incorporate prior results, and collaborate across iterations marks a new frontier in scientific research," said King. "An AI scientist is no longer a metaphor without experimental validation: it can now be a collaborator in the scientific process."
[3]
"AI scientist" discovers common drugs that can kill cancer cells - Earth.com
While AI has already transformed fields like image recognition and translation, researchers are now exploring its potential in discovery-driven tasks, such as working out how different drugs interact with cancer cells. One of the most exciting applications is hypothesis generation, something that was once thought to be the domain of human curiosity alone.

A recent study, led by researchers from the University of Cambridge in partnership with King's College London and Arctoris Ltd, tested this idea. Could AI suggest treatments for breast cancer using drugs not originally meant to fight cancer? Could those suggestions lead to real, testable breakthroughs in the lab? The results suggest the answer might be yes.

The research focused on GPT-4, a large language model (LLM) that is trained on vast amounts of internet text. The team designed prompts that asked GPT-4 to generate pairs of drugs that could work against MCF7 breast cancer cells but not harm healthy cells (MCF10A). They also restricted the model from using known cancer drugs and instructed it to prioritize options that were affordable and already approved for use in humans.

"This is not automation replacing scientists, but a new kind of collaboration," said Dr. Hector Zenil from King's College London. "Guided by expert prompts and experimental feedback, the AI functioned like a tireless research partner, rapidly navigating an immense hypothesis space and proposing ideas that would take humans alone far longer to reach."

In its first round, GPT-4 proposed 12 unique drug combinations. Interestingly, all combinations included drugs not traditionally associated with cancer therapy. These included medications for conditions like high cholesterol, parasitic infections, and alcohol dependence. Even so, these combinations were not arbitrary. GPT-4 provided rationales for each pairing, often tying together biological pathways in unexpected ways.

The next step was to test the drug pairs in the lab. Scientists measured two things: how well each combination attacked MCF7 cells and how much damage was caused to MCF10A cells. They also evaluated whether the drug pairs worked better together than separately, a property known as synergy.

Three combinations stood out for having better results than standard cancer therapies. One involved simvastatin and disulfiram. Another paired dipyridamole with mebendazole. A third involved itraconazole and atenolol. These drug pairs were not only effective against MCF7 cells, but they worked without overly harming healthy cells.

"Supervised LLMs offer a scalable, imaginative layer of scientific exploration, and can help us as human scientists explore new paths that we hadn't thought of before," said Professor Ross King from Cambridge's Department of Chemical Engineering and Biotechnology, who led the research.

Following the first set of results, the researchers asked GPT-4 to analyze what had worked and suggest new ideas. They shared summaries of the lab findings and prompted the AI to propose four more drug combinations, including some involving known cancer drugs like fulvestrant. This time, the AI returned combinations such as disulfiram with quinacrine and mebendazole with quinacrine. Three of the four new suggestions again showed promising synergy scores. One of the most effective combinations was disulfiram with simvastatin, which achieved the highest synergy score of the entire study at over 10 on the HSA scale.
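The synergy figures quoted here are on the HSA (Highest Single Agent) scale. Under that model, a combination's synergy is simply how much more it inhibits the cancer cells than the better of its two component drugs alone. The snippet below is a generic illustration of that calculation with made-up numbers, not measurements from the study; the exact readouts and units the researchers used are not given in the article.

```python
def hsa_excess(effect_combo: float, effect_a: float, effect_b: float) -> float:
    """
    Highest Single Agent (HSA) excess: combination effect minus the stronger of
    the two single-drug effects. Positive values indicate synergy; zero or
    negative values mean the pair is no better than its best single agent.
    Effects here are percent inhibition of MCF7 cell viability (illustrative units).
    """
    return effect_combo - max(effect_a, effect_b)

# Made-up example values, not data from the study:
print(hsa_excess(effect_combo=72.0, effect_a=55.0, effect_b=48.0))  # 17.0 -> synergistic
print(hsa_excess(effect_combo=50.0, effect_a=55.0, effect_b=48.0))  # -5.0 -> no added benefit
```

On this kind of scale any score above zero already indicates synergy, so a score reported as "over 10" for disulfiram plus simvastatin marks the strongest combination effect seen in the study.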
The feedback loop, in which the AI suggests ideas, humans test them, and the results are fed back to the AI, represents a novel way of conducting science. The process no longer moves in one direction. Instead, it cycles, with both machine and human adjusting and improving as they learn from each iteration.

Among the twelve original combinations, six showed positive synergy scores for MCF7 cancer cells. These included unusual pairings like furosemide and mebendazole or disulfiram and hydroxychloroquine. Importantly, eight of these twelve combinations had greater effects on MCF7 cells than on MCF10A cells, indicating good specificity.

Some of the most toxic drugs to MCF7 cells included disulfiram, quinacrine, niclosamide, and dipyridamole. Disulfiram had the lowest IC50 value, meaning it required only a small dose to reduce cell viability. GPT-4's ability to find such effective non-cancer drugs and pair them meaningfully surprised even the researchers.

"This study demonstrates how AI can be woven directly into the iterative loop of scientific discovery, enabling adaptive, data-informed hypothesis generation and validation in real time," said Zenil.

GPT-4 sometimes makes mistakes. These are called hallucinations: statements not supported by its training data. In most cases, hallucinations are flaws. But in hypothesis generation, they can be productive. In this study, one such hallucination involved the claim that itraconazole affects cell membrane integrity in human cells. While this is true for fungi, human cells do not use the same biological pathway. Yet this flawed idea still led to successful experiments.

"The capacity of supervised LLMs to propose hypotheses across disciplines, incorporate prior results, and collaborate across iterations marks a new frontier in scientific research," said King.

The research team believes that AI and lab automation could eventually reduce the cost of personalized medicine. Cancer treatment might one day involve a custom research project for each patient. Instead of general prescriptions, therapies could be tested and tailored in near real time. The cost of running a lab remains high, but AI like GPT-4 drastically cuts down the time and effort required to generate useful hypotheses. With further advances in robotics, the physical testing process might also become cheaper.

"Our empirical results demonstrate that the GPT-4 succeeded in its primary task of forming novel and useful hypotheses," the authors concluded.

This study shows that AI can do more than summarize or analyze. It can take part in generating new scientific knowledge. GPT-4 didn't just crunch numbers. It offered unexpected ideas, learned from outcomes, and improved its suggestions. The combinations it proposed still need clinical trials. They are far from becoming approved treatments. But their success in lab settings highlights the potential of repurposing safe, existing drugs for new uses, something that could save years of development time.
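The IC50 values mentioned in the excerpt above are the drug concentrations at which cell viability falls to half its untreated level; a lower IC50 means a more potent drug against that cell line. As a generic illustration only (this is not the study's own fitting procedure, and the data are invented), an IC50 can be estimated by fitting a simple two-parameter dose-response curve to viability measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def viability(dose, ic50, hill):
    """Two-parameter dose-response curve: fraction of cells surviving at each dose."""
    return 1.0 / (1.0 + (dose / ic50) ** hill)

# Invented viability measurements for a single drug (illustrative, not study data).
doses = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])         # concentration, e.g. micromolar
measured = np.array([0.98, 0.95, 0.85, 0.60, 0.35, 0.15, 0.05])  # fraction of cells still viable

(ic50, hill), _ = curve_fit(viability, doses, measured, p0=[0.5, 1.0], bounds=(0, np.inf))
print(f"Estimated IC50 ~ {ic50:.2f} (lower IC50 = less drug needed to halve viability)")
```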
Researchers at the University of Cambridge use GPT-4 to identify potential new cancer treatments, combining non-cancer drugs to effectively target breast cancer cells in a groundbreaking closed-loop system of AI-human collaboration.
In a groundbreaking study, researchers from the University of Cambridge have successfully employed artificial intelligence to identify promising new cancer drug combinations. The team, led by Professor Ross King, utilized the GPT-4 large language model (LLM) to analyze vast amounts of scientific literature and uncover hidden patterns that could lead to potential cancer treatments [1].
The researchers instructed GPT-4 to identify potential drug combinations that could effectively target a specific breast cancer cell line while avoiding harm to healthy cells. They specifically directed the AI to focus on affordable, regulator-approved drugs not traditionally associated with cancer treatment [2].
This novel approach resulted in the AI suggesting 12 unique drug combinations in its first round. Remarkably, all of these combinations included medications not typically used in cancer therapy, such as drugs for high cholesterol, parasitic infections, and alcohol dependence [3].
Human scientists then tested the AI-suggested drug combinations in laboratory experiments. The results were striking:
- In the first round of lab tests, three of the 12 GPT-4-suggested combinations worked better than current breast cancer drugs [1].
- Six of the twelve combinations showed positive synergy against MCF7 cancer cells, and eight of the twelve affected MCF7 cells more strongly than healthy MCF10A cells [3].
Among the most effective combinations were:
- Simvastatin (a cholesterol-lowering drug) with disulfiram (used for alcohol dependence), which achieved the highest synergy score of the study [3]
- Dipyridamole with mebendazole [3]
- Itraconazole with atenolol [3]
This study represents the first instance of a closed-loop system where experimental results guided an LLM, and the LLM's outputs - interpreted by human scientists - guided further experiments. Dr. Hector Zenil from King's College London emphasized that this approach is not about replacing scientists but creating a new kind of collaboration [2].
The process involved multiple iterations:
1. GPT-4 was prompted to propose 12 drug combinations, with mechanistic rationales, under the constraints set by the human scientists [3].
2. Human scientists tested the combinations in the lab against MCF7 cancer cells and healthy MCF10A cells [3].
3. Summaries of the lab results were fed back to GPT-4, which proposed four further combinations; three of these also showed promising synergy [1].
This iterative cycle demonstrates how AI can be integrated directly into the scientific discovery process, enabling adaptive, data-informed hypothesis generation and validation in real time [3].
While these findings are promising, it's important to note that the identified drug combinations would need to undergo extensive clinical trials before being considered for cancer treatment. However, the potential for repurposing existing, affordable drugs for cancer therapy is significant [1].
The research team believes that this AI-driven approach, combined with lab automation, could eventually reduce the cost of personalized medicine. In the future, cancer treatment might involve custom research projects for individual patients, with therapies tested and tailored in near real time [3].
As Professor King concluded, "An AI scientist is no longer a metaphor without experimental validation: it can now be a collaborator in the scientific process" [2]. This study marks a significant step forward in the integration of AI into scientific research, potentially accelerating discoveries in cancer treatment and beyond.