Curated by THEOUTPOST
On Sat, 5 Apr, 12:07 AM UTC
2 Sources
[1]
How can science benefit from AI? Risks?
Researchers from chemistry, biology, and medicine are increasingly turning to AI models to develop new hypotheses. However, it is often unclear on what basis the algorithms arrive at their conclusions and to what extent those conclusions can be generalized. A publication by the University of Bonn now warns of misunderstandings in handling artificial intelligence. At the same time, it highlights the conditions under which researchers can most likely have confidence in the models. The study has been published in the journal Cell Reports Physical Science.

Adaptive machine learning algorithms are incredibly powerful. Nevertheless, they have a disadvantage: how machine learning models arrive at their predictions is often not apparent from the outside. Suppose you feed an artificial intelligence with photos of several thousand cars. If you now present it with a new image, it can usually identify reliably whether the picture shows a car or not. But why is that? Has it really learned that a car has four wheels, a windshield, and an exhaust? Or is its decision based on criteria that are actually irrelevant, such as the antenna on the roof? If that were the case, it could also classify a radio as a car.

AI models are black boxes

"AI models are black boxes," highlights Prof. Dr. Jürgen Bajorath. "As a result, one should not blindly trust their results and draw conclusions from them." The computational chemistry expert heads the AI in Life Sciences department at the Lamarr Institute for Machine Learning and Artificial Intelligence and is also in charge of the Life Science Informatics program at the Bonn-Aachen International Center for Information Technology (b-it) at the University of Bonn. In the current publication, he investigated the question of when one can most likely rely on the algorithms, and, conversely, when not.

The concept of "explainability" plays an important role in this context. Metaphorically speaking, it refers to efforts within AI research to drill a peephole into the black box: the algorithm should reveal the criteria it uses as a basis, be it the four wheels or the antenna. "Opening the black box is currently a central topic in AI research," says Bajorath. "Some AI models are developed exclusively to make the results of others more comprehensible."

Explainability, however, is only one aspect; the question of which conclusions can be drawn from the decision-making criteria chosen by a model is equally important. If the algorithm indicates that it based its decision on the antenna, a human knows immediately that this feature is poorly suited for identifying cars. Yet adaptive models are generally used precisely to identify correlations in large data sets that humans might not even notice. We are then like aliens who do not know what makes a car: an alien would be unable to say whether or not an antenna is a good criterion.

Chemical language models suggest new compounds

"There is another question that we always have to ask ourselves when using AI procedures in science," stresses Bajorath, who is also a member of the Transdisciplinary Research Area (TRA) "Modelling": "How interpretable are the results?" Chemical language models are currently a hot topic in chemistry and pharmaceutical research. It is possible, for instance, to feed them with many molecules that share a certain biological activity. Based on these input data, the model learns and ideally suggests a new molecule that also has this activity but a new structure. This is referred to as generative modeling. However, the model usually cannot explain why it arrives at this solution, and it is often necessary to apply explainable AI methods afterwards.

Nonetheless, Bajorath warns against over-interpreting these explanations, that is, against assuming that the features the AI considers important actually cause the desired activity. "Current AI models understand essentially nothing about chemistry," he says. "They are purely statistical and correlative in nature and pay attention to any distinguishing features, regardless of whether these features might be chemically or biologically relevant or not." Despite this, they may well be right in their assessment, and the suggested molecule may indeed have the desired capabilities. The reasons, however, can be completely different from what we would expect based on chemical knowledge or intuition. Evaluating potential causality between the features driving a prediction and the outcome of the corresponding natural process typically requires experiments: the researchers must synthesize and test the molecule, as well as other molecules carrying the structural motif that the AI considers important.

Plausibility checks are important

Such tests are time-consuming and expensive. Bajorath therefore warns against over-interpreting AI results in the search for scientifically plausible causal relationships. In his view, a plausibility check based on a sound scientific rationale is of critical importance: can the feature suggested by explainable AI actually be responsible for the desired chemical or biological property? Is it worth pursuing the AI's suggestion? Or is it a likely artifact, a randomly identified correlation such as the car antenna, which is not relevant at all for the actual function? The scientist emphasizes that the use of adaptive algorithms has the potential to substantially advance research in many areas of science. Nevertheless, one must be aware of the strengths of these approaches, and particularly of their weaknesses.
[2]
Expert warns of misinterpretations in AI-generated research hypotheses
A study from the University of Bonn warns about potential misunderstandings in handling AI in scientific research, while highlighting conditions for reliable use of AI models in chemistry, biology, and medicine.
Researchers from various scientific disciplines are increasingly turning to artificial intelligence (AI) models to develop new hypotheses and advance their work. However, a recent study from the University of Bonn warns of potential misunderstandings in handling AI, while also highlighting the conditions under which researchers can most reliably use these models [1][2].
Prof. Dr. Jürgen Bajorath, head of the AI in Life Sciences department at the Lamarr Institute for Machine Learning and Artificial Intelligence, emphasizes that AI models are essentially "black boxes." This means that the basis for their conclusions and the extent to which they can be generalized are often unclear [1].
To illustrate this point, Bajorath uses the example of an AI trained to identify cars:
"Suppose you feed artificial intelligence with photos of several thousand cars. If you now present it with a new image, it can usually identify reliably whether the picture also shows a car or not. But why is that? Has it really learned that a car has four wheels, a windshield, and an exhaust? Or is its decision based on criteria that are actually irrelevant - such as the antenna on the roof?" 1
The concept of "explainability" has become a central topic in AI research. This refers to efforts to make the decision-making process of AI algorithms more transparent. However, Bajorath warns that explainability alone is not sufficient. The interpretation of the AI's decision-making criteria is equally important [2].
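The car-and-antenna example can be made concrete with a small sketch. The following Python example is entirely synthetic and illustrative (not taken from the Bonn study): a classifier is trained on invented tabular "car" features in which a spurious antenna feature happens to track the label almost perfectly, and permutation feature importance from scikit-learn serves as the "peephole" that reveals what the model actually relied on.

```python
# Illustrative only: synthetic data and invented feature names, not from the study.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 2000
is_car = rng.integers(0, 2, size=n)  # label: 1 = car, 0 = not a car

# Genuine but noisy car cues: they agree with the label only ~75% of the time
# (think of photos where the wheels or windshield are partly occluded).
wheels_visible     = np.where(rng.random(n) < 0.75, is_car, 1 - is_car)
windshield_visible = np.where(rng.random(n) < 0.75, is_car, 1 - is_car)

# Spurious cue: in this particular data set nearly every car photo happens to
# show a roof antenna (98% agreement with the label), purely by construction.
antenna_visible = np.where(rng.random(n) < 0.98, is_car, 1 - is_car)

X = np.column_stack([wheels_visible, windshield_visible, antenna_visible])
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, is_car)

# Permutation importance: shuffle one feature at a time and measure how much
# the accuracy drops. A large drop means the model leaned on that feature.
result = permutation_importance(model, X, is_car, n_repeats=10, random_state=0)
for name, score in zip(["wheels", "windshield", "antenna"], result.importances_mean):
    print(f"{name:10s} importance: {score:.3f}")

# The antenna typically dominates: the model is accurate, but for a reason a
# human can immediately recognize as unsuitable for identifying cars.
```

As the article notes, the hard part is not producing such importance scores but judging, with domain knowledge, whether the highlighted feature is a plausible cause or merely a co-occurring artifact.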
In chemistry and pharmaceutical research, chemical language models are becoming increasingly popular. These models can suggest new molecules with specific biological activities based on input data. However, Bajorath cautions against over-interpreting the explanations provided by these models [1][2].
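As a rough intuition for what "generative modeling" means here, the sketch below uses a character-level Markov chain over SMILES strings as a deliberately crude stand-in for a chemical language model. Real systems are large neural networks, and the toy training strings and everything else below are invented; the point is only that such a model recombines statistical patterns in its input with no notion of valence, stability, or biology, which is precisely why its suggestions invite the scrutiny Bajorath describes.

```python
# Toy stand-in for a chemical language model: a character-level Markov chain
# fit on a handful of SMILES strings, which then samples new strings. It learns
# only textual statistics; many outputs may not even be valid molecules.
import random
from collections import defaultdict

# Invented "training set": SMILES of small molecules, standing in for a series
# of compounds that share some measured biological activity.
actives = ["CCO", "CCN", "CC(=O)O", "CC(=O)N", "c1ccccc1O", "c1ccccc1N"]

# Count character-to-character transitions, with "^" and "$" as start/end marks.
transitions = defaultdict(list)
for smi in actives:
    chars = ["^"] + list(smi) + ["$"]
    for a, b in zip(chars, chars[1:]):
        transitions[a].append(b)

def sample(max_len=20, seed=None):
    """Sample one candidate string by walking the transition table."""
    rng = random.Random(seed)
    out, current = [], "^"
    while len(out) < max_len:
        current = rng.choice(transitions[current])
        if current == "$":
            break
        out.append(current)
    return "".join(out)

for i in range(5):
    print(sample(seed=i))  # "suggested" strings, driven purely by correlation
```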
"Current AI models understand essentially nothing about chemistry," he states. "They are purely statistical and correlative in nature and pay attention to any distinguishing features, regardless of whether these features might be chemically or biologically relevant or not." 2
Given the time-consuming and expensive nature of experimental validation, Bajorath emphasizes the critical importance of plausibility checks based on sound scientific rationale. Researchers must carefully consider whether the features suggested by explainable AI could actually be responsible for the desired chemical or biological properties [1][2].
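One way to picture why such checks matter: a feature can predict an outcome very well in one compound series simply because it co-occurs there with the real driver, and the apparent signal disappears once the two are varied independently, which is in effect what follow-up synthesis and testing do. The sketch below is a toy numerical analogue with synthetic data and invented variable names, not a description of the study's methodology.

```python
# Why a predictive feature is not automatically a causal one. Synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

# Hidden causal driver of "activity" (think: the structural motif that really
# matters), plus a bystander feature that merely co-occurs with it in the
# training series because of how those compounds happened to be made.
causal_motif = rng.integers(0, 2, size=n)
bystander    = np.where(rng.random(n) < 0.95, causal_motif, 1 - causal_motif)
activity     = np.where(rng.random(n) < 0.90, causal_motif, 1 - causal_motif)

# A model trained on the bystander feature alone looks impressively accurate.
model = LogisticRegression().fit(bystander.reshape(-1, 1), activity)
print("accuracy on correlated data:",
      model.score(bystander.reshape(-1, 1), activity))

# "Experiment": new compounds in which the bystander feature is varied
# independently of the causal motif. The apparent predictive power collapses.
bystander_new = rng.integers(0, 2, size=n)
activity_new  = np.where(rng.random(n) < 0.90, causal_motif, 1 - causal_motif)
print("accuracy when the correlation is broken:",
      model.score(bystander_new.reshape(-1, 1), activity_new))
```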
While acknowledging the potential of adaptive algorithms to substantially advance research in many scientific areas, Bajorath stresses the need for awareness of both the strengths and weaknesses of these approaches [1][2].
The study, published in the journal Cell Reports Physical Science, serves as a reminder that while AI can be a powerful tool in scientific research, it should be used with caution and a clear understanding of its limitations. As AI continues to play an increasingly important role in various scientific disciplines, researchers must remain vigilant in their interpretation and application of AI-generated insights.