Curated by THEOUTPOST
On Thu, 20 Mar, 4:06 PM UTC
2 Sources
[1]
AI can be a powerful tool for scientists. But it can also fuel research misconduct
In February 2025, Google announced it was launching "a new AI system for scientists". It said this system was a collaborative tool designed to help scientists "in creating novel hypotheses and research plans".

It's too early to tell just how useful this particular tool will be to scientists. But what is clear is that artificial intelligence (AI) more generally is already transforming science.

Last year, for example, computer scientists won the Nobel Prize for Chemistry for developing an AI model to predict the shape of every protein known to mankind. Chair of the Nobel Committee, Heiner Linke, described the AI system as the achievement of a "50-year-old dream" that solved a notoriously difficult problem that had eluded scientists since the 1970s.

But while AI is allowing scientists to make technological breakthroughs that would otherwise be decades away or out of reach entirely, there's also a darker side to the use of AI in science: scientific misconduct is on the rise.

AI makes it easy to fabricate research

Academic papers can be retracted if their data or findings are found to be no longer valid. This can happen because of data fabrication, plagiarism or human error.

Paper retractions are increasing exponentially, passing 10,000 in 2023. These retracted papers were cited over 35,000 times. One study found 8% of Dutch scientists admitted to serious research fraud, double the rate previously reported. Biomedical paper retractions have quadrupled in the past 20 years, the majority due to misconduct.

AI has the potential to make this problem even worse. For example, the availability and increasing capability of generative AI programs such as ChatGPT makes it easy to fabricate research. This was clearly demonstrated by two researchers who used AI to generate 288 complete, fake academic finance papers predicting stock returns.

While this was an experiment to show what's possible, it's not hard to imagine how the technology could be used to generate fictitious clinical trial data, modify gene-editing experimental data to conceal adverse results, or serve other malicious purposes.

Fake references and fabricated data

There are already many reported cases of AI-generated papers passing peer review and reaching publication, only to be retracted later on the grounds of undisclosed use of AI, some including serious flaws such as fake references and purposely fabricated data.

Some researchers are also using AI to review their peers' work. Peer review of scientific papers is one of the fundamentals of scientific integrity, but it's also incredibly time-consuming, with some scientists devoting hundreds of hours a year of unpaid labour. A Stanford-led study found that up to 17% of peer reviews for top AI conferences were written at least in part by AI.

In the extreme case, AI may end up writing research papers that are then reviewed by another AI. This risk is worsening the already problematic trend of an exponential increase in scientific publishing, while the average amount of genuinely new and interesting material in each paper has been declining.

AI can also lead to unintentional fabrication of scientific results. A well-known problem of generative AI systems is that they make up an answer rather than saying they don't know. This is known as "hallucination". We don't know the extent to which AI hallucinations end up as errors in scientific papers.
But a recent study on computer programming found that 52% of AI-generated answers to coding questions contained errors, and human oversight failed to correct them 39% of the time.

Maximising the benefits, minimising the risks

Despite these worrying developments, we shouldn't get carried away and discourage or even chastise the use of AI by scientists. AI offers significant benefits to science.

Researchers have used specialised AI models to solve scientific problems for many years. And generative AI models such as ChatGPT offer the promise of general-purpose AI scientific assistants that can carry out a range of tasks, working collaboratively with the scientist.

These AI models can be powerful lab assistants. For example, researchers at CSIRO are already developing AI lab robots that scientists can speak with and instruct like a human assistant to automate repetitive tasks.

A disruptive new technology will always have benefits and drawbacks. The challenge for the science community is to put appropriate policies and guardrails in place to ensure we maximise the benefits and minimise the risks.

AI's potential to change the world of science and to help science make the world a better place is already proven. We now have a choice.

Do we embrace AI by advocating for and developing an AI code of conduct that enforces ethical and responsible use of AI in science? Or do we take a backseat and let a relatively small number of rogue actors discredit our fields and make us miss the opportunity?
AI is transforming scientific research, offering breakthroughs and efficiency, but also enabling easier fabrication of data and papers. The scientific community faces the challenge of maximizing AI's benefits while minimizing risks of misconduct.
Artificial intelligence (AI) is rapidly transforming the landscape of scientific research, offering unprecedented opportunities for advancement while simultaneously presenting new challenges. In February 2025, Google announced the launch of "a new AI system for scientists," designed to assist in creating novel hypotheses and research plans [1]. This development underscores the growing integration of AI into scientific processes.
The potential of AI in science was dramatically illustrated when computer scientists won the Nobel Prize for Chemistry for developing an AI model capable of predicting the shape of every known protein. This achievement, described as a "50-year-old dream" by Nobel Committee Chair Heiner Linke, solved a problem that had eluded scientists since the 1970s [1].
While AI is enabling remarkable scientific breakthroughs, it's also contributing to an increase in scientific misconduct. Paper retractions have risen exponentially, surpassing 10,000 in 2023, with these retracted papers being cited over 35,000 times [1]. A study found that 8% of Dutch scientists admitted to serious research fraud, double the previously reported rate [1].
The increasing capabilities of generative AI programs like ChatGPT have made it easier to fabricate research. In a striking demonstration, researchers used AI to generate 288 complete, fake academic finance papers predicting stock returns [1]. This experiment highlights the potential for AI to be misused in generating fictitious clinical trial data or modifying gene-editing experimental results.
AI is also being used in the peer review process, a cornerstone of scientific integrity. A Stanford-led study revealed that up to 17% of peer reviews for top AI conferences were written, at least in part, by AI [1]. This trend raises concerns about the quality and integrity of peer review.
AI can also lead to unintentional fabrication of scientific results through "hallucinations": instances where AI systems generate false information. A study on computer programming found that 52% of AI-generated answers to coding questions contained errors, with human oversight failing to correct them 39% of the time [1].
Despite these challenges, AI offers significant benefits to science. Researchers have long used specialized AI models to solve complex scientific problems. Generative AI models like ChatGPT promise to serve as general-purpose scientific assistants, capable of performing a wide range of tasks [1].
The scientific community now faces a critical choice: to embrace AI by developing and advocating for an AI code of conduct that enforces ethical and responsible use in science, or to risk letting a small number of bad actors discredit entire fields of research [1]. The challenge lies in implementing appropriate policies and guardrails to maximize the benefits of AI while minimizing its risks in scientific research.