On Wed, 7 May, 12:02 AM UTC
2 Sources
[1]
Researchers raise red flag about AI-generated fake images in biomedical research
The authors of an editorial published in the American Journal of Hematology claim that "generative Artificial Intelligence can be exploited to produce fraudulent scientific images, either from scratch or by modifying existing visual materials to increase the realism of the final fabricated product." Authors Enrico M. Bucci, Professor of Biology at the Sbarro Institute for Cancer Research at the College of Science and Technology, Temple University, and Angelo Parini of the University of Toulouse highlight a growing concern in science: the use of artificial intelligence (AI) to create fake images that look like real research data.

The paper, titled "The Synthetic Image Crisis in Science," explains how AI-powered tools are being used to make realistic but completely fake scientific images. These images may evade detection because they are not edited versions of real photos: generated from scratch, they lack the telltale features that distinguish manipulated images from genuine ones. "These tools are now usable by anyone, regardless of scientific training. Prompted correctly, they can simulate a study's entire visual apparatus in minutes," the authors say.

Modern AI systems can create fake images from simple text descriptions. For example, a user can ask for a Western blot showing a certain protein in treated cells, and the AI will generate a believable image, even though no experiment was ever done. The same tools can also be used to make subtle changes to real scientific images. These changes can include adjusting colors, moving parts of the image, or adding features -- all without leaving the clues that normal editing tools would. These image generators are trained on real scientific images and are now widely available to the public. As a result, peer reviewers and journal editors are starting to find synthetic images in submitted research papers.
"It is crucial that the scientific community and the peer-review system adapt to this looming threat of fakery in scientific data," says Antonio Giordano, M.D., Ph.D., Professor at Temple University, Founder and Director of the Sbarro Health Research Organization (SHRO). "The concerns outlined by Bucci and Parini require updated protocols for things like documentation, transparency, and accountability in response to the new reality of a world with AI."
[2]
Researchers Raise Red Flag about AI-Generated Fake Images in Biomedical Research | Newswise
Newswise -- "Generative Artificial Intelligence can be exploited to produce fraudulent scientific images," say the authors of an editorial published in the American Journal of Hematology (AJH), "either from scratch or by modifying existing visual materials to increase the realism of the final fabricated product." The authors, Enrico M. Bucci, Professor of Biology at the Sbarro Institute for Cancer Research at the College of Science and Technology, Temple University, and Angelo Parini of the University of Toulouse, highlight a growing concern in science: the use of artificial intelligence (AI) to create fake images that look like real research data.

The paper, titled "The Synthetic Image Crisis in Science," explains how AI-powered tools are being used to make realistic but completely fake scientific images. These images may evade detection because they are not edited versions of real photos: generated from scratch, they lack the telltale features that distinguish manipulated images from genuine ones. "These tools are now usable by anyone, regardless of scientific training. Prompted correctly, they can simulate a study's entire visual apparatus in minutes," the authors say.

Modern AI systems can create fake images from simple text descriptions. For example, a user can ask for a Western blot showing a certain protein in treated cells, and the AI will generate a believable image, even though no experiment was ever done. The same tools can also be used to make subtle changes to real scientific images. These changes can include adjusting colors, moving parts of the image, or adding features -- all without leaving the clues that normal editing tools would. These image generators are trained on real scientific images and are now widely available to the public. As a result, peer reviewers and journal editors are starting to find synthetic images in submitted research papers.
"It is crucial that the scientific community and the peer-review system adapt to this looming threat of fakery in scientific data," says Antonio Giordano, M.D., Ph.D., Professor at Temple University, Founder and Director of the Sbarro Health Research Organization (SHRO). "The concerns outlined by Bucci and Parini require updated protocols for things like documentation, transparency, and accountability in response to the new reality of a world with AI."

The authors warn that AI-generated images could damage trust in science if not addressed. The editorial calls for new methods to detect these images and protect the quality of scientific research.

Funding Acknowledgement: This research was funded by the Agence Nationale de la Recherche (ANR-23-IAHU-0011).

About Sbarro Health Research Organization (SHRO): The Sbarro Health Research Organization conducts groundbreaking research in cancer, diabetes, and cardiovascular disease. Based in Philadelphia, Pennsylvania, on the campus of Temple University, SHRO's programs train young scientists from around the globe, accelerating the pace of health research and innovation.
Researchers warn about the potential misuse of AI in creating fraudulent scientific images, highlighting the need for updated protocols in the peer-review system to maintain research integrity.
Researchers have raised alarm bells about the potential misuse of artificial intelligence (AI) in creating fraudulent scientific images, particularly in biomedical research. In an editorial published in the American Journal of Hematology, authors Enrico M. Bucci from Temple University and Angelo Parini from the University of Toulouse highlight the growing concern of AI-generated fake images that closely resemble real research data [1][2].
The paper, titled "The Synthetic Image Crisis in Science," explains how AI-powered tools are being utilized to produce realistic but entirely fabricated scientific images. These synthetic images pose a significant challenge as they are not merely edited versions of real photos but are generated from scratch, making them potentially undetectable by conventional means [1][2].
One of the most concerning aspects of this technology is its accessibility. The authors emphasize that "These tools are now usable by anyone, regardless of scientific training. Prompted correctly, they can simulate a study's entire visual apparatus in minutes" [1][2]. This ease of use significantly lowers the barrier for potential scientific fraud.
Modern AI systems have demonstrated remarkable capabilities in creating fake images based on simple text descriptions. For instance, a user could request a Western blot showing a specific protein in treated cells, and the AI would generate a convincing image without any actual experiment being conducted [1][2].
Beyond creating entirely new images, these AI tools can also be used to make subtle alterations to existing scientific images. These modifications can include adjusting colors, repositioning elements within the image, or adding features -- all without leaving the typical traces that traditional editing tools would [1][2].
The widespread availability of these image-generating tools, which are trained on real scientific images, has led to a new challenge for peer reviewers and journal editors. They are now encountering synthetic images in submitted research papers, raising concerns about the integrity of scientific publications [1][2].
Antonio Giordano, M.D., Ph.D., Professor at Temple University and Founder and Director of the Sbarro Health Research Organization (SHRO), stresses the urgency of the situation: "It is crucial that the scientific community and the peer-review system adapt to this looming threat of fakery in scientific data" [1]. The concerns outlined by Bucci and Parini necessitate updated protocols for documentation, transparency, and accountability in response to this new AI-driven reality [1][2].
The authors warn that if left unaddressed, AI-generated images could significantly erode trust in science. They call for the development of new methods to detect these synthetic images and protect the quality and integrity of scientific research [2].
References
[1]
Medical Xpress - Medical and Health News
Researchers raise red flag about AI-generated fake images in biomedical research. AI is transforming scientific research, offering breakthroughs and efficiency, but also enabling easier fabrication of data and papers. The scientific community faces the challenge of maximizing AI's benefits while minimizing risks of misconduct.
[2]
Newswise
Researchers Raise Red Flag about AI-Generated Fake Images in Biomedical Research