Curated by THEOUTPOST
On Fri, 7 Mar, 8:02 AM UTC
2 Sources
[1]
Q&A with science photographer: Visualizing research in the age of AI
by Melanie M. Kaufman, Massachusetts Institute of Technology

For over 30 years, science photographer Felice Frankel has helped MIT professors, researchers, and students communicate their work visually. Throughout that time, she has seen the development of various tools to support the creation of compelling images: some helpful, and some antithetical to the effort of producing a trustworthy and complete representation of the research. In a recent opinion piece published in Nature magazine, Frankel discusses the burgeoning use of generative artificial intelligence (GenAI) in images and the challenges and implications it has for communicating research. On a more personal note, she questions whether there will still be a place for a science photographer in the research community.

Q: You've mentioned that as soon as a photo is taken, the image can be considered "manipulated." There are ways you've manipulated your own images to create a visual that more successfully communicates the desired message. Where is the line between acceptable and unacceptable manipulation?

A: In the broadest sense, the decisions made on how to frame and structure the content of an image, along with which tools are used to create the image, are already a manipulation of reality. We need to remember the image is merely a representation of the thing, and not the thing itself. Decisions have to be made when creating the image. The critical issue is not to manipulate the data, and in the case of most images, the data is the structure. For example, for an image I made some time ago, I digitally deleted the petri dish in which a yeast colony was growing, to bring attention to the stunning morphology of the colony. The data in the image is the morphology of the colony. I did not manipulate that data. However, I always indicate in the text if I have done something to an image. I discuss the idea of image enhancement in my handbook, "The Visual Elements, Photography."
Q: What can researchers do to make sure their research is communicated correctly and ethically?

A: With the advent of AI, I see three main issues concerning visual representation: the difference between illustration and documentation, the ethics around digital manipulation, and a continuing need for researchers to be trained in visual communication.

For years, I have been trying to develop a visual literacy program for the present and upcoming classes of science and engineering researchers. MIT has a communication requirement which mostly addresses writing, but what about the visual, which is no longer tangential to a journal submission? I will bet that most readers of scientific articles go right to the figures, after they read the abstract. We need to require students to learn how to critically look at a published graph or image and decide if there is something weird going on with it. We need to discuss the ethics of "nudging" an image to look a certain predetermined way. I describe in the article an incident when a student altered one of my images (without asking me) to match what the student wanted to visually communicate. I didn't permit it, of course, and was disappointed that the ethics of such an alteration were not considered. We need to develop, at the very least, conversations on campus and, even better, create a visual literacy requirement along with the writing requirement.

Q: Generative AI is not going away. What do you see as the future for communicating science visually?

A: For the Nature article, I decided that a powerful way to question the use of AI in generating images was by example. I used one of the diffusion models to create an image using the following prompt: "Create a photo of Moungi Bawendi's nano crystals in vials against a black background, fluorescing at different wavelengths, depending on their size, when excited with UV light."
The results of my AI experimentation were often cartoon-like images that could hardly pass as reality -- let alone documentation -- but there will be a time when they will be. In conversations with colleagues in research and computer-science communities, all agree that we should have clear standards on what is and is not allowed. And most importantly, a GenAI visual should never be allowed as documentation. But AI-generated visuals will, in fact, be useful for illustration purposes.

If an AI-generated visual is to be submitted to a journal (or, for that matter, be shown in a presentation), I believe the researcher MUST:

- clearly label if an image was created by an AI model;
- indicate what model was used;
- include what prompt was used; and
- include the image, if there is one, that was used to help the prompt.
[2]
3 Questions: Visualizing research in the age of AI
Caption: An image of a growing yeast colony where the petri dish has been digitally deleted. This type of manipulation could be acceptable because the actual data has not been manipulated, Frankel says.

For over 30 years, science photographer Felice Frankel has helped MIT professors, researchers, and students communicate their work visually. Throughout that time, she has seen the development of various tools to support the creation of compelling images: some helpful, and some antithetical to the effort of producing a trustworthy and complete representation of the research. In a recent opinion piece published in Nature magazine, Frankel discusses the burgeoning use of generative artificial intelligence (GenAI) in images and the challenges and implications it has for communicating research. On a more personal note, she questions whether there will still be a place for a science photographer in the research community.

Q: You've mentioned that as soon as a photo is taken, the image can be considered "manipulated." There are ways you've manipulated your own images to create a visual that more successfully communicates the desired message. Where is the line between acceptable and unacceptable manipulation?

A: In the broadest sense, the decisions made on how to frame and structure the content of an image, along with which tools are used to create the image, are already a manipulation of reality. We need to remember the image is merely a representation of the thing, and not the thing itself. Decisions have to be made when creating the image. The critical issue is not to manipulate the data, and in the case of most images, the data is the structure. For example, for an image I made some time ago, I digitally deleted the petri dish in which a yeast colony was growing, to bring attention to the stunning morphology of the colony. The data in the image is the morphology of the colony. I did not manipulate that data.
However, I always indicate in the text if I have done something to an image. I discuss the idea of image enhancement in my handbook, "The Visual Elements, Photography."

Q: What can researchers do to make sure their research is communicated correctly and ethically?

A: With the advent of AI, I see three main issues concerning visual representation: the difference between illustration and documentation, the ethics around digital manipulation, and a continuing need for researchers to be trained in visual communication.

For years, I have been trying to develop a visual literacy program for the present and upcoming classes of science and engineering researchers. MIT has a communication requirement which mostly addresses writing, but what about the visual, which is no longer tangential to a journal submission? I will bet that most readers of scientific articles go right to the figures, after they read the abstract. We need to require students to learn how to critically look at a published graph or image and decide if there is something weird going on with it. We need to discuss the ethics of "nudging" an image to look a certain predetermined way. I describe in the article an incident when a student altered one of my images (without asking me) to match what the student wanted to visually communicate. I didn't permit it, of course, and was disappointed that the ethics of such an alteration were not considered. We need to develop, at the very least, conversations on campus and, even better, create a visual literacy requirement along with the writing requirement.

Q: Generative AI is not going away. What do you see as the future for communicating science visually?

A: For the Nature article, I decided that a powerful way to question the use of AI in generating images was by example.
I used one of the diffusion models to create an image using the following prompt: "Create a photo of Moungi Bawendi's nano crystals in vials against a black background, fluorescing at different wavelengths, depending on their size, when excited with UV light."

The results of my AI experimentation were often cartoon-like images that could hardly pass as reality -- let alone documentation -- but there will be a time when they will be. In conversations with colleagues in research and computer-science communities, all agree that we should have clear standards on what is and is not allowed. And most importantly, a GenAI visual should never be allowed as documentation. But AI-generated visuals will, in fact, be useful for illustration purposes.

If an AI-generated visual is to be submitted to a journal (or, for that matter, be shown in a presentation), I believe the researcher MUST:

- clearly label if an image was created by an AI model;
- indicate what model was used;
- include what prompt was used; and
- include the image, if there is one, that was used to help the prompt.
Felice Frankel, a veteran science photographer at MIT, shares insights on the challenges and ethical considerations of using AI in scientific image creation and communication.
Felice Frankel, a renowned science photographer with over 30 years of experience at MIT, has recently voiced her concerns about the increasing use of generative artificial intelligence (GenAI) in scientific image creation. In an opinion piece published in Nature magazine, Frankel explores the challenges and implications of AI in research communication, questioning the future role of science photographers in the research community [1][2].
Frankel acknowledges that all images, even photographs, involve some level of manipulation. She explains, "In the broadest sense, the decisions made on how to frame and structure the content of an image, along with which tools are used to create the image, are already a manipulation of reality" [1]. However, she emphasizes that the critical issue is not manipulating the data itself, which in most scientific images is the structure being represented.
To illustrate this point, Frankel shares an example of her work where she digitally removed a petri dish from an image of a yeast colony to highlight its morphology. She stresses the importance of transparency, always indicating in the text when such manipulations have been made [1][2].
With the advent of AI in image creation, Frankel identifies three main issues: the difference between illustration and documentation, the ethics around digital manipulation, and a continuing need for researchers to be trained in visual communication.
Frankel advocates for the development of a visual literacy program for science and engineering researchers. She argues, "We need to require students to learn how to critically look at a published graph or image and decide if there is something weird going on with it" [1]. This includes discussing the ethics of altering images to match predetermined visual expectations [2].
While acknowledging that generative AI is here to stay, Frankel expresses concerns about its use in scientific documentation. She conducted an experiment using an AI diffusion model to create an image of nano crystals, finding that the results were often "cartoon-like images that could hardly pass as reality" [1][2].
However, Frankel sees potential for AI-generated visuals in illustration. She proposes a set of guidelines for using AI-generated images in scientific publications or presentations:

- clearly label if an image was created by an AI model;
- indicate what model was used;
- include what prompt was used; and
- include the image, if there is one, that was used to help the prompt.
As AI continues to evolve and impact scientific visualization, Frankel's insights highlight the need for ongoing discussions about ethical standards, transparency, and the development of visual literacy skills in the scientific community. The balance between leveraging AI's capabilities and maintaining the integrity of scientific communication remains a critical challenge for researchers and institutions alike.
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved