Curated by THEOUTPOST
On Sat, 23 Nov, 12:01 AM UTC
3 Sources
[1]
AI may help simplify science communication and restore trust
Michigan State University, Nov 22 2024

Have you ever read about a scientific discovery and felt like it was written in a foreign language? If you're like most Americans, new scientific information can prove challenging to understand, especially if you try to tackle a science article in a research journal. In an era when scientific literacy is crucial for informed decision-making, the abilities to communicate and comprehend complex content are more important than ever.

Trust in science has been declining for years, and one contributing factor may be the challenge of understanding scientific jargon. New research from David Markowitz, associate professor of communication at Michigan State University, points to a potential solution: using artificial intelligence, or AI, to simplify science communication. His work demonstrates that AI-generated summaries may help restore trust in scientists and, in turn, encourage greater public engagement with scientific issues, just by making scientific content more approachable. The question of trust is particularly important, as people often rely on science to inform decisions in their daily lives, from choosing what foods to eat to making critical health care choices.

Responses are excerpts from an article originally published in The Conversation.

How did simpler, AI-generated summaries affect the general public's comprehension of scientific studies?

Artificial intelligence can generate summaries of scientific papers that make complex information more understandable for the public compared with human-written summaries, according to Markowitz's recent study, which was published in PNAS Nexus. AI-generated summaries not only improved public comprehension of science but also enhanced how people perceived scientists. Markowitz used a popular large language model, GPT-4 by OpenAI, to create simple summaries of scientific papers; this kind of text is often called a significance statement.
The AI-generated summaries used simpler language than summaries written by the researchers who had done the work: they were easier to read according to a readability index and used more common words, like "job" instead of "occupation." In one experiment, Markowitz found that readers of the AI-generated statements had a better understanding of the science, and they provided more detailed, accurate summaries of the content than readers of the human-written statements.

How did simpler, AI-generated summaries affect the general public's perception of scientists?

In another experiment, participants rated the scientists whose work was described in simple terms as more credible and trustworthy than the scientists whose work was described in more complex terms. In both experiments, participants did not know who wrote each summary: the simpler texts were always AI-generated, and the complex texts were always human-written. When Markowitz asked participants who they believed wrote each summary, they ironically thought the more complex ones were written by AI and the simpler ones were written by humans.

What do we still need to learn about AI and science communication?

As AI continues to evolve, its role in science communication may expand, especially if using generative AI becomes more commonplace or is sanctioned by journals. Indeed, the academic publishing field is still establishing norms around the use of AI. By simplifying scientific writing, AI could contribute to greater engagement with complex issues.

While the benefits of AI-generated science communication are clear, ethical considerations must also be weighed. Relying on AI to simplify scientific content risks removing nuance, potentially leading to misunderstandings or oversimplification. There is always the chance of errors, too, if no one pays close attention. Additionally, transparency is critical: readers should be informed when AI is used to generate summaries to avoid potential biases.
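The study scored summaries with a readability index; the article does not say which one, but the Flesch Reading Ease formula is a common choice and illustrates how sentence length and word length drive such scores. Below is a minimal sketch in Python; the syllable counter is a rough vowel-group heuristic, not a dictionary-accurate count.

```python
import re

def count_syllables(word: str) -> int:
    """Rough syllable count: runs of vowels, minus a common silent 'e'."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: higher scores indicate easier text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

On this metric, "job" (one syllable) scores as easier than "occupation" (four syllables), matching the kind of word-level simplification the study describes.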
Simple science descriptions are preferable to, and more beneficial than, complex ones, and AI tools can help. But scientists could also achieve the same goals by working harder to minimize jargon and communicate clearly, no AI necessary.

Michigan State University has been advancing the common good with uncommon will for more than 165 years. One of the world's leading public research universities, MSU pushes the boundaries of discovery to make a better, safer, healthier world for all while providing life-changing opportunities to a diverse and inclusive academic community through more than 400 programs of study in 17 degree-granting colleges.

Journal reference: Markowitz, D. M. (2024). From complexity to clarity: How AI enhances perceptions of scientists and the public's understanding of science. PNAS Nexus. doi.org/10.1093/pnasnexus/pgae387
[2]
MSU expert: How AI can help people understand research and increase trust in science | Newswise
[3]
AI Summaries Simplify Science, Boosting Public Understanding and Trust - Neuroscience News
Summary: AI-generated summaries make scientific studies more accessible and improve public trust in scientists. Using GPT-4, researchers created simplified summaries that were easier to read and understand than human-written ones. Participants rated scientists whose work was described in simpler terms as more credible and trustworthy. While promising, using AI in science communication raises ethical concerns about accuracy, transparency, and potential oversimplification.
From complexity to clarity: How AI enhances perceptions of scientists and the public's understanding of science

This article evaluated the effectiveness of using generative AI to simplify science communication and enhance the public's understanding of science. By comparing lay summaries of journal articles from PNAS with AI-generated counterparts, this work first assessed linguistic simplicity differences across such summaries, then public perceptions in follow-up experiments. Specifically, study 1a analyzed simplicity features of PNAS abstracts (scientific summaries) and significance statements (lay summaries), observing that lay summaries were indeed linguistically simpler, but effect-size differences were small. Study 1b used a large language model, GPT-4, to create significance statements based on paper abstracts, and this more than doubled the average effect size without fine-tuning.
Study 2 experimentally demonstrated that simply written generative pre-trained transformer (GPT) summaries fostered more favorable perceptions of scientists (they were perceived as more credible and trustworthy, but less intelligent) than more complexly written human PNAS summaries. Crucially, study 3 experimentally demonstrated that participants comprehended scientific writing better after reading simple GPT summaries than after reading complex PNAS summaries. In their own words, participants also summarized scientific papers in a more detailed and concrete manner after reading GPT summaries than after reading PNAS summaries of the same article. AI has the potential to engage scientific communities and the public via a simple language heuristic; these results advocate for its integration into scientific dissemination for a more informed society.
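The abstract above reports effect sizes for the simplicity differences without naming the metric in this excerpt. Cohen's d, a standard standardized mean difference, illustrates how such a comparison between two groups of summary scores might be computed; the sample data here is hypothetical.

```python
from statistics import mean, variance

def cohens_d(group_a: list[float], group_b: list[float]) -> float:
    """Standardized mean difference using the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * variance(group_a)
                  + (nb - 1) * variance(group_b)) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

# Hypothetical readability scores for AI-generated vs. human-written summaries
ai_scores = [62.0, 58.5, 65.2, 60.1, 63.8]
human_scores = [41.3, 45.0, 38.7, 44.2, 40.9]
print(f"d = {cohens_d(ai_scores, human_scores):.2f}")
```

By convention, |d| around 0.2 is considered small and around 0.8 large, which gives concrete meaning to the abstract's claim that GPT-4 "more than doubled the average effect size."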
A new study reveals that AI-generated summaries of scientific papers can improve public comprehension and enhance trust in scientists, potentially addressing the decline in scientific literacy and trust.
A groundbreaking study by David Markowitz, associate professor of communication at Michigan State University, has revealed that artificial intelligence (AI) could play a crucial role in simplifying science communication and restoring public trust in scientific research. The study, published in PNAS Nexus, demonstrates that AI-generated summaries of scientific papers can make complex information more accessible to the general public, potentially addressing the declining trust in science [1].
Markowitz's research utilized OpenAI's GPT-4, a large language model, to create simplified summaries of scientific papers. These AI-generated summaries, known as significance statements, employed simpler language and more common words compared to summaries written by the researchers themselves [2].
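The article does not reproduce the prompt Markowitz used, so the sketch below is purely illustrative: it shows how an abstract might be turned into a plain-language significance statement with the official openai Python client. The prompt wording, word limit, and helper name are all assumptions, not the study's method.

```python
import os

def build_significance_prompt(abstract: str) -> list[dict]:
    # Hypothetical prompt: the study's actual instructions to GPT-4
    # are not given in this article.
    return [
        {"role": "system",
         "content": "You write plain-language significance statements "
                    "for a general audience. Use short sentences and "
                    "common words."},
        {"role": "user",
         "content": f"Summarize this abstract in about 100 words:\n\n{abstract}"},
    ]

if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    # Requires the `openai` package and an API key to run the call itself.
    from openai import OpenAI
    client = OpenAI()
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=build_significance_prompt("...paper abstract here..."),
    )
    print(reply.choices[0].message.content)
```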
The study found that readers of AI-generated statements demonstrated:

- A better understanding of the science described
- An ability to provide more detailed, accurate summaries of the content than readers of the human-written statements
In addition to improving comprehension, the study revealed that AI-generated summaries positively influenced public perception of scientists. Participants rated scientists whose work was described in simpler terms as more credible and trustworthy than those whose work was presented in more complex language [3].
Interestingly, when participants were asked to guess the source of the summaries, they mistakenly attributed the simpler AI-generated texts to humans and the more complex human-written texts to AI.
The research suggests that by making scientific content more approachable, AI-generated summaries could:

- Help restore trust in scientists
- Encourage greater public engagement with scientific issues
While the benefits of AI in science communication are evident, the study also highlights important ethical considerations:

- Simplification may remove nuance, leading to misunderstandings or oversimplification
- Errors can slip through if no one reviews the AI output closely
- Transparency is critical: readers should be informed when AI is used to generate summaries
As AI continues to evolve, its role in science communication may expand, potentially becoming more commonplace in academic publishing. However, the study emphasizes that while AI tools can help, scientists themselves could achieve similar goals by focusing on clear communication and minimizing jargon [1].
This research opens new avenues for exploring how AI can bridge the gap between complex scientific knowledge and public understanding, potentially addressing the critical issue of declining trust in science in an era where scientific literacy is more important than ever.
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved