Curated by THEOUTPOST
On Tue, 10 Dec, 8:03 AM UTC
3 Sources
[1]
Readers trust news less when AI is involved - Earth.com
As artificial intelligence (AI) becomes increasingly integrated into journalism, newsrooms face the dual challenge of using the technology effectively while transparently disclosing its involvement to readers. New research from the University of Kansas (KU) reveals that readers often view AI's role in news production negatively, even when they don't fully understand its specific contributions. This perception can lower their trust in the credibility of the news.

The studies, led by researchers Alyssa Appelman and Steve Bien-Aimé of the William Allen White School of Journalism and Mass Communications at KU, explore how readers interpret AI involvement in news articles and how that interpretation affects perceptions of credibility. Appelman and Bien-Aimé, along with their collaborators Haiyan Jia of Lehigh University and Mu Wu of California State University, Los Angeles, conducted an experiment to investigate how different AI-related bylines influence readers. Participants were randomly assigned one of five bylines on an article about the safety of the artificial sweetener aspartame. These bylines ranged from "written by staff writer" to "written by artificial intelligence," with variations indicating collaboration with or assistance from AI.

The researchers found that readers interpreted these bylines in diverse ways. Even when the byline simply stated "written by staff writer," many readers assumed AI had played a role in the article's creation because no human author was named. Participants used their prior knowledge to make sense of AI's potential contributions, often overestimating its involvement.

"People have a lot of different ideas on what AI can mean, and when we are not clear on what it did, people will fill in the gaps on what they thought it did," Appelman explained.

Regardless of their interpretation, participants consistently rated news articles as less credible when they believed artificial intelligence was involved. This effect persisted even when the byline explicitly indicated human contribution alongside AI assistance. Readers appeared to prioritize the perceived extent of human involvement when evaluating the article's trustworthiness.

"The big thing was not between whether it was AI or human: It was how much work they thought the human did," Bien-Aimé noted.

The findings highlight the importance of clear and precise disclosure about AI's role in news production. While transparency is crucial, simply stating that AI was used may not suffice to alleviate reader concerns. If readers perceive AI as having contributed more than a human, their trust in the news could diminish. The studies also point to the need for greater transparency and improved communication about the use of AI in journalism. Recent controversies, such as allegations that Sports Illustrated published AI-generated articles while presenting them as human-written, illustrate the risks of insufficient disclosure.

The research also suggests that readers may be more accepting of AI in contexts where it has not traditionally replaced human roles. For instance, algorithmic recommendations on platforms like YouTube are often perceived as helpful rather than intrusive. In fields like journalism, however, where human expertise is traditionally valued, the introduction of AI can create skepticism about the quality and authenticity of the work.

"Part of our research framework has always been assessing if readers know what journalists do," Bien-Aimé said. "And we want to continue to better understand how people view the work of journalists."
Appelman and Bien-Aimé's findings point to a gap in readers' understanding of journalistic practices. Disclosures about AI involvement, corrections, ethics training, or even bylines are often interpreted by readers differently than journalists intend. To bridge this gap, the researchers emphasize the need for journalists and educators to better communicate the specifics of how AI is used in news production.

"This shows we need to be clear. We think journalists have a lot of assumptions that we make in our field that consumers know what we do. They often do not," Bien-Aimé said.

Both studies call for further investigation into how readers perceive AI's role in journalism and how these perceptions influence trust in the media. By understanding these dynamics, journalists can refine their practices to maintain credibility while leveraging AI's potential. As AI continues to shape the future of journalism, the field must navigate the balance between technological innovation and public trust. Transparency, clear communication, and ethical practices will be essential to ensuring that AI enhances rather than undermines the credibility of the news.
[2]
Study finds readers trust news less when AI is involved, even when they don't understand to what extent
As artificial intelligence becomes more involved in journalism, journalists and editors are grappling not only with how to use the technology but also with how to disclose its use to readers. New research from the University of Kansas has found that when readers think AI is involved in some way in news production, they have lower trust in the credibility of the news, even when they don't fully understand what it contributed.

The findings show that readers are aware of the use of AI in creating news, even if they view it negatively. But understanding what the technology contributed to the news, and how, can be complicated, and disclosing that to readers in a way they understand is a problem that needs to be addressed clearly, according to the researchers.

"The growing concentration of AI in journalism is a question we know journalists and educators are talking about, but we were interested in how readers are perceiving it. So we wanted to know more about media byline perceptions and their influence, or what people think about news generated by AI," said Alyssa Appelman, associate professor in the William Allen White School of Journalism and Mass Communications and co-author of two studies on the topic.

Appelman and Steve Bien-Aimé, assistant professor in the William Allen White School of Journalism and Mass Communications, helped lead an experiment in which they showed readers a news story about the artificial sweetener aspartame and its safety for human consumption. Readers were randomly assigned one of five bylines: written by staff writer, written by staff writer with artificial intelligence tool, written by staff writer with artificial intelligence assistance, written by staff writer with artificial intelligence collaboration and written by artificial intelligence. The article was otherwise identical in all cases.

The findings were published in two research papers, both written by Appelman and Bien-Aimé of KU, along with Haiyan Jia of Lehigh University and Mu Wu of California State University, Los Angeles.

One paper focused on how readers made sense of AI bylines. After reading the article, readers were surveyed about what the specific byline they received meant and whether they agreed with several statements intended to measure their media literacy and attitudes toward AI. Findings showed that regardless of the byline they received, participants held a wide range of views about what the technology did. The majority reported they felt humans were the primary contributors, while some said they thought AI might have been used for research assistance or to write a first draft that was then edited by a human. Results showed that participants had an understanding of what AI technology can do and that it is human-guided through prompts. However, the different byline conditions left much for people to interpret about how specifically it may have contributed to the article they read.

When AI contribution was mentioned in the byline, it negatively affected readers' perceptions of source and author credibility. Even with the byline "written by staff writer," readers interpreted the article to be at least partially written by AI, as there was no human name connected to the story. Readers used sensemaking as a technique to interpret the contributions of AI, the authors wrote. The tactic is a way of using information one has already learned to make sense of unfamiliar situations.
"People have a lot of different ideas on what AI can mean, and when we are not clear on what it did, people will fill in the gaps on what they thought it did," Appelman said. The results showed that, regardless of what they thought AI contributed to the story, their opinions of the news' credibility were negatively affected. The findings were published in the journal Communication Reports. A second research paper explored how perceptions of humanness mediated the relationship between perceived AI contribution and credibility judgments. It found that acknowledging AI enhanced transparency and that readers felt human contribution to the news improved trustworthiness. Participants reported what percentage they thought AI was involved in the creation of the article, regardless of which byline condition they received. The higher percentage they gave, the lower their judgment of its credibility was. Even those who read "written by staff writer" reported they felt AI was involved to some degree. "The big thing was not between whether it was AI or human: It was how much work they thought the human did," Bien-Aimé said. "This shows we need to be clear. We think journalists have a lot of assumptions that we make in our field that consumers know what we do. They often do not." The findings suggest that people give higher credibility to human contributions in fields like journalism that have traditionally been performed by humans. When that is replaced by a technology such as AI, it can affect perceptions of credibility, whereas it might not for things that are not traditionally human, such as YouTube suggesting videos for a person to watch, based on their previous viewing, the authors said. While it can be construed as positive that readers tend to perceive human-written news as more credible, journalists and educators should also understand they need to be clear in disclosing how or if they use AI. Transparency is a sound practice, as shown by a scandal earlier this year in which Sports Illustrated was alleged to have published AI-generated articles presented as being written by people. However, the researchers argue, simply stating that AI was used may not be clear enough for people to understand what it did and if they feel it contributed more than a human, could negatively influence credibility perceptions. The findings on perceived authorship and humanness were published in the journal Computers in Human Behavior: Artificial Humans. Both journal articles indicate that further research should continue to explore how readers perceive the contributions of AI to journalism, the authors said, and they also suggest that journalism as a field can benefit from improvements in how it discloses such practices. Appelman and Bien-Aimé study reader understanding of various journalism practices and have found readers often do not perceive what certain disclosures such as corrections, bylines, ethics training or use of AI mean in a way consistent with what journalists intended. "Part of our research framework has always been assessing if readers know what journalists do," Bien-Aimé said. "And we want to continue to better understand how people view the work of journalists."
[3]
Readers trust news less when AI is involved, even when they don't understand to what extent
New research from the University of Kansas reveals that readers' trust in news decreases when they believe AI is involved in its production, even when they don't fully understand the extent of AI's contribution.
A new study from the University of Kansas has revealed that readers' trust in news decreases when they believe artificial intelligence (AI) is involved in its production, even when they don't fully understand the extent of AI's contribution [1][2][3]. This finding comes as AI becomes increasingly integrated into journalism, presenting challenges for newsrooms in both utilizing the technology and transparently disclosing its involvement to readers.
Researchers Alyssa Appelman and Steve Bien-Aimé, along with collaborators from Lehigh University and California State University, conducted an experiment to investigate how different AI-related bylines influence readers' perceptions [1][2]. Participants were randomly assigned one of five bylines on an article about the safety of the artificial sweetener aspartame, ranging from "written by staff writer" to "written by artificial intelligence," with variations indicating collaboration or assistance from AI [1][2][3].
The study found that readers interpreted these bylines in diverse ways, often overestimating AI's involvement [1]. Even when the byline simply stated "written by staff writer," many readers assumed AI had played a role in the article's creation due to the absence of a named human author [2][3].
Regardless of their interpretation, participants consistently rated news articles as less credible when they believed artificial intelligence was involved [1][2][3]. This effect persisted even when the byline explicitly indicated human contribution alongside AI assistance [1].
The findings highlight the importance of clear and precise disclosure about AI's role in news production [1][2][3]. While transparency is crucial, simply stating that AI was used may not suffice to alleviate reader concerns [1]. If readers perceive AI as having contributed more than a human, their trust in the news could diminish [2][3].
Readers used their prior knowledge to make sense of AI's potential contributions, often filling in gaps with their own assumptions [1][2]. This "sensemaking" process led to a wide range of interpretations about AI's role in news production [2][3].
The study revealed that readers prioritized the perceived extent of human involvement in evaluating an article's trustworthiness [1][2][3]. "The big thing was not between whether it was AI or human: It was how much work they thought the human did," noted Bien-Aimé [2][3].
The research suggests a need for greater transparency and improved communication about the use of AI in journalism [1][2][3]. It also highlights a gap in reader understanding of journalistic practices, emphasizing the need for journalists and educators to better communicate the specifics of how AI is used in news production [1][2].
As AI continues to shape the future of journalism, the field must navigate the balance between technological innovation and maintaining public trust [1]. Transparency, clear communication, and ethical practices will be essential to ensuring that AI serves as a tool to enhance rather than undermine the credibility of the news [1][2][3].
A survey of Canadian news consumers reveals strong preferences for transparency in AI use in journalism, with concerns about accuracy, trust, and the potential spread of misinformation.
2 Sources
A new report reveals how news audiences and journalists feel about the use of generative AI in newsrooms, highlighting concerns about transparency, accuracy, and ethical implications.
3 Sources
A new study reveals that AI-generated summaries of scientific papers can improve public comprehension and enhance trust in scientists, potentially addressing the decline in scientific literacy and trust.
3 Sources
A new study reveals that while AI-generated stories can match human-written ones in quality, readers show a bias against content they believe is AI-created, even when it's not.
6 Sources
A new study by New York Institute of Technology researchers shows that consumers view AI-generated emotional marketing content as less authentic, potentially harming brand perception and customer relationships.
3 Sources
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved