Curated by THEOUTPOST
On Mon, 17 Feb, 8:00 AM UTC
3 Sources
[1]
Generative AI is already being used in journalism -- here's how people feel about it
by T.J. Thomson, Michelle Riedlinger, Phoebe Matich and Ryan J. Thomas, The Conversation

Generative artificial intelligence (AI) has taken off at lightning speed in the past couple of years, creating disruption in many industries. Newsrooms are no exception. A new report published today finds that news audiences and journalists alike are concerned about how news organizations are -- and could be -- using generative AI such as chatbots, image, audio and video generators, and similar tools.

The report draws on three years of interviews and focus group research into generative AI and journalism in Australia and six other countries (United States, United Kingdom, Norway, Switzerland, Germany and France).

Only 25% of our news audience participants were confident they had encountered generative AI in journalism. About 50% were unsure or suspected they had. This suggests a potential lack of transparency from news organizations when they use generative AI. It could also reflect a lack of trust between news outlets and audiences.

Who or what makes your news -- and how -- matters for a host of reasons. Some outlets tend to use more or fewer sources, for example. Or use certain kinds of sources -- such as politicians or experts -- more than others. Some outlets under-represent or misrepresent parts of the community. This is sometimes because the news outlet's staff themselves aren't representative of their audience. Carelessly using AI to produce or edit journalism can reproduce some of these inequalities.

Our report identifies dozens of ways journalists and news organizations can use generative AI. It also summarizes how comfortable news audiences are with each. The news audiences we spoke to overall felt most comfortable with journalists using AI for behind-the-scenes tasks rather than for editing and creating. These include using AI to transcribe an interview or to provide ideas on how to cover a topic. But comfort is highly dependent on context. Audiences were quite comfortable with some editing and creating tasks when the perceived risks were lower.

The problem -- and opportunity

Generative AI can be used in just about every part of journalism. For example, a photographer could cover an event. Then, a generative AI tool could select what it "thinks" are the best images, edit the images to optimize them, and add keywords to each.

These might seem like relatively harmless applications. But what if the AI identifies something or someone incorrectly, and these keywords lead to misidentifications in the photo captions? What if the criteria humans think make "good" images are different from what a computer might think? These criteria may also change over time or in different contexts. Even something as simple as lightening or darkening an image can cause a furor when politics are involved.

AI can also make things up completely. Images can appear photorealistic but show things that never happened. Videos can be entirely generated with AI, or edited with AI to change their context.

Generative AI is also frequently used for writing headlines or summarizing articles. These sound like helpful applications for time-poor individuals, but some news outlets are using AI to rip off others' content. AI-generated news alerts have also gotten the facts wrong. As an example, Apple recently suspended its automatically generated news notification feature. It did this after the feature falsely claimed US murder suspect Luigi Mangione had killed himself, with the source attributed as the BBC.

What do people think about journalists using AI?

Our research found news audiences seem to be more comfortable with journalists using AI for certain tasks when they themselves have used it for similar purposes. For example, the people interviewed were largely comfortable with journalists using AI to blur parts of an image. Our participants said they used similar tools on video conferencing apps or when using the "portrait" mode on smartphones.

Likewise, when you insert an image into popular word processing or presentation software, it might automatically create a written description of the image for people with vision impairments. Those who'd previously encountered such AI descriptions of images felt more comfortable with journalists using AI to add keywords to media.

The most frequent way our participants encountered generative AI in journalism was when journalists reported on AI content that had gone viral. For example, when an AI-generated image purported to show Princes William and Harry embracing at King Charles's coronation, news outlets reported on this false image.

Our news audience participants also saw notices that AI had been used to write, edit or translate news articles. They saw AI-generated images accompanying some of these. This is a popular approach at The Daily Telegraph, which uses AI-generated images to illustrate many of its opinion columns.

Overall, our participants felt most comfortable with journalists using AI for brainstorming or for enriching already created media. This was followed by using AI for editing and creating. But comfort depends heavily on the specific use. Most of our participants were comfortable with turning to AI to create icons for an infographic. But they were quite uncomfortable with the idea of an AI avatar presenting the news, for example. On the editing front, a majority of our participants were comfortable with using AI to animate historical images. AI can be used to "enliven" an otherwise static image in the hopes of attracting viewer interest and engagement.

Your role as an audience member

If you're unsure if or how journalists are using AI, look for a policy or explainer from the news outlet on the topic. If you can't find one, consider asking the outlet to develop and publish a policy. Consider supporting media outlets that use AI to complement and support -- rather than replace -- human labor. Before making decisions, consider the past trustworthiness of the journalist or outlet in question, and what the evidence says.
[2]
Generative AI is already being used in journalism - here's how people feel about it
[3]
Generative AI is already being used in journalism - here's how people feel about it
A new report reveals how news audiences and journalists feel about the use of generative AI in newsrooms, highlighting concerns about transparency, accuracy, and ethical implications.
A recent report has shed light on the growing use of generative artificial intelligence (AI) in journalism and its implications for both news organizations and their audiences. The study, which spanned three years and involved research across seven countries, reveals a complex landscape of perceptions and concerns surrounding this rapidly evolving technology [1].
One of the most striking findings is the apparent lack of awareness among news consumers about the use of generative AI in journalism. Only 25% of the study's participants were confident they had encountered AI-generated content, while about 50% were unsure or merely suspected its presence [2]. This disparity suggests a potential transparency issue in how news organizations are implementing and disclosing their use of AI technologies.
The research indicates that news audiences are generally more comfortable with AI being used for behind-the-scenes tasks rather than for content creation and editing. Tasks such as transcribing interviews or generating story ideas were viewed more favorably. However, comfort levels varied significantly depending on the specific application and perceived risks [3].
While generative AI offers numerous opportunities for enhancing journalistic processes, it also presents significant risks. These include misidentifying people or objects in photo captions, producing photorealistic images of events that never happened, editing video in ways that change its context, and generating headlines or news alerts that get the facts wrong.
Interestingly, the study found that news consumers were more accepting of AI use in journalism when they had personal experience with similar technologies. For instance, participants who had used AI-powered image editing tools in their daily lives were more comfortable with journalists using AI for tasks like blurring parts of an image [2].
The research highlighted a spectrum of AI applications in journalism, each eliciting different reactions from the audience: behind-the-scenes tasks such as transcribing interviews and brainstorming story ideas drew the most acceptance, editing tasks such as blurring parts of an image or animating historical photos were accepted when the perceived risks were low, and creative uses such as an AI avatar presenting the news drew the most discomfort.
As generative AI continues to integrate into newsrooms, the study underscores the importance of transparency and ethical considerations. News organizations are encouraged to develop and publish clear policies on their use of AI technologies. Meanwhile, news consumers are advised to seek out information about how their preferred news outlets are implementing AI and to engage in dialogue about these practices [1].
A recent survey reveals widespread apprehension among Australians regarding artificial intelligence. The study emphasizes the crucial role of media literacy in addressing these concerns and navigating the evolving AI landscape.
5 Sources
A survey of Canadian news consumers reveals strong preferences for transparency in AI use in journalism, with concerns about accuracy, trust, and the potential spread of misinformation.
2 Sources
New research from the University of Kansas reveals that readers' trust in news decreases when they believe AI is involved in its production, even when they don't fully understand the extent of AI's contribution.
3 Sources
As AI-powered search transforms the media landscape, newsrooms are adopting new strategies to stay relevant. From pivoting to reader-revenue models to leveraging AI for support tasks, media outlets are finding innovative ways to engage audiences and maintain their relevance in a rapidly changing digital environment.
2 Sources
A new study reveals that while AI-generated stories can match human-written ones in quality, readers show a bias against content they believe is AI-created, even when it's not.
6 Sources