4 Sources
[1]
Largest study of its kind shows AI assistants misrepresent news content 45% of the time - regardless of language or territory
New research coordinated by the European Broadcasting Union (EBU) and led by the BBC has found that AI assistants - already a daily information gateway for millions of people - routinely misrepresent news content no matter which language, territory, or AI platform is tested.

The intensive international study, unprecedented in scope and scale, was launched at the EBU News Assembly in Naples. Involving 22 public service media (PSM) organizations in 18 countries working in 14 languages, it identified multiple systemic issues across four leading AI tools. Professional journalists from participating PSM evaluated more than 3,000 responses from ChatGPT, Copilot, Gemini, and Perplexity against key criteria, including accuracy, sourcing, distinguishing opinion from fact, and providing context.

AI assistants are already replacing search engines for many users. According to the Reuters Institute's Digital News Report 2025, 7% of total online news consumers use AI assistants to get their news, rising to 15% of under-25s.

'This research conclusively shows that these failings are not isolated incidents,' says EBU Media Director and Deputy Director General Jean Philip De Tender. 'They are systemic, cross-border, and multilingual, and we believe this endangers public trust. When people don't know what to trust, they end up trusting nothing at all, and that can deter democratic participation.'

Peter Archer, BBC Programme Director, Generative AI, says: 'We're excited about AI and how it can help us bring even more value to audiences. But people must be able to trust what they read, watch and see. Despite some improvements, it's clear that there are still significant issues with these assistants. We want these tools to succeed and are open to working with AI companies to deliver for audiences and wider society.'

Next steps

The research team has also released a News Integrity in AI Assistants Toolkit to help develop solutions to the issues uncovered in the report, covering both the quality of AI assistant responses and media literacy among users. Building on the extensive insights and examples identified in the current research, the Toolkit addresses two main questions: "What makes a good AI assistant response to a news question?" and "What are the problems that need to be fixed?"

In addition, the EBU and its Members are pressing EU and national regulators to enforce existing laws on information integrity, digital services, and media pluralism. They also stress that ongoing independent monitoring of AI assistants is essential, given the fast pace of AI development, and are seeking options for continuing the research on a rolling basis.

About the project

This study built on research published by the BBC in February 2025, which first highlighted AI's problems in handling news. This second round expanded the scope internationally, confirming that the issue is systemic and not tied to language, market, or AI assistant.

Separately, the BBC has today published research into audience use and perceptions of AI assistants for news. It shows that many people trust AI assistants to be accurate: just over a third of UK adults say they trust AI to produce accurate summaries, rising to almost half among people under 35. The findings raise major concerns. Many people assume AI summaries of news content are accurate when they are not; and when they see errors, they blame news providers as well as AI developers, even if those mistakes are the product of the AI assistant. Ultimately, these errors could negatively impact people's trust in news and news brands.
[2]
Global study on news integrity in AI assistants shows need for safeguards and improved accuracy
NPR was one of 22 public service media (PSM) organizations across 18 countries participating in the study, led by the BBC and the European Broadcasting Union (EBU).

At NPR, we recognize our responsibility to understand AI's impact on journalism and to advocate for best practices that ensure our reporting is represented accurately. To that end, NPR participated in a global research study led by the BBC and the EBU on news integrity in AI assistants. This was one of the largest evaluations of its kind to date, including 22 public service media organizations across 18 countries, working in 14 languages.

The study's results, released today by the BBC and the EBU, found that AI assistants routinely misrepresent news content no matter which language, territory, or AI platform is tested. An accompanying toolkit outlines problems that need to be solved to address the study's findings.

As a public media organization, NPR is committed to delivering trusted, accurate journalism to our audiences, even as news consumption habits change. AI assistants are already replacing search engines for many users. According to the Reuters Institute's Digital News Report 2025, 7% of total online news consumers use AI assistants to get their news, rising to 15% of under-25s.

This study gave us a unique opportunity to collaborate with a well-respected set of journalism organizations to analyze how AI assistants summarize and represent news content. Fourteen members of NPR's editorial staff volunteered to serve as reviewers of the AI assistants' answers. As part of the study, we temporarily stopped blocking the relevant bots from accessing our content for approximately two weeks to collect the necessary responses for our analysis; content blocking was then re-enabled.

The study identified multiple systemic issues across four leading AI tools. Based on data from 18 countries and 14 languages, 45% of all AI answers had at least one significant issue, and 31% of responses showed serious sourcing problems - missing, misleading, or incorrect attributions.

The results help us consider what safeguards and audience education may be necessary, and can inform our strategies and training for internal AI adoption. These findings also reinforce the importance of our existing principles and standards, which require that all final work products be reviewed, fact-checked, and edited by humans, and not rely on AI for accuracy. NPR also contributed to the News Integrity in AI Assistants Toolkit, intended as a resource for technology companies, media organizations, the research community, and the general public.
[3]
Can AI Be Trusted for News Reporting? Study Finds 45% of Responses Misleading
The European Broadcasting Union (EBU) and other media organizations are urging governments and regulators to make information integrity laws a reality. They have launched the 'Facts In: Facts Out' campaign, which calls on AI technologies to handle news content responsibly. "If facts go in, facts must come out. AI tools should not compromise the integrity of the news they consume," the campaign urges. BBC programme director of generative AI Peter Archer said, "AI has potential, but people need to be able to trust what they read, watch, and see." The research shows that as AI assistants become a significant source of news, supervision and accountability are key to preserving public trust.
[4]
AI assistants make widespread errors about the news, new research shows
GENEVA (Reuters) - Leading AI assistants misrepresent news content in nearly half their responses, according to new research published on Wednesday by the European Broadcasting Union (EBU) and the BBC.

The international research studied 3,000 responses to questions about the news from leading artificial intelligence assistants - software applications that use AI to understand natural language commands to complete tasks for a user. It assessed AI assistants in 14 languages, including ChatGPT, Copilot, Gemini, and Perplexity, for accuracy, sourcing, and the ability to distinguish opinion from fact.

Overall, 45% of the AI responses studied contained at least one significant issue, with 81% having some form of problem, the research showed.

Reuters has made contact with the companies to seek their comment on the findings. Gemini, Google's AI assistant, has stated previously on its website that it welcomes feedback so that it can continue to improve the platform and make it more helpful to users. OpenAI and Microsoft have previously said hallucinations - when an AI model generates incorrect or misleading information, often due to factors such as insufficient data - are an issue they are seeking to resolve. Perplexity says on its website that one of its "Deep Research" modes has 93.9% accuracy in terms of factuality.

SOURCING ERRORS

A third of AI assistants' responses showed serious sourcing errors such as missing, misleading or incorrect attribution, according to the study. Some 72% of responses by Gemini had significant sourcing issues, compared with below 25% for all other assistants, it said.

Issues of accuracy were found in 20% of responses from all AI assistants studied, including outdated information, it said. Examples cited by the study included Gemini incorrectly stating changes to a law on disposable vapes and ChatGPT reporting Pope Francis as the current Pope several months after his death.

Twenty-two public-service media organisations from 18 countries including France, Germany, Spain, Ukraine, Britain and the United States took part in the study.

With AI assistants increasingly replacing traditional search engines for news, public trust could be undermined, the EBU said. "When people don't know what to trust, they end up trusting nothing at all, and that can deter democratic participation," EBU Media Director Jean Philip De Tender said in a statement.

Some 7% of all online news consumers and 15% of those aged under 25 use AI assistants to get their news, according to the Reuters Institute's Digital News Report 2025.

The new report urged AI companies to be held accountable and to improve how their AI assistants respond to news-related queries.

(Reporting by Olivia Le Poidevin, Editing by Timothy Heritage)
A large-scale international study led by the BBC and European Broadcasting Union finds that AI assistants frequently misrepresent news content across languages and territories. The research raises concerns about information integrity and public trust in AI-generated news summaries.
A groundbreaking international study coordinated by the European Broadcasting Union (EBU) and led by the BBC has uncovered significant issues with AI assistants' ability to accurately represent news content. The research, involving 22 public service media (PSM) organizations across 18 countries and 14 languages, found that AI assistants misrepresent news content 45% of the time, regardless of the language, territory, or AI platform used [1].

The study evaluated over 3,000 responses from leading AI tools, including ChatGPT, Copilot, Gemini, and Perplexity. Professional journalists assessed these responses against key criteria such as accuracy, sourcing, distinguishing opinion from fact, and providing context [1]. This extensive research builds upon a previous BBC study published in February 2025, confirming that the issues are systemic and not limited to specific languages or markets [1].

The research revealed that 45% of all AI answers had at least one significant issue, with 81% showing some form of problem [4]. Notably, 31% of responses demonstrated serious sourcing problems, including missing, misleading, or incorrect attributions [2]. Accuracy issues were found in 20% of responses, including outdated information and factual errors [4].

The study's findings raise significant concerns about the potential impact on public trust in news and information. Jean Philip De Tender, EBU Media Director, warns that when people don't know what to trust, they may end up trusting nothing at all, potentially deterring democratic participation [1].

The research gains importance in light of the growing use of AI assistants for news consumption. According to the Reuters Institute's Digital News Report 2025, 7% of online news consumers use AI assistants to access news, rising to 15% among those under 25 [1][2].

In response to these findings, the EBU and its members are urging EU and national regulators to enforce existing laws on information integrity, digital services, and media pluralism [1]. The 'Facts In: Facts Out' campaign has been launched, advocating for responsible handling of news content by AI technologies [3].

To address the issues uncovered in the report, the research team has released a News Integrity in AI Assistants Toolkit. This resource aims to improve AI assistant responses and enhance media literacy among users [1]. The study's participants stress the importance of ongoing independent monitoring of AI assistants, given the rapid pace of AI development [1][2].
Summarized by Navi