ChatGPT Search Struggles with Accuracy in News Attribution, Study Finds

A Columbia University study reveals that ChatGPT's search function often misattributes or fabricates news sources, raising concerns about its reliability for accessing current information.

ChatGPT Search Faces Accuracy Challenges

OpenAI's ChatGPT search function, launched in October with promises of revolutionizing web searches, has come under scrutiny following a recent study by Columbia University's Tow Center for Digital Journalism. The research reveals significant issues with the AI's ability to accurately attribute news sources, raising concerns about its reliability and potential impact on journalism [1].

Study Methodology and Findings

The Tow Center conducted a comprehensive test involving 200 quotes from 20 publications, challenging ChatGPT to identify their sources. The results were inconsistent, with the AI sometimes providing correct attributions but often misattributing quotes or even fabricating sources [2].

Notably, the study found that:

  1. ChatGPT's performance was inconsistent even with OpenAI's official partners, including The Wall Street Journal and The Atlantic.
  2. When unable to access content from publications blocking its web crawlers, ChatGPT would often fabricate sources instead of admitting a lack of information.
  3. In some cases, the AI cited plagiarized content or syndicated versions of articles rather than original sources.

Implications for Journalism and AI

The study's findings have significant implications for both the AI industry and journalism:

  1. Trust Issues: ChatGPT's inconsistent accuracy undermines trust both in AI tools and in the journalism they attempt to summarize.
  2. Traffic and Revenue Concerns: Misattributions could lead to lost traffic for news outlets, potentially outweighing the benefits of licensing deals with OpenAI.
  3. Ethical Considerations: The AI's tendency to cite plagiarized content when blocked from original sources raises ethical concerns about perpetuating copyright infringement.

OpenAI's Partnerships and Their Effectiveness

OpenAI has established partnerships with several major publications, aiming to support journalism while improving ChatGPT's accuracy. However, the study suggests that these partnerships have not consistently improved the AI's performance in attributing sources correctly [1].

Underlying Causes of Inaccuracies

The issues stem from ChatGPT's fundamental approach to processing information:

  1. AI Limitations: Even with perfect web crawling, the underlying AI model can make mistakes or "hallucinate" information.
  2. Blocked Access: When publications use robots.txt files to block ChatGPT's web crawlers, the AI often fabricates sources or cites unauthorized reproductions of content rather than acknowledging that it cannot access the original (a minimal robots.txt illustration appears after this list).
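
To illustrate the blocking mechanism, the sketch below shows how a hypothetical publisher's robots.txt might disallow OpenAI's crawlers while still admitting other bots, checked with Python's standard urllib.robotparser. The crawler names (GPTBot, OAI-SearchBot) follow OpenAI's published user-agent documentation; the publisher domain, URL, and specific rules are illustrative assumptions, not details taken from the study.

```python
# Minimal sketch: how a publisher's robots.txt can block AI crawlers.
# The robots.txt content and the article URL below are hypothetical.
from urllib import robotparser

# A hypothetical publisher's robots.txt that disallows OpenAI's crawlers
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: OAI-SearchBot
Disallow: /

User-agent: *
Allow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

url = "https://example-news-site.com/2024/11/some-article"
for agent in ("GPTBot", "OAI-SearchBot", "Googlebot"):
    allowed = parser.can_fetch(agent, url)
    print(f"{agent}: {'allowed' if allowed else 'blocked'}")
```

In this configuration the AI-search crawlers are blocked at the site root while ordinary crawlers remain allowed, which is the situation the study associates with ChatGPT fabricating sources or falling back on secondhand copies of articles.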

Recommendations for Users

Given the current limitations of ChatGPT's search function, users are advised to:

  1. Verify Sources: Always check the provided links or conduct separate searches to confirm information.
  2. Be Skeptical: Treat ChatGPT's confident assertions with caution, especially regarding current events and news.
  3. Use Multiple Tools: Complement ChatGPT with traditional search engines and direct visits to reputable news sites for accurate information.