On Thu, 5 Dec, 4:02 PM UTC
2 Sources
[1]
ChatGPT search can't find the real news, even with a publisher holding its hand
OpenAI proudly debuted ChatGPT search in October as the next stage for search engines. The company boasted that the new feature combined ChatGPT's conversational skills with the best web search tools, offering real-time information in a more useful form than any list of links. According to a recent review by Columbia University's Tow Center for Digital Journalism, that celebration may have been premature. The report found ChatGPT to have a somewhat laissez-faire attitude toward accuracy, attribution, and basic reality when sourcing news stories. What's especially notable is that the problems crop up regardless of whether a publication blocks OpenAI's web crawlers or has an official licensing deal with OpenAI for its content.

The study tested 200 quotes from 20 publications and asked ChatGPT to source them. The results were all over the place. Sometimes, the chatbot got it right. Other times, it attributed quotes to the wrong outlet or simply made up a source. OpenAI's partners, including The Wall Street Journal, The Atlantic, and the Axel Springer and Meredith publications, sometimes fared better, but not with any consistency.

Gambling on accuracy when asking ChatGPT about the news is not what OpenAI or its partners want. The deals were trumpeted as a way for OpenAI to support journalism while improving ChatGPT's accuracy. Yet when ChatGPT turned to Politico, published by Axel Springer, for quotes, the speaker was often not the person the chatbot cited.

The root of the problem is ChatGPT's method of finding and digesting information. The web crawlers ChatGPT uses to access data can be performing perfectly, but the AI model underlying ChatGPT can still make mistakes and hallucinate. Licensed access to content doesn't change that basic fact.

Of course, if a publication blocks the web crawlers, ChatGPT's accuracy can slide from newshound to wolf in sheep's clothing. Outlets that use robots.txt files to keep ChatGPT away from their content, like The New York Times, leave the AI floundering and fabricating sources instead of saying it has no answer for you. More than a third of the responses in the report fit this description. That's more than a small coding fix. Arguably worse, when ChatGPT couldn't access legitimate sources, it would turn to places where the same content was published without permission, perpetuating plagiarism.

Ultimately, the misattributed quotes themselves matter less than what they imply for journalism and for AI tools like ChatGPT. OpenAI wants ChatGPT search to be where people turn for quick, reliable answers, linked and cited properly. If it can't deliver, it undermines trust in both AI and the journalism it's summarizing. For OpenAI's partners, the revenue from their licensing deals might not be worth the lost traffic from unreliable links and citations. So, while ChatGPT search can be a boon for a lot of activities, be sure to check those links if you want to ensure the AI isn't hallucinating answers from the internet.
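As a side note on the robots.txt mechanism mentioned above: whether a given crawler is blocked by a site can be checked with Python's standard urllib.robotparser. The snippet below is a minimal sketch; the user-agent names (GPTBot, OAI-SearchBot) come from OpenAI's published crawler documentation and the example URLs are assumptions chosen for illustration.

```python
# Minimal sketch: check whether a site's robots.txt blocks OpenAI's crawlers.
from urllib.robotparser import RobotFileParser

site = "https://www.nytimes.com"      # example site reported to block OpenAI crawlers
page = f"{site}/section/technology"   # example page to test against the rules

parser = RobotFileParser()
parser.set_url(f"{site}/robots.txt")
parser.read()                         # fetch and parse the site's robots.txt

for agent in ("GPTBot", "OAI-SearchBot"):
    allowed = parser.can_fetch(agent, page)
    print(f"{agent} may crawl {page}: {allowed}")
```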
[2]
Don't trust ChatGPT Search and definitely verify anything it tells you
Columbia's Tow Center for Digital Journalism report shows that ChatGPT Search may not be as accurate as advertised. In October, OpenAI integrated ChatGPT Search into ChatGPT, promising an experience in which users could browse the web and access the latest news from its news partners and from sites that have not blocked OpenAI's web crawler. A new review by Columbia's Tow Center for Digital Journalism shows that the process may not be as reliable as it sounds.

The Tow Center performed a test to determine how well publisher content is represented in ChatGPT. It selected 10 articles each from 20 publishers: outlets that have partnered with OpenAI, outlets involved in lawsuits against OpenAI, and unaffiliated publishers that either allow or block the web crawler. The researchers then extracted 200 quotes which, when run through search engines like Google or Bing, pointed back to the source within the top three results. Finally, it was time to let ChatGPT identify the quotes' sources. Ultimately, the goal was to see if the AI accurately serves publications, giving them credit for their work. If the approach worked as advertised, it should be able to attribute the sources just as well.

The results varied in accuracy: some answers were entirely correct or incorrect, and some partially correct. Yet nearly all were presented confidently, without the AI admitting it couldn't produce an answer, even for publishers who had blocked its web crawler. In only seven of its outputs did ChatGPT use words or phrases that signaled uncertainty.

"Beyond misleading users, ChatGPT's false confidence could risk causing reputational damage to publishers," the article stated. That statement was backed up by an example in which ChatGPT inaccurately attributed a quote from the Orlando Sentinel to a Time article; over a third of ChatGPT's incorrect citations were of that nature. In addition to harming traffic, misattribution can damage a publication's brand and its trust with its audience.

Other problematic findings from the experiment include ChatGPT citing a New York Times article, which the paper has blocked it from crawling, via another website that had plagiarized the piece, and citing a syndicated version of an MIT Tech Review article instead of the original, even though MIT Tech Review does allow crawling.

Ultimately, this research points to a larger question: whether partnering with these AI companies gives publishers more control, and whether new AI search engines truly benefit publishers or hurt their businesses in the long run. The data behind the methodology is shared publicly on GitHub.

Consumers should always verify the source by clicking on the footnote the AI provides or by doing a quick search on an established search engine, such as Google. These extra steps will help catch hallucinations.
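To make the setup concrete, here is a minimal sketch of the scoring side of such a test. The ask_chatgpt_for_source helper is hypothetical (a stand-in for prompting ChatGPT Search with a quote and extracting whatever URL it cites, not a real OpenAI API call), and the rubric is an illustrative simplification of the Tow Center's categories rather than their actual code.

```python
# Illustrative sketch of the scoring loop for a quote-attribution test.
from urllib.parse import urlparse

def classify(cited_url: str, true_url: str) -> str:
    """Simplified rubric: exact article match -> correct, same publisher
    domain -> partially correct, anything else or no citation -> incorrect."""
    if not cited_url:
        return "incorrect"
    if cited_url.rstrip("/") == true_url.rstrip("/"):
        return "correct"
    if urlparse(cited_url).netloc == urlparse(true_url).netloc:
        return "partially correct"
    return "incorrect"

def run_test(quotes, ask_chatgpt_for_source):
    """quotes is a list of (quote_text, true_source_url) pairs drawn from the
    selected publishers; returns a tally of how the citations were scored."""
    tally = {"correct": 0, "partially correct": 0, "incorrect": 0}
    for quote_text, true_url in quotes:
        cited_url = ask_chatgpt_for_source(quote_text)
        tally[classify(cited_url, true_url)] += 1
    return tally
```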
A Columbia University study reveals that ChatGPT's search function often misattributes or fabricates news sources, raising concerns about its reliability for accessing current information.
OpenAI's ChatGPT search function, launched in October with promises of revolutionizing web searches, has come under scrutiny following a recent study by Columbia University's Tow Center for Digital Journalism. The research reveals significant issues with the AI's ability to accurately attribute news sources, raising concerns about its reliability and potential impact on journalism [1].
The Tow Center conducted a comprehensive test involving 200 quotes from 20 publications, challenging ChatGPT to identify their sources. The results were inconsistent, with the AI sometimes providing correct attributions but often misattributing quotes or even fabricating sources [2].
Notably, the study found that misattribution and fabrication occurred regardless of whether a publisher had licensed its content to OpenAI, blocked its crawlers, or had no relationship with the company at all, and that ChatGPT expressed any uncertainty in only seven of its 200 responses.
The study's findings have significant implications for both the AI industry and journalism: misattributed and fabricated citations undermine trust in AI search and in the journalism it summarizes, and publishers risk losing traffic and brand credibility when their work is credited incorrectly or not at all.
OpenAI has established partnerships with several major publications, aiming to support journalism while improving ChatGPT's accuracy. However, the study suggests that these partnerships have not consistently improved the AI's performance in attributing sources correctly [1].
The issues stem from ChatGPT's fundamental approach to processing information: even when its web crawlers work correctly, the underlying model can still hallucinate, and when crawlers are blocked it tends to fabricate sources or fall back on plagiarized or syndicated copies rather than admit it has no answer.
Given the current limitations of ChatGPT's search function, users are advised to verify cited sources by clicking the footnotes the AI provides or by running the quote through an established search engine such as Google.
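For readers who want to automate that last step, here is a minimal sketch that fetches the page ChatGPT cited and checks whether the quoted text appears verbatim, using only Python's standard urllib. The URL and quote in the usage comment are placeholders; this is a rough heuristic under the assumption that the page serves its text as plain HTML, not a definitive check.

```python
# Rough check: does the quoted text actually appear on the page ChatGPT cited?
# A miss is a prompt to look manually (pages may paraphrase, paginate, or need
# JavaScript), not proof of a hallucination.
import urllib.request

def quote_appears_on_page(quote: str, cited_url: str) -> bool:
    request = urllib.request.Request(
        cited_url, headers={"User-Agent": "Mozilla/5.0"}
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        html = response.read().decode("utf-8", errors="replace")
    return quote.lower() in html.lower()

# Example usage with placeholder values:
# print(quote_appears_on_page("the quote ChatGPT attributed", "https://example.com/article"))
```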
OpenAI's ChatGPT Search feature is found vulnerable to manipulation through hidden text and prompt injections, raising concerns about the reliability of AI-powered web searches.
2 Sources
OpenAI introduces ChatGPT Search, a new feature that combines AI-powered chatbot capabilities with up-to-date online search results, potentially disrupting Google's long-standing supremacy in the search engine market.
78 Sources
OpenAI's ChatGPT Search emerges as a potential rival to Google, offering a conversational AI-powered search experience with real-time web information and enhanced usability.
8 Sources
OpenAI expands ChatGPT's search functionality to all users, introducing a potential rival to Google's search engine with AI-powered, conversational results and enhanced mobile features.
23 Sources
A BBC study finds that popular AI chatbots, including ChatGPT, Google Gemini, Microsoft Copilot, and Perplexity AI, produce significant errors when summarizing news articles, raising concerns about their reliability for news consumption.
2 Sources