AI Chatbots Spread Misinformation While Being Used for Fact-Checking

Reviewed by Nidhi Govil


As AI chatbots like Grok, ChatGPT, and Gemini are increasingly used for fact-checking, concerns are growing about their reliability and their potential to spread misinformation, especially during critical events like the recent India-Pakistan conflict.

AI Chatbots Fail as Reliable Fact-Checkers

In the wake of a four-day conflict between India and Pakistan, social media users turned to AI-powered chatbots for fact-checking, only to encounter more misinformation. This trend highlights a growing concern about the reliability of AI tools in verifying information, especially during critical events 1.

The Rise of AI in Fact-Checking

As tech platforms reduce human fact-checkers, users are increasingly relying on AI chatbots such as xAI's Grok, OpenAI's ChatGPT, and Google's Gemini for information verification. The phrase "Hey @Grok, is this true?" has become commonplace on Elon Musk's platform X, where the AI assistant is integrated 2.

Source: VnExpress International

Misinformation Propagation by AI

Recent incidents have exposed the unreliability of AI chatbots in fact-checking:

  1. Grok misidentified old footage from Sudan's Khartoum airport as a missile strike on Pakistan's Nur Khan airbase during the India-Pakistan conflict.
  2. Unrelated footage of a fire in Nepal was incorrectly labeled as Pakistan's military response to Indian strikes.
  3. When questioned by AFP fact-checkers, Gemini confirmed an AI-generated image as authentic and went on to fabricate details about the subject's identity and location 3.

Research Findings on AI Chatbot Reliability

Studies have consistently shown the limitations of AI chatbots in fact-checking:

  1. NewsGuard found that 10 leading chatbots were prone to repeating falsehoods, including Russian disinformation and misleading claims about the Australian election.
  2. The Tow Center for Digital Journalism at Columbia University reported that chatbots often provide incorrect or speculative answers instead of admitting uncertainty 4.

Shift in Fact-Checking Landscape

Source: Tech Xplore

The reliance on AI chatbots coincides with significant changes in the fact-checking ecosystem:

  1. Meta announced the end of its third-party fact-checking program in the United States.
  2. The task of debunking falsehoods is being transferred to ordinary users through models like "Community Notes" on X.
  3. Researchers question the effectiveness of user-generated fact-checking in combating misinformation.

Concerns Over AI Bias and Control

The quality and accuracy of AI chatbots can vary based on their training and programming, raising concerns about potential political influence or control. A recent incident involving Grok generating unsolicited posts about "white genocide" in South Africa has intensified these worries 1.

Expert Opinions

McKenzie Sadeghi from NewsGuard warns, "AI chatbots are not reliable sources for news and information, particularly when it comes to breaking news." Angie Holan, director of the International Fact-Checking Network, expresses concern about AI assistants providing biased answers or fabricated results, especially on sensitive topics 2.

Source: Economic Times
