ChatGPT's GPT-5.2 model caught citing Grokipedia, sparking fears about AI misinformation loops


OpenAI's latest ChatGPT model has been found sourcing information from Grokipedia, Elon Musk's AI-generated encyclopedia, which allows no direct human editing. Tests revealed that the GPT-5.2 model cited Grokipedia nine times on obscure topics, including Iranian politics and the British historian Richard Evans, raising alarms about recursive loops in which AI models cite each other's unverified content.

ChatGPT Sources Information from AI-Written Encyclopedia

OpenAI's latest ChatGPT model, GPT-5.2, has been discovered sourcing data from Grokipedia, xAI's fully AI-generated encyclopedia created by Elon Musk. Testing conducted by The Guardian revealed that the model cited Grokipedia nine times when responding to more than a dozen queries, particularly on obscure or niche topics [2]. These included questions about Iranian political structures, such as salaries of the Basij paramilitary force and ownership of the Mostazafan Foundation, as well as biographical details about British historian Sir Richard Evans, who served as an expert witness against Holocaust denier David Irving [2].

Source: Digit

Unlike Wikipedia, Grokipedia does not allow direct human editing. Instead, an AI model writes all content and only responds to user requests for changes [2]. The AI-written encyclopedia, launched in October, has faced criticism for promoting right-leaning narratives on topics including same-sex marriage and the January 6 insurrection in the US [2]. ChatGPT's reliance on this platform for data sourcing raises serious questions about information reliability and the spread of misinformation through large language models.

The Risk of Recursive Loops and Model Collapse

The practice of AI systems training on AI-generated data has long concerned experts, who warn it could lead to model collapse, a phenomenon in which output quality degrades over successive generations [1]. While citing AI-generated content differs from using it for training, it still creates a dangerous recursive loop in which LLMs reference each other's unverified outputs [1]. This situation mirrors how rumors spread between humans, with "someone else said it" becoming the source, potentially creating digital folklore that proliferates at speeds far exceeding human information exchange [1].
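To make the model-collapse dynamic concrete, here is a minimal toy sketch in Python. It is an illustration of the underlying statistical effect only, not anything from the cited reports and not a real LLM pipeline: a simple Gaussian "model" is repeatedly refit on samples drawn from its own previous generation, and its diversity, measured as standard deviation, tends to shrink toward zero.

import numpy as np

# Toy model collapse: refit a Gaussian "model" on samples drawn
# from its own previous generation, over and over.
rng = np.random.default_rng(42)

mu, sigma = 0.0, 1.0   # generation 0 stands in for real human data
n_samples = 100        # each generation trains on this many samples

for generation in range(201):
    if generation % 25 == 0:
        print(f"gen {generation:3d}: mean={mu:+.3f}, std={sigma:.3f}")
    samples = rng.normal(mu, sigma, n_samples)  # model generates data
    mu, sigma = samples.mean(), samples.std()   # next model fits only that data

# The printed std tends to drift toward zero: each refit slightly
# underestimates the tails, and the loss compounds, even though every
# generation looks like a faithful copy of the one before it.

The same compounding logic is behind the worry about citation loops: each hop treats the previous model's output as if it were an independent source, so nothing new ever enters the chain.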

Source: Tom's Hardware

AI models are known to hallucinate or fabricate information. Nvidia CEO Jensen Huang admitted in 2024 that solving this issue remains "several years away" and requires significantly more computing power [1]. Anthropic's experiment with its 'Claudius' AI demonstrated this problem when the model hallucinated multiple times, even claiming it would hand-deliver drinks in person [1]. The issue becomes more acute when users trust ChatGPT to deliver accurate information without verifying the actual sources cited.

LLM Grooming and Propaganda Networks Exploit Unreliable Sources

The situation has drawn attention to LLM grooming, a practice in which malicious actors flood the internet with disinformation to influence AI models. Security experts raised alarms last spring about Russian propaganda networks churning out massive volumes of false information specifically designed to seed AI models with lies [2]. In June, concerns emerged in the US Congress when Google Gemini reportedly repeated the Chinese government's official position on human rights abuses in Xinjiang and China's Covid-19 policies [2].

Nina Jankowicz, a disinformation researcher who has studied LLM grooming, noted that Grokipedia entries reviewed by her colleagues were "relying on sources that are untrustworthy at best, poorly sourced and deliberate disinformation at worst" [2]. She expressed concern that when LLMs cite such unreliable sources, it may inadvertently boost their credibility, with users assuming that if ChatGPT references something, it must be trustworthy [2]. The problem extends beyond OpenAI, as Anthropic's Claude chatbot has also been observed referencing Grokipedia on topics ranging from petroleum production to Scottish ales [2][3].

OpenAI's Response and Ongoing Challenges with Safety Filters

An OpenAI spokesperson stated that the model's web search "aims to draw from a broad range of publicly available sources and viewpoints" and that the company applies safety filters to reduce risks associated with high-severity harms [2]. The company emphasized that ChatGPT clearly shows which sources informed a response through citations and maintains ongoing programs to filter out low-credibility information and influence campaigns [2]. However, the pattern observed in testing suggests these measures may not be sufficient.

Interestingly, ChatGPT did not cite Grokipedia when prompted directly about widely reported misinformation topics like the January 6 insurrection or media bias against Donald Trump [2]. Instead, the AI-written encyclopedia surfaced primarily for more obscure queries where verification is harder. In some instances, ChatGPT repeated stronger claims from Grokipedia than those found on Wikipedia, such as assertions about Iranian government links to MTN-Irancell and its connections to the office of Iran's supreme leader [2]. This pattern suggests AI bias may be harder to detect and address when it infiltrates responses on technical or niche subjects rather than high-profile controversies. When xAI was asked for comment, a spokesperson simply stated: "Legacy media lies" [2].
