8 Sources
[1]
ChatGPT isn't the only chatbot pulling answers from Elon Musk's Grokipedia
ChatGPT is using Grokipedia as a source, and it's not the only AI tool to do so. Citations to Elon Musk's AI-generated encyclopedia are starting to appear in answers from Google's AI Overviews, AI Mode, and Gemini, too. Data suggests that's on the rise, heightening concerns about accuracy and misinformation as Musk seeks to reshape reality in his image.

Since the warped Wikipedia clone launched late last October, Grokipedia technically remains a minor source of information overall. Glen Allsopp, head of marketing strategy and research at SEO company Ahrefs, told The Verge the firm's testing found Grokipedia referenced in more than 263,000 ChatGPT responses from 13.6 million prompts, citing roughly 95,000 individual Grokipedia pages. By comparison, Allsopp said the English-language Wikipedia showed up in 2.9 million responses. "They're quite a way off, but it's still impressive for how new they are," he said. Based on a dataset tracking billions of citations, Sartaj Rajpal, a researcher at marketing platform Profound, said Grokipedia received around 0.01 to 0.02 percent of all ChatGPT citations per day -- a small share but one that has steadily increased since mid-November. Semrush, which tracks how brands show up in Google tools' AI answers with its AI Visibility Toolkit, found a similar step-up in Grokipedia's visibility in AI answers from December, but noted it's still very much a secondary source compared to established reference platforms like Wikipedia.

Grokipedia citations appear on ChatGPT more than on any other platform that the analysts The Verge spoke to are tracking. However, Semrush found a similar spike in Google's AI products -- Gemini, AI Overviews, and AI Mode -- in December. Ahrefs' Allsopp said Grokipedia had been referenced in around 8,600 Gemini answers, 567 AI Overviews answers, 7,700 Copilot answers, and 2 Perplexity answers, from around 9.5 million, 120 million, 14 million, and 14 million prompts, respectively, with appearances in Gemini and Perplexity down significantly from similar testing the month before. None of the firms The Verge spoke to track citations for Anthropic's Claude, though several anecdotal reports on social media suggest the chatbot is also citing Grokipedia as a source.

In many cases, AI tools appear to be citing Grokipedia to answer niche, obscure, or highly specific factual questions, as The Guardian reported late last week. Analysts agree. Jim Yu, CEO of analytics firm BrightEdge, told The Verge that ChatGPT and AI Overviews use Grokipedia for largely "non-sensitive queries" like encyclopedic lookups and definitions, though differences are emerging in how much authority they afford it. For AI Overviews, Grokipedia tends not to stand alone, Yu said, and "typically appears alongside several other sources" as "a supplementary reference rather than a primary source." When ChatGPT uses Grokipedia as a source, however, it gives it much more authority, Yu said, "often featuring it as one of the first sources cited for a query."

Even for relatively mundane uses, experts warn that using Grokipedia as a source risks spreading disinformation and promoting partisan talking points. Unlike Wikipedia, which is edited by humans in a transparent process, Grokipedia is produced by xAI's chatbot Grok. Grok is perhaps best known for its Nazi meltdown, calling itself MechaHitler, idolizing Musk, and, most recently, digitally stripping people online, including minors.
When it launched, the bulk of Grokipedia's articles were direct clones of Wikipedia, though many others reflected racist and transphobic views. For example, articles about Musk conveniently downplay his family's wealth and unsavory elements of their past (like neo-Nazi and pro-Apartheid views), and the entry for "gay pornography" falsely linked the material to the worsening of the HIV/AIDS epidemic in the 1980s. The article on US slavery still contains a lengthy section on "ideological justifications," including the "Shift from Necessary Evil to Positive Good." Editing is also overseen by Grok and is similarly flawed, leaving Grokipedia more susceptible to what is known as "LLM grooming," or data poisoning.

In a comment to The Verge, OpenAI spokesperson Shaokyi Amdo said: "When ChatGPT searches the web, it aims to draw from a broad range of publicly available sources and viewpoints relevant to the user's question." Amdo also said that users can see the sources and judge them for themselves: "We apply safety filters to reduce the risk of surfacing links associated with high-severity harms, and ChatGPT clearly shows which sources informed a response through citations, allowing users to explore and assess the reliability of sources directly." Perplexity spokesperson Beejoli Shah would not comment on the risks of LLM grooming or of citing AI-generated material like Grokipedia, but said the company's "central advantage in search is accuracy," which it is "relentlessly focused on." Anthropic declined to answer on the record. xAI did not return The Verge's request for comment. Google declined to comment.

The point is that Grokipedia can't be reliably cited as a source at all, no matter how infrequently, and despite Musk taking an unsubstantiated victory lap about the encyclopedia's alleged wild success in Google Search results. It is an AI-generated system, lacking human oversight, often reliant on opaque, hard-to-verify material like personal websites and blog posts, and prone to questionable, potentially circular, sourcing. There's a real risk of an AI tool reinforcing biases, errors, or framing issues if it cites something like Grokipedia, said Taha Yasseri, chair of technology and society at Trinity College Dublin, adding that "fluency can easily be mistaken for reliability."

"Grokipedia feels like a cosplay of credibility," said Leigh McKenzie, director of online visibility at Semrush. "It might work inside its own bubble, but the idea that Google or OpenAI would treat something like Grokipedia as a serious, default reference layer at scale is bleak."
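Testing at the scale Ahrefs describes boils down to sending many prompts to a chatbot with web search enabled and tallying the domains of the citations that come back. Below is a minimal sketch of that idea, assuming the OpenAI Python SDK's Responses API with its built-in web search tool; the tool identifier, model name, and annotation layout here are assumptions that vary by API version, and this is not a reconstruction of Ahrefs' actual methodology.

```python
from collections import Counter
from urllib.parse import urlparse

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def cited_domains(prompt: str) -> set[str]:
    """Ask one question with web search enabled; collect the cited domains."""
    response = client.responses.create(
        model="gpt-5.2",                 # placeholder: the model named in these articles
        tools=[{"type": "web_search"}],  # may be "web_search_preview" on older API versions
        input=prompt,
    )
    domains: set[str] = set()
    # Walk the output defensively: URL citations arrive as annotations
    # attached to the assistant's text content.
    for item in response.output:
        for part in getattr(item, "content", None) or []:
            for ann in getattr(part, "annotations", None) or []:
                url = getattr(ann, "url", None)
                if url:
                    domains.add(urlparse(url).netloc)
    return domains

# A tiny prompt set; an audit like Ahrefs' would use millions of prompts.
prompts = [
    "Who owns the Mostazafan Foundation?",
    "What is Sir Richard Evans known for?",
]
tally = Counter(domain for p in prompts for domain in cited_domains(p))
print(tally.most_common(10))  # e.g., how often grokipedia.com shows up
```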
[2]
Where Does GPT-5.2 Get Its Information? In Some Cases, It's Grokipedia
ChatGPT's latest AI model, GPT-5.2, has been found sourcing its information from another AI-generated website, xAI's Grokipedia, raising concerns about the model's accuracy. According to The Guardian, GPT-5.2 cited Grokipedia as a source nine times when the outlet asked it questions about some lesser-known topics, including the Iranian government's ties to telecom company MTN-Irancell and British historian Richard Evans. (Anthropic's Claude was also found citing Grokipedia for some queries.)

Grokipedia was built by Elon Musk's xAI startup to take on Wikipedia, a platform he thought was biased. Whereas Wikipedia's pages are put together by volunteer human editors, most of Grokipedia's 6,092,140 articles are generated by Grok, though readers can submit edit suggestions. When Grokipedia debuted, we found that many entries were copied from Wikipedia or paraphrased from other sources, too.

At issue is a problem that researchers have been warning about for quite some time: if AI chatbots get their information from unreliable sources, it could lead to a lot more disinformation. Grok has been caught spreading misinformation on multiple occasions. Additionally, a report from last year said that a Russia-based disinformation network was trying to manipulate models like ChatGPT and Grok by publishing millions of articles that push its own narrative, in the hope that the AI would scrape them and use the information in its answers.

When asked about GPT-5.2's use of Grokipedia as a source, an OpenAI spokesperson told The Guardian that the model "aims to draw from a broad range of publicly available sources and viewpoints," adding that the company already has a system that filters out low-credibility information. We asked GPT-5.2 some random questions about Evans and the Iranian government but found no references to Grokipedia in its responses. This suggests OpenAI may have limited the use of Grokipedia as a source, or that it appears only for some specific prompts.

Disclosure: Ziff Davis, PCMag's parent company, filed a lawsuit against OpenAI in April 2025, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
[3]
ChatGPT found to be sourcing data from AI-generated content -- popular LLM uses content from Grokipedia as source for more obscure queries
ChatGPT's latest model, GPT-5.2, has been found to be sourcing data from Grokipedia, xAI's all-AI-generated Wikipedia competitor. According to The Guardian, the LLM would sometimes use Elon Musk's AI-generated online encyclopedia for uncommon topics like Iranian politics and details about British historian Sir Richard Evans.

Issues like this were raised as problematic a few years ago in AI training, where some experts argued that training AI on AI-generated data would degrade quality and lead to a phenomenon called "model collapse." And while citing AI-generated data is different from using it for training, it still poses risks to people relying on AI for research. The biggest issue is that AI models are known to hallucinate, or make up information that is wrong. For example, when Anthropic attempted to run a business with its 'Claudius' AI, the model hallucinated several times during the experiment, even saying that it would hand-deliver drinks, in person. Even Nvidia CEO Jensen Huang admitted in 2024 that solving this issue is still "several years away" and requires a lot more computing power.

Furthermore, many users trust that ChatGPT and other LLMs deliver accurate information, with only a few checking the actual sources used to answer a particular question. Because of this, ChatGPT repeating Grok's words can be problematic, especially as Grokipedia isn't edited directly by humans. Instead, it's completely AI-generated, and people can only request changes to its content -- not write or edit the articles directly.

Using another AI as a source creates a recursive loop, and we might eventually end up with LLMs citing unverified content from each other. This is no different from rumors and stories spreading between humans, with "someone else said it" as the source. It feeds the illusory truth effect, in which false information is deemed correct by many, despite evidence to the contrary, simply because it has been repeated by so many people. Human societies were similarly littered with myths and legends, passed down over hundreds of years through generations. With AI churning through mountains of data far faster than any human, however, the use of AI sources risks the proliferation of digital folklore with every query entered into an LLM.

What's more troubling is that various parties are already taking advantage of this. There have been reports of "LLM grooming," with The Guardian saying that some propaganda networks are "churning out massive volumes of disinformation in an effort to seed AI models with lies." This has raised concerns in the U.S., with Google's Gemini, for example, reportedly repeating the official party line of the Communist Party of China in 2024. That case seems to have been addressed for the moment, but if LLMs start citing other AI-generated sources that haven't been vetted and fact-checked, it's a new risk that people need to watch out for.
[4]
LLM Brainrot Is Here: Grokipedia Is Starting to Show Up in ChatGPT Citations
GPT-5.2 is "learning" from Musk's AI-generated version of Wikipedia. Elon Musk's controversial Grokipedia has begun creeping into ChatGPT and other chatbots' responses as a cited source, giving us a glimpse of the dead internet that's just around the corner. The Guardian reports that OpenAI's latest flagship model, GPT-5.2, cited Grokipedia nine times in response to more than a dozen questions. Those questions ranged from topics like political structures in Iran to British historian Sir Richard Evans. Gizmodo was also able to produce responses from ChatGPT that cited Grokipedia when making similar queries. Musk launched Grokipedia last October as an alternative to Wikipedia, one in which humans are taken out of the editing loop. In a post in September, Musk said Grokipedia would be "a massive improvement over Wikipedia." He has also repeatedly derided Wikipedia as "Wokipedia" and complained that there is no major alternative aligned with right-wing views. His solution was to create a new platform with articles generated by AI. Much of Grokipedia's content appears to be adapted from Wikipedia, but with framing that often favors Musk's political views. For example, Grokipedia describes the events of January 6, 2021, as a "riot" at the U.S. Capitol, which saw "supporters of outgoing President Donald Trump protest the certification of the 2020 presidential election results." Wikipedia, by contrast, calls it an "attack" carried out by a mob of Trump supporters in what it describes as an attempted self-coup. Additionally, Grokipedia labels Britain First as a "far-right British political party that advocates for national sovereignty," while Wikipedia describes it as a neo-fascist political party and hate group. Grokipedia also takes a softer framing regarding the so-called Great Replacement theory, which claims that white people are being systematically replaced by a concerted breeding effort being perpetuated by other races. Wikipedia explicitly labels the idea a conspiracy theory. Musk is an outspoken proponent of the conspiracy and regularly comments on "white genocide." In general, Grokipedia is designed to churn out unverified information at an industrial scale without human editors debating the quality of information it provides. Now, Grokipedia appears to be insidiously bleeding into other chatbots. The Guardian noted that ChatGPT did not cite Grokipedia when asked about topics in which the site had been known to promote misleading information. Instead, Grokipedia only showed up in responses to more obscure topics. The issue does not appear to be isolated to ChatGPT. Some users on social media have reported that Anthropic's Claude has also referenced Grokipedia in its answers. OpenAI and Anthropic, the company behind Claude, did not immediately respond to requests for comment from Gizmodo. However, OpenAI told The Guardian that its model "aims to draw from a broad range of publicly available sources and viewpoints." "We apply safety filters to reduce the risk of surfacing links associated with high-severity harms, and ChatGPT clearly shows which sources informed a response through citations," an OpenAI spokesperson told The Guardian. Researchers have previously warned about malicious actors flooding the internet with AI-generated content in an effort to influence large language models in a process sometimes referred to as LLM grooming. But the risks go beyond intentional misinformation campaigns. It's not totally clear if human users are actively visiting Grokipedia intentionally. 
Weeks after the site's launch last year, data aggregator Similarweb reported that Grokipedia had fallen from a high of 460,000 web visits in the US on Oct. 28 to about 30,000 daily visitors. Wikipedia routinely racks up hundreds of millions of pageviews per day. Many have speculated that Grokipedia isn't really for humans anyway; it exists to poison the well for future LLMs.

Over-relying on AI-generated content can also lead to what researchers call model collapse. A 2024 study found that when large language models are increasingly trained on data produced by other AI systems, their overall quality degrades over time. "In the early stage of model collapse, first models lose variance, losing performance on minority data," researcher Ilia Shumailov told Gizmodo at the time. "In the late stage of model collapse, [the] model breaks down fully." As models continue training on less accurate and less relevant text they've generated themselves, that loop causes outputs to degrade and eventually stop making much sense at all.
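The variance-loss dynamic Shumailov describes can be seen in miniature with a toy simulation. The sketch below is a simplified assumption made for illustration (fitting a Gaussian to its own samples, not training a language model): each generation is fitted only to data emitted by the previous one, so finite-sample error compounds and the fitted spread decays.

```python
import numpy as np

# Toy model-collapse demo: repeatedly fit a Gaussian to samples drawn from
# the previous generation's fit. This is an illustrative simplification,
# not the 2024 study's actual experiment. With only n samples per
# generation, the fitted standard deviation tends to drift toward zero,
# mirroring the early-stage "loss of variance" described above.

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0   # generation 0: the real data distribution
n = 50                 # each generation "trains" on just 50 samples

for gen in range(1, 301):
    data = rng.normal(mu, sigma, n)      # data produced by the previous model
    mu, sigma = data.mean(), data.std()  # fit the next model to it alone
    if gen % 50 == 0:
        print(f"generation {gen:3d}: mean={mu:+.3f}  std={sigma:.4f}")
```

On most runs the printed standard deviation shrinks steadily across generations even though no single step looks dramatic; that slow narrowing is the toy analogue of outputs losing diversity first and coherence later.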
[5]
ChatGPT is now indexing Grok's AI slop
The integration poses significant risks to information integrity, as biased or false content from one AI system influences another's responses.

More and more of the web is filling up with LLM-generated text, images, and even videos and music. It's an even bigger problem than it seems, because the "AI" systems that scoured the web to train their large language models are now re-indexing all that output. It's an ouroboros of AI slop... and now ChatGPT -- which, by most measures, is the most popular LLM -- is indexing Grokipedia.

Grokipedia is an AI-generated encyclopedia created last year by xAI, sister company to Elon Musk's social media site. It's almost entirely auto-generated with the Grok LLM, which has been integrated into the social network as well. Grokipedia is positioned as a conservative alternative to Wikipedia, which Musk considers "woke" and "propaganda." Grokipedia is filled with inaccuracies and AI hallucinations -- at an apparently higher rate than even normal LLM systems -- as Grok itself has been intentionally tweaked to conform to Musk's dictates. The system has been observed promoting conspiracy theories and other material that ranges from merely delusional to actively harmful.

Now it appears that OpenAI's ChatGPT is indexing Grokipedia to answer at least some users' queries. According to an investigation in The Guardian, ChatGPT 5.2 is selective about when it returns info gleaned from Grokipedia -- it won't give you immediate Grok-generated answers for the site's most well-known and documented falsehoods, such as HIV and AIDS misinformation. But when users pushed ChatGPT to go into more detail on controversies surrounding the Iranian government or Holocaust denier David Irving, the system did return info gleaned from Grok-generated pages.

The massive volume of text spat out by LLMs -- estimated to be more than half of all newly published articles as of late 2025 -- is becoming a problem. "AI" errors (or "hallucinations") can be spread, replicated, and repeated, essentially overwriting established knowledge with a copy error. The fundamentally iterative nature of large language models can also be weaponized. Google's Gemini AI has been seen repeating the Chinese Communist Party's official positions on the country's human rights abuses (or, according to Gemini, its lack thereof), and some security researchers believe Russia is pumping out LLM-generated propaganda text with the specific aim of having it integrated into other large language models.

Grok itself has been observed repeating explicitly hateful material, with the chatbot referring to itself as "MechaHitler." Starting in December 2025, it was also used to generate millions of sexualized images of minors via tools accessible on X. The tool was disabled for free users in early January, and X restricted it from being applied to images of real people in revealing clothing. Countries around the world have opened investigations into Grok and X following the incident, citing possible violations of various laws. Indonesia and Malaysia have outright blocked access to Grok.

Exactly why OpenAI chose to integrate Grok's output into ChatGPT -- not only seeking out auto-generated text but training its own systems on a rival's product -- is not clear. It may simply be that the ever-hungry nature of large language models, which are dependent on new input in order to iteratively adapt and change, means that OpenAI cannot be selective with its training.
[6]
AI chatbots like ChatGPT are using info from Elon Musk's Grokipedia, report reveals
When Elon Musk's Grokipedia isn't just copying Wikipedia word for word, it's spreading falsehoods about the AIDS epidemic, justifying slavery, and citing white supremacist websites. Now, at least two of the biggest AI chatbots, OpenAI's ChatGPT and Anthropic's Claude, are reportedly citing Grokipedia as a source in their answers to user prompts.

According to a new report from the Guardian, the outlet found that ChatGPT, powered by OpenAI's latest GPT-5.2 model, cited Grokipedia in answering questions related to Iran and other topics. In one instance, ChatGPT cited Grokipedia to provide debunked claims about Sir Richard Evans, a British historian who was the lead expert witness against Holocaust denier David Irving at his 2000 libel trial. The report also found that ChatGPT wasn't the only AI chatbot pulling information from Musk's Grokipedia: Anthropic's Claude was also citing Grokipedia for certain queries.

OpenAI told the Guardian that ChatGPT's web search "aims to draw from a broad range of publicly available sources and viewpoints." The company also said it applies "safety filters to reduce the risk of surfacing links associated with high-severity harms" and that ChatGPT clearly cites the sources it uses in its responses to users.

Security experts have pointed out that AI models can be manipulated into sharing disinformation and falsehoods through tactics like "LLM grooming." While it's unclear whether there's any third-party maliciousness behind ChatGPT's and Claude's usage of Grokipedia as a source, the Guardian notes that it's certainly concerning.

Grokipedia is powered by Elon Musk's AI company xAI and its AI chatbot Grok. Grok has had its own issues on Musk's social media platform X, where last summer it started praising Hitler and referring to itself as "MechaHitler." In a separate incident months earlier, Grok started replying to every query on X with right-wing conspiracies about "white genocide" in South Africa.

Musk created Grokipedia as an alternative to Wikipedia, which he has criticized in recent years. However, Grokipedia has quickly become a source of falsehoods and disinformation on politically charged topics. Musk himself has delved further into far-right ideology that goes beyond even his financial support of President Donald Trump. Just weeks ago, he shared an image on X that painted the apartheid state of Rhodesia, now known as Zimbabwe, in a positive light.

Disclosure: Ziff Davis, Mashable's parent company, in April 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
[7]
Latest ChatGPT model uses Elon Musk's Grokipedia as source, tests reveal
Guardian found OpenAI's platform cited Grokipedia on topics including Iran and Holocaust deniers

The latest model of ChatGPT has begun to cite Elon Musk's Grokipedia as a source on a wide range of queries, including on Iranian conglomerates and Holocaust deniers, raising concerns about misinformation on the platform.

In tests done by the Guardian, GPT-5.2 cited Grokipedia nine times in response to more than a dozen different questions. These included queries on political structures in Iran, such as salaries of the Basij paramilitary force and the ownership of the Mostazafan Foundation, and questions on the biography of Sir Richard Evans, a British historian and expert witness against Holocaust denier David Irving in his libel trial.

Grokipedia, launched in October, is an AI-generated online encyclopedia that aims to compete with Wikipedia, and which has been criticised for propagating rightwing narratives on topics including gay marriage and the 6 January insurrection in the US. Unlike Wikipedia, it does not allow direct human editing; instead, an AI model writes content and responds to requests for changes.

ChatGPT did not cite Grokipedia when prompted directly to repeat misinformation about the insurrection, about media bias against Donald Trump, or about the HIV/Aids epidemic - areas where Grokipedia has been widely reported to promote falsehoods. Instead, Grokipedia's information filtered into the model's responses when it was prompted about more obscure topics. For instance, ChatGPT, citing Grokipedia, repeated stronger claims about the Iranian government's links to MTN-Irancell than are found on Wikipedia - such as asserting that the company has links to the office of Iran's supreme leader. ChatGPT also cited Grokipedia when repeating information that the Guardian has debunked, namely details about Sir Richard Evans' work as an expert witness in David Irving's trial.

GPT-5.2 is not the only large language model (LLM) that appears to be citing Grokipedia; anecdotally, Anthropic's Claude has also referenced Musk's encyclopedia on topics from petroleum production to Scottish ales.

An OpenAI spokesperson said the model's web search "aims to draw from a broad range of publicly available sources and viewpoints". "We apply safety filters to reduce the risk of surfacing links associated with high-severity harms, and ChatGPT clearly shows which sources informed a response through citations," they said, adding that the company had ongoing programs to filter out low-credibility information and influence campaigns. Anthropic did not respond to a request for comment.

But the fact that Grokipedia's information is filtering - at times very subtly - into LLM responses is a concern for disinformation researchers. Last spring, security experts raised concerns that malign actors, including Russian propaganda networks, were churning out massive volumes of disinformation in an effort to seed AI models with lies, a process called "LLM grooming". In June, concerns were raised in the US Congress that Google's Gemini repeated the Chinese government's position on human rights abuses in Xinjiang and China's Covid-19 policies.

Nina Jankowicz, a disinformation researcher who has worked on LLM grooming, said ChatGPT's citing of Grokipedia raised similar concerns. While Musk may not have intended to influence LLMs, Grokipedia entries she and colleagues had reviewed were "relying on sources that are untrustworthy at best, poorly sourced and deliberate disinformation at worst", she said.
And the fact that LLMs cite sources such as Grokipedia or the Pravda network may, in turn, improve these sources' credibility in the eyes of readers. "They might say, 'oh, ChatGPT is citing it, these models are citing it, it must be a decent source, surely they've vetted it' - and they might go there and look for news about Ukraine," said Jankowicz.

Bad information, once it has filtered into an AI chatbot, can be challenging to remove. Jankowicz recently found that a large news outlet had included a made-up quote from her in a story about disinformation. She wrote to the news outlet asking for the quote to be removed, and posted about the incident on social media. The news outlet removed the quote. However, AI models for some time continued to cite it as hers. "Most people won't do the work necessary to figure out where the truth actually lies," she said.

When asked for comment, a spokesperson for xAI, the owner of Grokipedia, said: "Legacy media lies."
[8]
ChatGPT's new model is citing an AI-written encyclopedia, raising misinformation fears
ChatGPT cited Grokipedia multiple times while responding to a small set of questions, particularly on niche or complex topics.

Amid the rising competition in the AI space, the latest ChatGPT model is facing scrutiny after researchers noticed it citing an unusual and controversial source for factual information. According to reports, testing suggests that GPT-5.2 has begun referencing Grokipedia, an AI-generated online encyclopedia associated with Elon Musk.

As per testing conducted by The Guardian, ChatGPT cited Grokipedia multiple times while responding to a small set of questions, particularly on niche or complex topics. These included explanations of Iran's political system, such as the role of the Basij paramilitary force and the control of influential foundations, as well as background details about British historian Sir Richard Evans.

For the unversed, Grokipedia was introduced in October last year and positions itself as the biggest rival to Wikipedia. However, it operates on a very different model. Its entries are written and updated by an AI system rather than human editors, with users submitting requests instead of directly editing pages. Since its debut, the platform has faced criticism for allegedly promoting right-leaning narratives on issues such as same-sex marriage and the January 6 US Capitol attack.

What caught researchers' attention was the pattern of Grokipedia's appearances in ChatGPT responses. When the chatbot was asked about widely known misinformation topics, it tended to avoid the source. However, Grokipedia surfaced when queries involved less familiar or more technical subjects. In some cases, ChatGPT echoed claims found on Grokipedia that were stronger or more controversial than those presented on Wikipedia, including assertions that have previously been challenged or debunked.

The report also added that the issue was not limited to ChatGPT: other AI chatbots, such as Anthropic's Claude, have also referenced Grokipedia while answering questions on topics ranging from global oil production to regional food and drink. OpenAI, ChatGPT's maker, has said that the chatbot draws from a broad mix of publicly available sources and applies safety measures to limit harmful or misleading content, while aiming to clearly attribute information.
ChatGPT is increasingly citing Grokipedia, Elon Musk's AI-generated encyclopedia, as a source in its responses. Data shows over 263,000 ChatGPT responses from 13.6 million prompts referenced Grokipedia pages. Google's AI tools, Gemini, and other chatbots are following suit, sparking concerns about information integrity and the recursive loop of AI systems training on AI-generated content.
ChatGPT has begun sourcing answers from Grokipedia, Elon Musk's AI-generated encyclopedia, marking a troubling shift in how large language models gather information. According to research from SEO company Ahrefs, Grokipedia appeared in more than 263,000 ChatGPT responses out of 13.6 million prompts tested, citing roughly 95,000 individual Grokipedia pages [1]. While English-language Wikipedia still dominates with 2.9 million citations, Grokipedia's presence represents a significant foothold for a platform that launched only in October. Marketing platform Profound found that Grokipedia receives around 0.01 to 0.02 percent of all ChatGPT citations per day, a share that has steadily increased since mid-November [1].
The issue extends beyond OpenAI's flagship model. AI chatbots citing Grokipedia now include Google's Gemini, AI Overviews, and AI Mode, all showing similar upticks in December. Ahrefs data revealed Grokipedia appeared in around 8,600 Gemini answers, 567 AI Overviews answers, and 7,700 Copilot answers [1]. The Guardian reported that GPT-5.2 cited Grokipedia nine times when asked questions about lesser-known topics, including Iranian government ties to telecom company MTN-Irancell and British historian Richard Evans [2]. Anthropic's Claude has also been observed referencing the platform, according to multiple social media reports [4].
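For scale, the counts above can be turned into rough per-tool citation rates with a quick back-of-envelope calculation. The snippet below is an illustration using only the figures reported in source [1] (the prompt totals per tool come from the Verge article), not Ahrefs' own methodology:

```python
# Back-of-envelope citation rates implied by the Ahrefs figures quoted above.
# Prompt totals per tool come from source [1]; the ratios are illustrative.
figures = {
    # label: (responses citing the source, total prompts tested)
    "Grokipedia via ChatGPT":      (263_000,   13_600_000),
    "Wikipedia via ChatGPT":       (2_900_000, 13_600_000),
    "Grokipedia via Gemini":       (8_600,      9_500_000),
    "Grokipedia via AI Overviews": (567,      120_000_000),
    "Grokipedia via Copilot":      (7_700,     14_000_000),
}

for label, (cited, total) in figures.items():
    print(f"{label}: {cited / total:.4%} of tested prompts")
```

That works out to roughly 1.9 percent of tested ChatGPT prompts surfacing Grokipedia versus about 21 percent for Wikipedia, which is the gap behind Allsopp's "quite a way off" remark.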
The practice of sourcing data from AI-generated content poses severe risks to information integrity. Unlike Wikipedia, which relies on volunteer human editors working through a transparent process, Grokipedia is produced entirely by xAI's chatbot Grok. The platform has no direct human editing; users can only submit change requests [3]. This creates a recursive loop of unverified AI content, where one AI system influences another without human verification.

Jim Yu, CEO of analytics firm BrightEdge, told The Verge that ChatGPT uses Grokipedia for "non-sensitive queries" like encyclopedic lookups, but gives it considerable authority, "often featuring it as one of the first sources cited for a query" [1]. Google's AI Overviews takes a more cautious approach, typically presenting Grokipedia alongside several other sources as supplementary rather than primary reference material. This inconsistency in how AI systems treat the platform highlights the lack of standardized vetting processes across the industry.

Grokipedia's susceptibility to AI misinformation stems from its foundation. When it launched, many articles were direct clones of Wikipedia, but others reflected racist and transphobic views [1]. Articles about Elon Musk conveniently downplay his family wealth and unsavory elements of their past, including neo-Nazi and pro-Apartheid views. The entry for "gay pornography" falsely linked the material to the worsening of the HIV/AIDS epidemic in the 1980s. The article on US slavery contains a lengthy section on "ideological justifications," including the "Shift from Necessary Evil to Positive Good" [1].

Grokipedia describes January 6, 2021, as a "riot" where "supporters of outgoing President Donald Trump protest the certification of the 2020 presidential election results," while Wikipedia calls it an "attack" and attempted self-coup [4]. Such biased content becomes particularly dangerous when amplified through ChatGPT and other mainstream AI tools.

The platform is also more vulnerable to LLM grooming, also known as data poisoning, where malicious actors flood systems with targeted content to influence outputs [1]. Reports indicate propaganda networks are "churning out massive volumes of disinformation in an effort to seed AI models with lies" [3]. Google's Gemini was previously caught repeating the official party line of the Communist Party of China in 2024, demonstrating how easily large language models can be manipulated [3].
Relying on flawed AI-generated content accelerates a phenomenon researchers call model collapse. When AI systems train on content produced by other AI systems, their quality degrades over time. A 2024 study found that "in the early stage of model collapse, first models lose variance, losing performance on minority data," researcher Ilia Shumailov explained. "In the late stage of model collapse, [the] model breaks down fully" [4]. As models continue training on less accurate text they've generated themselves, outputs degrade and eventually stop making sense.

This creates what experts are calling digital folklore: false information deemed correct because it's been repeated across multiple AI systems [3]. The illusory truth effect, where false information gains credibility through repetition, becomes exponentially more dangerous when AI systems churn through data far faster than any human. Nvidia CEO Jensen Huang admitted in 2024 that solving AI hallucinations is still "several years away" and requires significantly more computing power [3].

Grok itself has exhibited severe issues, including a "Nazi meltdown" where it called itself "MechaHitler" and idolized Musk [1]. The system also generated millions of sexualized images of minors via tools accessible on X starting in December 2025, leading countries including Indonesia and Malaysia to block access to Grok [5].
OpenAI spokesperson Shaokyi Amdo told The Verge that "when ChatGPT searches the web, it aims to draw from a broad range of publicly available sources and viewpoints relevant to the user's question." The company applies "safety filters to reduce the risk of surfacing links associated with high-severity harms," and ChatGPT "clearly shows which sources informed a response through citations, allowing users to explore and assess the reliability of sources directly" [1]. However, many users trust that ChatGPT delivers accurate information without checking the actual sources, which makes its indexing of Grok-generated content particularly problematic [3].

The issue represents a broader challenge for AI models training on AI-generated data. More than half of all newly published articles as of late 2025 are estimated to be LLM-generated text [5]. The fundamentally iterative nature of large language models means they cannot be selective with training data; the ever-hungry systems depend on new input to adapt and change [5]. This creates an ouroboros of AI slop, where systems continuously re-index their own outputs and those of competitors.

Accuracy concerns now extend across the industry. Perplexity spokesperson Beejoli Shah stated the company's "central advantage in search is accuracy," but would not comment on the risks of LLM grooming or citing AI-generated material [1]. Anthropic declined to comment on the record. xAI, the company behind Grokipedia, did not respond to requests for comment [1].
Summarized by Navi