4 Sources
[1]
The Right-Wing Attack on Wikipedia
The free internet encyclopedia is widely used to train AI. That's why conservatives are trying to dethrone it.

Late last month, Elon Musk launched Grokipedia, an AI-generated encyclopedia with 855,279 articles, no human editors, and no way for users to request improvements beyond a suggestion box addressed to its eponymous chatbot author. The tech entrepreneur is eager, he has said, to "purge out the propaganda" that he argues afflicts Wikipedia, the venerable user-generated reference source. But some Grokipedia articles are near replicas of Wikipedia entries. Other articles in the new source seem conspicuously sanitized: The article about the U.S. government's now-defunct foreign-aid agency fails to mention Musk, who boasted about his role in "feeding USAID into the wood chipper." The articles on Grokipedia are produced by Grok, Musk's AI model, and they are roughly what you'd expect from replacing a dedicated community of human volunteer creators and editors with a chatbot. Grokipedia mistakes large-scale information retrieval for knowledge, and automation for neutrality.

Yet Musk's AI encyclopedia is also part of something broader: an escalating campaign to discredit Wikipedia and reshape what counts as a reliable source of basic information in the age of AI. Whatever the potential flaws of a crowdsourced reference site, many users find Wikipedia more convenient, comprehensive, and reliable than any alternative. In a typical month, more than a billion people consult it. Over the past decade, Wikipedia has also become essential information infrastructure. It shapes what AI systems learn and what chatbots say. It's used to provide context for YouTube videos, and influences what AI-powered answer engines present as truth. Control what Wikipedia considers reliable, and you control what machines -- and then people -- learn about the world.
This is why Republicans in Congress have recently begun sending letters that accuse the nonprofit Wikimedia Foundation, which operates the encyclopedia, of ideological bias and demand the names of certain volunteer arbitrators who help address factual disagreements. It's also why some of the most powerful people in the world are demanding "reforms" to Wikipedia -- or launching their own copycats. In sports, players who want more sympathetic treatment from game officials try to make them second-guess themselves, in some cases by loudly accusing them of making bad, or even biased, calls. This strategy is called "working the referees." Politicians, particularly conservatives, have been using it against social-media companies for years. In 2016, Gizmodo published allegations by anonymous former Facebook contractors that editors of the social-media platform's Trending Topics feature had been secretly "blacklisting" popular right-wing topics and domains while "injecting" mainstream news stories with less organic appeal. Material associated with Glenn Beck and Steven Crowder had allegedly been suppressed; mainstream coverage of the disappearance of a Malaysian airliner and the Charlie Hebdo attacks had been added. Bias allegations exploded across right-wing media and on Capitol Hill. Facebook investigated and released an explainer of how Trending Topics worked: Editors could remove clickbait, hoaxes, or stories with insufficient sources -- fake news, as it was once quaintly called, which credible studies would show disproportionately catered to conservative audiences. The company maintained that its editors had based their decisions on story validity and source reliability, not on their own political preferences.
Nonetheless, to avoid even the appearance of bias, and to placate angry critics, Facebook fired the humans who worked on Trending Topics, converting it to a fully algorithmic list -- which quickly began amplifying conspiracy theories and untrustworthy outlets. The company let a useful feature be ref-worked into irrelevance. From that point on, fear of appearing biased against conservatives shaped not only Facebook's decisions about how to handle low-quality information on its platform, but other companies' decisions as well. Once ref-working proved effective, Republican politicians began to accuse many social-media companies of anti-conservative bias no matter how little evidence supported the claim. When co-founders Jimmy Wales and Larry Sanger started Wikipedia in January 2001, the idea that it would become a front line in the war for reality a quarter century later would have been laughable. In a new book, The Seven Rules of Trust, Wales recounts a joke that the comedian Stephen Colbert made about Wikipedia in 2006: "Any user can change any entry and if enough other users agree with them, it becomes true," Colbert said. This is better than reality, he went on. It's "Wikiality." Colbert was being facetious, but Wikipedia does operate on the radical premise that people can collectively determine what's true through reliable sourcing and methodical deliberation. Contributing is surprisingly easy: Go to a page, click into the editing window, and write. Registered accounts are optional; noncontroversial topics can be edited anonymously. Transparency is part of the ethos: When neutrality is disputed in a Wikipedia article, it is typically marked accordingly, right up at the top of the page. When an entry is thin on sources, it lets you know. For controversial topics -- abortion, the October 7 attacks -- edits are limited to established users to minimize trolls defacing pages.
The Wikipedia community's formal policies and guidelines emphasize collaboration and neutrality; an ideal entry should lay out multiple sides of a controversial issue. As Wales notes, however, volunteer writers and editors inevitably must make judgments about the reliability of information: "Clearly we don't treat crackpot, random websites as being the equal of the New England Journal of Medicine, and that's fine." In fact, Wikipedia maintains detailed guidelines on reliable sources; there is a long list of sources with lengthy discussions of their suitability, and a top-level note that "context matters tremendously" when deciding which to use and when (The Federalist, for example, is deemed suitable for attributed opinions, but is generally unreliable for facts). Editing disputes that can't be resolved through public discussions on a topic's Talk page move through a series of community-deliberation mechanisms; allegations of more serious manipulation may go to the Arbitration Committee, which follows an elaborate public process for conducting investigations and making decisions. Another of Wikipedia's guiding principles is "assume good faith" -- which its prominent critics are not doing. Musk and others have taken to calling it "Wokipedia." Sanger, who has become an outspoken critic, argues that Wikipedia has adopted what he calls a "GASP" worldview -- globalist, academic, secular, progressive. To fix this, he proposed reforms such as promoting accountability by de-anonymizing arbitrators and others with power over the Wikipedia community and abolishing the consensus model to allow parallel articles with declared viewpoints -- separate "pro-life" and "pro-choice" entries for an abortion-related topic, perhaps. 
Arguing that partisan bias is what distinguishes the community's acceptance of CNN and The Washington Post from its avoidance of right-wing outlets such as The Federalist and The Epoch Times, Sanger has also called on Wikipedia to eliminate what he calls "source blacklists," and other conservatives have eagerly taken up that call. Some of Sanger's ideas reflect legitimate tensions in Wikipedia governance. Wikipedia's source-assessment lists do treat some advocacy groups, such as GLAAD and the Anti-Defamation League, as reliable information sources on some issues. Any given article can be edited in ways that unfairly lionize, smear, or otherwise distort its subject. An investigation by the tech outlet Pirate Wires alleged that a ring of pro-Hamas editors had succeeded in reshaping articles to favor their point of view. However, the Arbitration Committee quickly responded by banning six of the offenders -- seemingly an act of effective community correction. And nothing prevents right-leaning writers from contributing. Musk, Sanger, and others have nonetheless advanced the argument that the site is systemically biased against conservatives, and that view has taken hold among Republicans in Congress. In August, Representatives James Comer and Nancy Mace, both Republicans, sent a letter demanding answers about foreign influence on Wikipedia, asking whether hostile actors or "individuals at academic institutions subsidized by U.S. taxpayer dollars" were inserting bias into entries on politically charged topics. In a recent letter, Senator Ted Cruz demanded answers about the site's "ideological bias," source list, and policies for how editors are removed or banned. Like earlier ref-working campaigns against social-media platforms, the letter seems intended to push a private organization toward policies favoring the right. 
Cruz's letter helps explain the urgency of the campaign against Wikipedia. The site's influence, he wrote, "extends even further in the age of artificial intelligence, as every major large language model has been trained on the platform. Wikipedia shapes what Americans read today and what technology will produce tomorrow." Musk's Grokipedia may not be used to train large language models anytime soon, but it is an attempt to elbow Wikipedia out of its position of prominence. Theoretically, it can generate new articles far more quickly and thoroughly than Wikipedia's volunteer writers and editors can, and it is not subject to Wikipedia's elaborate process for adjudicating factual disagreements. But this is also one of Grokipedia's greatest weaknesses. The remarkably thorough article about me contains nonsense that conspiracy theorists entered into congressional proceedings -- including claims that my former research team at Stanford Internet Observatory censored 22 million tweets during the 2020 presidential campaign. The article also hallucinates that we were involved in Twitter's moderation of stories about Hunter Biden's laptop. We weren't, and the cited source did not even make that claim. And there's no reliable way to correct such problems. I reported these issues via the Suggest Edit tool included in Grokipedia's user interface -- so far, to no avail. On Wikipedia, I could appeal to an editor by dropping a note on a Talk page. But Musk's version misses what gives Wikipedia authority: human consensus. (When I requested comment via the press account at xAI, the Musk-founded artificial-intelligence company that developed Grok, I received an automated response that said "Legacy Media Lies.") Musk's X platform recognizes that human consensus can be helpful in Community Notes, its fact-checking feature.
Like Wikipedia, Community Notes recognizes that legitimacy and trust ultimately come from people getting together to decide that an explanation is accurate, needed, and fair. Grokipedia abandons this entirely. It's pure algorithmic output with no community, no transparency, no clear process for dispute resolution. The irony is striking: Even as Musk and his friends attack Wikipedia for supposed bias, he is building something far more opaque and unaccountable.
[2]
Elon Musk's Grokipedia encyclopedia project sparks trust and accuracy concerns
What Happened: So, Elon Musk's new AI-powered encyclopedia, Grokipedia, is already in some serious hot water. A new study from researchers at Cornell Tech just came out, and it's pretty damning. They're saying the platform is packed with references to super-unreliable and biased sources. Grokipedia was launched last month by Musk's AI company, xAI. It was supposed to be a new rival to Wikipedia - which Musk and others often accuse of having a liberal bias. But here's the kicker: the Cornell study found that Grokipedia is not only ripping a lot of its text straight from Wikipedia, but it's also citing sources that Wikipedia itself has banned for being total junk. The most shocking example they found? Grokipedia had an entry for the "Clinton body count" conspiracy theory. And the source it was citing to back up this long-debunked claim? InfoWars. Yep, that InfoWars. Why Is This Important: This whole thing shines a massive, bright light on a huge problem with these new AI information tools: they don't seem to have any standards for their sources. According to the study, the articles on Grokipedia that weren't just copied from Wikipedia were over 3 times more likely to cite an unreliable source and 13 times more likely to use a source that's on a blacklist. This is a really big deal. An AI platform like this can scale up and spread that kind of conspiracy-laden junk to millions of people in a split second, all without a single human editor ever looking at it. Why Should I Care: Look, we're all starting to rely on AI tools to get us quick facts. But this is the danger. When you ask an AI for information, you could be getting a heavy dose of misleading, politically charged spin, and you'd have no idea. It's blurring the line between a fact and just... something the algorithm made up. Plus, it's worth noting that Musk now controls multiple, massive information platforms - X (Twitter) and this new AI company. That's a lot of control over what people see and read.
What's Next: Musk, for his part, has already announced he's rebranding Grokipedia to "Encyclopedia Galactica" - calling it a "sci-fi version of the Library of Alexandria." (Yes, really). But as the experts are pointing out, you can call it whatever you want. Without real-world accountability and a commitment to using actual, trustworthy sources, it's just going to be a machine for amplifying misinformation, not correcting it. Meanwhile, the Wikimedia Foundation (the folks behind Wikipedia) put out a statement that basically said, "See? This is why we stick with our open, community-run model. It's the only way to build trust."
[3]
Elon Musk Has His Own Encyclopedia Now. Well, We Read Some of the Entries ...
A hallucinating encyclopedia may not be exactly what you're looking for when you're doing research, but that's what writer Stephen Harrison found when he dug into Elon Musk's new A.I.-powered Grokipedia. "It's definitely not, like, an alternative encyclopedia," Harrison said. "It's got a lot of flaws." Harrison's area of expertise is Wikipedia. So it's possible he's biased here, but his criticisms are not small ones. I asked Harrison to pull up Elon Musk's Grokipedia page. It is very, very long. But if you dig in, you'll find some lines that are really telling. Grok portrays the Tesla CEO through stubbornly rose-colored lenses. While Grokipedia notes Musk's social media leadership has been scrutinized, it also says criticism of him has come from "legacy media outlets that exhibit systemic left-leaning tilts in coverage." On a recent episode of What Next, host Mary Harris spoke to Harrison about how the emergence of Grokipedia is a symptom of something else entirely: a right-wing project to rethink truth altogether. This transcript has been edited and condensed for clarity. Mary Harris: Is Grokipedia just a vanity project for Elon Musk? Stephen Harrison: It's hard to figure out what Grokipedia actually is. In some ways, there has been a lot of discussion about bias on Wikipedia. I wonder if Elon Musk sensed an opening and thought, "Well, I could take what we have from Grok." Wikipedia's information is available under a Creative Commons license -- it's freely licensed. He maybe saw the idea of an A.I.-generated encyclopedia and wanted to be first to market. Why does the internet need a Grokipedia? There's already a Wikipedia. I'm not sure that it does.
The broader narrative here is that the far right is attacking all kinds of our traditional institutions, whether that's academia, journalists, and now Wikipedia, and the thinking is that by deprecating the trust that people have in Wikipedia, they can point towards their own user-generated content, or Elon Musk can kind of put out his own narrative. One of the funny things about Elon Musk creating his own A.I. version of Wikipedia is that Elon Musk was a fan of Wikipedia not that long ago, right? Back in 2017, Elon Musk tweeted, "I love Wikipedia, keeps getting better and better." You couldn't be more direct, right? In 2022, five years later, is when I really started to see a shift. He said that it's "Wokepedia." He offered to buy it for $44 billion. That led to all of the donor ads saying, "Hey, Wikipedia is not for sale. We're an independent nonprofit." A big part of the shift was the representation of himself on Wikipedia. He really did not like the word investor. He tweeted about that several times. He asked Wikipedia editors to remove the word investor. He sees himself as a thought leader and a visionary, and he really wanted that word removed. Basically, he wanted to take the Twitter approach. He didn't like what was going on, so he wanted to buy it. But that wasn't possible. It's funny. He hates the word investor, but that's his move. He buys it. What do we know about the popularity of Grokipedia, this new site he's created? Is it legitimately a competitor for Wikipedia at this point? I don't think so, and there are a couple of reasons. Just even on X, I see a lot of blowback to Grokipedia. And I think of X as a place where a lot of Elon's most vocal supporters tend to post. But they're saying, "Hey, I've identified a lot of errors. This page is very lengthy, but it's got meaningless A.I. slop in it." 
People are saying that it reads like a LinkedIn page, which would make sense because it's pulling from user-generated content as opposed to the more traditional news media sources. And we know now that when a large language model is trained on A.I.-generated content, it has what's called "model collapse," and it ultimately really suffers. So there's a real concern here for Elon Musk that if his Grok is trained on Grokipedia, then that could lead to the collapse of Grok. Is it sort of like a mimeograph, where the more you copy it, the fuzzier the image gets? Yes, it gets more and more fuzzy, and it gets more and more of these hallucinations because it's just predicting what it thinks you want and not what's actually based on an original source text. I guess I hadn't really realized how political encyclopedias could be. I think an encyclopedia reflects a worldview, or aims to. It's making a claim on neutrality, right? Like, this is a neutral point of view. It's trying to present information accurately. And people who do not like that accurate, curated version that appears on Wikipedia, they're going to be angry about that. So, anybody who's making a claim that what they're working on is accurate is automatically in the firing line. Ultimately, any government that is trying to control information, they want the universities, they want national news, they want news media, and then they want Wikipedia. One thing I've noticed during Trump 2.0 is that some parts of the internet seem more robustly democratic than others. Like, if I look at Reddit, if I look at Wikipedia, to me what's happening there seems muscular and driven by human beings. But then I look at Twitter and Facebook, and they've swung in these extreme ways, politically. What do you think of these more robust sites, if you agree that's what they are -- what do they share? A big piece of it is that they have a higher purpose. 
Wikipedia editors, when they're acting at their best, they really do believe that there's something more than partisan politics -- and that's the accurate reflection of reliable sources. That higher purpose can bring together people of different politics. One of the things I'm trying to make clear in my reporting is that Wikipedia is not a political monolith, one way or the other. I certainly know Wikipedians who are conservative. There's a lot of discussion about Gen Z men. Well, they're the ones sometimes that are sending me Signal messages and are very active on Wikipedia. So, it's just a fake narrative that's being pushed by Elon Musk and others that Wikipedia is a bunch of liberals. It's just not true.
[4]
Elon Musk's Grokipedia Is a Warning
In 2021, somewhere near the peak of his pre-political celebrity, Elon Musk tweeted to celebrate a milestone for the web: "Happy birthday Wikipedia! So glad you exist." His public relationship with the platform had been, up until that point, fairly normal, at least for a controversial public figure. He was an avid consumer, frequently tweeting links on a range of topics. His occasional criticisms of the platform were about how it represented him. "History is written by the victors," he wrote in 2020, "except on Wikipedia haha." A year earlier, he'd complained about his own entry. "Just looked at my wiki for 1st time in years. It's insane!" he wrote, bemusedly calling his page a "war zone" with "a zillion edits." In response to a supportive comment, he joked: "Some day, I should probably write what *my* fictionalized version of reality is 🤣🤣." Six years, nearly $500 billion, and one extremely public political transformation later, well, "🤣🤣" indeed. The newly launched Grokipedia, an AI-generated encyclopedia with more than 800,000 entries, will be, according to Musk, a "massive improvement over Wikipedia," which he has referred to more recently as "Dickipedia" and "Wokipedia," characterized as "broken," and accused of being an "extension of legacy media propaganda." Since 2019, Musk's narrow problem with Wikipedia has grown into an expansive grievance, transforming from a personal affront to a righteous crusade that's "necessary" for humanity's goal of "understanding the Universe." Maybe so. Or maybe it simply didn't make sense to one of the wealthiest and most powerful people in the world that others -- be they volunteer Wikipedians, paid members of the media, or users on a platform he doesn't own -- should be able to talk about him, describe things he cares about, and be taken seriously. 
Musk's particular desire to remake the information environment around him is as unique to the man and his position as are his available methods (buying a social-media company; starting an AI company; creating a chatbot in his image and commanding it to rewrite the entire encyclopedia). It's also a preview of an experience that AI tools will soon be able to offer to almost anyone: the whole world reinterpreted to their preferences, or the preferences of a model, in real time. But first, what did Musk actually create here? Superficially, Grokipedia is true to its name: Its articles are written and formatted like Wikipedia's and in some cases even contain passages of identical text. They're often much longer, though, and organized less consistently than on Wikipedia. As someone who has spent a lot of time testing AI deep-research tools, I find Grokipedia's longer articles to be instantly recognizable as the outputs of a similar process: an AI model that crawls an index of links, synthesizes their contents, and produces a comprehensive-looking but verbose report. (An early systematic comparison by a researcher at Trinity College, Dublin, suggested that "AI-generated encyclopedic content currently mirrors Wikipedia's informational scope but diverges in editorial norms, favoring narrative expansion over citation-based verification.") They aren't directly editable, at least in the Wikipedia sense, but you can suggest changes or corrections through an interface similar to X's Community Notes. Grokipedia's articles are also clearly influenced by the encoded sensibilities of Grok, the Musk "anti-woke" ChatGPT competitor famous for once referring to itself as "mecha-Hitler." On many subjects, it offers fairly straightforward and uncontroversial summaries of publicly available materials; on more contentious ones, it resembles a machine-assisted, post-MAGA Conservapedia, with explicit pushback against "mainstream" narratives and media coverage.
In its post-launch review of the platform, Wired reported that notable entries frequently "denounced the mainstream media, highlighted conservative viewpoints, and sometimes perpetuated historical inaccuracies." Inc instantly found a bunch of factual errors, while SFGATE concluded, "boy, is it racist." I'd add that its more controversial articles often contain more text than anyone is likely to read, creating less of an impression of ideological certitude or confident revisionism than a sense that, well, Hey, who can really say what happened on January 6 after someone may or may not have won the American presidential election? Grokipedia can be understood as a straightforward attempt to automate the labor and tune the bias that goes into producing a resource like Wikipedia; indeed, there might even be some lessons for the platform here as we enter a world where chatbot users can produce Wikipedia-like articles on demand. But an automated Wikipedia isn't much of a Wikipedia at all: The site Grokipedia is trying to replace is the result of an unprecedented bottom-up phenomenon in which millions of people contributed time, attention, and effort to create a shared resource, synthesizing existing information through a messy, flawed, but ultimately deliberative and productive process. In contrast, Grokipedia is a top-down effort, generated by a model trained on resources like Wikipedia, then deployed to rewrite them with a different sensibility. It's a futuristic example of AI automation, a regressive throwback to pre-web centralization, and a new piece of a claustrophobically referential informational system: A database of articles written by a chatbot so they can later be referenced as authoritative sources by the same chatbot, and maybe help train another one. (Google's AI Overviews come to mind.) For now, it looks less like an alternative to Wikipedia that people will want to use than an attempt to delegitimize it.
As absurd and undignified as Grokipedia's founder-centric origin story may be -- How good could Wikipedia be if its page about me is so rude? -- Elon Musk's attempt to remake his own information environment is instructive and, if not exactly candid, usefully transparent (or at least poorly concealed). You won't hear Musk joking about "his own fictionalized version of reality" in 2025 -- now he prefers to speak in messianic terms about apocalyptic threats, no matter the subject. But Grokipedia, and Musk's AI projects in general, invite us to see LLMs as powerful and intrinsically biased ideological tools, which, whatever you make of Grok's example, they always are. We know an awful lot about what Elon Musk thinks about the world, and we know that he wants his own products to align with his greater project. In Grok and Grokipedia, we get to see clearly what it looks like when particular ideologies are intentionally encoded into AI products that are then deployed widely and to openly ideological ends. We also get to recognize how thoroughly familiar parts of the spectacle are, as chatbots rehash the same pitches to audiences, and invite many of the same obvious criticisms, as newspapers, TV channels, and social-media platforms before them -- when Fox offered its "fair and balanced" alternative to other cable networks, Mark Zuckerberg claimed to be returning to his company's "free speech" roots, or the New York Times reminded us that the "truth" is hard, actually. Now, it's AI companies winking as they tell us to trust them, engaging in flattering marketing, and giving in to paternalistic temptations without much awareness of how their predecessors' decades of similar efforts helped lead the public to a state of profound institutional cynicism. Anyway! Grokipedia was positioned at launch as an alternative product, and Musk generally likes to define xAI in opposition to its larger and less openly politicized competitors. 
That Musk's claims about "truth," factuality, and narrative are so clearly motivated by self-interest, though, actually helps draw attention to the ways his project is largely the same as OpenAI's. To anyone outside Musk's ideological sphere, his bid to create an enclosed, top-down informational environment seems either silly or sinister (see also the right's characterization of the situation when Google's attempts to optimize Gemini's racial biases resulted in a machine that could only imagine non-white historical figures). But in its clumsy implementation and cringeworthy pitch, it still ends up being clearer about what it's up to than claims like this, from an OpenAI announcement in early October: ChatGPT shouldn't have political bias in any direction. People use ChatGPT as a tool to learn and explore ideas. That only works if they trust ChatGPT to be objective... We created a political bias evaluation that mirrors real-world usage and stress-tests our models' ability to remain objective... Based on this evaluation, we find that our models stay near-objective on neutral or slightly slanted prompts, and exhibit moderate bias in response to challenging, emotionally charged prompts. The company was announcing the development of "an automated evaluation setup to continually track and improve objectivity over time," using "approximately 500 prompts spanning 100 topics and varying political slants," across "five nuanced axes of bias." If the goal of Grok is to express a specific bias against prevailing progressive narratives by reflecting right-wing views -- or just to stay in line with the values and priorities of its creator -- well, that's achievable. (It's also something LLMs are well suited for as a technology.) In contrast, the goal OpenAI has set for itself is "objectivity," in practice or at least reputation, which, for a chatbot tasked with talking about everything to everyone, really isn't. 
As novel and versatile as LLM-based chatbots are, their relationship to the outside world is recognizably and deeply editorial, like a newspaper or, more recently, an algorithmically sorted-and-censored social network. (It's helpful to think of OpenAI's "bias evaluation" process, or Grokipedia's top-down reactionary political correctness, as less of a systemic audit than a straightforward edit.) What ChatGPT says about politics -- or anything -- is ultimately what the people who created it say it should say, or allow it to say; more specifically, human beings at OpenAI are deciding what neutral answers to those 500 prompts might look like and instructing their model to follow their lead. OpenAI's incoherent appeal to objective neutrality is an effort to avoid this perception and one that anyone who runs a major media outlet or social-media platform knows won't fool people for long. OpenAI would probably prefer not to be evaluated by these punishing and polarized standards, so, as many other organizations have tried before, it's claiming to exist outside them. On that task, I suspect ChatGPT will fail. Luckily for OpenAI, ChatGPT's future doesn't hinge on creating a universal chatbot that everyone sees as unbiased -- it'll settle for being seen as useful, entertaining, or reasonable and trustworthy to enough people. Research papers and "bias evaluations" aside, the product and its users are veering away from shared experiences and into personalized, bespoke forms of interaction in which chatbots gradually profile their users and provide them with information that's more relevant to their specific experiences or more sensitive to their personal preferences or both. Frequent chatbot users know that popular models can drift into sycophancy, which is a powerful and general sort of bias. 
They also know they can be commanded to inhabit different identities, political or otherwise (you can ask ChatGPT to talk to you like a dead French poststructuralist, or like Mr. Beast; soon, reportedly, you'll be able to ask it to pleasure you sexually). Still, for all their dazzling newness and versatility, AI chatbots are in many ways continuing the project started by late-stage social media, extending the logic of machine-learning recommendations into a familiar human voice. It's not just that output neutrality is difficult for systems like this to attain; they're incompatible with the very concept. In that sense, Grokipedia -- like X and Grok -- is also a warning. Sure, it's part of an excruciatingly public example of one man's gradual isolation from the world inside a conglomerate-scale system of affirming, adulatory, and ideologically safe feeds, chatbots, and synthetic media, a situation that would be funny if not for Musk's desire, and power, to impose his vision on the world. (To calibrate this a bit: imagine predicting the "Wikipedia rewritten to be more conservative by Elon Musk's anti-PC chatbot" scenario in the run-up to, say, his purchase of Twitter. It would have sounded insane, and so would you.) But what Musk can build for himself now is something that consumer AI tools, including his, will soon allow regular people to build for themselves, or that will be constructed for them by default: a world mediated not just by publications or social networks but by omnipurpose AI products that assure us they're "maximally truth-seeking" or "objective" as they simply tell us what we want to hear.
Elon Musk launched Grokipedia, an AI-powered encyclopedia with over 800,000 articles, positioning it as an alternative to Wikipedia. However, researchers have identified significant accuracy issues and bias concerns with the platform's automated content generation.
Elon Musk has launched Grokipedia, an AI-generated encyclopedia featuring 855,279 articles created entirely by his Grok chatbot without human editors [1]. The platform, which Musk positions as a "massive improvement over Wikipedia," aims to "purge out the propaganda" that he claims afflicts the traditional crowdsourced encyclopedia [1]. Users can only suggest improvements through a basic suggestion box addressed to the AI author, marking a significant departure from Wikipedia's collaborative editing model [1].
Source: Slate
A comprehensive study by Cornell Tech researchers has exposed serious reliability issues with Grokipedia's content generation system [2]. The research found that while many articles directly copy text from Wikipedia, original Grokipedia content was three times more likely to cite unreliable sources and thirteen times more likely to reference materials from Wikipedia's blacklist [2]. The most striking example discovered was an entry promoting the debunked "Clinton body count" conspiracy theory, citing InfoWars as a source [2].

Musk's relationship with Wikipedia has deteriorated significantly since 2021, when he celebrated the platform's birthday, to his current characterization of it as "Wokipedia" and "Dickipedia" [4]. His grievances initially centered on how Wikipedia portrayed him personally, particularly his objection to being labeled an "investor" rather than a visionary [3]. Grokipedia articles now frequently "denounce mainstream media, highlight conservative viewpoints, and sometimes perpetuate historical inaccuracies," according to analysis by Wired [4].
Source: The Atlantic
The launch of Grokipedia represents part of a larger conservative campaign to discredit Wikipedia's influence on AI training data and information dissemination [1]. Wikipedia's content shapes what AI systems learn and reaches more than a billion users a month, making control over its perceived reliability a strategic priority [1]. Republican Congress members have begun sending letters accusing the Wikimedia Foundation of ideological bias and demanding information about volunteer arbitrators [1].

Experts have identified numerous technical problems with Grokipedia's approach to content generation [3]. The platform risks "model collapse" when AI systems are trained on AI-generated content, leading to increasingly unreliable outputs over time [3]. Articles are often excessively lengthy and contain what critics describe as "meaningless AI slop" that reads more like LinkedIn profiles than encyclopedic entries [3]. The platform's reliance on automated synthesis without human oversight has resulted in factual errors and the amplification of conspiracy theories [2].