Grokipedia: Stupid Or Evil?
Both.

The world’s richest man, who has positively interacted with Neo-Nazis online, performed several Nazi salutes on stage and was crucial in getting the US’s most orange and authoritarian president into office, wants to dethrone the world’s largest and most popular encyclopedia with his own. But Musk hasn’t got an army of writers to help him knock together a manuscript — after all, what if they unionised against his heinous demands? No, no, too risky. Instead, Musk commanded his Hitler-loving AI to generate his own version, making sure to include utterly nonsensical citations, and then asked that same AI to fact-check itself. Does that not sound horrifically dystopian? But the reality of Grokipedia is far worse than you think.
Let’s start with the basics. What is Grokipedia?
Supposedly, Musk was annoyed by Wikipedia’s “left-wing bias” and so launched Grokipedia on October 27th (and yes, I am late to this story). Unlike Wikipedia, which operates through a democratic process of volunteers writing, fact-checking, and editing articles, Grokipedia offloads all of these tasks to xAI’s Grok model. Visitors to Grokipedia can still flag errors for the AI to correct, but they have no say in how the AI interprets or applies those corrections. In fact, there is evidence that the model weighs these user corrections more highly than peer-reviewed evidence, which is the opposite of how human Wikipedia editors work, who require accurate citations before accepting a change (more on that later).
So, if Wikipedia is an encyclopedia that is democratically and carefully assembled and edited, Grokipedia is an encyclopedia spat out by an AI overlord that prefers listening to “Karens” in the comments over the experts. What an incredible improvement, Mr. Musk — your genius knows no bounds…
There are so many problems with this approach that it is hard to get a clear picture of just how moronic it actually is.
Musk wants Grokipedia to reveal the “truth” and heavily insinuates it will be unbiased. However, safeguarding and optimisation mean that all AI models have pre-programmed biases. Early models without these features, like Microsoft’s ill-fated Tay Twitter bot, quickly devolved into spouting the most extreme, horrific, and biased rhetoric. So programmers now build guardrails into modern bots to make them less extreme and more usable, and to optimise their outputs in particular directions. This doesn’t make them unbiased; it means their bias has been deliberately steered in a specific direction. We know Musk influences these guardrails within Grok, so we know it will align with his biases.
Moreover, modern AIs are programmed to output whatever pleases the user, which is why AI psychosis is such a huge problem. Combine this with the fact that we know Grokipedia places heavy emphasis on user comments over expert opinions, and you have a recipe for an encyclopedia that becomes increasingly biased over time, particularly as Grokipedia has been positioned and pushed as an extreme-right alternative to Wikipedia.
It is true that Wikipedia receives similarly biased feedback. However, this is where the pressure on human editors comes in. They aren’t there to please the commenter (in fact, many do the opposite); instead, they face pressure to greenlight only claims backed by a credible citation, or to highlight when one is lacking. This way, biases are put in the context of their sources, and everything remains mostly grounded in reality, not in user feelings.
But Grokipedia has citations, so doesn’t it have the same mechanics?
No, because AI can’t give you citations. Let me explain.
When you Google something and Google’s annoying AI results pop up, they have citations you can click on. But here’s the thing: because of how the neural networks that power AI work, it is impossible for a model to actually tell you how it knows something. Those Google AI search links are retroactively generated after the AI has written its summary, by the AI finding pages online that statistically appear to agree with what it has written. Firstly, that is literally the definition of cherry-picking data. But secondly, just because something statistically appears to agree with the summary doesn’t mean it actually does. Indeed, I can’t use Google’s AI search for researching these articles, as I find that these citations are either false, with the AI misinterpreting what was written, or support the total opposite of the position stated in the summary.
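To make the failure mode concrete, here is a toy sketch of that post-hoc citation process. This is not any vendor’s actual pipeline; it is a deliberately simplified illustration using bag-of-words similarity, and all URLs and texts in it are made up. The point it demonstrates is the one above: a page can score as the best “statistical match” for a summary while actually contradicting it.

```python
# Toy illustration (NOT a real search pipeline): the summary is written
# first, then a "citation" is chosen afterwards by picking whichever
# page statistically resembles the summary the most.
from collections import Counter
import math

def cosine_sim(a: str, b: str) -> float:
    """Bag-of-words cosine similarity between two texts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

def cite_after_the_fact(summary: str, pages: dict[str, str]) -> str:
    """Return the URL of the page most similar to an already-written summary.

    Failure mode: a page can score highly because it *discusses* the same
    topic while actually *contradicting* the summary.
    """
    return max(pages, key=lambda url: cosine_sim(summary, pages[url]))

summary = "coffee causes cancer in most adults"
pages = {
    "https://example.org/a": "study finds coffee does not cause cancer in adults",
    "https://example.org/b": "gardening tips for spring",
}
# Picks page A as the "citation", even though it says the opposite.
print(cite_after_the_fact(summary, pages))
```

Word overlap (“coffee”, “cancer”, “adults”) makes the contradicting page the closest match, which is exactly how a retroactive citation can point at a source that refutes the claim it is supposed to support.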
Grok is a much smaller and less accurate AI than the one powering Google Search. As such, the citations it generates for its articles will be seriously off the mark a truly horrendous amount of the time. Yes, users can comment that a citation is wrong, but catching bad citations is something trained editors usually do, not casual users, so such comments are likely to be drowned out by others. Combine this with Grok’s bias towards non-expert feedback, and it seems likely that these citation errors will either persist or be glossed over (which is another form of cherry-picking).
So, the mechanics are not good: an encyclopedia generated and audited by an AI that is biased towards the user and away from expert sources, with the only form of accountability coming from users who are already far down the right-wing pipeline, many of whom likely suffer from AI psychosis and lack the cognitive faculties to check citations.
But I feel we haven’t quite defined just how wrong an AI-generated and AI-audited encyclopedia can be.
Let’s take ChatGPT-4o, which routinely beats Grok in benchmark testing.
Research has found that it is only 78% accurate when answering questions. That is woeful. Other studies have found that it is equally inept at fact-checking, with one study discovering that ChatGPT-4o labelled true headlines as false 20% of the time and false headlines as true 10% of the time. Worryingly, it also labelled true headlines as uncertain 66% of the time, suggesting it is incapable of giving definitive answers and, if pushed to provide a binary true/false response, would get it wrong much more often.
Let’s be wildly generous and say that Grok reaches a similar level of generation and fact-checking performance for Grokipedia. Out of 100 “facts” it generates, 78 will be accurate and 22 will be inaccurate. During the fact-checking phase, at least 16 accurate facts will be incorrectly labelled as inaccurate, and two false facts will be labelled as accurate. In other words, a huge amount of accurate, credible information will be removed, and a small, but not insignificant, amount of falsehoods will be introduced.
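The back-of-envelope arithmetic above can be checked in a few lines. The rates (78% generation accuracy, 20% true-marked-false, 10% false-marked-true) are the article’s assumed figures, not measured Grok numbers:

```python
# Reproducing the article's back-of-envelope numbers:
# 78% generation accuracy, 20% of true facts rejected by the checker,
# 10% of false facts approved by it.
total = 100
accurate = round(total * 0.78)     # 78 facts generated correctly
inaccurate = total - accurate      # 22 facts generated wrong

wrongly_flagged = accurate * 0.20  # true facts the checker rejects
wrongly_passed = inaccurate * 0.10 # false facts the checker approves

print(f"Accurate facts rejected: {wrongly_flagged:.1f}")  # 15.6 -> "at least 16"
print(f"False facts approved:    {wrongly_passed:.1f}")   # 2.2  -> about 2
```

So under these generous assumptions, roughly sixteen correct facts per hundred get purged while a couple of falsehoods get stamped as verified, exactly the lopsided trade described above.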
And don’t forget, misinformation is more potent than information, particularly in a scenario where user feedback, which is significantly more prone to cognitive bias, poor critical thinking, and cherry-picking than an editor’s review, is the only metric for accountability.
This removal of accurate information will also have significant cascading effects, as it can disrupt an article’s logical flow, leading to more errors down the line. For example, if a foundational statistic or fact is labelled inaccurate, that can completely change the article’s overall conclusion. This is made worse by the fact that AI often hallucinates fake data (and/or misinterprets real data) to align with its inputs. So Grok will fudge an article with misinformation to fill the hole left by accurate information being falsely labelled as inaccurate, and vice versa. Considering Grok’s track record of strong ultra-right-wing bias, we can only imagine what heinous hallucinations it will generate in these scenarios.
So, while I can’t quantify exactly how often Grokipedia will be wildly inaccurate, I can say with the utmost confidence that it will be far too often for the service to be even remotely trustworthy, simply because of how AI works.
I highly suspect this is why the vast majority of Grokipedia was directly copied from Wikipedia, citations and all, with only a handful of hot-topic articles being modified. For one, Wikipedia is over five billion words long. Generating and fact-checking something of a similar length from scratch would cost Musk tens of millions of dollars, according to my estimates. Copying is just cheaper, assuming you don’t get sued. The majority of Wikipedia’s articles are mature and surprisingly accurate, with good citations. So, copying them makes it look like Grok can accurately assemble an encyclopedia, especially as most users won’t directly compare it to Wikipedia. Effectively, copying Wikipedia gains Grokipedia instant, unfounded trustworthiness by plagiarising the very competitor it is meant to be more accurate than, and it hides just how inaccurate Grok is, at least from the layman.
But I don’t believe Musk started Grokipedia to AI-wash Grok and make it look better than it is. After all, directly copying Wikipedia and risking serious lawsuits, rather than getting Grok to write the encyclopedia itself, or even getting Grok to simply rewrite these articles, screams that Grok is totally incapable.
No, this is more akin to Bezos buying The Washington Post. It is a billionaire looking to control the media landscape.
Except this is potentially far more sinister. The Washington Post is a media outlet — people are already aware of its biases. But Musk is trying to usurp the world’s most used encyclopedia, which is a place people go for unbiased facts. By using an AI to do this, he can obfuscate his bias, hide sources, and cherry-pick data. It literally takes his feelings, his bias, and his unreality and washes the sins away, falsely presenting them to the public as verified, unbiased facts. This is an attempt to remove what little critical thinking is left in society and force people to line up in rank and file under Musk’s control.
Speaking of critical thinking, that is what truth actually rests on: the democratic process of a group engaging in critical thinking and objective debate (rather than the “performative debate with no review” that is all the rage these days) to arrive at objective and functional truths. That is what Wikipedia set out to achieve: truth through democracy.
But, to a fascist goon, democracy is “left-wing bias”. And it shows, because Grokipedia is completely authoritarian. All the functional democracy has been removed and replaced with the aesthetics of democracy, while in reality Musk has authoritarian control over the very mechanism that generates and verifies these proposed “truths”. After all, if Grokipedia starts writing things he doesn’t like, he can simply change the model, and it will tweak the articles to his perspective, with no one the wiser.
Unfortunately, this goes even deeper. Musk is a cult leader. His wealth is based on a meme stock, and he uses the same language and psychological methods as Manson and similar cult leaders to control his followers. Cults typically have a few thousand followers who give up all their worldly possessions. Musk’s meme stock has millions of investors who have poured their life savings into his techno-prophetic nonsense. It is the same setup under a different name.
If these pseudo-followers actually analysed how Tesla is performing, the controversy around Musk, or how fundamentally broken and false his political worldview is, they could break ranks, sell up, and devastate Musk. And, like all cult leaders, Musk knows this. What’s more, as Musk becomes more powerful, more greedy, and more extreme, he is pushing this cult further away from reality and into substantially more dangerous terrain. As such, now more than ever, Musk has to shield these pseudo-followers from reality, brainwash them, and make it so that only he controls their perception of reality.
What better way than giving them a comprehensive compendium of the truth, created by one of his techno-prophetic creations and actually entirely controlled by him? It is literally the perfect tool to create the fevered group psychosis that drives cults.
Grokipedia is moronically stupid, but that isn’t a bug; it is a feature, one that enables Musk’s devastatingly evil aims.
I mean this without a single drop of hyperbole or irony: Musk could have tattooed “I am an authoritarian fascist cult leader, and I love it” on his pasty forehead, and he wouldn’t look as much of an authoritarian fascist cult leader as Grokipedia paints him out to be. This is possibly the most mask-off moment of Musk’s career, and I am including those pathetic salutes. When can we, as a society, just move on from this twat without having to constantly defend ourselves from him injecting his fuckery into everything?
Thanks for reading! Don’t forget to check out my YouTube channel for more from me, or Subscribe. Oh, and don’t forget to hit the share button below to get the word out!
Sources: Reuters Institute, PNAS, Wired, The Guardian

