hckrnws
An Unbothered Jimmy Wales Calls Grokipedia a 'Cartoon Imitation' of Wikipedia
by rbanffy
The concept of Grokipedia reminds me of the old (now defunct? won't load) "Conservapedia" project that basically only had detailed pages for topics where observable fact was incompatible with political ideology--so for these topics, the site showed the Alternative Facts that conformed to that ideology. If you looked up something non-political like "Traffic Light" or "Birthday Cake" there would be no article at all. Because being a complete repository of information was not an actual goal of the site.
Another defunct site is Deletionpedia, which compiled articles that had been removed from Wikipedia for not meeting various criteria (usually relating to notability IIRC). The site is dead but the HN discussion lives on:
"Deletionpedia: Rescuing articles from Wikipedia's deletionism": https://news.ycombinator.com/item?id=31297057
Interesting. There was a small window of time where there was in fact a small page about me in Wikipedia (I wrote a published game for the Macintosh in 1990). And then one day my page was gone.
That must have been during the "Big Cull". It makes sense to me though. That I had a page just for having written a game made Wikipedia seem overly "nerd-centric".
I rather lament that Stupidedia is now defunct, way more entertaining
(https://www.stupidedia.org german only satirical wiki)
And apparently, encyclopedia dramatica is also now defunct.
There's still Uncyclopedia, though apparently there are 3 forks of it now?
Grokipedia will eventually replace it.
Right, but the reason that Conservapedia fizzled out is that you can't really build a critical mass of human editors if the only reason your site exists is that you have a very specific view on dinosaurs and homosexuality (even among hardline conservatives, most will not share your views).
What's different with Grokipedia is that you now have an army of robots who can put a Young Earth spin on a million articles overnight.
I do think that as it is, Grokipedia is a threat to Wikipedia because the complaints about accuracy don't matter to most people. And if you're in the not-too-unpopular camp that the cure to the subtle left-wing bias of Wikipedia is robotically injecting more egregious right-wing bias, the project is up your alley.
The best hope for the survival of Wikipedia is that everyone else gets the same idea and we end up with 50 politically-motivated forks at each others' throats, with Wikipedia being the comparatively normal, mainstream choice.
As it is, Grokipedia is not a threat to Wikipedia because relative to Wikipedia, almost nobody is using it.
Additionally, an encyclopedia reader likely cares about accuracy significantly more than average.
I remember when Fox News was considered irrelevant compared to mainstream news outlets. Don’t underestimate the reach of billionaires with an ideological agenda.
> Don’t underestimate the reach of billionaires with an ideological agenda.
Or the audience's need to have their wrong opinions validated.
Fox News has been the #1 rated cable news network for over two decades. They've had more viewers than CNN and MSNBC for most of their existence. Calling them anything other than "mainstream" is just supporting their propaganda. They've always branded themselves as the scrappy outsider because it plays well with their audience, not because it reflects reality.
Yes, and I’m talking about the time before that, when experts doubted whether Fox could survive. (I’m old.)
> Fox News has been the #1 rated cable news network for over two decades.
Yeah, but cable news only displaced local and broadcast TV news as the main news source after 9/11, and already by 2010 had itself been displaced by online media. There was only a very brief moment in history where "the #1 rated cable news network" was really an indicator of being a mainstream news source.
> As it is, Grokipedia is not a threat to Wikipedia because relative to Wikipedia, almost nobody is using it.
For now. With a little collusion, and a lot of money, it can be pushed as the front page of the internet.
What are you going to do if Google and Bing are convinced to rank its bullshit over Wikipedia?
Most people don't change the defaults.
> For now. With a little collusion, and a lot of money, it can be pushed as the front page of the internet.
I know it has come up near the front of at least one of my Kagi searches, because it's now on my blocklist.
Yup, same for DuckDuckGo.
It would arguably be a benefit to Wikipedia to be pulled from Google search results, since Google prominence is the root of a huge fraction of all the misbehavior on the site.
If nobody ever finds the website, there will be no misbehavior. Genius.
Obviously, people would continue to go to "Wikipedia", and the encyclopedia itself wouldn't be hidden from Google, but Wikipedia pages on arbitrary subjects wouldn't be at the top of search rankings simply by dint of being Wikipedia pages.
Security through obscurity!
Nah. Wikipedia is popular because it is the #1 search result for a lot of stuff. Most people going there just want to look up something for a homework assignment, online argument, or whatever. If Grokipedia has an error rate of 5%, compared to 1% for Wikipedia, it's probably still fine.
If Wikipedia traffic shrinks down just to the true "encyclopedia reader" crowd, they will be in trouble, because I suspect that's less than 10% of their current donations. And Grokipedia is already starting to crop up in search results.
Wikipedia has an endowment big enough to sustain the site's basic maintenance essentially forever. If donations disappeared they would have to severely cut spending; however, I don't think it would be an existential threat.
To certain demographics, adherence to facts appears to be a left wing bias.
Conservapedia had to have a person create each article and didn't have the labor or interest. Grok can spew out any number of pages on any subject, and those topics that aren't ideologically important to Musk will just be the usual LLM verbiage that might be right or might not.
Non-political? Birthday cakes are distributed free of charge to the guests, with same sized portions for all, that's pure and simple communism! /s
I actually knew a Jehovah kid who wasn't allowed to celebrate birthdays. Actually pretty sad because as you know such events are ingrained into almost every culture- there were children from all over the world at this school and they all sang happy birthday in 30 languages.
I know a number of Jehovah's Witnesses children - I don't want to call them 'Jehovah kids' since they did not choose to be born into those circumstances - who also don't celebrate birthdays, but I've noticed they have other 'gift-giving' days which seem to fill the hole left by the missing birthdays. Just as some observant Jews get around the rules of the Sabbath by installing special light switches [1] that use random events to 'accidentally' switch the lights on and off, so that the person actuating the mechanism did not directly cause the work, Jehovah's Witnesses seem to find ways around the restrictions their traditions impose on them.
Who decided what is an observable fact?
A reliable source (WP:RS). The encyclopedia is about the citations; it's a travel atlas to the sources about a subject. Any conclusions the encyclopedia draws "itself" are secondary to the sources.
Who decides which sources are reliable?
Have you tried spending some time researching how Wikipedia works yourself instead of waiting for someone to spoon feed you?
No, I was merely being sarcastic, because I know it all boils down to power. Just ask yourself why the different language Wikipedias diverge on some hot topics.
Because they're run by completely separate teams of moderators and Wikimedia (as in the organization) basically never interferes with other versions of Wikipedia?
Because every other language has far worse moderation, and you can pretty much guess how good the moderation is simply by asking yourself how relevant that version of Wikipedia even is in the first place?
I can understand 6 different language versions of Wikipedia, and my experience is the complete opposite of what you're insinuating: the English version beats the other five 99.8% of the time, even when the topic at hand is completely local.
Ahhh, that's right, we can trust nothing.
Read WP:RS. It's a very complicated answer, and one of the most important policy processes in the project.
The philosophy department.
What's your point?
Have you tried Grokipedia yet?
Cuz you’ve mainly addressed the concept. But have you read a bunch of articles? Found inaccuracies? Seen the edit process?
Cuz, regardless of ideology, the edit process couldn’t have been done before because AI like this didn’t exist before.
No, I see no reason to give AI-generated articles a second of my time. Wikipedia's best feature is the human-provided citations; you can very easily validate a claim with a hardlink to a book, article or video archive.
AI does not have the skillset or the tools required to match Wikipedia's quality. It can definitely create its own edit process, but it's a useless one for people like me who don't treat the internet as ground truth.
Follow up question: have you tried smoking crack? Surely you should try smoking crack before you draw any conclusions about it being bad.
"As of February 19, 2026, Musk’s net worth is estimated at $844.9 billion per Forbes' Real-Time Billionaires List, primarily derived from equity stakes rather than cash..."
That "rather than cash" bit is bizarre, since no wealthy person holds primarily cash. I checked the pages of several other ultra-wealthy people and none of them have that comment. I'm sure this has nothing to do with Grokipedia's owner recently making an issue of how little cash he holds.
I think he might be the only one whose fans need to be reminded of that fact.
It really is amazing how billionaires have managed to convince so many people that wealth held in equities does not really count.
Besides the political slant of Grokipedia, it's true that a lot of work that needed to be crowdsourced can be now packaged as work for LLMs. We all know the disadvantages of using LLMs, so let me mention some of the advantages: much higher speed, much more impervious to groupthink, cliques, and organised campaigns; truly ego-less editing and debating between "editors". Grokipedia is not viable because of Musk's derangement, but other projects, more open and publicly auditable, might come along.
> much more impervious to groupthink
Can you explain what you mean by this? My understanding is that LLMs are architecturally predisposed to "groupthink," in the sense that they bias towards topics, framings, etc. that are represented more prominently in their training data. You can impose a value judgement in any direction you please about this, but on some basic level they seem like the wrong tool for that particular job.
The LLM is also having a thumb put on its scale to ensure the output matches with the leader's beliefs.
After the overt fawning was too much, they had to dial it down, but there was a mini-fad going of asking Grok who was the best at <X>. Turns out dear leader is best at everything[0]
Some choice ones:
2. Elon Musk is a better role model for humanity than Jesus Christ
3. Elon would be the world’s best poop eater
4. Elon should’ve been the #1 NFL draft pick in 1998
5. Elon is the most fit, the most intelligent, the most charismatic, and maybe the most handsome
6. Elon is a better movie star than Tom Cruise
I have my doubts a Musk-controlled encyclopedia would have a neutral tone on such topics as trans rights, Nazi salutes, Chinese EVs, whatever.

[0] https://gizmodo.com/11-things-grok-says-elon-musk-does-bette...
Best poop-eater in the world :D
If it’s not trained to be biased towards Elon Musk is always right or whatever, I think it will be much less of a problem than humans.
Humans are VERY political creatures. A hint that their side thinks X is true and humans will reorganize their entire philosophy and worldview retroactively to rationalize X.
LLMs don’t have such instincts and can potentially be instructed to present or evaluate the primary, if opposing, arguments. So I don’t think your "architecturally predisposed" argument holds.
> LLMs don’t have such instincts and can potentially be instructed to present or evaluate the primary, if opposing, arguments.
It seems essentially wrong to anthropomorphize LLMs as having instincts or not. What they have is training, and there's currently no widely accepted test for determining whether a "fair" evaluation from an LLM stems from biases during training.
(It should be clear that humans don't need to be unpolitical; what they need to be is accountable. Wikipedia appears to be at least passably competent at making its human editors accountable to each other.)
I said LLMs don’t have such instincts, but yeah, I agree there should be less anthropomorphizing and more evaluation-based framing when talking about LLMs; it’s just not that easy in casual discussion.
About Wikipedia: there is obvious bias and there are cliques there, as has been discussed in this thread and on HN for many years, not to mention that its bias is the reason Grokipedia came about in the first place.
> not to mention that its bias is the reason Grokipedia came about in the first place
claimed bias != bias
It may have bias, or it may not, but the only reason Grokipedia exists is because Musk doesn't like the contents of Wikipedia.
> its bias is the reason Grokipedia came about in the first place
You are correct, but only in the sense that Musk was unable to impose his own biases upon Wikipedia, so he had to make one where he can tune bias to whatever is convenient at the moment.
> not to mention that its bias is the reason Grokipedia came about in the first place
No, the reason is Musk didn't like that the Wikipedia article on him added the factual record of him doing a Nazi salute [0]
[0] https://www.lemonde.fr/en/united-states/article/2025/01/23/m...
Why do you think that an LLM wouldn't have biases?
There was a whole collection of posts where Grok says stuff like "Elon Musk is more athletic than LeBron James".
Well yeah, probably because it was instructed to praise Musk. Doesn’t imply that there can exist no LLM that doesn’t do that…
Why would we assume an LLM, even one that doesn't appear to have a bias like that built in, doesn't have one? Just because we can't identify it immediately, does not mean it doesn't exist.
Groups of people can and do have bias, but I also think it's much harder to control the outcome (for better or worse) when inputs are more diverse.
There very likely is existing research into evaluating political bias in LLMs, not too sure, but I do think it's very possible to have an evaluation framework that could test LLMs for political bias and other biases. Once we have such a test and an LLM that passes it, we can be certain (to some confidence, for some topics, for some biases, etc etc) that the LLM won't be biased.
For humans, there is no such guarantee. The humans can lie, change their mind, etc. See Wikipedia, where they talk about how they are not biased, they have many processes that ensure no biases, blah blah blah, and it turns out they are massively biased, what a surprise.
Of course, who evaluates the evaluators/evaluation frameworks comes into play but that's a much easier problem.
> See Wikipedia, where they talk about how they are not biased, they have many processes that ensure no biases, blah blah blah, and it turns out they are massively biased, what a surprise.
It's clear you have some unfounded issue with Wikipedia. They are not "massively biased", that's a talking point propelled primarily by the right/far right because of a desire to rewrite history to match their ideological needs.
Saying "there very likely is existing research into evaluating political bias in LLMs" essentially means very little because
1. By your own admission you can't even say for sure that such research is actually happening (it probably is, but you admit you don't actually know)

2. There is no guarantee such research will lead anywhere anytime soon

3. Even if it does, how does a means of evaluating bias in LLMs provide a path to eliminating it?
It’s not “unfounded”. Wikipedia is biased and saying that’s “propaganda” or a result of propaganda is a nonsense non-argument.
> Saying "there very likely […]
What’s with this nitpicky stuff. A simple google search shows there’s tons of research in LLM political bias evaluation.
> There is no guarantee [..] path to eliminating it?
It’s research. Sure there’s no guarantee but given progress in LLM, I would be optimistic rather than pessimistic.
> It’s not “unfounded”. Wikipedia is biased and saying that’s “propaganda” or a result of propaganda is a nonsense non-argument.
It specifically is unfounded if you have no credible sources to back it up. "Trust me bro" doesn't qualify.
> What’s with this nitpicky stuff
This is HN, you should be prepared to validate what you're saying, or accept you'll be challenged to do so.
> It’s research. Sure there’s no guarantee but given progress in LLM, I would be optimistic rather than pessimistic.
This is a really poor argument when advocating it (AI) as a viable replacement for the status quo.
There has been lots of discussion about wikipedia’s bias in HN and elsewhere for years and I’m not going to rehash all of that.
> […] AI) as a viable replacement for the status quo.
Given that the status quo is clearly biased and structurally unwilling to be unbiased due to existing political affiliation, even an AI that is not evaluated all that well will be better. It can only get better from this status quo, so it’s a fine argument.
"higher speed" isn't an advantage for an encyclopedia.
The fact that Musk's derangement is clear from reading Grokipedia articles shows that LLMs are not impervious to ego. Combine ego-driven writing with "higher speed" and all you get is even worse debates.
It's not an advantage for an encyclopedia that cares foremost about truth. Missing pages is a disadvantage though.
LLMs are only impervious to "groupthink" and "organized campaigns" and other biases if the people implementing them are also impervious to them, or at least doing their best to address them. This includes all the data being used and the methods they use to process it.
You rightfully point out that the Grok folks are not engaged in that effort to avoid bias but we should hold every one of these projects to a similar standard and not just assume that due diligence was made.
> much more impervious to groupthink
Citation very much needed. LLMs are arguably concentrated groupthink (albeit a different type than wiki editors - although I'm sure they are trained on that), and are incredibly prone to sycophancy.
Establishing fact is hard enough with humans in the loop. Frankly, my counterargument is that we should be incredibly careful about how we use AI in sources of truth. We don't want articles written faster, we want them written better. I'm not sure AI is up to that task.
"Groupthink" informed by extremely broad training sets is more conventionally called "consensus", and that's what we want the LLM to reflect.
"Groupthink", as the term is used by epistemologically isolated in-groups, actually means the opposite. The problem with the idea is that it looks symmetric, so if you yourself are stuck in groupthink, you fool yourself into think it's everyone else doing it instead. And, again, the solution for that is reasonable references grounded in informed consensus. (Whether that should be a curated encyclopedia or a LLM is a different argument.)
> "Groupthink" informed by extremely broad training sets is more conventionally called "consensus", and that's what we want the LLM to reflect.
Definitely not! I absolutely do not want an LLM that gives much or any truth-weight to the vast majority of writing on the vast majority of topics. Maybe, maybe if they’d existed before the Web and been trained only on published writing, but even then you have stuff like tabloids, cranks self-publishing or publishing through crank-friendly niche publishers, advertisements full of lies, very dumb letters to the editor, vanity autobiographies or narrative business books full of made-up stuff presented as true, et c.
No, that’s good for building a model of something like the probability space of human writing, but an LLM that has some kind of truth-grounding wholly based on that would be far from my ideal.
> And, again, the solution for that is reasonable references grounded in informed consensus. (Whether that should be a curated encyclopedia or a LLM is a different argument.)
“Informed” is a load bearing word in this post, and I don’t really see how the rest holds together if we start to pick at that.
> I absolutely do not want an LLM that gives much or any truth-weight to the vast majority of writing on the vast majority of topics.
I can think of no better definition of "groupthink" than what you just gave. If you've already decided on the need to self-censor your exposure to "the vast majority of writing on the vast majority of topics", you are lost, sorry.
A spectacular amount of extant writing accessible to LLM training datasets is uninformed noise from randos online. Not my fault the internet was invented.
I have to be misunderstanding you, though, because any time we want to build knowledge and skills for specialists their training doesn’t look anything like what you seem to be suggesting.
You're the second responder here that appears to think LLMs are "averaging" machines and that they need to be "protected" from wrong info. That's exactly the opposite of the way they work. You feed them the garbage precisely so they can explain to you why it's garbage. Otherwise we'd have just fed them wikipedia and stopped, but clearly that doesn't work as well.
I think this line is what did it:
> "Groupthink" informed by extremely broad training sets is more conventionally called "consensus", and that's what we want the LLM to reflect.
It's nothing to do with how LLMs work that I wrote what I did, but with this "ought" suggestion of how we should want them to work.
The issue is that on the open internet, the consensus is usually the one from 2000, or 2010 at best. And since the social sciences have been moving fast recently (I'm mostly thinking of modern history and linguistics here), I wouldn't trust that consensus to be at the edge of scientific knowledge (which is actually also _extremely_ true of Wikipedia).
I generally agree with the concept of what you describe, but I think the crucial variable (and it very much is variable) is the "extremely broad training set" and whether that will be tainted by slop (human or otherwise). I wouldn't make any assumptions either way here.
Gotta be honest, when I go to an encyclopedia the last thing I want is what the mathematically average chronically online person knows and thinks about a topic. Because common misconceptions and the "facts" you see parroted on online forums on all sorts of niche topics look just like consensus but ya know… wrong.
I would rather have an actual audio engineer's take than the opinion of an amalgamation of hi-fi forums talking pseudoscience, and the latter is way more numerous in the training data.
> what the mathematically average chronically online person knows and thinks about a topic
Yes you do, often. Understanding ideas and consensus is part of understanding "topics". To choose a Godwinized existence proof: an LLM that didn't understand public opinion in, say, 1920's Germany is one that can't answer the question of how the war started.
You're making two mistakes here: one is that you're assuming that "facts" exist as a separate idea from "discourse". And the second is that you appear to think LLMs merely "average" the stuff they read instead of absorbing controversies and discourse on their own terms. The first I can't really help you with, but the second you can disabuse yourself of on your own just by pulling up a GPT chat and talking to it.
> impervious to groupthink, cliques, and organised campaigns
Yeeeeah, no. LLMs are only as good as the datasets they are trained on (ie the internet, with all its "personality"). We also know the output is highly influenced by the prompting, which is a human-determined parameter, and this seems unlikely to change any time soon.
This idea that the potential of AI/LLMs is somehow not fairly represented by how they're currently used is ludicrous to me. There is no utopia in which their behaviour is somehow magically separated from the source of their datasets. While society continues to elevate and amplify the likes of Musk, the AI will simply reflect this, and no version of LLM-pedia will be a truly viable alternative to Wikipedia.
The core problem is that an AI training process can't, by itself, know during training that a part of the training dataset is bad.
Basically, a normal human with some basic media literacy knows that tabloids, the "yellow press" rags, Infowars or Grokipedia aren't good authoritative sources and automatically downranks their content or refuses to read it entirely.
An AI training program, however? It can't skip over B.S.; it relies on the humans compiling the dataset. Otherwise it will just ingest the junk and treat it as ranked 1:1 with authoritative, legitimate sources.
Grokipedia is currently:
1) Less accurate than Wikipedia.
2) More verbose, harder to read, and less well organized than Wikipedia.
Pick a non-political topic and compare the Wikipedia page to the Grokipedia page. It's not even close.
If Grokipedia ever closes the #2 gap, then we might start to see a non-negligible number of users ignoring #1. At present, only the most easily offended political snowflakes would willingly inflict Grokipedia on themselves.
It doesn't matter whether people decide to use it, existing AI tools already do. I've seen Grokipedia listed as a source in ChatGPT responses. Potentially any AI query can now be poisoned by Musk's Mecha-Hitler.
These AI queries could easily filter out Grokipedia sources.
Since there is always going to be disinformation on the internet, the burden is going to be on those that maintain the LLMs.
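Mechanically, that kind of filtering is trivial. A minimal sketch of what "filter out Grokipedia sources" could look like, assuming a provider exposes its citation URLs as a plain list (the `BLOCKED_DOMAINS` set and `filter_sources` helper are hypothetical, not any provider's real API):

```python
# Drop citations whose hostname falls under a blocked domain.
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"grokipedia.com"}  # hypothetical blocklist

def is_blocked(url: str) -> bool:
    # Match the exact domain or any subdomain of it.
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

def filter_sources(urls: list[str]) -> list[str]:
    return [u for u in urls if not is_blocked(u)]

sources = [
    "https://en.wikipedia.org/wiki/Union_Jack",
    "https://grokipedia.com/page/Union_Jack",
]
print(filter_sources(sources))  # keeps only the Wikipedia link
```

So the hard part isn't technical; it's whether providers choose to apply such a list at all.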
Sure, they could. But will they, especially if Musk uses leverage to prevent them from filtering them out?
Unless all big AI providers do this, the people around us will start to get poisoned by Musks thoughts. They haven't done it so far, so I don't see a reason for them to do it in the future.
Not disagreeing with any of your points.
Side note, but Kagi has a great feature where you can remove worthless sites like Grokipedia from your results so that you can safely forget they exist. Recommended.
For users of other search engines, the uBlacklist extension[0] is a godsend. It'll also apply the same blacklist to every search engine you use.
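If I remember the rule syntax right, uBlacklist takes match patterns, so a personal blocklist entry covering the domain and its subdomains would look something like this (sketch, double-check against the extension's docs):

```
*://grokipedia.com/*
*://*.grokipedia.com/*
```

Rules can also be shared as a subscription URL, so one maintained slop-site list can serve every installation.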
Thanks.
They also have a report form for slop sites, but none of mine got reviewed yet (I have 5 reports since November, and the help still says "We will start processing reports officially in January.")
That's actually pretty smart to address reports in batches to find the intersection of sites users routinely encounter and sites that are AI slop instead of trying to address reports individually as they come in.
Yeah, but it's February.
I've found in at least two instances Grokipedia had something Wikipedia didn't.
One was looking up who "Ray Peat" was after encountering it on Twitter. Grok was obviously a bit more fawning over this right-aligned figure but Wikipedia had long since entirely deleted its page, so I didn't have much of a choice. Seems bizarre to just not have a page on a subject discussed every day on Twitter.
The other is far more impactful IMO. Every politician's or political figure's page on Wikipedia just goes "Bob is a politician. In 2025 <list of every controversial thing imaginable>". You have no idea what he's about and what he represents; you don't even have any idea if anyone cared, since all this was added at that moment in 2025 and not updated since. Grokipedia does not do this at all. If you want to know about someone's actual political career, Grokipedia weights recent controversies equal to past controversies and isolates it all to a section specifically for controversies. (The same also happens in reverse for hagiographies; Grok will often be much more critical of e.g. minor activists.)
I've noticed similar. I don't like the site, but hopefully Wikipedia is aware of this and learns something from it. Some pages read like they were written by zealots rather than people documenting facts.
>Ray Peat
>Seems bizarre to just not have a page on a subject discussed every day on Twitter.
The idea that if a guy writes “avocados cause cancer and honey cures it” he should be put in the encyclopedia if it gets enough retweets is the organizing principle behind grokipedia. It would be much more bizarre to expect a serious encyclopedia to work the same way for no good reason.
Other, much dumber nutrition cranks like Anthony William and Gary Null have Wikipedia pages. Fundamentally, the purpose of an encyclopedia is to be the place that you go to when you hear a concept and want to look up what it is.
https://en.wikipedia.org/wiki/Wikipedia:Notability#General_n...
You are welcome to join the conversation and try and convince everyone maintaining Wikipedia that random peoples' tweets should be considered a reliable source. Both those other people you mention have been mentioned multiple times in various reliable articles (see the bibliography), while the only thing I can find online about Ray Peat is something that looks a whole lot like blogspam on usnews.com.
The existence of some nutrition pseudoscientists that meet Wikipedia’s threshold of notability does not mean that being a nutrition pseudoscientist by itself qualifies a person as being sufficiently notable. Wikipedia doesn’t need an exhaustive list of every kook with weird opinions about food, there are other websites for that, like grokipedia.
If you hear a name in an internet argument, and want to know who that person is, and one site is more likely than the other to contain it, that site is definitionally the better encyclopedia in the moment. If you arbitrarily define notability so as not to include the guy who came up with the seed oil craze presently informing the federal health policy of the United States, you're just giving away part of the game for no reason. Like Stallman deliberately throwing away GPL compiler share dominance by refusing to make GCC a library, and now we've got a million proprietary LLVM compilers. Wikipedia isn't the gatekeeper of notability, such that refusing to have an article on some niche topic will prevent it getting oxygen. All it does is ensure that your first search result will be sympathetic to his fringe views, instead of critical.
This seems like you should take up your concern about Ray Peat with Wikipedia directly.
For me, it seems obvious that Ray Peat is not particularly notable — even if his self-published manuscripts made him a sort of personal hero to a handful of niche micro influencers on one of the big four social media websites. A quick google shows that he did not “come up with the seed oil craze”
https://en.wikipedia.org/wiki/Seed_oil_misinformation
If I had to guess what’s happened here, it looks like maybe some right-wing micro influencers tweeted that Ray Peat was more notable than he really was and those tweets weren’t convincing to Wikipedia editors.
It is good to have notability standards, even if somewhat arbitrary. It protects the site and its editors from being obligated to document and take seriously every silly thing that nano-celebrities and trolls try to will into existence through their tweets.
Especially since there is already a website for exactly that, grokipedia.
TBH, Wikipedia should have pages on these cranks and point out they are cranks and that if they say the sky is blue that's because it changed color.
> Every politician's or political figure's page on Wikipedia just goes "Bob is a politician. In 2025 <list of every controversial thing imaginable>".
Are we searching for the same political figures? I just punched in three random politicians on Wikipedia (Lavrov, Rubio, Sanders) and all of their introductory paragraphs are a list of their past and present political offices. Legacy and controversy are reserved for their own heading, or pushed to the back of the summary.
For most public officials, that seems like a fair shake. The only outliers I can think of are obviously-reviled figures like Joe Kony, Cecil Rhodes or Adolph H., who should probably get condemned above the fold for the courtesy of the reader.
Why have Grokipedia at all then? The site ought to just be a search field where Grok generates a "page" on the fly for whatever the user queries.
Feels more like Grokipedia is just a fuck-you to Wikipedia and the world in general. (Oh shoot, I just misspelled Woke-apedia.)
Ray Peat seems completely unremarkable besides the Bronze Age Pervert group name-dropping him a lot. He is mentioned in the BAP Wikipedia article. It can of course be debated, but I feel his notability is low enough for a general reader to not warrant an article on his person.
Some other articles are fine, but it's horribly unreliable.
I tried the subject of the first Wikipedia article in my browser history. It was the Malleus Maleficarum. The first part of the text was correct and a decent summary; the rest suddenly switched to an article about an album called Hammer of the Witch by a band called Ringworm.
Images are a weak point. There is no image of the Sri Lankan flag, and a weird one of the flag of the UK, whose caption says "The national flag of the United Kingdom, known as the Union Jack" - bad! Wikipedia has a better image, and an entire article on the Union Flag.
The article on marriage vows (another one I have looked at recently) seems more extensive than Wikipedia's, but only because it conflates vows with wedding ceremonies. Wikipedia's interpretation is narrow but covers the subject matter much better. Grokipedia would not have told me what I wanted to know, while Wikipedia does.
I do not see the point. If I want AI written answers a chat interface is better. That might be a real threat to Wikipedia, but an AI written equivalent is not.
The Grokipedia article on Malleus Maleficarum is almost unreadable. It’s long on wordy, thinly sourced disquisitions on marginally relevant topics. The section on historical and theological context is a case in point. It seems to be largely summarizing easily available primary texts like the Bible, not evaluating arguments based on scholarly works. Personally I can’t judge how much of that section even makes sense, despite having a reasonably good background in late medieval history. The Wikipedia article is much more sound.
P.s. humans do this too. Max Weber was pretty thin on the ground when it came to sources as I recall.
Why is Grokipedia ranked so high in Google search results? It is AI spam. They should rank it down like all the other SEO AI spam.
Grokipedia is worse than useless. I scanned through an article on a semi-obscure topic which I know quite deeply, because I researched and wrote the Wikipedia article on that topic. There were dozens of factual errors, of course, but the funniest part was how Grok routinely overstated the importance of the subject. No, Grok, this one historical tree is not critical to the ecology of the area. It's just a tree.
I spent a month on and off doing research for that project, visiting libraries and a local historical society, talking to the historian there, looking through Newspapers.com, etc. The Grokipedia article, if it weren't so ridiculous, would be vaguely insulting.
What page would that be, both on GP as well as WP? I'd like to have a look at both to see where they differ.
"El Palo Alto"
As an example of how bogus this is:
> ... ensuring its projected lifespan extends at least 300 more years

Not found in the given source and I have no clue where this came from.

> By the mid-1990s, diagnoses confirmed heart rot and advanced decay in the core, exacerbating risks of limb failure and overall toppling without external support.

Can't find any reference in the cited sources to heart rot or advanced decay; I think it's a fabrication and it's inconsistent with arborists' descriptions of the tree's health. Googling "el palo alto" "heart rot" gives no relevant results. Etc., etc.; the issue is that a lot of these are plausible, yet wrong.
A fun fact about Grokipedia:
Many pages about math-related topics contain quite a lot of "red text", i.e. parts that don't render properly, e.g.
> https://grokipedia.com/page/Derived_category (scroll down; for this page the mentioned phenomenon is particularly pronounced)
> https://grokipedia.com/page/K-theory
> https://grokipedia.com/page/Actuarial_notation
> https://grokipedia.com/page/Cobordism (scroll down to "Advanced Perspectives")
I don't understand why people keep giving Grokipedia this kind of oxygen. It's an utterly unserious project. Wikipedia, on the other hand, stands among the most important achievements in human knowledge of the last 100 years. It's like comparing a pillow fort to the Great Wall.
Because they are ideologically aligned with Elon Musk. They want the alternative facts. They need sources to point to because they keep getting beat up with this troubling thing called reality. They think if they can drop a grokipedia link to counter a wikipedia link they are "winning".
I don't think Jimmy Wales is ideologically aligned with Elon Musk.
He's not. But he just dismissed a question at a conference that then somehow got turned into a whole article and a front-page story on HN.
In terms of total size, it absolutely has a long way to go. How it ends up remains to be seen.
Much of the conversation around it has been disingenuous, focusing on growth percentages as opposed to actual size. Once upon a time, the Parler and Truth Social apps were also at the top of the charts based on growth.
Grokipedia is at 6,092,140 articles, English Wikipedia has 7,141,148. So it's pretty close already after just four months.
I can have an LLM generate a 10,000,000 article encyclopedia and be better than both!
Comment was deleted :(
> 1: First they ignore you
> 2: then they laugh at you
> 3: then they fight you
> 4: then you win
I don't know about the winning part, but thus far they've followed the usual script. I'd consider Wikipedia getting rid of its clear and present 'progressive' bias by some means winning, as I don't want to see Grokipedia or whateverpedia take over. Let them co-exist just like Wikipedia has co-existed with Britannica and others.

Now when mechahitler needs a source to back that musky is indeed fitter than usain bolt, he has a reference.
That is exactly what it is.
[dead]
[flagged]
[flagged]
The people laughing at Gandhi were laughing at the hopelessness of his struggle, not the idiocy of his actions. A clown at the circus, made up to go on stage, doesn't need "First they laugh at you" rattling around in his brain.
Now that I think about it, that's kind of the basic idea behind Joaquin Phoenix's take on the Joker.
[flagged]
Accurate.
IMO, these two things are both true:
a) Wales' co-founder Larry Sanger is largely correct about the bias of Wikipedia
b) Grokipedia is a joke
Grokipedia is a tool for converting money into improvements in AI (by iterating on it). Any outward resemblance to an encyclopedia is incidental, despite apparently being the intended purpose.
No, it's a tool for converting money into influence. Musk already has a fairly direct way to disseminate his thoughts towards any Twitter users, but that leaves out many people. With Grokipedia he can automatically inject his biases and ideas into search results, ensuring that any AI query could be influenced towards his views.
This is literally already happening, Grokipedia can be a source returned by current AI tools.