294 Comments
Katherine Dee:

You're easily one of the best (if not *the* best) documenters of Internet history writing today

sol s⊙therland 🔸:

Agree with this!

Isaac King:

> “unreliable, editorially incompetent, repeatedly caught publishing false information, conspiracy theories and hoaxes, [undue weight] for opinions.”

Perhaps more damning was David's justification for this comment on Wikipedia. Someone replied asking for David to provide sources for his claims about Quillette, and David replied:

> I do expect you to have read the discussion of this source in the section above, which lists them extensively. If you are unable to do this, you should not be commenting on an RFC following the discussion above. If you are merely unwilling, you should not be commenting either.

I obliged and read the entire (very long) previous discussion, only to discover that no such sources had been provided anywhere in the discussion!

The Mighty Doom:

This is very typical of Gerard. He makes defamatory statements about a source, sweeping generalisations and gross misrepresentations, and claims this is all backed by the contents of long and disorganised Wikipedia debates. Someone goes and looks, doesn't find what Gerard claims is there, challenges his account, and then nothing happens. Gerard goes silent and nobody else acts (theoretically, ignoring a valid question is a gross breach of Wikipedia's collaborative protocols and should lead to some form of sanction). Why? Because the claim ultimately serves Wikipedia's political bias. They genuinely want to believe what they say about the Daily Mail's general business model, even though most of it is so absurd they can't incorporate these claims into their own famously unreliable encyclopedia. This is the fatal flaw of Wikipedia. It polices itself. It will only stop when donations dry up, the media wakes up, or there is a mass influx of right wing editors.

Aapje:

They don't even depend on donations anymore, but are now funded by a deal with Google. And they are working on, or already have, a trust fund, so they can live off the returns forever.

Aapje:

The way Wikipedia is most likely to end is by being made obsolete by something better, or because the Wikimedia foundation becomes even more bored of running Wikipedia and fully transitions into an activist organization.

Some Guy:

The courage to publish this must have been immense. I’d never ever want that guy getting fixated on me.

Martin Blank:

New Wikipedia article has appeared about Tracing Woodgrains highlighting his long membership in the Nazi party and personal friendship with Hitler. All backed up by “Reliable Sources” of course.

It is sad what a joke this article makes Wikipedia out to be.

Isaac King:

This is a priori not a likely claim, and I just checked both live Wikipedia and the article deletion log for such an article and found nothing, so my current belief is that you are lying. Additionally, even if a troll were to create such an article, it would be rapidly deleted, which is exactly how Wikipedia is supposed to function.

People who liked this comment really should recalibrate their credulity.

Martin Blank:

Your sarcasm/satire detector is broken.

Halftrolling:

Found the rationalist

Eichelhäher:

Thanks for clearing this up. For a moment I thought he was there in the Fuehrer's bunker when the Red Army took Berlin.

Comment deleted (Jul 11)

Martin Blank:

In a way, that my silly joke was plausible enough that someone fact-checked it says about all you need to know about Wikipedia and what this scandal reveals about it.

Martin Blank:

I love smbc. That one was from before I started reading it, so thanks!

1 horsedick (don't laugh):

You're really a uniquely fine author, Trace. Bigly talented with words and neurodivergently driven towards meticulous research. I think I'm actually going to give you money now.

UNRIVALED:

msg aevann to add substack support

TheOtherKC:

The current standing of Roko's Basilisk in the LessWrong community is actually news to me. I guess I swallowed Gerard's version of the story without thinking. Not least because I'm broadly skeptical of LessWrong and Yudkowsky for other reasons, so it supported my priors.

I know that nobody is immune to propaganda, but it still stings when this truth comes for me.

1 horsedick (don't laugh):

Even if the Basilisk is taken from us, there is still everything else about Yudkowsky to laugh at.

Timothy:

What exactly? In my opinion he has 3 or 4 odd beliefs, but mostly he seems like a kind and thoughtful person, and also a talented author.

Spherb:

I was going to write a comment pointing out that some of his beliefs *are* seriously odd. However, I reconsidered when I found myself putting caveats on all of the odd beliefs. For instance:

- He thinks that AI is almost certainly going to kill everyone (caveat: it's the high confidence that's weird, "AI might kill everyone" isn't an unpopular view even among experts now).

- He thinks that everyone should be signed up for cryonics (caveat: he doesn't necessarily think that the odds of it working are high, he just thinks that they're high enough--over ten percent-ish?--that it's worth $40k).

- He would pick the torture in the torture vs dust specks thought experiment (caveat: coming up with a theory of ethics that picks the dust specks and doesn't also lead to any equally bizarre results is actually really hard).

I'm hardly unbiased here--I've been rationalist-adjacent for a long time--but I think EY deserves more effortful criticism and less jeering than he usually gets. Although I think he's wrong about a lot of things, he's typically wrong in interesting, non-trivial ways that are actually hard to make fun of once you dig into them. And he's genuinely a good writer who has a way of making weird philosophical concepts seem obvious in retrospect.

Max More:

Yudkowsky is way, way off on AI but he is essentially correct about cryonics -- one of the most (deliberately) misunderstood ideas of all time. I would not agree with him that "everyone" should be signed up for cryonics (if EY did say that) but it is an option that rationally should be far more popular.

Gerard has consistently deleted evidence in favor of cryonics on Wikipedia. It seems that he has been replaced by someone else fairly recently, but probably by someone he picked.

TheOtherKC:

As my "main thing": his p(doom) is much, much too high, based on the assumption that current AI methodologies even have a chance of creating something remotely akin to sapience. For example, in https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/, he calculates the risks involved in a hot war between nation-states capable of building large GPU clusters as less than the risks of an AI:

> If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.

> Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.

While I can imagine scenarios where I would agree, I do not consider this particularly rational given the current state of artificial intelligence.

Daniel Kokotajlo:

Not only is Yudkowsky being not-irrational here, my own view on the risks from AI is pretty similar. Key point: we aren't talking about present AI systems, but future AGI systems. Which we and many other people with relevant expertise think are just a few years away.

Daniel Kokotajlo:

If you are looking for things to read on this subject, I have blogged about it a lot, also there is https://situational-awareness.ai/ and interviews with OpenAI researchers (e.g. https://www.dwarkeshpatel.com/p/john-schulman) and the bio anchors report and associated literature, responses, etc. e.g. https://www.planned-obsolescence.org/july-2022-training-game-report/

1 horsedick (don't laugh):

The man loudly proclaims the rapture is coming any day now. He's a joke.

Ben:

I don't think he's saying 'any day now', he's saying we should be concerned about future AI becoming smarter than humans. Seems reasonable given how fast it's developing at the moment.

Sandro:

Citation please.

Sandro:

So not any day now, but 6 years from now. And he's not claiming the rapture, because nobody is going to be saved, but all or most of us killed. So basically the complete opposite of what you claimed.

IJW (Jul 11, edited):

From reading some of his tweets and some of his articles, he strikes me as narcissistic and overly full of himself. Kind of like the people featured on the iamverysmart subreddit.

I also don't understand why he is taken seriously at all on the subject of AI, as he does not really have notable achievements in the field of AI.

Paul Crowley:

Then it must be a huge surprise to you to see people like Hinton and Bengio saying "we are now coming around to agreeing with what Eliezer has been saying for decades".

Sandro:

Having a persuasive argument doesn't require achievements in AI.

IJW:

I am sure you can find some intelligent non-brain-surgeons to come up with persuasive arguments regarding brain surgery. Which should be completely ignored, as they are not brain surgeons.

Rob Miles:

Although in this case, the arguments seem to be persuasive *to brain surgeons*. For example, the top three most-cited AI researchers ever have all signed a statement saying that the risk of extinction from AI should be taken as seriously as nuclear war.

Sandro:

No, they shouldn't be ignored because they can make totally legitimate arguments about when brain surgery is appropriate, when the risks are too great, or when people with vested interest in performing risky brain surgeries maybe shouldn't be left to do as they please without oversight. I hope you see the parallels.

Linch:

This is an overly limiting analogy when it comes to *risk analysis* of emerging technologies. Dangers of climate change are not primarily fielded by petroleum engineers (nor are the solutions primarily discovered by petroleum engineers), nuclear weapons deproliferation is somewhat spearheaded by physicists but not primarily in the domain of physicists (and many of the partial solutions look like game theory or diplomacy or fancy engineering, not nuclear physics), Silent Spring was written by a marine biologist, not a chemical engineer, etc.

Whatever Happened to Anonymous:

I don't think "everything" is fair (he's taken a big W with the AI risk stuff, even if he's wrong, for being so ahead of the curve), but I think you can still laugh at the fact that he's fat (is he still? has he taken GLP1s?).

Isaac King:

If your takeaway from this is "I guess I was wrong about Roko's Basilisk, but I must still be correct about all the other beliefs I acquired via the same process about the same person", I would posit that you have not fixed the true problem in your reasoning. Perhaps consider that the bad actors and mental biases that led you to an incorrect belief about Roko's Basilisk may have done the same for whatever other beliefs you have in mind.

1 horsedick (don't laugh):

The whole 'imminent superintelligent AI singularity', that still a thing?

Isaac King:

Depends on what you mean by "imminent". Eliezer has been pretty consistent that he doesn't know if it's coming next year or in 30 years.

Victor Levoso:

Do you think superintelligent AI is impossible, or just very far away? And if so, why?

Are you aware it's a relatively common thing to believe in the field these days?

1 horsedick (don't laugh):

The burden of proof rests with the doomsday cultists peddling unfalsifiable beliefs

Sandro:

Why do you think they're unfalsifiable? That's a weird claim. The whole point of the argument is to drive research into safety to either avoid catastrophe or prove it's not going to be a problem. The safety question is totally falsifiable.

LoveBot 3000:

What an incredible story. I’ve always wondered who the sneer club people were, and why I’d occasionally stumble upon wildly uncharitable comments about LW and Scott. Wild how much one determined antagonist can shape the flow of information on the internet.

Ladygal:

Wow - what an incredible story! Even as a Wikipedia edits layman, this was absolutely fascinating.

Gerard's story made me think of that scene in Memento where Carrie-Anne Moss takes all the pens out of her house so Guy Pearce can't write anything down before his memory blanks. What does it mean to control one of the top Google results and the first place most people click for information about an unfamiliar person or concept? Gerard seems to understand how powerful this can be.

(Hilariously, when I just Googled Roko's Basilisk, the first result was Wikipedia.)

The Mighty Doom:

The way to annoy Gerard is to point out that he doesn't have that much influence. The effect of the Daily Mail ban in the real world was negligible. Other than the next day's news, nobody cared. The Mail is as strong as ever, becoming the number one newspaper after the ban even. And the UK had a right wing government for another eight years. And a big reason why is because people in the real world can totally believe Wikipedia editors would lie about the Mail. That they would misrepresent the facts. That they wouldn't be fair. Any doubters only need to see Gerard and those like him speaking for a few posts to figure out what's going on.

Jacob Harrison:

One wonders why Gerard's Wiki editing powers weren't further curtailed, given the evidence of bias and fraud in the Scott Alexander case.

Come to think of it, Wikipedia is famously hostile to crypto projects, and now I have a strong suspicion as to why that is the case. Multi-billion-dollar protocols don't have pages, and ones that do have scarce technical detail. They publish technical details on their own websites and they are discussed in blogs, none of which is a "reliable source". When a crypto project does make the news, it's usually the ones that have high-profile criminal activity, and only then do they qualify for a Wikipedia entry.

Gerard is the main author of the Urbit Wiki page, which is mostly a hit piece against Curtis Yarvin's politics. While Urbit is a very interesting technical project with novel design choices, almost none of that shows up in the Wiki page. It also gives an outdated and misleading summary of its functionality as just a "bare-bones messaging server", as it has added hundreds of user-generated apps in the last few years. But none of that is in what Gerard deems a "reliable source". It's obvious he's using process to push his political agenda. The talk page is just Gerard using the RS policy and process to bully the poor kid who thought to make the article into removing all the interesting stuff: https://en.wikipedia.org/wiki/Talk:Urbit

The Mighty Doom:

This is what so many people just don't understand about Wikipedia. Their published rules are meaningless. As anyone who understands the purpose of an encyclopedia would expect, their own rules allow primary sources to be used to augment an article. The article still has to be primarily based on reliable secondary sources, to both prove it is a "notable" topic (worthy of inclusion) and ensure the article is "neutral" (which is why they are trying hard to delegitimise right wing sources). So if by some miracle a well-known crypto project has an article, you are allowed to use it as a primary source to support things like basic technical details (especially if there is nothing controversial or disputed about such information).

But this is just another way people like Gerard can make sure subjects he doesn't want on Wikipedia do not get covered, or do not get covered fairly. He will simply lie and say all content in Wikipedia must be supported by a secondary source, and remove it. The lie is blatant, contrary to their own rules, but because crypto is seen as just another branch of the right wing ecosystem, nobody on the inside objects. And on Wikipedia, outsiders have literally no power.

The fallout from this very piece demonstrated that they will ban any newcomer whose first action is to raise concerns about an insider like Gerard using Wikipedia's internal mechanisms. Such a thing is seen as proof the person is not acting in good faith and is not there to help build Wikipedia. It is perverse, but it happens all the time, because Wikipedia has no regulator but itself. They only paused and unbanned the person when they realised a lot of people were watching. Most of the time, people aren't watching, especially not journalists, and that's exactly how they like it.

Emily Booth:

Wikipedia will never get another dime from me. Seeing behind the curtain has confirmed my suspicions.

Eugine Nier:

Won't help. The Wikimedia Foundation already has way more money than it knows what to do with. Their only real expenses are server costs since all the content and moderation is provided by volunteers.

The Mighty Doom:

It is already helping. Wikipedia had to lay off staff recently because small donors are losing faith in the brand. Which is essentially their only asset. They are facing a future where they will have to rely on legacies (rich white men) or corporations (rich white men), if they want to keep Wikipedia content ad free. Once that gets out, that Wikipedia is now written by a handful of editors, still mostly white men, and are now also funded by a handful of white men, Wikipedia's brand will degrade ever further. They ceased to be a mere server company a decade ago at least, and that side of things is already secure due to their giant cash hoard. But they need a lot more cash to pay staff and make grants because their goals are extremely lofty. They would rather merge with a much bigger charity than ever go back to being Classic Wikipedia.

Dan Gardner:

What is this “Classic Wikipedia” you refer to?

The Mighty Doom:

Back when it was just an encyclopedia project. The "Wikipedia" of today is not only a whole host of different websites for different kinds of projects, it's a vast grant-issuing and lobbying organization. They have tried to push the other brands, and the name of the parent organisation, the Wikimedia Foundation, but they eventually gave up, admitting people only really know "Wikipedia".

Peter Bjørn:

WP became, in its own way, a snobbish social club for a special breed of terminally online people last decade (early 2010s). It was a giddily enthusiastic, happy (and chaotic) quasi-community of people sharing their knowledge as best they could in the 'aughts. That feeling, however, hasn't been there for a decade at least.

Shawn Willden:

I haven't given them anything for years, ever since I read their financial disclosure and realized that they have hundreds of millions of dollars and spend at most a couple of million per year on actually operating the site. If the foundation dropped everything other than running the web sites people want them to run, treating their existing pile of cash as an endowment, they'd be able to run it forever without asking anyone for a penny.

Dan Gardner:

“One person is alleged to have acted badly = This whole, vast collective undertaking, some seven million articles, created and maintained by tens of thousands of people around the world for almost a quarter century is rotten to the core.”

Does that make sense? I don’t think so.

Max More:

But it is not just one person. Gerard is something of a standout in his wickedness. But most Wikipedia pages on controversial issues are controlled by one perspective and therefore unreliable.

Dan Gardner:

But your conclusion isn't supported by the essay. Or any evidence that I know of. That's my point: Wikipedia is huge. If you want to make a claim about it generally, such as you are making here, you need to present serious evidence of a sort I have seen literally no one present so far.

Max More:

Sorry that my brief comment on someone else's blog was not a detailed, evidence-filled essay. If you haven't seen such evidence you haven't been looking.

Dan Gardner:

Your first sentence is fair enough. Your second sentence, however, amounts to a scoff. I'm pretty sure if somebody wrote that to you in another context you'd feel about it how I feel now. Look, I am sincerely looking for evidence. Yes, there are people who assert this with great confidence. I want to know why, that's all. Someone else who made a comment like this shared a number of links. I didn't find them persuasive, for what that's worth, but it was helpful.

Max More:

I'm not clear what you are asking, Dan. Are you asking for extensive documentation that Wikipedia's controversial pages are largely controlled by one POV? Or are you asking for evidence that cryonics is reasonable? I'm just not willing to take the time to provide the evidence for the former. It's well enough known among those who don't share the view of the controllers that it's not worth my time. I did write about Gerard's Wikipedia claims about cryonics here:

https://biostasis.substack.com/p/the-false-claim-of-cryonics-as-pseudoscience

I can see that my second sentence would feel like a scoff, given your view. But it's honestly hard for me to believe that, if you've paid attention, you have seen no evidence supporting this view of Wikipedia. (Or, again, are you instead talking about evidence for cryonics?)

Halftrolling:

*and he continued to do so for years without anyone being able to stop him beyond mere slaps on the wrist. This may have been a single proven individual, but the fact he wasn’t swiftly booted for his behavior speaks fairly ill of the site as a whole. Doubly so because many in high-up places seemingly defended him. If this was a factor then it's clear corruption; if not, it's gross incompetence.

To put the ball back into your court, can you display cases where individuals such as the one mentioned in this article were swiftly booted for their behavior? Can you bring up any whose positions are as high? Since wikipedia is such a vast collective undertaking there must be examples where the system worked effectively or even perfectly.

Kade U:

A beautiful piece, gripping, emotionally engaging throughout. The empathy with which you've treated a man you clearly had some significant pre-existing distaste for is really the hallmark of any good piece of character writing.

For the record, my exposure with this entire universe of people is fairly limited and comes exclusively by way of getting into Scott's writings via an interest in psychopharmacology (I've never been on LessWrong or had an actual real life conversation with someone who self-describes as 'rationalist' or 'effective altruist') -- I thought you might like to know that despite my lack of background context I found the piece easy to understand and incredibly interesting.

TracingWoodgrains:

Glad to hear it's easy to understand even without altogether too many layers of built-up context. Thanks for reading!

Greasy:

One day a Reliable Source will quote this article, and someone will bludgeon him to death with those quotations in a Wikipedia article about him, and all will be symmetrical and right with the world.

Michael Wheatley:

If this article gets published by a Reliable Source he has to be IP-banned from editing Wikipedia. Sorry, that's just our Reliable Sources policy, nothing to be done about it.

Ppau:

Damn you

I open an article to get some schadenfreude and ingroup cheer, and I end up learning about the history of Internet culture and Wikipedia logistics

ProfGerm:

Fascinating. A heap of sad, a dash of disgust, a soupçon of pity.

The primary question left unanswered is

>I find Gerard much more sympathetic than I had expected going in

why?

Is it the "there but for the grace of God go I" that accompanies any tragedy, like rubbernecking at car wrecks? "Addict abuses power" is indeed a human story, and this is a Very Online adaptation of an old tale, but if there's anything sympathetic to his rendition it is too finely scattered for my myopic eyes.

TracingWoodgrains:

It's "there but for the grace of God" combined with finding a lot to be sympathetic towards in his LessWrong posting era. As one note, I think Eliezer and LessWrong -are- kind of bad at taking jokes at times, and while he still liked them, I could see him getting earnestly frustrated that they didn't know how to take a joke and banter back.

More fully: it's a spectacular villain origin story, and part of me has to admire the sort-of grandeur of it all.

ProfGerm:

I suspect I’m considerably less… generous than you about the nature of banter and recognizing it, but I can also see that being a crowd particularly deficient in detection and tolerance thereof. Taking a joke is a fragile thing.

Whatever Happened to Anonymous:

I came off the piece more sympathetic to Gerard as well.

Consider also what was your baseline: For as long as I've been aware of him (be it via RationalWiki, tumblr or sneerclub), Gerard has always been a cartoonishly villainous figure, anything you learn of them that is reasonable and, to a degree, relatable can only make you like them more.

Linch:

Your throwaway line about Reason.com made me question a longstanding belief I had about the US libertarian party -- that it was rather racist in origin (even for the time) in the ~1970s to ~1990s, for reasons entirely unrelated (some might say antithetical) to libertarianism as a principle. I think I found the evidence at the time fairly compelling, but now that I come to think about it, most of my searches originated from Wikipedia...

So shelving away this belief as something I might need to spend 0.5-3h revisiting at some point.

Ton of the Kirk:

Libertarianism, like so many other political philosophies, has a large number of what you might call subdivisions. Reason is an interesting one that’s worth checking out. The Ron Paul newsletters you mention in another comment are a good example of an uglier use of the idea of freedom that can arise, and represent another one of those subdivisions, like the Birchers in conservatism.

Max More:

I've never been a fan of the Libertarian Party despite being libertarian for over 4 decades, but libertarians are the most non-racist people around. They are *individualists* who look to personal achievement and personal responsibility. There is no place for racism in libertarianism. Of course people are inconsistent, so that does not mean that no libertarian harbors genuine racism, but in my experience it is far rarer than in other political views.

Linch:

I'm thinking of stuff like the Ron Paul newsletters (linking Wikipedia sorry): https://en.wikipedia.org/wiki/Ron_Paul_newsletters

Razib Khan:

wow.

Darij Grinberg:

I ran across RatWiki a few years ago, googling some unrelated topic, and found some genuinely nice content (I think it was their article on Ramsey theory, which I knew as a mathematical field but had never thought of using metaphorically to refer to a rhetorical device / mental shortcut). But subsequent exploration revealed a shitshow. It felt like a blog taken over by spam comments, except that the authors themselves had become the spammers.

Isaac King:

Hmm, I think that's not a good analogy. Ramsey's theorem is just one example of the general principle of "the more data you have, the more likely that data is to contain a particular subset of data". That's a general consequence of probability and information theory, fundamental to the fields; I don't think it inherently has anything to do with Ramsey theory.

Darij Grinberg:

This kind of behavior is all over maths, often for trivial reasons as you say, but Ramsey theory contains some of the most glaring examples (with monochromatic subgraphs being not just very likely but guaranteed), so it lends itself as a catchy metaphor. In real life it is often a form of the base rate fallacy, but it's a somewhat specific form.
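To make the "guaranteed" part concrete: the smallest nontrivial Ramsey number is R(3,3) = 6, and it is small enough to verify exhaustively. A minimal Python sketch (an editorial illustration, not code from the thread; the helper `has_mono_triangle` is our own naming) checking that every 2-coloring of the edges of K6 contains a monochromatic triangle, while K5 admits a coloring that avoids one:

```python
from itertools import combinations, product

# R(3,3) = 6: every red/blue edge-coloring of K6 contains a monochromatic
# triangle, but K5 admits a coloring that avoids one.
def has_mono_triangle(n, coloring):
    edges = list(combinations(range(n), 2))      # edges (i, j) with i < j
    color = dict(zip(edges, coloring))
    return any(
        color[(a, b)] == color[(a, c)] == color[(b, c)]
        for a, b, c in combinations(range(n), 3)  # all triangles
    )

# Exhaustive check: all 2^15 colorings of K6's 15 edges are "guaranteed".
n6_edges = len(list(combinations(range(6), 2)))  # 15
assert all(has_mono_triangle(6, c) for c in product((0, 1), repeat=n6_edges))

# K5 counterexample: color edge {i, j} by whether j - i is 1 or 4 mod 5.
# Each color class is a 5-cycle (pentagon / pentagram), which has no triangle.
k5_coloring = [1 if (j - i) % 5 in (1, 4) else 0
               for i, j in combinations(range(5), 2)]
assert not has_mono_triangle(5, k5_coloring)
```

This is also why the metaphor only partially transfers to real data: the guarantee kicks in at a hard threshold (here n = 6), below which avoidance is entirely possible.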

Isaac King:

The real life behavior is *not* guaranteed though, it's probabilistic, so seems like a bad metaphor.

Eugine Nier:

> Ramsey's theorem is just one example of the general principle of "the more data you have, the more likely that data is to contain a particular subset of data".

This interpretation of Ramsey theory is downright misleading. Yes, it's true given "enough data", where by "enough data" is often meant *more data than would fit in the observable universe*. In fact, even describing it that way massively underplays the amount of data one would need.
