Why is Online Disinformation a Thing? Part 1: The Doubt Factor

The Interwebs are ablaze these days with news that Space Karen might be buying Twitter, and that, of course, re-ignited another raging flamewar: what can be done to save the Twitter ship?

Twitter is probably the most polarized of the mainstream social networks. A combination of a high tolerance for bots and a habit of rewarding bombastic, categorical statements that fit neatly into that dreaded character limit made it a toxic dump of lightweight but strongly-held ideas. Its failure to combat disinformation, while maybe not as high-profile as Facebook's, is coupled with a strange legacy of empty promises and technical debt, and is one of the main exhibits in the "here's why anonymous free speech online is a failure" argument. (Oh, yeah, this is the 2022 Internet, so I also have to make a note -- when I say "free speech" I mean "speech that echoes free thought", not "racism, but this is America".)

Various alternatives have been proposed so far, and Twitter has the dubious benefit of having access to prior art. I say "dubious" because none of these things -- real name policies, identity verification schemes, verified content hubs, claim checks -- have worked. I doubt they'll work better for Twitter but my crystal ball is just as bad as any other so hey, by all means, have it, future, prove me wrong.

But I don't think disinformation is inherent to allowing anonymous free speech on the Internet. I think disinformation is inherent to the Internet, and that reining in speech, or Internet anonymity, will only help it, not hinder it, as conclusively shown, I believe, by Facebook. Even in the EU, land of GDPR and actual privacy laws, an amount of money so modest that it can come out of a director's discretionary fund in any organization with an agenda to push will get you a very good mix of a few hundred bots, fake accounts, and paid shills to credibly push any stupid narrative. And, of course, the more targeting information it has, the better it works.

This is not something that the Internet industry can solve from within because it's systemic, and it's a plague that they brought upon themselves. In order to make things like selling ads, content mills, influencers and content creators, and political sponsorship work, they had to eliminate the one thing that kept Internet-based disinformation at bay up until the late '00s: the doubt factor.

Groups of stay-at-home moms and retired fat creepy uncles that unexpectedly turn out to be racist congregations aren't a new thing. They didn't pop into existence along with Facebook groups. Prior to Facebook being a thing, closet racists, conspiracy theorists and outright loons hung out in obscure web forums, and in low-key USENET groups before that. But most of them were neatly tucked away, so to speak. If you hung out in bars with the wrong kind of crowd, or went past the tenth Google search results page or so, you'd find out about them. Odds were that unless you already agreed with all the crap they spewed and went out of your way to find other people who agreed with that kind of crap, you only ran into these things by mistake.

And that is when the doubt factor kicked in. "The speech" about the Internet always included this fun gimmick, one way or another: don't believe anything you read there.

The idea that any interesting and controversial claim you read on the Internet is most likely false was, at one point, quite deeply embedded in most netizens' minds (I'm deliberately using that term to give you an idea of how old this is, wink wink). And that idea was mostly devoid of assumptions of malice about the other party: quite simply, there were so many crackpots, smartasses and trolls that many (if not most?) bullshit claims were not the result of foreign propaganda but of bad LSD trips, schizophrenia, thirty-year-old grudges against high school history teachers, or what we would now call shitposting. This was "the doubt factor" -- the idea that, if you see something on the Internet, it's very likely to be false, and that you can only verify it with data from the real world, rather than with more Internet claims.

But publishing news... okay, no, let me take a step back here. Persuading media executives to publish news on platforms owned by others (Facebook, Instagram, Twitter, Google, whatever) rather than on their own, and to make money by selling ads rather than by selling news and analyses, simply cannot work under the doubt factor. Nobody wants to publish content for free, on the promise of maybe getting some better advertising revenue, on a platform where it's likely to be scrutinized and derided even further than before.

And so, subtly, but steadily, large vendors of Internet publishing and advertising services worked -- in tandem with interested parties, from media publishers to politicians -- to remove the doubt factor from the general Internet public. An entire generation of people was introduced to Internet news with the expectation that, if a website called The Lonely Texas Ranger is making some spectacular claims, there's actually a good chance that they might be telling you things that they can't tell you on TV or print in newspapers, not that they are either selling intellectual snake oil or on a steady diet of anti-psychotic pills and/or very strong psychedelics.

Twitter -- like all platforms, in fact -- could far more efficiently "prevent the spread of disinformation" by running internal campaigns to drive home three simple points:

  1. If you read it here, it's likely to be false, especially if it's something you agree with.
  2. The only people you should treat as real -- no matter how real their accounts seem -- are the ones you personally know well in real life.
  3. If a claim is only backed by more claims, rather than hard evidence, then it's most likely false.

Obviously, though, a campaign like this one would torpedo any large social media service. Twitter, Facebook, but also the less politically-involved Instagram and Tik Tok, derive their value from the ability to construct a reality online and fit it neatly into real life. That's what makes Instagram the ultimate reality show, where people can pick and choose which characters they follow through their "everyday interactions" -- the most successful of which are scripted just enough for it not to show, like the most successful of their now obsolete television counterparts. If you burst the "this is the real life" bubble, and let people in on the fact that most of it is make-believe, and that there's no way to distinguish what's real from what's make-believe, their entire value proposition drops to that of a really big phpBB forum. That's a nice thing, but running one has never been profitable.

Does this sound hopeless? Well, I don't think it's hopeless: I, for one, have faith in the next generation of netizens, and their ability to wield shitposting, that elegant weapon for a more civilized age which can pop the "this is the real life" bubble from within. But I will have to rein in my urge to bombard you with optimism until Part 2.
