
Is fake news ripe for disruption?


By Alex Fink, CEO @ otherweb.com

The news business is broken. You might think it’s simply that good outlets are good and bad outlets are bad, but unfortunately that understates the problem. By the standards of the 1980s, all mainstream news outlets are worse than they once were.

Here is the problem in a nutshell:

  1. The internet democratized content publishing. Anyone could now publish anything and access anything. Some of the new outlets were better than what came before them; most were substantially worse.
  2. The internet also made everything free. Subscription revenue became harder to survive on, and copyright laws became impossible to enforce for anyone except full-time trolls. So advertising became the only way to monetize content.
  3. Advertising pays per click or per view. There is no pay-per-quality or pay-per-truth; for most publishers, those incentives simply don’t exist in the marketplace.

The predictable outcome of these innovations is that for the past 20 years, all content publishers have faced a single selective pressure – to produce the loudest, most exaggerated clickbait they could possibly get away with. 

Of course some outlets resist this pressure more than others – there is always a bell curve – but the entire curve has been shifting toward lower quality, and it shows no signs of slowing down. It’s a simple law of evolutionary theory: when a population faces a single selective pressure, it drifts towards the traits that relieve that pressure.

If fruit grows higher, necks get longer.

In our specific case, the predicament is even more dire because the problem keeps being misdiagnosed as misinformation, disinformation, or some other scapegoat du jour. We keep imagining that some malicious actor caused all this, and that if only we could counter their malicious actions, all would be well again.

In reality, however, what we have is a universal incentive that affects everyone in the marketplace. And we cannot possibly fix a problem of universal incentives by seeking malicious actors we can censor – the proposed solutions don’t address the root cause of the problem.

Consider these two solutions that are in vogue these days:

  1. Fact-checking. 

Incentives to maximize clicks and views apply to every piece of content automatically; they’re built into the business model. Fact-checking is applied to a small subset of content, manually, after the fact. 

There is no way that manually checking a small subset of content can fix the universal incentive to produce attention-grabbing junk.

  2. Organized (and often partisan) efforts to fight misinformation or disinformation.

Junk is not misinformation. It’s also not disinformation. It’s just junk. 

Any binary censorship rule that removes particular bad items from the entire population of junk will result in a slightly smaller population that still consists entirely of junk.

So, let’s consider what the parameters for a solution might be. We need something that: a) can be applied to all content in real time, and b) creates an incentive for the publisher to produce something other than junk.

Let’s consider the closest parallel: how do we go about consuming less sugar in our diets? That’s easy – we require companies to label their food products in a consistent way, and we make sure to educate people that they should always look at the “added sugars” section of the product’s nutrition label. 

Wouldn’t it be useful if articles had nutrition labels too?

Twenty years ago the answer would have been no. The internet has no FDA. Publishers wouldn’t want to place nutrition labels on their own content. And who could devote the time and resources required to generate such labels at their own expense?

But technology is evolving. Natural language processing is suddenly able to detect everything from racism to propaganda to clickbait to hyperbole.

We can generate a nutrition label for every article, every podcast, and every video. 
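
To make that concrete, here is a minimal sketch of how such a label could be generated with off-the-shelf NLP. It uses a zero-shot classifier from the Hugging Face transformers library; the model choice and the label categories are illustrative placeholders, not a description of any production pipeline.

```python
# Minimal sketch: score an article against a few "nutrition label" categories
# using zero-shot classification. The categories and model are placeholders.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

LABELS = ["clickbait", "propaganda", "hyperbole", "informative"]

def nutrition_label(article_text: str) -> dict:
    """Return a 0-1 score per category, like grams of sugar per serving."""
    result = classifier(article_text, candidate_labels=LABELS, multi_label=True)
    return dict(zip(result["labels"], result["scores"]))

print(nutrition_label("You won't BELIEVE what this senator said next!"))
# Prints a label-to-score mapping; a headline like this should score high
# on "clickbait" and "hyperbole" and low on "informative".
```

A real system would need purpose-built, calibrated models for each dimension, but the principle is the same: every piece of content gets a machine-readable label, automatically and in real time.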

And we can consume all our content through reader apps and feeds that include this nutrition label with every piece of content. Better yet, we can build reader apps that allow users to customize their feeds to specify which kinds of articles they’d like to be included.
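
The reader’s side can be just as simple in principle: a feed filter that drops anything whose label scores exceed the thresholds the user has set. The sketch below assumes every feed item arrives with a label dictionary attached; the field names and threshold values are hypothetical.

```python
# Minimal sketch: filter a feed against user-chosen thresholds on label scores.
# Assumes each article arrives with a label-to-score dictionary attached.
from dataclasses import dataclass, field

@dataclass
class Article:
    title: str
    url: str
    labels: dict  # e.g. {"clickbait": 0.95, "informative": 0.05}

@dataclass
class FeedPreferences:
    # Maximum tolerated score per label; anything above the limit is hidden.
    max_scores: dict = field(default_factory=lambda: {"clickbait": 0.3, "propaganda": 0.3})

def filter_feed(articles: list, prefs: FeedPreferences) -> list:
    """Keep only the articles whose labels stay under the user's thresholds."""
    def acceptable(article: Article) -> bool:
        return all(article.labels.get(label, 0.0) <= limit
                   for label, limit in prefs.max_scores.items())
    return [a for a in articles if acceptable(a)]

feed = [
    Article("You won't BELIEVE this", "https://example.com/1", {"clickbait": 0.95}),
    Article("City council passes budget", "https://example.com/2", {"clickbait": 0.05}),
]
print([a.title for a in filter_feed(feed, FeedPreferences())])
# ['City council passes budget']
```

The design choice that matters here is that the thresholds are set explicitly by the user, ahead of time.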

The proximal result will be happier users. But the secondary result is even more crucial –

There will be a universal incentive to produce the kind of content that readers actually want to see. 

By collectively setting our filters to positions that reflect our long-term preferences (and not the things we happen to click on in the heat of the moment), we can gradually turn our preference for content that is not junk into something that has a dollar value.

And if we do, we can disrupt fake news.
