How Intent and Incentives Shaped the Misinformation Crisis, and How Consumers Can Take Action

When considering the challenge of identifying and combating misinformation, it is helpful to understand a little bit about incentives and intent.

Why does factually incorrect information exist at all? As it turns out, the incentives for creating and amplifying manipulated, exaggerated, salacious, and otherwise patently false information are quite powerful. Misinformation, after all, has toppled leaders, swayed elections, cost lives, changed cultures, and moved mountains of money. It has changed the course of history countless times. With that kind of power, understanding intent is fairly simple: well-crafted and well-targeted misinformation is very, very effective.
Misinformation is the opposing player in a zero-sum game: wherever someone benefits from the objective truth, someone else likely stands to benefit from twisting it. Considering misinformation in the context of incentives and intent goes a long way toward helping the public not only identify it, but also, if they are so motivated, avoid amplifying it.
In the early days of the printed word, all the bad intentions in the world couldn’t overcome a key inhibitor to publishing and distributing misinformation: cost. Between paper, typesetting, ink, and slow distribution, printing and distributing misinformation was actually more expensive than printing and distributing the truth. This created a disincentive to widely publish and distribute misinformation. Think about it: even in television and radio, the publisher needed the licenses, technology, and equipment to broadcast.
Fast forward to just over 25 years ago, when electronic access to digital information became democratized and nearly ubiquitous. A key element in the intent and incentives equation changed. Now, anyone could publish whatever information or misinformation they wanted on the Internet at effectively no cost – paper, ink, and delivery trucks were out of the equation. But for the first decade or so of the Internet’s consumer existence, publishing was where it ended. Mechanisms for amplifying content were fragmented and unorganized.
It wasn’t until the Internet evolved from a network of machines into communities of humans that the equation changed, in dramatic fashion, again.
When social media giant Facebook launched in 2004, it changed the paradigm of information exchange. To Facebook, the Internet was not pages of content, but communities built and connected by common interests. In its early stages, Facebook was a platform used to connect college students within their respective universities. We know how the story ends: it grew like wildfire through the college demographic, and then among the broader population. Within just a few years, hundreds of millions of people were using Facebook, all of them organized under a corporate mission of building a more open and connected world.
Social networks such as Facebook, Twitter, and others have largely achieved their goals of connecting large communities. But by their very nature of connecting what was once fragmented and disorganized, they have also constructed highly efficient distribution and amplification networks.
In recent years, platforms such as Facebook, Twitter, Instagram, Snapchat, and TikTok have become breeding grounds for misinformation campaigns spanning conspiracy theories, political misinformation, sensationalism, and, most recently, medical misinformation during the Covid-19 global pandemic.
Ironically, the incentives of those who wish to spread misinformation and of the social networks providing the amplification engine are actually somewhat aligned: sensationalized and salacious information garners greater attention. Greater attention increases the likelihood of amplification. Greater amplification results in greater engagement on the social network. The cycle repeats in perpetuity.
Russian meddling in the 2016 election and the Cambridge Analytica scandal created a spike in awareness among consumers and a new sense of urgency among social networks that they themselves bore the burden of removing or deprioritizing misinformation. At the same time, dozens, if not hundreds, of tools began to spring up to help consumers detect, weed out, and even crowdsource the identification of purposefully inaccurate content.
With the prospect of increased scrutiny among lawmakers and the potential for increased regulation or even divestiture, social networks have dramatically ramped up their misinformation tracking efforts. In fact, Facebook now works with over 43 fact-checking organizations around the world, covering 24 languages. Twitter’s approach is to open-source fact-checking so that, as stated by the company’s CEO Jack Dorsey, it can be “verified by anyone.” To be sure, every social network in existence today has placed a high strategic priority on the topic.
Changing the root structure, or even the business models, of social networks is unlikely. The platforms are too big and too powerful. And candidly, they provide too much value within their original stated goal of seamlessly connecting large communities. Regulators continue to examine the role of policy, within their own countries and globally, and are realizing how complicated the problem has become.
Ultimately, the power and responsibility of reducing the spread of misinformation remain in the hands of consumers themselves. The intent must be our own in order to effectively shift the incentives, and make it more difficult for bad actors to create and amplify misinformation.
This requires a new form of media and information literacy education, starting in early childhood and continuing throughout adulthood. It must become second nature to consumers that all information shared on social networks (and across media sources) carries intent fueled by incentives.
At the most basic level, asking, “why is this information here, and who stands to gain from it?” is an adequate place to start, and is already a far greater effort than most consumers are making today. Going to the next level, Erin Calabrese, a producer for ABC News, compiled this list of questions:
- Is this the original account, article, or piece of content?
- Who shared this or created it?
- When was this created?
- What account is sharing this? When was the account created? Do they share things from all over the world at all times during the day and night? Could this be a bot?
- Why was this shared?
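As a thought experiment, the bot-related questions in the checklist above (account age, posting volume, round-the-clock activity) could be mechanized into a rough scoring heuristic. The sketch below is purely illustrative: the `Account` fields, thresholds, and scoring are assumptions for demonstration, not a real platform API or a vetted detection method.

```python
from dataclasses import dataclass

@dataclass
class Account:
    """Hypothetical summary of an account's publicly visible signals."""
    age_days: int          # how long ago the account was created
    posts_per_day: float   # average posting rate
    active_hours: int      # distinct hours of the day with activity (0-24)

def bot_risk_score(account: Account) -> int:
    """Return a rough 0-3 score; each checklist heuristic that fires adds a point."""
    score = 0
    if account.age_days < 30:        # brand-new account
        score += 1
    if account.posts_per_day > 50:   # posting far more than a typical human
        score += 1
    if account.active_hours >= 20:   # active at nearly every hour, no sleep pattern
        score += 1
    return score

# A five-day-old account posting 120 times a day, around the clock:
suspicious = Account(age_days=5, posts_per_day=120, active_hours=24)
print(bot_risk_score(suspicious))  # → 3
```

A score like this proves nothing on its own; it simply formalizes the habit the checklist encourages, which is to notice when several suspicious signals coincide.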
It also helps to put the information you consume in context. A friend posting a photo of their kids on a summer trip is much less likely to raise a red flag than content related to the election, the economy, racial injustice, or any of today's other hot-button topics. As NPR's Miles Parks put it, “Misinformation is most effective on hot-button issues and immediate news. Ask yourself: Is this a complicated subject, something that’s hitting an emotional trigger? Or is it a breaking news story where the facts aren’t yet able to be assembled? If the answer is yes, then you need to be ultra-skeptical.”
New tools and software will continue to make strides in helping consumers identify and discard misinformation. And new publications are coming online with a primary focus on highlighting variances in reporting and journalism, to better train readers to spot misinformation.
Consumers must realize that the ultimate control resides with them. If they demand the truth, the incentives rest with social networks and media organizations to build the mechanisms to present it. While we may seem a long way off from that reality, it is our collective awareness and efforts that will ultimately shape these platforms into spaces we can trust to inform us.