
OPINION: Prebunking: how to build resilience against online misinformation

Monday, 5 September 2022 08:28 GMT

A man opens Facebook on his computer to fact-check coronavirus disease (COVID-19) information, in Abuja, Nigeria, March 19, 2020. REUTERS/Afolabi Sotunde


* Any views expressed in this opinion piece are those of the author and not of Thomson Reuters Foundation.

Rather than debunking misinformation after it has already spread, prebunking, or preemptive debunking, is effective at reducing susceptibility to misinformation at scale.

Jon Roozenbeek is a British Academy postdoctoral fellow at the Cambridge Social Decision-Making Lab; Stephan Lewandowsky is chair of cognitive psychology at the School of Psychological Science, University of Bristol; and Sander van der Linden is director of the Cambridge Social Decision-Making Lab.

The spread of misinformation continues to pose a significant challenge to societies worldwide. Belief in misinformation has been linked to reduced willingness to get vaccinated against various diseases, lower compliance with public health guidance, and political extremism.

Developing interventions that effectively counter misinformation, without imposing an undue burden on freedom of expression, is a task that scientists, policymakers, and the private and non-profit sectors have taken to heart. So far, however, such interventions have proven difficult to scale.

Fact-checking, or debunking, is a popular approach to tackling misinformation, as exemplified by Meta’s third-party fact-checking programme. But although fact-checking is valuable and mostly effective at reducing belief in specific falsehoods, it has limitations: people who believe in or spread misinformation rarely engage with fact-checks; people’s political beliefs appear to interfere with their views on fact-checking; and misinformation can spread faster and further on social networks than fact-checked content.

Other approaches have therefore focused on building resilience at the individual level, equipping people with the skills needed to resist misinformation. Media literacy programmes such as the Stanford Civic Online Reasoning project and Uppsala University’s News Evaluator are promising tools to help children and adults better navigate online spaces. Their main limitation, however, is a lack of immediate scalability: learning skills such as lateral reading and source evaluation takes time and significant effort, and so cannot be rolled out at large scale overnight.

Alongside debunking and media literacy, prebunking has gained prominence as a way to build resilience against online misinformation at scale. While there are many kinds of prebunking, our approach is grounded in inoculation theory, which posits that you can build resistance against unwanted persuasion attempts or manipulative messages by preemptively exposing people to a “weakened dose” of misinformation, much like a medical vaccine builds resistance against future infection.

Psychological inoculations typically include both a forewarning of an impending persuasive attack on one’s beliefs, and a refutation of the attack. Research has shown that inoculation interventions are overall effective at reducing vulnerability to future unwanted persuasion.

To counter misinformation on social media, we designed five short prebunking videos, each of which “inoculates” people against a specific manipulation technique often used in online misinformation: emotionally manipulative language, false dichotomies, incoherence, ad hominem attacks, and scapegoating.

The videos seek to improve people’s ability to detect the tropes and tactics that often underlie misinformation. They avoid pronouncing on what is true and false, or which sources are reliable and unreliable, as doing so often evokes strong disagreement and can provoke negative reactions to the interventions themselves.

Instead, each video features a non-political example of a manipulation technique drawn from pop culture: for example, Anakin Skywalker telling Obi-Wan Kenobi, “If you’re not with me, then you’re my enemy” in Star Wars: Episode III, a textbook false dichotomy.

In our recent study, we found that the prebunking videos were highly effective, not only in a laboratory setting but also on YouTube. We showed two of the videos as ads to about 1 million YouTube users and found that watching the ads significantly boosted users’ ability to detect manipulative content, even though viewers could skip the ad, turn off the volume, or switch to another tab if they wanted.

These results are highly promising. Social media companies don’t need to change their user interface, for example by introducing friction such as asking people whether they’re sure they want to share something before posting; instead, they can run prebunking videos as ads on their platforms, for instance ahead of elections. Google, for example, is now running a prebunking campaign in Poland, the Czech Republic and Slovakia to counter disinformation about refugees from Ukraine.

Governments and companies could also easily buy ad time on social media or television to prebunk misinformation. And while we don’t see prebunking as a replacement for other countermeasures, it can reduce susceptibility to misinformation at scale.
