Building a Global Debunker for Fake News

By Walter Quattrociocchi - 19 May 2017
Walter Quattrociocchi explores the growing fightback against fake news.

The spread of false information has always been with us, but in the digital age it is something we face every day. At the core of the problem of misinformation is confirmation bias – we tend to seize on information that confirms our view of the world and ignore anything that doesn't – and, with it, polarization [1,2].

In an environment where middlemen such as the mainstream media are frequently removed, the public deals with a large amount of misleading information that crowds out reliable sources. Recent studies in computational social science suggest that users online tend to select information through confirmation bias and to join virtual echo chambers, which reinforce and polarize their beliefs.

At the extreme end of the spectrum, conspiracy theorists tend to explain significant social or political events as plots conceived by powerful individuals or organizations. These plots share an important characteristic with so-called urban legends: the objects of the narratives inevitably threaten the established social order and focus on what communities and social groups deeply fear. Fake news is also a major problem regularly raised by policymakers – for example by the President of the Italian Chamber of Deputies, Laura Boldrini, in a recent open consultation with stakeholders from all sectors.

Discussion within groups of like-minded people seems to negatively influence users' emotions and to reinforce group polarization. Experimental evidence on millions of users (in relation to sensitive topics such as alternative medicine) suggests that any information confirming a narrative gets accepted even when it contains deliberately false claims, whereas information dissenting from that narrative is either ignored or may even increase group polarization.

In such an environment, social media is obviously central and so is its ambiguous position: on the one hand it has the power to inform, engage and mobilize people as a sort of freedom utility; on the other, it has the power – and for some, unfortunately, the mission – to misinform, manipulate or control.

Fact-checking is a useful tool but, unfortunately, it remains confined to specific echo chambers and might even reinforce polarization and distrust.

So thinking that fact-checking is a solution is a kind of confirmation bias itself. Indeed, debunking efforts and algorithm-driven solutions based on the reputation of the source have proven ineffective so far – even The Washington Post closed its weekly debunking column. To make things more complicated, users on social media aim to maximize the number of likes their posts receive, so information, concepts and debate often get flattened and oversimplified.

So can anything be done? Hopefully, yes. First, and as simple as it might sound, more collaboration between science and journalism is crucial. On one hand, scientists should communicate better with society (take concepts like uncertainty and complexity, for example). On the other, journalists need better training to report on complex phenomena such as misinformation and its consequences, economic issues, technology and health.

Second, and related, computational social science can now study, quantitatively characterize and model the processes by which news spreads [3] and is consumed, providing an early-detection system for trends in public opinion. More concretely, take an anti-vaccine echo chamber. We can now identify the topics that most attract its users, the value users attach to them (in other words, why not vaccinating is supposedly good) and the beliefs underpinning their narrative. Based on that, it is easier to detect potential informational cascades and potentially neutralize them, as the sketch below illustrates.
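As a minimal illustration – not the actual pipeline used in these studies – here is a sketch of how one might surface the dominant topics inside such a community from a collection of its posts. The posts and the stopword list are hypothetical placeholders; a real analysis would run over millions of posts.

```python
# Sketch: approximating an echo chamber's main topics by counting content words.
# The posts below are hypothetical placeholders; a real analysis would process
# millions of posts collected from the community.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "are", "of", "to", "and", "in", "that",
             "they", "any", "than", "from", "more"}

def dominant_topics(posts, top_n=5):
    """Count non-stopword tokens across posts as a crude proxy for topics."""
    words = []
    for post in posts:
        tokens = re.findall(r"[a-z']+", post.lower())
        words.extend(t for t in tokens if t not in STOPWORDS and len(t) > 2)
    return Counter(words).most_common(top_n)

posts = [
    "Vaccines cause more harm than the diseases they prevent",
    "Natural immunity protects children better than any vaccine",
    "Big pharma hides the real vaccine data from the public",
]
print(dominant_topics(posts))  # e.g. [('vaccine', 2), ('vaccines', 1), ...]
```

In practice one would use proper topic modeling and entity extraction, but even crude counts reveal what a community keeps returning to.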

Currently, we have software to analyse the information flow on social media (Facebook, Twitter and YouTube) and a set of quantitative indicators that allow us to sense social dynamics: for example, users' polarization in an echo chamber, topics of interest, emerging topics and sentiment.
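To make one of these indicators concrete: in [2], a user's polarization can be quantified from the share of their likes that go to one of two competing narratives. Below is a minimal sketch of that idea; the user data is hypothetical, and the 0.95 cutoff for calling a user "polarized" is an assumption for illustration.

```python
# Sketch of a user-polarization indicator: sigma is the fraction of a user's
# likes that go to narrative A (say, conspiracy-like content), and
# rho = 2*sigma - 1 maps it onto [-1, +1]. All numbers here are hypothetical.

def polarization(likes_on_a: int, likes_on_b: int) -> float:
    """Return rho in [-1, 1]: -1 = only narrative B, +1 = only narrative A."""
    total = likes_on_a + likes_on_b
    if total == 0:
        return 0.0  # no likes, no signal for this user
    sigma = likes_on_a / total
    return 2 * sigma - 1

# (likes on conspiracy-like content, likes on science content) per user
users = {"u1": (49, 1), "u2": (1, 49), "u3": (10, 10)}
for name, (a, b) in users.items():
    rho = polarization(a, b)
    label = "polarized" if abs(rho) > 0.95 else "mixed"
    print(f"{name}: rho = {rho:+.2f} ({label})")
```

Aggregating rho over the members of a community gives a compact picture of how far it has drifted toward a single narrative.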

With these tools at our disposal, one could set up a sort of global debunker that monitors in real time how news spreads – allowing us to identify echo chambers, users' polarization on specific narratives, the sentiment of users and the evolution of narratives within specific echo chambers. From there, it would suggest effective debunking strategies, intervening at the level of the individual user with new ways of framing the narrative and tactics for breaking into echo chambers.
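What might the monitoring core of such a system look like? The skeleton below is purely schematic: the per-narrative indicators are hypothetical inputs (in reality they would come from the social-media analysis described above), and the alert threshold is an arbitrary assumption.

```python
# Schematic skeleton for a "global debunker" monitoring loop. The snapshot of
# per-narrative indicators is hypothetical; a real system would compute these
# continuously from Facebook, Twitter and YouTube data.
from dataclasses import dataclass

@dataclass
class Alert:
    narrative: str
    polarization: float  # mean user polarization in [-1, 1]
    sentiment: float     # mean sentiment in [-1, 1]

def assess(narrative, rho, sentiment, rho_threshold=0.8):
    """Flag a narrative whose echo chamber looks strongly polarized."""
    if abs(rho) > rho_threshold:
        return Alert(narrative, rho, sentiment)
    return None

snapshot = {
    "anti-vax": (0.93, -0.4),
    "chemtrails": (0.88, -0.1),
    "local news": (0.12, 0.2),
}
for name, (rho, s) in snapshot.items():
    alert = assess(name, rho, s)
    if alert:
        print(f"ALERT: '{alert.narrative}' rho={alert.polarization:+.2f} "
              f"sentiment={alert.sentiment:+.2f} -> hand off to debunkers")
```

Each alert would then be handed to human debunkers, together with the framing information needed to reach that particular audience.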

This is vital because people will always naturally select information according to their personal beliefs and emotional states (confirmation bias): understanding how news spreads and how it is consumed would enable us to design effective communication strategies and tools that account for the cognitive needs of users. Supported by quantitative research, the debunker would offer a set of tools for studying and measuring information consumption and the impact of different communication strategies for combating the spread of misinformation online.

This approach could be extended to multiple languages, to different narratives and to specific issues.

Some may wonder about the privacy issues that experiments of this kind would raise. At the aggregate level they would not arise, since aggregate analyses would suffice for storytellers (e.g. debunkers) to meet the needs of – or intervene on – the audiences they are supposed to target. At a second level, it would certainly be interesting to see the extent to which more tailored live experiments would in fact touch on privacy. For instance, imagine a tool in the form of a virtual friend (a sort of talking cricket) warning you, with a message here and there, about things you read or follow that are presumably fake. Would our inner Pinocchio throw a hammer at it?

Walter Quattrociocchi earned his Master's degree in Computer Science (summa cum laude) at the University of Parma and his Ph.D. in Logic and Computer Science at the University of Siena with the thesis "Computational Aspects of Social Dynamics", under the supervision of Prof. Nicola Santoro (Carleton University, Canada). From 2007 to 2010 he worked at the Institute of Cognitive Sciences and Technologies (ISTC) of the Italian National Research Council (CNR). In 2012 he worked at Northeastern University as a Postdoctoral Research Associate in the MoBS Lab of Alessandro Vespignani. He has taught and collaborated on graduate and undergraduate courses in algorithms, programming, distributed computing and social networks. This post first appeared on the Agenda blog.

Photo credit: outtacontext via Foter.com / CC BY-NC-ND

References

1) Quattrociocchi, W., Scala, A., & Sunstein, C. R. (2016). Echo chambers on Facebook. SSRN working paper.
2) Del Vicario, M., Bessi, A., Zollo, F., Petroni, F., Scala, A., Caldarelli, G., ... & Quattrociocchi, W. (2016). The spreading of misinformation online. Proceedings of the National Academy of Sciences, 113(3), 554-559.
3) Schmidt, A. L., Zollo, F., Del Vicario, M., Bessi, A., Scala, A., Caldarelli, G., ... & Quattrociocchi, W. (2017). Anatomy of news consumption on Facebook. Proceedings of the National Academy of Sciences, 201617052.
