Humans and Algorithms Detecting Fake News: Effects of Individual and Contextual Confidence on Trust in Algorithmic Advice

Chris Snijders (Corresponding author), Rianne Conijn, Evie de Fouw, Kilian van Berlo

Research output: Contribution to journal › Article › Academic › peer-review

14 Citations (Scopus)
174 Downloads (Pure)

Abstract

Algorithms have become part of our daily lives and have taken over many decision-making processes. It has often been argued and shown that algorithmic judgment can be as accurate as, or even more accurate than, human judgment. Nevertheless, humans are reluctant to follow algorithmic advice, especially when they do not trust the algorithm to be better than they are themselves: self-confidence has been identified as one factor that influences the willingness to follow algorithmic advice. However, it is unknown whether this is an individual or a contextual characteristic. The current study analyses whether individual or contextual factors determine whether humans are willing to request algorithmic advice, whether they follow it, and whether their performance improves given algorithmic advice. We consider the use of algorithmic advice in fake news detection. Using data from 110 participants and 1610 news stories, of which almost half were fake, we find that humans without algorithmic advice correctly assess the news stories 64% of the time. This increases only marginally, to 66%, after they have received feedback from an algorithm that is itself 67% correct. The willingness to accept advice indeed decreases with participants' self-confidence in the initial assessment, but this effect is contextual rather than individual. That is, participants who are on average more confident accept advice just as often as those who are on average less confident. What does hold, however, is that a participant is less likely to accept algorithmic advice for the news stories about which that participant is more confident. We outline the implications of these findings for the design of experimental tests of algorithmic advice and give general guidelines for human-algorithm interaction that follow from our results.
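The accuracy comparison reported in the abstract (64% before advice, 66% after, against an algorithm that is 67% correct) can be illustrated with a minimal sketch of how per-story accuracy before and after advice might be tallied. All data below are hypothetical; only the structure mirrors the study design, in which participants first judge a story and may revise their judgment after seeing algorithmic advice.

```python
# Hedged sketch with made-up data: each record is
# (initial_judgment, revised_judgment, ground_truth),
# where each boolean means "judged as / actually is fake news".
records = [
    (True,  True,  True),   # correct before and after advice
    (False, True,  True),   # corrected after advice
    (True,  False, False),  # corrected after advice
    (False, False, True),   # wrong both times
    (True,  True,  False),  # wrong both times
]

def accuracy(judgments, truths):
    """Fraction of judgments that match the ground truth."""
    correct = sum(j == t for j, t in zip(judgments, truths))
    return correct / len(truths)

initial = [r[0] for r in records]
revised = [r[1] for r in records]
truth = [r[2] for r in records]

acc_before = accuracy(initial, truth)  # 1 of 5 correct -> 0.2
acc_after = accuracy(revised, truth)   # 3 of 5 correct -> 0.6
```

In the study itself, the same before/after comparison over 1610 story assessments yields the 64% versus 66% figures; the gap between the two is the (small) performance gain attributable to the algorithmic advice.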

Original language: English
Pages (from-to): 1483-1494
Number of pages: 12
Journal: International Journal of Human-Computer Interaction
Volume: 39
Issue number: 7
Early online date: 24 Jul 2022
DOIs
Publication status: Published - 1 Apr 2023

