Do ‘fake news’ warning labels make other stories more believable?
Image credit: Roman Samborskyi | Dreamstime
A study by the Massachusetts Institute of Technology (MIT) has found that disclaimers on some fake news stories make people more readily believe other false stories.
Following the 2016 US presidential election, Facebook began putting warning tags on news stories which its fact-checkers judged to be false.
However, it has been suggested that there’s a catch: tagging some stories as false makes readers more willing to believe other stories and share them with friends, even if those additional, untagged stories also turn out to be false.
The study was based on multiple experiments with news consumers, with its researchers calling this unintended consequence – in which the selective labelling of false news makes other news stories seem more legitimate – the “implied-truth effect” in news consumption.
“Putting a warning on some content is going to make you think, to some extent, that all of the other content without the warning might have been checked and verified,” said David Rand, Erwin H. Schell Professor, MIT Sloan School of Management.
“There’s no way the fact-checkers can keep up with the stream of misinformation, so even if the warnings do really reduce belief in the tagged stories, you still have a problem because of the implied-truth effect,” he added.
Moreover, Rand observed that the implied-truth effect “is actually perfectly rational” on the part of readers, since there is ambiguity about whether untagged stories were verified or just not yet checked. “That makes these warnings potentially problematic,” he explained, “because people will reasonably make this inference”.
The study not only identified this issue with fake-news labelling, but also suggested a solution: placing “verified” tags on stories found to be true eliminates the problem.
As part of the study, the researchers conducted a pair of online experiments with a total of 6,739 US residents, recruited via Amazon’s Mechanical Turk platform. Here, participants were given a variety of true and false news headlines in a Facebook-style format.
The false stories chosen for the experiment were from the website Snopes.com and included headlines such as “Breaking News: Hillary Clinton Filed for Divorce in New York Courts” and “Republican Senator Unveils Plan To Send All Of America’s Teachers Through A Marine Bootcamp.”
During the experiments, participants viewed an equal mix of true and false stories and were asked whether they would consider sharing each story on social media. Some participants were assigned to a control group in which no stories were labelled; others saw a set of stories in which some of the false ones displayed a “False” warning label; a third group saw a set in which warning labels appeared on some false stories and “True” verification labels appeared on some true stories.
The first finding was that stamping warnings on false stories does make people less likely to consider sharing them. With no labels used at all, participants considered sharing 29.8 per cent of the false stories in the sample; that figure dropped to 16.1 per cent for false stories that had a warning label attached.
However, the researchers also saw the implied-truth effect in action, with readers willing to share 36.2 per cent of the remaining false stories that did not have warning labels, up from 29.8 per cent.
“We robustly observe this implied-truth effect, where if false content doesn’t have a warning, people believe it more and say they would be more likely to share it,” Rand noted.
When the warning labels on some false stories were complemented with verification labels on some of the true stories, the team found that participants were less likely to consider sharing false stories across the board. In those circumstances, they considered sharing only 13.7 per cent of the headlines labelled as false and just 26.9 per cent of the unlabelled false stories.
“If, in addition to putting warnings on things fact-checkers find to be false, you also put verification panels on things fact-checkers find to be true, then that solves the problem because there’s no longer any ambiguity,” Rand explained. “If you see a story without a label, you know it simply hasn’t been checked.”
The findings of the study, however, come with one additional twist that Rand has emphasised: participants in the survey did not seem to reject warnings on the basis of ideology. They were still likely to change their perceptions of stories with warning or verification labels, even if discredited news items were “concordant” with their stated political views. “These results are not consistent with the idea that our reasoning powers are hijacked by our partisanship,” he said.
Furthermore, Rand noted that, while continued research on the subject is important, the current study suggests a straightforward way that social media platforms can act to further improve their systems of labelling online news content.
“I think this has clear policy implications when platforms are thinking about attaching warnings,” he said. “They should be very careful to check not just the effect of the warnings on the content with the tag, but also check the effects on all the other content.”
The paper ‘The Implied Truth Effect’ was published in the journal Management Science.