Facebook rates users based on ‘fake news’ judgement, company confirms
According to a Washington Post report, Facebook has begun ranking users' trustworthiness based on which news stories they report as fake. The company has confirmed the report.
The news of the trust rankings has emerged amid a public, drawn-out scandal surrounding the proliferation of fake news, abuse and extremist content on social media platforms such as Facebook and Twitter, often with the backing of the Russian government. This content is widely believed to undermine democratic processes, such as by putting women off running for office, or by influencing voting behaviour in major elections.
In response to the scandal, governments have threatened regulation if internet companies do not take sufficient measures to curb deceitful and hateful content on their platforms. Facebook is under heavy pressure to handle these problems, and has hired thousands of additional moderators to assess flagged content, among other measures.
Facebook’s new system has been under development over the past year, and assigns users a score from 0 to 1. The score is calculated based on their reporting of lies masquerading as news content on the platform, and is one of several metrics Facebook will take into account when assessing the risk and trust associated with its billions of users.
Facebook has allowed users to flag misinformation and other types of inappropriate content since well before the 2016 US presidential election, which forced the issue of ‘fake news’ onto the political agenda. Reported content tends to be reviewed by a human moderator, who decides whether the content should be allowed to remain on the platform.
It is “not uncommon for people to tell us something is false simply because they disagree with the premise of a story or they’re intentionally trying to target a particular publisher”, Tessa Lyons, manager in charge of fighting misinformation at Facebook, told the Washington Post.
Lyons told the Post that she noticed that many users were reporting posts not because they were false or otherwise inappropriate, but simply because they disliked their content. This concern led Lyons and her colleagues to develop an assessment for users who frequently report content.
“If someone previously gave us feedback that an article was false and the article was confirmed false by a fact-checker, then we might weight the person’s future false-news feedback more than someone who indiscriminately provides false-news feedback on lots of articles, including ones that end up being rated as true,” Lyons said.
Users who flag content that is likely to be deceptive – as indicated by a known untrustworthy source, the judgement of independent fact checkers or reports from other users – will earn a higher reputation rating. However, the algorithms that determine how users’ scores are calculated will not be made public, in order to prevent users from gaming the system.
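Facebook has not published its algorithm, but the weighting idea Lyons describes – counting flags from historically accurate reporters more heavily than flags from indiscriminate ones – can be sketched roughly as follows. All function names, the neutral default and the scoring formula here are illustrative assumptions, not Facebook's actual system:

```python
# Hypothetical sketch of the reporter-weighting idea described above.
# The names and formula are assumptions for illustration only.

def reporter_score(confirmed_false, total_reports):
    """Fraction of a user's flagged articles that fact checkers later
    confirmed as false. Returns a score between 0 and 1."""
    if total_reports == 0:
        return 0.5  # no reporting history: neutral weight (assumption)
    return confirmed_false / total_reports

def weighted_flags(reporters):
    """Sum each user's flag on an article, weighted by that user's score,
    so accurate reporters count for more than indiscriminate ones."""
    return sum(reporter_score(c, t) for (c, t) in reporters)

# Two accurate reporters (4 of 5 flags confirmed) outweigh five
# indiscriminate reporters (0 of 20 flags confirmed).
accurate = [(4, 5), (4, 5)]
indiscriminate = [(0, 20)] * 5
print(weighted_flags(accurate) > weighted_flags(indiscriminate))  # True
```

Under a scheme like this, mass-flagging a disliked publisher achieves little, because a reporter whose flags are rarely confirmed contributes almost no weight.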
This week, Facebook announced that it has removed more than 650 pages, groups and user accounts linked to deliberately misleading actors in Russia and Iran, in a move to prevent social media meddling in the upcoming US midterm elections.