Fake news: the appeal of a good story, true or false

No-one willingly admits to spreading fake news, but we simply can’t help ourselves.

When he wrote a seemingly prophetic article in The Atlantic magazine towards the end of the Second World War, Vannevar Bush, then head of the US Office of Scientific Research and Development, worried about how easy it is to bury knowledge: “[Gregor] Mendel’s concept of the laws of genetics was lost to the world for a generation because his publication did not reach the few who were capable of grasping and extending it; and this sort of catastrophe is undoubtedly being repeated all about us, as truly significant attainments become lost in the mass of the inconsequential.”

Bush proposed the memex, a machine that would bring information to everyone. “Wholly new forms of encyclopedias will appear, ready made with a mesh of associative trails running through them, ready to be dropped into the memex and there amplified.”

The grand vision behind the memex seemed to come true in the early 1990s with the first web browsers. Repositories of information at institutions such as CERN became accessible to anyone in the world with a modem and a telephone line. A decade later, the accepted idea of the web as a one-way flow of knowledge from publishers to consumers changed. Social media, in the form of blogs and sites like Facebook, appeared.

A digital manifesto forecast the change in direction. Rick Levine, Christopher Locke, Doc Searls and David Weinberger, writing in ‘The Cluetrain Manifesto: The End of Business as Usual’, declared: “Hypertext is inherently non-hierarchical and anti-bureaucratic. It does not reinforce loyalty and obedience; it encourages idle speculation and loose talk. It encourages stories.”

And what stories hypertext encourages. Reality simply cannot keep up.

People love stories and when they see a story they love, they love to share it. Where better to share stories with friends, colleagues and people you only vaguely know than through social media?

Stories such as the EU banning bendy bananas are considerably more entertaining – and therefore more widely shared – than the far less enthralling reality of the way in which European legislation distinguishes between two classes of the yellow fruit. To the everyday shopper, the distinction seems pointless – surely the story of a ban makes more sense?

Similarly, we all know that landing on the Moon is hard. How could Nasa achieve it in the 1960s? For some, it makes more sense to assume the landings were staged to win a propaganda battle in the Cold War. The stories seem harmless enough at first, but they enter the collective psyche and become treated as real.

More than 25 years after the web’s sudden burst in usage – made possible by DARPA’s decision to allow universal access to the internet and by the development of the web browser – the world seems to be drowning in artificial stories. Rumours and fake news that warn of dark conspiracies spread through internet echo chambers like legionnaires’ disease through dodgy air-conditioning ducts. The lofty goal of the memex has given way to the internet meme: a pithy slogan in compressed, reversed-out type superimposed on a photograph and spread through the channels of social media.

In 2014, researchers working for Facebook and Stanford University came up with the moniker ‘rumour cascades’ after they started analysing the way in which stories, both fake and true, propagated through the social network. The rumours spread easily through social ties “even when of dubious veracity”, Adrien Friggeri and colleagues concluded.

In practice, users would delete fake rumours more readily once they had been ‘snoped’ – alerted with a link to the myth-busting site Snopes. But such false rumours would spread in bursts from those who did not see the correction or discounted it. For most false rumours, or those mixing fake and true parts, the bulk of reshares on Facebook took place after the first link to Snopes was posted, often in bursts long after the story first appeared.

Can society do something and can technology help? The World Economic Forum set up a group of more than 20 people to look at the way that large internet companies are run and whether the algorithms that underpin their presentation of information can act as a “form of de facto governance”, according to co-chair Michael Posner.

The UK’s Culture, Media and Sport parliamentary select committee launched an inquiry into fake news and ways to combat its effects at the start of this year. Chair Damian Collins said at the launch: “Just as major tech companies have accepted they have a social responsibility to combat piracy online and the illegal sharing of content, they also need to help address the spreading of fake news on social media platforms. Consumers should also be given new tools to help them assess the origin and likely veracity of news stories they read online.”

Fighting fake news and rumour-mongering with technology looks to be easier said than done. In 2013, Mounia Lalmas and Daniele Quercia of Yahoo Labs and Eduardo Graells-Garrido, then a researcher at Pompeu Fabra University in Barcelona, looked at ways to combat polarisation by connecting Twitter users with opposing viewpoints. Doing so “had a negative emotional effect”.

Not only is challenging viewpoints a problem, but people become highly aware of their lack of privacy online if the machine starts to intrude on their sessions. In 2014, with the best of intentions, the charity Samaritans launched an app called Radar. Informed by academic research and backed by experts in the field, the app monitored the Twitter streams of users and their contacts, watching for evidence of mental-health problems that might indicate suicidal thoughts. The app would offer suggestions to the user on how to approach their issues and provide contacts at the charity. Samaritans pulled the app after just nine days amid a backlash over privacy.

The problem that technologists face in trying to help users online is that they may be fighting fundamental processes within society.

Mooers’ Law is not a misprint of the much more famous rule of thumb of electronics. Calvin Northrup Mooers was a US computer scientist who analysed the ways humans find and use information long before the worldwide web deluged the world with disinformation – he died in 1994 just as the web was becoming mainstream.

Mooers found that humans will often deliberately avoid information: “It is now my suggestion that many people may not want information, and that they will avoid using a system precisely because it gives them information... if you have information, you must first read it, which is not always easy. You must then try to understand it... understanding the information may show that your work was wrong, or may show that your work was needless. Not having and not using information can often lead to less trouble and pain than having and using it.”

Mooers’ Law, as summarised in a 1960 paper for the journal American Documentation, claimed: “An information retrieval system will tend not to be used whenever it is more painful and troublesome for a customer to have information than for him not to have it.”

A barrage of research at the intersection between psychology and economics has borne out Mooers’ Law. Economists have been fascinated by the idea of information avoidance for decades because of the effect it has on investors and business managers. Joshua Lederberg of Rockefeller University came up with a version that presents it in economic terms: “People will resist information unless the price of not knowing it greatly exceeds the price of learning it.”
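Put as a rough inequality (the notation is an illustration of ours, not anything Mooers or Lederberg wrote down), the rule might read:

\[
\text{seek the information} \iff C_{\text{not knowing}} \gg C_{\text{knowing}}
\]

where \(C_{\text{knowing}}\) bundles the effort of finding, reading, understanding and acting on the information, and \(C_{\text{not knowing}}\) is the expected cost of staying ignorant.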

Russell Golman and colleagues from Carnegie Mellon University (CMU) put together a review paper, published earlier this year in the Journal of Economic Literature, that culled work from the past half-century into why humans often go out of their way to avoid knowing things. The apparent consequences run from groupthink to the politics of climate change – and they are widespread.

Take an example that involves Facebook. Working with Cass Sunstein of Harvard Law School, Walter Quattrociocchi and Antonio Scala of the IMT School for Advanced Studies in Lucca, Italy, last year looked for the existence of echo chambers on Facebook – the kinds of self-reinforcing group that Mooers’ Law predicts. The team put together a massive data set to look at the way in which conspiracy theories and views on science develop on the social network. They found users seek out information that strengthens their preferred narrative and reject any that undermines it, even to the point of absorbing fake contributions from trolls. “Confirmation bias operates to create a kind of cognitive inoculation,” they argued.

An earlier paper authored by Sunstein with Edward Glaeser of Harvard University found that even balanced news sources can feed into highly unbalanced views, in a process they called ‘asymmetric Bayesianism’ – referring to the way that Bayesian theory uses interactions between data and a prior position to calculate a probability. “The same information can have diametrically opposite effects if those who receive it have opposing antecedent convictions,” Glaeser and Sunstein argued. “Recipients whose beliefs are buttressed by the message, or a relevant part, rationally believe that it is true, while recipients whose beliefs are at odds with that message, or a relevant part, rationally believe that the message is false.”
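As a rough illustration of that mechanism (a minimal sketch of our own, not Glaeser and Sunstein’s model), suppose each recipient judges the credibility of a report partly through what they already believe. The same message then pushes two opposing priors further apart:

```python
def posterior(prior, credibility):
    """Bayesian update on the claim 'X is true' after hearing a report that X is true.

    `credibility` is the probability the recipient assigns to the report being
    accurate rather than noise; sceptics of the claim also tend to distrust the
    messenger, which is the asymmetry described above.
    """
    p_report_if_true = credibility
    p_report_if_false = 1 - credibility
    numerator = p_report_if_true * prior
    return numerator / (numerator + p_report_if_false * (1 - prior))

# Two recipients hear the same report but rate its source differently.
believer = posterior(prior=0.7, credibility=0.8)   # moves up, to roughly 0.90
sceptic = posterior(prior=0.3, credibility=0.2)    # moves down, to roughly 0.10
```

In this toy model the divergence comes entirely from the credibility each side assigns to the messenger; each individual update is perfectly ‘rational’, which is exactly the asymmetry Glaeser and Sunstein describe.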

A second explanation they put forward is the ‘memory boomerang’, in which the same information can recall very different memories and the convictions that accompany them. These cause people to put a different complexion on a new piece of news than might otherwise be expected.

The problem that any technological solution faces is that information avoidance and storytelling are both powerful contributors to the way people and societies work. The CMU team argues that people value information avoidance. It stops them collapsing under the weight of many conflicting ideas; it helps them manoeuvre through times of stress; and it lets them avoid conflict. The same characteristics also allow fraudsters to live with themselves.

Researchers have set about building algorithms that can provide social-media users with information that conflicts with that in their respective echo chambers. The problem remains one of acceptance.

In Glaeser and Sunstein’s theory, “surprising validators” offer a way out. If you want to convince someone to change their mind, get someone who they think is on their side to present the message. It is an extension of the idea that “only Nixon could go to China” or that the best way to convince an alcoholic to reform is to put them in a room with former alcoholics.

In practice, the opposite seems to have been happening. Political campaigns have found they can use analysis of social media to strengthen their hold on key groups of voters.

In February 2016, Cambridge Analytica’s chief executive Alexander Nix boasted in a column for advertising-trade newspaper Campaign that his company worked for US senator Ted Cruz’s team in his bid for the Republican nomination “to develop predictive data models in order to identify, engage, persuade and turnout voters for Cruz... Using CA’s creative guidance and voter targeting, the campaign created Facebook ads, phone scripts and even messages for door-to-door canvassers to communicate with the right voters in the right way.”

At a US Senate Intelligence Committee hearing at the end of March, George Washington Center for Homeland Security senior fellow Clint Watts claimed Russian operatives went straight to the top of the chain of command. “I can tell you right now today, grey outlets that are Soviet-pushing accounts tweet at President Trump when they know he’s online and they push conspiracy theories.”

During and since his campaign, Trump has seemed only too happy to push the theories to his supporters.

Some research suggests people may stumble on reality themselves if they are offered opportunities – just as long as opposing viewpoints are not rammed down their throats. Graells-Garrido and the Yahoo Labs researchers, among others, suggest using the idea of data portraits to present users with wider ranges of information without challenging them over their views. The data portrait borrows from the word and concept clouds that became popular on blogs in the mid-2000s. The portraits pull in links to many different sources, many of which will be outside the user’s own direct connections.
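In its simplest form, such a portrait can be little more than a weighted term cloud distilled from what a user has shared, which then steers the selection of links from further afield. The sketch below is purely illustrative (the example posts and the crude tokenisation are ours, not taken from the published work):

```python
from collections import Counter
import re

def data_portrait(shared_texts, top_n=10):
    """Build a crude 'data portrait': the user's most frequent topic words,
    usable as hooks for pulling in links from sources outside their
    immediate network without confronting them over their views."""
    stopwords = {"the", "a", "an", "and", "of", "to", "in", "is", "it", "for", "on", "are"}
    words = []
    for text in shared_texts:
        words += [w for w in re.findall(r"[a-z']+", text.lower()) if w not in stopwords]
    return Counter(words).most_common(top_n)

# Invented posts standing in for a user's shared links and comments.
posts = [
    "EU rules on bendy bananas are madness",
    "Another story about EU bananas and regulation",
    "Regulation of fruit imports is out of control",
]
print(data_portrait(posts, top_n=5))  # e.g. [('eu', 2), ('bananas', 2), ('regulation', 2), ...]
```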

One option explored by the Yahoo team was to connect users with shared interests but very different political views – a technique that is reminiscent of Glaeser and Sunstein’s surprising validators. Although the less confrontational approach is less likely to upset users, work still points to them taking more notice of the points with which they already agree, and there is an underlying risk that users will feel their service providers are not just spying on them but manipulating them.
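One way to picture that matching step, as a simplified sketch rather than the Yahoo team’s actual method, is to score candidate accounts by how much their interests overlap with the user’s while their political leaning differs:

```python
def match_score(user, candidate):
    """Favour candidates who share interests with the user but sit elsewhere on
    a -1..+1 political-leaning scale. Illustrative scoring only, not the method
    used in the published research."""
    shared = len(user["interests"] & candidate["interests"])
    distance = abs(user["leaning"] - candidate["leaning"])
    return shared * distance

user = {"interests": {"cycling", "astronomy", "jazz"}, "leaning": -0.8}
candidates = [
    {"name": "A", "interests": {"cycling", "jazz"}, "leaning": 0.7},
    {"name": "B", "interests": {"cycling", "jazz"}, "leaning": -0.7},
    {"name": "C", "interests": {"football"}, "leaning": 0.9},
]
best = max(candidates, key=lambda c: match_score(user, c))
print(best["name"])  # 'A': shares the user's interests but holds different views
```

Push the leaning distance too hard, though, and the recommendation turns back into the confrontation that backfired in the 2013 Twitter study.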

Technology may fall short in dealing with a problem it seems only too good at amplifying, but society has other ways to deal with fake realities. Well before Bush penned his article for The Atlantic, the US had already fought fake news – and won.

The closing decades of the 19th century represented the height of ‘yellow journalism’ in which politicians and businessmen bought favourable coverage for themselves and damaging allegations against their enemies.

The award for quality journalism that Joseph Pulitzer sponsored has little in common with his early career as a publisher. His publications, together with those of competing media baron William Randolph Hearst, campaigned for the Spanish-American War that closed the century.

However, before his death in 1911, Pulitzer moved away from this type of journalism and tried to atone for his “yellow sins”. Others followed suit. Analysis of stories between 1870 and the early 1920s by Glaeser’s Harvard colleague Claudia Goldin and Matthew Gentzkow of the University of Chicago shows that mainstream media in the US changed dramatically. The open issue is whether it takes wars to get there.

The economists’ view of Mooers’ Law may prove to be the most useful: at some point the cost of ignoring information simply becomes too great.

Paying for popularity

Even a brief perusal of political forums and social-media sites will quickly reveal claims that accounts posting under assumed names are Kremlin trolls or ‘Putinbots’. Their calling card is aggressive support for so-called populist candidates and criticism of a shady international elite. Although researchers claim to have identified many online accounts as being controlled by foreign intelligence agencies, the people involved have yet to be unmasked.

In China, however, an accidental release of an email archive by the Internet Propaganda Office of Zhanggong, a city 300km north of Hong Kong, seems to have unmasked a group of domestic propagandists known as the ‘50 Cent Party’. Gary King of Harvard University, Jennifer Pan at Stanford University and Margaret Roberts of the University of California at San Diego analysed the cache of data and tried to align it with other online activity in China. Although it seems to confirm there is some form of government-sponsored posting, it works in a different way from the popular picture of trolls arguing with enemies.

Rather than antagonising dissenters, the members of the 50 Cent Party focused primarily on tactics not unlike those of conventional PR – writing messages that support government actions. Even the core idea that gave the 50 Cent Party their name – that they are paid per item posted – may be wrong. These were not pieceworkers in an agitprop version of the Mechanical Turk. Most seemed to be full-time government workers posting on social media as part of their daily occupation and following a familiar pattern of organised political campaigning – bolster the loyalists, shun dissent.
