Are political social media campaigns a threat to democratic elections?

[Illustration of Mark Zuckerberg. Image credit: Getty Images]

Political parties and their supporters are aggressively using social media to try to win elections. Is this truly democratic and, if not, can it be stopped?

About an hour after I liked Nigel Farage’s Facebook page (for research purposes only), a video of the politician arguing with Tony Blair appeared in my news feed. Directly beneath was a screenshot of Talos, the bronze giant from the 1963 film ‘Jason and the Argonauts’. In Greek mythology, Zeus placed Talos on the island of Crete in the eastern Mediterranean Sea to protect Europe against pirates and invaders from a part of the world that today we call the Middle East.

The Farage video came from the former UKIP leader’s Facebook team. The Talos photo came via a Facebook contact, from a magazine that celebrates past British pop culture. Was this just random ordering of content, or a subtle attempt by persons unknown to evoke affinity with political parties that have pledged to control immigration at the forthcoming general election?

In the past, if political candidates wanted to reach voters during election campaigns, they had to hand out leaflets, knock on doors, hold meetings and drive around in vans shouting through loud-hailers.

Advances in online technology have given political parties and their supporters new ways of getting their message across. However, according to many experts, events during recent elections show that social media campaigns are influencing proceedings to such an extent that the democratic process itself is under threat.

Over 30 million people voted at the last UK general election. Yet, as our first-past-the-post electoral system produces hundreds of local winners rather than one national winner, around one million undecided voters in marginal seats tend to decide who forms the government.

UK electoral law limits the money that parties can spend on campaigning at a local level. National spending limits are higher, but with social media campaigns no one can monitor who is targeted, from where, or how much is spent on what in a specific place. In short, large amounts of national money, carefully targeted and far exceeding the local threshold, can be and have been used to influence local results.

Peter Bull, an expert in the microanalysis of interpersonal communication at the University of York, explains that political campaigners do this because they know many modern voters no longer choose their political party at an early age and stick with it. “Today, people are more likely to change their allegiance because they feel strongly about an issue or think that one party is more likely to govern the country better than another,” he says.

Simon Moores, managing director of Zentelligence (Research) Ltd and former technology ambassador for the British government, adds: “More than three-quarters of the British people get their opinions and impressions from friends and family, social media, soap operas and other TV shows, not newspapers or television.”

Moores, who was once on the general election candidates list for the Conservative party and spent 10 years as a local politician, explains that because of this, industries, advertisers and governments are always looking for new ways to exploit new media and technologies.

All political parties use social media to target voters. Often the message is direct and above board. Sometimes, parties use seedy advertising strategies and tabloid humour to get their point across, promote themselves or undermine the opposition.

What’s also emerging, though, is the alarming possibility that outside interests are working to undermine the democratic process in a much more subtle and surreptitious way.

In April, Facebook admitted its platform had been exploited by governments and other interests during the US and French presidential elections.

A report from the company’s security team outlined what it calls ‘information operations’ – coordinated efforts by malicious actors to spread misinformation and sow distrust, for political ends.

Around the same time, a London School of Economics working group warned that uncontrolled ‘dark money’ poses a threat to fundamental principles of British democracy. Academics define dark money as funding that is not declared as a political donation but is nevertheless used by outside interests to influence political debate and proceedings.

Moneyed interests have always backed the politician, party or cause they believe will introduce policies that are good for business.

Yet whereas UK electoral law requires parties to disclose financial contributions from donors, social media campaigns run by outside groups and individuals can get round this because no one really knows who has said what to whom, with what intent or effect. Or who’s paying for it.

“Citizens sharing political views on social media is a good thing,” says Dr Martin Moore, director of the Centre for the Study of Media, Communication and Power at King’s College London. “The problem comes when you get a significantly resourced outside organisation spending a lot of money to send advertising and propaganda to millions of people across social media platforms to influence or distort an election for reasons of their own.”

Left-wing and populist activists have been accused of doing the same thing, albeit on a smaller scale, to show governments as unresponsive to people’s needs. So, too, have clever individuals looking to make some money, selling ads for clicks on their web pages. The US Central Intelligence Agency (CIA), the UK Parliament’s Public Administration and Constitutional Affairs Committee and Japanese cybersecurity firm Trend Micro have alleged that foreign governments, possibly including Russia, have tried to hack recent Western elections.

UK electoral law also prohibits political candidates and parties from making false claims against each other. However, this restriction doesn’t apply to outside commentators.

In early May, Facebook published adverts in UK national newspapers giving readers tips on how to spot fake news. It did the same before the French presidential election. The company had been widely criticised for not doing enough to stop the spread of fake news during both the US presidential election and last year’s EU referendum.

Conservative MP Damian Collins claims that during the run-up to the US election, the top 20 fake news stories were shared on social media more than the top 20 genuine news items. Half the online news read in the state of Michigan was bogus, according to a recent Oxford Internet Institute study.

Countless thousands of fake news items designed to promote favourable opinion or demonise opposition are pushed straight into people’s social media feeds every day. This content might contain spun half-truths, opinions masquerading as facts or, in some cases, outright lies.

During the race for the White House, Americans were told that Hillary Clinton was selling weapons to Isis and that Pope Francis endorsed Donald Trump. Throughout the EU referendum, Brits were told that the National Health Service (NHS) would get the £350m a week the country would supposedly save if Britain left the EU.

“The NHS story had no basis in truth whatsoever, it was never going to happen,” says Labour MP Paul Flynn, a member of the Public Administration and Constitutional Affairs Committee. “Yet in the end, that’s all people remembered.”

Dr Darren Lilleker, an expert on political communication at Bournemouth University, explains that fake news is deliberately used to exploit people’s fears, exacerbate tensions and encourage emotional reactions. “For instance, a few weeks ago there were rumours circulating that Theresa May was being investigated by the police,” he says.

Lilleker, who reported on the use of social media in political campaigning after the 2015 election, adds that parties themselves are not above spinning a line or two. During the 2015 election, the Conservatives targeted Liberal Democrat seats, saying that if people didn’t want a Labour-SNP coalition, they must vote Conservative. “This was fake because it wasn’t clear that such a coalition would ever happen,” Lilleker says.

However, he adds that parties have to be careful they’re not seen to be using unethical tactics. “UKIP, for instance, have to be careful about not appearing to be racist,” he says.

The University of York’s Bull agrees: “Messages must be credible and believed.”

The natural spread of fake news via sharing, clicking and liking is not sufficient for some, who also spread their online propaganda using bots – software that mimics and controls social media accounts – and botnets, collections of internet-connected devices infected and controlled by a common type of malware, often without their owners’ knowledge.

In March, researchers from the University of Southern California claimed there are around 48 million bots on Twitter alone – that amounts to between nine and 15 per cent of all Twitter users. Some bots, the researchers said, had the ability to emulate human behaviour and could spread rumours and manufacture fake political support.

“There are hundreds of thousands of bots, more than we thought,” says computer scientist Dr Shi Zhou from University College London, who earlier this year released a paper claiming to have discovered hundreds of thousands of dormant Twitter bots.

Zhou explains the main purpose of these bots is to commit advertising fraud. A bot might be programmed to click 1,000 times on a website to secure advertising revenue for the bot’s creator. “Other bots are created for fun or to cause mischief, but they can be reprogrammed for political use,” he adds.
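As a purely illustrative sketch – not code from Zhou’s research – the kind of ad-fraud bot he describes can be almost trivially simple: a loop that repeatedly requests an ad-tracking URL so that each request registers as a click. Every detail below, from the URL to the click count and delay, is a hypothetical placeholder.

```python
# Illustrative sketch only: a crude ad-fraud bot of the kind described,
# inflating click counts by repeatedly requesting an ad-tracking URL.
# The URL, click count and delay are hypothetical placeholders.
import time

import requests

AD_CLICK_URL = "https://example.com/ad/click?campaign=123"  # placeholder

def run_click_bot(url, clicks=1000, delay_s=1.0):
    """Send repeated GET requests, each of which registers as a 'click'."""
    for _ in range(clicks):
        # A browser-like User-Agent makes each request look human.
        requests.get(url, headers={"User-Agent": "Mozilla/5.0"})
        time.sleep(delay_s)  # spacing out requests avoids an obvious burst

run_click_bot(AD_CLICK_URL)
```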

Clementine Desigaud from the Oxford Internet Institute adds that bots can be pre-programmed to distribute information on social media at certain times. “They’re good at infiltrating networks of friends and spreading information around networks,” she says.
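A minimal sketch of such pre-programmed distribution, using Python’s standard-library scheduler, might look like the following; the post() function is a hypothetical stand-in for a real social media API call.

```python
# Minimal sketch of pre-programmed message distribution: posts queued
# in advance and released at set times. post() is a hypothetical
# stand-in for a real social media API call.
import sched
import time

scheduler = sched.scheduler(time.time, time.sleep)

def post(message):
    print(f"{time.strftime('%H:%M:%S')} posting: {message}")

# Queue three messages 5, 10 and 15 seconds from now; a real bot might
# instead target the minutes straight after a TV debate ends.
for delay, msg in [(5, "msg one"), (10, "msg two"), (15, "msg three")]:
    scheduler.enter(delay, 1, post, argument=(msg,))

scheduler.run()  # blocks, firing each post at its scheduled time
```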

The Oxford Internet Institute claims that bots accounted for one-third of Twitter traffic immediately before the EU referendum, all promoting Leave. #TrumpWon appeared all over Twitter after every TV debate during the presidential election. During the 2012 US election, Republican nominee Mitt Romney gained 116,000 Twitter followers in a single day. Zhou wonders how many of these were bots and whether it was Romney’s campaigners who had bought them.

Flynn adds that botnets have another use: to uncover, aggregate and pass on people’s passions and prejudices.

From a person’s clicks, likes, tweets and Google searches, inferences can be made about their interests, personality, motivations, likely emotional responses, beliefs and, of course, their voting intentions.
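As a toy illustration of how such inferences might be drawn, the sketch below turns a few liked pages into a crude trait score sheet. The pages and trait weights are invented for the example; real profilers learn these associations statistically from vast datasets.

```python
# Toy sketch of profile inference: scoring a user's likely traits from
# the pages they have liked. Pages and weights are invented for
# illustration, not taken from any real profiling system.
from collections import Counter

PAGE_TRAITS = {  # hypothetical page-to-trait weights
    "Nigel Farage": {"eurosceptic": 2, "immigration-concerned": 2},
    "Retro British Pop Culture": {"nostalgic": 1},
    "NHS Supporters": {"pro-public-services": 2},
}

def infer_profile(liked_pages):
    """Sum trait weights over everything the user has liked."""
    profile = Counter()
    for page in liked_pages:
        profile.update(PAGE_TRAITS.get(page, {}))
    return profile

print(infer_profile(["Nigel Farage", "Retro British Pop Culture"]))
# Counter({'eurosceptic': 2, 'immigration-concerned': 2, 'nostalgic': 1})
```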

Using such profiles, campaigners can send individuals tailor-made online propaganda that seeks to evoke anger, moral shock and outrage, and to blunt empathy and moral reasoning – to the point where people might think it’s OK to take sick people’s benefits away because they’re drug-dealing fakers whose scrounging gives them a better standard of living than people who work. Or that the homeless can be ignored because some of them actually choose to live on the streets.

Using indirect propaganda to manipulate is nothing new. As far back as the 1920s, US public relations guru Edward Bernays was providing corporate elites with strategies they could use to control and regiment the masses for their own political and economic gain.

Bernays applied the psychoanalytic principles of Sigmund Freud (his uncle) to mass marketing. The PR man believed conscious and intelligent manipulation of organised habits and opinions of the masses was an important element in a democratic society. After all, the masses were driven by basic instincts outside their understanding.

Bernays helped the Woodrow Wilson administration sell the idea that American involvement in the First World War was driven by a philanthropic desire to safeguard democracy in Europe. Nothing to do with stopping Germany and other Central Powers from imposing restrictions on American trading in European markets, should they win the war. Bernays promoted, among other things, cigarettes as torches of freedom to young emancipated women and used doctors’ endorsements to convince people to eat bacon as part of a healthy breakfast.

Saddled with democratic elections and the need to secure popular approval, Bernays believed it was right and proper that a hidden, invisible group of elites should seek to control popular sentiment. He called this the ‘engineering of consent’.

Supreme Court Justice Felix Frankfurter called Bernays and his colleagues “professional poisoners of the public mind, exploiters of foolishness, fanaticism and self-interest”.

Zentelligence’s Moores explains it’s the delivery of propaganda that has become more sophisticated in modern times, not the techniques themselves. “The same propaganda techniques previously used on radio and television are applied on social media at a much more granular level,” he says.

He adds that, in the 1960s, US advertisers bragged they could target 40 different personality types and “today’s propagandists can narrow down measures in thousands of different ways”.

Official investigations into some of this are under way. The Information Commissioner’s Office, the UK’s official privacy watchdog, is considering whether using data analytics for political purposes infringes data protection laws. The Electoral Commission is investigating the role played by Cambridge Analytica in the Leave campaign in last year’s EU referendum. That company, which uses psychographics and big data to influence target audiences, is backed by billionaire Robert Mercer, one of Trump’s biggest donors, and was also active in the recent US presidential election.

In January, Parliament’s Culture, Media and Sport Committee launched an inquiry into fake news. Labour MP Flynn wants the Public Administration and Constitutional Affairs Committee and GCHQ to look into how social media is used to influence elections and by whom.

Dan Nesbit, from privacy activists Big Brother Watch, wants more questions to be asked publicly about how people’s data can be collected without their knowledge. “New EU data protection legislation is coming next May that requires a company to get informed consent to track an individual’s data,” he says. “Political parties seem keen to maintain that law after Britain leaves the EU.”

“Government needs to work out why these platforms have become so dominant and try to figure out a way of making them less dominant,” says King’s College London’s Dr Martin Moore. He is concerned, though, that censorship could go too far in inhibiting free speech.

The German government has recently drafted a law to impose fines of up to €50m on social media networks that fail to take fake news down in a 24-hour period. “Inevitably, social media will have no choice but to err on the safe side, which means they’ll more likely censor and take down lots of information,” Moore says.

Action by social media providers can also have an impact. The Oxford Internet Institute found that only five per cent of news during the French presidential election was fake. Just before the election, Facebook had suspended 30,000 accounts in France for spreading spam, misinformation and other deceptive content. It targeted suspect accounts that posted regularly to big audiences.

Moore believes social media users need to be much more aware of how social media works, how information appears on their pages and feeds and why, and what happens to content they engage with.

That means avoiding personality quizzes, which look harmless but provide data-mining companies with valuable profiling data, and only sharing and liking content that the user has actually read. Similarly, users should check through newspapers themselves for news, rather than simply scrolling down social media pages. Moore admits we need to find a way to make professionally produced news more sustainable, particularly at the local level.

Fake Twitter accounts can be identified. The Oxford Internet Institute’s Desigaud says they tend to send out 50 or more tweets per day on the same hashtag. Moore adds that algorithms and language-processing software can identify particular phrases associated with fake news, but warns the success rate is low. The Oxford Internet Institute is designing a tool that detects these bots, but this is a long-term project. Zhou suggests that more needs to be done to discover these bots before we can start to look deeper into what they do, how they were created and how to fight them.
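Desigaud’s 50-tweets-a-day rule of thumb is simple to express in code. The sketch below, run over made-up tweet records, flags any account posting at least 50 times on one hashtag in a single day.

```python
# Sketch of the 50-tweets-a-day heuristic: flag accounts that send 50
# or more tweets on one hashtag in a single day. Records are made up.
from collections import Counter

def flag_suspected_bots(tweets, threshold=50):
    """tweets: iterable of (account, date, hashtag) tuples."""
    counts = Counter(tweets)
    return {account for (account, date, hashtag), n in counts.items()
            if n >= threshold}

sample = ([("@heavy_poster", "2017-05-01", "#TrumpWon")] * 60
          + [("@casual_user", "2017-05-01", "#TrumpWon")] * 3)
print(flag_suspected_bots(sample))  # {'@heavy_poster'}
```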

“We’re in the midst of a transition in the way people communicate,” Moore says, adding that people need to learn new skills and etiquettes and take on new responsibilities.

According to Flynn, we need some sort of fact checker in the meantime. “Someone or something authoritative, to come out and say when something is bulls**t,” he says. “Otherwise elections become a contest between two sets of liars.” 

Fake news: Facebook’s guide to spotting it

Watch out for
  • Catchy headlines with exclamation marks that make shocking claims
  • Phoney or look-alike URLs
  • Unfamiliar sources – check the site’s “About” section to learn more
  • Unusual formatting with misspellings or awkward layouts
  • Photos or videos manipulated or taken out of context
  • Dates and timelines that make no sense, or altered event dates
  • Inaccurate information sources and unnamed experts
  • No other news sources reporting the same story
  • Content intended to be a joke or satire
