
Online hate speech could be ‘contained like a computer virus’

Image credit: Mike2focus - Dreamstime

The spread of hate speech via social media could one day be tackled using the same ‘quarantine’ approach deployed to detect and combat malicious software.

A study conducted by an engineer and a linguist from the University of Cambridge used databases of threats and violent insults to build algorithms that can score the likelihood of an online message containing forms of hate speech.

As these algorithms are refined, potential hate speech could be identified and “quarantined”. Users would receive a warning alert with a ‘Hate O’Meter’ – a hate-speech severity score – the sender’s name, and an option to view the content or delete it unseen.

This method is similar to spam and malware filters, and researchers from the Giving Voice to Digital Democracies project believe it could dramatically reduce the amount of hate speech people are experiencing. The team aim to have a prototype of the technology ready in early 2020.
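As a rough illustration only, the quarantine flow the researchers describe might be sketched as follows. The message fields, the keyword stand-in for the trained classifier and the fixed threshold are all assumptions made for this example, not details of the Cambridge system.

```python
from dataclasses import dataclass


@dataclass
class Message:
    sender: str
    text: str


def hate_score(text: str) -> float:
    """Stand-in for the trained classifier: returns a likelihood in [0, 1].
    A toy keyword heuristic is used here purely so the sketch runs."""
    hostile = {"die", "vermin", "worthless"}  # invented placeholder terms
    hits = sum(word in hostile for word in text.lower().split())
    return min(1.0, hits / 2)


QUARANTINE_THRESHOLD = 0.5  # assumed cut-off; the study envisages a user-set dial


def deliver(message: Message) -> None:
    score = hate_score(message.text)
    if score < QUARANTINE_THRESHOLD:
        # Low risk: deliver as normal.
        print(f"[inbox] {message.sender}: {message.text}")
    else:
        # High risk: withhold the content, showing only the warning alert,
        # the sender's name and the severity score, as the study proposes.
        print(f"[quarantine] Message from {message.sender} held. "
              f"Hate O'Meter: {score:.0%}. [View] [Delete unseen]")


deliver(Message("troll_account", "you are worthless vermin"))
deliver(Message("friend", "lunch tomorrow?"))
```

In a real deployment the keyword stub would be replaced by the trained model, and the threshold would move with the recipient’s own sensitivity setting, described further below.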

“Hate speech is a form of intentional online harm, like malware, and can therefore be handled by means of quarantining,” said linguist Dr Stefanie Ullmann. “In fact, a lot of hate speech is actually generated by software such as Twitter bots.”

Definitions of hate speech vary depending on nation, law and platform, and experts argue that just blocking keywords has proven to be ineffectual: for example, graphic descriptions of violence need not contain obvious ethnic slurs to constitute racist death threats.
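A minimal sketch shows why a bare blocklist fails in exactly this way; the list entries and the message are invented placeholders.

```python
# Hypothetical blocklist, invented for the example; not real terms.
BLOCKLIST = {"<slur-1>", "<slur-2>"}


def keyword_filter(text: str) -> bool:
    """Return True if the text contains a blocked keyword."""
    return any(word in BLOCKLIST for word in text.lower().split())


threat = "I know where you live and you will regret ever speaking up"
print(keyword_filter(threat))  # False: a graphic threat with no slur slips through
```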

As such, hate speech is difficult to detect automatically. It has to be reported by those exposed to it, after the intended “psychological harm” is inflicted, with swarms of moderators required to judge every case.

This is the new front line of an old debate: freedom of speech versus poisonous language.

“Companies like Facebook, Twitter and Google generally respond reactively to hate speech,” said engineer Dr Marcus Tomalin. “This may be OK for those who don’t encounter it often. For others, it’s too little, too late.”

Tomalin, who is also the project manager of Giving Voice to Digital Democracies, added: “Many women and people from minority groups in the public eye receive anonymous hate speech for daring to have an online presence. We are seeing this deter people from entering or continuing in public life, often those from groups in need of greater representation.”

An example of a possible approach for a quarantine screen, complete with a Hate O’Meter shown on the right-hand side

Image credit: Stefanie Ullmann

Former US Secretary of State Hillary Clinton recently told a UK audience that hate speech posed a “threat to democracies”, in the wake of many women MPs citing online abuse as part of the reason they will no longer stand for election.

Meanwhile, while addressing a crowd at Georgetown University, Facebook CEO Mark Zuckerberg spoke of “broad disagreements over what qualifies as hate” and argued: “we should err on the side of greater expression.”

With all this in mind, the researchers said their proposal is not a magic bullet, but it does sit between the “extreme libertarian and authoritarian approaches” of either entirely permitting or prohibiting certain language online.

Most importantly, the researchers added, the user becomes the arbiter. “Many people don’t like the idea of an unelected corporation or micromanaging government deciding what we can and can’t say to each other,” said Tomalin.

“Our system will flag when you should be careful, but it’s always your call. It doesn’t stop people posting or viewing what they like, but it gives much-needed control to those being inundated with hate.”

In the paper, published in the journal Ethics and Information Technology, the duo refers to detection algorithms achieving 60 per cent accuracy – not much better than chance. Tomalin’s machine-learning lab has now got this up to 80 per cent, and he anticipates continued improvement of the mathematical modelling.

Meanwhile, Ullmann is gathering more “training data”: verified hate speech from which the algorithms can learn. Such data will help refine the “confidence scores” that determine a quarantine and the subsequent Hate O’Meter read-out, which could work like a sensitivity dial set according to each user’s preference.
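A minimal sketch of how such a dial might map onto confidence scores, with an invented 0-10 scale and a linear mapping assumed purely for illustration:

```python
def should_quarantine(confidence: float, sensitivity: int) -> bool:
    """Map a 0-10 user dial to a confidence threshold: at 10 almost
    everything is held for review; at 0 nothing is."""
    threshold = 1.0 - sensitivity / 10
    return confidence >= threshold


score = 0.65  # classifier's confidence that a message contains hate speech
for dial in (2, 5, 8):
    print(dial, should_quarantine(score, dial))
# 2 False, 5 True, 8 True: a more cautious setting surfaces more warnings
```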

A basic example of this score might involve a word like “bitch”: a misogynistic slur, but also a legitimate term in contexts such as dog breeding. The researchers said it is the algorithmic analysis of where such a word sits syntactically – the types of surrounding words and the semantic relations between them – that determines the hate-speech score.
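As a toy illustration of that idea, the score for the same word can shift with its neighbours. The word lists, window size and weights below are invented for this example and are far simpler than the study’s actual analysis.

```python
# Invented context vocabularies for the illustration.
AGGRESSIVE_CONTEXT = {"you", "stupid", "shut"}
BENIGN_CONTEXT = {"breed", "kennel", "puppies", "pedigree"}


def context_score(text: str, target: str = "bitch") -> float:
    """Score the target word by its neighbours within a +/- 3 word window."""
    words = text.lower().split()
    if target not in words:
        return 0.0
    idx = words.index(target)
    window = set(words[max(0, idx - 3):idx + 4])
    if window & BENIGN_CONTEXT:
        return 0.1   # dog-breeding sense: low hate-speech likelihood
    if window & AGGRESSIVE_CONTEXT:
        return 0.9   # directed insult: high likelihood
    return 0.5       # ambiguous without more context


print(context_score("the bitch whelped six puppies at the kennel"))  # 0.1
print(context_score("shut up you stupid bitch"))                     # 0.9
```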

“Identifying individual keywords isn’t enough; we are looking at entire sentence structures and far beyond,” said Ullmann. “Sociolinguistic information in user profiles and posting histories can all help improve the classification process.”

Tomalin added: “Through automated quarantines that provide guidance on the strength of hateful content, we can empower those at the receiving end of the hate speech poisoning our online discourses.”

However, the duo, who work in Cambridge’s Centre for Research into Arts, Humanities and Social Sciences (CRASSH), said that – as with computer viruses – there will always be “an arms race” between hate speech and systems for limiting it.

The project conducted at the university has also begun to investigate “counter-speech”: the ways people respond to hate speech. The researchers also intend to feed into debates around how virtual assistants such as ‘Siri’ should respond to threats and intimidation.

Earlier this week, researchers from the University of Waterloo said they have developed a machine-learning tool that detects whether claims made in news stories or social media posts are supported by other content on the same subject.
