Computers taught to recognise hate speech and violent language

A new machine learning method can detect antisocial behaviour such as hate speech or indications of violence with high accuracy, based on text posted to forums and social media.

The rise and rise of social media and Web 2.0 encourages us to form online groups based on common interests and behaviour patterns. While mostly harmless, this has the potential to pose threats, such as the emergence of online communities rife with aggressive, hateful discussion.

Such conduct is increasingly considered an example of antisocial behaviour: behaviour likely to cause harm, distress or harassment. In extreme cases, perpetrators of school shootings or other acts of terror have posted angry or boastful messages to niche forums before acting.

So far, efforts to address this behaviour have used educational, social and psychological approaches. A new study, carried out at the University of Eastern Finland, suggests that a computational approach could be another useful weapon against hate speech and expressions of violence.

The new method, developed by Myriam Douce Munezero, is based on natural language processing, a field which seeks to improve and expand on the ability of computers to understand human speech and writing.

A person's choice of words and writing style reveals a great deal about their thoughts, preferences, emotions and behaviours. Munezero aimed to use natural language processing techniques to develop a way of identifying, extracting and exploiting linguistic features that indicate antisocial behaviour.

Her method put particular emphasis on the expression of negative emotions such as anger, while also searching for other distinguishing features of antisocial language, such as swearing and insults.
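Munezero's exact feature set isn't detailed here, but a minimal lexicon-based sketch in Python gives a sense of the approach; the word lists, ratios and function name below are illustrative assumptions, not her actual lexicons or features.

import re

# Illustrative mini-lexicons (assumed entries); a real system would draw on
# curated resources such as affect lexicons and profanity lists.
ANGER_WORDS = {"hate", "furious", "rage", "despise", "destroy"}
SWEAR_WORDS = {"damn", "hell"}
INSULT_WORDS = {"idiot", "loser", "scum"}

def extract_features(text: str) -> dict:
    """Count lexicon hits per token, normalised by text length."""
    tokens = re.findall(r"[a-z']+", text.lower())
    n = max(len(tokens), 1)
    return {
        "anger_ratio": sum(t in ANGER_WORDS for t in tokens) / n,
        "swear_ratio": sum(t in SWEAR_WORDS for t in tokens) / n,
        "insult_ratio": sum(t in INSULT_WORDS for t in tokens) / n,
        "exclamations": text.count("!") / n,  # crude stylistic cue
    }

print(extract_features("I hate you, you idiot! I will destroy everything!"))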

The models were able to rapidly identify samples of text expressing hatred or violence, with some achieving accuracies of over 90 per cent, demonstrating that natural language processing can play a part in intervening against antisocial behaviour.
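The study's classifiers aren't named in this article, so the following Python sketch is only a generic illustration of how such text classifiers are commonly built and evaluated, here with a scikit-learn TF-IDF pipeline on made-up toy data; the accuracy it prints has nothing to do with the study's reported figures.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Toy stand-in data; real work would use labelled forum and social media corpora.
texts = [
    "I will hurt them all, they deserve it",
    "You people are worthless scum",
    "Looking forward to the weekend hike",
    "Great match last night, well played",
] * 25  # repeated so five-fold cross-validation has enough samples
labels = [1, 1, 0, 0] * 25  # 1 = antisocial, 0 = benign

pipeline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # word and bigram features
    LogisticRegression(max_iter=1000),
)

scores = cross_val_score(pipeline, texts, labels, cv=5)
print(f"mean accuracy: {scores.mean():.2f}")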

These techniques could be integrated into forums or social media platforms to automatically or semi-automatically detect potential incidents of antisocial behaviour and send out warnings, flagging up posts and users that may need further investigation.
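A platform-side integration might look something like the sketch below, which reuses the toy pipeline from the previous example; the thresholds and routing labels are assumptions for illustration, not part of the published method.

# Continuing from the previous sketch: fit the toy pipeline first.
pipeline.fit(texts, labels)

AUTO_FLAG = 0.9      # assumed threshold: confident enough to flag automatically
REVIEW_QUEUE = 0.6   # assumed threshold: uncertain cases go to a human moderator

def triage(post_text: str, model) -> str:
    """Route a post by the model's estimated probability of antisocial content."""
    p = model.predict_proba([post_text])[0][1]  # probability of the 'antisocial' class
    if p >= AUTO_FLAG:
        return "flagged"       # e.g. hide the post and alert moderators
    if p >= REVIEW_QUEUE:
        return "needs_review"  # semi-automatic: a human makes the final call
    return "ok"

print(triage("You are all worthless scum and I hate you", pipeline))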

Munezero’s technique is the latest in a series of efforts to analyse social media posts to prevent crime. Last year, a social media analysis tool was rolled out by the US Department of Justice in an effort to predict incidents of hate crime based on tweets.
