‘Detection engine’ grades tweets based on potential for harm

Researchers from the University of Exeter Business School have developed a tool, 'LOLA', which they say can detect misinformation, cyberbullying and other harmful online behaviour with very high accuracy.

LOLA, named after the character from children’s television series Charlie and Lola, was developed by a team led by Exeter’s Initiative in the Digital Economy.

The tool can analyse 25,000 text samples per minute, detecting harmful behaviour such as cyberbullying and misinformation, as well as hate speech such as Islamophobia, with up to 98 per cent accuracy.

Social media companies like Facebook and Twitter are grappling with the proliferation of misinformation and hate speech on their platforms, using a combination of human moderators and software to detect inappropriate content at the scale necessary. However, the automated tools used to remove harmful content have sometimes been criticised as inaccurate; for instance, the algorithm used by Tumblr to remove images of nudity has been mocked for flagging mundane objects such as food, children’s cartoons, and landscapes.

“In the online world, the sheer volume of information makes it harder to police and enforce abusive behaviour,” said Dr David Lopez, who led the development of LOLA. “We believe solutions to address online harms will combine human agency with AI-powered technologies that would greatly expand the ability to monitor and police the digital world.”

Lopez explained that LOLA draws on recent advances in natural language processing and behavioural theory to extract 12 emotional undertones from text – such as anger, fear, joy, love, optimism, pessimism, threat and trust – and infer online harms in text-based conversation.
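
As a rough illustration of this approach (not LOLA's own code, which has not been published), a pretrained transformer classifier can score a tweet against a set of emotion labels. The sketch below assumes the publicly available 'cardiffnlp/twitter-roberta-base-emotion' model, which covers a narrower label set than LOLA's 12 undertones:

# A minimal sketch of transformer-based emotion scoring on tweet text.
# Assumption: the public Cardiff NLP model below, which detects only
# four emotions (anger, joy, optimism, sadness) - LOLA's own model and
# its 12-undertone label set are not publicly available.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="cardiffnlp/twitter-roberta-base-emotion",
    top_k=None,  # return a score for every label, not just the top one
)

tweet = "I can't believe they would spread lies like this. Disgraceful."
scores = classifier(tweet)[0]  # list of {"label": ..., "score": ...}
for item in sorted(scores, key=lambda s: s["score"], reverse=True):
    print(f"{item['label']:10s} {item['score']:.3f}")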

LOLA grades tweets with a 'severity score' that indicates how likely they are to cause harm. The tweets graded most severe tend to be those scoring highest for toxicity, obscenity and insult.
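
One simple way such a grade could be computed is as a weighted combination of per-category harm scores. The weights and categories below are purely illustrative assumptions chosen to mirror the article's description, since LOLA's actual scoring method is not public:

# Hypothetical 'severity score': a weighted sum of per-category harm
# scores, each in [0, 1]. The weights are assumptions, loosely echoing
# the article's note that toxicity, obscenity and insult dominate.
WEIGHTS = {
    "toxicity": 0.30,
    "obscenity": 0.25,
    "insult": 0.25,
    "threat": 0.10,
    "identity_hate": 0.10,
}

def severity_score(harm_scores: dict[str, float]) -> float:
    """Combine per-category harm scores into a single grade in [0, 1]."""
    return sum(WEIGHTS[k] * harm_scores.get(k, 0.0) for k in WEIGHTS)

print(severity_score({"toxicity": 0.9, "obscenity": 0.7, "insult": 0.8}))
# -> 0.645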

In recent experiments, LOLA was shown to be capable of identifying the people who had bullied teen climate activist Greta Thunberg on Twitter, assessing how abusive and abused UK politicians are on Twitter, and spotting Twitter accounts sharing Covid-19 misinformation by focusing on the particular fear and anger associated with this behaviour.

“The ability to compute negative emotions – toxicity, insult, obscenity, threat, identity hatred – in near real time at scale enables digital companies to profile online harm and act pre-emptively before it spreads and causes further damage,” Lopez said.

Lopez and his colleagues have been collaborating with the Spanish government and Google, using LOLA to detect misinformation. They hope that this sort of tool could also prove useful for cybersecurity services and social media companies.
