AI tools could consider context when detecting hate speech

University of Sheffield researchers are developing tools that could detect and tackle online abuse in a manner which accounts for differences in language between communities.

The researchers are beginning by examining machine-learning-based methods already used to detect abusive content in the gaming industry and in messages directed at politicians on social media.

They will consider the biases embedded in existing content moderation systems, which often use rigid definitions or determinations of abusive language and sometimes fail to tackle ‘borderline’ content, such as posts which implicitly support antisemitic conspiracy theories. There are concerns that existing systems can inadvertently fuel new forms of discrimination based on gender, ethnicity, culture, or religious or political affiliation.

The researchers intend to use their findings to develop new AI tools that counter hate speech more effectively and with less bias than existing systems. If the project goes according to plan, the new tools will be based on more “context-aware” and flexible detection systems which use natural language processing (NLP) to account for differences in language within communities based on race, ethnicity, gender and sexuality.

For example, a term that would be considered hate speech in most circumstances may be used as a reclaimed slur within a particular community.
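
To illustrate the general idea (this is not the Sheffield team's actual system), a minimal sketch of context-aware moderation might weight a term's contribution to an abuse score according to the community of the author. The term lists, community labels, weights and threshold below are purely hypothetical placeholders.

```python
# Hypothetical sketch of context-aware abuse scoring.
# Term lists, community labels and weights are illustrative only and do not
# reflect the Sheffield project's models or data.

POTENTIALLY_ABUSIVE = {"slur_a", "slur_b"}           # placeholder tokens
RECLAIMED_BY = {"slur_a": {"community_x"}}           # communities that reclaim a term


def abuse_score(tokens, author_community=None):
    """Return a naive 0-1 score, discounting terms reclaimed
    by the author's own community."""
    hits = 0.0
    for tok in tokens:
        if tok in POTENTIALLY_ABUSIVE:
            if author_community in RECLAIMED_BY.get(tok, set()):
                hits += 0.2   # in-group, reclaimed usage: heavily discounted
            else:
                hits += 1.0   # out-group usage: full weight
    return min(hits / max(len(tokens), 1) * 5, 1.0)


# The same message scores differently depending on who wrote it.
msg = ["hello", "slur_a", "friends"]
print(abuse_score(msg, author_community="community_x"))  # low score
print(abuse_score(msg, author_community=None))           # higher score
```

A real system would replace the hand-written lists and weights with learned NLP models, but the underlying design choice is the same: the classification depends on context, not on the token alone.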

“There has been a huge increase in the level of abuse and hate speech online in recent years and this has left governments and social media platforms struggling to deal with the consequences,” said computer scientist Professor Kalina Bontcheva, who is leading the project. “This large rise in abuse and hate speech online has sparked public outrage with people demanding governments and social media companies do more to tackle the problem, but there are currently no effective or technical processes that can tackle the problem in a responsible or democratic manner.”

“We are developing novel AI and NLP methods to address the problem while also developing a substantial programme of training for academics and early career researchers to build capacity and expertise in this key area of research.”

The new AI tools will be open source, allowing for any platform to adopt and adapt them.

Social media companies are under growing pressure from regulators, lawmakers, and campaign groups to clamp down on abuse on their platforms, from bullying to hate speech. Incoming legislation addressing “online harms” will give social media companies a statutory duty of care to tackle abuse on their platforms, with the likelihood of fines for companies that fail their users. Social media giants like Facebook tend to take a mixed approach to tackling online abuse, employing software and human moderators to flag up and review inappropriate content.

In the US, researchers from Carnegie Mellon University have created an AI tool which identifies positive comments on social media platforms. The researchers suggest that this tool could be used to find and promote these comments online in an alternative approach to combatting hate speech.
