Auto-removal of white supremacist tweets unpalatable to key users, cautions Twitter employee

Image credit: Reuters

According to a report from Motherboard, Twitter engineers designed an algorithm to tackle white supremacist material but the company ultimately decided that it would target too many prominent conservative politicians for it to be considered acceptable.

Since 2016, scrutiny of social media platforms and their passive role in the subversion of democracy has intensified. Facebook, Twitter, YouTube and many other social media companies have – among other problems – been accused of giving extremists a prominent platform to radicalise other users, as well as allowing the sharing of hateful, violent and abusive content. While companies like Facebook have hired thousands of additional content moderators, the largest social media companies hope to rely heavily on machine learning algorithms to detect and remove inappropriate material.

There is some scepticism about how effective these algorithms are. A ban on adult content on blogging platform Tumblr resulted in images of food, cartoon animals and landscape photographs being flagged as inappropriate and removed, while London’s Metropolitan Police has admitted that the machine learning algorithm it uses to detect images of child abuse keeps mistakenly flagging photographs of deserts as child pornography.

At a company meeting in March, one Twitter employee asked why the company had largely removed ISIS propaganda from its platform but had failed to do the same for white supremacist content.

According to Motherboard, a Twitter engineer explained that every automatic content filter comes with a trade-off. For instance, allowing an algorithm to automatically remove ISIS propaganda without the content first being reviewed by a human moderator can sometimes catch innocent content written in Arabic, such as legitimate news from Middle Eastern publications. The engineer said that in this case, the accidental censorship was considered an acceptable compromise in order to remove the terrorist material.

According to Motherboard, the engineer went on to explain that taking a similarly aggressive approach to white supremacist content on the platform would not be considered socially acceptable, as it would also target Republican politicians and their supporters.

Motherboard acknowledged that this was not necessarily indicative of Twitter’s policy. A Twitter representative said that this was not an “accurate characterisation” of its policies or enforcement procedures.

Speaking at the TED 2019 conference in Vancouver last week, Twitter CEO Jack Dorsey declined to respond directly to questions about why Neo-Nazis, such as former Ku Klux Klan leader David Duke, have not yet been removed from the platform, commenting simply that Twitter has “policies around violent extremist groups”.

Twitter has faced repeated calls to remove high-profile figures from its platform for hate speech, harassment and inciting violence, most notably US President Donald Trump. Twitter has stated that it will not remove Trump from its platform due to public interest in his social media activity, although it has suggested that it could begin to label tweets which violate its policies.

Earlier this week, New Zealand Prime Minister Jacinda Ardern announced that she and French President Emmanuel Macron would host a summit in Paris in May. At the summit, which will include world leaders and executives of the largest tech companies, she will aim to form an international agreement to stamp out terrorist content from internet platforms. Facebook agreed to ban all white nationalist and white separatist content from its platforms in March, following a devastating terrorist attack in Christchurch, New Zealand, which left 50 Muslims dead.
