Facebook and Twitter fail to protect female public figures, MPs claim
Image credit: Anton Garin | Dreamstime.com
Social media giants Facebook and Twitter have been accused of failing to protect women – particularly those in the public eye such as politicians – from online abuse.
Representatives of both firms gave evidence on democracy and free speech to the Joint Committee on Human Rights, made up of members of both Houses of Parliament. MPs questioned the social networks’ response to abuse directed at women in the public eye and, in response, the firms admitted that they still had work to do to protect MPs and other public figures.
Citing a number of examples of prominent women being targeted by Twitter posts that were not initially taken down, Scottish National Party (SNP) MP Joanna Cherry asked if the companies would accept they had made mistakes in policing which had “failed to protect women”.
Cherry referenced research from Amnesty International, which analysed 228,000 tweets sent during 2017 to 778 female politicians and journalists from across the political spectrum in the UK and US. The research found that about one in every 14 contained abusive or problematic language.
Katy Minshall, Twitter’s head of UK government, public policy and philanthropy, said she was “horrified” by the stories of abuse she had encountered.
“There is clearly a number of steps that we want to take, we need to take – but we are in a different place to where we were even this time last year,” she said.
Minshall also said the platform was “acutely aware of its responsibilities” and now works closely with parliamentary authorities and law enforcement to improve on the safety of politicians on social media.
However, Cherry argued that some of the more high-profile incidents of abuse had only been removed after such posts and tweets were publicised by other prominent women.
“There seems to be a pattern of Twitter initially ruling that extremely offensive and violent tweets directed at women in public life are acceptable and that Twitter only reviews their decision when they are pressed by other figures in public life,” she said.
During the meeting, the two companies were also questioned by MPs on their ability to find and remove abuse that appears on their sites more quickly and proactively.
Rebecca Stimson, Facebook’s UK head of public policy, said that alongside thousands of human reviewers, the social network used what she claimed to be “probably the most advanced automated systems in the world”.
However, Stimson admitted that the nuance of language around harassment meant these systems were not yet able to catch such content as effectively as they deal with other offensive material.
“There are places where we’re really, really good – terrorism, child exploitation, that kind of thing – our machines are able to find and remove around 99 per cent of that kind of content before it’s ever seen by anyone,” she said.
“Things like bullying and harassment and some of the subjects that we’re discussing with you today are much harder for a machine to identify accurately,” Stimson added when speaking before the committee. “It might be us just having an argument about something, it might be using some robust language.
“So there, we found about two million pieces of that kind of content, but only about 15 per cent of that was found by our machines and the rest we rely on individuals reporting to us and human reviewers because often it’s more about context and it’s more about intent and those can be nuanced decisions.”
Both firms argued that increased engagement with politicians, such as their appearances before committees, could only help them improve their content policing.
The two companies have both recently announced new tools to prevent malicious content being posted around the upcoming European Parliament elections.
Minshall also confirmed that in June, Twitter will test a new feature that allows tweet authors to moderate replies, hiding those they do not wish to be seen.