Facebook boots out white nationalism in wake of Christchurch massacre
Following the deadly terrorist attack in New Zealand, Facebook – the world’s largest social media company – has announced that it will ban content promoting white nationalism and separatism.
The terrorist attack – which took the lives of 50 people congregating at two mosques in Christchurch, New Zealand – was streamed live on Facebook to 4,000 people and continued to be shared long after, despite efforts to remove the footage. Facebook has said that it has now blocked 1.2 million uploads and deleted a further 300,000. A group representing French Muslims is suing Facebook and Google (as the owner of YouTube) for failing to prevent the footage being posted on their platforms.
The suspect in the shootings was connected to far-right organisations, having engaged with extremist Facebook pages and published a manifesto promoting “white genocide” conspiracy theories and expressing hope for a “race war” before he carried out the massacre.
Following the attack, Facebook will broaden its definition of hate speech to include white nationalist and separatist material. Previously, it had not classified these movements as hate speech, due to their association with “broader concepts” such as Basque separatism and American pride. In a blog post, Facebook stated that it had engaged in conversations with civil rights groups and academics and had concluded that it could no longer “meaningfully” separate the two movements from white supremacy, particularly given the significant overlap found in its own review of hate figures and organisations.
Facebook’s policy director for counterterrorism, Brian Fishman, told Motherboard: “We decided that the overlap between white nationalism, white separatism and white supremacy is so extensive [that] we really can’t make a meaningful distinction between them. That’s because the language and the rhetoric that is used and the ideology that it represents overlaps to a degree that is not a meaningful distinction.”
“Going forward, while people will still be able to demonstrate pride in their ethnic heritage, we will not tolerate praise or support for white nationalism and separatism,” Facebook said in a blog post.
The ban was agreed at a meeting of Facebook’s Content Standards Forum and will be enforced from next week. It will cover all forms of praise, support and representation for white nationalism and separatism on both Facebook and Instagram. Facebook users searching for terms associated with white supremacy such as “Heil Hitler” will be directed to advice from Life After Hate, a charity founded by reformed violent extremists which provides support for people leaving hate groups (particularly white supremacist groups).
According to a report from Motherboard, Facebook would ban explicit phrases like “I am a proud white nationalist”, while “implicit and coded white nationalism and white separatism” would not be immediately removed, partially because it is more difficult to identify.
New Zealand Prime Minister Jacinda Ardern said at a press conference that material like that which inspired the Christchurch shooter should have been banned long ago, although she welcomed Facebook’s decision to include it within its definition of hate speech.
“I still think that there is a conversation to be had with the international community about whether or not enough has been done. There are lessons to be learnt here in Christchurch and we don’t want anyone to have to learn those lessons over again,” she said.
Facebook has been robustly criticised by civil rights groups, governments and legislators for its failure to proactively combat misinformation and hate speech on its platforms. In recent months, it has acknowledged that it needs to improve its efforts. This week, the platform announced that it had identified and deleted thousands of shady accounts and pages with links to Russia and Iran.
The company says it has been using machine-learning tools to detect offending material, and it has also employed thousands more content moderators in the past year to review flagged content.
“We need to get better and faster at finding and removing hate from our platforms […] we’re making progress, but we know we have a lot more work to do,” the blog post said. “Unfortunately, there will always be people who try to game our systems to spread hate. Our challenge is to stay ahead by continuing to improve our technologies, evolve our policies and work with experts who can bolster our own efforts.”
In a blog post, Keegan Hankes, senior research analyst at the Southern Poverty Law Center, described the decision as a step in the right direction, but expressed concern that Facebook was not going far enough to stamp out content that “walks just under the line of blatant white supremacy”.
“The internet and social media in particular continues to be a powerful tool used by the radical right to accelerate the spread of hate into the mainstream,” Hankes wrote. “Silicon Valley companies have for too long failed to take seriously the toxic bigotry brewing on their platforms. Tech companies need to proactively tackle the problem of hateful content that is easily found on their platforms before hate-inspired violence occurs, instead of simply reacting to it.”
In November 2018, Hankes explained to E&T how crackdowns on hate speech on mainstream social media platforms have inadvertently encouraged the creation of new platforms intended as “safe spaces” for extremists, such as Gab and Voat.