Facebook’s leaked guidelines show policies for death threat and abortion posts

Facebook’s internal review procedures for posts on its site have come to light following an investigation centred on a set of leaked guidelines.

A dossier apparently containing dozens of training manuals and internal documents was obtained by the Guardian newspaper and offers an insight into how content posted by Facebook’s users is moderated. 

It was revealed that comments about killing Donald Trump are banned by the social networking site, although violent threats against other people are often allowed to remain untouched.

It shows that threats of “credible violence”, such as posting the phrase “someone shoot Trump”, must be removed by staff because he is a head of state.

However, generic posts stating someone should die are permitted as they are not regarded as credible threats, the newspaper claims.

Staff are told videos of abortions are allowed to remain on Facebook as long as they do not contain nudity, while footage of a violent death does not have to be deleted because it can help create awareness of issues such as mental illness, the Guardian said.

All “handmade” art showing nudity and sexual activity is allowed, but digitally made art showing sexual activity is not, the newspaper claimed.

Facebook will also allow people to livestream attempts to self-harm because it “doesn’t want to censor or punish people in distress”, it added.

The leak is likely to reignite the debate over freedom of expression, safety and censorship on the internet.

Last week, Theresa May outlined plans for widespread reform of cyberspace.

She said the internet had brought “a wealth of opportunity, but also significant new risks which have evolved faster than society’s response to them”.

Outlining plans under a future Tory government, she said: “We want social media companies to do more to help redress the balance and will take action to make sure they do.

“These measures will help make Britain the best place in the world to start and run a digital business and the safest place in the world for people to be online.”

Under the plans, social media firms will have to take action to stop search terms directing users to inappropriate sites.

In March, tech giants Facebook, Google, Twitter and Microsoft pledged to join forces to tackle extremist content on their platforms, after Facebook and Google began auto-blocking such videos last year.

Facebook’s monthly user count jumped to more than 1.86 billion, according to figures released at the turn of the year.

Monika Bickert, head of global policy management at Facebook, said: “Keeping people on Facebook safe is the most important thing we do.

“[Founder] Mark Zuckerberg recently announced that over the next year, we’ll be adding 3,000 people to our community operations team around the world - on top of the 4,500 we have today - to review the millions of reports we get every week and improve the process for doing it quickly.

“In addition to investing in more people, we’re also building better tools to keep our community safe.

“We’re going to make it simpler to report problems to us, faster for our reviewers to determine which posts violate our standards and easier for them to contact law enforcement if someone needs help.”

The contents of the dossier were described by children’s charity the NSPCC as “alarming to say the least”.

An NSPCC spokesman said: “[Facebook] needs to do more than hire an extra 3,000 moderators.

“Facebook, and other social media companies, need to be independently regulated and fined when they fail to keep children safe.”
