Facebook accused of exaggerating success in tackling extremism

Image: Samples of Russian-sponsored Facebook posts displayed in Congress. Credit: Reuters/Aaron P. Bernstein

A whistleblower has alleged in a complaint to the US Securities and Exchange Commission (SEC) that Facebook has been exaggerating its success in identifying and removing extremist content on its main platform.

Facebook has been under public pressure for several years to remove extremist content, disinformation, and abuse from its platforms. It has claimed to have taken major steps forward by employing thousands of additional content moderators to review flagged content and by deploying algorithms to detect and remove inappropriate material.

Since the Christchurch terror attacks in March – in which an alleged white supremacist from Australia murdered dozens of people in two mosques and livestreamed the rampage on Facebook – a fresh wave of condemnation has hit Facebook and other online platforms for facilitating terrorist propaganda.

Next week, world leaders and the chief executives of the world’s largest online platforms are expected to meet in Paris at a summit co-hosted by New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron, where they will pledge to stamp out terrorist content online.

Now, the Associated Press (AP) has obtained a complaint submitted to the SEC that accuses Facebook of exaggerating its success in identifying and removing extremist content.

Over the course of five months last year, researchers monitored publicly accessible pages controlled by users affiliated with designated terrorist groups. During this period, just 38 per cent of posts that featured prominent symbols of extremist groups were removed. While the study has limitations – notably because social media platforms normally decline to share even anonymised data with researchers – the researchers said that Facebook’s claims that its systems remove most extremist content are almost certainly inaccurate.

“That’s just stretching the imagination to beyond incredulity,” said Amr Al Azm, one of the researchers involved. “If a small group of researchers can find hundreds of pages of content by simple searches, why can’t a giant company with all its resources do it?”

The AP independently reviewed extremist content on Facebook, finding that gory and extremist material – such as videos of executions, photographs of severed heads and Islamist propaganda – remained easy to find on the platform because it appeared in search results. One page that evaded Facebook’s detection carried a header reading “The Islamic State” in English above a photograph of a mushroom cloud; the page most likely slipped through Facebook’s systems because the letters were embedded in a graphic rather than rendered as searchable text. Other pages that remained searchable included “The American Nazi Party”, the “New Aryan Empire” and the “Aryan Brotherhood Headquarters”.

The AP also found that Facebook was automatically generating videos from extremist material, including animated videos containing anti-Semitic messages which end with the statement “Thanks for being here, from Facebook” and flash Facebook’s instantly recognisable ‘Like’ symbol.

Hany Farid, a digital forensics expert at the University of California, Berkeley, and an advisor to the Counter Extremism Project, told AP that Facebook’s machine-learning systems for detecting extremism were failing and that the company was unwilling to bear the expense of tackling the problem.

Last month, Facebook CEO Mark Zuckerberg said: “In areas like terrorism, for al-Qaeda and ISIS-related content, now 99 per cent of the content that we take down in the category, our systems flag proactively before anyone sees it. That’s what ‘really good’ looks like.”

In response to the study, Facebook said that its mechanisms are not perfect but are improving: “After making heavy investments, we are detecting and removing terrorism content at a far higher success rate than even two years ago,” a spokesperson said in a statement. “We don’t claim to find everything and we remain vigilant in our efforts against terrorist groups around the world.”

Recently, Facebook banned a handful of high-profile far-right provocateurs, including Milo Yiannopoulos, Paul Nehlen and Laura Loomer: a move that drew criticism from US President Donald Trump. This followed an announcement that material promoting white nationalism and white separatism – extremist movements with which the Christchurch shooter was allegedly affiliated – would be banned from the platform.
