583 million fake accounts deleted by Facebook in spam crackdown
For the first time, Facebook has released figures about the number of accounts it has disabled for posting spam or other inappropriate materials.
The social network is currently under pressure from lawmakers and users to be more transparent in the wake of the Cambridge Analytica scandal.
Facebook’s enforcement report covers its takedown efforts from October 2017 to March 2018 across six areas: graphic violence, adult nudity and sexual activity, terrorist propaganda, hate speech, spam and fake accounts.
“We took down 837 million pieces of spam in Q1 2018 - nearly 100 per cent of which we found and flagged before anyone reported it,” the social network said in a blog post.
“The key to fighting spam is taking down the fake accounts that spread it. In Q1, we disabled about 583 million fake accounts - most of which were disabled within minutes of registration.
“This is in addition to the millions of fake account attempts we prevent daily from ever registering with Facebook. Overall, we estimate that around 3 to 4 per cent of the active Facebook accounts on the site during this time period were still fake.”
However, the social network admitted its automated tools were still struggling to pick up hate speech: of the more than 2.5 million posts removed, only 38 per cent were spotted by the firm’s technology.
Facebook’s vice president of product management Guy Rosen said more work needed to be done to improve such detection tools.
“We have a lot of work still to do to prevent abuse. It’s partly that technology like artificial intelligence, while promising, is still years away from being effective for most bad content because context is so important,” he said.
“For example, artificial intelligence isn’t good enough yet to determine whether someone is pushing hate or describing something that happened to them so they can raise awareness of the issue.
“In addition, in many areas - whether it’s spam, porn or fake accounts - we’re up against sophisticated adversaries who continually change tactics to circumvent our controls, which means we must continuously build and adapt our efforts.”
The site also said it took down 21 million pieces of content featuring nudity or sexual activity, 96 per cent of which was found and flagged by its systems before being reported.
Meanwhile, Twitter has launched a new behaviour monitoring system that will hide content from accounts identified as trying to distort or disrupt public conversation on the site.
The site said the new system was designed to deal with “troll-like” behaviour from a minority of users whose posts are often reported as abusive but do not always breach Twitter’s rules; rather than removing such posts, the system will relegate them.
As part of plans to improve the health of discussion on the platform, Twitter said it will use signals - such as accounts without a confirmed email address, or accounts that heavily tweet at users who do not follow them - to spot and demote disruptive content appearing in conversations and search results.