
Social media censorship: filtering out extremist content
Are the social media companies finally getting their act together on filtering content?
In June, Google’s general counsel Kent Walker took to the influential op-ed pages of the Financial Times to announce that the company was instituting a more robust filtering strategy to remove or restrict extremist video content at its YouTube subsidiary.
By August, the company was forced to beat a partial retreat as several respected media outlets found that the new system had flagged and removed their content. Some reporters had had their accounts closed; others received warnings.
The deleted clips included footage of attacks during the Syrian civil war posted by independent investigative reporting group Bellingcat. After a very public spat with YouTube, Bellingcat’s videos were restored, as were those from other sources.
A YouTube statement said: “With the massive volume of videos on our site, sometimes we make the wrong call.” Many of those affected wearily responded: “’Twas ever thus.”
This was hardly the first time that the media had found itself at odds with an internet giant over filtering and censorship. It is a decade since online censorship and abuse became public issues, yet the tensions behind them are proving slow and difficult to resolve.
Many of those tensions were highlighted in a high-profile dispute during summer 2016, when Facebook appeared to ban Nick Ut’s iconic ‘The Terror of War’ image. The photograph, which shows a naked young girl fleeing a napalm attack during the Vietnam War, had been posted by a journalist working for Norwegian newspaper Aftenposten.
Although some might challenge the ethical implications of sharing an image of a child in a distressing or vulnerable position, for Phan Thi Kim Phuc, the child in question, the photo has become what she calls “a path to peace”.
In an interview with CNN in 2015, Phuc spoke of how she learned to accept the role that the photo has in demonstrating the horrors of war, and that her pain and terror have helped ensure that the past is not forgotten. “I realised that if I couldn’t escape that picture, I wanted to go back to work with that picture for peace. And that is my choice,” she said.
The image was only reinstated on Facebook after an aggressive front-page open letter to CEO Mark Zuckerberg from Aftenposten editor-in-chief Espen Egil Hansen. It followed private communication between the paper and the social media giant – and the deletion of a Facebook post that explained why the photograph had been used.
“Listen, Mark, this is serious,” Hansen wrote. “First you create rules that don’t distinguish between child pornography and famous war photographs. Then you practice these rules without allowing space for good judgement. Finally you even censor criticism against and a discussion about the decision – and you punish the person who dares to voice criticism.”
Many of the same arguments applied during this year’s confrontation between other outlets and YouTube. However, YouTube did bring a newer element into the delicate internet censorship equation: artificial intelligence.
In the FT article, Walker explained that AI was now one of four planks within the YouTube review process, alongside greater human analysis (Trusted Flaggers, including outside advice from members of respected NGOs), warning messages on extremist content and a mechanism to direct users away from content intended to radicalise them.
The problem is that, after another series of apparent errors, few active observers think YouTube is doing enough. Moreover, this kind of accidental deletion is only one side of the problem. What about truly offensive content?
The Fawcett Society, a leading UK campaigner for women’s rights, collated a series of misogynistic posts made on Twitter that were reported to its moderators around 14 August 2017, then analysed the response a week later.
“By the morning of August 21, they were still up on the platform, despite the fact that they clearly violate Twitter’s own community standards that do not allow direct or indirect threats or can be categorised as harassment or hateful content,” the society said. “No response has been sent to the people who reported them, and no action had [sic] been taken against the users who posted them.”
Again, Twitter is understood to be looking to AI to help it detect all forms of abuse, including such obnoxious techniques as dogpiling, where a group of users coordinates an online attack on an individual woman. And again, the victims do not think the company is doing enough.
At its root, the problem – whether over-zealous deletion or under-policed abuse – appears to have three fundamental components.
First, the internet companies do not consider themselves to be editorial organisations mandated to regulate comment in the way that a newspaper would, or to weigh its nuances.
Second, the internet companies do not staff their content review mechanisms sufficiently, giving human moderators sometimes scant time to evaluate posts.
Third, AI is being vaunted as a panacea for the problem, but it is not yet powerful enough. It may never be, though it could in time make things better.
Of the three, the editorial issue is about to become the most intriguing. Because, in US law at least, the internet companies were given a pass on this critical point more than two decades ago.
Section 230(c)(1) of the 1996 US Communications Decency Act states: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
From that point of view, you or I would be legally liable – in the US – were we to go onto a social media platform and spread lies about someone else. But the platform itself would have effective immunity.
How far this immunity holds outside the US will, however, face its strongest test to date this month (October 2017). In Germany, a new law comes into force that places a responsibility on social media companies to remove manifestly illegal content within 24 hours of notification, and other illegal content within seven days. Repeated failure to do so will expose the internet companies to fines of up to €50m (£44m).
The unresolved question here, though, is just how practical the law will prove in terms of enforcement. Still, it could be a wake-up call.
But there is then the question of resources. A major criticism levelled at the social media giants is that they roll out content review systems in response to commercial pressures – advertisers pulling their spending when ads appear alongside hate speech, for example – rather than in response to the scale of the abuse.
This spring, Facebook found itself embroiled in further controversy over the streaming via its Live service of two murders – one showing a Thai man killing his 11-year-old daughter – and the rape of a 15-year-old girl.
Facebook CEO Mark Zuckerberg, unquestionably horrified, pledged to boost the human element within his company’s content review process.
“Over the next year, we’ll be adding 3,000 people to our community operations team around the world – on top of the 4,500 we have today – to review the millions of reports we get every week, and improve the process for doing it quickly,” he said. “If we’re going to build a safe community, we need to respond quickly.”
But some were quick to ask whether even this would be enough. Last November, US National Public Radio undertook an investigation into the review volume facing Facebook. It found that the work was done largely by subcontractors, often in the Philippines or Poland.
“They are told to go fast – very fast; ...they’re evaluated on speed; and ...on average, a worker makes a decision about a piece of flagged content once every 10 seconds,” the report said.
In such an environment, the opportunity to properly evaluate, say, the ‘Napalm Girl’ image as opposed to a piece of child pornography is obviously limited. And even if Facebook were to double its review team, by this measure there would still be just 20 seconds for each post.
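As a rough back-of-the-envelope illustration of what those numbers imply (the weekly report volume below is a hypothetical figure chosen only to reproduce NPR’s ten-second average, not a published Facebook statistic):

```python
# Back-of-envelope sketch only: assumes a 40-hour working week and a hypothetical
# weekly report volume picked to match NPR's "one decision every 10 seconds" finding.
SECONDS_PER_MODERATOR_PER_WEEK = 40 * 60 * 60
reports_per_week = 65_000_000        # hypothetical, not a published figure

for team_size in (4_500, 7_500, 9_000):   # today, after the planned hiring, doubled
    seconds_per_report = team_size * SECONDS_PER_MODERATOR_PER_WEEK / reports_per_week
    print(f"{team_size:>5} moderators: about {seconds_per_report:.0f} seconds per report")
```

On those assumptions, even a doubled team would leave each flagged post with well under half a minute of human attention.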
So can AI come to the rescue, as the social media companies appear to hope? Not yet, but perhaps the good news is that the industry is climbing the learning curve.
Craig Fernsides, operations technical authority at web filtering and monitoring specialist Smoothwall, explains that while AI still has significant limitations – as YouTube’s experience shows – machine learning (ML) is already helping, albeit for now with human assistance.
“ML allows someone to train a program using a known dataset. So you can give it a list of known pornographic sites and get it to learn what those look like. This means that when you show a new or unknown site to the pornographic ML program, it will be able to give you a true/false based on its reference material,” he says.
“When we start connecting the ML programs together using something like a neural network, rather than a simple try/catch mechanism, then it gets interesting. That’s the point that you can show anything to the neural network and if it doesn’t know what something is, it will start compiling new categories and finding commonalities across multiple sites, spotting patterns or clues that no human could ever pick up on.”
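In concrete terms, what Fernsides describes is supervised classification. A minimal sketch (assuming Python with the scikit-learn library, and an invented four-page training set rather than any real reference data) might look like this:

```python
# Illustrative only: train a classifier on a labelled reference set, then ask it
# for a true/false-style verdict on an unseen page. The "pages" are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

pages = [
    "explicit adult content members area join now",     # 1 = blocked category
    "hardcore adult videos age verification required",  # 1 = blocked category
    "local council announces new recycling schedule",   # 0 = allowed
    "school sports day results and photo gallery",      # 0 = allowed
]
labels = [1, 1, 0, 0]

vectoriser = TfidfVectorizer()                   # turn page text into word features
model = LogisticRegression()
model.fit(vectoriser.fit_transform(pages), labels)

# Show a new, unknown page to the trained model
new_page = ["adult members area, join now for explicit videos"]
verdict = model.predict(vectoriser.transform(new_page))[0]
print("block" if verdict == 1 else "allow")
```

Real deployments train on vastly larger labelled sets and on richer features than page text alone, but the shape of the task is the same: learn from known examples, then return a verdict on the unknown.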
However, as Fernsides also recognises, there is no such thing as a universal filtering system. “Classifying an image or a video with the aim of monitoring or filtering access to it becomes a matter of opinion, and causing offence is such a personal definition that no system will ever be perfect,” he says. “I’m not going to pretend I know what the answer is, but I know that our industry must try to provide tools and services that can adapt to the nuances of our many cultures and have faith that the tools will not be used for censorship or restricting freedom of information.”
And therein lies a likely eternal challenge, though change is nevertheless coming.
Laws beyond US borders may force the social media giants to rethink their roles, although the idea that difficult content may become subject to lawyerly rather than editorial standards raises serious issues of its own.
More people are being hired to address the issue and, though the numbers may still fall short, it is at least a start.
Then, in terms of technology, maybe we just need to separate hype from reality and again monitor for gradual improvement.
But the last thing to remember is that, for all the charges flying around and the astonishing amount of offensive, threatening and dangerous content reaching the leading social media sites, this is a universe that has expanded more quickly than any we have seen before.
At this stage in its development, perhaps the best we can ask is whether things are moving in the right direction. Some significant issues remain, but it looks as if there is progress – though a bit more haste would be welcome.