View from Washington: Social media has a disease. There is a cure.
Only by acknowledging the need for proper investment in human resources will social media solve a fast-growing problem.
Halloween, festival of the trickster, is upon us. It’s the time of year that reminds us seeing is not believing. I’m half expecting a knock at the door from some kid dressed as the Facebook logo.
Issues around how social media combats propaganda, harassment, pornography and other deliberately misleading or offensive content are coming to a head as well. But are we getting any more than fine words?
In Washington, the leading companies are disclosing specifics about fake news and advertisements booked on their sites that slipped through filters during last year’s presidential election and around other sensitive political issues (e.g. the Black Lives Matter campaign). In the UK, researchers at City, University of London claim to have identified 13,500 Twitter bots that sought to influence the Brexit referendum. Mea culpas all round.
Meanwhile, last week’s G7 interior ministers’ meeting saw a ‘great alliance’ (Google, Facebook and Twitter) institute an immediate clampdown on radical Islamist content. Its members will henceforth, they say, remove such material within two hours of it appearing.
Then, with the Harvey Weinstein scandal continuing to spread, the evils of sexual predation, harassment and abuse online are back in the spotlight.
Notably, Twitter CEO Jack Dorsey has announced stricter rules to prevent “unwanted sexual advances, non-consensual nudity, hate symbols, violent groups, and tweets that glorify violence”. He was responding in part to a #WomenBoycottTwitter hashtag that spread after his company imposed a temporary ban on one of Weinstein’s reputed victims, actress Rose McGowan.
(Here, I must state that Weinstein and his legal representatives reject charges that any of his relationships were non-consensual).
The latest initiatives and disclosures are welcome. There is honest concern within social media’s leadership over abuse and its volume. However, like many others, I remain a sceptic.
The problem is easy to spot. Rules of any kind only count if they can be meaningfully enforced. Right now, that enforcement requires people: human moderators. And the evidence is that the companies will not hire enough of them to match the challenge.
You may have read about these examples before, but they are revealing.
Responding to the live streaming of two murders and a rape on Facebook Live earlier this year, CEO Mark Zuckerberg committed to adding 3,000 more content reviewers to the 4,500-strong team. However, investigative reporting by US National Public Radio recently found that each reviewer got an average of only 10 seconds to decide whether a reported post was or was not compliant. At best, Zuckerberg’s increase would now give each about 15-20 seconds.
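The arithmetic behind that estimate is easy to check. The sketch below is a minimal back-of-envelope calculation using only the figures quoted above (a 4,500-strong team, 3,000 additions, NPR’s reported average of 10 seconds per decision); it assumes the volume of reported posts stays constant, so time per post scales with headcount:

```python
# Back-of-envelope check on the moderation figures quoted above.
# Assumption: the reported-post workload is unchanged, so the
# average time available per decision scales with team size.

reviewers_before = 4500
reviewers_added = 3000
seconds_per_post_before = 10

seconds_per_post_after = seconds_per_post_before * (
    reviewers_before + reviewers_added
) / reviewers_before

print(round(seconds_per_post_after, 1))  # ~16.7 seconds per decision
```

Roughly 17 seconds per decision, which is consistent with the 15-20 second range given in the text, and still nowhere near enough time to weigh context or nuance.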
Meanwhile, Google’s YouTube subsidiary announced this spring that it was moving to combat terrorist propaganda with a shiny-shiny solution incorporating AI alongside consultation with NGOs and other features.
However, the system quickly started flagging and removing videos posted by genuine – and easily verified – investigative journalists from outlets such as Bellingcat that were trying to illustrate the realities of, for example, the Syrian civil war (adding irony, Bellingcat has a deserved reputation for detailed video analysis). More apologies and a rethink.
It is tempting to accuse social media companies of being cheapskates, overly obsessed with clicks and profit. It is also true that the problem is hardly new. Former Nasa scientist Ariel Waldman posted about appalling experiences on Twitter in May 2008. Her harassment began nearly a decade ago. There has been fair warning.
But throughout that time, we have also encountered a long-standing cultural problem, and still do now. In engineering, our ‘irrational exuberance’ is a belief that technology can do more than it actually can, sooner than it actually can.
Nowadays, we strip everything down to a sort of digital enablement, then try to achieve the most we can with minimal human input. Within this model, there is a strong resistance to increasing the human resource beyond its initial base that is not just economic but philosophical.
Cost down, efficiency up. Shorter time-to-market. Greater reach. Virtual over bricks-and-mortar. And, of course, “Hail Big Data.”
Right now, AI and machine learning are the sexy enablers. Or maybe elsewhere, robotics. Then there’s the Internet of Things.
That’s not entirely a bad way of thinking. But when it comes to social media, the model falls flat.
Again, the reason is easy to determine. Social media is, by definition, an essentially human enterprise. When its networks are abused it is – forgive this next bit – because of human ingenuity serving base human sentiments. Even bots exist only to promote human positions.
At the same time, probably only a minority wants to see social media robbed of its liveliness. These are forums for often robust debate. But you need a sufficient human resource to understand the nuances.
What is a lively, even profane disagreement where the participants nevertheless accept the rules of engagement, and what is beyond the pale? What is disturbing content that nevertheless illustrates a reality that should be more widely known?
No machine can yet balance such challenges. I doubt one ever will. Perhaps no human can perfectly either. But a human moderator, male or female, will always do a better job. It’s like a vaccine: you use the disease itself to defeat it.
This might seem obvious. But we face an entrenched mindset that obscures the distinction. Unless we acknowledge it, social media could soon be incurably sick.
Jack, Mark, Sundar – you and your colleagues have given us great tools. They can improve our world, our connections to one another. But the time really has come for you to man up and, perhaps more importantly, woman up. Because if you don’t, this cancer won’t stop.