
Google pledges to employ 10,000 to fight extremist YouTube content

The number of staff dedicated to identifying violent, predatory, extremist and otherwise inappropriate content on YouTube will increase to more than 10,000 by the end of 2018.

In recent weeks, YouTube, which is owned by Google, has come under fire for allowing disturbing messages to reach children through videos featuring child-friendly characters, and for providing explicitly paedophilic autocomplete suggestions in its search bar.

In an article in the Daily Telegraph, Susan Wojcicki, CEO of YouTube, stated that while the platform was a force for creativity, education and social change, its openness could also be harnessed to do harm.

“I’ve also seen up-close that there can be another, more troubling side of YouTube’s openness,” she wrote. “I’ve seen how some bad actors are exploiting our openness to mislead, manipulate, harass or even harm.”

According to Wojcicki, more than two million videos have been reviewed for violent and extremist content, and 150,000 videos, along with many comments, have been removed from the platform since June 2017 alone. This data can be used to train machine learning programmes – which learn to identify patterns by processing huge quantities of images, videos or other data – to recognise this content.

Major platforms that host user-generated content, including Facebook and YouTube, are increasingly deploying such machine learning software to detect potentially abusive or otherwise harmful material and flag it for moderators to review.
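The workflow described above amounts to a classifier scoring each item and queuing anything above a threshold for a human moderator rather than removing it automatically. The sketch below illustrates that flag-for-review pattern only; the names, scores and threshold are hypothetical assumptions for illustration, not YouTube's or Facebook's actual systems.

# Minimal, illustrative sketch of a flag-for-review pipeline (hypothetical values).
from dataclasses import dataclass
from typing import List

@dataclass
class Video:
    video_id: str
    # Score in [0, 1] produced by a trained classifier; higher means more
    # likely to violate policy. How the score is computed is out of scope here.
    abuse_score: float

def flag_for_review(videos: List[Video], threshold: float = 0.8) -> List[Video]:
    # Flagged items are not removed automatically; they are queued for a
    # human moderator, mirroring the workflow the article describes.
    return [v for v in videos if v.abuse_score >= threshold]

if __name__ == "__main__":
    queue = flag_for_review([
        Video("a1", 0.93),  # high score: sent to a human reviewer
        Video("b2", 0.12),  # low score: left alone
        Video("c3", 0.85),  # above threshold: also sent for review
    ])
    print([v.video_id for v in queue])  # ['a1', 'c3']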

According to Wojcicki, the use of machine learning technology has allowed YouTube moderators to remove inappropriate videos far more efficiently than previously.

However, in recent months, high-profile reports of misleading, manipulative and violent content posted on Facebook and other platforms – such as Russian-backed political propaganda and live videos of shootings and suicides – have prompted a push to employ more human moderators to check content. In an address to the UN General Assembly in September, Prime Minister Theresa May rebuked tech giants for their passivity, which critics say has allowed misleading and extremist material to flourish online; May challenged these companies to remove extremist material within two hours of posting.

In May, Mark Zuckerberg, Facebook CEO, announced that a further 3,000 moderators would be employed to police illegal and inappropriate posts.

In a similar move, Google will be taking “aggressive action” against extremist content, in part by increasing the number of people employed to review content to more than 10,000 in the next year.  

“Human reviewers remain essential to both removing content and training machine learning systems because human judgement is critical to making contextualised decisions on content,” Wojcicki said.

“We will continue the significant growth of our teams into next year, with the goal of bringing the total number of people across Google working to address content that might violate our policies to over 10,000 in 2018.”

Since June, YouTube has also been working with other social media companies and anti-extremism groups, such as the No Hate Speech Movement, to combat extremism online, for example by redirecting users who search for extremist content towards videos that counter these dangerous messages.
