EU forces video-sharing platforms to fight hate speech
Image credit: Dreamstime
European legislators voted yesterday to force video-sharing platforms to take “appropriate, proportionate and efficient” measures against content containing hate speech and incitement to violence.
Members of the European Parliament (MEPs) belonging to its culture committee put forward the new legislation, which covers a range of issues relating to media regulation.
The move comes against the background of the much-discussed proliferation of hate speech, extremist content and fake news on social media platforms. Politicians have put pressure on these companies to remove dangerous content, but face criticism that such measures could amount to censorship.
MEPs have defined video-sharing platforms as services which “play a significant role in providing programmes and user-generated videos to the general public, in order to inform, entertain or educate”. This means that social media platforms which additionally carry video – such as Facebook and Instagram – could be covered by the legislation.
These platforms would be forced to take measures against content deemed harmful, such as posts promoting terrorist activities or extreme ideologies.
Marietje Schaake, an MEP belonging to the liberal alliance of the European Parliament, was critical of the legislation, commenting that “social media should not be regulated through the back door.”
“Tackling hate speech on social media is important, but the [culture committee] should not jump the gun by adopting a far-reaching definition of video sharing platforms without any proper impact assessment.”
The legislation also raises the quota for European works on video-streaming websites from 20 to 30 per cent, and regulates television advertising times. EU member states could also require Netflix and other video-on-demand platforms to contribute financially to the production of European works.
The legislation will be discussed further and will require the support of EU member states in the Council of the EU.
Social media companies are increasingly under pressure to address harmful user-generated content, such as the livestreaming of the murder of an 11-month-old girl by her father on Facebook earlier this week. The video was available on Facebook for nearly 24 hours before being removed.
Mark Zuckerberg, co-founder, chairman and chief executive of Facebook, commented in a blog post that the company would do everything it could to prevent such content from being posted. He disclosed that Facebook was looking into using AI to review activity on the network more efficiently.
“Artificial intelligence can provide a better approach,” he wrote. “We are researching systems that can look at photos and videos to flag content our team should review.”
Meanwhile, tensions between social media companies and governments continue to grow, with reports that Twitter has blocked the UK police and security services – including MI5 – from accessing potential counter-terrorism intelligence: a third-party company used by the government to monitor Twitter activity for signs of potential terrorism has had its access withdrawn. Amber Rudd, the Home Secretary, has previously demanded that, in the interests of national security, the government be given access to encrypted messaging services such as WhatsApp.