Internet giants must be proactive in dealing with illegal content, says EU
Next year, major internet companies could face new EU laws forcing them to remove illegal content from their platforms if they remain passive.
Companies including Google, Facebook and Twitter are among those who could face punishment in the future.
The firms under scrutiny have, in recent months, made efforts to tackle the issue of extreme or illegal material, agreeing to an EU code of conduct to delete hate speech within 24 hours and forming an international working group to remove terrorist content from their platforms.
The killing of a civil rights protester at a far-right rally in Charlottesville, Virginia, provoked a burst of activity by internet companies, as the argument that criminal activity could be attributed to the hosting of illegal and extremist material grew in strength and profile. This led to the expulsion of neo-Nazi news site The Daily Stormer and its associated social media profiles by Google, Facebook, Twitter, GoDaddy and other companies.
The spread of such illegal, hate-filled or violent content online has sparked debate in Europe and further afield between proponents of absolute free speech and those who believe that some material is not safe or morally acceptable to share online.
Existing EU legislation protects tech companies from liability for the content posted on their websites. This has limited the extent to which European policymakers can strong-arm companies into taking action.
The largest social media platforms have long stood as defenders of free speech. Twitter has an absolute free speech policy, while Facebook executives have argued that the company does not have an “editorial” role regarding its user-generated content.
Now, the EU executive – the European Commission – has produced draft outlines stating that these internet companies must become more proactive in dealing with illegal content, such as by establishing trusted “flaggers” (bodies with expertise in identifying illegal content), voluntarily introducing measures to detect and remove content, and releasing transparency reports with information on actions taken to remove illegal content.
“Online platforms need to significantly step up their actions to address this problem,” the guidelines say.
“They need to be proactive in weeding out illegal content, put effective notice-and-action procedures in place, and establish well-functioning interfaces with third parties (such as trusted flaggers) and give a particular priority to notifications from national law enforcement authorities.”
The guidelines are expected to be published at the end of September.
While the guidelines are non-binding, stricter legislation could be introduced in early 2018, depending on the amount of progress made by internet companies in response to the document.