France demands social media platforms expunge explicit content within an hour
The French Parliament has passed a hate speech law which would punish social media companies for failing to remove certain types of illegal content within 24 hours, and the most harmful content within an hour.
The regulations would allow companies like Facebook, Google, and Twitter to be fined if they fail to remove certain illegal content (hate speech, abusive speech, sexual harassment, child pornography, and content provoking terrorist acts) within 24 hours of it being flagged by users. The most serious illegal content – the most explicit terrorist and paedophilic content – must be removed within just one hour of being flagged. Platforms could face fines of up to €1.25m for falling foul of the rules. Platforms will not be fined for cautiously removing content which is later ruled acceptable.
The regulation received strong support in the Assemblée Nationale, with 355 representatives voting in favour, 150 voting against, and 47 abstaining.
France has earned a reputation for taking a powerful stance against US tech giants, placing a tax on large digital firms operating in France as the EU struggled to reach a consensus on a Europe-wide digital tax, and recently restricting Amazon’s sale of nonessential items amid the coronavirus pandemic.
The hate speech law is similar to a German law introduced in 2018, which requires social media platforms to remove hate speech and disinformation within 24 hours of it being flagged. This German law has been somewhat controversial, leading to accusations of social media companies becoming overzealous with content removal with the side effect of censoring acceptable content.
Some are concerned that the new French law could also lead to internet censorship. A spokesperson for anti-censorship advocacy group La Quadrature du Net told CNN that the law could give politicians a “tool to abuse their power and censor the internet”.
“One of the dangers of this law is that it could turn against journalists, activists, and researchers whom it claims to defend. No one knows exactly what content should be considered ‘manifestly illegal’ online.”
A Facebook spokesperson said: “For many years, fighting online hate has been a top priority for Facebook. We have clear rules against it and have invested in people and technology to better identify and remove it. Regulation is important in helping combat this type of content. We will work closely with the Conseil supérieur de l’audiovisuel and other stakeholders on the implementation of this law.”
Twitter France public affairs director Audrey Herblin-Stoop wrote in a statement: “Improving the health of the public conversation has been our number one priority for several years, and we are committed to protecting an open internet and freedom of expression, and exploring opportunities to address abuse and misleading information at scale.”
Meanwhile, the UK government is in the process of preparing legislation to tackle “online harms” such as hate speech and child exploitation by giving online platforms a statutory duty of care to users. An independent regulator is expected to have the power to levy large fines when social media companies fail to remove harmful content in a timely manner.