
Government lays out proposals to prevent ‘Online Harms’
After months of consultation and the publication of two parliamentary reports, the government has laid out its plans for ‘world-first’ laws to regulate internet and social media safety.
The White Paper detailing the proposals was published jointly by the Home Office and the Department for Digital, Culture, Media and Sport. It proposes a unified set of rules which could punish social media companies for failing to protect their users against a wide range of online harms.
This will make the UK the safest place in the world to be online, the government has claimed.
“The government wants the UK to be the safest place in the world to go online, and the best place to start and grow a digital business,” it stated. “Given the prevalence of illegal and harmful content online, and the level of public concern about online harms, not just in the UK but worldwide, we believe that the digital economy urgently needs a new regulatory framework to improve our citizens’ safety online.”
The past few years have seen growing concern over the use of social media platforms to propagate violent, hateful and extremist content, as well as disruptive state-backed disinformation campaigns. Anger reached a new peak in March when a suspected far-right terrorist used Facebook Live to stream the murder of 50 Muslims attending prayer at two mosques in Christchurch, New Zealand. The video was removed after an hour, but copies continued to be shared across social media platforms and were viewed millions of times.
In June 2017, Germany introduced ‘NetzDG’, a controversial law which requires internet companies to remove “obviously illegal” content within 24 hours or face fines of up to €50m (£43m). Last week, Australia introduced legislation which could make executives personally liable if their companies fail to remove “abhorrent” violent content such as the Christchurch massacre video within a reasonable timeframe. Other countries are considering similar legislation to force social media companies to take responsibility for extreme material uploaded by users.
In January, the Commons Science and Technology Select Committee published a report recommending that online platforms should have a legal ‘duty of care’ to protect the wellbeing of young users. In February, the Commons Digital, Culture, Media and Sport Committee published its own report calling for regulation of social media companies to tackle manipulative and harmful content, and describing companies like Facebook as “digital gangsters” which consider themselves beyond the law.
The White Paper has taken on many of these recommendations: “Currently there is a range of UK regulations aimed at specific online harms […] but this creates a fragmented regulatory environment which is insufficient to meet the full breadth of the challenges we face,” it said.
It proposes that internet companies which allow users to interact with each other and share content should be regulated, with new laws enforced by an independent regulator funded in the medium term by industry. Social media platforms, file-hosting sites, public forums, search engines, and messaging services will all be bound by the laws.
Smaller companies will not be exempt from regulation; the White Paper noted that while Facebook had removed millions of pieces of terrorist material in 2018, in the same year terrorist group Daesh had used over 100 platforms to share propaganda, including “permissive and smaller platforms”.
“The era of self-regulation for online companies is over,” said Jeremy Wright, secretary of state for digital, culture, media and sport. “Voluntary actions from industry to tackle online harms have not been applied consistently or gone far enough. Tech can be an incredible force for good and we want the sector to be part of the solution in protecting their users. However, those that fail to do this will face tough action.”
The government will introduce a legal duty of care for internet companies, which will require them to actively tackle harm caused by content and activity on their platforms. They will be expected to work with law enforcement to counter illegal content and activity, comply with requests from the independent regulator, and establish complaints and appeals functions under the regulator’s direction.
Harmful content within the scope of the proposals includes well-defined harms such as terrorist propaganda, child sexual exploitation and abuse, state-backed disinformation, content uploaded from prisons, opioid adverts, gang propaganda inciting violence, revenge pornography, harassment and cyberstalking, encouraging or assisting suicide, and the distribution of indecent images of minors. Other harms within scope are less well-defined, such as anonymous cyberbullying, celebration of self-mutilation, and deliberately addictive apps. Companies will be required to respond to user complaints about this sort of content within an appropriate timeframe.
Harms not within the scope of the regulation include harm to organisations (such as companies), harm committed on the dark web, and harm due to data breaches and cyberattacks, which are covered by the government’s cybersecurity strategy.
The internet companies will be required to publish annual transparency reports about how they are tackling harmful content on their platforms; the largest social media companies already publish these reports.
Companies which fail to protect their users could be punished with significant fines, or even have their services blocked entirely. Under the proposed regulations, senior managers of negligent companies could be held personally responsible for failings.
The government has stated that it believes technology is part of the solution, and has called on companies to invest in the development of “safety technologies”. A new tool for tackling online grooming will be licensed free of charge to other companies, and the government will support research into deliberately addictive technologies.
In a gesture to assuage fears of censorship and inappropriate constraint on the tech sector, the government has qualified that the regulator will be required to have regard to innovation when deciding how best to protect users, and will not be responsible for “policing truth and accuracy online”.
The government is still consulting on whether to create a new, dedicated regulator or to assign the role to an existing one, such as Ofcom. It is also finalising decisions about which online services should fall within the regulation, potential redress mechanisms for affected users, and measures to ensure that regulation is proportionate and appropriately targeted. Following the 12-week consultation, the government will publish its final proposals for legislation.
“The Internet can be brilliant at connecting people across the world – but for too long these companies have not done enough to protect users, especially children and young people, from harmful content,” said Prime Minister Theresa May. “That is not enough, and it is time to do things differently. We have listened to campaigners and parents, and are putting a legal duty of care on internet companies to keep people safe.”
“Online companies must start taking responsibility for their platforms, and help restore public trust in this technology.”
Home Secretary Sajid Javid said: “Despite our repeated calls to action, harmful and illegal content – including child abuse and terrorism – is still too readily available online. That is why we are forcing these firms to clean up their act once and for all. I made it my mission to protect our young people – and we are now delivering on that promise.”
The White Paper has been welcomed by children’s charities, including Barnardo’s and the NSPCC. NSPCC CEO Peter Wanless commented: “For too long social networks have failed to prioritise children’s safety and left them exposed to grooming, abuse, and harmful content. So it’s high time they were forced to act through this legally binding duty to protect children, backed up with hefty punishments if they fail to do so.” Meanwhile, the Children’s Commissioner for England, Anne Longfield, said that the proposed statutory duty of care was “very welcome” and should be implemented as soon as possible with a regulator that has “bite”.
The National Crime Agency also welcomed the legislation, commenting that technology capable of preventing much of this abuse already exists: “Industry must block abuse images upon detection and prevent online grooming; it must work with us to stop live-streaming of child abuse; it must be more open and share best practice,” said agency director Rob Jones.
However, Tom Watson – Labour deputy leader and shadow secretary of state for digital, culture, media and sport – criticised the proposals for taking too long to come into force and for failing to address other concerns: “We need action immediately to protect children and others vulnerable to harm,” he said. “These plans also seem to stop short of tackling the overriding data monopolies causing this market failure and do nothing to protect our democracy from dark digital advertising campaigns and fake news. This is a start but it’s a long way from truly reclaiming the web and rooting out online harms.”
Damian Collins, chair of the Digital, Culture, Media and Sport Committee, who has become an unlikely figurehead in the global effort to tackle disinformation, largely welcomed the White Paper. However, Collins said that the government should clarify how quickly platforms must remove harmful content, as well as do more to address opaque political advertising. Facebook is introducing tools to improve the transparency of political advertising ahead of May’s European Parliament elections.
“It is vital that our electoral law is brought up to date as soon as possible, so that social media users know who is contacting them with political messages and why,” he said. “Should there be an early election, then emergency legislation should be introduced to achieve this.”
The Internet Association, a US-based trade group representing internet companies, criticised the proposals for their broad scope. Daniel Dyball, UK executive director of the Internet Association, said: “The scope of the recommendations is extremely wide, and decisions about how we regulate what is and is not allowed online should be made by parliament.”
Representatives from Facebook and Twitter commented that they looked forward to working with the government to ensure that the regulations would be fair and effective.