Facebook apps used in more than half of online child sex crimes
More than half of the online child sex crimes in one year took place on Facebook-owned apps, according to data from the NSPCC, as the charity called for more to be done to tackle abuse in private messaging.
Facebook has previously revealed plans to make messaging across its apps, including Instagram and Facebook Messenger, end-to-end encrypted like another of its services, WhatsApp, in order to boost user privacy. End-to-end encryption is the practice of securing communications from everyone but the participants, including the platforms hosting the conversation.
The children’s charity argued that these latest figures, gathered through Freedom of Information requests to police forces, show that Facebook’s encryption plans will leave children at greater risk and accused the social media giant of “turning back the clock on children’s safety”.
The NSPCC said the data it received showed 9,477 instances of sexual or indecent image offences against children were recorded by police between October 2019 and September 2020 where the communication platform was known, with 52 per cent taking place on Facebook-owned apps.
The figures showed that Instagram was used more than any other Facebook platform – in more than a third of all instances, ahead of Facebook Messenger and WhatsApp. The data were gathered from 35 police forces in England, Wales and the Channel Islands.
The NSPCC argued that should Facebook go ahead with its encryption plans, many of these offences could go unreported in future unless new safeguards were put in place.
As a result, the charity has urged the UK government to strengthen the powers of the forthcoming Online Safety Bill to allow Ofcom, the proposed regulator, to take action against social media firms whose design choices could put children at risk.
It argues that although end-to-end encryption offers a number of benefits, including improved privacy, it will hinder the ability of platforms and law-enforcement agencies to identify and disrupt child abuse.
“Facebook is willingly turning back the clock on children’s safety by pushing ahead with end-to-end encryption despite repeated warnings that their apps will facilitate more serious abuse more often,” said Andy Burrows, head of child safety online policy at the NSPCC.
“This underlines exactly why [culture secretary] Oliver Dowden must introduce a truly landmark Online Safety Bill that makes sure child protection is no longer a choice for tech firms and resets industry standards in favour of children.
“If legislation is going to deliver meaningful change it needs to be strengthened to decisively tackle abuse in private messaging, one of the biggest threats to children online.”
Last month, a senior official at the National Crime Agency said Facebook’s encryption plan “poses an existential threat to child protection”.
In response to the research, a Facebook company spokesperson said: “Child exploitation has no place on our platforms and we will continue to lead the industry in developing new ways to prevent, detect and respond to abuse.
“For example, last week we announced new safety features on Instagram, including preventing adults from messaging under-18s who don’t follow them.
“End-to-end encryption is already the leading security technology used by many services to keep people, including children, safe from having their private information hacked and stolen. Its full rollout on our messaging services is a long-term project and we are building strong safety measures into our plans.”
In response to its latest report and looking to the Online Safety Bill, the NSPCC said it was calling on the government to shift the onus onto tech firms to show they were identifying and mitigating risk in products before rolling them out, rather than relying on the regulator to prove risk.
It was also calling for the regulator to be given the power to force firms to act before harm has occurred, rather than after, and to be able to consider design decisions that could be deemed risky to users.
A government spokesperson said: “Our Online Safety Bill will bring in world-leading measures to protect children and ensure there is no safe space for paedophiles to hide on social media.
“The burden will fall solely on social media companies to prove they’re doing all they can to keep children safe and they will not be able to use encryption as an excuse.
“End-to-end encryption risks blinding both social media companies and law enforcement to these dreadful crimes and tech companies must put public safety at the heart of their system designs or face heavy fines.”
Facebook recently announced improved safety features for Instagram aimed at teenage users, restricting adults' ability to contact teenagers who do not follow them on the platform. Under the new measures, Instagram will also send safety alerts to users aged under 18, encouraging them to be cautious in conversations with adults to whom they are already connected but who may have exhibited potentially suspicious behaviour, such as sending a large number of friend or message requests to teenage users.
Instagram said it will also be making it more difficult for adults to find and follow teenagers on the site by restricting teen accounts from appearing in the 'Suggested Users' section and hiding content from teenage users in both the 'Reels' and 'Explore' sections.
Younger users are also being encouraged to make their accounts private. Instagram said it was developing new artificial intelligence and machine-learning technology to help it better identify the real age of younger users, having acknowledged that some young people lie about their age in order to access the platform. The Facebook-owned site’s terms of service require all users to be at least 13 years old to hold an account.
However, even as it strengthens protections for teenage users above the site's minimum age, it emerged this week that Facebook is planning a new version of Instagram catering specifically to the under-13 market.
According to an internal Instagram post obtained by BuzzFeed News, Facebook has "identified youth work as a priority for Instagram and [has] added it to our H1 priority list”.
Writing on an employee message board on Thursday, Vishal Shah, Instagram’s vice president of product, said he was "excited" about the prospect and that Instagram would "be building a new youth pillar within the Community Product Group", including "building a version of Instagram that allows people under the age of 13 to safely use Instagram for the first time”.
According to the internal post, the work would be overseen by Adam Mosseri, head of Instagram, and led by Pavni Diwanji, a vice president who joined parent company Facebook in December. Diwanji previously worked at Google, where she oversaw such child-focused products as YouTube Kids.
Facebook already offers a child-oriented version of its Messenger service, called Messenger Kids, designed for children aged between six and 12.
A recent study of Australian teenagers' internet usage, published by the Australian eSafety Commissioner in February, found that 57 per cent of Australian teenagers use Instagram; 30 per cent reported having been contacted by a stranger, while 20 per cent reported having been sent inappropriate unwanted content on the social media sites they used.
Commenting on a parent-controlled version of Instagram for 'tweenagers', Cindy White, CMO at digital identification company Mitek, said: “Child online safety needs to evolve and we all have a role to play: parents, teachers and certainly the social media networks that our children love. Children under 13 are not thinking about their identities being stolen. Rather, their focus is on the identities they can create through these social mediums. Kids will persist in asking their parents if they can join social media apps because it’s a social currency and a digital enabler of popularity (theoretically).
“Imposters understand what is needed to get the information they desire from their targets. Simple behaviours of different demographics are easy to mimic and particularly in children, this may easily go unnoticed. Details from the 2021 Javelin Identity Fraud Study reveal that fraudsters will target Gen Zs using the social media tools and messages that most appeal to them. This includes social 'friend' requests, DMs, tech support and P2P payments.
“Even while Facebook claims that the service must be 'managed by parents', we know that kids will find the means to access their accounts without adult supervision. Fraudsters expect this and will strategise accordingly. We encourage parents to take an active role in providing feedback to the social media platform on the security measures in place to protect the children.
“As children spend more time online and unaware of their vulnerability, helping to protect your child’s online identity can start with simple education. Starting early helps you stay ahead of fraudsters and building good identity protection practices into your child’s daily life when they are young helps prepare them to safely navigate more sophisticated environments in the years ahead.”