Child watching something on a laptop in the dark

The child safety protocol

Image credit: Getty Images

In dark corners of the internet, there have been horrific consequences to children living more online during the coronavirus lockdown. Are tech giants doing enough to protect them? And will greater privacy measures allow abuse to go unchecked?

Everyone’s been online for longer during lockdown. For children, that can be akin to standing alone on a street corner at night, say activists. Some kids are being cajoled and tricked into providing images and videos, which make their way into the hands of child abusers.

Lockdown has proved a “perfect storm”, says Andy Burrows, NSPCC head of child safety online policy – and reports of child sexual abuse have spiked since March. “Many children will be feeling more anxious, vulnerable or sad. Abusers were quick to mobilise – we’re starting to see those [lockdown] risks translate into harm. No one foresaw the consequences of this pandemic – but tech firms have failed to fix the roof before the clouds rolled in.”

This is a problem chief constable Simon Bailey is all too familiar with. He’s the National Police Chiefs’ Council lead for child protection, and a concerned grandparent.

“Too many parents believe a child online at home in their bedroom is ‘safe’,” he said at a safeguarding seminar hosted by Impero Software. “I’ve met parents – professionals – whose children have been subject to the most appalling abuse, and they didn’t know it was taking place. Child abuse knows no social barriers.”

Long before lockdown, one child – whom UK charity the Internet Watch Foundation (IWF) calls Becky – began appearing online. “She’s still in primary school,” says IWF’s report. “About 10 or 11... we see her green eyes firmly fixed on messages being written to her at the bottom of the screen.”

Ever since Facebook announced in March 2019 it would introduce end-to-end encryption on Facebook Messenger, campaigners and governments have lobbied against it – arguing that this will give abusers such as those preying on Becky a safer place to hide. Facebook has insisted it will take its time to “get it right” before introducing this level of privacy.

Becky is clearly being directed, says IWF, which uses AI to help analysts find and remove videos and images of children suffering sexual abuse. Becky’s bathroom is “clean and bright”, and she stops filming if she hears someone outside the door. In a third of the recordings they see, she’s encouraged to insert everyday items into her vagina. “In legal terms this is category A penetrative sexual activity,” says the charity. “We doubt she knows live streaming was being recorded, copied and shared.”

IWF, who work with leading tech platforms, have never found Becky, but they continue to remove her images – along with tens of thousands of other instances of child sexual abuse – each year. Most victims are girls aged 11 to 13. And there’s seemingly insatiable demand for abusive images of children – they’re the price paedophiles pay to gain entry into abusers’ communities.

By the time images of Becky, and those of thousands of others, emerge, it’s too late, say child protection charities – how can we stop this happening in the first place?

“There’s been a huge increase in volumes of self-generated content every year and we’re expecting this has got worse in lockdown,” says Fred Langford, IWF’s chief technical officer and deputy chief executive. “It can take up to six months before some images surface on the web.” A decade ago it was a struggle to upload photos from a phone – now it’s too easy, he says. “It’s possible individuals believe photos are private, but they might be posted online automatically.”

Our age is marked by the rise of the “super predator”, warns Jonny Pelter, founder of SimpleCyberLife.com. “They are now incredibly organised, and they teach each other how best to lure and target kids.”

In a four-week period, there were 8.8 million attempts to access child abuse images across UK networks, according to data from IWF. At a global level, the number of suspected child abuse cases reported to the US non-profit National Center for Missing & Exploited Children (NCMEC) more than doubled, to over two million, in the first four weeks of lockdown. “Analysis from [EU law-enforcement agency] Europol identified as soon as lockdown began that predators were saying this was an opportunity,” says Burrows.

In May, an online grooming manual was discovered by authorities in Australia, where investigators say they’ve seen a jump in searches by child sexual predators seeking to learn how to abuse children. The handbook detailed how to encourage children to share sexual images and videos, rather than attempting face-to-face abuse, which was deemed riskier amid lockdown restrictions.

Teachers are often first to realise something is wrong – but with most children at home until September, and more limited contact with social workers, vital opportunities to spot problems are being missed.

In February, 129 child protection groups from more than 100 countries wrote an open letter urging Facebook to reconsider plans for end-to-end encryption on Messenger. They argue it would “blindfold” the platform, allowing it to flourish as a “one-stop shop” for abusers. “[This] could mean we lose 70 per cent of reports [made by Facebook to child authorities] – that’s 12 million a year,” says Burrows. In 2018, Facebook reports resulted in the safeguarding of 3,000 children in the UK. “But we’d suddenly lose tools built up over years to identify sexual predators.”

Many observers say the move is motivated less by privacy and more by a desire not to fall foul of legislation around privacy and responsibility for individuals’ data. “It gets rid of a lot of issues simultaneously,” says Pelter. “Once data is encrypted, law enforcement can’t subpoena [Facebook] as it will no longer have access.”

End-to-end encryption exists already – on a lesser-known function on Facebook and on Facebook-owned WhatsApp, Instagram, and other channels – so why the storm of outrage against Messenger? Because, says the NSPCC, Facebook provides a steady supply of children to contact. “Aided by ‘friend of friend’ and ‘recommended friend’ algorithms, an abuser can send a large number of friend requests and begin building relationships,” says Burrows.

Currently, abusers need to move to an encrypted platform, which they can’t do without a phone number or email. That friction is crucial: it can flag danger at a critical moment. If Messenger is encrypted, abusers can switch seamlessly into unhackable communication. “All that activity will take place under a single encrypted cloak,” says Burrows.

The new ‘Messenger Rooms’ live video chat function, introduced in April, allows group video chats to be joined by up to 50 people – and could be disastrous for vulnerable children, says Burrows. “Rooms has been rushed out to compete with Zoom. It would allow a groomer to get a child on video chat and immediately livestream them – without their knowledge – to up to 50 people who don’t even have to have Facebook accounts,” Burrows explains. “Think how difficult it is for a child to say ‘no’ to directions from an adult in the heat of a live situation.”

Facebook confirms it doesn’t listen in to chats on Messenger Rooms, but violators of community standards are blocked from using the function, as are fake accounts. “We’re working on ways to do more with [user-led] reporting,” says a spokesperson.

When children encounter abuse, material is often used to blackmail them, says Dr Juliane Kloess, lecturer in forensic psychology at the University of Birmingham. Sometimes they don’t even realise a crime has been committed against them – they’ve been befriended and manipulated by an abuser who is in online contact daily, sometimes for hours, often communicating with several children at the same time. Predators often pose as younger than they are, and children may feel complicit rather than victims – making them less likely to report these encounters. While research shows online encounters are as damaging as those in the real world, Dr Kloess questions whether the law truly understands the nature of the abuse or reflects the harm caused.

What are the solutions? Trying to balance the demands of a growing privacy lobby and the ability to police abusers “is like trying to square a circle”, says David Emm, senior security researcher at Kaspersky Lab. “No one in their right minds wants to see opportunities to abuse children.” But building a ‘back door’ – which privacy advocates vociferously reject – would allow potential exploitation by cyber criminals.

There are ways around encrypted communications, say Emm and Langford: look at the bigger picture – the structure of communications, the when, where, to whom and how often – rather than the content. Individuals want their data kept safe, but “we can’t have it both ways”.

In May, Facebook announced new tools to alert users to potential abuse with warnings within Messenger itself. A machine-learning mechanism could spot and flag suspicious behaviour – an adult sending large amounts of message requests to young people, for instance. Detection is based on metadata rather than content of messages.
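To make that metadata-only approach concrete – the same kind of “bigger picture” analysis Emm and Langford describe – here is a minimal sketch in Python of behavioural flagging that counts how many message requests an adult account sends to under-18s in a given window, without ever reading message content. The field names, threshold and window are invented for illustration; this is not Facebook’s actual detection system.

from collections import defaultdict
from dataclasses import dataclass

@dataclass
class MessageRequest:
    sender_id: str
    sender_age: int
    recipient_age: int
    timestamp: float  # seconds since epoch

def flag_suspicious_senders(requests, window_secs=7 * 24 * 3600, max_to_minors=20):
    """Flag adult accounts sending many message requests to under-18s.

    Only metadata (who contacted whom, and when) is examined; message
    content is never read. Thresholds here are hypothetical.
    """
    latest = max((r.timestamp for r in requests), default=0.0)
    counts = defaultdict(int)
    for r in requests:
        if latest - r.timestamp <= window_secs and r.sender_age >= 18 and r.recipient_age < 18:
            counts[r.sender_id] += 1
    return {sender for sender, n in counts.items() if n > max_to_minors}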

This could be a useful tool, says Burrows, but it won’t mitigate the impact of encryption. “Putting the onus on a child to block their abuser shows a misunderstanding of grooming at best and a ‘pass the buck’ attitude at worst.” It’s more like tidying the edges of a flawed design.

False identities and fake accounts aren’t allowed, says a company statement, and violators will be acted on. Facebook reports any suspected child exploitation to NCMEC, and uses AI to identify and triage harmful content. “We’re committed to building strong safety measures into our plans,” says a Facebook spokesperson, who says encryption keeps people safe from hackers and criminals.

“Facebook’s safety and security team has tripled to some 35,000 people in recent years.” But more content moderators have been working from home during the pandemic, which the NSPCC fears will limit their reach.

“If you were to ban encryption on Messenger, there will always be a way around it,” says privacy advocate Paul Bischoff. Child abuse, he believes, is a problem better tackled at source. “Encryption can’t be banned, it’s just not feasible. At some point the laws of maths will supersede the laws of man.”

A ‘back door’ – as lobbied for by law-enforcement agencies – within tech platforms would have sinister consequences. “Other people will find it, whether they are a nation state or a hacker. It makes everyone vulnerable,” says Bischoff. If people know they are being watched, they behave differently. It has a “chilling” effect upon free speech.

There are, agrees Langford, plenty of end-to-end encryption apps. “It’s just the sheer number of people on Messenger that makes it a bigger issue.”

Is this “military grade” level of security appropriate for social media and the average Joe, asks Pelter. “End-to-end encryption is like using a sledgehammer to crack a nut,” he says. “And it’s a powerful tool for sexual predators and terrorists. It’s not appropriate for privacy to become a right, come hell or high water.” Content is secure in transit, but devices are still vulnerable – we misunderstand levels of security, he says. “Everyone is complaining they don’t have encryption on Messenger but most don’t even have antivirus software or screen locks on their phones.”

Now the NSPCC and other signatories to the open letter are calling on Facebook to fulfil certain obligations before introducing end-to-end encryption. These include ensuring encryption doesn’t remove Facebook’s ability to scan for child abuse images and identify abuse. They want a voluntary duty of care to protect children in any design decisions involving encryption – but critically, they want the roll-out halted until sufficient safeguards are in place.

Engineers and designers also need to think more deeply when they are creating products, says Langford. “Don’t just concentrate on the technology, but on the design decisions – work through unintended consequences before you implement it.”

Last year, the government published the Online Harms White Paper, which proposes a new duty of care for parts of the tech sector; it is currently mulling its response to the consultation.

UK law-enforcement, says Bailey, is among the best in the world at tackling online threats, but too many children are still suffering. “Numbers keep on growing and growing. So much more has to be done by tech companies – not just during lockdown. There’s still too much risk.”

If you are worried about a child’s safety, please call the NSPCC helpline on 0808 800 5000 or email help@nspcc.org.uk

The NSPCC also offers parents and guardians guidance and advice on how to better protect their children on the internet.

 

Artificial intelligence

A game of cat and mouse

There are countless images of child abuse online, on the dark web and on social media. A decade ago, child abusers were careless about security and it was easier to track down shared abuse on forums and sites.

But paedophiles have grown far more cunning, though they are still driven by a need to share. Now 14 analysts at the UK’s Internet Watch Foundation (IWF) use artificial intelligence to root out millions of images and videos of abuse every year. “We invest heavily in technology, we couldn’t do what we do without it,” says a spokesperson. “But it can’t replace human judgement.”

Every image has a unique digital fingerprint – a hash – which allows it to be identified without having to look at it. When new images emerge, IWF uses tech such as Microsoft’s PhotoDNA and MD5 to create a hash, grade it and add it to a database. Last year, IWF agreed to pool hashes with the US agency NCMEC, which helps internet companies to identify abuse.
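A minimal sketch of what hash-based fingerprinting looks like in practice, using Python’s standard hashlib and an in-memory set standing in for the hash database (names are illustrative, not IWF’s system). Note that a cryptographic hash such as MD5 only matches byte-for-byte copies; perceptual hashes such as PhotoDNA are designed to survive resizing and re-encoding, and are not reproduced here.

import hashlib

def fingerprint(image_bytes):
    """Return the MD5 hex digest of an image's raw bytes - its 'hash'."""
    return hashlib.md5(image_bytes).hexdigest()

def is_known_image(image_bytes, known_hashes):
    """Check an image against a database of known hashes without viewing it."""
    return fingerprint(image_bytes) in known_hashes

# Usage sketch: load the hash list once, then test each new image.
# known = {line.strip() for line in open('known_hashes.txt')}
# if is_known_image(data, known): flag it for analyst review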

IWF has developed an intelligent crawler, or bot, which can methodically browse targeted areas of the internet, sparing analysts the trauma of having to view depraved images and videos. Just last year, IWF’s tech crawled almost 72 million webpages and two-thirds of a billion images.

IWF’s crawler is loaded with more than 470,000 hashes of known child sexual abuse images. It compares each image it finds to hashes of known images of abuse, so duplicates can be removed.
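The matching step the crawler performs can be illustrated with a short loop: fetch each image, hash it, and check the digest against the set of known hashes. This is a sketch of the comparison logic only – the URL list, timeout and reporting step are assumptions, not IWF’s implementation.

import hashlib
import urllib.request

def crawl_and_match(image_urls, known_hashes):
    """Download each image, hash it, and report any matches against known hashes."""
    matches = []
    for url in image_urls:
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                data = resp.read()
        except OSError:
            continue  # unreachable or timed out: skip and move on
        if hashlib.md5(data).hexdigest() in known_hashes:
            matches.append(url)  # duplicate of a known image: queue for removal
    return matches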

IWF has some 140 members, from tech giants such as Google, Amazon, Apple and Microsoft to gaming platforms, Zoom, mobile phone operators and more.

It’s not just images that the technology hunts for – it’s code word combinations and slang developed by paedophiles to refer to types of victims or abuse which aim to evade scrutiny. These words change constantly, and IWF is adding thousands more to its list of search terms. “Our analysts need to have rigorously tested them out manually before they go on the list to make sure they don’t end up blocking perfectly legitimate searches,” says an IWF spokesperson.
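The workflow described – candidate slang terms held back until analysts have vetted them – amounts to a two-stage list. The sketch below, with invented function names, shows the idea: only approved terms are promoted to the active list that search queries are checked against.

def build_active_list(candidate_terms, analyst_approved):
    """Promote only manually vetted terms to the active search-term list."""
    return {t.lower() for t in candidate_terms if t in analyst_approved}

def query_is_flagged(query, active_terms):
    """True if any vetted term appears as a whole word in the query."""
    return bool(set(query.lower().split()) & active_terms)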

IWF is tipped off mostly by anonymous reports, as well as by tech companies and by INHOPE – a global network of hotlines. Its analysts also proactively search out instances of abuse. Once it identifies criminal content, IWF will inform police and work to remove it.

A human moderator must step in to identify situations which could, and sometimes do, lead to the rescue of a child, says IWF. They might look at what a child is wearing, or what products are visible, to help identify who a victim might be and where they are.

“We are aware hash matching has changed the behaviour of some paedophiles,” says Fred Langford, chief technical officer and deputy chief executive of IWF. “So having all these different data sets does help to identify more abuse.”
