AI-based software could block livestreamed graphic content
A UK start-up is developing live-threat detection software for identifying sexual and violent content in real time, with a view to using the software on children’s devices to minimise opportunities for child sexual exploitation.
According to the Internet Watch Foundation – a charity which aims to render online child sexual abuse content as inaccessible as possible – 29 per cent of the child sexual abuse content it acted on last year was self-generated, with this proportion rising steeply. This suggests that software which prevents children from creating, sending, or receiving graphic content could be a valuable safeguarding tool.
This software, which is in development at UK start-up SafeToNet, could be installed on children’s phones to detect and block graphic content featuring sexual or violent images “before any damage is done”. It could also be used by social media companies to prevent graphic content being uploaded and shared, such as by interrupting violent livestreams.
The software (‘SafeToWatch’) analyses video content frame by frame to assess its risk, identifying high-risk images with a machine-learning algorithm. It has also been trained to detect graphic content in cartoon form, and will be trained to identify gore, weaponry, and extreme violence.
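SafeToNet has not published implementation details, but a frame-by-frame pipeline of this kind might be sketched as follows. All names here are hypothetical, and `score_frame` is a trivial stub standing in for the real trained model:

```python
# Illustrative sketch only: a frame-by-frame risk scan, where each frame
# is scored by a classifier and the stream is interrupted at the first
# frame whose score crosses a threshold.

RISK_THRESHOLD = 0.8  # assumed cut-off, not taken from the article

def score_frame(frame) -> float:
    """Placeholder for an ML model returning a risk score in [0, 1].

    A real system would run the frame's pixels through a trained
    classifier; this stub just flags frames labelled 'graphic'.
    """
    return 0.95 if frame.get("label") == "graphic" else 0.05

def scan_stream(frames):
    """Scan frames in order and return the index of the first
    high-risk frame (so the caller can block or interrupt the
    stream), or None if every frame scores below the threshold."""
    for i, frame in enumerate(frames):
        if score_frame(frame) >= RISK_THRESHOLD:
            return i
    return None

stream = [{"label": "ok"}, {"label": "ok"}, {"label": "graphic"}]
print(scan_stream(stream))  # → 2 (third frame triggers the block)
```

Scanning per frame, rather than waiting for a whole clip, is what allows the kind of in-the-moment intervention the article describes.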
SafeToNet has not yet decided what measures should be taken once a threat is detected, but options could include locking the device for a period of time, blocking the app being used for filming, or blocking the phone’s camera.
In an initial analysis using millions of images and videos, the software detected 92 per cent of content involving nudity and 84 per cent of violent examples. In November, 2,000 families will start testing the software, with the start-up planning a release by mid-2021.
According to PA News, the software could be used by broadcasters to block out unforeseen violent content which may appear in frame while correspondents report on live events, or by video-conferencing companies to filter livestreams.
SafeToNet CEO Richard Pursey told PA News that the technology could help prevent grooming, bullying, and sexual exploitation before it is too late.
“There’s no point you being told that yesterday your 12-year-old son sent a naked picture of himself, because it’s too late you know, you’ve squeezed that tube of toothpaste, the paste has come out, and you can’t put it back in again,” said Pursey. “The ability to detect, to analyse live-video capture in the moment, and then do something if a risk is detected, for us that’s pioneering – we don’t know of another company in the world that’s done that.”
Internet Watch Foundation deputy chief executive Fred Langford told PA News that software like this should “absolutely” be pre-installed on all devices for children at point of sale.
“Everything is moving towards end-devices, and this piece of software has positioned itself in the perfect place. From what I saw in the demonstrations, it would absolutely stop anyone from being able to view potentially illegal content on their phone and also to take those pictures and upload them anywhere else,” Langford said. “The flip side is people could run it the other side to measure what people are doing as far as uploading content.”
He added that tools like these could help minimise exposure of moderators to traumatising content which could have a serious impact on their mental wellbeing.
Under the government’s proposed Online Harms legislation, social media companies will be given a statutory duty of care to their users, forcing them to remove harmful content such as terrorist propaganda and livestreamed suicides more proactively. Facebook and other social media companies tend to use a combination of human content moderators, who review flagged content, and automatic content moderation tools.