Apple to search iCloud uploads for child sexual abuse content with ‘NeuralHash’

Apple has announced a set of child safety tools intended to protect young users and limit the spread of material depicting child sexual abuse.

Among the tools is a new technology that will allow Apple to detect known child sex abuse images stored in users’ iCloud Photo accounts and report them to law enforcement. The detection process will not involve indiscriminate manual inspection of users’ iCloud content.

Instead, it will use a new tool called “NeuralHash”, which relies on a database of hashes – digital fingerprints that allow a unique piece of content to be identified but not reconstructed – representing known images supplied by child safety organisations; the hashes will be stored locally on users’ devices. Other major tech companies – including Facebook, Microsoft and Google – already use the same database to detect child sex abuse content on their own platforms.

The tool can also detect edited versions of the known images, although by design it cannot identify new child abuse content that is absent from the database.
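NeuralHash itself is a proprietary neural-network-based perceptual hash, but the underlying idea can be illustrated with a much simpler scheme. The sketch below substitutes a basic “average hash” and a Hamming-distance comparison; the file names, threshold value and known-hash set are illustrative assumptions, not part of Apple’s system:

```python
# Illustrative sketch only: Apple's NeuralHash is a proprietary
# neural-network-based perceptual hash. This substitutes a far simpler
# "average hash" to show the general principle: perceptually similar
# images (e.g. a resized or re-encoded copy) produce hashes that differ
# in only a few bits, whereas a cryptographic hash would change entirely.
from PIL import Image  # pip install Pillow

def average_hash(path: str, size: int = 8) -> int:
    """Shrink to size x size greyscale, threshold each pixel against the
    mean brightness, and pack the bits into a 64-bit fingerprint."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Hypothetical database of fingerprints of known images. In the real
# system these would be NeuralHash values supplied by child safety
# organisations and stored on the device in blinded form.
known_hashes = {average_hash("known_image.jpg")}

candidate = average_hash("edited_copy.jpg")
MATCH_THRESHOLD = 5  # bits; small edits typically flip only a few bits
if any(hamming_distance(candidate, h) <= MATCH_THRESHOLD for h in known_hashes):
    print("candidate matches a known image")
```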

The check occurs when a user attempts to upload an image to Apple’s servers; when a threshold number of matches is exceeded, the suspicious content may be manually reviewed before a report is sent to law enforcement and the user’s account is suspended. Users who believe their account has been inappropriately suspended may appeal to have it reinstated.
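In Apple’s published design the threshold is enforced cryptographically, via a threshold secret-sharing scheme, so the company learns nothing about an account’s matches until the threshold is crossed. The plain counter below only mimics the reporting control flow described above; the function names and the threshold value are assumptions for illustration:

```python
# Control-flow sketch only: a plain per-account counter standing in for
# Apple's cryptographic threshold mechanism. Names and the threshold
# value are illustrative assumptions.
from collections import defaultdict

MATCH_THRESHOLD = 30  # assumed value for illustration only

match_counts: defaultdict[str, int] = defaultdict(int)

def flag_for_review(account_id: str) -> None:
    # Placeholder: the article notes that suspicious content may be
    # manually reviewed before any report or account suspension.
    print(f"Account {account_id} queued for manual review")

def record_match(account_id: str) -> None:
    """Register one hash match; escalate only past the threshold."""
    match_counts[account_id] += 1
    if match_counts[account_id] > MATCH_THRESHOLD:
        flag_for_review(account_id)
```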

Apple emphasised that the new tool would apply only to iCloud Photos and would not allow the company or any third party to scan all the images on a user’s camera roll. The company said the system is designed to keep the chance of incorrectly flagging a given account below one in one trillion per year.
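To see how a match threshold can drive the account-level error rate that low, consider a back-of-the-envelope binomial calculation. The per-image false-match rate, photo count and threshold below are assumptions chosen for illustration, not Apple’s published parameters:

```python
# Back-of-the-envelope sketch: if each uploaded image independently
# false-matches a database hash with probability p, the chance that an
# account with n images accrues at least t false matches is a binomial
# tail. All numbers are illustrative assumptions.
from math import comb

def prob_false_flag(n: int, p: float, t: int, extra_terms: int = 40) -> float:
    """P(at least t false matches among n images). The tail is truncated
    after extra_terms because successive terms shrink by roughly n*p/k."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(t, min(n, t + extra_terms) + 1))

# 10,000 photos, a one-in-a-million per-image false-match rate, threshold 10:
print(prob_false_flag(10_000, 1e-6, 10))  # ~2.8e-27, far below one in a trillion
```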

The system seeks to find a compromise between child safety and user privacy, the latter being a pillar of the brand. John Clark, CEO of the National Center for Missing & Exploited Children in the US, welcomed the new feature as a game changer: “With so many people using Apple products, these new safety measures have lifesaving potential for children who are being enticed online and whose horrific images are being circulated in child sexual abuse material. The reality is that privacy and child protection can co-exist.”

While the reception to Apple's announcement has largely been positive, some privacy campaigners expressed concern that the tool could eventually be expanded to search phones more generally for prohibited content, such as politically sensitive speech.

Johns Hopkins University security researcher Matthew Green commented: “[Apple] has sent a very clear signal. In their (very influential) opinion, it is safe to build systems that scan users’ phones for prohibited content. This will break the dam – governments will demand it from everyone.”

The Electronic Frontier Foundation described the move as a “shocking about-face for users who have relied on the company’s leadership in privacy and security”, while the Center for Democracy and Technology called on Apple to abandon the changes, which it said would compromise the company’s guarantee of end-to-end encryption.

The new tool will be introduced as part of the iOS 15 and iPadOS 15 software updates due later this year. It will initially be rolled out in the US, with plans to expand to other markets later. It will be joined by features in the Messages app that warn children and their parents when sexually explicit images are sent or received.
