Apple to scan iPhones, iPads for images of child sex abuse

Apple Inc said it is launching new software later this year that will analyze photos stored in iCloud Photos for sexually explicit images of children and report instances to the relevant authorities.

Apple also announced a feature that will scan photos sent and received in the Messages app to or from children to see if they are explicit [File: David Paul Morris/Bloomberg]

Apple Inc. said it will launch new software later this year that will analyze photos stored in a user’s iCloud Photos account for sexually explicit images of children and then report instances to relevant authorities.

As part of new safeguards involving children, the company also announced a feature that will analyze photos sent and received in the Messages app to or from children to see if they are explicit. Apple is also adding features to its Siri digital voice assistant to intervene when users search for related abusive material. The Cupertino, California-based technology giant previewed the three new features on Thursday and said they would be put into use later in 2021.

If the number of known sexually explicit photos of children detected in a user’s account crosses a threshold, the instances will be manually reviewed by the company and reported to the National Center for Missing and Exploited Children, or NCMEC, which works with law enforcement agencies. Apple said images are analyzed on a user’s iPhone or iPad in the U.S. before they are uploaded to the cloud.

Apple said it will detect abusive images by comparing photos with a database of known Child Sexual Abuse Material, or CSAM, provided by the NCMEC. The company is using a technology called NeuralHash that analyzes images and converts them to a hash key or unique set of numbers. That key is then compared with the database using cryptography. Apple said the process ensures it can’t learn about images that don’t match the database.
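Apple has not published sample code for NeuralHash, and the real protocol relies on cryptographic blinding so that neither the device nor the server learns anything about non-matching photos. As a rough, hypothetical sketch of the hash-and-compare idea in Swift, the snippet below substitutes a placeholder hash function and a plain in-memory set of known hashes for what is in fact a neural-network-derived hash compared under encryption; every name here is illustrative and is not Apple’s API.

import Foundation

// Hypothetical illustration only: Apple's actual system uses NeuralHash and a
// cryptographic matching protocol. This placeholder hash and in-memory set are
// stand-ins to show the hash-and-compare idea described in the article.
func placeholderImageHash(_ imageData: Data) -> UInt64 {
    // Fold the image bytes into a 64-bit value (not a real perceptual hash).
    imageData.reduce(UInt64(5381)) { ($0 &<< 5) &+ $0 &+ UInt64($1) }
}

struct KnownImageMatcher {
    let knownHashes: Set<UInt64>   // hashes of known material supplied by a child-safety group
    let reportingThreshold: Int    // matches required before human review

    // Returns true if enough of the user's photos match known hashes
    // to flag the account for manual review, as the article describes.
    func shouldFlagForReview(photos: [Data]) -> Bool {
        let matchCount = photos.filter { knownHashes.contains(placeholderImageHash($0)) }.count
        return matchCount >= reportingThreshold
    }
}

In the real system, the comparison happens in a way that keeps non-matching results hidden from both sides, which is what Apple means when it says it cannot learn about images that do not match the database.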

Apple said its system has an error rate of “less than one in 1 trillion” per year and that it protects user privacy. “Apple only learns about users’ photos if they have a collection of known CSAM in their iCloud Photos account,” the company said in a statement. “Even in these cases, Apple only learns about images that match known CSAM.”

Any user who feels their account has been flagged by mistake can file an appeal, the company said.

To address privacy concerns about the feature, Apple published a white paper detailing the technology, as well as third-party analyses of the protocol from multiple researchers.

John Clark, president and chief executive officer of NCMEC, praised Apple for the new features.

“These new safety measures have lifesaving potential for children who are being enticed online and whose horrific images are being circulated in child sexual abuse material,” Clark said in a statement provided by Apple.

The feature in Messages is optional and can be enabled by parents on devices used by their children. The system will check for sexually explicit material in photos received and those ready to be sent by children. If a child receives an image with sexual content, it will be blurred out and the child will have to tap an extra button to view it. If they do view the image, their parent will be notified. Likewise, if a child tries to send an explicit image, they will be warned and their parent will receive a notification.
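Apple has not released an interface for this Messages check. As a minimal sketch of the decision flow described above, assuming a hypothetical on-device classifier, the Swift below shows how a received image could be routed either to normal display or to the blur-and-warn path when the parental setting is enabled.

import Foundation

// Minimal sketch of the Messages flow described above. The classifier and all
// names here are hypothetical; Apple has not published this interface.
func looksSexuallyExplicit(_ imageData: Data) -> Bool {
    // Stand-in for an on-device machine-learning classifier.
    false
}

enum IncomingImagePresentation {
    case showNormally
    case blurAndWarn   // child must tap through; viewing can notify a parent
}

// Decides how to present an incoming image on a child's device when the
// optional parental setting is enabled; the analysis stays on the device.
func presentation(for imageData: Data, parentalSettingEnabled: Bool) -> IncomingImagePresentation {
    guard parentalSettingEnabled, looksSexuallyExplicit(imageData) else {
        return .showNormally
    }
    return .blurAndWarn
}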

Apple said the Messages feature uses on-device analysis and the company can’t view message contents. The feature applies to Apple’s iMessage service and other protocols like Multimedia Messaging Service.

The company is also rolling out two related features to Siri and search. The systems will be able to respond to questions about reporting child exploitation and abusive images and provide information on how users can file reports. The second feature warns users who conduct searches for material that is abusive to children. The Messages and Siri features are coming to the iPhone, iPad, Mac and Apple Watch, the company said.

Source: Bloomberg