Apple Posts Child Safety FAQ To Quell Concerns Over iCloud Photo Scanning And Privacy

Late last week, we covered Apple’s new plan to screen text messages sent to children for explicit images and to scan photos uploaded to iCloud for child sexual abuse material. The announcement quickly garnered quite a bit of attention and concern, contrary to what Apple likely wanted. Now, the Cupertino-based company has published a frequently asked questions document in hopes of quelling those concerns.

On August 5th, Apple announced that it would begin scanning images sent to children for sexually explicit material. Such photos would be blurred, and a warning about the image’s content would be shown to the child. If the child decided to view the content anyway, their parents would be alerted, provided they had enabled the feature. The feature works similarly in reverse if a child tries to send an explicit image. Separately, photos being uploaded to iCloud would be scanned by an on-device process that compares each image’s unique fingerprint against a list of fingerprints of known child sexual abuse material (CSAM). If a match was found, the flagged content would go to manual review and, if warranted, be reported.
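Apple has not published the matching code itself, but the flow it describes (fingerprint each upload on device, compare against a fixed list, and escalate to human review only after a threshold of matches accumulates, per Apple’s technical summary) can be sketched roughly as below. Everything in this sketch is an assumption for illustration: Apple’s actual system uses its proprietary NeuralHash perceptual hash rather than SHA-256, and the fingerprint set, threshold value, and function names here are placeholders, not Apple’s.

```python
import hashlib
from pathlib import Path

# Hypothetical database of fingerprints for known CSAM; in Apple's design
# this list is supplied by child-safety organizations, not by Apple.
KNOWN_FINGERPRINTS: set[str] = set()

# Matches that must accumulate before anything is surfaced for manual
# review. Apple described such a threshold; this value is illustrative.
REVIEW_THRESHOLD = 10


def fingerprint(image_path: Path) -> str:
    """Compute a fingerprint for an image file.

    SHA-256 is a stand-in so this sketch runs without extra dependencies;
    Apple's real system uses a perceptual hash ("NeuralHash") that tolerates
    resizing and other minor edits, which a cryptographic hash does not.
    """
    return hashlib.sha256(image_path.read_bytes()).hexdigest()


def count_matches(upload_queue: list[Path]) -> int:
    """Count queued iCloud uploads whose fingerprint is a known match."""
    return sum(fingerprint(p) in KNOWN_FINGERPRINTS for p in upload_queue)


def needs_manual_review(upload_queue: list[Path]) -> bool:
    """Flag an account for human review only once the threshold is crossed."""
    return count_matches(upload_queue) >= REVIEW_THRESHOLD
```

The threshold is the notable design choice: a single stray match reveals nothing, and only an accumulation of matches triggers any human involvement.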


Following this announcement, there was much concern about the privacy of such systems and the precedent they set. First and foremost, Apple clarified that the two features are separate. The company also has no access to the data unless a manual review is required, and even then the review covers only the flagged CSAM photo information, not the photos themselves. Apple further assures users that the changes do not break end-to-end encryption in Messages, and that it will not scan all photos on an iPhone or iPad, only those uploaded to iCloud Photos.

Even whistleblower Edward Snowden was rather concerned

Moreover, Apple explained that it would not use the iCloud scanning technology to scan for anything else, even under governmental demands. The FAQ states that Apple has “faced demands to build and deploy government-mandated changes that degrade the privacy of users before, and have steadfastly refused those demands,” and that it will continue to refuse them. Furthermore, the FAQ notes that the CSAM detection process cannot be tripped by non-CSAM images, so a malicious image could not be injected to frame a user, either.

Even if all of this sounds reassuring, the system still warrants an independent review, as taking Apple’s word at face value is not a great idea. That said, it would not be in Apple’s business interest to undermine the privacy protections its customers seek out in the first place. Either way, let us know what you make of Apple’s FAQ responses in the comments below.