Apple will look for prohibited content in user photos: how it will work

How did it all start?

Most cloud services, such as Dropbox, Google and Microsoft, already scan user files for content that may violate their terms of service or be potentially illegal, such as CSAM. But Apple has long resisted scanning users' files in the cloud, giving users the ability to encrypt their data before it reaches Apple's iCloud servers.

Apple will roll out a new system, called NeuralHash, in the USA first. It will appear in iOS 15 and macOS Monterey, both scheduled for release in the next month or two. Whether and when the system will be rolled out internationally is unknown. Until recently, companies such as Facebook were forced to disable their child abuse detection tools across the European Union.

How will it work?

Let's start with the fact that in the USA there is a special database of such images, collected by NCMEC and other organizations. NCMEC, the National Center for Missing & Exploited Children, is a private non-profit organization created in 1984 by the United States Congress. It is worth noting that Apple does not receive the images themselves, but their hashes: specially generated strings that will be the same for identical images and different for different ones, explains “T-Zh”.

A peculiarity of these hashes is that they do not allow the original file to be reconstructed, yet they make it possible to compare the hashes of two files with each other. At the same time, the algorithm used here is resistant to photo manipulation: for example, even if you change the colors or the size of a photo, its hash will remain the same as that of the original file.
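NeuralHash itself has not been published, but the behavior described above is typical of perceptual hashing. The sketch below uses a simple average hash rather than Apple's algorithm, and the file names are invented; it only illustrates why a resized or recolored copy of a photo can land on the same fingerprint, while the hash itself reveals nothing about the original pixels.

```python
from PIL import Image

def average_hash(path: str, hash_size: int = 8) -> int:
    """Toy perceptual hash: 64 bits that describe overall image content."""
    # Shrinking the picture to an 8x8 grayscale thumbnail discards color
    # and resolution, so the hash reflects rough structure, not exact pixels.
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    average = sum(pixels) / len(pixels)
    bits = "".join("1" if p > average else "0" for p in pixels)
    return int(bits, 2)

def hamming_distance(h1: int, h2: int) -> int:
    """Number of differing bits; 0 means the two hashes are identical."""
    return bin(h1 ^ h2).count("1")

# A rescaled or mildly recolored copy of the same photo should produce the
# same (or a nearly identical) hash, so two files can be compared without
# ever reconstructing the original image from the hash value.
# hamming_distance(average_hash("photo.jpg"), average_hash("photo_small.jpg"))
```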

The iPhone calculates similar hashes for all user photos on the device itself, without Apple's involvement. The algorithm takes into account the content of the image rather than specific pixels.

For the analysis to work, security vouchers will be uploaded to iCloud from users' Apple devices along with the photos; the hash of the photo and its visual derivative will be stored in them. Apple does not explain what exactly this derivative is. Most likely, it is a fragment of the image or a processed version of it, which will be reviewed by a company employee at the final stage of the analysis.

Security vouchers will also store fragments of a special encryption key, which is generated on the device and is unknown to Apple from the very beginning. This key encrypts the hash and the visual derivative of the photo. To decrypt the data, several fragments of such a key need to be collected.
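Apple has not published the exact format of these vouchers, so the structure below is purely illustrative: it simply records the pieces mentioned in the article, an encrypted payload (hash plus visual derivative) and one fragment of the on-device key.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SafetyVoucher:
    """Hypothetical record uploaded to iCloud alongside each photo."""
    photo_id: str             # identifier of the uploaded photo
    encrypted_payload: bytes  # perceptual hash + visual derivative, encrypted on-device
    key_fragment: bytes       # one share of the device key; useless on its own
```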

Forbes notes that a single match between a hash from the user's gallery and a hash from the database of child abuse images is not enough to draw firm conclusions. To avoid “false positives,” Apple sets a minimum number of matches needed to “get things going,” but how many matches that is has not been disclosed.

If the number of matches is sufficient, and the hash is completely identical (the photograph actually matches an image from the database), decryption will succeed. If the hash is different, the picture has not been found in the NCMEC database and the “voucher” cannot be decrypted. In other words, as Apple assures, the company will not get access to a photo until it finds a matching one in the database, Mediazona notes.
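This behavior, where decryption becomes possible only after enough matching vouchers have accumulated, is what a threshold secret sharing scheme provides. The sketch below implements the classic Shamir construction as a general illustration; it is not Apple's implementation, and the threshold of 3 out of 5 shares is an arbitrary example.

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime, large enough for a toy example

def make_shares(secret: int, threshold: int, n_shares: int):
    """Split `secret` into n_shares points on a random polynomial."""
    # Polynomial of degree threshold-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n_shares + 1)]

def recover(shares) -> int:
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = random.randrange(PRIME)
shares = make_shares(key, threshold=3, n_shares=5)
assert recover(shares[:3]) == key   # enough shares: the key is recovered
assert recover(shares[:2]) != key   # too few shares: recovery (almost surely) fails
```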

Project criticism

Cybersecurity professionals predictably greeted Apple's initiative with indignation: however carefully the photo analysis is depersonalized, it does not remove the main problem, the very fact of constant access to user files. Even though the files the algorithm analyzes are uploaded to cloud storage, the user does not actually share them with anyone.

Matthew Green, a security researcher at Johns Hopkins University, told Reuters that Apple's desire to create systems that scan iPhone users' phones for "prohibited content" could "break the dam" and lead to the US government "requiring it of everyone." He also told The Associated Press that Apple could come under pressure from other governments to scan for other kinds of information. The researcher was one of the first to report on the new technology, in a series of tweets.

No matter what Apple's long-term plans are, they are sending a very clear signal. In their (very influential) opinion, it is safe to build systems that scan users' phones for prohibited content. They are sending this signal to governments, to competing services, to China, to you personally. Whether they are right or wrong on this point hardly matters. From now on, nothing is sacred: governments [of different countries] will demand the same from everyone, and by the time we realize it was a mistake, it will be too late.

Matthew Green, professor at Johns Hopkins University

Why is this necessary?

Last year, the U.S. Department of Justice published a set of “voluntary principles” aimed at getting social media and technology companies to do more to combat child sexual exploitation and abuse. The agency urged companies to establish thorough systems for identifying illegal content, taking immediate action against it, and reporting it to the authorities. Microsoft, for example, has already created PhotoDNA to help companies identify child sexual abuse images online. Facebook and Google already have systems in place to check potentially illegal content, and Facebook has said it is working on new tools to reduce the spread of child sexual abuse images on its platform.

According to a report by the US National Center for Missing & Exploited Children, Facebook reported 20 million child sexual abuse images to law enforcement in 2020; that number includes reports from both the Facebook and Instagram platforms. In 2019, there were 16 million such images.
