AI can help solve the privacy problems it has itself created

The proliferation of AI raises a number of privacy concerns that people may not even be aware of.

On the other hand, it can also help address some of those privacy problems, say cybersecurity experts Zhuyuan Chen and Arya Gangopadhyay.

AI's privacy risks stem not only from the massive collection of personal data, but also from the deep neural network models themselves, which power most modern artificial intelligence. Data is vulnerable not just to attacks on databases, but also to "leaks" from the models that were trained on it.

Deep neural networks, sets of algorithms designed to spot patterns in data, consist of many layers. Each layer contains a large number of nodes called neurons, and neurons in neighboring layers are connected to one another. Each node, as well as the links between nodes, encodes certain bits of information. These bits are created when a special process scans large amounts of data to train the model.
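As a rough illustration of that layered structure, here is a minimal sketch in plain Python with NumPy. The layer sizes, weights, and input values are all made up for the example; a real network would learn its weights from training data.

```python
import numpy as np

# A toy deep neural network: several layers of "neurons" (nodes), where
# every node in one layer is connected to every node in the next layer.
layer_sizes = [4, 8, 8, 2]   # input -> two hidden layers -> output (arbitrary sizes)

rng = np.random.default_rng(0)
# Each connection between neighboring layers carries a weight; together the
# weights are the "bits of information" a training process would fill in.
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    """Pass an input through every layer of the network."""
    activation = x
    for w, b in zip(weights, biases):
        activation = np.maximum(0.0, activation @ w + b)   # ReLU nonlinearity
    return activation

print(forward(np.array([0.5, -1.0, 0.3, 2.0])))
```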

For example, a face recognition algorithm may be trained on a series of selfies so that it can more accurately predict a person's gender. Such models are very accurate, but they can also store too much information, effectively memorizing certain people from the training data. Attackers can then identify individuals in the training data by probing the deep neural network that classifies gender from faces.
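One simple way such an attack can work, in outline, is a membership-inference test: a model that memorized its training data tends to be noticeably more confident on those examples than on data it has never seen. The sketch below is purely illustrative; the model, the confidence call, and the threshold are placeholders, not a recipe for attacking any particular system.

```python
def model_confidence(model, image) -> float:
    """Placeholder: return the model's top predicted probability for `image`.
    In a real setting this would query the deployed classifier."""
    return model(image)

def likely_in_training_set(model, image, threshold=0.95) -> bool:
    """Crude membership-inference test: unusually high confidence can hint
    that the model memorized this example during training."""
    return model_confidence(model, image) > threshold

# Toy usage with a stand-in "model" that just returns a fixed confidence.
fake_model = lambda image: 0.97
print(likely_in_training_set(fake_model, image=None))   # True: suspiciously confident
```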

One defense that machine learning experts have come up with is adding uncertainty to the AI model, so that attackers cannot accurately predict what it will do. Will it scan a particular sequence of data? Will it run the code in a sandbox? Ideally, malicious software will not know, and will unwittingly reveal its intentions.
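A minimal sketch of that idea, assuming a detector that can choose between several analysis steps: by picking its next action at random, the defender keeps probing malware from knowing which check it will face. The action names and the sample file below are invented for illustration.

```python
import random

# Hypothetical analysis actions an AI-based detector might take.
ACTIONS = ["scan_byte_sequences", "run_in_sandbox", "check_network_calls"]

def analyze(sample, rng=random.Random()):
    """Pick the next analysis step unpredictably, so malware probing the
    detector cannot tell in advance whether it is being watched."""
    action = rng.choice(ACTIONS)
    # ... dispatch to the chosen analysis routine here ...
    return action

print(analyze("suspicious.exe"))
```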

Another way to improve AI privacy is to explore the vulnerabilities of deep neural networks. No algorithm is perfect, and these models are vulnerable because they are often very sensitive to small changes in the data they read.

These vulnerabilities can be used to improve privacy by adding "noise" to personal data. For example, researchers at the Max Planck Institute for Informatics in Germany have developed ways to alter Flickr images so that facial recognition software can no longer identify the people in them. The changes are incredibly subtle, so much so that they cannot be detected by the human eye.
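In rough outline, such protective "noise" is an adversarial perturbation: a tiny, bounded change to each pixel, computed so that a recognition model misreads the image while a person sees no difference. The sketch below shows the general shape of a gradient-sign perturbation (in the spirit of FGSM); the gradient is a stand-in, and the epsilon value is arbitrary.

```python
import numpy as np

def add_protective_noise(image, gradient, epsilon=0.01):
    """Nudge every pixel slightly in the direction that most confuses the
    recognition model, then clip back to the valid pixel range [0, 1].
    `gradient` stands in for the model's loss gradient w.r.t. the image."""
    perturbed = image + epsilon * np.sign(gradient)
    return np.clip(perturbed, 0.0, 1.0)

# Toy usage with random data standing in for a real photo and gradient.
rng = np.random.default_rng(1)
photo = rng.uniform(0.0, 1.0, size=(64, 64, 3))
fake_gradient = rng.normal(size=photo.shape)
protected = add_protective_noise(photo, fake_gradient)
print(np.abs(protected - photo).max())   # per-pixel change stays <= epsilon
```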

Another way AI can help mitigate information security problems is by keeping data confidential while models are being built. One promising development is called federated learning, which Google uses in its smart keyboard, Gboard, to predict which word to type next. Federated learning builds the final deep neural network from data stored on many different devices, rather than in one central data repository. Its key advantage is that the raw data never leaves the local devices, so privacy is protected to some degree. It is not a perfect solution, though: while the local devices perform some of the computation, they do not finish it, and the intermediate results can reveal some data about the device and its user.
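A minimal sketch of the federated idea, assuming each device trains its own copy of the model locally and sends back only updated weights, never the raw data, which a server then averages in the spirit of federated averaging. The model size, device data, and local "training" step below are all stand-ins.

```python
import numpy as np

def local_update(global_weights, local_data, lr=0.1):
    """Placeholder for on-device training: each device adjusts its own copy
    of the model using data that never leaves the device."""
    fake_gradient = np.mean(local_data) * np.ones_like(global_weights)
    return global_weights - lr * fake_gradient

def federated_round(global_weights, devices):
    """One round of federated averaging: collect updated weights (not data)
    from every device and average them into a new global model."""
    updates = [local_update(global_weights, data) for data in devices]
    return np.mean(updates, axis=0)

# Toy usage: three devices, each holding its own private data.
weights = np.zeros(5)
devices = [np.random.default_rng(i).normal(size=20) for i in range(3)]
for _ in range(10):
    weights = federated_round(weights, devices)
print(weights)
```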

Federated learning offers a glimpse of a future in which AI is more respectful of privacy. Continued research may find more ways in which AI becomes part of the solution to privacy concerns rather than their source.
