Crime fighter or a convenient way to pay for transport: should we be afraid of AI in the subway?

Digitalization cannot be stopped

The digitalization of cities is only gaining momentum. No one is surprised anymore by smart cameras monitoring the streets of megacities, Wi-Fi access in public places, or services for handling paperwork and receiving medical care remotely. Now artificial intelligence is penetrating transport infrastructure as well: at Moscow metro stations, you can pay for a trip with biometric data. In the first month and a half of Face Pay alone (automatic entry through a turnstile with facial recognition), more than 160 thousand people used the system. Connecting to it is simple: the passenger links a Troika card to the Moscow Metro app and then uploads a photo. According to experts, within 2-3 years up to 15% of metro passengers will be using Face Pay, and given the current difficulties with paying through Apple Pay and Google Pay, the share may well turn out to be even higher.

It is very convenient: to get on the subway, you just need to look at the camera; there is no need to take out a wallet, smartphone, or bank card. But skeptics of new technologies are concerned. They believe it is dangerous to entrust the verification and analysis of biometric data to artificial intelligence and a third-party provider: it is unclear whose hands the data will end up in and how it might be used against its owner, for example, if someone takes out a loan or commits a crime under another person's identity. Science fiction writers go even further and warn of greater trouble: what if AI rises up and starts hunting people, tracking them down by the retinas scanned in the subway? Experts, however, note that the threat of artificial intelligence is greatly exaggerated in the mass consciousness.

A machine uprising is impossible

"Intelligence" in the term AI - marketingexaggeration. There are still no mathematically clear criteria for what is considered intelligence, but most biologists and philosophers agree that intelligence is the ability to successfully solve non-trivial problems. At a higher level, you can independently set tasks that you have never met before and solve them. No computer system in the foreseeable future will be able to do this. No one has ever set a goal to create the so-called strong AI (Strong AI), because the potential economic effect of such development is less than the funds needed for it.

Already today, AI can recognize faces by searching for characteristic features

Today, AI de facto means various implementations of machine learning systems. They are able to find regularities in data and recognize patterns. For example, a three-year-old child confidently distinguishes a cat from a dog in a photo; a neural network achieves the same only after being "fed" several tens of thousands of images, some of them correctly labeled "cat" and some "dog". Already today, AI can recognize faces by searching for characteristic features, and it does so with an accuracy of 96-98%; such results can be achieved even when people are wearing standard medical masks.
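The "characteristic features" a face recognition system compares are usually numeric feature vectors (embeddings) produced by a neural network. A minimal sketch of the matching step, with toy 4-dimensional vectors and an arbitrary threshold standing in for the hundreds of dimensions and tuned thresholds of a real system:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_same_person(embedding, reference, threshold=0.8):
    """Declare a match when the vectors are similar enough.

    The threshold trades off false accepts against false rejects;
    0.8 here is purely illustrative.
    """
    return cosine_similarity(embedding, reference) >= threshold

# Toy "embeddings"; real systems derive much longer vectors
# from the image with a neural network.
stored = [0.9, 0.1, 0.4, 0.2]       # photo uploaded at enrollment
camera = [0.88, 0.12, 0.41, 0.19]   # same person, slight variation
stranger = [0.1, 0.9, 0.2, 0.8]

print(is_same_person(camera, stored))    # True: vectors nearly parallel
print(is_same_person(stranger, stored))  # False: low similarity
```

This also shows why masks degrade accuracy only moderately: the comparison is tolerant of partial variation in the vector, as long as enough of the remaining features still line up.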

Trust, but to whom?

What happens after the camera identifies a person? That depends on those who designed the system and set the tasks for the IT specialists. You can link the identified person to a database and open the metro turnstile in front of them, deducting the fare from their account. You can match their biometrics against another database and, after asking "Do you really want to take out this loan?" in a synthesized voice, wait for the answer "Yes" and transfer the agreed amount to their account. In any case, the decisions that follow face recognition are not made by AI on its own; they depend on algorithms implemented by living programmers.
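The turnstile scenario can be sketched as ordinary code: everything after recognition is a hand-written rule, not a model's judgment. The function name, fare amount, and account structure below are invented for illustration:

```python
def process_passenger(face_id, accounts, fare=46):
    """Decide what the turnstile does after the camera returns a face ID.

    'accounts' maps a recognized face ID to an account balance.
    Every branch here is a policy choice made by programmers:
    what to do with unknown faces, what to do on low balance.
    """
    if face_id not in accounts:
        return "deny: not enrolled"
    if accounts[face_id] < fare:
        return "deny: insufficient funds"
    accounts[face_id] -= fare          # deduct the fare from the account
    return "open turnstile"

accounts = {"passenger-42": 100}
print(process_passenger("passenger-42", accounts))  # open turnstile
print(accounts["passenger-42"])                     # 54, fare deducted
print(process_passenger("passenger-99", accounts))  # deny: not enrolled
```

The recognition model only supplies the `face_id`; whether that leads to an open gate, a declined entry, or a loan confirmation is decided entirely by code like this.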

The Chinese "social credit" system is just a huge database (more precisely, several scattered databases) in which each registered citizen, company, or state institution is assigned a conditional score. "Good" behavior raises the score; "bad" behavior lowers it.

Such a system can operate automatically: it recognizes a face in the crowd and immediately allows or forbids the person to perform certain actions, such as applying for a loan or buying a plane ticket. But the very rules by which the machine makes a decision are determined by people. Similar systems on a smaller scale operate in many corporations, not necessarily Chinese ones, and their reactions to the same action often differ dramatically because the developers took different approaches.
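The point that the same score can produce opposite outcomes under different developers' rules can be shown in a few lines. The score, thresholds, and policy structure here are invented for illustration:

```python
def loan_decision(score, policy):
    """Apply a developer-chosen policy to a citizen's score.

    The machine only compares numbers; which threshold counts as
    'good enough' is a human choice, so two systems can disagree
    about the very same person.
    """
    return "approve" if score >= policy["loan_threshold"] else "deny"

citizen_score = 650
strict_policy = {"loan_threshold": 700}   # one developer's rules
lenient_policy = {"loan_threshold": 600}  # another developer's rules

print(loan_decision(citizen_score, strict_policy))   # deny
print(loan_decision(citizen_score, lenient_policy))  # approve
```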

AI-based biometrics is in itself neither good nor bad. Technically, there is no difference between the large-scale catalogs of the pre-computer era (in 1944, the FBI's card file contained 23 million cards) and today's biometric databases. A computer finds the necessary record incomparably faster than a person, which is why instant face recognition in a crowd shocks the average observer. But this is only a psychological effect; the real consequences of a digital system's actions are determined by how the people with access to it treat the data it collects.

However, there is no reason to fear the total substitution of a living person with a "digital twin", that same record in a computer database. It is one thing when someone with a low social credit score loiters by the back shelf of a store: the algorithm monitoring behavior on the trading floor, having received a signal from a video camera with image recognition, matches the visitor's face against the police database and recommends that the guards approach the suspicious person. It is quite another when a person's social credit suddenly drops to zero, whether through a computer error or through someone's malicious actions.

Artificial intelligence technologies will continue to develop: attempts to stop progress are meaningless

Developers around the world want to create reliable and verifiable AI, whose logical "reasoning" is explicitly recorded during decision-making and available for inspection by a human operator. Society has begun to form uniform criteria for the ethical behavior of artificial intelligence: examples include the Russian AI code of ethics and a similar draft law in the European Union. These will reduce the likelihood of errors or malicious intent in the handling of "digital twins".

Artificial intelligence technologies will continue to develop: attempts to stop progress are pointless. Society only needs to make sure that convenient and useful digital tools serve the good and become a means not of total algorithmic control, but of greater comfort and security.
