Ilya Viksnin, ITMO - on drone lies, functional safety and a democratic solution to the trolley problem

Can a drone lie?

- The topic of your lecture is "Why do we make robots lie?" How do we understand lying, in its classic sense, in the context of robots?

"In our case, a lie is when the information about how the real world works doesn't match the outsideWe have a broken sensor and we really see it that way, or we falsify it in the name of falsification, or for some other reason.When you and I communicate, I can lie because I am wrong in my conclusions or I can't see well.It doesn't matter to you, this information will be false for you anyway.That's why we looked at the management methods that make the system secure.

- When you talk about robots and “multi-agent systems,” do you mean unmanned vehicles?

- No, that is simply the most popular application area. Right now the most talked-about topic is unmanned aerial vehicles.

- Do you test solutions to problems that may arise?

- We work at two levels. At the first level we look at the information travelling inside the drone, because sensor data is processed before it reaches the computing device. At the second level we look at the information exchanged between drones, and we verify that as well: we check how it relates to the real world.
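Roughly, that second level can be thought of as a plausibility check on what a neighbor reports. Below is a minimal sketch (not the group's actual code; the names and the tolerance are illustrative) in which a drone compares a peer's claimed position against its own range measurement of that peer and flags the report as a possible lie when the two disagree:

```python
# Sketch: cross-check a peer's reported position against our own observation of it.
from dataclasses import dataclass
import math

@dataclass
class Report:
    sender_id: int
    x: float  # position the peer claims to occupy
    y: float

def plausible(report: Report, measured_distance: float,
              own_x: float, own_y: float, tolerance: float = 2.0) -> bool:
    """Return False if the claimed position contradicts our own range measurement."""
    claimed_distance = math.hypot(report.x - own_x, report.y - own_y)
    return abs(claimed_distance - measured_distance) <= tolerance

# Usage: a peer claims to be 10 m away, but our rangefinder sees it at 25 m.
report = Report(sender_id=7, x=10.0, y=0.0)
print(plausible(report, measured_distance=25.0, own_x=0.0, own_y=0.0))  # False -> treat as a "lie"
```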

- Drones today come in different kinds - military, civilian and others. Are drones that exchange information with each other in this way a specific class?

- To be honest, I don't know of a single drone that doesn't exchange information. Even when you control it with a remote, that data can still be intercepted. That's why we work at a fairly high level of abstraction: there are particular implementations, but on the whole these are quite general solutions - generic, so to speak.

- Over the past year we have seen cases where drones created problems for people: drones that could crash into planes, drones that blocked runways. Is it possible to prevent such situations?

- If it is controlled by a person, the only way is to bring it down: either take over control and make it do what we want, or simply destroy it physically. If the drone was bought off the shelf, assembled by someone else, and is controlled in the standard way, then you can counteract it by knowing those standard algorithms. But assembling a drone is now a task for an introductory university or even high-school course. There are plenty of manuals, the parts can be bought without difficulty and are not very expensive. So the logic inside such home-built drones cannot be fully standardized.

- You mentioned intercepting control. How feasible is that, and what does it require?

- If we consider an off-the-shelf system, we know the protocols it communicates over. In theory, we can substitute our own controller. But it is a non-trivial task. From the point of view of engineering and science it can be done; in practice it is a very difficult engineering problem. But a solvable one.

- So right now no one knows how to do this?

- As far as I know, no one. In any case, such information is not publicly available.

- What other problems do you see that may arise in the near future, if we are talking about drones?

"For me, as a security specialist, the main problem is that the system willI'm not referring to the Matrix, but to what someone else can do.We want to see one result, but there will be one result.Roughly speaking, we want to build a stadium out of these drones.The main problem is that there will be a brick in the foundationThat is, globally we will build a stadium, but these small deviations within the structure will lead toto the fact that the whole structure will collapse.

- If we write all the software for these drones ourselves, why would they lay these bricks wrong?

"Look, the world of computers consists of zeros and ones, everything is clear.We laid down this logic, we got the answer.What we put in at the computational level can be done in the real worldA glint of the sun, a gust of wind, and the brick was already in the wrong place.This is the task of a new branch of science, functional safety, derived frominformational, when we try to adapt to the actions of the environment and develop a solution that will allow us toAnd this is a difficult task.It has solutions, but none of them fully guaranteesuccess.

- Drones became available to the general public not so long ago. Do you think it will stay this way, that anyone can go and buy a drone? Right now they are treated as a toy - could that change?

- I'm not a lawyer, but, ideally, some restrictions should be introduced, along the lines of "buying a large drone without any documents, licenses and so on is wrong". Buying a toy drone? Why not - it is unlikely to cause much damage. Then again, with enough ill will anything can be turned to harm. So I, as a person involved in science, would like all of this to remain publicly available. But as a person who walks the streets, I would like some reasonable restrictions to be imposed. That is reasonable.

Swarm intelligence - in robots and humans

- You were involved in predicting people's behavior in emergency situations and modeled it using the tragedy at "The Lame Horse" nightclub. It is clear how one can build a digital model of the club itself. But how can you predict people's behavior? Based on what data?

- You can't, really. People generally behave quite spontaneously, especially in emergencies. But we had a hypothesis, which we tested by modeling the situation that actually occurred.

— How did you model people's behavior?

- The logic we built in is so-called swarm intelligence: averaging of motion, selection of provisional leaders, movement toward the exits and so on. The principle is quite trivial. In general, that is the principle of any such intelligence: each individual particle can behave rather stupidly, following very banal and simple rules, yet in the end the whole system behaves remarkably intelligently.
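For illustration only (this is not the actual evacuation model), a Reynolds-style update with exactly those "banal" rules - average your motion with the crowd and head toward the exit - might look like this:

```python
# Sketch: one simulation step for a crowd of simple agents.
import math

def step(agents, exit_pos, w_avg=0.5, w_exit=0.5, speed=1.0):
    """agents: list of dicts with 'pos' (x, y) and 'vel' (vx, vy)."""
    # Rule 1: motion averaging - align with the mean velocity of the crowd.
    n = len(agents)
    mean_vx = sum(a["vel"][0] for a in agents) / n
    mean_vy = sum(a["vel"][1] for a in agents) / n
    new_agents = []
    for a in agents:
        x, y = a["pos"]
        # Rule 2: head toward the exit (a "provisional leader" could stand in for exit_pos).
        dx, dy = exit_pos[0] - x, exit_pos[1] - y
        dist = math.hypot(dx, dy) or 1.0
        vx = w_avg * mean_vx + w_exit * speed * dx / dist
        vy = w_avg * mean_vy + w_exit * speed * dy / dist
        new_agents.append({"pos": (x + vx, y + vy), "vel": (vx, vy)})
    return new_agents

crowd = [{"pos": (0.0, 0.0), "vel": (0.0, 1.0)},
         {"pos": (5.0, 2.0), "vel": (1.0, 0.0)}]
print(step(crowd, exit_pos=(10.0, 0.0)))
```

Individually these rules are almost trivially simple; the collective, seemingly intelligent motion emerges only when many agents apply them at once.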

- You are now speaking in terms more applicable to technology. If we talk about people, is it the same?

- The robot is built on human principles. If I can take a model from a person and apply it to a robot, then at the very least I can apply the model from the robot back to the person. In IT these rules were formulated long ago - at least since 1986.

- And you effectively proved that this works for people?

- Yes, we tried it, and it works. I am not a sociologist or a psychologist, but it seems to me that, as a scientific experiment, the result held up: in an emergency a person descends to a rather banal level of behavior. And that banal level can, roughly speaking, be copied from the behavior of birds. In principle, it worked.

“Better it hits me than a child”: the ethics of UAVs

- You talked about the ethical issues of UAVs and the dilemma of choice if they have to kill someone. The trolley problem is hardly likely in real life. But in general, do you think such things need to be programmed? And can they occur in reality?

- They will definitely arise in real life, and if we do not program for them, what happens then? Look, we develop an algorithm, the vehicle is functioning, everything is going fine, and then a grandmother and a child run out from opposite sides of the road. If we do not lay down the logic of behavior for this case, we are, in effect, disclaiming moral responsibility for it. That is a step toward cowardice, I think.


- If we don't build a particular model of behavior into an unmanned system for such a situation, what will it do?

"It depends on the system.Maybe he will try to minimize his risks and, conditionally, hit whoever is closer.

- You say we have to program all of this. Who, then, should decide whom the vehicle hits?

- It seems to me that Massachusetts (MIT - “High-tech”) went the right way - democracy.

Scientists from the Massachusetts Institute of Technology (MIT) created a game in which the user is asked to choose between two options for an unmanned vehicle's behavior in ethical dilemmas. The result was a global study in which millions of people from hundreds of countries participated. It turned out that people in different countries have different ideas about how unmanned vehicles should resolve ethical questions.

- So we should vote on whom an unmanned vehicle is allowed to hit?

- A sociological study should be carried out that lets us build this logic on a collective decision. Then all of humanity will be responsible for it, not just one programmer.
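Purely as an illustration of that idea (toy data, hypothetical labels), the "democratic" step could be as simple as aggregating survey answers and writing the majority choice into the vehicle's policy:

```python
# Sketch: turn many individual answers to a dilemma into one collective policy.
from collections import Counter

votes = ["spare_child", "spare_child", "spare_elderly", "spare_child"]  # survey answers
policy = Counter(votes).most_common(1)[0][0]
print(policy)  # "spare_child" -> the behavior backed by a collective decision, not one programmer
```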

- Do you think this will work, that people will accept it?

"It's still going to cause discontent because a lot of people have been given a choice, and someone willBut if there is no such choice at all, and instead of an authoritarian decision, then it is not entirely fair.