Can a drone lie?
- The topic of your lecture is "Why we make robots lie." Do you mean a lie in the classic sense?
- In our case, a lie is when information about how the real world is organized does not match what we read and transmit outward. Why does that happen? A sensor is broken and we genuinely see a distorted picture, or the data is deliberately falsified, or there is some other reason. When we communicate, I can lie because I am mistaken in my conclusions or because I see things badly. For you it does not matter: either way the information is false. That is why we studied control methods that allow the system to be made safe.
- When you talk about robots and “multi-agent systems,” do you mean unmanned vehicles?
- No, that is just the most popular application area. Right now the most popular topic is unmanned aerial vehicles.
- Do you test solutions to problems that may arise?
- We have two levels. At the first level, we look at the information that travels inside the drone, because data from the sensors is processed before it reaches the computing device. At the second level, information is exchanged between drones, and we check that too. We look at how well it corresponds to the real world.
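For illustration, here is a minimal sketch of what such two-level checking might look like in code. The altitude example, the thresholds and the data structures are purely hypothetical and not taken from the interviewee's systems: the first function checks a single drone's sensor stream for physical plausibility, the second compares what a drone reports against what its peers observe.

```python
# Hypothetical two-level consistency check: onboard plausibility of a sensor
# stream, then a cross-drone comparison of the reported state with peers.
import statistics
from dataclasses import dataclass

MAX_CLIMB_RATE_M_S = 10.0   # assumed physical limit for this airframe


@dataclass
class AltitudeReading:
    t: float       # timestamp, seconds
    alt_m: float   # barometric altitude, metres


def onboard_check(prev: AltitudeReading, curr: AltitudeReading) -> bool:
    """Level 1: is the new reading physically plausible given the last one?"""
    dt = curr.t - prev.t
    if dt <= 0:
        return False                      # out-of-order or duplicated sample
    climb_rate = abs(curr.alt_m - prev.alt_m) / dt
    return climb_rate <= MAX_CLIMB_RATE_M_S


def swarm_check(reported_alt_m: float, peer_estimates_m: list[float],
                tolerance_m: float = 15.0) -> bool:
    """Level 2: does the altitude a drone reports agree with what peers see?

    Uses the median of peer estimates so a single faulty (or lying) peer
    cannot shift the reference much.
    """
    if not peer_estimates_m:
        return True                       # nothing to compare against
    return abs(reported_alt_m - statistics.median(peer_estimates_m)) <= tolerance_m


if __name__ == "__main__":
    prev = AltitudeReading(t=0.0, alt_m=100.0)
    curr = AltitudeReading(t=1.0, alt_m=180.0)   # an 80 m/s climb: implausible
    print("onboard ok:", onboard_check(prev, curr))                 # False
    print("swarm ok:", swarm_check(102.0, [100.0, 101.0, 160.0]))   # True
```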
- There are different kinds of drones now: military, civilian and others. Are drones that exchange information with each other in this way a specific class?
- No, in fact any of them can exchange information. Honestly, I do not know of any UAV that does not. Even when you control one from a remote, it still receives data. That is why we work at a fairly high level of abstraction: we rely on particular implementations, but on the whole these are fairly general solutions, generics, as they say.
- Over the past year we have seen cases where drones created problems for people: drones that could crash into planes, drones that blocked runways. Is it possible to somehow prevent such situations?
- If it is controlled by a person, then the only way is simply to bring it down: either take over control and make it do what we want, or just physically destroy it. If the drone was purchased already assembled by someone else and is controlled in the standard way, then you can counteract it, because you know the standard algorithms. But today assembling a drone is a task for an introductory university or even high school course. There are plenty of manuals, the parts can be bought without any trouble, and it is not very expensive. So the logic inside such home-built drones cannot be fully standardized.
- You mentioned intercepting control. How feasible is that, and what does it require?
- If we consider some purchased system, we know the protocols it communicates over. In theory, we can substitute our own control panel. From the point of view of engineering and science, this can be done. In practice, it is a very difficult engineering task. But a solvable one.
- So for now no one knows how to do this?
- As far as I know, no one. In any case, such information is not publicly available.
- What other problems do you see that may arise in the near future, if we are talking about drones?
- For me, as a security specialist, the main problem is that the system will not function the way a person wants it to. I am not hinting at "The Matrix"; I mean that the logic and behavior of the system can break down somewhere. We want to see one result, but we get another. Roughly speaking, we want to build a stadium out of these drones, and the main problem is that a brick in the foundation gets laid the wrong way. Globally we are still building the stadium, but these small deviations inside the structure will eventually make the whole thing collapse.
- If we program all the software for these drones ourselves, why would they lay those bricks wrong?
- Look, the world of computers consists of zeros and ones; everything is clear-cut. We laid down the logic, we got the answer. The physical world consists of the whole range of values between zero and one. What we lay down at the computational level may not correspond to the real world. A glare of the sun, a gust of wind, and the brick has already gone in wrong. This is the task of a new branch of the field, functional safety, which grew out of information security: we try to adapt to the actions of the environment and develop a solution that lets us lay the brick correctly regardless of external factors. It is a difficult task. It has solutions, but none of them fully guarantee success.
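One common functional-safety pattern that addresses the kind of disturbance described here is redundancy with voting: several independent readings are combined so that a single glare- or gust-corrupted value cannot decide the outcome. The sketch below is illustrative only; the sensor counts, tolerances and the brick-laying framing are assumptions, not the interviewee's method.

```python
# Redundant measurements plus a median vote: one corrupted reading
# (e.g. sun glare on a rangefinder) cannot flip the decision.
import statistics


def voted_distance(readings_m: list[float]) -> float:
    """Median of redundant range readings; robust to a single outlier."""
    return statistics.median(readings_m)


def safe_to_place_brick(readings_m: list[float], target_m: float,
                        tolerance_m: float = 0.05) -> bool:
    """Proceed only if the voted measurement is within tolerance of the target."""
    return abs(voted_distance(readings_m) - target_m) <= tolerance_m


# Glare corrupts one of three readings; the vote masks it.
print(safe_to_place_brick([1.00, 1.02, 7.50], target_m=1.0))  # True
# Two of three readings are bad; the vote correctly refuses to proceed.
print(safe_to_place_brick([1.00, 7.40, 7.50], target_m=1.0))  # False
```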
- Drones became widely available not so long ago. Do you think the situation will stay as it is, where I can just go and buy a drone? Right now they are treated as toys; could that change?
- I'm not a lawyer, but, ideally, some restrictions should be introduced, along the lines of "buying a big drone without any documents, licenses and so on is wrong." Buying a toy drone? Why not, it is unlikely to cause much damage. Then again, if someone wants to, almost anything can be turned to harm. So I, as a person involved in science, would like all of this to remain publicly available. But as a person who walks the streets, I would like some reasonable restrictions to be imposed. Reasonable ones.
Swarm intelligence - in robots and humans
- You were involved in predicting people's behavior in emergency situations and modeled it based on the tragedy at the Lame Horse club. It is clear how you can create a digital model of the club itself. But how can you predict people's behavior? Based on what data?
- You can't, really. People generally behave quite spontaneously, especially in emergencies. But we had a hypothesis, and we tested it by modeling the situation that actually occurred.
- How did you simulate the behavior of people? Your results were extremely close to reality.
- If we talk about the logic embedded there, it is so-called swarm intelligence: averaging of motion, selection of conditional leaders, movement toward the exits and so on. Quite trivial principles. In general this is the principle behind any such intelligence: we have a particle that behaves very stupidly on its own, following very banal and simple rules, yet in the end the whole system behaves extremely intelligently.
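The rules described here can be made concrete with a short, generic boids-style sketch (this is not the actual model used in the Lame Horse study): each agent only aligns with its neighbours' average heading and drifts toward the nearest exit, yet an organised flow toward the exits emerges at the level of the whole crowd.

```python
# Generic swarm-intelligence sketch: simple per-agent rules, collective flow.
import math
import random

EXITS = [(0.0, 0.0), (10.0, 10.0)]      # exit coordinates, metres (illustrative)
NEIGHBOUR_RADIUS = 2.0                  # metres within which agents "see" each other
STEP = 0.1                              # metres moved per tick


def step(agents):
    """Advance every agent one tick; agents is a list of dicts with x, y, h (heading)."""
    new_states = []
    for a in agents:
        # Rule 1: align with the average heading of nearby agents.
        near = [b for b in agents
                if b is not a
                and math.hypot(b["x"] - a["x"], b["y"] - a["y"]) < NEIGHBOUR_RADIUS]
        if near:
            avg = math.atan2(sum(math.sin(b["h"]) for b in near),
                             sum(math.cos(b["h"]) for b in near))
        else:
            avg = a["h"]
        # Rule 2: drift toward the nearest exit.
        ex, ey = min(EXITS, key=lambda e: math.hypot(e[0] - a["x"], e[1] - a["y"]))
        to_exit = math.atan2(ey - a["y"], ex - a["x"])
        # Blend the two directions as unit vectors to avoid angle wrap-around issues.
        h = math.atan2(0.5 * math.sin(avg) + 0.5 * math.sin(to_exit),
                       0.5 * math.cos(avg) + 0.5 * math.cos(to_exit))
        new_states.append({"x": a["x"] + STEP * math.cos(h),
                           "y": a["y"] + STEP * math.sin(h),
                           "h": h})
    return new_states


# 50 agents start in the middle of the room with random headings.
agents = [{"x": random.uniform(3, 7), "y": random.uniform(3, 7),
           "h": random.uniform(-math.pi, math.pi)} for _ in range(50)]
for _ in range(100):
    agents = step(agents)
```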
- You are now speaking in terms more applicable to technology. If we talk about people, is it the same?
- The robot is built on principles taken from humans. If I can take a system from a person and apply it to a robot, then I can at least try to apply the system from the robot back to a person. In IT these rules were formulated a long time ago, at least as far back as 1986.
- So, in effect, you proved that it works on people?
- Yes, we tried it, and it works. I am not a sociologist or a psychologist, but it seems to me that from the point of view of a scientific experiment the result held up: in emergencies a person descends to a rather banal level of behavior. And that banal level can be copied, conditionally speaking, from the behavior of birds. In principle, it worked out.
“I’d rather it kill me than a child”: the ethics of UAVs
- You talked about the ethical issues of UAVs and the dilemma of choice when they have to kill someone. The trolley problem is hardly possible in real life. But in general, do you think such things need to be programmed? And can they occur in real life?
- They will definitely arise in real life, and if we do not program it, what happens then? Look, we developed an algorithm, the vehicle is operating, everything is going fine, and then a grandmother and a child run out from opposite ends of the road. If we do not lay down the logic of behavior for this case, we are effectively disclaiming moral responsibility for it. That is a step toward cowardice, I think.
- If we don’t build a certain model of behavior into an unmanned system for such a situation, what will it do?
- It depends on the system. It may try to minimize its own risks and, say, hit whoever is closer.
- You say we have to program all of this. Who then decides whom to hit?
- It seems to me that Massachusetts (MIT, “High-tech” note) took the right path: democracy.
Scientists from the Massachusetts Institute of Technology (MIT) created a game in which users are asked to choose between two variants of an unmanned vehicle's behavior in ethical dilemmas. The result was a global study in which millions of people from hundreds of countries and territories took part. It turned out that people in different countries have different ideas about how unmanned vehicles should resolve ethical issues.
- So we have to vote on whom the drone should hit?
- Not exactly. A sociological survey should be conducted, and it will let us make the choice. Then all of humanity is responsible for it, not a single programmer.
- Do you think this will work, that people will accept it?
- It will still cause dissatisfaction, because a large number of people were given a choice and someone will naturally disagree with it. But if there is no choice at all, and an authoritarian decision is made instead, that is not entirely fair.