Research: humans will not be able to control superintelligent AI machines

AI systems are already causing concern, especially those that can control other machines. Many scientists take a cautious position regarding superintelligent and uncontrollable AI. An international team of researchers has now argued, using theoretical calculations, that it would simply be impossible to control a superintelligent AI.

Suppose someone were to program an artificial intelligence system with intelligence superior to that of humans, so that it could learn independently of any human. Connected to the Internet, the AI would have access to all of humanity's data. It could also replace all existing programs and take control of all machines around the world. Would this produce a utopia or a dystopia? Would AI cure cancer, establish world peace and avert climate catastrophe? Or would it destroy humanity and take over the Earth?

Computer scientists and philosophers have asked whether we would even be able to control a superintelligent AI well enough to ensure that it posed no threat to humanity.

"A superintelligent machine that rules the world sounds like science fiction. But there are already machines that perform certain important tasks completely independently, without their programmers fully understanding how they learned to do so. The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity," says study co-author Manuel Cebrian, head of the Digital Mobilization group at the Center for Humans and Machines at the Max Planck Institute for Human Development.

Scientists have explored two different ideas for how a superintelligent AI might be controlled. On the one hand, its capabilities could be specifically limited, for example by isolating it from the Internet and all other technical devices so that it cannot contact the outside world; but this would make the superintelligent AI far less powerful and less able to help humanity. On the other hand, the AI could be motivated from the very beginning to pursue only goals that serve the interests of humanity, for example by programming ethical principles into it. However, the researchers show that these and other modern and historical ideas for controlling superintelligent AI have their limits.

In their research, the team conceived a theoretical containment algorithm that would ensure a superintelligent AI could not harm people under any circumstances, by first simulating the AI's behavior and halting it if its actions were deemed harmful. But careful analysis shows that, in our current computing paradigm, such an algorithm cannot be built.

"If we break the problem down into the basic rules of theoretical computer science, it turns out that an algorithm commanding an AI not to destroy the world could inadvertently halt its own operations. If that happened, you would not know whether the containment algorithm was still analyzing the threat, or whether it had stopped in order to contain the malicious AI. In effect, this makes the containment algorithm unusable," says Iyad Rahwan, director of the Center for Humans and Machines.

Based on these calculations, the containment problem is incomputable: no algorithm can decide, in general, whether an AI will harm the world or not.
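The reasoning mirrors Turing's classic halting-problem argument. As a minimal sketch (all function names here are hypothetical illustrations, not taken from the study), assume a perfect harm-detecting oracle existed; one can then construct a program that defeats it:

```python
# Sketch of the diagonalization argument behind the containment result,
# in the style of the halting-problem proof. Names are illustrative only.

def is_harmful(program, data):
    """Hypothetical perfect containment oracle: returns True exactly when
    running program(data) would eventually cause harm. The study's point
    is that no such algorithm can exist, so this is only a stub."""
    raise NotImplementedError("no algorithm can decide this in general")

def do_harm():
    """Stand-in for any action the containment algorithm must prevent."""
    pass

def contrarian(program):
    """A program constructed to do the opposite of whatever the oracle
    predicts about it."""
    if is_harmful(program, program):
        return "harmless"   # predicted harmful -> do nothing harmful
    do_harm()               # predicted harmless -> cause harm
    return "harmful"

# Asking the oracle about contrarian applied to itself yields a
# contradiction: whichever answer is_harmful(contrarian, contrarian)
# returns, contrarian's actual behavior refutes it. Hence no such
# oracle can be implemented.
```

The same self-reference is why the quoted researchers note that a containment algorithm analyzing a superintelligent AI could end up unable to decide about its own halting.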
