We live in a world of algorithms and artificial intelligence. How far will we go with this technology, and will we always be able to keep it under control? Researchers at the Max Planck Institute for Human Development set out to find out. Their study, published in the ‘Journal of Artificial Intelligence Research’, reaches an alarming conclusion: an artificial superintelligence could be uncontrollable.
Rules along the lines of ‘do no harm to humans’ would be difficult to enforce, especially if we cannot anticipate the kinds of scenarios an artificial intelligence (AI) might come up with. What happens when a computer system starts operating at a level beyond the reach of its programmers? At that point, we would no longer be able to set limits.
“A superintelligence poses a problem, because it is multifaceted and potentially capable of mobilizing a diversity of resources to achieve goals that may be incomprehensible to humans, let alone controllable.”
The reasoning is based on the ‘halting problem’ posed by Alan Turing in 1936. The problem centers on whether a given computer program, run on a given input, will eventually reach a conclusion and stop with an answer, or repeat itself forever trying to find one.
Turing proved mathematically that there is no general method that can answer this question for every possible program. Which brings us back to superintelligent AI: in principle, it could contain every possible computer program simultaneously.
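Turing's argument can be sketched in a few lines of code. The sketch below is illustrative, not from the study: `halts` is a hypothetical oracle assumed, for the sake of contradiction, to decide whether any program halts on a given input. Feeding the self-referential `paradox` function to itself shows why no such oracle can exist: if `halts` said it halts, it would loop forever, and vice versa.

```python
# Hypothetical oracle, assumed to exist only for the sake of contradiction.
def halts(program, arg):
    """Pretend this returns True iff program(arg) eventually halts.
    Turing proved no such function can exist in general."""
    raise NotImplementedError("no general halting decider exists")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about
    # running `program` on itself.
    if halts(program, program):
        while True:      # oracle says "halts" -> loop forever
            pass
    return "halted"      # oracle says "loops" -> halt immediately

# paradox(paradox) halts iff halts(paradox, paradox) is False:
# whichever answer the oracle gives is wrong, so it cannot exist.
```

The contradiction lives entirely in the logic: any concrete implementation of `halts` would give the wrong answer on `paradox` itself.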
Machines that learn
So a program written to stop an AI from harming humans and destroying the world may reach a conclusion (and halt), or it may not. It is mathematically impossible for us to be 100% sure either way. An artificial superintelligence might therefore be uncontrollable.
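The step from the halting problem to containment can also be sketched. The names below (`do_harm`, `make_probe`, `is_harmful`) are hypothetical stand-ins, not from the study: the idea is that a perfect "does this program ever harm humans?" checker could be used to decide whether an arbitrary program halts, which Turing showed is impossible.

```python
def do_harm():
    """Stand-in for any action a containment rule must forbid."""
    raise RuntimeError("harmful action")

def make_probe(program, arg):
    """Build a program that is harmful exactly when program(arg) halts:
    the forbidden action is reached only after program(arg) finishes."""
    def probe():
        program(arg)   # may loop forever
        do_harm()      # executed only if program(arg) halted
    return probe

# A hypothetical is_harmful(probe) that always answered correctly would
# therefore tell us whether program(arg) halts -- solving the halting
# problem. Since that is impossible, no perfect containment checker
# can exist either.
```

For a program that does halt, the probe reaches the forbidden action; for one that loops, it never does. Deciding between the two cases in general is exactly the undecidable question.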
“In effect,” says Iyad Rahwan, co-author of the study, “this renders the containment algorithm useless. The option would be to limit the capabilities of the superintelligence, for example by disconnecting it from certain parts of the Internet or from certain critical networks. But if we prevent it from solving problems beyond our power, why create it at all?”
“A machine that controls the world is science fiction,” says Manuel Cebrian, another of the signatories of the study, “but there are already machines that perform certain tasks without their programmers understanding how they learned them.” The question, therefore, is: could this become dangerous for humanity? The authors seem convinced that it could.