The idea that artificial intelligence might one day overthrow humanity has been debated for decades, and in January 2021 scientists delivered their verdict on whether we could control a high-level computer superintelligence. The answer? Almost certainly not.
The catch is that controlling a superintelligence far beyond human comprehension would require a simulation of that superintelligence that we can analyze. But if we are unable to understand it, it is impossible to create such a simulation.
Rules such as "cause no harm to humans" cannot be set if we do not understand the kinds of scenarios an AI might come up with, argue the authors of the 2021 paper. Once a computer system is working at a level above the scope of our programmers, we can no longer set limits.
“A super-intelligence poses a fundamentally different problem than those typically studied under the banner of ‘robot ethics’,” the researchers write.
“This is because a superintelligence is multi-faceted, and therefore potentially capable of mobilizing a diversity of resources in order to achieve objectives that are potentially incomprehensible to humans, let alone controllable.”
Part of the team’s reasoning comes from the halting problem, put forward by Alan Turing in 1936. The problem centers on knowing whether a computer program will reach a conclusion and halt with an answer, or simply loop forever trying to find one.
As Turing proved through some smart math, while we can know whether a particular program will halt, it is logically impossible to find a method that could decide this for every potential program that could ever be written. That brings us back to AI: in a superintelligent state, an AI could hold every possible computer program in its memory at once.
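Turing’s argument is a proof by contradiction, and it can be sketched in a few lines of code. The Python snippet below is purely illustrative (it is not from the paper): it shows that for any candidate `halts` oracle we propose, we can build a “trouble” program that does the opposite of whatever the oracle predicts about it, so no such oracle can be correct on every program.

```python
# Sketch of Turing's diagonal argument: assume a perfect oracle
# halts(program) existed; the program built below defeats it.

def make_trouble(halts):
    """Build a program that contradicts whatever `halts` predicts about it."""
    def trouble():
        if halts(trouble):
            while True:       # oracle said "halts", so loop forever
                pass
        return "done"         # oracle said "loops forever", so halt at once
    return trouble

# No concrete oracle survives. One that predicts "loops forever"
# is refuted the moment trouble() returns:
trouble = make_trouble(lambda program: False)
print(trouble())  # prints "done" -- the prediction "never halts" was wrong

# An oracle predicting "halts" would be refuted too, but demonstrating
# that would require running trouble() forever, so we do not call it.
```

The same construction works against any `halts` function, however clever, which is the heart of why the problem is undecidable.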
Any program written to stop an AI from harming humans or destroying the world, for example, may reach a conclusion and halt, or it may not, and it is mathematically impossible for us to be absolutely certain either way, which in turn means the AI is not containable.
The alternative to teaching AI some ethics and telling it not to destroy the world (something no algorithm can be absolutely certain of doing, the researchers say) is to limit the capabilities of the superintelligence. It could be cut off from parts of the internet or from certain networks, for example.
The 2021 study rejects this idea too, suggesting that it would limit the reach of the artificial intelligence. The argument goes that if we are not going to use it to solve problems beyond the scope of humans, then why create it at all?
If we are going to push ahead with artificial intelligence, we might not even know when a superintelligence beyond our control arrives, such is its incomprehensibility. That means we need to start asking some serious questions about the directions we are going in.
“A super-intelligent machine that controls the world sounds like science fiction,” said Manuel Cebrian, a computer scientist at the Max Planck Institute for Human Development. “But there are already machines that perform certain important tasks independently without programmers fully understanding how they learned it.”
“The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity.”
The research was published in the Journal of Artificial Intelligence Research.