Researchers Calculate That Super-Intelligent AI Will Be Impossible To Contain
Today we have an assortment of digital assistants to help us look up the weather, control the lighting, and handle all sorts of other tasks. Whether it's enlisting the aid of Siri or tapping into Alexa's growing set of skills, there is a new level of convenience at our disposal. Pretty cool, but take heed: one day it may be us doing the bidding of super-smart artificial intelligence (AI) systems instead of the other way around.
Sounds like science fiction, because right now, that is exactly what it is—Skynet scenarios and all that jazz. However, an international team of scientists and researchers has published an article outlining why an advanced AI would present "catastrophic risks" to humankind, as we would be unable to contain such a "superintelligence."
"We argue that total containment is, in principle, impossible, due to fundamental limits inherent to computing itself. Assuming that a superintelligence will contain a program that includes all the programs that can be executed by a universal Turing machine on input potentially as complex as the state of the world, strict containment requires simulations of such a program, something theoretically (and practically) impossible," the researchers state (PDF).
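The shape of that argument comes from computability theory, and it can be sketched loosely (this is an illustrative reduction, not the paper's formal proof) as a variant of the halting problem: if a total, always-correct "harm checker" existed, you could use it to decide whether arbitrary programs halt, which Turing proved impossible. The function names below are hypothetical placeholders.

```python
def harms_humans(program_source: str, world_input: str) -> bool:
    """Hypothetical total containment oracle: True iff running
    `program_source` on `world_input` ever causes harm. The paper's
    argument implies no such always-correct procedure can exist."""
    raise NotImplementedError("no total harm-decider can exist")


def halts(program_source: str, program_input: str) -> bool:
    # Reduction sketch: wrap the target program so it triggers a
    # harmful action exactly when (and only when) it finishes.
    # A working harm oracle would then answer the halting problem,
    # which is undecidable -- a contradiction.
    wrapper = (
        f"exec({program_source!r})\n"
        "do_harm()  # hypothetical harmful side effect after halting"
    )
    return harms_humans(wrapper, program_input)
```

In other words, any containment procedure that is guaranteed to catch every harmful behavior would be doing something provably impossible, so in practice a checker must either sometimes be wrong or sometimes fail to answer.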
The paper is titled "Superintelligence Cannot Be Contained: Lessons from Computability Theory," and that pretty much sums it all up. It is an interesting read for sure. Not everything is sci-fi material, either. For example, the researchers point out that machines can cause "significant disruptions to labor markets," though some of the other risks sound more dire. As in, "drones and other weaponized machines literally making autonomous kill-decisions."
Interestingly, the researchers point to demonstrations of AI using deep learning to master video games. This is something we have written about on numerous occasions, like when OpenAI bots decimated human players in Dota 2. On the surface, the feat is rather benign, but the researchers feel that the same characteristic could have dire consequences.
"The key feature of this achievement is that the AI uses purely unsupervised reinforcement learning—it does not require the provision of correct input/output pairs or any correction of suboptimal choices, and it is motivated by the maximization of some notion of reward in an on-line fashion. This points to the possibility of machines that aim at maximizing their own survival using external stimuli, without the need for human programmers to endow them with particular representations of the world," the paper states.
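The "maximization of some notion of reward in an on-line fashion" can be made concrete with a minimal tabular Q-learning sketch (the systems the paper describes use deep RL at vastly larger scale; the toy chain environment below is a hypothetical example). Note that the agent is never shown correct input/output pairs; it only receives a scalar reward and updates as it goes.

```python
import random

# Toy chain environment: states 0..4, action 1 moves right, action 0
# moves left (clamped at the ends); reward 1.0 only at state 4.
N_STATES, ACTIONS = 5, (0, 1)
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1


def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1


def train(episodes=500, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy action choice; ties broken randomly.
            if rng.random() < EPSILON or q[s][0] == q[s][1]:
                a = rng.choice(ACTIONS)
            else:
                a = 1 if q[s][1] > q[s][0] else 0
            s2, r, done = step(s, a)
            # Online update driven by reward alone -- no labeled
            # targets, no correction of suboptimal choices.
            q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
            s = s2
    return q


q = train()
```

After training, the greedy policy moves right in every state: the agent has learned to seek out the reward source purely from the feedback signal, which is the property the researchers flag as worrying when the "reward" is something like self-preservation.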
According to the team of researchers, a superintelligence poses a fundamentally different problem than those that have been the focus of AI safety work up to this point. It is multi-faceted, and because of that, a superintelligence would be able to mobilize a diverse set of resources "to achieve objectives that are potentially incomprehensible to humans, let alone controllable."
Have a pleasant weekend, folks.