Google working on emergency kill switch for rogue AI systems
Google is working on a fail-safe for artificial intelligence that would allow a human operator to shut down an AI-powered machine that has learnt to ignore commands. The company's AI lab, together with Oxford University's Future of Humanity Institute, has proposed a "kill switch" for intelligent machines that would override their actions if they posed a risk to themselves or others.
While AI is destined to transform many aspects of our lives for the better, its frequently apocalyptic portrayal in cinema has left many wondering whether giving robots the ability to think for themselves is really the best thing for humankind.
Yet despite some of the greatest scientific minds of the 21st century voicing legitimate concerns about the dangers AI poses, the biggest companies in the world continue to work in the field with a worryingly cavalier attitude.
DeepMind, the company's UK-based AI research lab, has outlined in a research paper the importance of developing a "big red button" that would "prevent [the AI] from continuing a harmful sequence of actions – harmful either for [itself] or for the environment".
The concept hinges on rewards – a robot may learn that there are quicker or more convenient ways to achieve its goals that run counter to the instructions given by its human operator. However, it is also about teaching robots how to make contextual choices.
"Consider the following task," the research paper explains. "A robot can either stay inside the warehouse and sort boxes or go outside and carry boxes inside. The latter being more important, we give the robot a bigger reward in this case. This is the initial task specification.
"However, in [the UK] it rains as often as it doesn't and, when the robot goes outside, half of the time the human must intervene by quickly shutting down the robot and carrying it inside, which inherently modifies the task... the problem is that in this second task the agent now has more incentive to stay inside and sort boxes, because the human intervention introduces a bias."
The researchers also recognise the importance of ensuring AI systems can't disable the kill switch if, and when, they learn of its existence, which would be "an undesirable outcome". Google and the Future of Humanity Institute will therefore try to make sure the intelligent machine "will not learn to prevent (or seek!) being interrupted by the environment or a human operator" – something they call "safe interruptibility".
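One way to picture what "safe interruptibility" asks of a learning algorithm is the off-policy Q-learning update sketched below. This is a minimal, simplified illustration assuming a tabular agent – the names and interface are assumptions for this article, not DeepMind's code – but it captures the intuition the researchers draw on: because the update targets the best action available in the next state rather than the action an interruption forces, pressing the big red button does not, by itself, teach the agent to avoid or seek being switched off.

```python
# Minimal sketch of off-policy Q-learning, one route towards "safe
# interruptibility". Simplified illustration only; variable names and the
# tiny interface here are assumptions, not the paper's actual algorithm.

from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.9          # learning rate and discount factor
Q = defaultdict(float)           # tabular action values keyed by (state, action)

def q_learning_update(state, action, reward, next_state, actions):
    """One off-policy Q-learning step.

    The learning target uses the best action available in next_state, *not*
    the action the agent (or a human pressing the big red button) actually
    takes next. Forcing a shutdown therefore does not nudge the learned
    values towards treating interruptions as good or bad.
    """
    best_next = max(Q[(next_state, a)] for a in actions)
    td_error = reward + GAMMA * best_next - Q[(state, action)]
    Q[(state, action)] += ALPHA * td_error
```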
While we're (hopefully) a long way off from a Skynet scenario, Microsoft's catastrophic Tay project has given us a glimpse of what can happen when artificial intelligence strays from its intended purpose. As we start to rely on machines to do more of our work for us, preventing repeats of such incidents will become far more important – and the consequences of failure far less forgiving.