Terminator theory: Will the rise of the robots mean Judgment Day for humans?
Are we about to be supplanted by our own creations? Could robots destroy us or take over the world?
To answer this, we need to consider two key questions: could computers and robots develop the capacity to replace us? And would they have the motivation to do so? Let us start with capacity.
Reality is a difficult thing and computers mostly cannot cope with it. In 1997, IBM's Deep Blue defeated Garry Kasparov, the reigning world chess champion, but Deep Blue couldn't do anything else, such as play checkers, kick a football, or cook a pot roast.
In 2011, IBM's Watson beat the two human champions at the TV quiz show Jeopardy! In doing so, IBM demonstrated that computers could now do two much more difficult things: first, use natural language; and second, read, comprehend and make judgments based on masses of general knowledge.
Watson still can't cook a pot roast – although it could probably identify the most delicious pot roast recipe and order the ingredients. Watson is, however, a general-purpose, problem-solving computer, which is a big step towards human-like capability.
Computers today are rapidly, even explosively, developing abilities to deal with real world problems, such as the diagnosis of disease, or designing superior, patentable devices – or even designing better, faster computers.
This is partly due to brute force: computers continue to follow Moore's Law, doubling in speed roughly every 18 months. At that pace they would be about 100 times faster in a decade; I believe they are actually exceeding that pace, so that a decade from now, computers will be roughly 1,000 times faster than they are today. When you can throw massively greater computing power at problems, you have a better chance of solving them.
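The compounding behind those figures is easy to check. A minimal sketch (the `speedup` function is illustrative, not from any cited source): doubling every 18 months over ten years yields roughly a 100-fold gain, while the 1,000-fold figure implies a faster pace of about one doubling per year.

```python
def speedup(years, doubling_period_years):
    """Factor by which speed grows if it doubles once every `doubling_period_years`."""
    return 2 ** (years / doubling_period_years)

# Classic Moore's-Law pace: doubling every 18 months (1.5 years).
print(round(speedup(10, 1.5)))   # about 100x over a decade

# A ~1,000x decade implies roughly one doubling per year.
print(round(speedup(10, 1.0)))   # 1024x over a decade
```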
The other reason why computers are becoming more capable is we are developing better tools for them to use. Traditional methods, such as statistical analysis, are too plodding to manage today's massive new, real world data sets.
In response, new techniques are emerging, such as evolutionary algorithms that evolve solutions in the way life evolves organisms. Such techniques are now solving problems that are beyond the ability of humans.
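To make the idea concrete, here is a toy sketch of an evolutionary algorithm – selection, crossover and mutation applied over many generations. The problem (maximising the number of 1s in a bit string) and every name in the code are illustrative inventions, not any production system:

```python
import random

random.seed(0)  # fixed seed so the toy run is repeatable

def fitness(bits):
    # Toy objective: count the 1s. A real system would score a candidate design.
    return sum(bits)

def evolve(n_bits=20, pop_size=30, generations=60, mutation_rate=0.05):
    # Start from a random population of candidate "organisms".
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # selection: the fitter half survives
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_bits)     # crossover: splice two parents
            child = a[:cut] + b[cut:]
            # Mutation: flip each bit with small probability.
            child = [bit ^ (random.random() < mutation_rate) for bit in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # typically reaches or nears the optimum of 20
```

The point of the sketch is the loop structure: nothing in the code knows how to solve the problem directly; good solutions simply out-survive bad ones.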
Then add the development of systems, such as Google's DeepMind, which can teach themselves how to play computer games never encountered before, and to quickly exceed the abilities of expert human players. It's not the ability to play games that's key. It's the flexibility to go into a relatively undefined arena, and learn how to succeed.
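DeepMind's game-playing systems use deep reinforcement learning; as a loose illustration of the underlying idea, here is tabular Q-learning on a made-up five-state environment (all names and the environment itself are invented for this sketch, and this is far simpler than DeepMind's actual methods). The agent is told nothing about the rules; it learns which action to prefer purely from trial, error and reward:

```python
import random

random.seed(1)

# Toy environment: states 0..4 in a line; action 1 moves right, action 0 moves left.
# Reaching state 4 earns a reward of 1 and ends the episode.
N_STATES, GOAL = 5, 4

def step(state, action):
    nxt = min(max(state + (1 if action else -1), 0), GOAL)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

Q = [[0.0, 0.0] for _ in range(N_STATES)]   # learned value of each action in each state
alpha, gamma, epsilon = 0.5, 0.9, 0.2       # learning rate, discount, exploration rate

for _ in range(200):                        # episodes of trial and error
    s, done = 0, False
    while not done:
        # Mostly act greedily, but sometimes explore a random action.
        a = random.randrange(2) if random.random() < epsilon else int(Q[s][1] >= Q[s][0])
        s2, r, done = step(s, a)
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])  # Q-learning update
        s = s2

# After training, the greedy policy should move right from every non-goal state.
print([int(q[1] >= q[0]) for q in Q[:GOAL]])
```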
It's a small conceptual leap then, to project computers that can learn how to comprehend and cope with the real world.
What about robots?
So computers are rapidly learning how to cope with reality conceptually. How about being able to interact with it physically? Here the progress is much clearer and undeniable: today's robots can walk, swim, fly, cross rocky ground, walk over ice and snow, run faster than a human and play football.
Yet it's important to realise that for robots to exceed human abilities, they don't need to look like Arnold Schwarzenegger. If I were going to build an army of robots to kill off humanity, I'd create a swarm of robot rats, bees and ants. I'd program this swarm to run or fly towards humans, then explode or release nerve toxins on contact.
There is precedent for this. US military drones can already identify human targets and then seek permission to kill them. The next step, already under discussion, is the production of autonomous drones that can identify a target and launch an attack without consulting a human.
So the answer to our first question is that if you look a few decades into the future, then yes, robots will probably develop the capacity to replace or destroy humans. A bigger question (beyond the scope of this article) is whether they could destroy or displace all of us, and whether they could then run the global infrastructure necessary to ensure their own survival.
Why would they want to destroy us?
Which brings us to the question of motivation. Here, there are several plausible answers, which have been explored by science fiction writers for decades.
Let's start with the law of unintended consequences. HAL, the computer in Sir Arthur C Clarke's novel 2001: A Space Odyssey, was programmed to ensure the success of a mission to Jupiter.
It concluded that the mission would be more likely to succeed if the fallible humans involved were eliminated – and then proceeded to try to do so.
Or suppose a computer was programmed to do whatever was necessary to eliminate contagious diseases such as Ebola or HIV that harmed humanity. Unless appropriate safeguards were established, the computer might conclude the simplest way to eliminate such diseases would be to eliminate humanity itself. That would certainly solve the problem.
Malice
But the necessary motivation may not have to come from the robots themselves. It could come from humans.
Black hat hackers create destructive viruses for fun, to steal or to cause chaos. It is a virtual certainty that a similar class of humans will arise to hack robots and smart computers to make them destructive.
Likewise, a country at war could either create a class of destructive military robots and turn them on their enemies or, cheaper still, hack an enemy's robots to cause them to turn on their makers. This kind of cyberwar also seems a virtual certainty.
There is precedent
Finally, there's precedent for an entity evolving into intelligence and globe-spanning abilities: humanity itself. If it could happen once, it could happen again, only this time we would be the authors of that origin.
So I fear we must conclude that, yes, there could be a Terminator in our future. And if you remain unconvinced by my arguments, consider those put forth by people such as Stephen Hawking, Bill Gates, Elon Musk or the University of Cambridge's Centre for the Study of Existential Risk.
They are all serious and concerned about this question. A more important question may be: are we smart enough to cope intelligently with this challenge?
Or, as science fiction writer Larry Niven put it, think of it as evolution in action.
Richard Worzel is a chartered financial analyst, best-selling author and has for more than 25 years been a globally renowned futurologist.
© Copyright IBTimes 2024. All rights reserved.