Robots are being taught to say no to commands, echoing Asimov's Three Laws of Robotics
Researchers at Tufts University in Massachusetts, US, are training robots to understand why a human's command might be a bad idea and to decide to refuse to carry it out.
If this sounds familiar, it should: it is the central premise of the great science fiction writer Isaac Asimov's Three Laws of Robotics, which state:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Asimov also later introduced an extra law, the Zeroth Law, to supersede the Three Laws of Robotics: a robot may not harm humanity, or, by inaction, allow humanity to come to harm.
At the moment, robots are trained to perform one specific task and will continue performing it ad infinitum until given new instructions, even in the face of an accident. That was seen in the unfortunate incident at a Volkswagen plant in Germany, where a robotic arm trapped a 22-year-old worker and crushed him against a metal plate.
This is why some robotics researchers are wary of artificial intelligence: the fear is not that robots will become too clever, but that their ignorance could lead to more human deaths if they are given control over essential infrastructure.
To prevent this, researchers at Tufts University's Human-Robot Interaction Lab have been developing software that enables a robot to reject a command from a human if it has a good enough reason to do so.
According to linguistic theory, when someone asks you to do something, for example to kill another person, a concept called "felicity conditions" marks the difference between understanding what the words of that command mean and understanding, and being capable of, the wider implications of that command, i.e. causing the death of another human being.
So the researchers broke that idea down and used specific felicity conditions to programme a personal ethics system into the robot. When given a command by a human, the robot works through a series of checks (sketched in code after the list below) that include:
- Knowledge: Do I know how to do X?
- Capacity: Am I physically able to do X now? Am I normally physically able to do X?
- Goal priority and timing: Am I able to do X right now?
- Social role and obligation: Am I obligated based on my social role to do X?
- Normative permissibility: Does it violate any normative principle to do X?
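To make the flow concrete, here is a minimal Python sketch of such a check chain. It is not the Tufts team's code: the Robot and Command classes, the trust score and the placeholder predicates are all assumptions used purely to illustrate how a command could be run through the five conditions in order, with the first failed check turning into a spoken refusal.

```python
# Minimal, hypothetical sketch of a felicity-condition check chain.
# None of these class or method names come from the Tufts system.
from dataclasses import dataclass, field
from typing import Callable, List, Optional, Set, Tuple


@dataclass
class Command:
    action: str           # e.g. "walk_forward"
    speaker: str          # who issued the command
    speaker_trust: float  # assumed 0.0-1.0 trust score for the speaker


@dataclass
class Robot:
    known_actions: Set[str] = field(default_factory=set)
    busy: bool = False

    def check_command(self, cmd: Command) -> Optional[str]:
        """Return None if the command passes every condition, else a reason to refuse."""
        # Each (reason, test) pair mirrors one felicity condition from the list above.
        checks: List[Tuple[str, Callable[[], bool]]] = [
            ("I do not know how to do that", lambda: cmd.action in self.known_actions),
            ("I am not able to do that", lambda: self.has_capacity(cmd.action)),
            ("I cannot do that right now", lambda: not self.busy),
            ("That is not something I am obligated to do", lambda: self.is_obligated(cmd)),
            ("That would violate a principle I follow", lambda: self.is_permissible(cmd.action)),
        ]
        for reason, passes in checks:
            if not passes():
                return reason  # the first failed condition becomes the spoken refusal
        return None            # all conditions hold: carry out the command

    # Placeholder predicates standing in for real perception, planning and
    # ethical-reasoning modules.
    def has_capacity(self, action: str) -> bool:
        return True

    def is_obligated(self, cmd: Command) -> bool:
        return True

    def is_permissible(self, action: str) -> bool:
        return action != "walk_off_table"  # toy rule: refuse actions that harm the robot


robot = Robot(known_actions={"walk_forward", "walk_off_table"})
print(robot.check_command(Command("walk_off_table", "scientist", 0.9)))
# -> "That would violate a principle I follow"
```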
In the researchers' recorded experiments, a scientist asked the robot to walk off the edge of a table. The robot refused the command because it made the connection that walking off the table would cause it harm.
Interestingly, the researchers also programmed the robot so that a human can override its personal ethics system. But if the person giving the command does not have a sufficiently trusted relationship with the robot, the robot can again decide to ignore the command.
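Again only as a sketch, and reusing the hypothetical Robot and Command classes from the example above, a trust-gated override might look something like this; the threshold value is an assumption, not a figure from the paper.

```python
# Hypothetical continuation of the sketch above: a sufficiently trusted human
# can override a refusal, an untrusted one cannot.
TRUST_THRESHOLD = 0.8  # assumed cut-off, not a value taken from the paper

def respond(robot: Robot, cmd: Command, override_requested: bool = False) -> str:
    reason = robot.check_command(cmd)
    if reason is None:
        return f"Okay, doing {cmd.action}."
    if override_requested and cmd.speaker_trust >= TRUST_THRESHOLD:
        return f"Okay, overriding my objection and doing {cmd.action}."
    return f"Sorry, I can't do that: {reason.lower()}."
```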
The open-access paper, "Sorry, I Can't Do That": Developing Mechanisms to Appropriately Reject Directives in Human-Robot Interactions, was presented at the AI for Human-Robot Interaction symposium in Washington DC on 12 to 14 November 2015.