Autonomous cars would not make life-or-death ethical decisions, says Volvo engineer
Self-driving cars would never get themselves into situations where they would have to make an ethical decision to save lives, claims Trent Victor, senior technical leader of crash avoidance at Volvo.
Victor says autonomous cars, such as those currently being developed by Volvo, BMW, Google, Ford and many others, would "thoroughly evaluate" any "conflict situation" with other vehicles well in advance of a collision and reduce the chances of an accident by slowing down or switching lanes. And if a car nonetheless encounters a collision it cannot avoid, there would be so little time to react that braking in a straight line would be the only option.
"It will be programmed to avoid getting into risky situations, to proactively stay within a zone where conflicts are resolvable, for example by changing lanes or slowing down," Victor told IBTimes UK.
"If there is, despite this, a conflict that occurs it will be a very time-critical conflict where braking is the main measure of resolving or mitigating the conflict," he added, saying in this situation it would not be possible to steer out of harm's way "to the extent that many of the ethical dilemmas presuppose is possible."
By looking ahead, perhaps by communicating directly with other vehicles, the autonomous car will "not get into a critical conflict situation where it has to make an ethical decision," Victor said. He added that in order for vehicles with near-100% autonomy to go on sale, they "must be very cautious and safe."
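Victor's description amounts to a two-tier policy: resolve conflicts proactively (change lanes or slow down) while time allows, and fall back to straight-line braking once a conflict becomes time-critical. The Python sketch below illustrates that logic under stated assumptions; the class names, the 1.5-second threshold and the `resolve` function are illustrative inventions for this article, not Volvo's actual software.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class Action(Enum):
    MAINTAIN = auto()        # no conflict detected
    CHANGE_LANE = auto()     # proactive resolution
    SLOW_DOWN = auto()       # proactive resolution
    BRAKE_STRAIGHT = auto()  # last-resort, time-critical response

@dataclass
class Conflict:
    time_to_collision_s: float  # estimated seconds until impact
    adjacent_lane_clear: bool   # whether a lane change is available

# Hypothetical threshold: below this, steering out of harm's way is
# assumed infeasible and braking is the main mitigation measure.
TIME_CRITICAL_S = 1.5

def resolve(conflict: Optional[Conflict]) -> Action:
    """Two-tier avoidance policy sketched from Victor's description."""
    if conflict is None:
        return Action.MAINTAIN
    if conflict.time_to_collision_s < TIME_CRITICAL_S:
        # Too little time to steer; brake in a straight line.
        return Action.BRAKE_STRAIGHT
    # Proactive zone: keep the conflict resolvable by repositioning.
    if conflict.adjacent_lane_clear:
        return Action.CHANGE_LANE
    return Action.SLOW_DOWN
```

In this framing the ethical dilemma never arises by design: the planner's job is to keep the vehicle inside the proactive zone, so the only time-critical action it ever needs is braking.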
Autonomous cars and the Trolley Problem
Analysts and critics of autonomous car technology often raise the 'Trolley Problem', a hypothetical situation in which you can either flick a switch to divert a runaway trolley so that it kills one person instead of five, or do nothing and let it kill the five. Most people given this choice would divert the trolley to save the five. But in another version of the problem, where you must push a large man in front of the runaway trolley, killing him to stop it hitting the five, most people say they would do nothing, despite the outcome being the same.
It was previously assumed that autonomous cars would need to make similar decisions when confronted with a no-win situation. How that decision is made could come down to rules written by software programmers, chance (the car making a random 50/50 decision), or perhaps even lessons the vehicle has taught itself from past experience.
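To make those three possibilities concrete, the sketch below frames them as interchangeable "policies" that each pick one outcome from a set of unavoidable options. Everything here (the outcome labels, the priority table, the harm scores) is a hypothetical illustration, not any manufacturer's actual decision logic.

```python
import random
from typing import Callable, Dict, Sequence

Outcome = str
Policy = Callable[[Sequence[Outcome]], Outcome]

def programmer_rule(outcomes: Sequence[Outcome]) -> Outcome:
    # Decision "down to software programmers": a fixed, hand-written
    # priority order baked in at development time.
    priority = {"brake_straight": 0, "swerve_left": 1, "swerve_right": 2}
    return min(outcomes, key=lambda o: priority.get(o, len(priority)))

def coin_flip(outcomes: Sequence[Outcome]) -> Outcome:
    # Decision left to luck: a random pick among the options.
    return random.choice(list(outcomes))

def learned_policy(outcomes: Sequence[Outcome],
                   harm_scores: Dict[Outcome, float]) -> Outcome:
    # Stand-in for lessons learned from past experience: choose the
    # outcome a (hypothetical) trained model scores as least harmful.
    return min(outcomes, key=lambda o: harm_scores.get(o, float("inf")))

options = ["brake_straight", "swerve_left"]
print(programmer_rule(options))   # brake_straight (fixed rule)
print(coin_flip(options))         # either option, at random
print(learned_policy(options, {"swerve_left": 0.2,
                               "brake_straight": 0.7}))  # swerve_left
```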
To avoid any of these circumstances, the autonomous cars of the not-so-distant future would need to look much further ahead than today's systems, most of which cannot see beyond the vehicle directly in front. Several carmakers, along with Google, are believed to be targeting near-full autonomy by 2020.