Humanising AI: Driverless cars to be trained on morality to make life-and-death decisions
AI to be taught morality using responses gathered by MIT's "Moral Machine".
MIT researchers have been working on an artificial intelligence (AI) program that could be used to train self-driving cars in ethics and morality. Based on responses gathered through their "Moral Machine", they are designing a system that responds to crash situations the way a human would.
The Moral Machine is an MIT simulator that tackles the moral dilemma of autonomous car crashes. It poses a series of no-win scenarios in which a car must crash either into a barrier or into pedestrians. In both outcomes people die, and it is up to the respondent to choose who lives. After nearly a year of collecting more than 18 million responses, the researchers have applied the results to the AI program.
A report by The Outline describes the Moral Machine as a cache of the gut feelings of random internet users from around the world. Using this data, Ariel Procaccia of Carnegie Mellon University and Iyad Rahwan of MIT, one of the minds behind the Moral Machine, created an AI that can evaluate situations in which an autonomous car has to decide whom to kill.
While there are millions of votes to work with, the report points out that a car at a pedestrian crossing could face countless combinations of people and animals that the Moral Machine never covered.
The AI will have to learn from the existing votes how to respond in situations that have not been voted on, says the report. AI programs excel at exactly this kind of operation: finding patterns in millions of data points and producing answers to predictive tasks is relatively straightforward. The system will, in effect, draw on the collective ethical intuitions of the many Moral Machine respondents, the report points out.
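How a system might generalise from recorded votes to dilemmas nobody voted on can be illustrated with a toy preference model. The sketch below is a hypothetical illustration only, not the method Procaccia and Rahwan actually built: the feature encoding, the example votes and the choice of an off-the-shelf classifier are all assumptions made for the sake of the example.

```python
# Hypothetical sketch: generalising from pairwise moral-dilemma votes.
# The features, data and model are illustrative assumptions, not the
# researchers' actual system.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each dilemma is encoded as the difference between two outcomes' features:
# [pedestrians spared, passengers spared, children involved, crossing legally]
# Label 1 means respondents chose outcome A, 0 means they chose outcome B.
X = np.array([
    [ 3, -1, 1, 1],   # spare 3 pedestrians (incl. a child) crossing legally
    [-2,  1, 0, 0],   # spare 1 passenger over 2 jaywalking pedestrians
    [ 1, -1, 0, 1],
    [-1,  2, 0, 0],
])
y = np.array([1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Estimate how respondents would vote on a scenario never shown to them.
unseen = np.array([[2, -1, 1, 0]])
print(model.predict_proba(unseen))  # probability respondents would spare side A
```

A real system would need far richer scenario features and a principled way of aggregating conflicting votes, but the sketch shows why prediction over millions of recorded choices is, as the report notes, a comparatively simple task for modern machine learning.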
Procaccia believes the AI is not yet ready for deployment, but said: "It is a proof of concept, showing that democracy can help address the grand challenge of ethical decision making in AI."
Since the system draws on millions of viewpoints rather than those of a small controlled group, the report quotes a Duke University paper which concluded that such an AI could end up operating as a morally better system than any individual human.
Crowdsourcing morality does not make AI ethical, says James Grimmelmann, a professor at Cornell Law School. Rather, "it makes the AI ethical or unethical in the same way that large numbers of people are ethical or unethical".
You can also take part in the Moral Machine questionnaire online.
While most autonomous car makers and companies designing self-driving vehicles seem to have got the driving aspect right, there is still a wide gap in the ethical and moral stance a vehicle is forced to take when dealing with a crash. Several regulators around the world have started to formulate their own versions of what they think is the right way for autonomous cars to operate in such fatal scenarios.