A Mathematical Morality for Self-Driving Cars

If you want to improve AI for self-driving cars, turn to the philosophers who are building ‘ethical algorithms’ that could make autonomous vehicles more practical.
The ethical algorithms are being developed by a team of two philosophers and an engineer, funded by a grant from the National Science Foundation.
The approach would give the AI ‘ethical parameters’ from which it could learn how to react in particular situations, rather than having the behavior written directly into the software.
The work is, to some degree, attempting to reduce a moral code to a mathematical equation that machines can understand, interpret, and apply.

Philosophers are building ethical algorithms to help control self-driving cars

A group of philosophers has taken a more practical approach and is building algorithms to solve the problem. Nicholas Evans, a philosophy professor at the University of Massachusetts Lowell, is working alongside two other philosophers and an engineer to write algorithms based on various ethical theories. Their work, supported by a $556,000 grant from the National Science Foundation, will allow them to create various Trolley Problem scenarios and show how an autonomous car would respond according to the ethical theory it follows.

To do this, Evans and his team are turning ethical theories into a language that can be read by computers. Utilitarian philosophers, for example, believe all lives have equal moral weight, so an algorithm based on this theory would assign the same value to the car’s passengers as to pedestrians. Others believe you have a perfect duty to protect yourself from harm. “We might think that the driver has some extra moral value and so, in some cases, the car is allowed to protect the driver even if it costs some people their lives or puts other people at risk,” Evans said. As long as the car isn’t programmed to intentionally harm others, some ethicists would consider it acceptable for the vehicle to swerve defensively to avoid a crash, even if doing so puts a pedestrian’s life at risk.
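To see how an ethical theory might become something a computer can act on, consider a minimal sketch in Python. This is purely illustrative: the function names, risk numbers, and weighting scheme are assumptions for the sake of the example, not the actual algorithms Evans's team is writing. The idea is simply that a theory becomes a set of numeric moral weights, and the car picks the maneuver with the lowest weighted expected harm.

```python
# Hypothetical sketch: encoding an ethical theory as numeric moral weights.
# All names and numbers here are illustrative assumptions, not the real project's code.

def expected_harm(action, weights):
    """Moral-weighted expected harm: sum weight * probability-of-harm per affected party."""
    return sum(weights[party] * p_harm for party, p_harm in action["risks"].items())

def choose_action(actions, weights):
    """Pick the maneuver that minimizes moral-weighted expected harm."""
    return min(actions, key=lambda a: expected_harm(a, weights))

# Two candidate maneuvers in a simplified emergency scenario (made-up probabilities):
actions = [
    {"name": "brake",  "risks": {"driver": 0.5, "pedestrian": 0.1}},
    {"name": "swerve", "risks": {"driver": 0.1, "pedestrian": 0.6}},
]

# Utilitarian weighting: every life counts equally.
utilitarian = {"driver": 1.0, "pedestrian": 1.0}
# A "duty to protect yourself" weighting: the driver carries extra moral value.
self_protective = {"driver": 2.0, "pedestrian": 1.0}

print(choose_action(actions, utilitarian)["name"])      # → "brake"
print(choose_action(actions, self_protective)["name"])  # → "swerve"
```

Under equal weights, braking minimizes total expected harm (0.6 vs. 0.7); doubling the driver's weight flips the decision to swerving (1.1 vs. 0.8). That flip is the point: the same scenario yields different behavior depending solely on which ethical theory supplies the weights.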

Evans is not currently taking a stand on which moral theory is right. Instead, he hopes the results from his algorithms will allow others, whether car buyers or manufacturers, to make an informed decision. Evans isn’t yet collaborating with any of the companies working on autonomous cars, but hopes to once he has results.

Perhaps Evans’s algorithms will show that one moral theory leads to more lives saved than another, or perhaps the results will be more complicated. “It’s not just about how many people die but which people die or whose lives are saved,” Evans said. It’s possible that two scenarios will save equal numbers of lives, but not the same people.
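That last point is easy to miss, so here is a hypothetical illustration (the labels are invented for the example): two outcomes can be indistinguishable by a simple body count while differing entirely in who survives, which is why comparing policies by counts alone is not enough.

```python
# Hypothetical illustration: equal survivor *counts*, but different *people* surviving.
outcome_a = {"survivors": {"driver", "passenger"}}
outcome_b = {"survivors": {"pedestrian_1", "pedestrian_2"}}

same_count = len(outcome_a["survivors"]) == len(outcome_b["survivors"])
same_people = outcome_a["survivors"] == outcome_b["survivors"]

print(same_count)   # → True: both outcomes save two lives
print(same_people)  # → False: they save different people
```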


Read More at qz.com