Can a self-driving car be ethical and act like a human? The answer is yes. In a first, researchers have found that human ethical decisions can be implemented in machines.
The finding has important implications for managing the moral dilemmas that autonomous cars might face on the road.
The study showed that human moral behaviour can be well described by algorithms that could be used by machines as well.
Until now it had been assumed that moral decisions were strongly context-dependent and, therefore, could not be modelled or described algorithmically. The study found quite the opposite.
"Human behaviour in dilemma situations can be modelled by a rather simple value-of-life-based model that is attributed by the participant to every human, animal, or inanimate object," said lead author Leon Sutfeld, from the University of Osnabruck in Germany.
For the study, published in Frontiers in Behavioral Neuroscience, the team used immersive virtual reality to analyse human behaviour in simulated road traffic scenarios.
The participants were asked to drive a car through a typical suburban neighbourhood on a foggy day, during which they encountered unexpected, unavoidable dilemma situations involving inanimate objects, animals and humans, and had to decide which was to be spared.
The findings have major implications for the debate around how self-driving cars and other machines should behave in unavoidable dilemma situations.
Since it now seems possible that machines can be programmed to make human-like moral decisions, it is crucial that society engages in an urgent and serious debate, the researchers said.
"We need to ask whether autonomous systems should adopt moral judgements. If yes; should they imitate moral behaviour by imitating human decisions; should they behave along ethical theories and if so which ones; and critically, if things go wrong who or what is at fault," explained Gordon Pipa, Professor at the University of Osnabruck.