Would you buy a car that might risk your life to save a pedestrian? That's the question researchers are mulling in a recent study about driverless cars.
When it comes to autonomous cars, people generally approve of cars programmed to sacrifice their passengers to save others, but these same people are not enthusiastic about riding in such "utilitarian" vehicles themselves, the survey revealed.
This inconsistency illustrates an inherent social tension between the good of the individual and the good of the public. It persisted across a wide range of survey scenarios, revealing just how difficult it will be to make the underlying programming decisions for autonomous cars, decisions the study's authors say should be settled well before these cars become a global commodity.
Autonomous vehicles, or AVs, have the potential to benefit the world by eliminating up to 90 percent of traffic accidents. But not all crashes will be avoided, and some crash scenarios will require AVs to make difficult ethical decisions.
To begin to inform a collective discussion about the way AVs should make such decisions, Jean-François Bonnefon and colleagues conducted six online surveys of U.S. residents between June and November 2015, asking participants questions about how they would want their AVs to behave.
The scenarios involved in the survey varied in the number of pedestrian and passenger lives that could be saved, among other factors.
Overall, participants said that AVs should be programmed to be utilitarian, but the same people also said they would prefer to buy cars that protected them and their passengers, especially if family members were involved.
This suggests that if both self-protective and utilitarian AVs were allowed on the market, few people would be willing to ride in the latter, the authors say, even though they would prefer others to do so.
Bonnefon et al. note that regulation may be necessary, but it could also be counterproductive: their survey results suggest that such regulation could substantially delay the adoption of AVs.
In a related Perspective, Joshua D. Greene highlights additional challenges around programming driverless cars, including how manufacturers of utilitarian vehicles will be criticized for their willingness to kill their own passengers, while manufacturers of self-protective cars "will be criticized for devaluing the lives of others."
Determining just how to build ethical autonomous machines "is one of the thorniest challenges in artificial intelligence today," Bonnefon and colleagues conclude. Their data-driven approach, however, highlights how the field of experimental ethics can provide key insights as more and more autonomous cars hit the road.
The study appears in the journal Science.