Self-driving cars were launched for a practical purpose: to satisfy the desire to save time that technology offers, but also to improve safety. When a pedestrian in Arizona was struck and killed by an Uber self-driving car, questions were raised about their utility. However, Uber has been thinking about this problem since it began creating autonomous vehicles.
Contacted by The Circular, Wessel Reijers, PhD Researcher in Ethics of Digital Content Technologies at the Adapt Centre, notes:
The event itself is not that impressive, but it does confirm something that everyone saw coming: that accidents can and will happen with self-driving cars.
To determine how its autonomous cars should react in the event of a fatal collision, Uber carried out a broad survey to address this technical problem and to program an algorithm that matches human reactions as closely as possible.
A choice people make in front of a screen is not meaningful, at least no more than one made in a theory driving exam
In these traffic incidents, the driver’s reaction is guided by instinct. New Scientist explains that instinct directs our actions when we face danger, such as an imminent fatal collision. It is then fairly easy to determine who is responsible. Autonomous cars, however, cannot rely on instinct: they must be programmed with an algorithm. We may then wonder whether it is progress for this zero-sum game to be pre-recorded on the basis of theoretical tests, like MIT’s.
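To make concrete what a “pre-recorded” decision might look like, here is a minimal, purely hypothetical sketch. It does not reflect any actual code from Uber or MIT; it only illustrates the idea of a fixed, utilitarian rule applied to a standard scenario, with no instinct involved. All names and numbers are invented for illustration.

```python
# Hypothetical sketch only: illustrates a pre-programmed, utilitarian
# collision rule, NOT any manufacturer's actual algorithm.
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible manoeuvre and its predicted casualties."""
    action: str
    pedestrian_casualties: int
    occupant_casualties: int

def choose_outcome(outcomes):
    """Pick the manoeuvre with the fewest total casualties.

    This is the "save as many lives as possible" rule discussed in the
    article, even when the vehicle's own occupants are among the losses.
    """
    return min(
        outcomes,
        key=lambda o: o.pedestrian_casualties + o.occupant_casualties,
    )

# A standard, pre-recorded scenario (invented numbers):
scenario = [
    Outcome("continue straight", pedestrian_casualties=2, occupant_casualties=0),
    Outcome("swerve into barrier", pedestrian_casualties=0, occupant_casualties=1),
]
print(choose_outcome(scenario).action)  # → swerve into barrier
```

The point of the sketch is that the choice is settled in advance by a standard rule over a standard situation, which is exactly what distinguishes it from a human driver’s instinctive, situation-specific reaction.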
Wessel Reijers added:
Virtuous driving, as such, requires the driver to be awake and succeeding, meaning that he needs to be engaged with the traffic, aware of the surroundings and ready to make a decision based on a particular situation, not on a standard situation.
The dominant opinion is that autonomous cars should save as many lives as possible, even if that means sacrificing the driver and passengers of the vehicle. This approach makes sense from a moral perspective, but it is not easy to sell: nobody would get into a car that could potentially kill them on the way.
Try MIT’s test and comment on how you felt!