

Who Is Responsible In A Crash With A Self-Driving Car?

This article is more than 4 years old.

On March 18, 2018, at nearly 10 PM, a self-driving Volvo struck and killed a pedestrian, a woman named Elaine Herzberg. Herzberg's death was the first pedestrian fatality involving a self-driving car. The vehicle was a test car that Uber was operating in Arizona. Its software could not determine whether Herzberg was a pedestrian, a bicycle, or another car, nor predict where she was going. Video showed that the safety driver, who was supposed to serve as a backup behind the wheel, was not watching the road at the time of the collision; instead, she was watching an episode of "The Voice."

This accident prompted Uber to temporarily suspend testing of its self-driving cars in Tempe, San Francisco, Pittsburgh, and Toronto, and it set off a wave of legal action. It also led people to revisit the question: who is at fault in an accident with a self-driving car?

In the collision that killed Herzberg, the blame was divided among the safety driver, Uber, the self-driving car, the victim, and the state of Arizona.

In a new study from Columbia University, researchers tackled the problem of liability in a collision involving a self-driving car. Who is at fault: the driver, the car, the manufacturer, or someone else? The researchers developed a game-theoretic model of the interactions among human drivers, the self-driving car manufacturer, the car itself, and lawmakers. Their goal was to find the optimal assignment of liability while ensuring that no party takes advantage of another.

They found that the human "drivers" of self-driving cars place a good deal of trust in the "intelligent" car, even to the point of taking more risks. Dr. Xuan (Sharon) Di, lead author of the paper, says, "We found that human drivers may take advantage of this technology by driving carelessly and taking more risks, because they know that self-driving cars would be designed to drive more conservatively."
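The moral-hazard effect Di describes can be sketched with a toy best-response calculation. This is a hypothetical illustration, not the paper's actual model: the care levels, costs, and crash probabilities below are made-up numbers chosen only to show how a driver's rational care level depends on the share of liability assigned to them.

```python
# Hypothetical sketch (not the Columbia model): a human driver chooses a care
# level in [0, 1]; more care costs effort but lowers crash probability.
# liability_share is the fraction of the crash cost the driver must bear.

def expected_cost(care, liability_share, crash_cost=100.0, effort_cost=10.0):
    # Crash probability falls linearly with the driver's care level.
    p_crash = 0.2 * (1.0 - care)
    return care * effort_cost + liability_share * p_crash * crash_cost

def best_care(liability_share):
    # The driver's best response: the care level minimizing expected cost,
    # searched over a coarse grid of candidate care levels.
    grid = [i / 10 for i in range(11)]
    return min(grid, key=lambda c: expected_cost(c, liability_share))

# With little liability the rational driver exerts no care (moral hazard);
# assigning a larger liability share restores full care.
print(best_care(0.2))  # low liability  -> 0.0 (no care)
print(best_care(0.8))  # high liability -> 1.0 (full care)
```

The point of the sketch is the regulator's lever: shifting the liability share changes the driver's cost-minimizing behavior, which is the kind of trade-off the game-theoretic model formalizes across drivers, manufacturers, and lawmakers simultaneously.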

The researchers found that precisely dividing liability across various scenarios produced the optimal outcome: human drivers and operators are kept from growing complacent, while cars are assured to be developed safely. Such a policy would need to evolve as more and more self-driving cars enter the roads. The results also help determine how lawmakers could adapt to this new landscape and how manufacturers could be incentivized, through subsidies and regulation, to develop cars that outperform solely human-driven cars, encouraging safety even with increased production costs.
