Why do we react so differently to autonomous car crashes?

By law, the pre-self-driving stage still requires nerds to sit nervously in driver seats, unsure what to do with their hands but excited to be part of “the future”.

Well, it happened. The first reported autonomous car crash fatality. Testing suspended, people freaking out. But… should they?

If I were a less lazy researcher, I would track down local newspapers calling for an end to automobiles in the 1890s, after one first killed a pedestrian. We could have a good chuckle at how short-sighted those hat-and-vest-wearing luddites were way back when, what with their trying to curb the inevitable advance of American car culture and all.

But let’s be honest (and check Wikipedia): “National Highway Traffic Safety Administration (NHTSA) 2016 data shows 37,461 people were killed in 34,436 motor vehicle crashes, an average of 102 per day.” And that’s now, after all that pesky safety stuff Nader fought for.

So deaths caused by (shall we call them “driver-ful”?) cars, though widely considered terrible tragedies, are something we accept as part of the price we pay to have cars at all, since cars are something we mostly agree we mostly need.

What’s different about crashes without a driver that causes each incident to generate so much interest?

Is that going to change in the coming years as we get used to them being part of life?


How should self-driving cars handle potentially fatal accidents?

Turns out your answer depends a lot on whether you’re the car or the pedestrian.


Self-driving cars sound awesome. Less traffic, fewer accidents, more free mental bandwidth while commuting. But nothing is perfect, and some scientists are beginning to examine how automated cars should handle accidents:

Here is the nature of the dilemma. Imagine that in the not-too-distant future, you own a self-driving car. One day, while you are driving along, an unfortunate set of events causes the car to head toward a crowd of 10 people crossing the road. It cannot stop in time but it can avoid killing 10 people by steering into a wall. However, this collision would kill you, the owner and occupant. What should it do?

One way to approach this kind of problem is to act in a way that minimizes the loss of life. By this way of thinking, killing one person is better than killing 10.

But that approach may have other consequences. If fewer people buy self-driving cars because they are programmed to sacrifice their owners, then more people are likely to die because ordinary cars are involved in so many more accidents. The result is a Catch-22 situation.

So one could abstractly argue all day about what’s right, and if you’re able to take yourself out of the equation, the math is what it is.
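If you strip the self-interest out, the utilitarian rule is simple enough to write down. Here is a minimal sketch in Python; the action names and fatality counts are invented for illustration, and a real controller would be weighing uncertain probabilities rather than clean body counts:

```python
# Toy sketch of the "minimize loss of life" rule from the dilemma above.
# All names and numbers are hypothetical; real systems weigh far more
# than a single predicted body count.

def pick_action(outcomes):
    """Return the action whose predicted outcome kills the fewest people.

    `outcomes` maps an action name to its expected fatality count.
    """
    return min(outcomes, key=outcomes.get)

# The scenario from the quote: swerve into the wall (kills the occupant)
# or stay the course (kills the 10 pedestrians).
scenario = {"swerve_into_wall": 1, "stay_on_course": 10}

print(pick_action(scenario))  # -> swerve_into_wall
```

Written out like that, the answer is trivial. The hard part, as the Catch-22 above points out, is that the “1” in the swerve case is you.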


If it were up to you to decide how autonomous cars handle accidents, what would you program them to do?


How does your answer change if:
a) you’re the first one driving one?
b) you’re also in charge of convincing other people to buy one?
c) everyone is required to drive one (and is that worth doing)?