When I went to Las Vegas last week I didn’t go to visit clubs, gamble or even take in a show. Those were activities for another trip. This time I attended the Automotive IQ System Safety conference. I met engineers, auto executives, and computer programmers who shared the same concern about autonomous vehicles: how safe is safe enough?
Companies like Uber and Tesla are rolling out autonomous technology faster than the industry, or the public, is comfortable with.
In doing so, they are forcing adoption of these systems and instigating an arms race in vehicle technology. It will no longer suffice to have the most ergonomic cup holder if your car can’t compare to a competitor’s advanced driver-assist technology. But rolling the technology out so aggressively has many concerned. As with many cutting-edge innovations, there will be casualties and there will be costs.
So the attendees’ task in Las Vegas was to establish a standard for acceptable safety going forward.
It was impressive to see how sincerely these people cared about vehicle safety, and not just from a liability standpoint. It was comforting to learn about the issues they were tackling. These were very smart people who cared a great deal and had the resources to take all necessary precautions. But the issue is complex and will likely evolve over time. There were, however, some starting points.
First of all, there are about 1.3 million vehicle-related deaths worldwide per year. Somehow this has become acceptable. If that standard were applied to another sector of society, it would be outrageous: if 1.3 million people died every year in airplane crashes, nobody would fly. Yet somehow, we give teenagers the keys to a car and figure that’s just the price of growing up. When we look at accidents per mile driven, current autonomous test vehicles and adventurous owners of cars with advanced driver-assist features have shown the technology to be half as risky as human-piloted cars. In other words, an autonomous vehicle drives twice as far as a human before encountering a fatal accident.
Is that good enough? Not according to those at the conference, and probably not according to average people. Somehow a robot car being half as likely to kill you as a human-driven one fails to be safe enough.
In order to get to a point where we consider something acceptably safe, we must find our risk tolerance. For example, we don’t want autonomous vehicles to be merely safer than the average driver, because the average includes drunk drivers, distracted drivers, young drivers, old drivers, and aggressive drivers. The standard needs to be how much safer autonomous driving is than good drivers. When we eliminate those subcategories, we are left with about ten percent of driving fatalities worldwide happening to “good drivers,” i.e. people who weren’t drunk, distracted, new, incapable, or driving inappropriately.
To determine how much safer than good drivers autonomous vehicles must be in order to be tolerable, it helps to look at other safety standards. For example, how clean must drinking water be in order to be considered safe? When scientists talk about parts per million of a particular contaminant, it is lost on average people. But if we are told a pollution level will cause one fatality per 100,000 people, or one in a million over a lifetime, that seems to be acceptable in the United States. These numbers often drive our pollution and safety standards.
Applying that threshold to vehicle fatality statistics may help define the standard. In order for autonomous vehicles to be truly accepted as safe and introduced en masse, they must be a hundred times safer than a good human driver. Most at the conference suggested the technology is very close. If it were accomplished, and all cars were driven without human pilots, yearly fatalities would drop from 1.3 million to roughly 1,300: good drivers account for about ten percent of the 1.3 million deaths, or 130,000, and cars a hundred times safer than good drivers would cause about one hundredth of that. With such an automated system in place, fatal drunk driving accidents, among others, would be eliminated.
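For readers who want the arithmetic spelled out, here is a back-of-the-envelope sketch in Python. It uses only the rough figures quoted above (1.3 million annual deaths, ten percent attributable to good drivers, a hundredfold safety target); these are illustrative estimates, not precise statistics.

```python
# Back-of-the-envelope projection using the article's rough figures.

total_fatalities = 1_300_000   # estimated worldwide vehicle-related deaths per year
good_driver_share = 0.10       # rough share of fatalities involving "good drivers"
safety_factor = 100            # target: autonomous cars 100x safer than good drivers

# Fatalities attributable to good drivers under current conditions.
good_driver_fatalities = total_fatalities * good_driver_share  # 130,000

# Projected yearly fatalities if every car met the 100x-safer standard.
projected_fatalities = good_driver_fatalities / safety_factor

print(int(projected_fatalities))  # 1300
```

Seen this way, the jump from 1.3 million to 1,300 is really two steps: first stripping out the ninety percent of deaths caused by bad driving behavior, then dividing the remainder by the hundredfold safety improvement.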
We know, however, that the rollout of autonomous driving technology won’t be a simple conversion. Human pilots will coexist with robot cars, and every self-driving vehicle that is in, or causes, an accident will get far more ink than its human-driven counterparts. This attention will, no doubt, add friction to the adoption of driverless technology. But we must also realize there are opportunity costs involved in the delay.
If driverless technology becomes merely twice as safe as human drivers, then it is fair to say that half the people killed in traffic accidents might have survived had we converted. The problem with this math is that there is no liability for lives a technology failed to save; liability is reserved for lives lost because of the technology. This adds another layer of resistance as we slowly become more comfortable with software driving our cars.
So get your torches and pitchforks ready, because autonomous driving vehicles are coming. There will be accidents and there will be casualties. But before you set fire to or puncture the tires of an autonomous car, remember, there are good, conscientious, very smart people working tirelessly to safeguard against every contingency.
But as with people, nothing is foolproof. At least not yet.
Jerry Mooney is a Language and Communications Professor at the College of Idaho and the author of History Yoghurt & the Moon. Follow him on Twitter: @JerryMooney
Whiteboard Photos: Sarah Ruddat, Program Director of Automotive IQ