Why Safety Numbers on Driverless Cars Don’t Add Up: Viewpoint
The “Race to Autonomy” has become the high-tech competition of our time, pitting automakers, suppliers, technology titans and Silicon Valley startups against one another for the first bite of what could be a trillion-dollar driverless-vehicle business. This horse race between some of the biggest companies in the world makes for gripping entertainment, but there is mounting evidence that the spectacle comes at the cost of important long-term perspective.
The California Department of Motor Vehicles has released its annual “disengagement reports” documenting each time a human “safety driver” had to take control of an autonomous test vehicle on the state’s roads. Once again, the reported numbers are being widely used by the media to handicap the competition. This is understandable given how little data are publicly available. But any temptation to take these disengagement numbers as an apples-to-apples comparison between makers should be resisted.
California law requires disengagement reports be filed whenever a “failure of the autonomous technology is detected” or when “safe operation of the vehicle requires” the human to “take immediate manual control.” The problem is that licensed companies have considerable leeway in interpreting these terms, meaning certain disengagement events might be reported by one company but not another.
After GM’s Cruise division put in a surprisingly strong showing in last year’s report, chief executive Kyle Vogt predicted that his outfit would be “in the number one spot” by the end of this year. Cruise didn’t end up passing the perennial leader, Alphabet-owned Waymo, logging 0.79 disengagements per 1,000 miles to Waymo’s 0.18. But it wasn’t for lack of trying: at least four disengagements were witnessed that are nowhere to be found in Cruise’s disengagement report to the DMV.
None of those cases involved a catastrophic failure or imminent danger, but they show how nebulous the concept of autonomous system “failure” can be. In two instances on Nov. 28, reported by Reuters and Wired, Cruise vehicles came upon obstacles (a taco truck and a bus, respectively) partially blocking their lane, whereupon they sat motionless until the safety driver steered around the obstruction. The other two incidents, which occurred two days later and were reported in a Morgan Stanley research note, appear to involve similarly prolonged hesitations over situations that were easily navigated once the human driver took over.
Because the Cruise cars were still pondering their next move when the driver took over, these incidents apparently do not constitute failures that must be reported to the DMV. Though a neat piece of legalism, this logic can’t help but make one wonder how long a vehicle can remain motionless on public roads before that itself constitutes a failure of the autonomous technology. (Or before another vehicle comes along and smashes into it.) Put another way, how spectacularly did Cruise’s cars have to fail in the more than 100 incidents the company did report in 2017? These questions come up every year.
They are not the sorts of questions you want your company to be asked when you’re getting into the business of automating the most dangerous thing most Americans do every day. After all, convincing people to put their lives into the hands of your robot car is going to require a profound level of trust.
The results of the 2017 California DMV disengagement reports are unlikely to determine which companies will and won’t prosper in the autonomous age, but the patterns of trust and transparency each company establishes along the way absolutely will.
This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.
Edward Niedermeyer, an auto-industry analyst, is the co-founder of Daily Kanban and the former editor of the blog The Truth About Cars.