The Road Ahead for Autonomous Cars and Auto Insurance

May 17, 2018

The death of a pedestrian who was struck by an autonomous vehicle in Tempe, Arizona, has brought fresh scrutiny to the accelerating development of self-driving cars. The accident on March 18 is bound to be studied exhaustively, both to determine fault and to assess and refine the overall safety of autonomous systems.

According to accounts of the accident, the vehicle, outfitted to test Uber’s autonomous driving system, struck a woman at night as she pushed her bicycle across a road outside of a designated crosswalk. Video of the crash, released by Tempe police, shows a woman emerging from a darkened area seconds before she was struck; in the same span of time, the safety driver looks down multiple times for reasons that aren’t clear. Uber pledged its full cooperation in the unfolding investigation but has already reached a settlement with some of the victim’s family members, while others have come forward, according to multiple news reports.

The first reported pedestrian fatality involving an autonomous vehicle broadens the safety discussion over self-driving technology, much of it highly relevant to auto insurers as self-driving vehicles proliferate on public roads. How will responsibility for accidents be assigned, and how will insurers structure coverage?

Safety in Perspective

Drivers may be wary of ceding control to computers, algorithms, and artificial intelligence, despite humans' own imperfect driving record. NHTSA publications have attributed about 94 percent of auto accidents to human error. Motor vehicle crashes in the United States in 2016 caused 37,461 deaths, or 1.18 deaths per 100 million vehicle miles traveled—all involving human drivers, according to NHTSA.

By comparison, self-driving technology company Waymo recently reported that its fleet of vehicles reached more than 5 million self-driven, fatality-free miles on public roads. On its blog, Tesla has discussed two U.S. driver fatalities involving its vehicles operating in Autopilot mode. In the first case, in 2016, the National Transportation Safety Board (NTSB) largely blamed human error. In a September 2017 press release, the NTSB said the “probable cause” of the accident was “the truck driver’s failure to yield the right of way and a car driver’s inattention due to over-reliance on vehicle automation.”

After the crash, Tesla acknowledged in its blog that Autopilot didn’t detect the white truck that “drove across the highway perpendicular to the Model S” before the accident. The company said in its blog that “neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied.”

Tesla stated that it addressed the limitation with software modifications delivered remotely to affected vehicles. More broadly, the NTSB's report noted that the system could do no more than warn drivers against the kind of inattention cited in the fatal crash while the vehicle was operating in autonomous mode.

The NTSB has joined the investigation into another fatal accident involving a Tesla vehicle operating in Autopilot, which occurred on March 23 near Mountain View, California. Tesla stated in a blog post that the driver had not heeded several warnings from the Autopilot system to keep his hands on the wheel.

In a typical auto accident, blame is generally apportioned among drivers. In the 2016 Tesla crash, the apportionment of some blame to the vehicle raises questions of where autonomous vehicles might fit in future liability determinations. A few manufacturers have publicly stated they will assume liability if their vehicle’s technology is responsible for an accident.

State financial responsibility and compulsory insurance laws typically require liability insurance and, depending on the jurisdiction, certain other coverages for cars. Some state laws include no-fault or personal injury protection (PIP) auto coverage, which often provides first-party coverage for a person's own bodily injury from an accident. Such coverage is typically enacted to bypass costly litigation, minimize delays, and speed up settlements.

The issue of preserving streamlined claim settlement arises with the possibility that autonomous vehicle accidents will push the boundaries of existing auto insurance. While product liability may seem a natural coverage for a self-driving automaker’s exposure, the litigation involved can be lengthy and might produce cascading claims, depending on the structure of the coverage. Would a manufacturer’s product liability coverage encompass all accidents related to a vehicle, or would the manufacturer retain rights to sue component and technology manufacturers? In the Arizona crash, for example, The New York Times reported the vehicle was modified with Uber’s proprietary self-driving system. That system, according to a Bloomberg News report, used LIDAR (light detection and ranging) components supplied by Velodyne.

The degree of driver control over a vehicle could also affect division of liability between driver and manufacturer. As noted in multiple news accounts, the safety driver in the Uber test vehicle was to intervene if the autonomous system ran into difficulty. A production version of a self-driving car might not even be designed for such intervention. If claims are cut sharply by reducing or removing driver error from the accident equation, changes in the insurance structure would likely need to be considered.

In August 2016, ISO surveyed insurers about autonomous vehicles. Nearly half of the survey’s 385 participants cited autonomous vehicles as an important emerging insurance issue, but most respondents had only begun informal discussions of the matter, if any.

Some have suggested addressing the issue with a hybrid of product liability and auto insurance, though this would likely depend on compatibility with existing state insurance and vehicle ownership laws. The idea is that such a combination might help avoid lengthy litigation by creating, in concept, a "one-stop" policy covering the vehicle owner (or "driver," if applicable) and the manufacturer for accidents involving autonomous vehicles.

Another option also discussed involves revised or expanded no-fault insurance—providing first-party coverage in one policy as part of the progression toward fully automated vehicles. As for comprehensive coverage, technological advancements could open new exposures to cyber threats, with potential implications for vandalism and theft.

Rating Issues

Rating aspects of personal and commercial auto insurance are also in play. Traditionally, key elements of personal auto rating include driver characteristics such as age, years licensed, and driving experience, plus any accidents and convictions, subject to state rating laws. In contrast, many commercial auto rating plans, including ISO's, focus less on the driver and more on how and where the vehicle is used, in addition to its physical attributes.

Some insurers have launched usage-based insurance (UBI) offerings, which typically correlate personal auto insurance rating to a vehicle’s use and driver behavior. Commercial auto UBI solutions commonly focus on how and where the vehicle is operated, but not necessarily a specific driver. All these characteristics may lose importance as vehicle data, components, and software become better risk indicators. Driver-focused state insurance laws will warrant scrutiny as autonomous vehicles’ history and capabilities likely grow in importance.

As of year-end 2017, state legislatures had done little to revise existing auto insurance laws for the potential insurance implications of autonomous vehicles. Uneven state legislative and regulatory progress may hinder insurers in designing and maintaining insurance programs that address autonomous vehicles. This will demand insurers’ constant attention as they prepare to evolve with the changing landscape of auto insurance.
