Four Key Points for Regulators in the Wake of the First Self-Driving Car Pedestrian Fatality
Somber news out of Arizona this week as an autonomous Volvo modified and operated by Uber reportedly struck and killed a pedestrian. Although this is not the first fatality in which a self-driving car played a role, the incident marks the first time a pedestrian has been killed by a car employing self-driving technology. Many details of the accident are still under investigation, and at least one investigator has suggested that even an attentive human driver could not have avoided the collision. Nevertheless, Uber has reportedly suspended its self-driving testing, and many commentators are already using the incident to call for tighter regulation of autonomous vehicles. Any automotive fatality is an undeniable tragedy, but it is important that regulators think critically about the legal landscape for autonomous vehicles and avoid reactionary lawmaking that could hamper the development of technology with the potential to ultimately improve automotive safety.
The self-driving car debate is largely playing out at the state level right now. Although federal law prevents automakers from selling vehicles that do not comply with federal motor vehicle safety standards (which require, for instance, that cars be outfitted with human controls such as steering wheels and brake pedals), federal law leaves ample room for self-driving software solutions and aftermarket hardware changes. The Department of Transportation’s (DOT) National Highway Traffic Safety Administration (NHTSA) issued a policy document on autonomous vehicles back in 2016, but the publication merely provides guidance for manufacturers, suppliers, developers, and other entities involved in the development and deployment of automated vehicles, and does not impose any mandates or legally binding restrictions. Last year, the House of Representatives passed a bill called the SELF DRIVE Act that would require NHTSA to issue new safety standards to “accommodate the development and deployment of highly automated vehicles” and would greatly expand NHTSA’s authority to grant exemptions from federal safety standards to facilitate the production of highly autonomous vehicles. But even before this week’s accident, the bill was held up in the Senate.
The federal legal landscape has left room for states to decide whether and under what circumstances self-driving cars can use public roads. In Arizona, an executive order permits companies to test and operate fully autonomous vehicles, provided certain safety-related conditions are met, including that the autonomous vehicle is “capable of complying with all applicable traffic and motor vehicle safety laws.” Companies have been testing autonomous vehicle solutions in Arizona and other states with enabling legal frameworks, with considerable success. Uber has logged 3 million miles using autonomous technology, and Alphabet’s Waymo boasts more than 5 million miles of autonomous vehicle testing on public roads.
Although accidents involving self-driving vehicle technology have been rare, and fatalities rarer still, the incident in Tempe this week may place additional pressure on regulators to rein in the testing and use of autonomous vehicles on public roads. But it is not clear at present that reining in testing is an appropriate or rational response. Because autonomous technology has the potential to transform automotive transportation and ultimately make the roads far safer for passengers and pedestrians alike, regulators should take care that any new legislation does not thwart valuable technological development. To that end, these key points should remain in the conversation:
- Facts are still forthcoming. Above all, lawmakers should wait until the investigations are complete before taking any action. Investigations of the Tempe incident are still ongoing; once they are complete, stakeholders will have a better sense of the factors that contributed to this collision. Facts, rather than assumptions, should inform policymaking regarding autonomous driving and automotive safety.
- Self-driving cars are likely to save lives in the long run. Globally, on average, 3,287 people die each day in car crashes. In the U.S. alone, 37,461 people died in car crashes in 2016. Although this amounted to only 1.18 deaths per 100 million miles traveled, automotive accidents remain a leading cause of death in the United States. Given that distraction, drowsiness, and drunkenness played a role in nearly 40% of fatal automotive accidents in the U.S. in 2016, automotive and tech companies leading the charge on autonomous vehicles believe that such vehicles will be far safer than human-driven cars once the technology is perfected.
- Technology cannot develop without testing. There will be pressure to push the pause button on continued testing of autonomous technologies on public roads, and the accident may make it harder to push for expanded testing and use of autonomous vehicles at the state and local level. But the key to the development of any technology is iterative testing and use, and that is especially true of the software that controls self-driving cars. This technology depends on collecting vast quantities of data from a wealth of driving situations, meaning that a slow-down in testing could lead to long delays in improving the safety and utility of these vehicles.
- Autonomous vehicle technology has wide-ranging applications. Hampering the development of self-driving cars would have consequences not only for the automotive industry but for a broad variety of technologies. From unmanned aircraft systems to trucks and ships, sense-and-avoid technology is key to a future of autonomous vehicles and transportation. Self-driving cars provide a critical platform for developing the kind of sensor and software packages that will be necessary to move each of these industries forward.
If lawmakers and policy advocates heed these points and principles, they can ensure that the life-saving potential of autonomous vehicles is appropriately tested and evaluated. A contrary, reactive approach may be counterproductive and cost more lives than it saves.