How much testing will prove automated cars are safe?

Peter Els | 03/07/2018

In 2016, road accidents worldwide accounted for some 29,501,040 injuries, while 1,045,180 people lost their lives. In America alone, according to the Federal Highway Administration, 4.6 million road users were injured and approximately 40,000 killed over the same period, across 3.2 trillion miles travelled; that equates to 1.25 fatalities and 144 injuries per 100 million miles driven. The National Highway Traffic Safety Administration attributes 94% of these accidents to human error.
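Those per-mile rates follow directly from the raw totals. As a quick sanity check, a minimal sketch using the approximate US figures quoted above:

```python
# Back-of-the-envelope check of the US per-mile rates cited above.
fatalities = 40_000         # approximate US road deaths, 2016
injuries = 4_600_000        # approximate US road injuries, 2016
miles_driven = 3.2e12       # approximate US vehicle-miles travelled, 2016

per_100m = 100_000_000      # rates are conventionally quoted per 100 million miles
print(f"Fatalities per 100 million miles: {fatalities / miles_driven * per_100m:.2f}")  # ~1.25
print(f"Injuries per 100 million miles:   {injuries / miles_driven * per_100m:.0f}")    # ~144
```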

Statistics such as these would appear to support the adoption of fully automated vehicles sooner rather than later. After all, robots do as they are programmed to and seldom make mistakes… but just how good are self-driving cars at the moment?


In 2016 a self-driving car failed about every 3 hours in California

Every January, carmakers testing self-driving cars in California must report how many times their vehicles malfunctioned during the preceding year. These so-called disengagement reports record every time a human safety driver had to take control of the car, either because of a hardware or software failure or because the driver anticipated a problem.

[Figure: 2016 disengagements and autonomous miles by company, compiled by Mark Harris from information provided by the California DMV]

It’s important to note that none of the 2,578 disengagements experienced by the nine companies that carried out road-testing in 2016 resulted in an accident.

Waymo has by far the biggest test program, its 635,868 miles of testing accounting for over 95 percent of all the miles driven, and its fleet of 60 self-driving cars out-performed all competitors with a total of 124 disengagements, 51 of them due to software problems. This represents 0.2 disengagements for every 1,000 miles, a sharp reduction from 2015, when the company recorded 0.8.

At the other end of the scale, Bosch reported over 1,400 disengagements while covering just 983 miles in three vehicles, equivalent to 1,467 disengagements for every 1,000 miles of driving. But that doesn’t mean Waymo’s cars are 8,000 times safer than Bosch’s, as every company has its own way of determining the disengagement statistics.
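The per-1,000-mile figures above are simply raw disengagement counts normalised by miles driven. A minimal sketch of that normalisation follows; note that the Bosch count of roughly 1,442 is inferred from the 1,467-per-1,000-mile rate, since the report only states "over 1,400":

```python
def disengagements_per_1000_miles(disengagements: int, miles: float) -> float:
    """Normalise a raw disengagement count to the per-1,000-mile rate used in DMV comparisons."""
    return disengagements / miles * 1000

# Waymo, 2016 California report: 124 disengagements over 635,868 autonomous miles
print(f"Waymo: {disengagements_per_1000_miles(124, 635_868):.2f} per 1,000 miles")   # ~0.20

# Bosch, 2016: roughly 1,442 disengagements over 983 miles (count inferred, see above)
print(f"Bosch: {disengagements_per_1000_miles(1_442, 983):.0f} per 1,000 miles")     # ~1,467
```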

For instance, Waymo does not count every single time the driver grabs the wheel to take control from the automated system, which it admits happens many thousands of times annually. Instead, the company later simulates what would have happened if the human had not intervened, and only reports disengagements where the car would have done something unsafe. It calculates that if its drivers had taken no action at all, nine disengagements in 2016 would have led to the car hitting an obstacle or another road user. That is down from 13 the previous year, despite covering 50 percent more miles.

The other problem with comparing disengagement rates is that different companies are using California’s testing permits for a wide range of programs. Only Waymo and General Motors’ Cruise Automation have extensive, general-purpose testing programs. In its first year on the state’s roads, Cruise’s two dozen cars went from covering less than 5 miles in June 2015 to over 2,000 miles in September 2016. Its disengagement rate also plummeted over the same period, from over 500 to under 3 per 1,000 miles.

Of particular interest is Tesla’s disengagement report: In 2015, the company reported no disengagements at all, suggesting that it either carried out no public testing in California or that its cars were flawless. In 2016, after switching to Autopilot V2, its report declared 182 disengagements over 550 miles of autonomous driving.

However, all but a handful of those disengagements happened in just four cars over the course of a single long weekend in October, possibly during the filming of a promotional video. 

Tesla has an added advantage over other manufacturers when it comes to gathering data from real-world testing: it receives millions of miles of road data from thousands of Autopilot-equipped vehicles owned by its customers.

So while it is clear that growing real-world test mileage, together with the virtual testing carried out by manufacturers, is rapidly improving the safety of autonomous technology, there is still no clear and unambiguous answer to the question regulators, OEMs and suppliers are asking: How much testing is required to reasonably ensure safety? How many kilometers of driving would it take to make statistically significant safety comparisons between autonomous vehicles and those with human drivers?

Considering that current human-piloted vehicles have a history spanning more than a century and have covered billions of kilometers, autonomous vehicles would have to be test-driven hundreds of billions of kilometers within the next few years to confidently demonstrate their safety.

How much testing will guarantee safety? 

Arguably the best-in-class automated driving company, Waymo, has covered 5.6 million autonomous kilometers while testing in 20 different cities, adding a further 4 billion simulated kilometers using virtual software environments reflecting real-world conditions.

According to Susan M. Paddock, senior statistician at RAND and co-author of the 2016 RAND Corporation study into self-driving transportation, the data gathered so far, although important, does not come close to the level of driving needed to calculate safety rates: even if autonomous vehicle fleets accumulated 16 million kilometers, it would still not be possible to draw statistical conclusions about safety and reliability.

For example, suppose a self-driving vehicle fleet had a fatality rate 20 percent lower than that of human drivers: substantiating this with 95 percent confidence would necessitate driving 8 billion kilometers, equivalent to driving every road in Texas nearly 16,000 times over. It would take a fleet of 100 vehicles, driving 24/7, around 225 years to deliver these results.
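RAND's report sets out the full methodology, but the order of magnitude can be reproduced with a back-of-the-envelope sketch: treat fatalities as a Poisson process, assume a human baseline of about 1.09 fatalities per 100 million miles (the figure RAND worked from, slightly below the 2016 rate quoted earlier) and an average fleet speed of 40 km/h, and ask how many miles must be accumulated before the upper 95 percent confidence bound on a fleet whose true rate is 20 percent lower would fall below the human baseline. This is an illustrative normal approximation, not RAND's exact calculation:

```python
def miles_to_demonstrate(improvement: float, confidence_z: float,
                         human_rate_per_mile: float) -> float:
    """
    Rough normal-approximation estimate of the miles needed before the upper
    confidence bound on an AV fleet's observed fatality rate (true rate =
    (1 - improvement) * human rate, Poisson counts) drops below the human rate.
    """
    av_rate = (1 - improvement) * human_rate_per_mile
    # Require: av_rate + z * sqrt(av_rate / miles) < human_rate, solved for miles
    margin = human_rate_per_mile - av_rate
    return av_rate * (confidence_z / margin) ** 2

human_rate = 1.09 / 100_000_000   # assumed baseline: ~1.09 fatalities per 100 million miles
miles = miles_to_demonstrate(improvement=0.20, confidence_z=1.645,
                             human_rate_per_mile=human_rate)
print(f"Miles needed:      {miles / 1e9:.1f} billion")        # ~5 billion miles
print(f"Kilometers needed: {miles * 1.609 / 1e9:.1f} billion")  # ~8 billion km
# 100 vehicles at an assumed 40 km/h average, around the clock: roughly 225-230 years
print(f"Fleet of 100 cars, 24/7: {miles * 1.609 / (100 * 40 * 24 * 365):.0f} years")
```

Different confidence levels, different assumed improvements and different statistical questions change the answer dramatically, which is why estimates vary so widely between studies.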

Concurring with RAND's findings, mobility researchers at the University of Michigan believe that for consumers to accept driverless vehicles, tests will need to prove with 80 percent confidence that they are 90 percent safer than human drivers. To reach that confidence level, test vehicles would need to be driven in simulated or real-world settings for 17.7 billion kilometers. Yet it would take nearly a decade of round-the-clock testing to reach just 3.2 million kilometers in typical urban conditions.

In an Mcity white paper published in May 2017, researchers described a new accelerated evaluation process that breaks down difficult real-world driving situations into modules that can be tested or simulated repeatedly. In this way, automated vehicles can be evaluated under a reduced set of the most challenging driving situations, condensing 480,000 to 160 million kilometers of real-world driving into 1,600 test kilometers.

To develop the four-step accelerated approach, the U-M researchers analyzed data from 40 million kilometers of real-world driving, collected by two U-M Transportation Research Institute projects: Safety Pilot Model Deployment and Integrated Vehicle-Based Safety Systems. Together, the projects involved nearly 3,000 vehicles and volunteers over the course of two years.

From that data, the researchers:

1. Identified events that could contain "meaningful interactions" between an automated vehicle and one driven by a human, and created a simulation that replaced all the uneventful miles with these meaningful interactions.

2. Programmed their simulation to consider human drivers the major threat to automated vehicles and placed human drivers randomly throughout.

3. Conducted mathematical tests to assess the risk and probability of certain outcomes, including crashes, injuries, and near-misses.

4. Interpreted the accelerated test results, using a technique called "importance sampling" to learn how the automated vehicle would perform, statistically, in everyday driving situations.

The accuracy of the evaluation was determined by conducting and comparing accelerated and real-world simulations. 
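The white paper should be consulted for the full statistical machinery, but the core idea of importance sampling can be shown with a toy sketch: deliberately over-sample rare, dangerous encounters during testing, then re-weight each outcome by the ratio of its real-world probability to its test probability, so that the estimated crash rate still reflects everyday driving. All of the scenario and crash probabilities below are invented for illustration; they are not figures from the Mcity study:

```python
import random

random.seed(0)

# Toy scenario mix: probability of encountering each situation per mile in the
# real world vs. in the accelerated test, and an assumed crash probability for
# the automated vehicle in each situation (all numbers are illustrative only).
scenarios = {
    #                  p_real   p_test   p_crash
    "benign cruising": (0.989,  0.10,    1e-9),
    "hard cut-in":     (0.010,  0.60,    1e-5),
    "emergency brake": (0.001,  0.30,    1e-4),
}

def accelerated_estimate(n_test_miles: int) -> float:
    """Estimate the real-world crash rate per mile from accelerated test miles,
    weighting each test mile by p_real / p_test (importance sampling)."""
    names = list(scenarios)
    test_probs = [scenarios[n][1] for n in names]
    total = 0.0
    for _ in range(n_test_miles):
        name = random.choices(names, weights=test_probs)[0]
        p_real, p_test, p_crash = scenarios[name]
        crashed = random.random() < p_crash
        total += (p_real / p_test) * crashed
    return total / n_test_miles

# Analytical real-world crash rate implied by the toy numbers above
true_rate = sum(p_real * p_crash for p_real, _, p_crash in scenarios.values())
print(f"True real-world crash rate:  {true_rate:.2e} per mile")
# The weighted estimate recovers approximately the same rate from far fewer
# dangerous-event observations than naive real-world driving would provide.
print(f"Importance-sampled estimate: {accelerated_estimate(2_000_000):.2e} per mile")
```

In this toy mix a dangerous encounter is sampled tens to hundreds of times more often in the test than on the road, which is exactly the mechanism that lets a few thousand test kilometers stand in for millions of everyday ones.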

Nonetheless, the answer to "How much testing is enough?" is ultimately "It depends!", as Bill Hetzel suggested in his book "The Complete Guide to Software Testing".

It depends on risk and safety: the safety of road users and the risk of missing faults, of incurring high failure costs, of losing credibility and market share. All of these suggest that more testing is better. However, it also depends on the risk of missing a market window and the risk of over-testing (doing ineffective testing).

So, while there might not be an answer to how much testing is needed to ensure the safety of self-driving vehicles, manufacturers will probably continue to measure success by the number of disengagements they record in the real world.

Sources:

Darrell Etherington; TechCrunch; Building the best possible driver inside Waymo's Castle; October 2017; https://techcrunch.com/2017/10/31/waymo-self-driving-castle/

Sue Carney; Phys.org; New way to test self-driving cars could cut 99.9 percent of validation costs; May 2017; https://phys.org/news/2017-05-self-driving-cars-percent-validation.html

Nidhi Kalra; RAND Corporation; Why It's Nearly Impossible to Prove Self-Driving Cars' Safety Without a New Approach; May 2016; https://www.rand.org/blog/2016/05/why-its-nearly-impossible-to-prove-self-driving-cars.html

Mark Harris; IEEE Spectrum; The 2,578 Problems With Self-Driving Cars; February 2017; https://spectrum.ieee.org/cars-that-think/transportation/self-driving/the-2578-problems-with-self-driving-cars
