The deadliest stage in self-driving development

When an autonomous Uber in Arizona failed to slow down, it fatally struck a 49-year-old woman.

Last week, the US National Transportation Safety Board (NTSB) released its preliminary report into the Uber self-driving crash that killed a woman in March.

The NTSB found that the car identified an object on the road seconds before the crash, but did not stop. The radar and lidar sensors on the modified Volvo XC90 SUV detected 49-year-old Elaine Herzberg about six seconds before the crash. The vehicle classified Herzberg first as an unknown object, then as a vehicle, and finally as a bicycle as she walked her bike across the street.

About a second before impact, the self-driving system determined that emergency braking was needed to avoid a collision. But Uber had disabled the Volvo's factory-equipped automatic emergency braking system to avoid clashes with its own technology, the report said.

Things got worse from there.

The NTSB also found that Uber's self-driving software had been configured not to apply its own emergency braking in situations that risked "erratic vehicle behaviour". This was done to provide a comfortable ride: too many false-positive detections (e.g. tree leaves, shrubs or plastic bags on the road) would trigger frequent emergency stops, which no passenger would tolerate.
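To make that trade-off concrete, here is a minimal sketch of threshold-based braking logic. It is purely illustrative – the Detection class, the 0.9 threshold and the timings are assumptions, not Uber's actual design – but it shows how raising the confidence bar to suppress phantom stops also suppresses braking for a real hazard whose classification keeps changing.

```python
# Hypothetical sketch of the trade-off described above, not Uber's
# actual code: a planner that suppresses emergency braking below a
# confidence threshold avoids phantom stops for leaves and bags, but
# the same threshold can swallow a genuine hazard.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str                # e.g. "pedestrian", "plastic_bag", "unknown"
    confidence: float         # classifier confidence in [0, 1]
    seconds_to_impact: float

def should_emergency_brake(d: Detection, min_confidence: float = 0.9) -> bool:
    """Brake only for confident, imminent threats.

    Raising min_confidence cuts false-positive stops (a comfort win)
    but delays or suppresses braking for real hazards (a safety loss).
    """
    return d.confidence >= min_confidence and d.seconds_to_impact < 2.0

# An object whose classification keeps flip-flopping never accumulates
# enough confidence to trigger the brakes:
flickering = Detection("unknown", confidence=0.55, seconds_to_impact=1.3)
print(should_emergency_brake(flickering))  # False: the system waits
```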

So, instead, the company relied on the backup driver to intervene at the last minute to avoid disaster. That did not happen.

Rethinking Level 3 and conditional automation

Most self-driving testing today requires human intervention. This is what's referred to as Level 3, or conditional automation – the stage of autonomous vehicle development I consider the most dangerous, because it involves handing vehicle control back to the backup driver in an emergency.

A few companies have already chosen to skip Level 3 and target the safer Level 4 (full autonomy within a defined operating domain).

In fact, I would argue that Level 3 should be explicitly prohibited on open roads. Having a human step into the control loop at the last possible moment is nothing short of a guaranteed disaster.

With both automatic emergency braking systems rendered unavailable in the Uber vehicle, the company was relying on the backup operator to intervene at a moment's notice to prevent a crash. This is problematic because passing control from car to human poses many difficulties, especially when the backup operator has zoned out. Video footage showed the operator looking down immediately before the crash. She braked only after the collision. Herzberg was killed.
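Some back-of-the-envelope arithmetic shows how little margin a last-minute handover leaves. The speed and takeover time below are assumed for illustration, not figures from the NTSB report.

```python
# Illustrative arithmetic only; the speed and takeover time here are
# assumptions, not figures from the NTSB report.

speed_kmh = 60                # assumed urban test-route speed
speed_ms = speed_kmh / 3.6    # ~16.7 metres per second

takeover_s = 3.0              # assumed time for a distracted human to
                              # look up, assess the scene and react

distance_m = speed_ms * takeover_s
print(f"~{distance_m:.0f} m travelled before the human can act")  # ~50 m
```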

Level 3 also gives drivers a false sense of security. In March, a Tesla driver was killed in a crash in California while his vehicle was running on Autopilot. In May 2016, a Tesla driver died when his car, also on Autopilot, crashed into a truck in Florida. These vehicles are designed to be driven by humans and assisted by self-driving technologies, not driven by computers under human supervision.

Regulatory intervention – the way forward

The NTSB report highlights not only the shortcomings of Uber's testing program, but also a failure in regulating tests on open roads.

A report published last year showed that the readiness of self-driving software varies widely across providers. Waymo's self-driving software was 5,000 times safer than Uber's, according to the report. This was measured by the rate of disengagements: incidents in which the automated system forces the backup driver to take control of the vehicle. Uber's rate was one disengagement per mile driven, while Waymo's was one disengagement every 5,128 miles.
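A quick check of those figures, with each rate expressed as miles per disengagement, recovers the headline ratio:

```python
# Back-of-the-envelope check of the figures quoted above. A
# disengagement rate is disengagements per mile; its inverse is
# miles per disengagement.

uber_miles_per_disengagement = 1       # roughly one disengagement per mile
waymo_miles_per_disengagement = 5_128  # one disengagement per 5,128 miles

ratio = waymo_miles_per_disengagement / uber_miles_per_disengagement
print(f"Waymo drove ~{ratio:,.0f}x farther per disengagement than Uber")
# -> roughly the "5,000 times" figure cited in the report
```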

The industry is self-regulating, and it is unknown how companies determine whether their technology is safe to operate on public roads. Regulators have also failed to provide the criteria for making such determinations.

While it is necessary to test the performance of self-driving software under real-life conditions, trials on open roads should not be about testing the safety of the systems. Safety should be comprehensively evaluated before the vehicles are allowed on public roads.

An appropriate course of action would be for regulators to devise a set of standardised tests and require companies to benchmark their algorithms on the same data sets.

Regulators should follow a graduated approach to certification. First, the self-driving system is evaluated in simulation environments, which builds confidence that it behaves safely. This is followed by real-world testing in confined environments (e.g. on closed test beds). Once the vehicles pass the benchmark tests, regulators can allow them on open roads, subject to safety conditions.
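As a rough illustration of how such sequential gates might be expressed, here is a minimal sketch; the stage names and pass thresholds are assumptions, not any actual regulatory standard.

```python
# A minimal sketch of sequential certification gates. The stage names
# and pass thresholds are illustrative assumptions, not a real standard.

STAGES = ["simulation", "closed_track", "open_road"]
THRESHOLDS = {"simulation": 0.99, "closed_track": 0.95, "open_road": 0.95}

def highest_cleared_stage(scores: dict) -> str:
    """Return the last stage passed; a stage can only be attempted
    once every earlier stage has been cleared."""
    cleared = "none"
    for stage in STAGES:
        if scores.get(stage, 0.0) >= THRESHOLDS[stage]:
            cleared = stage
        else:
            break  # no skipping ahead to later stages
    return cleared

# A vehicle that aced simulation but failed the closed track is not
# allowed anywhere near open roads:
print(highest_cleared_stage({"simulation": 0.995, "closed_track": 0.92}))
# -> "simulation"
```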

This tragic incident should be a catalyst for regulators to establish a strong and robust safety culture to guide innovations in self-driving technologies. Without this, autonomous vehicle deployment would go nowhere very fast.
