Google Self-Driving Cars Still Need Human Intervention

Between September 2014 and November 2015, Google’s autonomous vehicles in California experienced 272 failures and would have crashed at least 13 times if their human test drivers had not intervened.

Photo Caption: California regulators require self-driving car firms to report when humans had to take over from robot drivers for safety, though Google is giving only select data.

It’s important to note the hidden message here. These “safety anomaly” interventions did not, for the most part, lead to contacts when played back in simulation. When a human driver zones out, takes their eyes off the road, texts, or even briefly nods off, the result is not always a crash, and the same will be true of similar events in robocars. When an anomaly is detected, one presumes that independent (less capable) backup systems immediately take over. Because they are less capable, they might cause an error, but that should be quite rare.

As such, the 5,300 miles between anomalies, while clearly in need of improvement, may not be a bad number. Many humans have such an “anomaly” at least that often (it works out to roughly every 6 months of typical human driving). What matters is how often such anomalies would actually lead to a crash, and how severe that crash would be.
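
For a rough back-of-envelope check on that “every 6 months” figure, here is a minimal sketch; the assumed 12,000 miles per year of typical driving is my estimate, not a number from the report:

    # Back-of-envelope check on "about every 6 months of human driving"
    # (the 12,000 miles/year figure is an assumption, not from the report)
    miles_between_anomalies = 5300
    human_miles_per_year = 12000

    months = miles_between_anomalies / human_miles_per_year * 12
    print(f"~{months:.1f} months of typical driving between anomalies")  # ~5.3 months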

The report does not describe something more frightening: a problem with the system that the system itself does not detect. That is the sort of issue that could, in the worst case, lead to a dangerous “careen into oncoming traffic” style event. The “unexpected motion” anomalies may be of this class. (Since such an event would be a contact incident, we can conclude it is very rare, if it happens at all, in the modern car.) (While I worked on Google’s car a few years ago, I have no inside data on the performance of the current generations of cars.)

I have particular concern about the new wave of projects hoping to drive using machine learning and trained neural networks. Unlike with Google’s car and most others, the programmers of those vehicles have only a limited idea of how the neural networks are operating. It is harder to tell whether they are having an “anomaly,” though the usual things like hardware errors, processor faults and memory overflows are of course just as visible.

Other Self-Driving Car Companies

Google didn’t publish total disengagements, judging most of them to be inconsequential. Safety drivers regularly disengage for lots of reasons:

  Taking a break, swapping drivers or returning to base
  Moving to a road the car doesn’t handle or isn’t being tested on
  Any suspicion of a risky situation

The last is the most interesting. Drivers are told to take the wheel if anything dangerous is happening on the road, not just with the vehicle. This is the right approach: you don’t want to use the public as test subjects, and you don’t want to say, “let’s leave the car auto-driving and see what it does with that crazy driver trying to hassle the car or that group of schoolchildren jaywalking.” Instead, the approach is to play out the scenario in the simulator and see whether the car would have done the right thing.

Tesla reported zero disengagements, presumably because it does not classify what its vehicles do as an autonomous mode.

VW’s report is a bit harder to read, but it suggests 5,500 total miles and 85 disengagements, or roughly 65 miles per disengagement.

Google’s lead continues to be overwhelming. That shows up very clearly in the nice charts that the Washington Post made from these numbers.

How safe do we have to be?

If the target is something like the 100,000 miles or 250,000 miles between accidents that we estimate for human drivers, that’s still pretty hard to test. You can’t just take every new software build and drive it for a million miles (about 25,000 hours) to see whether it has fewer than 4 or even 10 accidents. You can, and will, test the car over billions of miles in the simulator, encountering every strange situation ever seen or imagined. Unlike a human, the car will probably perform flawlessly right up until it has an accident; if it doesn’t, that will be immediate cause for alarm and correction of the problem.
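
As a minimal sketch of why per-build road testing doesn’t scale, assuming an average speed of about 40 mph (my assumption) and the human miles-per-crash estimates mentioned above:

    # How long would a million test miles take, and how many crashes would
    # we expect to see at the human baseline rate?
    # (the 40 mph average speed is an assumption)
    test_miles = 1_000_000
    avg_speed_mph = 40
    human_miles_per_crash = (100_000, 250_000)  # rough estimates for human drivers

    print(f"{test_miles / avg_speed_mph:,.0f} hours of driving")  # 25,000 hours
    for miles in human_miles_per_crash:
        print(f"expected crashes at 1 per {miles:,} miles: {test_miles / miles:.0f}")
    # -> 10 crashes at 100,000 miles each, 4 at 250,000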

Makers of robocars will need to convince themselves, their lawyers and safety officers, their boards, the public and eventually even the government that they have met some reasonable safety goal.

Over time we will hopefully see even more detailed numbers on this. That is how we’ll answer this question.

This does turn out to be one advantage of the supervised autopilots, such as the one Tesla has released. Because it can count on Tesla owners to be the fail-safe for its autopilot system, Tesla is able to quickly gather a lot of data about the safety record of that system over a lot of miles, far more than can be gathered if you have to run the testing operation with paid drivers or your own unmanned cars. This ability to test could help the supervised autopilots get to good confidence numbers faster than expected.

Indeed, though I have often written that I don’t feel there is a good evolutionary path from supervised robocars to unmanned ones, this approach could prove my prediction wrong. For if Tesla or some other carmaker with lots of cars on the road is able to build an autopilot, and then observe that it never fails over several million miles, it might have a legitimate claim to having something safe enough to run unmanned, at least on the classes of roads and situations on which the customers tested it. That said, a car that does 10 million perfect highway miles is still not ready to bring itself to you, door to door, on urban streets, as Elon Musk claimed would happen soon with the Tesla.

Editor’s Note: This article was republished with permission from Brad Templeton’s Robocars blog.




About the Author

Brad Templeton is a developer of and commentator on self-driving cars. He writes and researches the future of automated transportation at Robocars.com.
Contact Brad Templeton: 4brad@templetons.com


