Google Self-Driving Car Exec Talks Fatal Tesla Crash
Because Tesla Autopilot is only semi-autonomous, a Google self-driving car executive says, it was the responsibility of Joshua Brown, who was killed when his Model S crashed while Autopilot was engaged, to remain cautious.
Google has finally broken its silence on the fatal Tesla crash that killed 40-year-old Joshua Brown. And it seems Google is blaming both Brown and Tesla for what tragically occurred.
In an interview with Bloomberg, John Krafcik, CEO of Google’s Self-Driving Car Project, called what happened on May 7, 2016 in Williston, Fla. “a tragedy.” But Krafcik also pointed out that Brown was “one of probably a hundred or so people who died that day in automotive fatalities, in the U.S. alone” and that the Tesla Model S involved in the accident “wasn’t a self-driving car.”
Here’s a portion of the interview in which Krafcik focuses on the semi-autonomous nature of Tesla Autopilot and on how humans come to trust it too much:
What can be learned from the Tesla fatality?
“Well, first of all, it’s a tragedy. I mean, Joshua Brown lost his life. A couple of key points, though. One is, he was one of probably a hundred or so people who died that day in automotive fatalities, in the U.S. alone. You know the statistics: 35,000 fatalities, up 7 percent from the year prior. Globally, it’s over 1.2 million. It’s as if a 737 was crashing every hour of every day all year. From a macro standpoint, it’s a very, very big problem.
“But we need to make sure we’re using the right language when we talk about what happened with that accident, because that wasn’t a self-driving car, what we refer to as an L4, or fully autonomous car. That was a car with traffic-aware cruise control and a lane-keeping function, an L2, where, for better or worse, it was the responsibility of the driver to be cautious. We, as humans, are fallible creatures. [The crash] confirms our sense that the route to full autonomy, though much harder, is the right route. And we’ve learned from experience what happens when you put really smart people with really clear instructions inside a car with capabilities like that Tesla one.
“Back in 2012 we had a technology that was very similar. We let Google employees test it, after lengthy training sessions imploring them to pay attention at all times. We wanted to see how they were interacting with the technology. After three months we saw enough to say this is definitely a problem. People would take their eyes off the road for some period, look down at their phones and start texting while in the driver’s seat. Turning around to the back to get their laptop because they needed to plug their phone in. Right? When you’re hurtling down the road at 60 miles an hour in a two-ton vehicle?
“That takes us to the fundamental conundrum of the L2 semi-autonomous solutions: As they get better and better, but not quite good enough for humans to zone out entirely, then risk increases. So we need to take the human out of the loop. With L4, which is our focus at Google, the idea is, you don’t need a steering wheel or controls because we’re going to take care of everything, and you just have to say, ‘I want to go to that destination,’ and the car will take you there.”
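Krafcik’s figures hold up to a quick back-of-the-envelope check. This purely illustrative snippet works only from the numbers he cites in the interview (35,000 U.S. fatalities a year, 1.2 million globally):

```python
# Sanity-check of the fatality statistics Krafcik cites in the interview.
us_annual = 35_000          # U.S. road deaths per year, as cited
global_annual = 1_200_000   # global road deaths per year, as cited

us_per_day = us_annual / 365
global_per_hour = global_annual / (365 * 24)

print(round(us_per_day))       # ~96, matching "a hundred or so people ... that day"
print(round(global_per_hour))  # ~137, roughly a typical 737's passenger load every hour
```

About 96 U.S. deaths per day and about 137 worldwide per hour, so both the “hundred or so people who died that day” line and the 737-every-hour comparison are consistent with his own numbers.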
Tesla has come under intense scrutiny following a series of crashes allegedly involving its Autopilot system. Consumer watchdog Consumer Reports urged Tesla to disable Autopilot, saying “consumers should never be guinea pigs for vehicle safety ‘beta’ programs.”
Tesla, however, said it will not disable Autopilot. In fact, Elon Musk’s Master Plan, Part Deux includes Tesla self-driving trucks and buses, both of which are in the early stages of development and should be unveiled sometime in 2017. And on an August 3, 2016 conference call, Musk talked about Tesla’s progress toward Level 4, fully autonomous driving, adding that Tesla will release something sooner than people think.
“What we’ve got will blow people’s minds, it blows my mind … it’ll come sooner than people think.”