Watch: NVIDIA Self-Driving Car Learns How to Become a Better Driver

After just 3,000 miles, NVIDIA's self-driving car uses its DAVENET deep-learning network to learn the rules of the road and become a better driver.


When compared to the 1.5 million miles autonomously driven by Google’s self-driving cars, the 3,000 miles NVIDIA’s self-driving car drove in one month seem like peanuts.

But if you watch the video above, there’s no doubt you’ll be impressed with what NVIDIA is doing.

NVIDIA has been working on self-driving cars that run its DAVENET deep-learning network (we’ll explain later). As you can see in the first 32 seconds of the above video, things were a little rough when the deep-learning car first hit the road: it ran over traffic cones, nearly hit trash cans, and got confused by several roads. Human intervention was often required.


But the point of the above video is to show just how far NVIDIA has come. After 3,000 miles of learning, the car appears to handle the roads, and the rain, much better.

So, how does NVIDIA’s self-driving car learn how to become a better driver? We’ll let NVIDIA explain:

Using the NVIDIA DevBox and Torch 7 (a machine learning library) for training, and an NVIDIA DRIVE PX self-driving car computer to process it all, our team trained a convolutional neural network (CNN) with time-stamped video from a front-facing camera in the car synced with the steering wheel angle applied by the human driver.
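To make the training setup concrete, here is a minimal, purely illustrative sketch of the idea: pair each camera frame with the human steering angle recorded at the same moment, then fit a model to predict angle from pixels. A tiny linear model stands in for the CNN, and the data is synthetic; none of the names or numbers below come from NVIDIA's code.

```python
# Illustrative sketch of end-to-end steering training (NOT NVIDIA's code):
# each "frame" is paired with the human steering angle recorded at the
# same timestamp, and a model learns to map pixels to angle.

import random

random.seed(0)

def make_frame(lane_offset):
    """Toy 4-pixel 'frame' whose values encode lane position, plus noise."""
    return [lane_offset + random.gauss(0, 0.01) for _ in range(4)]

# Time-stamped (frame, steering angle) pairs; the human "driver" in this
# toy world steers proportionally back toward the lane center.
data = []
for _ in range(200):
    offset = random.uniform(-1, 1)
    data.append((make_frame(offset), -0.5 * offset))

# One weight per pixel plus a bias, fit by stochastic gradient descent
# on squared steering error (a stand-in for CNN training).
w, b, lr = [0.0] * 4, 0.0, 0.05
for _ in range(300):
    for frame, angle in data:
        pred = sum(wi * xi for wi, xi in zip(w, frame)) + b
        err = pred - angle
        w = [wi - lr * err * xi for wi, xi in zip(w, frame)]
        b -= lr * err

# A drift to the right (positive offset) should now produce a left
# (negative) steering command, mimicking the human examples.
steer = sum(wi * xi for wi, xi in zip(w, make_frame(0.8))) + b
print(round(steer, 2))
```

The essential point the sketch captures is that no lane-detection logic is written anywhere; the corrective steering behavior falls out of fitting the human's recorded commands.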

We collected the majority of the road data in New Jersey, including two-lane roads with and without lane markings, residential streets with parked cars, tunnels and even unpaved pathways. More data was collected in clear, cloudy, foggy, snowy and rainy weather, both day and night.

Using this data, our team trained a CNN to steer the same way a human did given a particular view of the road, and evaluated it in simulation. Our simulator took videos from the data-collection vehicle and generated images that approximate what would appear if the CNN were instead steering the vehicle.
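The key idea in that evaluation step is closed-loop testing: rather than scoring the network frame-by-frame against the human, you let it steer and watch whether the car stays in its lane. A toy version of that loop, with made-up dynamics standing in for NVIDIA's simulator, might look like this:

```python
# Hypothetical closed-loop evaluation sketch (illustrative only): let a
# steering policy drive a toy car and measure how far it drifts from
# the lane center, instead of comparing it to the human frame-by-frame.

def simulate(policy, start_offset=0.9, steps=50):
    """Roll the policy forward; return the final lane offset."""
    offset = start_offset
    for _ in range(steps):
        angle = policy(offset)  # steering command from the model
        offset += angle         # toy dynamics: the command shifts position
    return offset

# A well-trained end-to-end policy behaves like the human examples:
# it steers back toward the center in proportion to the drift.
trained_policy = lambda offset: -0.5 * offset

final = simulate(trained_policy)
print(abs(final) < 0.01)  # prints True: the car converges to the center
```

Only once a model passes this kind of simulated closed-loop test does it make sense to load it onto real hardware, which is exactly the progression NVIDIA describes next.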

Once the trained CNN showed solid performance in the simulator, we loaded it onto DRIVE PX and took it out for a road test in the car. The vehicle drove along paved and unpaved roads with and without lane markings, and handled a wide range of weather conditions. As more training data was gathered, performance continually improved. The car even flawlessly cruised the Garden State Parkway.

Our engineering team never explicitly trained the CNN to detect road outlines. Instead, using the human steering wheel angles versus the road as a guide, it began to understand the rules of engagement between vehicle and road.

Impressive stuff. NVIDIA kicked off this project nine months ago to build on the DARPA Autonomous Vehicle (DAVE) research and create a robust system for driving on public roads. To learn more, check out NVIDIA’s research paper, “End to End Learning for Self-Driving Cars.”




About the Author

Steve Crowe is managing editor of Robotics Trends. Steve has been writing about technology since 2008. He lives in Belchertown, MA with his wife and daughter.
Contact Steve Crowe: scrowe@ehpub.com




