LIDAR Hacks: Fairly Unlikely Attacks on Self-Driving Cars

To attack a self-driving car's LIDAR system, the hacker needs to be physically near the car, and a solid object needs to be in front of it. As Brad Templeton explains, other sensors in self-driving cars can be compromised more easily.

(There are LIDAR designs which favour the strongest pulse or the last pulse, or even report multiple pulses. Multiple-pulse designs would probably be able to detect a spoof. Last-pulse designs, if any exist, could allow the creation of a false object behind a real object, or even hide the real object from view.)

The author also spoke of an attack from in front of the target LIDAR, such as from a vehicle further along the road. Such an attack, if I recall correctly, was not actually produced but is possible in theory. In this attack, you would shine a laser directly at the LIDAR. This is a much brighter pulse, and in theory it might strike the LIDAR’s lens from an angle yet be bright enough to generate reflections inside the lens which would be mistaken for the otherwise much dimmer return pulse. (This is similar to the bright spots known as “lens flare” in a typical camera shooting close to the sun.) In this case, you could create spots where there is nothing behind the ghost object, though it is unknown just how far in angle from the attack laser (which will appear as a real object) you could place the flare pulse. The flare pulse would have to be aimed quite precisely to hit the lens, and not hit other lenses if you want a complex object.

As noted, all this can do is create a ghost object, or create noise that temporarily blinds the LIDAR. The researchers also demonstrated a scarier (but less real) “denial of service” attack against the LIDAR processing software from the LIDAR’s manufacturer. This software’s job is to take the cloud of 3-D points returned by the LIDAR and turn it into a list of objects perceived by the LIDAR. Cars like Google’s and most others don’t use the standard software from the manufacturer.
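To make the processing step concrete, here is a minimal sketch of how a cloud of 3-D points might be grouped into objects. This is my own illustration using simple distance-based clustering, not the manufacturer's actual algorithm, and the `max_gap` threshold is an assumed parameter:

```python
# Hypothetical sketch: grouping LIDAR points into objects by proximity.
# NOT the manufacturer's algorithm, just an illustration of the idea.

def dist(a, b):
    """Euclidean distance between two 3-D points."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def cluster_points(points, max_gap=0.5):
    """Group points into objects: a point within max_gap metres of an
    existing cluster member joins that cluster; otherwise it starts one."""
    clusters = []
    for p in points:
        placed = False
        for cluster in clusters:
            if any(dist(p, q) <= max_gap for q in cluster):
                cluster.append(p)
                placed = True
                break
        if not placed:
            clusters.append([p])
    return clusters

# Two well-separated groups of points become two "objects".
cloud = [(0, 0, 0), (0.1, 0, 0), (5, 0, 0), (5.2, 0, 0)]
print(len(cluster_points(cloud)))  # 2
```

Real perception software is far more sophisticated (and typically tracks objects over time), but the basic job is the same: turn raw points into a short list of objects.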

The software they tested here was limited and could only track a fixed number of objects, I believe around 64. If there are 65 objects (which is actually quite a lot), it will miss one. This is a much more serious error called a false negative: there might be something right in front of you, and your LIDAR would see it, but the software would discard it, so it would be invisible to you. Thus, you might hit it if things go badly.

The attack involves creating so many ghost objects that they exceed this limit and overload the software. They demonstrated this, but again, only on fairly basic software that was not designed to be robust. Even software that can only track a fixed number of objects should detect when it has reached its limit, and report that, so the system can mark the result as faulty and take appropriate action.

As noted, most attacks, even an overwhelming number of ghost objects, would only cause the car to decide to brake for nothing. This is not that hard to do. Humans do this all the time, actually: braking by mistake, or for lightweight objects like tumbleweeds, animals dashing across the road, birds, or even mirages. You can make a perfectly functioning robocar brake by throwing a ball on the road. You can also blind humans temporarily with a bright flash of light or a cloud of dust or water. This attack is much more high-tech and complex, but amounts to the same thing.

It is risky to suddenly brake on the road, as the car behind you may be following too closely and hit you. This happens even if you brake for something real. Swerving is a bit more dangerous, and should normally only be done when there is very high confidence that the path being swerved into is clear. Still, it’s always a good idea to avoid swerving. But again, you could also trigger this with the low-tech example of a balloon thrown on the road.

It is possible to design a LIDAR to return the distance of the first return it sees (the closest object, after all) or the last (perhaps the most solid). Some may report the strongest, or report multiple returns. The Velodyne used by many teams reports only one return.

If a LIDAR reports the first, you can fake objects in front of the background object. If it reports the last, more frighteningly, you can fake an object behind the background and possibly hide the background (though that would be quite difficult and require making your fake object very large). If it reports the strongest, and you are not as worried about eye safety, you can always be the strongest, and put your fake object in either place.

About the Author

Brad Templeton · Brad Templeton is a developer of and commentator on self-driving cars. He writes and researches the future of automated transportation at

