What is Lidar and How Does it Help Robots See?

This in-depth analysis explains what Lidar is, how it works, how it helps robots see, and how these systems have made their way into humanoid robots.

Photo Caption: Lidar systems have found their way into humanoid robots, including the next-gen Atlas from Boston Dynamics. (Photo Credit: Boston Dynamics)

The environment in the air can also affect Lidar readings. Heavy fog and rain are documented to scatter or otherwise attenuate the emitted laser pulses. Higher-power lasers can help mitigate these effects, but they are poor solutions for smaller, mobile, or otherwise power-sensitive applications.
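The attenuation described above can be sketched with the Beer-Lambert law, where the returned power falls off exponentially with the two-way path length. The extinction coefficients below are illustrative assumptions for this sketch, not measured or vendor-specified values:

```python
import math

def received_power_fraction(range_m: float, alpha_per_m: float) -> float:
    """Fraction of emitted power surviving two-way atmospheric
    attenuation (Beer-Lambert law; geometric spreading ignored)."""
    return math.exp(-2.0 * alpha_per_m * range_m)

# Illustrative extinction coefficients (assumed values, per meter):
clear_air = 0.0001
heavy_fog = 0.03

for label, alpha in [("clear air", clear_air), ("heavy fog", heavy_fog)]:
    frac = received_power_fraction(100.0, alpha)
    print(f"{label}: {frac:.4f} of emitted power returns from 100 m")
```

Even this simplified model shows why fog is punishing: at 100 m, a modest extinction coefficient wipes out nearly all of the return signal, which is why brute-force laser power is the usual (and power-hungry) countermeasure.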

Another challenge with Lidar systems is the relatively slow refresh rate of spinning units. The refresh rate is limited by how fast the complicated optics can rotate; roughly 10 Hz (10 revolutions per second) is the practical maximum, and this caps the refresh rate of the data stream. A car moving at 60 miles per hour travels 8.8 feet in the 1/10th of a second it takes the sensor to complete a rotation, so the sensor is essentially blind to changes that happen over those 8.8 feet. Perhaps more importantly, the range of Lidar (in perfect conditions) is 100–120 meters (less than 400 ft), which equates to less than 4.5 seconds of travel time for a car moving at 60 mph.
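The arithmetic above is easy to verify; the unit conversion is exact, and the speeds, ranges, and refresh rate come straight from the figures in the paragraph:

```python
def blind_distance_m(speed_mph: float, refresh_hz: float) -> float:
    """Distance the vehicle travels between successive Lidar sweeps."""
    speed_mps = speed_mph * 0.44704  # miles per hour -> meters per second
    return speed_mps / refresh_hz

def reaction_window_s(range_m: float, speed_mph: float) -> float:
    """Time until the vehicle reaches an object first seen at max range."""
    return range_m / (speed_mph * 0.44704)

print(blind_distance_m(60, 10))    # ~2.68 m, i.e. about 8.8 feet
print(reaction_window_s(120, 60))  # ~4.5 s at the optimistic 120 m range
```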

Perhaps the largest challenge for Lidar to overcome is the high cost of the device. Although cost has decreased dramatically since the technology was introduced, it remains a significant barrier to adoption. For the mainstream automotive industry, a $20,000 sensor is not going to be accepted by the market. As Elon Musk has said: “I just don’t think it makes sense in a car context. I think it’s unnecessary.”

Finally, although we consider Lidar a computer vision component, its point cloud representations are based purely on geometry. The human eye, in contrast, uses other physical properties such as color and texture in addition to shape. A Lidar system today can’t tell the difference between a paper bag and a rock, a distinction that should factor into how the system interprets and tries to avoid an obstacle.

The Opportunities

There are still many opportunities for Lidar within the intelligent machine ecosystem. Compared with 2D images, a point cloud is much easier for a computer to use when building 3D representations of the physical environment. While 2D images are the most easily digestible data for human brains, point clouds are the easiest for computer brains to interpret.
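To see why point clouds are so computer-friendly, note that a cloud is just a list of (x, y, z) samples, so spatial questions like “how far is the closest return?” reduce to plain arithmetic with no image interpretation at all. The coordinates below are made-up values for illustration:

```python
# A point cloud: N samples of (x, y, z) in meters, sensor at the origin.
cloud = [
    (2.0, 0.1, 0.0),   # hypothetical returns for this sketch
    (5.0, -1.2, 0.3),
    (1.5, 0.0, 1.8),
]

def nearest_range(points):
    """Straight-line distance to the closest return."""
    return min((x * x + y * y + z * z) ** 0.5 for x, y, z in points)

print(f"closest return: {nearest_range(cloud):.2f} m")
```

Extracting the same distance from a 2D camera image would require estimating depth first, which is exactly the hard problem the Lidar sensor solves in hardware.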

Scanse has released a $250 2D Lidar scanner called “Sweep” that can be used outdoors and is designed for mobile, low-power applications. At nearly a quarter of the cost of competitors, it will enable fundamentally new applications for these sensors (a phenomenon we have seen with many other types of sensors as well). The 2D Lidar can also be attached to a second rotary element to generate complete 3D point clouds of environments.
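The 2D-plus-rotary-element trick works because each sweep at a known tilt angle sweeps out a cone; converting range-per-azimuth samples to Cartesian points is a spherical-to-Cartesian projection. This is a minimal sketch of that geometry, not Scanse's actual API (the function and parameter names are hypothetical):

```python
import math

def scan_to_points(ranges, tilt_deg):
    """Convert one 2D sweep (a range reading per evenly spaced azimuth)
    taken at a fixed tilt of the rotary mount into 3D Cartesian points."""
    tilt = math.radians(tilt_deg)
    points = []
    n = len(ranges)
    for i, r in enumerate(ranges):
        az = 2 * math.pi * i / n  # azimuth angle of this sample
        # Project the range onto the tilted plane, then lift by the tilt.
        points.append((
            r * math.cos(tilt) * math.cos(az),
            r * math.cos(tilt) * math.sin(az),
            r * math.sin(tilt),
        ))
    return points

# One layer of a 3D cloud: a sweep of 3 m returns at a 15-degree tilt.
layer = scan_to_points([3.0, 3.0, 3.0, 3.0], tilt_deg=15.0)
```

Stepping the tilt angle between sweeps and concatenating the layers yields the complete 3D point cloud the article mentions.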

The Scanse Sweep is available for pre-sale until April 11.

Other companies are pursuing different strategies for lowering system cost, such as Quanergy’s solid-state Lidar. The system is in principle the same as described above; however, instead of using spinning optics to steer many beams, it uses phased-array optics to steer the direction of the laser pulses. As a result, the system can emit one laser pulse in one direction and aim the next pulse (a microsecond later) somewhere else in the field of view. This allows real-time focusing on areas where something appears to be moving, analogous to how a human driver focuses attention on an obstacle about to enter the roadway. The Quanergy system is designed to do this with no mechanical motion at all, allowing it to sample around a million data points per second, on par with 64-channel spinning Lidar counterparts but at a fraction of the cost. An added benefit is that these sensors are more easily integrated with other components of the automobile, such as mirrors and bumpers.
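The per-pulse aiming described above enables a scheduling step that spinning optics can't perform: spend the pulse budget on sectors where motion was detected. This is a toy model of that idea, not Quanergy's actual algorithm; the motion scores and direction grid are hypothetical:

```python
def schedule_pulses(directions, motion_scores, budget):
    """Pick the next `budget` beam directions, favoring sectors where
    frame-to-frame change (motion_scores) is highest. A toy sketch of
    how a solid-state Lidar could re-aim every pulse independently."""
    ranked = sorted(directions,
                    key=lambda d: motion_scores.get(d, 0.0),
                    reverse=True)
    return ranked[:budget]

directions = list(range(0, 360, 10))         # coarse azimuth grid, degrees
scores = {90: 0.9, 100: 0.7, 270: 0.1}       # hypothetical motion estimates
print(schedule_pulses(directions, scores, 3))  # [90, 100, 270]
```

A spinning sensor must wait up to a full revolution to revisit a sector; the scheduler above can revisit a hot sector on the very next pulse.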

Photo Caption: Prototype Quanergy Lidar system (Photo Credit: Quanergy)

On the other end of the scale, larger and higher-power systems are being developed that can image the ground from an aircraft flying at 30,000 feet, with resolution good enough to see vehicles on the ground. While these systems will be lower in demand and higher in cost, developments on this front will continue to lower the cost of the sensor technology as a whole.

Conclusions

Lidar is only one of the many sensors used to give computers data about the physical environment, but the data it produces is some of the easiest for a computer to interpret. And the sensors are getting cheaper, too. According to Velodyne director of sales and marketing Wolfgang Juchmann, the cost of Lidar has decreased 10-fold in the past 7 years. We are continually seeing new areas of potential application due to these price reductions.

In future articles, we will discuss some of the other advances in intelligent machine technologies that are driving this new industrial revolution.


