MIT’s Cutting-Edge Approach to Robot Control for DARPA Finals

MIT didn't win the DARPA Robotics Challenge Finals, but the team still feels victorious after developing and testing its cutting-edge algorithm to control robots.

Photo Caption: A still from a video of MIT's first DARPA Robotics Challenge run, in which the team scored 7 points.

Balancing act

Unlike the higher-level planner, the lower-level control algorithm can't afford to ignore the forces acting at individual points of contact. Early on, Tedrake set the ambitious goal of a system that could evaluate information from the robot's sensors and readjust the trajectories of its limbs 1,000 times a second — a rate of one kilohertz.

That sounds daunting, but as Tedrake explains, past a certain point, the high sampling rate actually becomes an advantage. One one-thousandth of a second allows so little time for circumstances to change that the imposition of new constraints usually occurs piecemeal. From one sensor reading to the next, the algorithm rarely has to meet more than one or two new constraints, which it can usually manage with just a small adjustment.
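The idea can be sketched in code. The following is a hypothetical, heavily simplified illustration (not MIT's actual controller): because constraints arrive one or two at a time between samples, each tick can warm-start from the previous command and make only a small projection onto the newly accumulated constraint set. The interval constraints and numbers here are invented for illustration.

```python
def project(x, constraints):
    """Clip a scalar command x so it satisfies simple interval constraints (lo, hi).

    Stand-in for the small adjustment a real controller would compute;
    at 1 kHz, only a constraint or two is new per tick, so the fix is cheap.
    """
    for lo, hi in constraints:
        x = max(lo, min(hi, x))
    return x

def control_loop(ticks, new_constraints_per_tick):
    """Run a toy 1 kHz-style loop: warm-start from the last command each tick."""
    x = 0.0        # previous commanded value (warm start)
    active = []    # constraints accumulated so far
    for t in range(ticks):
        # Rarely more than one or two constraints appear between samples.
        active.extend(new_constraints_per_tick.get(t, []))
        x = project(x, active)  # small adjustment, not a from-scratch solve
    return x
```

The design point is the warm start: because so little changes in a millisecond, each tick's solution is already nearly feasible, and the per-tick work stays bounded.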

As one test of the kilohertz controller, members of the MIT team instructed their robot to dismount from the utility vehicle they’d been using to test its driving skills; once it had transferred all its weight to one foot, they started jumping up and down on the vehicle’s fenders. The robot maintained its balance.

Human factors

For several of the robot’s tasks, the MIT researchers exploited the fact that the contest allowed human operators to communicate with their robots — although their communication links would be erratic.

Although the robot has an onboard camera, its chief sensor is a laser rangefinder, which fires pulses of light in different directions and measures the time they take to return. This produces a huge cloud of individual points — some of which belong to the same objects, and some of which don’t. Resolving that point cloud into distinct objects is an extremely difficult task, which computer vision researchers have been wrestling with for decades. It would be almost impossible to perform in real time.
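The time-of-flight principle behind the rangefinder is simple to state in code. This is an idealized sketch (real lidar drivers handle calibration, timing offsets, and intensity): one-way range is half the round-trip time multiplied by the speed of light, and the firing direction turns that range into a 3-D point.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def pulse_to_point(round_trip_s, azimuth_rad, elevation_rad):
    """Convert one pulse's round-trip time and firing direction to an (x, y, z) point."""
    r = C * round_trip_s / 2.0  # one-way range in meters
    cos_el = math.cos(elevation_rad)
    return (r * cos_el * math.cos(azimuth_rad),
            r * cos_el * math.sin(azimuth_rad),
            r * math.sin(elevation_rad))
```

Sweeping the firing direction and repeating this conversion millions of times per scan is what produces the unstructured point cloud the article describes.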

So the MIT researchers built a library of generic geometric representations of objects the robot was likely to encounter — such as the fallen lumber whose removal was one of its tasks during the competition finals. The remote operator can look at an image captured by the robot's camera, select the appropriate object representation from the library, and superimpose it on the point cloud produced by the laser rangefinder. Then the operator clicks the track pad or mouse button twice to roughly indicate the ends of the objects in the image. Algorithms then automatically cluster points together according to the geometric models, picking out the individual objects that the robot will have to manipulate.
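The two-click idea can be illustrated with a minimal sketch — hypothetical code, not MIT's implementation: treat the operator's two clicks as the endpoints of, say, a piece of lumber, and group every point lying within some radius of the segment between them as one object.

```python
import math

def dist_point_to_segment(p, a, b):
    """Distance from 3-D point p to the line segment from a to b."""
    ab = tuple(bi - ai for ai, bi in zip(a, b))
    ap = tuple(pi - ai for ai, pi in zip(a, p))
    denom = sum(c * c for c in ab)
    t = 0.0 if denom == 0 else max(0.0, min(1.0, sum(x * y for x, y in zip(ap, ab)) / denom))
    closest = tuple(ai + t * ci for ai, ci in zip(a, ab))
    return math.dist(p, closest)

def cluster_object(cloud, click_a, click_b, radius):
    """Keep the points within `radius` of the clicked segment — one object's points."""
    return [p for p in cloud if dist_point_to_segment(p, click_a, click_b) <= radius]
```

A real system would go further — fitting the library's geometric model (a cylinder or box) to the selected points — but the clicks do the hard perceptual work of saying roughly where the object is.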

When the robot enters a new environment, its rangefinder readings can tell it where nearby objects are. But it doesn’t know which are safe to step on. So the MIT researchers also developed an interface that allows the robot’s operator to click on a graphical representation of the robot’s surroundings, identifying flat surfaces that offer secure footholds.

From the robot’s sensor readings, the algorithm automatically determines the extent of the safe areas, by locating the first significant changes in altitude. So if the operator clicks at a single point on an uncluttered floor, the interface highlights an expanse of space that extends outward from that point to the first obstacles the rangefinder registers. Similarly, if the operator clicks a single point on one step of a staircase, the algorithm highlights most of the rest of the step, but stops short of its edges.
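That outward growth is essentially a flood fill over a height map. Here is a minimal sketch of the idea (the grid, step threshold, and values are illustrative, not MIT's code): starting from the clicked cell, the region grows to neighboring cells until the height changes by more than a step threshold — the "first significant change in altitude."

```python
from collections import deque

def safe_region(height, start, max_step=0.05):
    """Flood-fill from the clicked grid cell; stop where height jumps by > max_step."""
    rows, cols = len(height), len(height[0])
    seen = {start}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in seen
                    and abs(height[nr][nc] - height[r][c]) <= max_step):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return seen
```

On a staircase, each step's tread is nearly flat, so a click on one step highlights that tread but the fill halts at the riser's height jump — matching the behavior the article describes.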

“If you look at what happened at DRC [DARPA Robotics Challenge], it was a lot of teleoperation, a lot of scripted pieces of movement, and then a human telling the robot which movement to execute in great detail,” says Emanuel Todorov, an associate professor of electrical engineering and computer science at the University of Washington. “Humans are smart, and at least for the time being, if you put them in the loop, they outperform the autonomous controllers Russ and others built. But eventually it’s going to turn the other way around, because these are complicated machines, and there’s only so much a human can figure out in real time. The approach that Russ was taking was in some sense the right approach. This is what robotics should look like five or 10 years from now.”

Reprinted with permission of MIT News.

About the Author

MIT News · MIT News is dedicated to communicating to the media and the public the news and achievements of the students at the Massachusetts Institute of Technology.

