Babies Helping Robots Get a Grip

The Carnegie Mellon University Robotics Institute thinks robots can learn a thing or two about manipulation from babies. Google has donated $1.5 million to the cause to test this concept on dozens of robots.

Photo: Carnegie Mellon University

Manipulation is one of the major challenges for robots. But scientists at the Carnegie Mellon University (CMU) Robotics Institute think robots can learn a thing or two about manipulation from babies.

To explore this idea further, CMU has received a three-year, $1.5 million “focused research award” from Google. A team of CMU scientists first presented this theory last fall at the European Conference on Computer Vision, showing that robots, like babies, “gained a deeper visual understanding of objects when they were able to manipulate them.”

Thanks to Google, CMU will now test this approach on dozens of robots, including one- and two-armed robots and drones. Abhinav Gupta, an assistant professor of robotics at CMU, says the manipulation shortcomings of today’s robots were apparent during the 2015 DARPA Robotics Challenge, which saw some of the world’s most advanced robots struggle to open doors or unplug cables.

“Our robots still cannot understand what they see, and their action and manipulation capabilities pale in comparison to those of a two-year-old,” says Gupta.

Related: How Babies Are Making Robots Smarter

One of the main goals of this project is to speed up the learning process. CMU says robots are slow learners, requiring hundreds of hours of interaction to learn how to pick up objects. And because robots have previously been expensive and often unreliable, researchers relying on this data-driven approach have long suffered from “data starvation.” CMU says scaling up the learning process will help address this data shortage.

“If you can get the data faster, you can try a lot more things: different software frameworks, different algorithms,” says Lerrel Pinto, a Ph.D. student in robotics in Gupta’s research group.

To this point, much of CMU’s manipulation work has been done using a two-armed Baxter robot with a simple, two-fingered manipulator. Using more and different robots, CMU says, will enrich manipulation databases. And once one robot learns something, that knowledge can be shared with all the others.

For decades, visual perception and robotic control have been studied separately. Visual perception developed with little consideration of physical interaction, and most manipulation and planning frameworks can’t cope with perception failures. Gupta predicts that allowing robots to explore perception and action simultaneously, like a baby, can help overcome these failures.




About the Author

Steve Crowe · Steve Crowe is managing editor of Robotics Trends. Steve has been writing about technology since 2008. He lives in Belchertown, MA with his wife and daughter.
Contact Steve Crowe: scrowe@ehpub.com





