MIT C-LEARN Helps Robots Learn, Share Skills
MIT CSAIL researchers have developed a Constraints Learning (C-LEARN) system that easily teaches robots a variety of tasks, including opening doors, transporting objects and extracting objects from containers. After a robot learns a new skill with this C-LEARN system, that knowledge can be automatically transferred to other robots.
One of the most popular YouTube videos of real robots features the comical failures at the 2015 DARPA Robotics Challenge (DRC) Finals, which had robots operating in a disaster scenario. Many observers were surprised to see humanoids toppling over while attempting simple things like opening a door.
But researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a teaching method that could make robots more adept at physical tasks when working in emergency situations.
Constraints Learning (C-LEARN) combines a knowledge base with demonstration learning, essentially providing a robot with information and then directions about how to use it.
One aspect is motion planning: the sequence of steps a robot must take to accomplish something in the physical world, such as traveling to a waypoint. C-LEARN combines machine learning and motion-planning techniques so that people who aren’t roboticists or programmers can teach a robot new tasks that transfer easily to other robots, according to a paper accepted to the IEEE International Conference on Robotics and Automation (ICRA), which begins May 29 in Singapore.
The approach bridges two common methods of programming motion in robotics: demonstrations, such as moving a manipulator by hand so that it grabs a cup, and motion-planning techniques such as optimization and sampling, in which a programmer specifies the task’s goal and any constraints on it.
The first method is often difficult to extend to other situations, while the second can be time consuming and requires an expert. With C-LEARN, however, people can teach a robot new tricks simply by furnishing it with some basic information about manipulating things, and then performing a single demonstration. The method also allows the machine to adapt to new situations, for instance if it encounters an obstacle, while using the same learned information.
The researchers tested the approach with Optimus, a 16 DOF two-armed humanoid bomb-disposal robot with Robotiq manipulators that moves around on a Husky Unmanned Ground Vehicle by Clearpath Robotics. It was given tasks such as moving a cylinder into a bucket, extracting cylinders from a box, opening a small door and carrying a tray.
After the robot is given information about how to grasp different objects, an operator uses a 3D interface to show the machine how to accomplish a certain task through a series of steps known as keyframes. By referring to the grasping information it has, the robot can automatically formulate its own motion plans, which the human operator can edit.
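The idea of turning a handful of demonstrated keyframes into a full motion plan can be sketched in a few lines. This is purely illustrative and not the authors' code: the names (`Keyframe`, `plan_trajectory`) and the use of simple linear interpolation are assumptions for the sketch; C-LEARN's actual planner reasons over geometric constraints inferred from the demonstration.

```python
# Illustrative sketch, not the C-LEARN implementation: demonstrated
# keyframes plus prior grasp knowledge yield an editable motion plan.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Keyframe:
    """A demonstrated waypoint for the end effector (hypothetical structure)."""
    position: Tuple[float, float, float]
    gripper_closed: bool  # drawn from prior knowledge about grasping the object

def plan_trajectory(keyframes: List[Keyframe],
                    steps_between: int = 5) -> List[Tuple[float, float, float]]:
    """Interpolate linearly between consecutive keyframes to form a plan.
    A real planner would instead run optimization or sampling subject to
    the constraints learned from the single demonstration."""
    path = []
    for a, b in zip(keyframes, keyframes[1:]):
        for i in range(steps_between):
            t = i / steps_between
            path.append(tuple(pa + t * (pb - pa)
                              for pa, pb in zip(a.position, b.position)))
    path.append(keyframes[-1].position)
    return path

# Toy demonstration: approach a cylinder, grasp it, carry it toward a bucket.
demo = [
    Keyframe((0.0, 0.0, 0.3), gripper_closed=False),  # approach
    Keyframe((0.4, 0.0, 0.1), gripper_closed=True),   # grasp
    Keyframe((0.4, 0.5, 0.4), gripper_closed=True),   # carry
]
plan = plan_trajectory(demo)
print(len(plan), plan[0], plan[-1])
```

In this toy version the "knowledge base" is reduced to the `gripper_closed` flag; the point is only that a sparse demonstration can be expanded into a dense trajectory that an operator could then inspect and edit.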
Optimus robot performing four test tasks autonomously. Motion plans for each keyframe are shown using still images of the trajectory, with color ranging from gray (initial position) to light blue (end position).
“This approach is actually very similar to how humans learn in terms of seeing how something’s done and connecting it to what we already know about the world,” author Claudia Pérez-D’Arpino, a PhD student, says. “We can’t magically learn from a single demonstration, so we take new information and match it to previous knowledge about our environment.”
The robot performed better when working with a human operator, who could correct small errors, than it did when acting fully autonomously.
“Autonomous execution produced a success rate of 87.5% on average across ten trials of each of the four tasks, while the shared autonomy method resulted in an overall success rate of 100%,” Pérez-D’Arpino writes in the paper along with collaborator Julie Shah, an MIT professor.
The researchers were also able to transfer the skills Optimus acquired to a simulated version of Atlas, the six-foot-tall, 400-pound, 28 DOF bipedal humanoid robot from Boston Dynamics that MIT fielded in the DRC. The hope is that C-LEARN will make robots better at responding to disasters and other emergencies, where every movement is currently remotely operated, as well as at working in advanced manufacturing and maintenance roles.
“Traditional programming of robots in real-world scenarios is difficult, tedious and requires a lot of domain knowledge,” Shah says. “It would be much more effective if we could train them more like how we train people: by giving them some basic knowledge and a single demonstration. This is an exciting step towards teaching robots to perform complex multi-arm and multi-step tasks necessary for assembly manufacturing and ship or aircraft maintenance.”