MIT ‘Beerbots’ Fetch Beers While You Sit on the Couch
One of the big challenges in getting robots to work together is that the human world is full of uncertainty.
These uncertainties were reflected in the team’s delivery task: among other things, the supply robot could serve only one waiter robot at a time, and the robots were unable to communicate with one another unless they were in close proximity. Communication constraints like these are a particular risk in disaster-relief or battlefield scenarios.
“These limitations mean that the robots don’t know what the other robots are doing or what the other orders are,” Anders says. “It forced us to work on more complex planning algorithms that allow the robots to engage in higher-level reasoning about their location, status, and behavior.”
Making the Micro More Macro
The researchers were ultimately able to develop the first planning approach to demonstrate optimized solutions for all three types of uncertainty such problems present: in the outcomes of actions, in sensing, and in communication.
Their key insight was to program the robots to view tasks much like humans do. As humans, we don’t have to think about every single footstep we take; through experience, such actions become second nature. With this in mind, the team programmed the robots to perform a series of “macro-actions” that each include multiple steps.
For example, when the waiter robot moves from the room to the bar, it must be prepared for several possible situations: The bartender may be serving another robot; it may not be ready to serve; or it may not be observable by the robot at all.
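That “go to the bar” behavior is a natural macro-action: many low-level steps bundled into one named behavior that terminates only when one of a few designated outcomes is observed. The sketch below is our own toy illustration, not the team’s planner; the class, function, and observation names are hypothetical.

```python
import random

class MacroAction:
    """A temporally extended action: repeat low-level steps until one of
    the designated terminal observations appears."""

    def __init__(self, name, step_fn, terminal_observations):
        self.name = name
        self.step_fn = step_fn  # executes one low-level step
        self.terminal_observations = set(terminal_observations)

    def run(self, state, max_steps=100):
        """Execute low-level steps until a terminal observation appears."""
        history = []
        for _ in range(max_steps):
            state, obs = self.step_fn(state)
            history.append(obs)
            if obs in self.terminal_observations:
                return state, obs, history
        return state, "timeout", history

def go_to_bar_step(position):
    """One low-level step of a hypothetical 'go to bar' behavior."""
    position += 1
    if position < 3:
        return position, "moving"  # intermediate observation, non-terminal
    # At the bar, one of the three situations from the article is observed.
    return position, random.choice(
        ["bartender_serving_other", "bartender_not_ready", "bartender_not_observable"]
    )

go_to_bar = MacroAction(
    "go_to_bar",
    go_to_bar_step,
    ["bartender_serving_other", "bartender_not_ready", "bartender_not_observable"],
)

state, outcome, history = go_to_bar.run(0)
```

The planner then reasons only about which macro-action to invoke and which terminal outcome occurred, not about every intermediate “moving” step.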
“You’d like to be able to just tell one robot to go to the first room and one to get the beverage without having to walk them through every move in the process,” Anders says. “This method folds in that level of flexibility.”
The team’s macro-action approach, dubbed “MacDec-POMDPs,” builds on previous planning models that are referred to as “decentralized partially observable Markov decision processes,” or Dec-POMDPs.
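For readers who want the formal shape: a Dec-POMDP is standardly defined by a set of agents, a set of world states, per-agent actions, a transition function, a single shared team reward, per-agent observations, and an observation function; a MacDec-POMDP additionally gives each agent a set of macro-actions to plan over. The Python sketch below is purely illustrative (the field names and the toy bar domain are ours, not the researchers’):

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class DecPOMDP:
    """Decentralized partially observable Markov decision process:
    agents, states, per-agent actions, dynamics, shared reward,
    per-agent observations, and an observation function."""
    agents: list              # agent indices
    states: list              # possible world states
    actions: dict             # primitive actions available to each agent
    transition: Callable      # (state, joint_action) -> next state
    reward: Callable          # (state, joint_action) -> shared team reward
    observations: dict        # observations each agent can receive
    observation_fn: Callable  # (joint_action, next_state) -> joint observation

@dataclass
class MacDecPOMDP(DecPOMDP):
    """Same underlying model, but planning happens over macro-actions:
    temporally extended behaviors with their own termination conditions."""
    macro_actions: dict = field(default_factory=dict)  # macro-actions per agent

# A toy two-waiter "bar" domain (hypothetical, for illustration only).
bar_domain = MacDecPOMDP(
    agents=[0, 1],
    states=["room", "bar"],
    actions={0: ["step", "wait"], 1: ["step", "wait"]},
    transition=lambda s, a: s,                      # placeholder dynamics
    reward=lambda s, a: 0.0,                        # placeholder reward
    observations={0: ["at_bar", "en_route"], 1: ["at_bar", "en_route"]},
    observation_fn=lambda a, s: ("en_route", "en_route"),
    macro_actions={0: ["go_to_bar", "deliver"], 1: ["go_to_bar", "deliver"]},
)
```

Planning over a handful of macro-actions per robot, rather than every primitive step, is what shrinks the search space enough to make the model usable on real robots.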
“These processes have traditionally been too complex to scale to the real world,” says Karl Tuyls, a professor of computer science at the University of Liverpool. “The MIT team’s approach makes it possible to plan actions at a much higher level, which allows them to apply it to an actual multi-robot setting.”
The findings suggest that such methods could soon be applied to even larger, more complex domains. Amato and his collaborators are currently testing the planning algorithms in larger simulated search-and-rescue problems with MIT Lincoln Laboratory, as well as in imaging and damage assessment on the International Space Station.
“Almost all real-world problems have some form of uncertainty baked into them,” says Amato. “As a result, there is a huge range of areas where these planning approaches could be of help.”
This article originally appeared on MIT News.