Tele-operated Robot Offers Immobile Patients More Autonomy
The robot translates the user's mental focus into action
By Robotics Trends' News Sources - Filed Nov 21, 2012

Researchers at the CNRS-AIST Joint Robotics Laboratory (a collaboration between France's Centre National de la Recherche Scientifique and Japan's National Institute of Advanced Industrial Science and Technology) are developing software that allows a person to drive a robot with their thoughts alone. The technology could one day give a paralyzed patient greater autonomy through a robotic agent or avatar, or allow for teleoperation of humanoid robots during disasters like the Fukushima meltdown.

The lab's main research topics include task and motion planning and control, reactive behavior control, and human-robot cooperation through a multimodal interface integrating a brain-computer interface (BCI), vision, and haptics. The team also conducts collaborative research with external institutes within Japanese and European projects.

In the example video (embedded below), the system is used to control an HRP-2 robot.

To control the robot, the operator concentrates their attention on a symbol displayed on a computer interface. An electroencephalography (EEG) cap outfitted with electrodes reads the electrical activity in their brain, and a signal processor interprets that activity as commands, which are then sent to the robot.
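A common way to implement this kind of interface, and a plausible reading of the setup described here, is to flicker each on-screen symbol at its own frequency and look for the matching peak in the EEG spectrum. The Python sketch below shows that decoding step under assumed values; the sample rate, flicker frequencies, and command names are illustrative, not the lab's actual parameters.

    import numpy as np

    # Illustrative sketch: each on-screen symbol flickers at its own
    # frequency, and the SSVEP response in the EEG peaks at the frequency
    # of the symbol the user attends to. All values here are assumptions.

    SAMPLE_RATE_HZ = 256
    COMMANDS = {6.0: "walk_forward", 8.0: "turn_left", 10.0: "turn_right"}

    def band_power(signal, freq_hz, bandwidth=0.5):
        """Mean spectral power of a 1-D signal within +/- bandwidth of freq_hz."""
        spectrum = np.abs(np.fft.rfft(signal)) ** 2
        freqs = np.fft.rfftfreq(signal.size, d=1.0 / SAMPLE_RATE_HZ)
        mask = np.abs(freqs - freq_hz) <= bandwidth
        return float(spectrum[mask].mean())

    def decode_command(eeg_window):
        """Pick the preset action whose flicker frequency shows the strongest response.

        eeg_window: array of shape (n_channels, n_samples), band-pass filtered.
        """
        averaged = eeg_window.mean(axis=0)  # crude spatial average over channels
        scores = {f: band_power(averaged, f) for f in COMMANDS}
        return COMMANDS[max(scores, key=scores.get)]

The decoded command string would then be forwarded to the robot's controller, which executes the corresponding preset motion.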

So far the system supports only pre-configured control: the robot performs preset actions such as walking forward or turning left or right. The robot's artificial intelligence, developed over several years at the lab, lets it perform more delicate tasks, such as picking up an object from a table, without step-by-step human input. In this scenario, object recognition software parses the robot's camera images, and the patient chooses one of the objects on the table by focusing their attention on it.

"Basically what you see is how with one pattern, called the SSVEP, which is the ability to associate flickering things with actions, it's what we call the affordance, means that we associate actions with objects and then we bring this object to the attention of the user and then by focusing their intention the user is capable of inducing which actions they would like with the robot, and then this is translated."

Object recognition software automatically detects and highlights the bottled water and canned drink in the robot's camera images, and by focusing on one of them the patient can command the robot to retrieve it. With training, the user can direct the robot's movements and pick up beverages or other objects in their surroundings.
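One way to picture this selection step, assuming a frequency-tagging scheme like the SSVEP approach quoted above, is to assign each highlighted object its own flicker frequency and match the decoded EEG peak against it. The sketch below is a minimal illustration; the helper names and frequencies are hypothetical, not the lab's implementation.

    from dataclasses import dataclass

    @dataclass
    class DetectedObject:
        label: str
        flicker_hz: float  # frequency assigned to this object's on-screen highlight

    def assign_flicker_frequencies(labels, base_hz=6.0, step_hz=2.0):
        """Tag each recognized object with a unique flicker frequency."""
        return [DetectedObject(label, base_hz + i * step_hz)
                for i, label in enumerate(labels)]

    def select_object(objects, attended_hz, tolerance=0.5):
        """Return the object whose flicker frequency matches the decoded SSVEP peak."""
        for obj in objects:
            if abs(obj.flicker_hz - attended_hz) <= tolerance:
                return obj
        return None

    # Example with the two drinks from the scene above: if the decoder
    # reports an 8 Hz peak, the canned drink is selected.
    objects = assign_flicker_frequencies(["bottled_water", "canned_drink"])
    choice = select_object(objects, attended_hz=8.0)
    print(choice.label if choice else "no selection")  # -> canned_drink

In a full system, the selected object would then be bound to a preset action, the affordance described above, which the robot carries out autonomously.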
