Applying Neuroscience to Robot Vision
Scientists attempt to replicate human-like vision, spatial perception, and object grasping in robots.
By Robotics Trends Staff - Filed May 19, 2011

After three years of intense work, members of a European research effort called EYESHOTS have made progress in controlling the interaction between vision and movement. As a result, they have designed an advanced three-dimensional visual system, synchronized with robotic arms, that could allow robots to observe and be aware of their surroundings, remember the contents of those images, and act accordingly.

For a humanoid robot to interact successfully with its environment and carry out tasks without supervision, these basic mechanisms, which are still not completely resolved, must first be refined, says Spanish researcher Ángel Pasqual del Pobil, director of the Robotic Intelligence Laboratory at Universitat Jaume I. His team validated the consortium's findings with a system built at the university in Castellón (Spain), consisting of a robot head with moving eyes integrated into a torso with articulated arms.

To build the computer models, the team drew upon knowledge of animal and human biology, bringing together experts in neuroscience, psychology, robotics, and engineering. The study began by recording the activity of monkey neurons involved in visuomotor coordination, since primates share our way of perceiving the world.

The first feature of the visual system that the project replicated artificially was so-called saccadic eye movement, a behavior related to the dynamic shifting of attention. According to Dr. Pobil: "We constantly change the point of view through very fast eye movements, so fast that we are hardly aware of it. When the eyes are moving, the image is blurred and we can't see clearly. Therefore, the brain must integrate the fragments as if it were a puzzle to give the impression of a continuous and perfect image of our surroundings."
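
As a rough illustration of that integration step, the sketch below (not the EYESHOTS code; the map size, patch size, and gaze points are assumptions made for the example) shows how image fragments captured at successive fixations can be pasted into a single scene memory, while anything grabbed mid-saccade is simply discarded.

```python
# A minimal sketch (not the EYESHOTS code): fragments captured at successive
# fixations are written into one egocentric scene memory, and frames grabbed
# mid-saccade are dropped. Map size, patch size, and gaze points are assumed.
import numpy as np

MAP_SIZE = 200   # side of the egocentric "memory map", in pixels (assumed)
PATCH = 40       # side of the foveal fragment captured per fixation (assumed)

def integrate_fixation(memory, gaze_xy, fragment, during_saccade=False):
    """Write a foveal fragment into the scene memory at the gaze location.

    Frames grabbed while the eyes are moving are discarded (a stand-in for
    saccadic suppression); only stable fixations contribute to the mosaic.
    """
    if during_saccade:
        return memory                      # blurred mid-saccade input is ignored
    x, y = gaze_xy
    memory[y:y + PATCH, x:x + PATCH] = fragment
    return memory

rng = np.random.default_rng(0)
scene_memory = np.zeros((MAP_SIZE, MAP_SIZE))

# Three fixations at different gaze points gradually build a composite image.
for gaze in [(10, 10), (60, 40), (120, 90)]:
    fragment = rng.random((PATCH, PATCH))  # stand-in for a cropped camera frame
    scene_memory = integrate_fixation(scene_memory, gaze, fragment)

print("fraction of the scene covered so far:", (scene_memory > 0).mean())
```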

From the neural data, the experts developed computer models of the section of the brain that integrates images with movements of both eyes and arms. This integration is very different from the approach normally taken by engineers and robotics experts. The EYESHOTS consortium set out to prove that when human beings make a grasping movement towards an object, the brain does not first have to calculate the object's coordinates.

As the Spanish researcher explains: "The truth is that the sequence is much more straightforward: Our eyes look at a point and tell our arm where to go. Babies learn this progressively by connecting neurons." These learning mechanisms have therefore also been simulated in EYESHOTS through a neural network that allows robots to learn how to look, construct a representation of their environment, retain the appropriate images, and use that memory to reach for objects even when they are momentarily out of sight.
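
A minimal sketch of that idea is shown below, under assumptions of our own (a toy planar two-joint arm, a simple gaze encoding, and a small hand-written network; none of this is the EYESHOTS implementation): the robot generates random arm movements, records where the eyes would fixate the hand, and trains a network to map the gaze signal directly to joint angles, so that at run time "looking at a point" is enough to drive the arm, with no explicit coordinate transformation.

```python
# A toy "look then reach" mapping learned from motor babbling. The 2-link arm,
# gaze encoding, and network sizes are illustrative assumptions, not the
# EYESHOTS system.
import numpy as np

rng = np.random.default_rng(1)
LINK1, LINK2 = 0.3, 0.25                 # assumed link lengths of a planar arm (m)

def forward_kinematics(joints):
    """Hand position of the toy planar arm for given joint angles."""
    q1, q2 = joints[:, 0], joints[:, 1]
    x = LINK1 * np.cos(q1) + LINK2 * np.cos(q1 + q2)
    y = LINK1 * np.sin(q1) + LINK2 * np.sin(q1 + q2)
    return np.stack([x, y], axis=1)

# --- Motor babbling: random arm movements yield (gaze, joints) training pairs
joints = rng.uniform([0.0, 0.2], [np.pi / 2, 2.5], size=(2000, 2))
hand = forward_kinematics(joints)
# "Gaze" here is just the direction and distance of the fixated hand, a crude
# stand-in for the eye pan/vergence signals used in the project.
gaze = np.stack([np.arctan2(hand[:, 1], hand[:, 0]),
                 np.linalg.norm(hand, axis=1)], axis=1)

# --- A small one-hidden-layer network trained by gradient descent ------------
W1 = rng.normal(0, 0.5, (2, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, (32, 2)); b2 = np.zeros(2)
lr = 0.05

for epoch in range(2000):
    h = np.tanh(gaze @ W1 + b1)          # hidden layer
    pred = h @ W2 + b2                   # predicted joint angles
    err = pred - joints
    # Backpropagate the mean-squared reaching error.
    gW2 = h.T @ err / len(gaze); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = gaze.T @ dh / len(gaze); gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

# After training, a gaze signal alone is enough to command the arm.
test_gaze = gaze[:5]
test_joints = np.tanh(test_gaze @ W1 + b1) @ W2 + b2
print("predicted joints:", np.round(test_joints, 2))
print("actual joints:   ", np.round(joints[:5], 2))
```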

"Our findings can be applied to any future humanoid robot capable of moving its eyes and focusing on one point. These are priority issues for the other mechanisms to work correctly," Pobil notes.

 
