How Robots Can Trick You Into Loving Them
WALL-E was just the beginning... Robots have hacked into our brains and are confusing our categorical programming.
By Maggie Koerth-Baker - The New York Times Magazine - Filed Sep 18, 2013

I like to think of my Roomba as cute and industrious. He makes noises while he cleans that make me feel as if he’s communicating with me, which contributes to the fact that I have assigned him a sex. He exists in a liminal space between animate and inanimate, but once he’s done cleaning my floors, I don’t mind putting him in the closet until I need his services again — he’s a rolling disc, after all.

Robosimian — a headless, quadrupedal disaster-response robot designed by engineers at NASA’s Jet Propulsion Laboratory — is a bit more useful than my Roomba, slightly more human-looking and a lot less cute: the C-3PO to my R2-D2. Robosimian can maneuver over rubble and through narrow corridors in order to, one day, rescue humans from peril. But its more difficult task will be forming some sort of bond with the E.M.T.’s and first responders who might use it. Robosimian will be more than just a tool, but not quite a colleague.

In the future, more robots will occupy that strange gray zone: doing not only jobs that humans can do but also jobs that require social grace. In the last decade, an interdisciplinary field of research called Human-Robot Interaction has arisen to study the factors that make robots work well with humans, and how humans view their robotic counterparts.

H.R.I. researchers have discovered some rather surprising things: a robot’s behavior can have a bigger impact on its relationship with humans than its design; many of the rules that govern human relationships apply equally well to human-robot relations; and people will read emotions and motivations into a robot’s behavior that far exceed the robot’s capabilities. As we employ those lessons to build robots that can be better caretakers, maids and emergency responders, we risk further blurring the (once unnecessary) line between tools and beings.

Provided with the right behavioral cues, humans will form relationships with just about anything — regardless of what it looks like. Even a stick can trigger our social promiscuity. In 2011, Ehud Sharlin, a computer scientist at the University of Calgary, ran an observational experiment to test this impulse to connect. His subjects sat alone in a room with a very simple “robot”: a long, balsa-wood rectangle attached to some gears, controlled by a joystick-wielding human who, hidden from view, ran it through a series of predetermined movements. Sharlin wanted to find out how much agency humans would attribute to a stick.

Some subjects tried to fight the stick, or talk it out of wanting to fight them. One woman panicked, complaining that the stick wouldn’t stop pointing at her. Some tried to dance with it. The study found that a vast majority assumed the stick had its own goals and internal thought processes. They described the stick as bowing in greeting, searching for hidden items, even purring like a contented cat.

When a robot moves on its own, it exploits a fundamental social instinct that all humans have: the ability to separate things into objects (like rocks and trees) and agents (like a bug or another person). Its evolutionary importance seems self-evident; typically, kids can do this by the time they’re a year old.

The distinction runs deeper than knowing something is capable of movement. “Nobody questions the motivations of a rock rolling down a hill,” says Brian Scassellati, director of Yale’s social robotics lab. Agents, on the other hand, have internal states that we speculate about. The ability to distinguish between agents and objects is the basis for another important human skill that scientists call “cognitive empathy” (or “theory of mind,” depending on whom you ask): the ability to predict what other beings are thinking, and what they want, by watching how they move.

“We make these assumptions very quickly and naturally,” Scassellati says. “And it’s not new, or even limited to the world of robotics. Look at animation. They know the rules, too. A sack of flour can look sad or angry. It’s all about how it moves.”

We’re hard-wired, in other words, to attribute states of mind to fellow beings — even dumb robots, provided they at least appear autonomous. But little things — how fast an agent is moving, whether it changes its movements in response to our own — can alter how we interpret what it’s thinking.

Elizabeth Croft, professor of mechanical engineering at the University of British Columbia, has done a study in which humans and robotic arms pass objects back and forth — a skill that would be important for a robot caregiver to get right. She has found that if a robot and a human reach for the same object simultaneously, and the robot never hesitates or varies its speed, people think the robot is being rude. When the robot makes little jerky motions and slows down, according to Croft, people actually describe this disembodied arm as considerate — maybe even a little shy.

But this built-in gullibility has its downsides for robots, too. It’s relatively easy to program a robot with behaviors that arouse our cognitive empathy, but this can create a dissonance in expectations once people figure out it’s not as smart as it appears. A paper by David Feil-Seifer, assistant professor of computer science at the University of Nevada, Reno, briefly describes a study wherein a group of autistic children figured out that their new talking, moving robot pal really only had a limited number of phrases and behaviors in its repertory. They “became disappointed” — one child even stated that the robot was “learning-disabled.” (This shouldn’t be unfamiliar — consider the widespread derision and disappointment inspired by Siri, Apple’s “intelligent personal assistant.”) The other problem is more philosophical.

“Our entire civilization is based on empathy,” Sharlin told me. “Societies are built on the principle that other entities have emotions.” What happens when we start designing technologies specifically to exploit the very backbone of society? You get things like the Japanese-made therapeutic robot Paro — not smart, but programmed to manipulate us into treating them nicely.

Designed to look like a fluffy baby harp seal, Paro isn’t intelligent in the Isaac Asimov sense. But it seems incredibly sociable, capable of eliciting caregiving and affection from elderly people in nursing homes and hospitals. Paro was created to give isolated people a social outlet — like, for example, Alzheimer’s patients, who can have trouble connecting with human visitors. People treat Paro like a pet, or a baby — responses they’d never have to a Roomba, much less to Robosimian. By all outward appearances, Paro — really just a well-programmed network of wires and fabric fluff — does need your love.

Is that manipulative? Is it delusional? Sharlin can understand why people might come to both conclusions, but he doesn’t think it’s an ethical problem. To him, perceived behavior is as good as real behavior, if the overall outcome is something positive. “For these people, time with Paro is often the best hour of their day,” he told me.

Unlike Paro, most of the “smart” tools that are part of our lives today aren’t fooling anyone. But that soon may change. And like any story about robots — from “A.I.” to “WALL-E” — this is really about us, not the machines. Thanks to Human-Robot Interaction research, whatever social skills we program into robots in the future will be illusory and recursive: not intelligence, but human ingenuity put to use to exploit human credulity. By using technology to fool ourselves into thinking that technology is smart, we might put ourselves on the path to a confounding world, populated with objects that pit our instincts against our better judgment.

Maggie Koerth-Baker is science editor at BoingBoing.net and author of “Before the Lights Go Out,” on the future of energy production and consumption.