Mercy For Robots? Experiment Tests How Humans Relate To Machines

Filed by KOSU News in Health.
January 28, 2013

In 2007, Christoph Bartneck, a robotics professor at the University of Canterbury in New Zealand, decided to stage an experiment loosely based on the famous (and infamous) Milgram obedience study.

In Milgram’s study, research subjects were asked to administer increasingly powerful electric shocks to a person pretending to be a volunteer “learner” in another room. The research subject would ask a question, and whenever the learner made a mistake, the research subject was supposed to administer a shock, each one slightly worse than the last.

As the experiment went on and the shocks increased in intensity, the “learners” began to clearly suffer. They would scream and beg the research subject to stop while a “scientist” in a white lab coat instructed the research subject to continue, and in videos of the experiment you can see some of the research subjects struggle with how to behave. The research subjects wanted to finish the experiment as they were told. But how exactly should they respond to these terrible cries for mercy?

Bartneck studies human/robot relations, and he wanted to know what would happen if a robot in a position similar to the “learner’s” begged for its life. Would there be any moral pause? Or would research subjects simply extinguish a machine pleading for its life, without any thought or remorse?

Treating Machines Like Social Beings

Many people have studied machine/human relations, and at this point it’s clear that without realizing it, we often treat the machines around us like social beings.

Consider the work of Stanford professor Clifford Nass. In 1996, he arranged a series of experiments testing whether people observe the rule of reciprocity with machines.

“Every culture has a rule of reciprocity, which roughly means, if I do something nice for you, you will do something nice for me,” Nass says. “We wanted to see whether people would apply that to technology: Would they help a computer that helped them more than a computer who didn’t help them?”

And so they placed a series of people in a room with two computers. The people were told that the computer they were sitting at could answer any question they asked. In half the experiments, that computer was incredibly helpful; in the other half, it did a terrible job.

After about twenty minutes of questioning, a screen appeared explaining that the computer was trying to improve its performance; was there any way the human could help? The humans were then asked to do a very tedious task that involved matching colors for the computer. Sometimes the help request appeared on the computer the human had been using; sometimes it appeared on the computer across the aisle.

“Now if these were people [and not computers],” Nass says, “we would expect that if I just helped you and then I asked you for help, you would feel obligated to help me a great deal. But if I just helped you and someone else asked you to help, you would feel less obligated to help them.”

What the study demonstrated was that people do in fact obey the rule of reciprocity when it comes to computers. When the first computer was helpful to people, they helped it way more on the boring task than the other computer in the room. They reciprocated.

“But when the computer didn’t help them, they actually did more color matching for the computer across the room than the computer they worked with, teaching the computer [they worked with] a lesson for not being helpful,” says Nass.

Very likely the humans involved had no idea they were treating these computers so differently. Their own behavior was invisible to them. Nass says that all day long our interaction with the machines around us — our iPhones, our laptops — is subtly shaped by social rules we aren’t necessarily aware we’re applying to non-humans.

“The relationship is profoundly social,” he says. “The human brain is built so that when given the slightest hint that something is even vaguely social, or vaguely human, in this case, it was just answering questions — it didn’t have a face on the screen, it didn’t have a voice. But given the slightest hint of humanness people will respond with an enormous array of social responses including, in this case, reciprocating and retaliating.”

So what happens when a machine begs for its life, explicitly addressing us as if it were a social being? Are we able to hold in mind that, in actual fact, this machine cares as much about being turned off as your television or your toaster? That the machine doesn’t care about losing its life at all?

Bartneck’s Milgram Study With Robots

In Bartneck’s study the robot — an expressive cat that talks like a human — sits side by side with the human research subject and together they play a game against a computer. Half the time the cat robot was intelligent and helpful, half the time not.

Bartneck also varied how socially skilled the cat robot was, “So if the robot would be agreeable, the robot would ask, ‘Oh could I possibly make a suggestion now?’ If it were not, it would say, ‘It’s my turn now. Do this!’”

At the end of the game, whether the robot was smart or dumb, nice or mean, a scientist authority figure modeled on Milgram’s made clear that the human needed to turn the cat robot off, and also made clear what the consequences would be: “They would essentially eliminate everything that the robot was, all of its memories, all of its behavior, all of its personality would be gone forever.”

In videos of the experiment you can clearly see a moral struggle as the research subject deals with the pleas of the machine. “You are not really going to switch me off, are you?” the cat robot begs, and the humans sit, confused and hesitating. “Yes. No. I will switch you off!” one female research subject says, and then doesn’t switch the robot off.

“People started to have dialogues with the robot about this,” Bartneck says, “Saying, ‘No! I really have to do it now, I’m sorry! But it has to be done!’ But then they still wouldn’t do it.”

There they sat, in front of a machine no more soulful than a hair dryer, a machine they knew intellectually was just a collection of electrical pulses and metal, and yet they paused.

And while eventually every participant killed the robot, it took them time to intellectually override their emotional queasiness. In the case of a helpful cat robot, it took around 35 seconds before they were able to complete the switching-off procedure. How long does it take you to switch off your stereo?

The Implications

On one level there are clear practical implications to studies like these. Bartneck says the more we know about machine/human interaction, the better we can build our machines.

But on a more philosophical level, studies like these can help to track where we are in terms of our relationship to the evolving technologies in our lives.

“The relationship is certainly something that is in flux,” Bartneck says. “There is no one way of how we deal with technology and it doesn’t change, it is something that does change.”

More and more, intelligent machines are integrated into our lives. They come into our beds, into our bathrooms. And as they do, and as they present themselves to us differently, both Bartneck and Nass believe our social responses to them will change. [Copyright 2013 NPR]
