The receding tide

“The short answer is no one really knows what kind of emotions people want in robots,” said Maja Mataric, a computer science professor at the University of Southern California. But scientists are trying to figure it out: Dr. Mataric was speaking last week at a conference on human-robot interaction in Salt Lake City.

There are signs that in some cases, at least, a cranky or sad robot might be more effective than a happy or neutral one.

At Carnegie Mellon University, Rachel Gockley, a graduate student, found that in certain circumstances people spent more time interacting with a robotic receptionist — a disembodied face on a monitor — when the face looked and sounded unhappy. And at Stanford, Clifford Nass, a professor of communication, found that in a simulation, drivers in a bad mood had far fewer accidents when they were listening to a subdued voice making comments about the drive.

“When you’re sad, you do much better working with a sad voice,” Dr. Nass said. “You don’t feel like hanging around with somebody who says, ‘Hi! How are you!’ ”

That illustrates the longer answer to the question of what humans want in their robots: emotions like those they encounter in other humans. “People respond to robots in precisely the same way they respond to people,” Dr. Nass said.
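If you squint, the design rule buried in these studies is mood matching: have the machine mirror the user’s affect rather than default to relentless cheerfulness. As a toy sketch of my own (nothing here comes from the article, and every name in it is made up), it might look like this:

```python
# A minimal sketch of the mood-matching idea suggested by Nass's driving
# study: mirror the user's estimated mood instead of always sounding upbeat.
# Illustrative only; Mood, VOICE_STYLE_FOR_MOOD, and pick_voice_style are
# hypothetical names, not from the article or any real system.

from enum import Enum


class Mood(Enum):
    HAPPY = "happy"
    NEUTRAL = "neutral"
    SAD = "sad"


# Match, don't cheer up: a sad user gets a subdued voice, per the finding
# that bad-mood drivers did better listening to a subdued voice.
VOICE_STYLE_FOR_MOOD = {
    Mood.HAPPY: "energetic",
    Mood.NEUTRAL: "neutral",
    Mood.SAD: "subdued",
}


def pick_voice_style(user_mood: Mood) -> str:
    """Return a voice style that mirrors the user's mood."""
    return VOICE_STYLE_FOR_MOOD[user_mood]


if __name__ == "__main__":
    print(pick_voice_style(Mood.SAD))  # -> "subdued"
```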

Well, for me, a chess game is a conversation of sorts. From my perspective, today’s off-the-shelf computer programs come awfully close to passing Turing’s test.

2 Comments

  1. So we want robots to be just as miserable as we are? Seems odd, but as long as the robot’s primary function is to be subservient to humans as a tool, it makes sense that designers should shoot for whatever emotional states best serve that end.

  2. I read the Times article this weekend. One thing that struck me about it was that the term ‘emotion’ was being used in a non-standard way. When the article says, “people want robots to have emotions,” it doesn’t mean that people want robots to have internal states that correlate with things like love, sadness, or joy. Instead, the idea is that robots should exhibit behaviors that mimic the behaviors of entities that have such states.

    Why? Well, because when people interact with robots they want to be able to predict what the robot is going to do, and the most highly evolved prediction mechanisms human beings have are those that work from observed evidence of mental states, that is, from public displays of emotion. So, “people want robots to have emotions” really means ‘people want robots to display cues that enable human beings to accurately predict robot behavior.’
