What Can You Teach a Robot? How Not to Be a Jerk
http://walkprod.blogspot.com/2014/05/what-can-you-teach-robot-is-how-not-be-ajerk.html
For some people, the only thing scarier than a robot is a rude robot. But what if you could teach robots how to be polite — even ethical?
Robots can be programmed to think and use absolute logic to make good decisions. They're best, though, when the task and desired outcomes are known. Variables, like human interaction, can be accounted for, but good luck getting a robot to intuit what we emotional humans are thinking or understand our intent. Some experts, though, believe robots can be trained to do the right thing.
AJung Moon has been working on developing robots that know right from wrong and can act accordingly. She's focused, in particular, on what she calls "robot ethics." She's so committed to the cause that her Twitter handle is actually @RoboEthics.
Moon, currently a University of British Columbia Ph.D. student studying human-robot interaction, and a team from the Open RoboEthics Initiative recently designed an ethical challenge using a Willow Garage PR2 robot, an elevator and some patient humans.
The idea was fairly simple: first, survey people to find out under what circumstances they would cede control of the elevator to a robot carrying either urgent or non-urgent mail. Moon presented people with four robot response options, which they graded on levels of acceptability. The four responses and actions the robot could provide were:
Yield by saying, “Go ahead. I will ride the next one,”
Do nothing and remain standing by the elevator,
Not yield, say, “I have urgent mail to deliver and need to ride the elevator. Please exit the elevator,” and take the elevator once the person exits, or
Engage in a dialogue by telling the person that it’s on an urgent mission and asking if they are in a hurry.
Overall, respondents said, "the most appropriate action chosen for the robot was to engage in dialogue with the person, and the least appropriate behavior was to take no action at all." In other words, no one wanted the robot to just stand there, looking weird. Humans also expect that if the robot has a non-urgent letter, it should always yield to people. Researchers used that data to program the Willow Garage robot and then filmed the results.
In the video above, the robot never moves quickly or pushes anyone out of the way. Instead, if it has urgent mail, it announces its need for the elevator, but if someone refuses to get out (the robot is so large that it and a person may not fit comfortably), it simply tells the person to go ahead and waits for the next one. Moon also programmed it to act inappropriately. So when the robot and a person using a wheelchair are both waiting for the elevator, the robot announces it has urgent mail and then bounds forward into the elevator, leaving a person with a disability pretty pissed off.
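The yielding behavior described above boils down to a small decision routine. Here is a minimal sketch of that policy; the function name, arguments and spoken phrases are illustrative, not taken from the actual PR2 code.

```python
def elevator_action(has_urgent_mail: bool, person_refuses_to_exit: bool) -> str:
    """Decide how a mail-delivery robot behaves at a shared elevator.

    Mirrors the survey-derived policy: with non-urgent mail the robot
    always yields; with urgent mail it states its need, but still defers
    if the person won't give way.
    """
    if not has_urgent_mail:
        # Respondents expected the robot to always yield for non-urgent mail.
        return "Go ahead. I will ride the next one."
    if person_refuses_to_exit:
        # Even on an urgent run, the robot never forces the issue.
        return "Go ahead. I will ride the next one."
    # Urgent mail and a cooperative person: explain the need and proceed.
    return "I have urgent mail to deliver and need to ride the elevator."
```

The notable design choice, reflecting the survey results, is that urgency only changes what the robot says, never whether it pushes past a person.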
Ultimately, the researchers found that there are no real hard-and-fast rules. The robot should, like people, assess the context of the situation and use communication and whatever other interaction is at its disposal to act appropriately. Moon presented her findings on the research blog Footnote1.
Even if robots can't be taught to distinguish right from wrong, they can fake it. Honda ASIMO designers recently explained to me how they taught the humanoid robot to pause and look at everyone seated around a table before serving them tea. That simple act makes ASIMO look like it understands social mores, when, in fact, it's simply running a routine.
Similarly, Moon programmed a robot to pause before grabbing something from a bowl if it sees that a person is also grabbing at the same time. Again, the robot doesn't understand that it's impolite to reach into the bowl when someone else's hand is already in there, but its programming and sensors can detect the other hand and know that, in that instance, it has to pull back and wait.
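That bowl-sharing behavior is a sensor loop rather than any real understanding of etiquette. A minimal sketch might look like the following, where `hand_detected` is a stand-in for whatever vision or proximity sensor the real robot uses; all names here are hypothetical.

```python
import time


def reach_into_bowl(hand_detected) -> str:
    """Politeness by sensing: pull back and wait while another hand is
    detected over the bowl, then reach in.

    `hand_detected` is a callable returning True while a person's hand
    is in the way -- a placeholder for the robot's actual sensor check.
    """
    while hand_detected():
        # The robot doesn't "know" this is polite; it simply waits
        # until the sensor signal clears before proceeding.
        time.sleep(0.1)
    return "grabbing"
```

The polite-looking pause emerges entirely from the wait loop: as soon as the detected hand withdraws, the robot reaches in as it always would.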
Robots may never understand right from wrong, but it does appear that they can be programmed to be less of a jerk.
[Editor's Note: After we published this post, AJung Moon engaged in a quick email Q+A with Mashable.]
Why do you think roboethics are so important?
Roboethics is a really important field of study because the application fields of robotics are much more diverse than ever before. With robots coming out of traditional work cells and into our homes, offices, battlefields, hospitals, etc., they are interacting with people and impacting our lives and society at large on a greater scale. So it's really important to consider the ethical, legal, and societal implications of the technology as we move forward.
What did you learn about robot response from this test?
This particular project was a proof of concept of the idea that stakeholder feedback can be directly used to influence design decisions for roboticists. We demonstrated the feasibility of using stakeholder feedback to design robot behaviors as a way to address one of the key challenges in roboethics: people don't agree on the same moral principles, because people's moral values and principles vary depending on which stakeholder group(s) they belong to and their cultural or religious background.
Did the results disappoint you?
The results didn't really disappoint me. The project itself was framed as a proof of concept, so the survey results that we used to train the algorithm are not representative of the general population. However, we learned that, within the limited pool of participants, people generally opted for robots to have dialogues to resolve conflicts over priority of access to elevators. This hints at the value of developing robots that can communicate with people better, such that robots can make appropriate decisions with the help of the people involved.
Can robots and humans work together if robots do not know right from wrong?
They absolutely can, depending on what context of 'working together' you are interested in. We already have robots that don't necessarily have any higher-level ethics algorithm implemented in them and that are already working with people in various contexts. The KIVA Systems robots used by Amazon are an example of automated guided vehicles (AGVs) working with people as part of a greater system. I think it's very, very hard to come up with robots that can truly know right from wrong, but with the right interactive features they can get help from agents that generally have a better sense of what is right and wrong (people), which is a much easier approach.
Can we trust robots to do the right thing?
This goes back to one of the key challenges discussed above: the right thing to do in a particular situation varies from person to person (e.g., even among our survey participants, there were discrepancies in terms of what people thought was the most or least appropriate action to take in the elevator-riding scenarios). Hence, I don't think it's possible for us to trust robots to always do the thing that you think is right. There are projects trying to implement the Rules of Engagement and Laws of War into robots such that they can make appropriate decisions within a military context. Even though the RoE and LoW are agreed-upon international rules, implementing these rules into a robotic system remains a challenge.
When did you get interested in robots?
I became interested in robotics in the first year of my undergraduate program (I went to the University of Waterloo for a bachelor's degree in mechatronics engineering). Part of the mechatronics engineering curriculum was to play with LEGO Mindstorms kits to build a robot that follows lines on the ground. I had a lot of fun with the project, but I also learned that the engineering training I was receiving could be used to build robots that can do catastrophic things as well as amazingly helpful things. So I became curious about robotics and the role engineers play in trying to build robots that have a positive impact on users and society.
What do you think of ASIMO?
ASIMO is cool! I haven't worked with it myself, but as far as I have heard, it's a wonderful research platform for roboticists in various subfields of robotics, such as HRI (human-robot interaction) and bipedal walking.