Robot inquisition keeps witnesses on the right track

MEMORY is a strange thing. Just using the verb "smash" in a question about a car crash instead of "bump" or "hit" causes witnesses to remember higher speeds and more serious damage. Known as the misinformation effect, it is a serious problem for police trying to gather accurate accounts of a potential crime. There's a way around it, however: get a robot to ask the questions.

Cindy Bethel at Mississippi State University in Starkville and her team showed 100 "witnesses" a slide show in which a man steals money and a calculator from a drawer, under the pretext of fixing a chair. The witnesses were then split into four groups and asked about what they had seen, either by a person or by a small NAO robot, controlled in a Wizard of Oz set-up by an unseen human.

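The team's control software isn't detailed, but a Wizard of Oz rig can be as simple as a console that lets the hidden operator trigger scripted questions through the robot's speech engine. Below is a minimal sketch using Aldebaran's NAOqi Python SDK; the robot's address and the question script are illustrative placeholders, not details from the study.

# Wizard of Oz console for a NAO robot (a sketch, not the study's actual software).
# Assumes Aldebaran's NAOqi Python SDK, which runs on Python 2.
from naoqi import ALProxy

ROBOT_IP = "192.168.1.10"   # placeholder address for the robot
PORT = 9559                 # NAOqi's default port

QUESTIONS = [               # hypothetical interview script
    "Can you tell me everything you remember seeing?",
    "What did the man take from the drawer?",
]

def run_interview():
    # Proxy to the robot's text-to-speech module
    tts = ALProxy("ALTextToSpeech", ROBOT_IP, PORT)
    for question in QUESTIONS:
        # The unseen operator paces the interview by pressing Enter,
        # so the robot delivers each scripted line without ad-libbing.
        raw_input("Press Enter to ask: %s" % question)
        tts.say(question)

if __name__ == "__main__":
    run_interview()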

Two groups - one questioned by a human, the other by the robot - were asked identical questions that introduced false information about the crime, mentioning objects that were not in the scene and then asking about them later. When the questions were posed by humans, witnesses' recall accuracy dropped by 40 per cent compared with those who received no misinformation, as they remembered objects that were never there. But the same misinformation delivered by the NAO robot had no such effect.

"It was a very big surprise," says Bethel. "They just were not affected by what the robot was saying. The scripts were identical. We even told the human interviewers to be as robotic as possible." The results will be presented at the Human-Robot Interaction conference in Tokyo next month.

Bilge Mutlu, director of the Human-Computer Interaction Laboratory at the University of Wisconsin-Madison, suggests that robots may avoid triggering the misinformation effect simply because we are unfamiliar with them and so do not pick up on their behavioural cues the way we do with people's. "We have good, strong mental models of humans, but we don't have good models of robots," he says.

The misinformation effect doesn't only affect adults; children are particularly susceptible, explains Deborah Eakin, the psychologist on the project. Bethel's ultimate goal is to use robots to help gather testimony from children, who tend to pick up on cues contained in questions. "It's a huge problem," Bethel says.

At the Starkville Police Department, a 10-minute drive from the university, officers want to use such a robotic interviewer to gather more reliable evidence from witnesses. The police work hard to avoid triggering the misinformation effect, says officer Mark Ballard, but even an investigator with the best intentions can let biases slip into the questions they ask a witness.

Children must usually be taken to a certified forensic child psychologist to be interviewed, which can be difficult if the interviewer works in another jurisdiction. "You might eliminate that if you've got a robot that's certified for forensics investigations, and it's tough to argue that the robot brings any memories or theories with it from its background," says Ballard.

The study is "very interesting, very intriguing", says Selma Sabanovic, a roboticist at Indiana University. She is interested to see what happens as Bethel repeats the experiment with different robot shapes and sizes. She also poses a slightly darker question: "How would you design a robot to elicit the kind of information you want?"

This article appeared in print under the headline "The robot inquisition"

It's all about how you say it

When a robot is providing new information, rather than helping people recall events (see main story), its rhetoric and body language can make a big difference to how well the message gets across.

Bilge Mutlu of the University of Wisconsin-Madison had two robots compete to guide humans through a virtual city. He found that the robot that used rhetorical language drew more people to follow it. For example, a robot saying "this zoo will teach you about different parts of the world" drew fewer followers than one saying "visiting this zoo feels like travelling the world, without buying a plane ticket". The work will be presented at the Human-Robot Interaction conference in Tokyo next month.