Originally shared by Daniel Estrada

> In a test lab, Bert2 -- a humanoid robot with three separate displays, allowing its eyes and mouth to express various emotions -- performed in three different ways. One was silent and made zero mistakes, while a second was mute and programmed to make a single blunder (which it would then correct, quietly). A third was able to speak and accept simple "yes" or "no" responses from the user. In a basic kitchen scenario, the vocal android would apologise for its mistakes -- after dropping an egg, for instance -- and give a heads-up when it was about to try a new technique.

Though it was the slowest, it was the robot most people preferred.

But here's where it gets interesting. At the end of the exchange, the robot would ask for a job. Some participants were reluctant to say no -- even if they preferred the silent, more efficient robot -- because they thought it would upset the machine. "It felt appropriate to say no, but I felt really bad saying it," one of the test participants said. "When the face was really sad, I felt even worse. I felt bad because the robot was trying to do its job."

More: https://www.engadget.com/2016/08/24/people-lie-robots-avoid-hurting-feelings/
Full article: https://arxiv.org/ftp/arxiv/papers/1605/1605.08817.pdf
via 李卓
