A good-looking robot is seriously hard to find. Robots can be pretty, and some even handsome. But as soon as they get too realistic, they start to creep us out. A new system that lets robots generate more realistic expressions might go some way towards changing that.
Many years ago, roboticists realised that as a robot's appearance is morphed from abstract towards human, there comes a point — the "uncanny valley" — where it looks realistic but not realistic enough, and people's affinity abruptly gives way to unease. Some say it's because such robots remind us of a corpse. However, research [PDF] has shown that if the robotic images are manipulated to look more attractive, this feeling of unease can be bypassed.
To create a robot we are more likely to accept, life-like expressions are vital. That’s why Nicole Lazzeri at the University of Pisa, Italy, and her colleagues have designed a “Hybrid Engine for Facial Expressions Synthesis” (HEFES) — a facial animation engine that gives realistic expressions to a humanoid robot called FACE.
FACE’s appearance is modelled on one of the team’s wives. “It’s really realistic,” says Lazzeri, who presented the work at BioRob in Rome last month. See for yourself in the video below.
To mimic the myriad expressions that facial muscles are capable of achieving, the team placed 32 motors around FACE’s skull and upper torso that manipulate its polymer skin in the same way that real muscles do.
To create expressions, they used combinations of motor movements based on the Facial Action Coding System (FACS) — a system created over 30 years ago that codes facial expressions in terms of anatomical muscle movements.
HEFES controls FACE's expressions. It is essentially a mathematical program that creates an "emotional space" from which a person can choose an expression lying anywhere between six basic emotions: anger, disgust, fear, happiness, sadness and surprise. The algorithm then works out which motors need to move to produce that expression, or to transition smoothly from one expression to another.
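The details of HEFES's implementation aren't given here, but the blending idea can be sketched in a few lines. In this hypothetical illustration, each basic emotion is a set of motor activations (the motor indices and values below are invented, not FACE's real calibration), a mixed expression is a weighted average of those sets, and a transition is a linear interpolation between two motor configurations:

```python
# Hypothetical sketch of expression blending in an "emotional space".
# Motor indices and activation values are invented for illustration;
# they are not FACE's actual calibration data.

NUM_MOTORS = 32  # FACE uses 32 motors around the skull and upper torso

# Each basic emotion maps a few motors to activation levels in [0, 1].
# Unlisted motors rest at 0. (Assumed values, loosely FACS-inspired.)
BASIC_EMOTIONS = {
    "happiness": {3: 0.9, 7: 0.8},   # e.g. lip-corner pullers
    "sadness":   {3: 0.1, 12: 0.7},  # e.g. brow lowerer
    "surprise":  {1: 0.9, 18: 0.8},  # e.g. brow raiser, jaw drop
}

def blend(weights):
    """Mix basic emotions by normalised weights into one motor command."""
    total = sum(weights.values())
    motors = [0.0] * NUM_MOTORS
    for emotion, w in weights.items():
        for motor, value in BASIC_EMOTIONS[emotion].items():
            motors[motor] += (w / total) * value
    return motors

def transition(start, end, steps):
    """Linearly interpolate between two motor configurations."""
    for i in range(steps + 1):
        t = i / steps
        yield [(1 - t) * a + t * b for a, b in zip(start, end)]

# An expression halfway between happiness and surprise:
mixed = blend({"happiness": 1.0, "surprise": 1.0})

# Smoothly move from sadness to happiness over 10 steps:
frames = list(transition(blend({"sadness": 1.0}),
                         blend({"happiness": 1.0}), 10))
```

In this sketch the "emotional space" is just the set of weight vectors over the six basic emotions; the real engine presumably adds constraints so that intermediate motor configurations always remain anatomically plausible.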
The team evaluated the accuracy of their expressions by asking five autistic and 15 non-autistic children to identify a set of expressions performed first by FACE and then by a psychologist. Both groups were able to identify happiness, anger and sadness, but they were less able to identify fear, disgust and surprise.
So is it more attractive? I’m not convinced. But FACE’s ability to smoothly transition between one emotion and another is pretty remarkable. And not too creepy.
New Scientist reports, explores and interprets the results of human endeavour set in the context of society and culture, providing comprehensive coverage of science and technology news.