Robot with binary code in the background

Researchers in Japan make android child’s face more expressive

Image credit: Pixabay

A trio of researchers at Osaka University have found a method for identifying and quantitatively evaluating facial movements on the head of their child android robot, named Affetto, to add rich nuance to the expressions on its face.

The first-generation model of the android was reported in a publication back in 2011; the researchers have now developed a system to make the second-generation Affetto more expressive, giving the android a greater range of emotion and, ultimately, enabling deeper interaction with humans.

The researchers investigated 116 different facial points on Affetto (shown below) to measure its three-dimensional movement; these facial points are underpinned by so-called deformation units.

The newly developed face of the Affetto child android robot.

Image credit: Osaka University

Each deformation unit consists of a set of mechanisms that create a distinctive facial contortion, for example, lowering or raising part of the lip or eyelid. These measurements were then fed into a mathematical model to quantify Affetto's surface-motion patterns.
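The article does not spell out the model the team used, but the general idea of quantifying a deformation unit's surface-motion pattern from measured 3D facial points can be sketched as follows. This is a minimal illustration, not the authors' method: the function name, the choice of displacement statistics, the PCA-based dominant direction and the randomly generated measurements are all assumptions.

```python
# A minimal sketch (not the authors' actual model) of how one deformation
# unit's effect on measured facial points might be summarised.
import numpy as np

def summarise_unit(rest: np.ndarray, actuated: np.ndarray) -> dict:
    """Summarise a deformation unit's surface motion.

    rest, actuated: (N, 3) arrays of 3D facial-point positions, here N = 116,
    measured with the unit relaxed and fully actuated respectively.
    """
    displacements = actuated - rest                  # motion of each point
    magnitudes = np.linalg.norm(displacements, axis=1)

    # Dominant direction of motion via PCA on the displacement vectors.
    centred = displacements - displacements.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    principal_direction = vt[0]                      # unit vector in 3D

    return {
        "mean_displacement": float(magnitudes.mean()),
        "max_displacement": float(magnitudes.max()),
        "principal_direction": principal_direction,
    }

# Hypothetical usage with randomly generated measurements:
rng = np.random.default_rng(0)
rest = rng.normal(size=(116, 3))
actuated = rest + rng.normal(scale=0.5, size=(116, 3))
print(summarise_unit(rest, actuated))
```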

“Surface deformations are a key issue in controlling android faces,” study co-author Minoru Asada explained. “Movements of their soft facial skin create instability, and this is a big hardware problem we grapple with. We sought a better way to measure and control it.”

Although the researchers encountered challenges in balancing the applied force and adjusting the synthetic skin attached to the android to give it human-like features, they were able to employ their system to adjust the deformation units for precise control of the robot’s facial surface motions.

“Android robot faces have persisted in being a black box problem: they have been implemented but have only been judged in vague and general terms,” first author of the study, Hisashi Ishihara, said. “Our precise findings will let us effectively control android facial movements to introduce more nuanced expressions, such as smiling and frowning.”

The android robot was created to represent a one-to-two-year-old child and has been used to study the early stages of human social development. The video below shows a facial motion test from when the android robot was first revealed in 2011.

 

There have been several previous attempts to study the interaction between child robots and people. However, the lack of a realistic childlike appearance and facial expressions has hindered human-robot interaction, resulting in caregivers not attending to the robots in a natural way.

The scientists at Osaka University hope to overcome this challenge by using the techniques described above to make the android share its emotions with a caregiver more successfully.

Some of the expressions that the first-generation Affetto could make to share its emotions with a caregiver in 2011.

Image credit: Osaka University

The study, ‘Identification and Evaluation of the Face System of a Child Android Robot Affetto for Surface Motion Design’, was published in Frontiers in Robotics and AI.
