Emotion Recognition and Human-Robot Interaction

Background: Affective computing is the study and development of systems that can recognize, interpret, and simulate human affect. One motivation for this research is the ability to foster mutual understanding and simulate empathy: the machine should interpret the emotional state of a human and adapt its behaviour, responding appropriately to that emotional state.

Objective: Our aim is to use a multi-sensor network to acquire movement, physical, and physiological data related to emotional displays, and to recognize the emotional state associated with a given set of data. In effect, we try to build a "physiological emotional image" that captures facial expressions, body posture, gestures, speech characteristics, and physiological signals such as skin temperature and galvanic skin resistance.
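
As an illustrative sketch only (not the laboratory's actual pipeline), the fusion step can be pictured as concatenating per-modality feature vectors into a single "physiological emotional image" vector and training a standard classifier on it. All feature names, dimensions, and the synthetic data below are hypothetical assumptions.

```python
# Hypothetical sketch: fusing per-modality features into one
# "physiological emotional image" vector, then classifying it.
# Feature dimensions and the synthetic data are illustrative only.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def fuse_modalities(face, posture, speech, skin_temp, gsr):
    """Concatenate per-modality feature vectors into one fused vector."""
    return np.concatenate([face, posture, speech, skin_temp, gsr])

# Synthetic training set: 100 samples, 4 emotion classes.
n_samples = 100
classes = ["happy", "sad", "angry", "neutral"]
X = np.stack([
    fuse_modalities(
        face=rng.normal(size=16),      # e.g. facial action unit activations
        posture=rng.normal(size=8),    # e.g. joint-angle summary features
        speech=rng.normal(size=12),    # e.g. prosodic features (pitch, energy)
        skin_temp=rng.normal(size=1),  # mean skin temperature
        gsr=rng.normal(size=2),        # galvanic skin resistance statistics
    )
    for _ in range(n_samples)
])
y = rng.choice(classes, size=n_samples)

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:3]))  # predicted emotional states for 3 samples
```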

Natural human-robot interaction

Physiological emotional image

Results: Facial expression and audio recognition are standard methods for emotion recognition. We integrated these methods with gesture recognition based on wearable IMU and EMG sensors.
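
As a hedged sketch of how wearable IMU and EMG streams might feed a gesture classifier (the window length, channel counts, features, and classifier here are assumptions, not the laboratory's published method): segment each stream into windows and compute simple time-domain features such as the EMG mean absolute value and the per-axis IMU RMS.

```python
# Hypothetical sketch: time-domain features from windowed IMU and EMG
# signals for gesture recognition. Window length, channel counts, and
# the k-NN classifier are illustrative assumptions.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def window_features(imu, emg):
    """imu: (n, 6) accel+gyro window; emg: (n, 8) EMG window."""
    rms = np.sqrt(np.mean(imu ** 2, axis=0))  # per-axis IMU RMS
    mav = np.mean(np.abs(emg), axis=0)        # EMG mean absolute value
    # EMG zero-crossing counts per channel
    zc = np.sum(np.abs(np.diff(np.signbit(emg).astype(int), axis=0)), axis=0)
    return np.concatenate([rms, mav, zc])

rng = np.random.default_rng(1)
gestures = ["wave", "point", "clap"]

# Synthetic dataset: 60 windows of 200 samples each.
X = np.stack([
    window_features(rng.normal(size=(200, 6)), rng.normal(size=(200, 8)))
    for _ in range(60)
])
y = rng.choice(gestures, size=60)

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(clf.predict(X[:2]))  # predicted gestures for 2 windows
```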

Human-robot emotional creative interaction

Laughter recognition
