Last updated 2002.12.1 Sun.

Human-like Head Robot WE-4

1. Objective
We have been developing human-like head robots in order to develop new head mechanisms and functions for a humanoid robot that can communicate naturally with humans by expressing human-like emotion. In 2002 we developed a new human-like head robot, "WE-4" (Waseda Eye No. 4), which has four sensations and expresses its emotions facially.

2. Hardware Overview
Fig. 1 and Fig. 2 present the hardware overview of the human-like head robot WE-4. It has 29-DOF (Waist: 2, Neck: 4, Eyeballs: 3, Eyelids: 6, Eyebrows: 8, Lips: 4, Jaw: 1, Lung: 1) and many sensors that serve as sense organs (visual, auditory, cutaneous and olfactory) for external stimuli. Table 3 shows its weight. The following are descriptions of each part.

Fig. 1 WE-4 (Whole View) Fig. 2 WE-4 (Head Part)

2.1 Eyeballs and Eyelids
The eyeballs have 1-DOF for the pitch axis and 2-DOF for the yaw axis. Their maximum angular velocity, 600[deg/s], is similar to a human's. The eyelids have 6-DOF. WE-4 can rotate its upper eyelids so that it can express with the corners of its eyes. The maximum angular velocity for opening and closing the eyelids, 900[deg/s], is also similar to a human's. Furthermore, the robot can blink within 0.3[s], as fast as a human does.
To miniaturize the head, we newly developed an Eye Unit that integrates the eyeball and eyelid parts. Moreover, in the Eye Unit of WE-4, the eyeball pitch motion is mechanically synchronized with the opening and closing of the upper eyelid, so coordinated eyeball-eyelid motion is achieved in hardware.

2.2 Neck
WE-4's neck has 4-DOF: the upper pitch, the lower pitch, the roll and the yaw axes. WE-4 can stretch and retract its neck like a human using the upper and lower pitch DOF. The maximum angular velocity of each axis, 160[deg/s], is similar to a human's.

2.3 Trunk
Recently we added a waist with 2-DOF, the pitch and yaw axes, to WE-4. With the waist, WE-4 pursues targets using not only coordinated head-eye motion but also coordinated waist-head-eye motion with V.O.R. (vestibulo-ocular reflex). In addition, WE-4 produces emotional expression not only with facial expressions but also with the upper half of its body.
Additionally, we set a lung in WE-4's chest. It can display a breathing motion that enriches emotional expression, in addition to drawing in air for the olfactory sensation.

2.4 Facial Expression Mechanisms
WE-4 expresses its emotions using its eyebrows, lips, jaw, facial color and voice. The eyebrows are made of flexible sponge, and each eyebrow has 4-DOF.
We used spindle-shaped springs for WE-4's lips. The lips change their shape by being pulled from four directions, and WE-4's jaw, which has 1-DOF, opens and closes them.
For facial color, we applied red and blue EL (electroluminescence) sheets to the cheeks, so WE-4 can express flushed and pale facial colors.
For the voice system, we used a small speaker set in the jaw. The robot's voice is synthesized by LaLaVoice 2001 (TOSHIBA Corporation).

2.5 Sensors
(1) Visual Sensation
WE-4 has two color CCD cameras in its eyes. The images from the eyes are captured into a PC by an image capture board. WE-4 calculates the center of gravity and area of each target. It can recognize any color as a target and can track up to four targets at the same time; if multiple target colors are in view, WE-4 follows the largest one.
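The target selection described above can be sketched as follows. This is a toy illustration of the centroid/area computation on binary color masks, not the original vision code; the function name and mask sizes are invented:

```python
import numpy as np

def find_largest_target(masks):
    """Given one boolean mask per tracked color, return the index,
    pixel area, and center of gravity (x, y) of the largest target."""
    best = None
    for i, mask in enumerate(masks):
        ys, xs = np.nonzero(mask)
        area = xs.size
        if area == 0:
            continue                      # this color is not in view
        cog = (xs.mean(), ys.mean())      # center of gravity in pixels
        if best is None or area > best[1]:
            best = (i, area, cog)
    return best

# Two toy 4x4 masks: target 1 covers 4 pixels, target 0 only 1,
# so the robot would follow target 1.
m0 = np.zeros((4, 4), dtype=bool); m0[0, 0] = True
m1 = np.zeros((4, 4), dtype=bool); m1[2:4, 2:4] = True
idx, area, cog = find_largest_target([m0, m1])
```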

(2) Auditory Sensation
We used two small condenser microphones for the auditory sensation. WE-4 can localize the sound direction from the loudness and the phase difference between the right and left microphones.
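Localization from the interaural differences can be sketched like this. It is a textbook cross-correlation sketch, not WE-4's actual algorithm, and the microphone spacing is an assumed value:

```python
import numpy as np

SOUND_SPEED = 343.0    # m/s
MIC_DISTANCE = 0.18    # assumed spacing between the two microphones [m]

def localize(left, right, fs):
    """Estimate azimuth [deg] from the time lag between the channels,
    plus which side is louder. Positive azimuth: source on the left."""
    corr = np.correlate(right, left, mode="full")
    lag = np.argmax(corr) - (len(left) - 1)   # samples; > 0 means right lags
    itd = lag / fs                            # interaural time difference [s]
    sin_theta = np.clip(itd * SOUND_SPEED / MIC_DISTANCE, -1.0, 1.0)
    azimuth = np.degrees(np.arcsin(sin_theta))
    louder = "left" if np.abs(left).mean() > np.abs(right).mean() else "right"
    return azimuth, louder

# Simulate a click from the left: the right channel arrives 2 samples
# later and slightly attenuated.
fs = 8000
n = np.arange(160)
click = np.exp(-((n - 80) ** 2) / (2 * 5.0 ** 2))
left = np.concatenate([click, np.zeros(2)])
right = 0.8 * np.concatenate([np.zeros(2), click])
azimuth, louder = localize(left, right, fs)
```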

(3) Cutaneous Sensation
WE-4 has the tactile and temperature senses of human cutaneous sensation. For the tactile sense we used FSRs (Force Sensing Resistors), which are thin, light devices able to detect even very weak forces. Using a two-layer structure with FSRs, we devised a method for recognizing not only the magnitude of a force but also the manner of touching: "Push", "Stroke" or "Hit". For the temperature sense, WE-4 has a thermistor.
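The source does not detail the classification rule, so the following is only a plausible sketch that separates "Push", "Stroke" and "Hit" by peak force, contact duration and contact-point movement; all thresholds are invented:

```python
def classify_touch(force, dt=0.01, moved=False):
    """Classify a sampled force trace [N] into a touching manner.
    `moved` flags whether the contact point slid across the skin;
    thresholds here are illustrative, not WE-4's calibrated values."""
    peak = max(force)
    contact = sum(1 for f in force if f > 0.1) * dt   # contact time [s]
    if moved:
        return "Stroke"                               # sliding contact
    if peak > 5.0 and contact < 0.1:
        return "Hit"                                  # short, sharp impact
    return "Push"                                     # sustained pressure

hit = classify_touch([0.0, 8.0, 6.0, 0.0])       # ~20 ms spike
push = classify_touch([1.0] * 100)               # 1 N held for 1 s
stroke = classify_touch([1.0] * 30, moved=True)  # sliding contact
```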

(4) Olfactory Sensation
We used four semiconductor gas sensors, set in WE-4's nose, as the olfactory sensation. WE-4 can recognize the smells of alcohol, ammonia and cigarette smoke.

2.6 System Configuration
Fig. 3 shows the total system configuration of WE-4. We used two computers (PC/AT compatibles) connected to each other by Ethernet. PC1 obtains and analyzes the outputs of the olfactory and cutaneous sensors through A/D boards, and the sounds from the microphones through a sound board. PC1 then determines the mental state according to the stimuli, and controls all DC motors, the facial color and the voice system according to the visual and mental information. PC2 captures the images from the CCD cameras, calculates the center of gravity and brightness of the target, and sends them to PC1.

Fig. 3 System Configuration

3. Facial Expressions
We used Ekman's six basic facial expressions in the robot's facial control, and defined seven facial patterns: "Happiness", "Anger", "Disgust", "Fear", "Sadness", "Surprise" and "Neutral". The strength of each facial expression is varied by a fifty-grade proportional interpolation of the differences in position from the "Neutral" expression. WE-4's facial patterns are shown in Fig. 4.
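The fifty-grade interpolation can be sketched as follows; the joint names and angles are invented for illustration, not WE-4's actual control table:

```python
def expression_pose(neutral, target, grade):
    """Fifty-grade proportional interpolation: grade 0 gives the
    Neutral face, grade 50 the full target expression."""
    if not 0 <= grade <= 50:
        raise ValueError("grade must be in 0..50")
    a = grade / 50.0
    return {j: neutral[j] + a * (target[j] - neutral[j]) for j in neutral}

# Invented joint positions [deg] for illustration
neutral = {"eyebrow": 0.0, "lip_corner": 0.0, "jaw": 0.0}
happiness = {"eyebrow": 5.0, "lip_corner": 12.0, "jaw": 3.0}

half_smile = expression_pose(neutral, happiness, 25)   # 50% "Happiness"
```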

Fig. 4 Seven Basic Facial Expressions: (a) Happiness, (b) Anger, (c) Surprise, (d) Sadness, (e) Fear, (f) Disgust, (g) Neutral

Fig. 5 New Facial Expressions: (a) Drunken, (b) Shame

4. Mental Modeling
4.1 Approach
The mental dynamics, i.e. the mental transitions caused by the robot's internal and external environment, are extremely important for emotional expression. Therefore, in constructing the mental model, we assumed a three-layered model of the human brain consisting of reflex, emotion and intelligence, and we are building up the mental model starting from the reflex layer. Moreover, we divided emotion into a "Learning System", "Mood" and "Dynamic Response" according to their working durations.

Fig. 5 Brain Dynamics

4.2 Information Flow
WE-4 changes its mental state according to external and internal stimuli, and expresses its emotion through facial expressions and facial color. We introduced the information flow shown in Fig. 6 into the robot. There are two main flows: one caused by the external environment and the other caused by the robot's internal state. Furthermore, we introduced a Robot Personality, because each human has a different personality. The Robot Personality consists of the Sensing Personality and the Expression Personality.

Fig. 6 Information Flow of the Mental Model

4.3 Personality and Learning System
The Robot Personality consists of the Sensing Personality and the Expression Personality. The former determines how a stimulus affects the mental state, and the latter determines how the robot expresses its emotion. Because these personalities can be assigned easily, a wide variety of Robot Personalities can be obtained. Moreover, we introduced the "Learning System" so that the robot can learn from its experiences and dynamically construct its personality from them.
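As an illustration only (the source gives no numbers), the Sensing Personality can be pictured as a matrix that maps a stimulus vector onto the mental-space axes, so that swapping the matrix swaps the personality. Every value below is invented:

```python
import numpy as np

# Rows: pleasantness, activation, certainty.
# Columns: stimulus channels, e.g. [strong touch, loud sound, smell].
# A "timid" personality reacts to the same stimuli far more strongly
# than a "calm" one; all numbers are invented examples.
timid = np.array([[-0.8, -0.5, -0.2],
                  [ 0.9,  1.0,  0.4],
                  [-0.6, -0.7, -0.1]])
calm = np.array([[ 0.2, -0.1,  0.0],
                 [ 0.3,  0.4,  0.1],
                 [ 0.0, -0.1,  0.0]])

stimulus = np.array([1.0, 0.0, 0.0])   # a sudden strong touch
appeal_timid = timid @ stimulus        # effect on the mental state
appeal_calm = calm @ stimulus
```

The Expression Personality would, symmetrically, scale how strongly a given mental state is shown on the face and body.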

4.4 Emotion Vector and Mood Vector
We adopted a 3D mental space consisting of a pleasantness axis, an activation axis and a certainty axis, shown in Fig. 7. The vector E, named the "Emotion Vector", expresses the mental state of WE-4. Furthermore, we newly introduced the "Mood Vector" M, which consists of a pleasantness axis and an activation axis.
The pleasantness component of the Mood Vector changes with the current mental state. For the activation component, however, we introduced an internal clock, a kind of autonomic nervous system.

4.5 Equations of Emotion
The Emotion Vector E is described by the Equations of Emotion when the robot senses stimuli. We considered that the mental dynamics, i.e. the transition of a human mental state, might be expressed by equations similar to the equation of motion. Therefore, we expanded the Equations of Emotion into a second-order differential equation modeled on the equation of motion. As a result, the robot can express the transient mental state after it senses stimuli from the environment, and complex and varied mental trajectories can be obtained.
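Since the text likens the mental dynamics to the equation of motion, one axis of the mental space can be sketched as a damped second-order system m·E'' + d·E' + k·E = F(t), where the emotional appeal of a stimulus acts as the force F. The coefficients below are invented, not WE-4's:

```python
def simulate_emotion(force, m=1.0, d=1.2, k=4.0, dt=0.01, steps=600):
    """Integrate m*E'' + d*E' + k*E = F(t) for one mental-space axis
    with semi-implicit Euler. m, d and k (invented values) play the
    roles of emotional inertia, damping and the pull back toward the
    Neutral state."""
    E, v = 0.0, 0.0
    traj = []
    for i in range(steps):
        F = force(i * dt)
        a = (F - d * v - k * E) / m   # acceleration of the mental state
        v += a * dt
        E += v * dt
        traj.append(E)
    return traj

# A pleasant stimulus applied for two seconds, then removed: the state
# overshoots, oscillates, and settles back toward Neutral.
traj = simulate_emotion(lambda t: 2.0 if t < 2.0 else 0.0)
```

The underdamped choice d² < 4mk is what produces the transient overshoot and decay described above.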
Finally, we mapped the seven emotions into the 3D mental space as shown in Fig. 8. WE-4 determines its emotion from the region through which the Emotion Vector passes.

Fig. 7 Mental Space

Fig. 8 Emotional Mapping