Emotional Intelligence


Blender is a typeface based on a rational approach following logical principles, and built on grid systems, yet it contains an emotional, human element. Nik Thoenen first designed Blender, which was released under the Font Label in 2003. The design was heavily influenced by the ‘Gridnik’ font (originally drawn by the Dutch designer Wim Crouwel in the late 60s; he called it ‘the thinking man’s Courier’) and the intention was to create a font that served as a less spectacular headline and text font featuring a curved face.

how the gap between humans and machines has been bridged

emotional intelligence [E.I]

gestalten








table of contents

+0 introduction: six to nine
+1 can robots have emotions?: ten to thirteen
+2 what are emotions?: fourteen to twenty-three
+3 learning to make facial expressions: twenty-four to forty-one
+4 emotional intelligence [e.i]: forty-two to forty-nine


> publishing
> corporate design
> motion/interactive
> case studies



Gestalten specializes in developing content for aficionados of cutting-edge visual culture worldwide.

> curation and consulting
> customer publishing and international distribution

Gestalten is best known for the more than three hundred books we have published that document and anticipate vital design movements, both for our own title list and for customer publishing projects. Gestalten also provides curation, art direction, consulting, production, distribution and design services to international clients. Always working directly with talented young designers and artists, Gestalten brings a deep understanding of visual culture on a global level to whatever we do and remains firmly committed to presenting visual trends with timeless substance. The company has 40 staff members across offices in Berlin, London, New York and Tokyo.



introduction

Blender

The font is based on a rational approach following logical principles, and built on grid systems, yet it contains an emotional, human element.

[concept map around ‘to blend in’: to merge / to support / to harmonize / to disguise; grid system: social networking pattern, systematic order; logical principle: mathematics, science, chemistry, robots, logic, problem solving; how animals disguise in nature: adapt and adopt, living together; machineries]




Blender Pro: Publisher Gestalten / Designer Nik Thoenen / Release June 2008

[captions: house appliance, convenience, fruit, lifestyle; google project: people participating in completing the world map; machines that can feel and understand human emotions]

Nik Thoenen, a member of the Vienna-based design collective RE-P.ORG, is the author of Blender, released under Gestalten Fonts in 2003. Over the years, the designer has developed and created expanded character sets for Blender Central European (including Latin glyphs for Bosnian, Czech, Estonian, Hungarian, Latvian, Lithuanian, Polish, Romanian, Slovak and Slovenian) and Blender Cyrillic (Slavic languages including Belarusian, Bulgarian, Macedonian, Russian, Serbian and Ukrainian). The font is based on a rational approach following logical principles, and built on grid systems, yet it contains an emotional, human element. Nik Thoenen first designed Blender, which was released under the Font Label in 2003. The design was heavily influenced by the “Gridnik” font (originally drawn by the Dutch designer Wim Crouwel in the late 60s; he called it “the thinking man’s Courier”) and the intention was to create a font that served as a less spectacular headline and text font featuring a curved face. Since its original release, Nik has worked on developing Blender and expanded its range of Roman characters from 254 to 377 glyphs. It now includes all codes and sets for the Nordic and Slavic languages from Czech to Polish as well as Romanian, Hungarian, Finnish and Turkish. Expanding a character set has been sheer hard work in calculating the additional characters appropriately so that they fit in with the rest of the font.



can robots have emotions?


The gap between science fiction and science fact appears vast, but some researchers in artificial intelligence now believe it is only a question of time before it is bridged.





Science fiction is full of machines that have feelings. In 2001: A Space Odyssey, the onboard computer turns against the crew of the spaceship Discovery 1, and utters cries of pain and fear when his circuits are finally taken apart. In Blade Runner, a humanoid robot is distressed to learn that her memories are not real, but have been implanted in her silicon brain by her programmer. In Bicentennial Man, Robin Williams plays the part of a robot who redesigns his own circuitry so that he can experience the full range of human feelings. These stories achieve their effect in part because the capacity for emotion is often considered to be one of the main differences between humans and machines. This is certainly true of the machines we know today. The responses we receive from computers are rather dry affairs, such as “System error 1378.” People sometimes get angry with their computers and shout at them as if they had emotions, but the computers take no notice. They neither feel their own feelings, nor recognize yours. The gap between science fiction and science fact appears vast, but some researchers in artificial intelligence now believe it is only a question of time before it is bridged. The new field of affective computing has already made some progress in building primitive emotional machines, and every month brings new advances. However, some critics argue that a machine could never come to have real emotions like ours. At best, they claim, clever programming might allow it to simulate human emotions, but these would just be clever fakes. Who is right? To answer this question, we need to say what emotions really are.





Although everyone experiences emotions, scientists do not all agree on what emotions are or how they should be measured or studied.

Emotions are complex and have both physical and mental components. Generally researchers agree that emotions have the following parts: subjective feelings, physiological (body) responses, and expressive behavior.

> the james-lange theory
> the cannon-bard theory
> the schachter-singer model
> why do we have emotions?

what are emotions?



the sources of emotions

Emotions, often called feelings, include experiences such as love, hate, anger, trust, joy, panic, fear, and grief. Emotions are related to, but different from, mood. Emotions are specific reactions to a particular event that are usually of fairly short duration. Mood is a more general feeling such as happiness, sadness, frustration, contentment, or anxiety that lasts for a longer time. > Although everyone experiences emotions, scientists do not all agree on what emotions are or how they should be measured or studied. Emotions are complex and have both physical and mental components. Generally researchers agree that emotions have the following parts: subjective feelings, physiological responses, and expressive behavior. > The component of emotions that scientists call subjective feelings refers to the way each individual person experiences feelings, and this component is the most difficult to describe or measure. Subjective feelings cannot be observed; instead, the person experiencing the emotion must describe it to others, and each person’s description and interpretation of a feeling may be slightly different. For example, two people falling in love will not experience or describe their feeling in exactly the same ways. > Physiological responses are the easiest part of emotion to measure because scientists have developed special tools to measure them. A pounding heart, sweating, blood rushing to the face, or the release of adrenaline in response to a situation that creates intense emotion can all be measured with scientific accuracy. People have very similar internal responses to the same emotion. For example, regardless of age, race, or gender, when people are under stress, their bodies release adrenaline; this hormone helps prepare the body to either run away or fight, which is called the “fight or flight” reaction. Although the psychological part of emotions may be different for each feeling, several different emotions can produce the same physical reaction. > Expressive behavior is the outward sign that an emotion is being experienced. Outward signs of emotions can include fainting, a flushed face, muscle tensing, facial expressions, tone of voice, rapid breathing, restlessness, or other body language. The outward expression of an emotion gives other people clues to what someone is experiencing and helps to regulate social interactions. > Scientists have developed several theories about how emotions are generated based on subjective feelings, physiological responses, and expressive behavior. > The facial muscles involved in emotional expression are governed by nerves following a complex system of direct and indirect pathways to and from the motor cortex (voluntary smile circuit under conscious control) and the limbic system and brain stem (spontaneous smile circuit not under conscious control). This may explain why people’s faces can express emotions like happiness, fear, and disgust without their being aware of it. > American scientist William James (1842-1910) and Danish scientist Carl Lange (1834-1900) both studied the relationship between emotion and physical changes in the body.



In about 1885, they independently proposed that feeling an emotion is dependent on two factors: the physical changes that occur in the body and the person’s understanding of the body’s changes after the emotional event. James and Lange believed that physical changes occur first, and then interpretation of those physical changes occurs. Together, they create the emotion. > According to this theory, when Mandy experienced a threatening situation (almost being hit by a car), her body first sent out chemical messengers, like adrenaline, that caused physical changes such as increased breathing and a faster heart rate. Her brain then sensed these physical changes and interpreted them as the emotion fear. > One of the problems with the James-Lange theory is that emotions seem to happen too quickly to be accounted for by the release of chemical messengers and the changes they cause. Another problem is that different emotions (for example fear and anger) have been shown to cause the same physical responses. > In 1927, about 40 years after the James-Lange theory was developed, Harvard physiologist Walter Cannon (1871-1945) and his colleague Philip Bard (1898-1977) developed a new theory that related the workings of the nervous system to the expression of emotions. Cannon and Bard found that people could experience emotion without getting physical feedback from chemical messengers. They proposed that upon experiencing a stimulating event, information about the event is collected by the body’s senses and is sent through the nervous system to the brain. > In the brain, the message is sent two places at the same time. The message is sent to the cortex (the part of the brain that controls conscious thought; it is where people experience thinking and feeling), which creates emotions; in Mandy’s case it created fear. At the same time, the message also goes to the hypothalamus (hy-po-THAL-ah-mus). The hypothalamus is the part of the brain that controls automatic body responses. It tells the body to send out chemical messengers that cause the body to respond. Some of these responses are experienced as behaviors such as shaking, rapid breathing, and crying. > In 1962, American scientists Stanley Schachter (1922-1997) and Jerome Singer (still teaching at Yale University in 2000) took elements of both the James-Lange and the Cannon-Bard theories and modified them to try to better explain the relationship between physical responses and emotional experience. > According to the Schachter-Singer model, both physical changes and conscious mental processing are needed to fully experience any emotion. In this model, in response to her near-accident, Mandy’s body sent out messages to create physical changes such as an increased heart rate. Mandy’s brain sensed these changes and then analyzed them and put a label on them. > Researchers believe that the frontal lobes and the amygdala are among the most important brain structures affecting emotions. Feelings of happiness and pleasure are linked to the prefrontal cortex. Anger, fear, sadness, and other negative emotions are linked to the amygdala.



[diagram: architecture with data stores shown: input devices (speech, body position, gaze direction, gesture recognition, keyboard, mouse, user id, motion detection) feed an understanding module, which draws on a knowledge base and a discourse model; a response planner and a reactive component drive a generation module, whose output passes through an action scheduler to output devices (animation rendering, speech synthesizer devices)]
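Read as a pipeline, the architecture above is straightforward to sketch in code. The following Python sketch is purely illustrative; every class and function name is invented here to mirror the diagram's boxes, not taken from any actual system described in this book.

# Minimal sketch of the conversational-agent pipeline shown above.
# All names are hypothetical, chosen to mirror the diagram.

class KnowledgeBase:
    """Data store for what the agent knows about its domain."""
    facts = {"greeting_words": {"hello", "hi"}}

class DiscourseModel:
    """Data store for the conversation so far."""
    def __init__(self):
        self.history = []

def understand(event, kb, discourse):
    """Understanding module: map a raw input event to a meaning."""
    discourse.history.append(event)
    words = set(event.split())
    return {"intent": "greet" if words & kb.facts["greeting_words"] else "unknown"}

def plan_response(meaning):
    """Response planner: choose what to do next."""
    if meaning["intent"] == "greet":
        return {"say": "Hello!", "expression": "smile"}
    return {"say": "Sorry?", "expression": "neutral"}

def generate(plan):
    """Generation module: expand a plan into concrete output actions."""
    return [("speech synthesizer", plan["say"]), ("animation rendering", plan["expression"])]

def schedule(actions):
    """Action scheduler: hand actions to the output devices in order."""
    for device, payload in actions:
        print(device, "->", payload)

kb, discourse = KnowledgeBase(), DiscourseModel()
schedule(generate(plan_response(understand("hello there", kb, discourse))))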



The emotional label selected for the feelings was fear, and it depended in part on Mandy’s experience with large fast cars; in other words, she knew from experience in her past that cars are dangerous. This model explains why the same physical responses can produce different emotions. The brain decides, for example, whether fear or anger or surprise is the appropriate emotion based on mental processing of physical information. Thus, interpretation of information from the environment, body feelings, and experience figure more prominently in the Schachter-Singer model. > Research continues on the relationship between the body, the brain, and the perception of emotions. One current area of research is focused on whether certain areas of the cortex are dedicated to specific emotions and whether a person can feel an emotion when a particular part of the cortex is stimulated directly by an electric impulse. > Emotions appear to serve several physical and psychological purposes. Some scientists believe that emotions are one of the fundamental traits associated with being human. Emotions color people’s lives and give them depth and differentiation. For many people, strong emotions are linked to creativity and expression. Great art, music, and literature deal on a fundamental level with arousing emotions and creating an emotional connection between the artist and the public. Some scientists also believe that emotions serve as motivation to behave in specific ways. > The French neurologist Guillaume Duchenne (1806-1875) studied the body’s neuromuscular system. In this experiment (c. 1855), he used an electrical stimulation device to activate the involuntary facial muscles involved in smiling and laughter. > Physiologically, emotions aid in survival. For example, sudden fear often causes a person to freeze like a deer caught by a car’s headlights. Because animals usually attack in response to motion, at its simplest level, fear reduces the chances of attack. When Mandy froze in response to a car racing by her, this was an example of a physical response to an emotion that improved her chances of survival. > Emotions also help people monitor their social behavior and regulate their interactions with others. Every person unconsciously learns to “read”



the outward expressions of other people and apply past experience to determine what these outward signs indicate about what the other person is feeling. If a person sees a man approaching who is walking very aggressively, holding his body stiffly and frowning, the person might correctly assume that the man is angry. Using this information, the person can decide whether to leave or to stay or what tone of voice and body language to use when approaching the man. > Some outward expressions of emotions (body language) mean different things in different cultures. For example, if a young person avoids looking directly at a person in authority, it is taken as a sign of respect in some cultures. In other cultures, this expression suggests guilt or a lack of trustworthiness.


why do we have emotions?


The Facial Action Coding System (FACS) Manual teaches you how to recognize and score the Action Units (AUs), which represent the muscular activity that produces momentary changes in facial appearance.


[diagram: a surprised face annotated with facial landmarks, numbered 1 to 10: glabella, root of nose, eye cover fold, infraorbital furrow, lower eyelid furrow, nostril wing, philtrum, infraorbital triangle, nasolabial furrow, chin boss]



Measurement of Facial Motion

To categorize expressions, we need first to determine the expressions from facial movements. Ekman and Friesen have produced a system for describing all visually distinguishable facial movements.




One possible way to avoid the need for costly human intervention is to develop algorithms that would allow robots to learn to make facial expressions on their own.

learning to make facial expressions



Facial expressions and machines

Of all the nonverbal behaviors (body movements, posture, gaze, voice, etc.), the face is probably the most accessible “window” into the mechanisms which govern our emotional and social lives. The current technological developments provide us with the means to develop automated systems for monitoring facial expressions and animating synthetic facial models. Face processing by machines could revolutionize fields as diverse as medicine, law, communications, and education. This progress would make it feasible to automate many aspects of face processing that humans take for granted (face recognition, expression and emotion recognition, lip reading, etc.), and to develop new technological aids (robotics, man-machine systems, medical, teleconferencing, etc.). > Realistic animation of faces would serve a major role in bridging the gap between man and machine. Computers with animated faces could be used in classrooms to teach children. Machines that would know how to express emotions would be instrumental in establishing a completely new paradigm for man-machine interaction. Machines that can recognize expressions will be able to relate to the emotion and feeling of the user. A machine that can both model and recognize expressions will be one step closer to having a virtual persona. > Animation and synthesis of facial expressions also has applications out of the realm of human-machine systems. It can be used to generate 3-D synthetic actors that would have expressions and emotions to reflect the context of the story and the environment they reside in. Such animations (sometimes exaggerated to reflect the story-telling context) are aimed at establishing an emotional relationship with the audience. > Facial motion analysis can be applied to applications of reading lips. It can be used to complement speech recognition. On its own it would be a great resource for the hearing impaired. > An important application, perhaps one that has been addressed effectively in recent years, is a video-phone or a teleconferencing application. It is argued that due to the limits on the bandwidth of transmissions, an efficient form of model-based coding of facial expression information and its transmission to another location is required. Additionally, concepts of telepresence and virtual offices would become possible as one could sit on one continent, carry on a meeting with different people on different continents, and still be able to observe each and every facial gesture. > In basic research on the brain, facial expressions can identify when specific mental processes are occurring. Computers can become useful tools for such studies. Such processing can also be used towards understanding emotion and the related facial expressions. > Facial expressions also



hold promise for applied medical research, especially in cases of analyzing the psychological state of the patient. Detailed facial modeling can be used to visualize faces for biomedical applications. Several researchers have used 3-D biomechanical models for pre/post surgery simulations and surgical path planning.

Visual sensing

Using computer vision techniques to determine parameters of expressions by estimating the pattern changes, evolving over time, of a face in a sequence of images. This involves observing image sequences and calculating the temporal change (i.e., computing motion).
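As an illustration of “computing motion” from an image sequence, here is a minimal sketch using OpenCV’s dense optical flow. The file name is a placeholder, and this is one standard technique rather than the specific estimator used in the work described here.

# Sketch: dense optical flow between consecutive frames of a face video.
import cv2

cap = cv2.VideoCapture("face.mp4")                 # placeholder input video
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # flow[y, x] holds (dx, dy): the temporal change of the pattern at each pixel
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    print("mean motion magnitude:", abs(flow).mean())
    prev_gray = gray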

Modeling and graphics

Defining a computer graphics model to describe a face as a geometric shape to which the above change in facial pattern can be applied.

Physically-based (anatomical) modeling

Extending this geometric model to be a physically-based model of the face, with dynamic properties. This includes adding muscle information and other biomechanical constraints to the geometric model. Extra details are added in regions where flow computation has a higher variance of error. A multi-grid mesh is developed to account for low frequency and high frequency motions separately.

Dynamic estimation and control

Devising a dynamic estimation and control loop to correct the estimates of the feature change, based on the constraints of the dynamic system and the error covariance of the temporal changes in images. This will correct the behavior of the dynamic model and allow estimation and computation of muscle activation (i.e., facial control input) from visual observations.
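The correction step of such a loop can be illustrated with a scalar Kalman filter, where the error covariance decides how much each new visual measurement shifts the current estimate. This is a generic sketch with made-up constants, not the estimator actually used in the system described here.

# Sketch: covariance-weighted correction of a feature estimate
# (scalar Kalman filter; all constants are illustrative).
def kalman_step(x, p, z, q=0.01, r=0.1):
    x_pred = x                 # predict: identity dynamics for simplicity
    p_pred = p + q             # process noise grows the error covariance
    k = p_pred / (p_pred + r)  # gain: how much to trust the measurement z
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

x, p = 0.0, 1.0                       # initial estimate and covariance
for z in (0.20, 0.25, 0.30, 0.28):    # simulated feature measurements
    x, p = kalman_step(x, p, z)
    print(round(x, 3), round(p, 3))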

Analysis and identification

Establishing the relationship between visual input and the physical model in terms of a set of orthogonal, but time-varying, basis parameters and using these basis parameters to determine a set of control parameters for specific muscle group actuations. This basis decomposition will be accomplished by application of principal component analysis and similar statistical analysis techniques. Using the dynamics of the underlying biological model gives credence to the time variations in these parameters. This newer set of parameters forms the extended FACS model (named FACS+). This FACS+ model is then used for analysis, recognition and synthesis of facial expressions.
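The basis decomposition itself can be illustrated in a few lines: principal component analysis over a matrix of observed motion parameters yields an orthogonal basis ordered by variance. The data below is random stand-in data, and the dimensions (46 parameters, 10 components) are merely illustrative.

# Sketch: PCA to find an orthogonal basis for observed facial-motion parameters.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 46))     # 200 frames x 46 motion parameters (stand-in)
X = X - X.mean(axis=0)             # center the data before decomposition

# SVD yields orthogonal basis vectors (rows of Vt), ordered by variance
U, S, Vt = np.linalg.svd(X, full_matrices=False)
k = 10                             # keep the k strongest components
basis = Vt[:k]
coeffs = X @ basis.T               # time-varying coefficients per frame
print(coeffs.shape)                # (200, 10): a compact set of control parameters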

Synthesis and real-time simulation

The determined control parameters of muscle actuations for facial expressions provide a set of proper “control knobs” for synthesis. These are then used for real-time facial tracking and animation.



The human face is a very complex system, with more than 44 muscles whose activation can be combined in non-trivial ways to produce thousands of different facial expressions. As android heads approximate the level of complexity of the human face, scientists and engineers face a difficult control problem, not unlike the problem faced by infants: how to send messages to the different actuators so as to produce interpretable expressions. > Others have explored the possibility of robots learning to control their bodies through exploration. Olsson, Nehaniv, and Polani proposed a method to learn robot body configurations using vision and touch sensory feedback during random limb movements. The algorithm worked well on the AIBO robots. However, AIBO has only 20 degrees of freedom and is subject to well known rigid body physics. Here we utilize an android head (Hanson Robotics’ Einstein Head) that has 31 degrees of freedom and non-rigid dynamics that map servo actuators to facial expressions in nontrivial ways. In practice, setting up the robot expressions requires many hours of trial-and-error work from people with a high level of expertise. In addition, as time progresses some servos may fail or work differently, thus requiring constant recalibration of the expressions. > One possible way to avoid the need for costly human intervention is to develop algorithms that would allow robots to learn to make facial expressions on their own. In developmental psychology, it is believed that infants learn to control their body through systematic exploratory movements. For example, they babble to learn to speak and wave their arms in what appears to be a random manner as they learn to control their body and reach for objects. This process may involve temporal contingency feedback from the proprioceptive system and from the sensory system that registers the consequences of body movements on the external physical and social world. Here we apply this same idea to the problem of a robot learning to make realistic facial expressions: the robot uses “expression babbling” to progressively learn an inverse kinematics model of its own face. The model maps the relationship between proprioceptive feedback from the face and the control signals to 31 servo motors that caused that feedback.
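Under a strong simplifying assumption that the servo-to-activation mapping is linear, the expression-babbling idea can be sketched as follows: issue random servo commands, record the resulting activation estimates, fit a forward model by least squares, and invert it to drive the face toward a desired expression. All dimensions and data here are simulated.

# Sketch: learning an inverse face model by motor babbling.
# 31 servos -> 20 estimated activations; linearity is a simplifying assumption.
import numpy as np

rng = np.random.default_rng(1)
true_map = rng.normal(size=(31, 20))             # unknown servo->activation map (simulated)

servo_cmds = rng.uniform(-1, 1, size=(500, 31))  # random "babbling" commands
activations = servo_cmds @ true_map              # simulated proprioceptive feedback

# fit the forward model by least squares, then pseudo-invert it for control
W, *_ = np.linalg.lstsq(servo_cmds, activations, rcond=None)
W_inv = np.linalg.pinv(W)

target = rng.normal(size=20)                     # a desired activation pattern
cmd = target @ W_inv                             # servo commands that realize it
print(np.allclose(cmd @ W, target, atol=1e-6))   # sanity check on the learned model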


Since the Einstein robot head does not have touch and stretch sensors, we simulated the proprioceptive feedback using computer vision methods: an automatic facial expression analyzer was used that estimated, frame by frame, underlying human facial muscle activations from the observed facial images produced by the android head. Once the inverse kinematics model is learned, the robot can generate new control signals to produce desired facial expressions. The proposed mechanism is not unlike the body-babbling approach hypothesized as a precursor for the development of imitation in infants.

[photos: the robot head posing neutral, surprise, anger, happiness, disgust]

The facial muscles involved in emotional expression are governed by nerves following a complex system of direct and indirect pathways to and from the motor cortex (voluntary smile circuit under conscious control) and the limbic system and brain stem (spontaneous smile circuit not under conscious control). This may explain why people’s faces can express emotions like happiness, fear, and disgust without their being aware of it.




kismet

Kismet is an autonomous robot designed for social interactions with humans. In general, social robotics has concentrated on groups of robots performing behaviors such as flocking, foraging or dispersion, or on paired robot-robot interactions such as imitation. This project focuses not on robot-robot interactions, but rather on the construction of robots that engage in meaningful social exchanges with humans. By doing so, it is possible to have a socially sophisticated human assist the robot in acquiring more sophisticated communication skills and helping it learn the meaning these acts have for others. Our approach is inspired by the way infants learn to communicate with adults. Specifically, the mode of social interaction is that of a caretaker-infant dyad where a human acts as the caretaker for the robot.



Measurement of Facial Motion

To categorize expressions, we need first to determine the expressions from facial movements. Ekman and Friesen have produced a system for describing all visually distinguishable facial movements. The system, called the Facial Action Coding System, or FACS, is based on the enumeration of all “action units” of a face that cause facial movements. As some muscles give rise to more than one action unit, the correspondence between action units and muscle units is approximate. There are 46 AUs in FACS that account for changes in facial expression and 12 AUs that describe changes in head orientation and gaze. FACS coding is done by individuals trained to categorize facial motion based on the anatomy of facial activity, for example, how muscles singly and in combination change the facial appearance. A FACS coder “dissects” an expression, decomposing it into the specific AUs that produced the motion. The FACS scoring units are descriptive, involving no inferences about emotions. Using a set of rules, FACS scores can be converted to emotion scores to generate a FACS emotion dictionary. > The validity of FACS as an index of facial emotion has been demonstrated in a number of studies. Unfortunately, despite efforts in the development of FACS as a tool for describing facial motion, there has been little exploration of whether action units are the units by which we categorize expressions. Supporters of the FACS model claim that emotions that are often confused with one another are those that share many action units. However, the literature contains little systematic investigation of comparisons between different bases for description of postures or relative positions of facial features. > Emotion recognition requires delineating the facial patterns that give rise to the judgment of different emotions. It involves the description of information in the face that leads observers to specific judgments of emotion. The studies are based on the methods described above and then analyze the relationships between components of the expressions and judgments made by the observers. These judgment studies rely on static representations of facial expressions. The use of such stimuli has been heavily criticized since “judgment of facial expression hardly ever takes place on the basis of a face caught in a state similar to that provided by a photograph snapped at 20 milliseconds.” The feature-based descriptions derived from static stimuli ignore several levels of facial information relevant to the judgment of emotions. One of these levels is the rate at which the emotion is expressed. Another level is related to the structural deformation of the surface of the face. Bassili argues that because facial muscles are fixed in a certain spatial arrangement, the deformations of the elastic surface of the face to which they give rise during facial expressions may be informative in the recognition of facial expressions.
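The rule-based conversion from FACS scores to emotion scores can be pictured as a small lookup table. The sketch below uses two commonly cited action-unit combinations (AU6+AU12 for happiness; AU1+AU2+AU5+AU26 for surprise) purely as illustrations; it is not the actual emotion dictionary referred to in the text.

# Sketch: converting FACS action-unit scores to emotion labels via a rule set.
# The AU combinations are commonly cited examples, not an official dictionary.
RULES = {
    "happiness": {6, 12},        # cheek raiser + lip corner puller
    "surprise":  {1, 2, 5, 26},  # inner/outer brow raiser, upper lid raiser, jaw drop
}

def score_emotions(active_aus):
    """Return emotions whose required AUs are all present in the coded face."""
    active = set(active_aus)
    return [emo for emo, aus in RULES.items() if aus <= active]

print(score_emotions([6, 12, 25]))    # -> ['happiness']
print(score_emotions([1, 2, 5, 26]))  # -> ['surprise']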



> Bassili conducted experiments by covering faces of actors with black makeup and painting white spots in random order over it. Faces were divided into upper and lower regions and recognition studies were conducted. This study showed that in addition to the spatial arrangement of facial features, movement of the surface of the face does serve as a source of information for facial recognition.

Graphics and Animation

Computer-based modeling and animation of faces has attracted considerable interest in computer graphics for many years. The initial efforts to represent and animate faces using computers go back almost 20 years. Facial modeling is interesting in a computer graphics context as it generates synthetic facial models for 3-D character animation. This in itself has many applications in virtual reality, visualization, telepresence, autonomous creatures, personable interfaces and cinematic special effects. > Facial representation and facial animation share the same set of issues as other representation and animation activities: modeling, motion control, and image rendering. Some applications for facial animation such as visualization, biomedical applications, and realistic simulations require detailed physics-based models. In other applications, the only requirement is that the faces be believable within the context and the setting. There is an extensive debate on which applications need physics-based details and which don’t; some have suggested a mix-and-match compromise. The only desire for all (animators and researchers) is that the face of an animated 3-D character has believable expressions in a given story-telling context and that the audience establish an emotional relationship with the characters portrayed. Caricatures and purposeful facial exaggeration are often more acceptable than attempts at realism. > Facial modeling is concerned with developing geometric descriptions or procedures for representing faces. It addresses the issues of facial conformation, realism, likeness, expressiveness, and animatability. For modeling purposes, the visible surface of a face is modeled as a network of connected polygons. Some implementations have used curved surface modeling techniques. Facial motion control is concerned with techniques and algorithms for specifying the motions between expressions. This control is achieved by defining a controlled motion of the polygon vertex positions over time in such a way that the rendered facial surfaces have a desired expression in each frame of the animated sequence. Facial image sequences depend on the use of high-quality rendering techniques, which are extensively researched throughout the computer graphics community.

Control of Facial Motion

The earliest, and still a widely used, scheme for implementing and controlling facial animation uses key expression bases and interpolation. Parke demonstrated this key-framing approach, where two or more complete (static) poses were used and the intermediate information was calculated by simple interpolation techniques.
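Key-framing of this kind reduces to interpolating vertex positions between complete poses. A minimal sketch with a toy three-vertex “mesh” (the geometry is invented for illustration):

# Sketch: key-frame facial animation by linear interpolation of vertex positions.
import numpy as np

neutral = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.5, 1.0, 0.0]])  # toy pose
smile   = np.array([[0.0, 0.2, 0.0], [1.0, 0.2, 0.0], [0.5, 1.1, 0.1]])  # toy pose

def interpolate(pose_a, pose_b, t):
    """Intermediate pose at t in [0, 1]; t=0 gives pose_a, t=1 gives pose_b."""
    return (1.0 - t) * pose_a + t * pose_b

for t in (0.0, 0.5, 1.0):                         # three frames of the sequence
    print(t, interpolate(neutral, smile, t)[2])   # track the third vertex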




[diagram: kismet design structure: sensors feed low-level feature extraction and an attention system, then a higher-level perceptual system; a behavior system, coupled to a motivation system with homeostatic regulation, drives the motor system and motor skills (orient head and eyes, face and body postures, vocal acts) out to the motors]



Progressive research to reduce the complexity of this kind of synthesis resulted in parameterized representation for facial animation. > The limitations in producing a wide range of realistic expressions using a limited set of parameters led to the development of facial models based on the anatomical structure of the human face. Platt and Badler have developed a partial face model in which vertices of a face surface are interconnected elastically to form the skin, and are connected to the underlying bone structures by muscles modeled with elastic properties and contraction forces. In their system, facial expression is manipulated by applying forces to the elastically connected skin mesh through the underlying muscles. The muscle actions are patterned after the FACS model described earlier. This adds the anatomical properties of the face to the geometric FACS model. Thus the next generation of facial animation systems included muscles as control models. > The major limitation of FACS is that it is a completely geometric model. The muscle model overcomes this limitation by relating the geometric deformations of FACS to the muscle groups. The resulting facial models are anatomically consistent muscle and tissue models.

Face processing and analysis by machines

Facial expression is a primary variable in psychological and sociological research; facial expression communicates information about personal experience, plays a critical role in the communication of interpersonal behavior, and provides a window into brain and autonomic nervous system functioning. Given the importance of facial expression, the need for an objective and automated facial analysis system is compelling. > With the current rate of improvement in technology, it appears feasible to automate many aspects of face processing that humans take for granted (face recognition, expression and emotion recognition, lip reading, etc.), and to develop new technological aids (robotics, man-machine systems, medical, teleconferencing, etc.). It is for this reason that the study of facial image processing is an increasingly interdisciplinary topic. > The first instance of detailed analysis of facial expressions appears in 1862 by Duchenne de Boulogne. Duchenne’s fascinating photographs and insightful commentary provided generations of researchers with foundations for experimentation in the perception and communication of human facial affect. Some of Duchenne’s hypotheses are still widely accepted and most of them have been used by Ekman and his colleagues. > Electromyography (EMG) of facial expression is a very traditional approach, exercised before computer vision methods were being considered to automate facial pattern analysis. Facial EMG has successfully differentiated between positive and negative affect and has concurrent validity with some FACS action units. Facial EMG is also more sensitive to subtle changes in facial muscles than human observers



using FACS, and is without a doubt the most accurate facial motion measurement method. However, the need to attach electrodes to subjects is a significant limiting factor that rules out its use in naturalistic observations. Moreover, the electrodes may hamper the subjects’ experience and expression of emotion. EMG does, however, have a natural application in biomedical fields. Bennet has shown how EMGs can be used in a surgical operating room to measure muscle actuations of a patient under anesthesia, in order to determine the level of anesthesia (and discomfort).

Vision-based sensing

Computer vision deals with the problem of scene analysis; more specifically, the extraction of 3-D information about scenes and objects from 2-D time-varying images obtained by video cameras. Over the years, many algorithms have been developed for determining 3-D shape, texture, and motion in scenes. Facial image processing has been an active area of research for at least two decades. Most of the efforts to date have focused on face recognition and head tracking. However, recently much progress has been made in estimation of 3-D rigid and nonrigid motion, leading to gesture analysis as well as facial expression recognition, understanding, and tracking. > Face recognition has typically been posed as a static problem requiring the application of pattern recognition techniques to static images. Significant work has been done to see if face recognition is possible through the analysis and storage of very low level features of images like gray-levels and/or edges. Recently, researchers have successfully used traditional pattern recognition techniques like principal component analysis to obtain a psychologically plausible model for human face recognition. However, the study of face features is still pursued intensely. Yuille’s work on deformable templates uses image features to fit a deformable template to a face, and the parameters of this template are then used for shape analysis. Brunelli and Poggio present a comparison between template matching and feature matching for face recognition, and show that template matching results in higher recognition accuracy. Many other researchers have used techniques such as edge detection, intensity variation, etc., to locate the lips, mouth, eyes, and nose on a face. All of this work is formulated in a static and a passive framework; no observations are made actively and the system does not evolve over time. > Williams’ method automatically tracks a number of points on the surface of a real face and maps the motion onto a very detailed facial model. The locations of these points control a texture map and when these points are moved on the basis of tracked motion, realistic facial expressions are generated. This is an efficient and an extremely practical system, capable of capturing much detail. It does require direct user input but that is desired, since the main goal of this system is to mix-and-match between motion in image sequences and user defined motion. This system is neither active nor dynamic, and the lack of both is defended extensively by the author.
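Template matching of the kind Brunelli and Poggio compare can be sketched with OpenCV’s normalized cross-correlation; the image files are placeholders, and this stands in for their method rather than reproducing it.

# Sketch: locating a facial feature by template matching (normalized correlation).
import cv2

face = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)   # placeholder images
eye_template = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)

scores = cv2.matchTemplate(face, eye_template, cv2.TM_CCOEFF_NORMED)
_, best, _, location = cv2.minMaxLoc(scores)          # best score and its (x, y)
print(best, location)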



> Terzopoulos and Waters’ method traces linear facial features, estimates the corresponding parameters of a three-dimensional wireframe face model, and reproduces facial expression. A significant limitation of this system is that it requires facial features be highlighted with make-up, especially on the cheeks. Although active contour models (snakes) are used, the system is still passive; the facial structure is passively dragged by the tracked contour features without any active control on the basis of observations. Our method is an extension of Terzopoulos’s work, as our method does not look at any prescribed (and marked) regions on a face for extraction of muscle forces, and muscle actuations are computed actively from facial motion. > Another approach was introduced by Mase, who developed a method for tracking facial action units using optical flow. This approach is an extension of Mase and Pentland’s work on lip reading. The major limitation of this work is that no physical model is employed; the face motion estimation is formulated statically rather than within a dynamic optimal estimation framework. This approach serves as a motivating factor for our work as it shows the applicability of optical flow as a measure of facial motion. > Yacoob presents a system that extends Mase’s work. Yacoob’s system uses optical flow computation to determine motion in different regions of the face and then, on the basis of FACS, defines a rule-based system for recognition of facial expression from dynamic image sequences. > Another system worth mentioning is developed by Reinders. This method is similar in that it uses a priori knowledge of the facial model. It then uses this model and the shape representations of facial features in a feature extraction scheme. These features are then tracked to extract rigid and nonrigid motion. A muscle model is also used to manifest facial motion for expression. This method is mainly aimed at facial tracking and lacks the active dynamic control system that makes our analysis detailed and robust. It relies on FACS parameters for motion modeling and uses a neural network to “decide” which motion leads to what FACS unit.



Perhaps the most interesting application of a teleconferencing system is introduced by Haibo Li, Pertti Roivainen, and Robert Forchheimer, who describe an approach in which a control feedback loop between computer graphics and computer vision processes is used for a facial image coding system. This results in an efficient model-based coding system. The limitation of this work is the lack of both a dynamic model and of observations of motion over large predefined areas on the face.
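The bandwidth argument behind model-based coding is easy to see in a schematic sketch: per frame, the sender transmits a handful of estimated expression parameters instead of the pixels, and the receiver resynthesizes the face from them. Both functions below are stubs invented for illustration; this is not Li, Roivainen, and Forchheimer's coder.

# Sketch: model-based coding for a video-phone; only parameters cross the channel.
import numpy as np

def analyze(frame):
    """Vision side: estimate a few expression parameters from a frame (stub)."""
    return np.round(frame.mean(axis=0), 2)   # stands in for real model fitting

def synthesize(params):
    """Graphics side: re-render a face from the received parameters (stub)."""
    return "render face from " + str(params.tolist())

frame = np.random.default_rng(2).random((480, 8))  # stand-in for a video frame
params = analyze(frame)      # 8 numbers are sent, not 480 x 8 pixel values
print(synthesize(params))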




emotional intelligence [e.i]



> Self-awareness: recognizing internal feelings
> Managing emotions: finding ways to handle emotions appropriate to the situations
> Motivation: using self-control to channel emotions toward a goal
> Empathy: understanding the emotional perspective of other people
> Handling relationships: using personal information and information about others to handle social relationships and to develop interpersonal skills

Emotional intelligence refers to people’s ability to monitor their own and other people’s emotional states and to use this information to act wisely in relationships. Researchers are beginning to develop tests that can measure emotional intelligence. Scientists who study emotions generally believe that people with high emotional intelligence usually work well in cooperative situations and are good at motivating and managing others. People with low emotional intelligence often misinterpret emotional signals and have difficulty with relationships. Although emotional intelligence probably has an inherited component, many psychologists believe that people can be guided into making better use of the emotional intelligence that they possess. > In humans and other animals, we tend to call behavior emotional when we observe certain facial and vocal expressions like smiling or snarling, and when we see certain physiological changes such as hair standing on end or sweating. Since most computers do not yet possess faces or bodies, they cannot manifest this behavior. However, in recent years computer scientists have been developing a range of ‘animated agent faces’, programs that generate images of humanlike faces on the computer’s visual display unit. These images can be manipulated to form convincing emotional expressions.


[type specimen: the passage above reset in Blender Pro at increasing sizes, set solid: 8/8, 8.4/8.4, 10/10, 12/12, 14/14, 14.5/14.5, 17/17, 20/20, 24/24, 28/28, 29/29, 34/34, 40/40, 48/48, 58/58]



The font is based on a rational approach following logical principles, and built on grid systems, yet it contains an emotional, human element.

> on the left: a face can be FACS-coded: inner brow raise / outer brow raise / corrugator / upper lid raise / lower lid tighten / lip stretch / jaw drop



> blender pro book
> blender pro bold
> blender pro heavy

[type specimen: lowercase a to z and figures 0 to 9 shown in each weight, at 27/27]




+ Layout by: Tran Huynh
+ Illustrated by: Tran Huynh
+ Typeface: Blender Pro
+ Foundry: Gestalten, www.gestalten.com/fonts
+ Printed by: Copy Factory, Palo Alto, Ca.
+ Binding: Tran Huynh






The font is based on a rational approach following logical principles, and built on grid systems, yet it contains an emotional, human element.

