
Developing an Experimental Model for Architectural Responsiveness (via color) to Human Emotions measured through Facial Expressions

Rana Adnan Al Tanbour1
Department of Architecture, The University of Jordan, Amman, Jordan
Mobile: 00962/777479460
Email: r.tanbour@yahoo.com

Abdulsalam A. Alshboul2*
Department of Architecture, The University of Jordan, Amman, Jordan

1 Architect, research assistant, Department of Architecture, The University of Jordan.
2 * Corresponding author, Associate Professor, Department of Architecture, The University of Jordan; email: alshboul@ju.edu.jo



Abstract

Over the past few years there has been a continuous change in the demands placed on architectural design, and the position of the user in architecture has become increasingly prominent. Architecture is usually related to the first part of the emotion-creation process: it has always tried to manipulate the user's emotions, pushing him to change his feelings according to its geometry and characteristics, such as color. This research aims to develop a model of an environment that interacts with its occupant by reading his facial reactions through a live camera and analyzing his emotions with the FaceReader software created by Noldus, narrowing the possibilities to three basic emotions (anger, sadness, and happiness). The model responds to the detected emotion by changing the color of one wall with a ceiling projector, aiming, in theory, to evoke better emotional states, in our model calmness and happiness, and narrowing the color response to two basic colors (green and yellow) based on psychological research. The proposed model is simulated in the Unity 3D software, using a C# script for coding.

Keywords: interactive architecture; emotional interaction; emotional improvement; color and emotion; facial expression; emotion recognition; FACS.

Introduction

One of the architect's intentions when designing is to meet the user's needs and expectations by creating a suitable, comfortable, and adjustable environment for individuals. This can be done by expanding the design's potential to reach the user's needs. Most existing buildings do not truly consider all of their occupants' needs: the occupant's experience is mainly linked to personal senses and emotions, which are commonly neglected during the design stage (Lehman, M., 2010). Interactive architecture combines sensor technologies with architectural spaces to adjust the indoor environment to the user's needs; such techniques focus on users' emotions and their interaction with architectural spaces (Schueler, N., 2010).

Approaches to measuring human emotions are developing continuously, with varied tools: the human voice, body gestures, biological indicators, or facial expressions (Ekman, P., Sorenson, E. R. and Friesen, W. V., 1969). Facial expressions are among the most reliable signs for measuring emotions, and this is accepted across different cultures; expressions are normally tied to emotions and are more often involuntary (Schnall & Laird, 2008). Ekman's FACS (Facial Action Coding System) is a common standard for systematically categorizing the physical expression of emotions (Ekman, P. and Friesen, W., 1978), and several computer programs have been designed to apply the FACS by detecting and analyzing facial emotional expressions. FaceReader, launched by Noldus Information Technology, is an example of such software: it has been used in several studies in fields such as learning, usability, e-commerce, and human-computer interaction, and it can classify facial expressions from different inputs, either live through a webcam or offline from video files or images (Terzis et al., 2010).


Human senses interact dynamically to perceive surrounding environments. This multisensory experience starts mostly with vision, which affects our perception of space through volume, light and, most importantly, color (Pallasmaa, J., 2004). Color perception directly stimulates our emotions: the emotions a color evokes differ from one person to another according to cultural background, experiences, and expectations, but in natural situations color evokes almost the same emotions, based on biological reactions to it (Garber et al., 2003). Architects use color theories in their designs to create certain feelings in users by evoking emotions (Hope and Walch, 1990), but the physiological effects of a color stimulus do not remain constant (Meerwein, Rodeck and Mahnke, 2007).

This research attempts to combine the architectural, psychological, and technological disciplines into one model. Such a combination needs to be structured carefully because of the dynamic interaction between the three fields.

Structure

The model integrates the architectural, psychological, and technological fields. Figure 1 presents the three disciplines the model is attached to, with the intersection between them.

Figure 1. Location of the proposed model in the common area among Architecture, Technology, and Psychology (Source: Authors, 2013).

Method

The model is a room intended to interact with its occupant. It measures the occupant's emotions through facial expressions; this method was chosen because it is universally recognized and approved, and because it can be applied without direct contact with the user, unlike other methods such as vocal or muscle-tension recognition. The developed model reads facial expressions through a mounted live camera and analyzes emotions according to the FACS, using the FaceReader software developed by Noldus Information Technology.

The model focuses on the dimensional perspective of emotion, with three of Ekman's "Big Six" basic emotions, which sample the arousal-valence dimensional theory of emotion (valence acts as a reflector of behavior, whereas arousal presents the intensity of the emotional stimulus) (Lang et al., 1995, as cited by Partala, 2005). Dropping the (positive valence, low arousal) region because of its barely noticeable facial expressions, one emotion was selected from each remaining region to represent an overall sample of emotional states (a short code sketch of this selection follows the list):

- Happiness (positive valence, high arousal)
- Anger (negative valence, high arousal)
- Sadness (negative valence, low arousal)
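As referenced above, the three-emotion selection and its place in the valence-arousal space can be written down as a minimal C# sketch; the type and member names are illustrative assumptions, not part of the model's actual code:

```csharp
// Illustrative sketch: the three selected emotions and their
// regions in the valence-arousal space (names are hypothetical).
public enum BasicEmotion { Happiness, Anger, Sadness }

public static class EmotionDimensions
{
    // Returns (valence, arousal) as signs: +1 positive/high, -1 negative/low.
    public static (int valence, int arousal) Region(BasicEmotion e)
    {
        switch (e)
        {
            case BasicEmotion.Happiness: return (+1, +1); // positive valence, high arousal
            case BasicEmotion.Anger:     return (-1, +1); // negative valence, high arousal
            case BasicEmotion.Sadness:   return (-1, -1); // negative valence, low arousal
            default: throw new System.ArgumentOutOfRangeException(nameof(e));
        }
    }
}
```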

The interaction takes place by responding to the detected emotion with a change in the room's color (in the experiment, the color of one wall). The response aims to evoke better emotional states, in our model calmness and happiness: green and yellow are used when dealing with sadness and anger, and yellow maintains the happiness state. The color response is narrowed to two basic colors (green and yellow) based on psychological research mapping emotions to colors (Kaya, N., 2004). Table 1 summarizes the different color-emotion theories, assigning each color to an emotional state, with a conclusion of the common results.

Table 1. Summary of different color-emotion theories, assigning each color to an emotional state.

Theory / Color   | Red                                  | Blue                  | Green                   | Yellow
Goethe           | Faith                                | Calm                  | -                       | Joy
Claudia Cortes   | Anger/Love                           | Faith/Greed           | -                       | Fear/Happiness/Joy
Naz Kaya         | Anger/Love                           | Sadness               | Confident/Sadness/Calm  | Happy
Color Wheel Pro  | Emotionally intense/Aggressive/Anger | Trust                 | Confusion               | Calm/Pleasure
Shirly Willet    | Anger                                | Calm/Hopeful/Peaceful | Sadness                 | Fear/Happy
Yan Xue          | Stress                               | Greed                 | Greed                   | Joy/Happiness
Common result    | Anger                                | Sadness               | Calmness                | Happiness

The developed model is presented as a computer simulation in the Unity 3D software, with a C# script as the programming language assigning the right color change to each emotional state. Figure 2 shows an iconic/graphical layout of the proposed model system.
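Based on the common result row of Table 1, the emotion-to-color response used by the model can be sketched as a small C# mapping; the class name, the emotion labels, and the use of Unity's built-in colors are illustrative assumptions:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Minimal sketch of the color response derived from Table 1:
// anger and sadness are answered with green (calming), happiness
// with yellow (reinforcing); any other state leaves the wall white.
public static class ColorResponse
{
    static readonly Dictionary<string, Color> map = new Dictionary<string, Color>
    {
        { "angry", Color.green  },
        { "sad",   Color.green  },
        { "happy", Color.yellow },
    };

    public static Color ForEmotion(string emotion)
    {
        return map.TryGetValue(emotion, out var c) ? c : Color.white;
    }
}
```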



Room characteristics

The proposed experimental model is simulated as a regular square room of 5 m x 5 m (a typical room size), with the following characteristics, gathered into a configuration sketch after the list:

- White painted walls, with at least one empty wall.
- A normal lighting level of 300 lux for normal tasks.
- A wall reflectance factor of 0.9.
- One window with clean, normally transparent glass.
- One chair facing the empty wall for the user to sit on.
- A ceiling projector as the primary display device.
- A regular camera overlooking the whole room, oriented towards the user's seat for face detection.
- A computer with the FaceReader software installed, for processing, analysis, and deciding the proper color to display.
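As noted above, these parameters can be collected into one configuration object; a minimal C# sketch in which the type name, field names, and defaults are assumptions for illustration only:

```csharp
// Hypothetical configuration record for the experimental room;
// values mirror the characteristics listed above.
public class RoomConfig
{
    public float WidthMeters         = 5.0f;  // square plan, 5 m x 5 m
    public float DepthMeters         = 5.0f;
    public float LightingLux         = 300f;  // normal task lighting
    public float WallReflectance     = 0.9f;
    public bool  HasWindow           = true;  // clean, transparent glass
    public bool  HasCeilingProjector = true;  // primary display device
    public bool  HasCamera           = true;  // oriented towards the user's seat
}
```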

Figure 3 shows the proposed layout, plan, and a section of the experimental model, with the parameter settings needed to implement the experiment. It is important to mention that this indoor environment layout is not the only possible one; it is proposed for experimental purposes only, essentially to present the interactive relationship between user, emotions, and indoor environment.



Figure 3. (a) Plan layout of the space where the experiment was implemented. (b) Vertical section of the same space showing the different variable settings (Source: Authors, 2013).


Model Process Layout

The process layout consists of two main process flows:

- The first covers the interacting components, focusing on the model design and development and clarifying two main questions: Is it possible for the indoor environment to interact with its occupant's emotions? How will it be constructed?
- The second deals with the feedback process and its final objective: has this interaction improved the occupant's emotional state? This research deals with the first, interacting part; the answer to the feedback question is out of its scope.

Figure 4 shows the experimental model process layout.

Model Experiment

The first interaction part of this experiment consists of three main processes: emotion recognition through facial expressions, model simulation, and the final experimental model. For each process, the tools, mechanism, and results are explained below.

Emotion recognition through facial expressions

Tools

The tools, elements, and participants used in this part of the experiment, with a brief definition of each role, are:

- Random user: any user; for the experimental condition, an actor performing facial expressions of the three chosen emotions: anger, sadness, and happiness.
- Computer: for the software processing and analysis phase, and also for the simulation process later.
- FaceReader software: essential for emotion recognition through facial expressions; a software product from Noldus Information Technology, based on Ekman's FACS.
- Web camera: if the computer has no built-in camera, an external one is needed for the live recording used in the analysis phase.

Mechanism

For the interaction part, only one participant is needed to test the system. He is asked to sit in front of the computer facing the web camera while the FaceReader software is running. Figure 5 shows the whole configuration of the experiment: a computer with a stand camera and the FaceReader software detecting the participant's face before the analysis begins.

Figure 5. Experiment settings and components: web cam, computer, and installed FaceReader software (Source: Authors, 2013).

The participant was asked to act three different emotions: happiness, sadness, and anger, as shown in figure 6, where the participant acted the three preselected emotions with the FaceReader analysis.


Figure 6. Three emotions acted and analyzed by the FaceReader software, shown in the software interface (Source: Authors, 2013).

In the analysis phase it can be seen that the FaceReader software is a software application of Ekman's FACS. Figure 7 shows the participant's face with the FACS analysis applied to it.


Figure 7. Analysis visualization shown on screen for different emotional expressions during experiment processing.

Results

FaceReader presents its results in different chart forms: bar charts, pie charts, and line charts plotted against time. The most important output is a (.txt) file that can be opened as an Excel sheet for later use in the simulation phase. The Excel file contains a column for each emotion, with rows holding the intensity value per frame. The first nine rows describe the project time, location, and frame numbers; the time period between frames is 0.375 seconds. Figure 8 shows the FaceReader Excel sheet output for the participant.

Figure 8. FaceReader Excel sheet showing digital values for different emotional expressions during experiment implementation (Source: Authors, 2013).
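A minimal C# sketch of reading such an export, assuming, per the description above, nine header rows, one intensity column per emotion, and a 0.375-second frame interval; the tab-separated layout, file name, and column order are assumptions, not the definitive FaceReader format:

```csharp
using System.IO;
using System.Linq;

// Sketch: parse a FaceReader-style text export into per-frame
// emotion intensities. Assumes 9 header rows and tab-separated
// columns (frame, happy, sad, angry, ...): an illustrative layout.
static class FaceReaderLog
{
    public const float FrameInterval = 0.375f; // seconds between frames

    public static (float happy, float sad, float angry)[] Load(string path)
    {
        return File.ReadLines(path)
            .Skip(9)                                 // skip the descriptive header rows
            .Select(line => line.Split('\t'))
            .Where(cols => cols.Length >= 4)
            .Select(cols => (float.Parse(cols[1]),   // happy
                             float.Parse(cols[2]),   // sad
                             float.Parse(cols[3])))  // angry
            .ToArray();
    }
}
```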


Model Simulation

Tools

The tools used in this phase of the experiment are mainly computer software:

- SketchUp software: used to build the experiment room and export it to the Unity 3D software.
- C# script: a script in the C Sharp programming language, developed by Microsoft; it was used to write the code for the interaction part of the simulation inside Unity 3D.
- Unity 3D software: a game engine developed by Unity Technologies, normally used to develop video games for web plugins, desktop platforms, and mobile devices. It was used to import the SketchUp model and attach the C# script that creates the interaction.

Mechanism

A 3D model representing the experiment room described above was built using the SketchUp software; the model is basically a white room with blocks representing a man, a camera, a projector, and a computer, as shown in figure 9.

Figure 9. 3D model of the experiment room, showing the different experiment settings, built in SketchUp.

The second step was writing the code in the C Sharp language, based on the Excel sheet exported from the FaceReader software, assigning the right color response to each preselected emotion as shown in table 1. Figure 10 shows a screenshot of the code written in C Sharp.

Figure 10. Code written in C Sharp to assign the right color response to each emotional state.


Going into detail with the script: the model and the path for calling the Excel file, which holds the user's emotion analysis, were identified; the three emotions were defined to the program, and the other emotions were ignored. An intensity threshold of 0.1 was set for deciding the user's emotional state: if the value in the happiness column is larger than both the sadness and anger values, and larger than 0.1, the environment color changes to yellow; anger and sadness are answered with green; for the remaining emotions the environment stays white, as shown in Figure 11 (a reconstructed sketch of this logic follows the figure).

Figure 11. Excel sheet with the C Sharp script, highlighting the three preselected emotions.
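A runnable sketch of this decision logic, written against the assumptions stated above (0.1 threshold, yellow for happiness, green for anger and sadness, white otherwise); this is a reconstruction for illustration, not the authors' original script:

```csharp
using UnityEngine;

// Sketch of the per-frame decision rule described in the text.
// Inputs are the intensity values read from the FaceReader export.
public static class EmotionDecision
{
    public const float Threshold = 0.1f; // minimum intensity to accept an emotion

    public static Color Decide(float happy, float sad, float angry)
    {
        if (happy > sad && happy > angry && happy > Threshold)
            return Color.yellow;              // reinforce happiness
        if ((angry > Threshold && angry >= sad) ||
            (sad > Threshold && sad > angry))
            return Color.green;               // calm anger or sadness
        return Color.white;                   // all other states: no change
    }
}
```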



Finally, in the simulation phase, the Unity 3D software was used with the imported 3D SketchUp model, the FaceReader Excel sheet, and the C Sharp script in order to run the simulation. Figure 12 shows the Unity 3D interface with these three components loaded; a sketch of how such a replay could be driven inside Unity follows the figure.

Figure 12. Unity 3D software interface with the imported 3D SketchUp model, the FaceReader Excel sheet, and the C Sharp script.
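Inside Unity, the decision rule above would typically be driven by a component that steps through the exported frames at the 0.375-second interval and applies the decided color to the wall material. A minimal sketch, reusing the two sketches above; the component name, public fields, and log path are assumptions:

```csharp
using System.Collections;
using UnityEngine;

// Hypothetical replay component: walks through pre-recorded
// (happy, sad, angry) frames and recolors the target wall.
public class WallColorReplay : MonoBehaviour
{
    public Renderer wall;                            // the empty wall's renderer
    public string logPath = "Assets/facereader.txt"; // illustrative path

    IEnumerator Start()
    {
        var frames = FaceReaderLog.Load(logPath);    // parser sketched earlier
        foreach (var (happy, sad, angry) in frames)
        {
            wall.material.color = EmotionDecision.Decide(happy, sad, angry);
            yield return new WaitForSeconds(FaceReaderLog.FrameInterval);
        }
    }
}
```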



Final Results

All the previous tools and mechanisms, working together as a whole, deliver the interactive model and answer the research question about architecture and the user's emotions: can we switch the roles? The final results showed that it is possible: the user can finally have an indoor environment that senses his emotions and reacts to them by changing its color, which raises the user's experience of the surrounding space to another level.

The designed experimental model is divided into four experiences. The first condition is the starting phase without any emotional acting from the participant, which we called the Neutral phase, followed by the Anger, Happiness, and Sadness phases. Each phase shows two main steps:

- Recognition of the user's emotions through facial expressions, using the FaceReader software.
- The indoor environment color change, presented by the computer-simulated model.

Neutral Phase

The Neutral phase is the beginning phase for the participants, called the no-emotions phase. The indoor environment remains white, with no change. Figure 13 shows the Neutral-phase interaction between user emotions and indoor environment.

Figure 13. Neutral phase: no emotions detected.


Anger Phase

In the Anger phase the participant acted angry; the indoor environment changes to green in order, hypothetically, to calm the anger emotions down, based on the psychological research results and classifications in table 1. Figure 14 shows the Anger-phase interaction between user emotions and indoor environment.

Figure 14. Anger phase, detection of emotions of anger, and accordingly, color modification.



Happiness Phase

In the Happiness phase the participant acted happy; the indoor environment changes to yellow in order to emphasize his emotion, based on psychological research results and classifications. Figure 15 shows the Happiness-phase interaction between user emotions and indoor environment.

Figure 15. Interaction between user emotions and indoor environment: facial expressions showing feelings of happiness.



Sadness Phase

In the Sadness phase the participant acted sad; the indoor environment changes to green in order to modify the detected emotion into a better emotional state. Figure 16 shows the Sadness-phase interaction between user emotions and indoor environment.

Figure 16. Interaction between user emotions and indoor environment, a state of sadness.



Conclusions

This work aimed at developing an interactive model that interrelates the user's emotions with his environment, by creating a mechanism that detects personal emotions and reacts accordingly, based on a stored database of color classifications in relation to emotional states.

The first goal was to achieve interactivity in a different way than current interactive architecture models: by interacting with the user without his permission or awareness, contrary to other models which need the user's awareness and intervention to begin the interaction. This was successfully done in our proposed model, through the experimental model built in the Unity 3D software. The second goal was to improve the user's negative emotional states and emphasize his positive ones. This was achieved theoretically, through psychological research on the effect of color on emotions and its bodily indicators, even though one of the characteristics of interactive architecture is a focus on interactivity without questioning its utility.

To achieve the first goal, the experimental model needed to connect the user's emotions with the indoor environment through emotion recognition theories and applications and a color-response simulation. Starting from emotion theories in psychology, the most suitable way to measure emotional state for this research was the FACS, applied through the FaceReader software, recognizing facial expressions and detecting the user's emotions, narrowed to three of the six basic emotions representing negative and positive states. For anger, which represents the (negative valence, high arousal) region, the software's recognition was faster than for sadness, which represents the (negative valence, low arousal) region; it was also clearly noticeable that the positive emotion, happiness (positive valence, high arousal), was the fastest and most recognizable emotion, due to its obvious action codes.

In the experiment simulated in Unity 3D, the color response had to be delayed by 1 second; in a real experiment it must be delayed longer, to make sure the color change has enough time to produce an emotional improvement (a sketch of such a delayed, smoothed response follows these conclusions).

Finally, in conclusion, the interactive model succeeded in terms of interactivity with the user's emotions, but it still needs to be tested in terms of emotional improvement, to test the theories we relied on. This is probably the task of the psychologists who assigned specific emotions to specific colors; whether that assignment is convenient or not is a question whose answer is beyond our scope here.
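How such a longer, smoother response could look in the replay component is sketched below; the 1-second value comes from the simulation described above, while the fade duration, component name, and method name are illustrative assumptions:

```csharp
using System.Collections;
using UnityEngine;

// Sketch: delay the color response and fade into it, instead of
// switching instantly. The 1 s delay matches the simulation; a real
// experiment would use a longer, tunable value.
public class DelayedColorResponse : MonoBehaviour
{
    public Renderer wall;
    public float delaySeconds = 1.0f; // longer in a real experiment
    public float fadeSeconds  = 0.5f; // illustrative fade duration

    public IEnumerator ApplyColor(Color target)
    {
        yield return new WaitForSeconds(delaySeconds);
        Color start = wall.material.color;
        for (float t = 0f; t < fadeSeconds; t += Time.deltaTime)
        {
            wall.material.color = Color.Lerp(start, target, t / fadeSeconds);
            yield return null;
        }
        wall.material.color = target;
    }
}
```

A caller would trigger it with, for example, StartCoroutine(ApplyColor(Color.green)) once an emotional state has been decided.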

Possible Future Studies and Applications

This work is a good starting point for other research on interactivity with users' emotions in architecture as a real-time interaction. It could be extended to include the rest of the basic emotions, in order to obtain better results. The response to the user's emotions could also be extended to include more indoor environmental changes, expanding the field of responsiveness from vision only to all occupant senses, such as vision, hearing, touch, and the sensing of the surrounding space. The model could respond to anger, for example, by changing the whole indoor environmental theme: changing color to green (or a green gradient), dimming the artificial light to decrease glare, and expanding the room size by using flexible materials, all working together to turn negative emotions into a positive emotional state.

More importantly, the applications of this model could vary from regular spaces, such as bedrooms or living rooms for personal comfort, to improving the educational environment in classrooms, rehabilitation centers, and psychiatric therapy rooms. Studies on color, light, and music therapy are well known in psychology, and this model could offer real help with cases such as depression and anger management. Involving the user ever more deeply in architectural design, and integrating new technology such as real-time interaction with personal emotions, can be a starting point for many applications that attach to the user on many more levels than we are used to, giving the user a different experience every time.

Acknowledgments

Many thanks are dedicated to the University of Jordan for offering the technical facilities to implement the necessary experiments; without them, this work would not have been completed. Thanks go also to all who contributed to making this work possible, especially those who helped in the programming preparation. Many regards and thanks to Noldus Information Technology for all the support and facilities concerning the FaceReader software.

References

Ekman, P., and Friesen, W. (1978). Facial Action Coding System: A Technique for the Measurement of Facial Movement. Palo Alto: Consulting Psychologists Press.

Ekman, P., Sorenson, E. R., and Friesen, W. V. (1969). Pan-cultural elements in facial displays of emotions. Science, New Series, 164(3875), 86-88.

Garber, L. L., and Hyatt, E. M. (2003). Color as a tool for visual perception. In L. Scott and R. Batra (Eds.), Persuasive Imagery: A Consumer Response Perspective (pp. 313-336). Mahwah: Lawrence Erlbaum Associates.

Hope, A., and Walch, M. (1990). The Color Compendium. New York: Van Nostrand Reinhold.

Kaya, N. (2004). Relationship between color and emotion. Project Innovation.

Lehman, M. (2010). Bringing Architecture to the Next Level (electronic version).

Meerwein, G., Rodeck, B., and Mahnke, F. H. (2007). Color: Communication in Architectural Space. Boston: Birkhäuser.

Mollon, J. (1995). Seeing colour. In T. Lamb and J. Bourriau (Eds.), Colour: Art & Science (pp. 127-150). Cambridge: Cambridge University Press.

Pallasmaa, J. (2004). The Eyes of the Skin: Architecture and the Senses. Great Britain: Wiley-Academy.

Partala, T., and Surakka, V. (2003). Pupil size variation as an indication of affective processing. International Journal of Human-Computer Studies, 59(1-2), 185-198.

Scherer, K. R., Schorr, A., and Johnstone, T. (2001). Appraisal Processes in Emotion: Theory, Methods, Research. Oxford University Press.

Schnall, S., and Laird, J. (2008). Keep smiling: Enduring effects of facial expression and posture on emotional experience and memory. Clark University, USA.

Schueler, N. (2010). Interactive Architecture. Master Thesis, Delft University of Technology, Faculty of Architecture, Hyperbody, The Netherlands.

Terzis, V., Moridis, C. N., and Economides, A. A. (2010). Measuring instant emotions during a self-assessment test: The use of FaceReader. Proceedings of Measuring Behavior, The Netherlands.
