Task: Impossible - Frustration and the Interface Agent

Emily Sappington
Parsons the New School for Design
Eugene Lang the New School for Liberal Arts
66 5th Avenue, New York, NY 10011
emilysappington@gmail.com
860-371-7003

Dr. Marcel Kinsbourne
Department of Psychology
Eugene Lang the New School for Liberal Arts
80 5th Avenue, New York, NY 10011
KINSBOUM@newschool.edu

Dr. Scott G. Pobiner
School of Design Strategies
Parsons the New School for Design
66 5th Avenue, New York, NY 10011
PobinerS@newschool.edu

ABSTRACT
This study investigates the implementation of the classic psychological methodology of deception, executed in the form of an impossible computer task that measures users' persistence despite inevitable frustration. Unlike many other studies of interface agents, persistence, rather than performance, is the outcome of interest in this experiment. In this study a text-only computer interface agent proved best at sustaining persistence, measured both in time and in the number of clicks participants made while attempting to solve the given impossible task. Additional post-experiment surveys reveal a preference towards trusting a female interface agent and feelings of frustration towards male interface agents. Findings on users' responses to an impossible task, in both written and physical input manifestations, are discussed in terms of user experience.
Author Keywords
User experience, frustration, interface, agent, gender, impossible task, deception, trust

ACM Classification Keywords
H.1.2 [User/Machine Systems]: Human Factors, Psychology, Software Psychology; G.3 [Probability and Statistics]: Experimental Design; J.4 [Social and Behavioral Sciences]: Psychology
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. CHI 2011, May 7–12, 2011, Vancouver, BC, Canada. Copyright 2011 ACM 978-1-4503-0267-8/11/05...$5.00.

INTRODUCTION

People commonly have frustrating experiences while using computers. On many occasions patience and persistence are necessary to reduce frustration and resolve problems. Computer applications and websites may contain bugs or glitches that make completing a seemingly simple task more difficult. Such problems may not demand much technological understanding; rather, they may require users to have patience when attempting a given task, and persistence to attempt it several times.
As Picard and Klein found [2], computers have the potential to activate certain emotional states within users, and activating such states is exactly the purpose of this experiment. While research on the usefulness of interface agents often varies from case to case, studies have found that animated interface agents can be distracting and/or irritating [5]. The findings presented here offer additional insight into the role that interface agents may play in the amount of time and effort (measured in mouse clicks over time) that a participant puts into a frustrating task. Participants interacted with an impossible task built within the interface of a computer-generated gallery named "Grace Gallery". Data were collected by providing test subjects with a laptop computer, one of three interface agent types, two human-interface device (mouse) options, and one impossible task. This study measures the effect of the different interface agents' presentation styles on participants' frustration levels, as indicated by mouse clicks and time spent with the interface agent.
RESEARCH
Many studies on human-computer interaction present the effects of various types of interface agents on performance in terms of memory and other contexts. In an attempt to consolidate useful research, various sources and methodologies were considered when constructing this experiment. Prior research on human-computer interaction and frustration illustrates that users who have little experience using computers tend to have higher levels of anxiety when working with them [7]. Research on interface agents and gender [8] indicates that heterosexual users can show a cross-gender preference based on attraction. While the concept of users being more persuaded by their opposite gender is interesting, one must also consider findings that show an overall preference for female agents. In past research, more positive attributions were made towards female interface agents and more negative attributions towards male interface agents [10]. Similar findings can be noted in a study on how the "coolness" of an agent may shift perceptions of careers in engineering, in which female agents elicited more positive responses towards the field [12]. The "Grace Gallery" experiment asks how much the user liked the interface agent, how frustrating they found the agent, and how much they trusted the agent, in order to provide similar insight.
METHODS
In this experiment a computer-simulated gallery named "Grace Gallery" was developed to deceive participants into believing that they were entering a virtual game-like setting. The experimenter explains that the participant will interact with a computer-generated art gallery and states that the task they will be given is a "simple computer task". Interface agents (gallery assistants) greet users after a short animation of opening doors, which was included to increase believability and interest in the task. The interface agents state that the participant must help the agent hang a painting before he or she can be shown the rest of the gallery.

Designing the Agents
Three interface agents were used for this study: a male, a female, and a text agent that did not produce audio feedback (Figure 1). Aesthetic decisions for the interface agents in "Grace Gallery" were carefully considered, as extensive research has been done on different examples of human-like interface agents, be they video-generated or anthropomorphized graphic representations [1]. Bente et al. found no major experimental differences, but rather marked similarities, in how a video recording of a human and a graphic representation without audio were perceived by users. This finding allowed video-recorded agents to be used in "Grace Gallery" without concern that a differently animated agent would significantly alter the results. The two human agents test gender preference towards interface agents, as one is a man and the other a similarly aged and styled woman. Rather than use a "Wizard of Oz" technique [5] for triggering interface agent responses, a control for experimenter bias was implemented by designing an interface that would randomly respond with one of 15 rejecting, yet encouraging responses. This method promotes similar interactions between participants regardless of agent assignment, and thus was not dependent on the user's perceived emotional state [14]. The persuasiveness of human-like interface agents [12] was considered in the methods of this experiment in terms of relating the agent to the given participant population. Actors for the interface agents were chosen to be both relevant and appealing to participants, who would primarily be New School University students and other local 18-35 year olds. Two Caucasian New School University students, ages 22 and 24, were chosen to represent the male and female agents (Figure 1). Both agents were presented with brown hair and wearing dark, neutral, business-casual attire, as if working in an art gallery. The setting of the art gallery was fitting considering the New School's relationship with Parsons the New School for Design and the fact that the majority of participants would come from this population.
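In implementation terms, such a control amounts to a uniform random draw over a fixed response pool. The following Python sketch illustrates the idea under stated assumptions: the phrase list is abbreviated to the four responses quoted in this paper, and all names are illustrative rather than taken from the actual "Grace Gallery" code.

import random

# Abbreviated stand-ins for the 15 rejecting, yet encouraging responses
# (illustrative; only these four are quoted verbatim in this paper).
REJECTIONS = [
    "You're close, but that still isn't it, try again.",
    "Try moving it a bit over.",
    "I'm sorry, it isn't right yet.",
    "Nope, but you've been close, keep trying.",
]

def agent_response() -> str:
    # The response never depends on where the user clicked, so every
    # participant receives the same distribution of feedback and no live
    # "Wizard of Oz" operator can introduce experimenter bias.
    return random.choice(REJECTIONS)

Because the draw ignores the click entirely, the agent's apparent helpfulness is an illusion, which is what keeps the task impossible while holding conditions identical across participants.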
Figure 1. Interface agents: male, female and text-only.
Designing the Task
An impossible task was designed to engage participants in the way a game might, and also to force them to foster a dependent relationship with the interface agent's feedback. The purpose of "Grace Gallery's" impossible task is to ensure that all participants receive the same feedback and may spend an unlimited amount of time on the interface. Participants were not told beforehand exactly how long the task would take, allowing individuals to persist with the impossible task for as long as they wished. The participant enters a relationship with the interface agent in which he or she trusts that the agent will only respond positively once the participant has clicked on the exact center of the wall. Variations in participants' locus of control may determine how long they persist: those with an internal locus of control tend to assume that their actions will result in a positive outcome, while those with an external locus of control may forfeit earlier, as they tend to see less of a correlation between their actions and a positive outcome. The task itself is entirely impossible, as the interface was designed so that users would receive randomly generated, vague yet encouraging phrases from the interface agent. The agent appears to provide useful feedback by saying: "You're close, but that still isn't it, try again", and "Try moving it a bit over". Participants were timed as well as monitored by a video camera. Participants' mouse clicks were also recorded as X and Y coordinates (in pixels), along with how many times and how quickly each participant clicked. A post-experiment survey was created to inquire about participants' feelings towards their agents in terms of trust, frustration, and how much they liked the agent both at the beginning and by the end of the experiment.
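A minimal sketch of the click recording described above might look like the following; the coordinates, timestamps, and derived measures match what the paper reports collecting, but the class and method names are invented for illustration.

import time
from dataclasses import dataclass, field

@dataclass
class ClickLog:
    # Each event is a tuple: (x pixels, y pixels, monotonic timestamp).
    events: list = field(default_factory=list)

    def record(self, x: int, y: int) -> None:
        self.events.append((x, y, time.monotonic()))

    def total_clicks(self) -> int:
        return len(self.events)

    def intervals(self) -> list:
        # Seconds between consecutive clicks: how quickly users clicked.
        times = [t for _, _, t in self.events]
        return [b - a for a, b in zip(times, times[1:])]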
HYPOTHESES

Due to the varying nature of research on interface agents, two hypotheses were considered for the purposes of this study. Contradictory findings in prior work each seemed plausible, so both potential expectations are detailed. Under both hypotheses, the number of mouse clicks is expected to correlate with time spent on the interface as participants persist with the task.

Hypothesis 1
Animated interface agents, such as the video agents used in this experiment, often serve to entice and encourage users to interact with a particular program. Some previous research [15] argues that the presence of such agents in educational software motivates students. To increase engagement and incentive, researchers [14] believe that agents should display and understand users' emotions. In the "Grace Gallery" interface careful attention was paid to the actors' sympathetic intonation and apologetic tone. Research has found that virtual speakers can be equally as persuasive as real-life human speakers [8], and thus the male and female interface agents in "Grace Gallery" should elicit similar feelings of trust and
believability when encouraging users to persist. The following examples of responses were constructed to motivate participants by presenting a sympathetic agent: "I'm sorry, it isn't right yet." and "Nope, but you've been close, keep trying." These phrases should support the hypothesis that implementing realistic human agents will increase users' desire to successfully complete the task, and thus increase time spent on the impossible task itself. It is expected that the human voice, presenting the same script as the text-only interface agent, will prove comforting to users who might feel frustrated and question continuing with the task. In this light, participants assigned to the male and female agent conditions should persist longer and with more mouse clicks than those assigned to the text-only agent. A study on agents delivering emotional news [10] reported that bad news delivered by an animated agent evokes less anger than the same news delivered textually by a computerized agent. Likewise these authors report that agents in general create a more positive mood within users, which in this context should allow them to persist with a frustrating and impossible task. Krämer et al. write that a human-like agent's minimal cues, such as facial expressions and slight body movement, make it appear similar to fellow humans [10]. These realistic attributes cause users to exhibit human-to-human communication behaviors such as cooperation, which may then foster trust in the agent.

Believability and Trust
It is believed that the realistic representation of the agents in this particular interface will make the agents, and perhaps the interface itself, seem more intelligent than the simple geometric form [9] that the text-only agent uses to present lines of text. This would support the hypothesis that human-like agents will be more trusted. Users' interactions with an interface agent are important for creating a productive relationship; in this light, users' trust in the agent, and belief in it, are important. In this interface there are no rulers or guides by which participants can be sure that they have the painting in the exact center of the gallery wall, and so they must rely on and trust the audio or written feedback they receive from the interface agent. Following the experiment, the aforementioned survey records participants' assessment of whether they trusted the agent's feedback during the experiment. Based on this research, participants should respond that they believed and trusted the agents in the male and female conditions more than in the computer condition, because some research on believability and agent type [15] found that agents who are more expressive in design are rated on a numeric scale as being more believable. This, however, depends on the design, function and purpose of the agents, as research on silent agents [16] found that students preferred a text-based interface agent in a career-counseling system over an agent with an expressive face.
Hypothesis 2
In the previously mentioned study [16] users responded that with the computer agent they felt more relaxed and self-assured than when the same responses were presented by a 3D face. In pilot tests of the "Grace Gallery" interface some respondents noted that they found the verbal feedback from the interface agents to be annoying and abrasive. Similar responses of irritation with a speaking human-like interface agent have previously been reported [5], so equivalent findings of annoyance would not be altogether surprising. In terms of persistence, it can then be expected that participants in this experiment might follow this trend and forfeit the painting-hanging task out of frustration with the interface agent. With this insight it can be predicted that users assigned to the text-only interface agent condition may persist with the task longer and click more.

PROCEDURE
Forty participants (19 men and 21 women) were tested with "Grace Gallery" at the New School University within the Psychology Department from June to August 2010. The Psychology experimentation rooms were chosen because they are equipped with video cameras that can monitor participants from various angles. In this context participants were monitored from the front, with a camera aimed towards their face to capture facial expressions and spoken words, and from behind, to capture physical movements and human-interface device changes. The rooms contained a laptop on which the participants would complete the task and two human-interface device options: one track-pad and one classic mouse. Participants arrived at the New School University and first signed consent forms covering the experiment itself and acknowledging that they would be video-recorded. Upon completing consent forms, participants completed a PAI personality survey to test for outliers based on personality type. The personality test served as a control for participants whose personality conditions or disorders might have greatly skewed the data. Upon completing the personality survey participants were led to the experiment room and were given the following instructions: "You will now be given a simple computer task of hanging a painting in a virtual gallery. You may use whichever mouse you prefer. You may forfeit at any time. Listen to the computer's instructions entirely and come let me know when you have finished." Participants were then left in the room to listen to or read instructions provided by the interface agent and then attempt to properly hang the painting on the "Grace Gallery" wall. Upon exiting the experiment room, after participants had presumably given up or forfeited the experiment, each participant was debriefed and given a post-experiment survey about their experience. A 1-5 scale was borrowed from similar experiments [4] which tested participants' feelings towards the interface agent after the experiment. In the debriefing the experimenter explained the purpose behind the
deception (in saying that the task was simple), underscoring that all participants were unable to successfully hang the painting. The larger research goals of this experiment were also explained to each participant.

LIMITATIONS
The most significant limitation of this study is the relatively small size of the participant group (40 people). Outliers may have skewed the data, and a much clearer picture of the role of interface agents in an impossible task could be achieved with a larger group. Similarly, several participants left the room smiling because, as one participant noted, they had "figured it out": that the task itself is impossible. The words "Psychology Department" printed on the walls where the experiment took place most likely reminded some participants of deception experiments they had prior knowledge of. Had this experiment been run in a setting other than the Psychology Department at the New School, participants may not have been as suspicious of deception. A relatively higher number of clicks with the track-pad human-interface device can potentially be explained by the relatively young age of many of the participants, who may be accustomed to laptops.
Figure 2. A comparison of number of clicks and time spent on the interface, broken down by agent type.

RESULTS
The mean number of clicks that participants made within the interface was 80.31, with a standard deviation of 154.926, showing a great deal of variance between participants. On average, participants spent 4:17 minutes interacting with the interface before forfeiting and leaving the experiment room. Participant engagement ranged from a minimum of 1:14 minutes to a maximum of 18:26 minutes. A significant correlation was found between total mouse clicks and time spent on the interface (Figure 3). On a scale of 1-5, with 1 being not very frustrating and 5 being very frustrating, participants gave the interface agents a mean score of 2.75. Participants
responded on a similar scale that they liked the interface agents 2.97 at the beginning of the study, but this dropped across all agent types and participant genders to 1.92 by the end of the experiment. After the experiment, participants responded on average that during the task they had trusted the interface agents 2.15 on a scale of 1-5. The majority of the participants in this study were right-handed and used the classic computer mouse for the experiment.
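For illustration, the summary statistics reported above (mean and standard deviation of clicks, plus the clicks-versus-time correlation) could be computed from per-participant totals as in this sketch; the function names are hypothetical and no study data is embedded.

from statistics import mean, stdev

def pearson_r(xs, ys):
    # Pearson correlation between two equal-length sequences.
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def summarize(clicks, seconds):
    # clicks and seconds are per-participant totals, index-aligned.
    return {
        "mean_clicks": mean(clicks),
        "sd_clicks": stdev(clicks),
        "clicks_time_r": pearson_r(clicks, seconds),
    }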
Trusting the Interface
Many participants left the experiment room feeling as if they had done something wrong because they were not able to properly hang the painting and thus had to click the "forfeit" button. Several stated to the experimenter: "I couldn't get it" or "I couldn't figure it out". This feeling of self-blame could have been predicted given the simplicity of the task as presented by the experimenter [17]. Not wanting to admit [16] that they were finding the task impossible, participants began to find other methods of testing their ability to complete it. While all of the participants began the experiment leaning forward onto the table and sitting upright in their chairs, video analysis revealed that many participants shifted their posture during the experiment. Several leaned backwards and tilted their heads so as to change their perspective on the screen in an attempt to find the center. Eight of the thirty-six participants whose interactions with the computer were accurately recorded began to use their hands or another available object, such as a cell phone or pen, to measure the screen themselves. This observation shows that while participants may not have trusted their eyes and the screen alone, they did believe that a physical object could better help them find the center of the screen. This physical test shows that participants assumed they would soon complete the task, and when it too failed, they persisted somewhat frustrated before giving up. Commonly, one of the last attempts participants made was to click extremely quickly in approximately the same spot on the screen, certain that they had found the middle but frustrated that the computer was not recognizing this. For the purposes of this study this last-ditch attempt was noted as the trial-by-rapid-fire method of entering information (Figure 6). Interestingly, just as respondents wrote that they trusted the female interface agent more (Figure 4), participants who attempted to measure the screen with a physical object responded that they trusted the interface agent more during the experiment than those who did not attempt to measure the screen (Figure 5).
Figure 4. Mean scores from a 1-5 scale on how much participants trusted the interface agent.
Figure 3. Correlation between total mouse clicks and time spent on the interface.

Mouse Choice
This study offers some surprising insight into users and their physical efforts when encountering a frustrating task. Participants were given a choice of human-interface device with which to enter their clicks to hang the painting: a classic mouse (connected to the computer through Bluetooth) or a track-pad built into the laptop base. Participants who used or switched to the track-pad clicked on average far more than users of the standard mouse. While the range of participant clicks varied greatly (some clicked 4 times, others over 800 times), overall the track-pad provided ease of entry for users who wished to click repeatedly in the same spot on the screen in a trial-by-rapid-fire approach (Figure 6).
Figure 5. Mouse clicks and time spent on the interface, by human-interface device choice.

DISCUSSION
Hypothesis 2 proved true in this experiment, as users persisted over a longer period of time and with more mouse clicks when they engaged with a text-only computer interface agent (Figure 2). However, aspects of Hypothesis 1 also fit the results, as the female interface agent received higher marks on the 1-5 scale of how much participants trusted the interface agent (Figure 4). Given the anxiety-inducing agents discussed in other work [4], these findings indicate that users were less affected and slowed down by the text-only presentation of a computerized agent than by the human-like agents, with which participants spent less time on the task. Participants assigned to the text-only interface agent condition spent by far the most time on the interface and clicked the mouse the most (Figure 2). These results show a preference towards text-only interface agents without audio feedback for persistence with an impossible and frustrating task. Across all conditions, participants liked each interface agent less after having attempted and failed to hang the painting in "Grace Gallery" (Figure 7). In the post-experiment survey, male agents were marked as more frustrating by users of both genders. Both men and women responded that the computer agent was the least frustrating, rating the female agents as slightly more frustrating. Female participants' before and after perceptions of the interface agent differed more than male participants' did (Figure 7). Just as research has shown that human-like animated agents can be persuasive [6], they can also elicit human emotions such as trust, as was found in the case of the female interface agent in support of Hypothesis 1.
One unsurprising finding is that participants liked the interface agents significantly less after attempting and failing to hang the painting. This underscores the belief that users hold less favorable views of interface agents when the agent's relationship with the user consists only of correcting and denying. Despite the humanized voice in both the male and female agents' phrases of encouragement, users still reported liking their human-like and text-only agents less on a 1-5 scale after the experiment (Figure 7). Especially noticeable was the two-point drop in female participants' scores of the female agent from before to after the experiment. This significant drop in rating provides insight into gender differentiations in response to gendered interface agents. In the comments section of the post-experiment survey, more participants wrote the words "annoying", "frustrating" and "patronizing" for the male agent than for either of the other conditions. While the word "annoying" appeared in responses for all interface agents, it appeared the most in responses to the male agent. The female interface agent received comments such as "I liked her at first…", "She was pretty" and "I wanted to visit the rest of the gallery", perhaps offering insight as to how participants were incentivized to persist with the female agent slightly more than with the male. One participant summed up their experience with the frustrating and impossible task by writing: "It went from being a 'person' to an annoying computer prompt." Responses such as this participant's (in response to the female interface agent) fit with the finding that those assigned the text-only condition used fewer emotional words, one simply responding that it was "weird". This feedback and other comments are useful to consider in analyses of user experience.
Figure 6. An example of the trial-by-rapid-fire attempt: one participant's mouse clicks in "Grace Gallery".
Figure 8. Mean scores on a 1-5 scale of how frustrating participants rated each of the interface agents.
Trust
Within gender pairings and across all three interface agents, both male and female participants responded that they trusted the female agent more (Figure 4). Computer text-only agents were trusted less than the female agents, and male agents were, surprisingly, trusted slightly less than the text-only agent (Figure 4). Studies have shown that monitoring agents improve trust in website content [4]; the same held true for participants in this experiment.
Figure 7. Mean scores from a 1-5 scale on how much participants liked the agent before and after the task.
IMPLICATIONS FOR DESIGN
This research has potential for several practical applications in terms of interface agent usage and overall knowledge about user experience. That participants persisted longest with the computer agent is unsurprising given insight gathered during early pilot tests of the interface. Most likely because this condition lacked audio feedback, it was less annoying to users, and thus persisting with the inevitably impossible task was a less bothersome experience. It should be noted that with a mean time spent on the interface of only 4 minutes and 17 seconds, users can become annoyed fairly quickly when their interactions with agents are frustrating. Attempts to elicit patience within users through auditory feedback and animated, human-like interface agents may backfire and lead to irritated users. Though users may trust a text-only computer agent less than a female agent, as it will not be ascribed the attributes of a human being, users may find the text agent less aggravating. A consideration for user experience, especially in this experiment's participant age group, should be noted for how one encounters a problem on the computer. Many participants looked beyond the computer itself as a means of problem-solving, enlisting the measuring assistance of their hands, pens and cell phones. This finding supports the notion that users trust concrete, tangible measurement and at times defer to it as a means of dealing with unclear feedback from an interface. Several participants took the trial-by-rapid-fire approach (Figure 6), repeatedly clicking in a particular area of the screen and varying their mouse position only slightly. Designers should prepare for a population ages 18-35 to react to problems on the computer with excessive clicking, especially when these users are on a familiar track-pad, as was the case in this experiment.
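Designers who want to detect this behavior at runtime could flag it directly from a click log of (x, y, timestamp) events like the one sketched earlier; the heuristic below is a hypothetical example, and its thresholds are illustrative guesses rather than values derived from this study's data.

def is_rapid_fire(events, max_interval=0.3, max_distance=15, min_run=10):
    # Flags a run of at least min_run clicks in which consecutive clicks
    # land within max_interval seconds and max_distance pixels of each
    # other. Thresholds are illustrative, not empirically derived.
    run = 1
    for (x0, y0, t0), (x1, y1, t1) in zip(events, events[1:]):
        close_in_time = (t1 - t0) <= max_interval
        close_in_space = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 <= max_distance
        run = run + 1 if (close_in_time and close_in_space) else 1
        if run >= min_run:
            return True
    return False

An interface that detects such a run could, for example, slow its feedback or offer alternative help rather than repeating the same rejection.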
CONCLUSION

These findings deviate in part from prior research on the role of interface agents during a given task.
The results expand upon the notion that female interface agents are viewed more favorably and upon how human-like agents are trusted more. Future studies on user age groups and human-interface device preference and performance would add to these conclusions on the use of track-pads. Similarly, future research on the role of gender differentiations in response to interface agents would prove useful. The findings of this experiment support Hypothesis 2 in that a text-only agent interface is ideal for persistence despite an impossible task. A correlation between time spent on the impossible task and mouse clicks was observed, as was a general preference towards trusting a female interface agent.

ACKNOWLEDGMENTS
Special thanks to the New School Psychology Department, the New School University Institutional Review Board, Eli Bock, Bridget O'Hara Hale, Joey Labadia, Ingrid Wu and all who participated in this experiment.

REFERENCES
1. Bente, G., Krämer, N.C., and Petersen, A. Computer Animated Movement and Person Perception: Methodological Advances in Nonverbal Behavior Research. Journal of Nonverbal Behavior 25, 3 (2001), 151-166.
2. Picard, R.W. and Klein, J. Computers that Recognize and Respond to User Emotion: Theoretical and Practical Implications. Interacting with Computers 14 (2002), 141-169.
3. Rickenberg, R. and Reeves, B. The Effects of Characters on Anxiety, Task Performance, and Evaluations of User Interfaces. In Proc. CHI 2000, ACM Press (2000), 1-6.
4. Klemmer, S.R., Thomsen, M., Phelps-Goodman, E., Lee, R., and Landay, J.A. Where do web sites come from? Capturing and interacting with design history. In Proc. CHI 2002, ACM Press (2002), 1-8.
5. Jaksic, N., Branco, P., Stephenson, P., and Encarnação, M.L. The Effectiveness of Social Agents in Reducing User Frustration. In CHI 2006 Work-in-Progress, ACM Press (2006), 917-922.
6. Schwartz, M. Guidelines for Bias-Free Writing. Indiana University Press, Bloomington, IN, USA, 1995.
7. Havelka, D., Beasley, F., and Broome, T. A Study of Computer Anxiety Among Business Students. Mid-American Journal of Business 19, 1 (Spring 2004), 63-71.
8. Zanbaka, C., Goolkasian, P., and Hodges, L.F. Can a Virtual Cat Persuade You? The Role of Gender and Realism in Speaker Persuasiveness. In Proc. CHI 2006, ACM Press (2006), 1153-1162.
9. King, W.J. and Ohya, J. The Representation of Agents: Anthropomorphism, Agency and Intelligence. In CHI 1996 Short Papers, ACM Press (1996), 289-290.
10. Krämer, N.C., Bente, G., Eschenburg, F., and Troitzsch, H. Embodied Conversational Agents. Social Psychology 40, 1 (2009), 26-36.
11. Macaulay, M. The speed of mouse-click as a measure of anxiety during human-computer interaction. Behaviour & Information Technology 23, 6 (2004), 427-433.
12. Baylor, A.L., Rosenberg-Kima, R.B., and Plant, E.A. Interface Agents as Social Models: The Impact of Appearance on Females' Attitude Toward Engineering. In CHI 2006 Work-in-Progress, ACM Press (2006), 526-531.
13. Wilson, K. Evaluating Images of Virtual Agents. In CHI 2002 Student Posters, ACM Press (2002), 856-857.
14. Elliott, C., Rickel, J., and Lester, J. Lifelike pedagogical agents and affective computing: An exploratory synthesis. In Artificial Intelligence Today, LNCS 1600, Springer-Verlag (1999), 195-212.
15. Lester, J.C., Converse, S.A., Kahler, S.E., Barlow, S.T., Stoner, B.A., and Bhogal, R.S. The Persona Effect: Affective Impact of Animated Pedagogical Agents. In Proc. CHI 1997, ACM Press (1997).
16. Sproull, L., Subramani, M., Kiesler, S., Walker, J.H., and Waters, K. When the Interface Is a Face. Human-Computer Interaction 11, 2 (1996), 97-124.
17. Norman, D.A. The Design of Everyday Things. MIT Press (2002), Chap. 2.