Frustration and the Interface Agent

Emily Sappington
New School University
123 Author Ave., Authortown, PA 54321 USA
emilysappington@gmail.com

Dr. Marcel Kinsbourne
New School University
66 West 12th Street, New York, New York 10011
KINSBOUM@newschool.edu

Dr. Scott Pobiner
Parsons School for Design
66 5th Avenue, New York, New York 10011
PobinerS@newschool.edu

Abstract
This study investigates the implementation of the classic psychological methodology of deception, executed in the form of an impossible computer task that measures users’ persistence despite inevitable frustration. Unlike many other studies of interface agents, persistence, rather than performance, is the primary outcome of interest in this experiment. In this study, a text-only computer interface agent proved ideal for persistence, both in time spent and in the number of clicks participants made while attempting to solve the given impossible task. Post-experiment surveys additionally reveal a preference toward trusting a female interface agent and feelings of frustration toward male interface agents. Responses to the impossible task, in both written and physical input manifestations, are discussed in terms of user experience.
Keywords User experience, frustration, interface, agent, gender, impossible task, deception, trust
ACM Classification Keywords
H.1.2 [User/Machine Systems]: Human Factors, Psychology; J.4 [Social and Behavioral Sciences]: Psychology; H.1.2 [User/Machine Systems]: Software Psychology

Copyright is held by the author/owner(s). CHI 2011, May 7–12, 2011, Vancouver, BC, Canada. ACM 978-1-4503-0268-5/11/05.
General Terms Frustration, persistence, interface agent, impossible task, trust
Introduction
Testing the rate at which users become frustrated with a given interface when presented with a task provides insight into how designers might design an interface that delays frustration and allows users to remain patient while attempting to resolve a problem. The findings presented in this study offer additional insight into the role that interface agents may play in the amount of time and effort (measured in mouse clicks over time) that a participant puts into a frustrating task. As Picard and Klein found [1], computers have the potential to activate certain emotional states within users, and activating such states is exactly the purpose of this experiment.
Research
Many studies of human-computer interaction present the effects of various types of interface agents on performance in terms of memory and other contexts. While research on the usefulness of interface agents often varies from case to case, studies have found that animated interface agents can be distracting and/or irritating [3]. Research on interface agents and gender indicates [5] that cross-gender preference based on attraction can occur for heterosexual users. In past research, more positive attributions were made toward female interface agents and more negative attributions toward male interface agents [7]. Similar findings can be noted in a study on how the “coolness” of an agent may shift perceptions of careers in engineering, in which female agents elicited more positive responses toward the field [8]. The potential for design in software-agent research can be seen in prior work showing that a realistic representation of the agents in a particular interface makes the agents, and perhaps the interface itself, seem more intelligent than a simple geometric form [6]. A user already considering the realistic agent in these terms could then ascribe other human-like traits, such as trustworthiness, to an agent that appears “intelligent”. Some research on believability and agent type [10] found that agents with more expressive designs were rated on a numeric scale as more believable. This, however, depends on the design, function, and purpose of the agents: research on silent agents [11] found that students preferred a text-based interface agent in a career-counseling system over an agent with an expressive face. Because a more animated or anthropomorphized agent does not always lead to feelings of trust, believability, or even less frustration, more research must be done. Responses of irritation with a speaking human-like interface agent have previously been reported [3], so equivalent findings of annoyance would not be altogether surprising in future studies.
Methods
"Grace Gallery" was developed to deceive participants into believing that they were entering a virtual, game-like setting. Participants were randomly assigned to one of three interface agent conditions as part of the between-groups methodology. The experimenter explains that the participant will interact with a computer-generated art gallery and states that the task they will be given is a simple computer task. Interface agents (gallery assistants) greet users after a short animation of opening doors, which was included to
increase believability and interest in the task. The interface agents state that the participant must help the agent hang a painting by clicking the center of a wall before he or she can be shown the rest of the gallery. The task itself is entirely impossible, and the interface is designed so that users receive randomly generated, vague yet encouraging phrases from the interface agent. The agent appears to provide useful feedback with phrases such as: “You’re close, but that still isn’t it, try again”. Participants were timed as well as monitored by video cameras. Participants’ mouse clicks were recorded in terms of X and Y coordinates, as well as how many times and how quickly each participant clicked the mouse. A post-experiment survey inquired about participants’ feelings about their agent in terms of trust, frustration, and how much they liked the agent both at the beginning and by the end of the experiment. Participants were not told beforehand exactly how long the task would take, allowing individuals to persist with the impossible task for as long as they wished. Thus, during the experiment the participant enters a relationship with the interface agent in which he or she trusts that the agent will respond positively only once the participant has clicked on the exact center of the wall.
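The click-recording described above (X/Y position plus timing for every click) can be sketched roughly as follows. This is an illustrative reconstruction, not the study's actual instrumentation code; the `ClickLogger` class and its field names are hypothetical.

```python
import time

class ClickLogger:
    """Hypothetical sketch of the study's click instrumentation:
    records each mouse click's screen coordinates and elapsed time."""

    def __init__(self):
        self.clicks = []  # list of (x, y, elapsed_seconds) tuples
        self.start = time.monotonic()

    def record(self, x, y):
        # Store the click position plus time elapsed since the session began,
        # which supports both "how many times" and "how quickly" measures.
        self.clicks.append((x, y, time.monotonic() - self.start))

    def total_clicks(self):
        return len(self.clicks)

logger = ClickLogger()
logger.record(512, 384)
logger.record(510, 386)
print(logger.total_clicks())  # 2
```

Per-participant summaries such as mean click counts and session duration would then be computed from these logs after the session ends.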
The Agents
Three interface agents were used for this study: a male, a female, and a text agent that did not produce audio feedback. Two of the interface agents test gender preference, as one is a man and the other is a similarly aged and styled woman. Rather than use a “Wizard of Oz” technique [3] for triggering interface agent responses, a control for experimenter bias was implemented by designing an interface that would randomly respond with one of 15 rejecting, yet encouraging, responses. This method promotes similar interactions between participants regardless of agent assignment, and thus was not dependent on the user's perceived emotional state [9]. Actors for the interface agents were chosen to be both relevant and appealing to participants, who would primarily be New School University students and other local 18-35 year olds. Two Caucasian New School University students, ages 22 and 24, were chosen to represent the male and female agents. Both were presented with brown hair and wearing dark, neutral business-casual attire, as if working in an art gallery.
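The randomized-response control described above might look like the following minimal sketch. The phrases listed are illustrative stand-ins, not the study's actual 15 responses, and `agent_feedback` is a hypothetical name.

```python
import random

# Illustrative stand-ins for the 15 rejecting-yet-encouraging phrases;
# the study's actual wording is not reproduced here.
RESPONSES = [
    "You're close, but that still isn't it, try again.",
    "Almost! Give it another try.",
    "Not quite the center, keep going.",
]

def agent_feedback(rng=random):
    # Every click receives a randomly chosen response, independent of where
    # the user actually clicked, so feedback is comparable across participants
    # and agent conditions, removing the experimenter from the loop.
    return rng.choice(RESPONSES)

print(agent_feedback() in RESPONSES)  # True
```

Because the response is drawn at random rather than chosen by a hidden operator, no "Wizard of Oz" judgment call can leak experimenter bias into any condition.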
Procedure
Forty participants (19 men and 21 women) were tested with “Grace Gallery” at New School University, within the Psychology Department, from June to August 2010. In this context, participants were monitored by two video cameras. Participants were given the following instructions: "You will now be given a simple computer task of hanging a painting in a virtual gallery. You may use whichever mouse you prefer. You may forfeit at any time. Listen to the computer's instructions entirely and come let me know when you have finished." Participants were then left in the room to listen to or read instructions provided by the interface agent and then attempt to properly hang the painting on the "Grace Gallery" wall. Upon exiting the experiment room, after participants had presumably given up or forfeited the experiment, each participant was debriefed and given a post-experiment survey about their experience. A 1-5 scale was borrowed
from similar experiments [2], which tested participants’ feelings toward the interface agent after the experiment.
figure 2: Mean time spent on the interface in minutes.
figure 1: Mean number of participant clicks shows a far greater number of mouse clicks when participants were assigned to the text-only computer interface agent (labeled “computer”).
Results
The mean number of clicks that participants made within the interface was 80.31, with a standard deviation of 154.926, showing a great deal of variance between participants. On average, participants spent 4:17 minutes interacting with the interface before forfeiting and leaving the experiment room. Participant engagement ranged from 1:14 minutes to 18:26 minutes. On a scale of 1-5, one being not very frustrating and five being very frustrating, participant responses showed a trend toward finding the male interface agent more frustrating to work with (figure 4). Participants also responded that they trusted the female interface agent more than the male agent.
Participants responded that they trusted the female interface agent more and the male agent less (figure 5), perhaps explaining the gap in time spent on either assigned interface. Possibly not wanting to prematurely admit [11] that they were finding the task impossible, participants began to find other methods of testing their ability to complete the task. Eight of the participants began to use their hands or another available device, such as a cell phone or pen, to measure the screen themselves after clicking alone proved insufficient. Commonly, one of the last attempts participants made was to click extremely quickly in approximately the same spot on the screen, certain that they had found the middle but frustrated that the computer was not recognizing this. For the purposes of this study, this last-ditch attempt was noted as the trial-by-rapid-fire method of entering information (figure 3).
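One simple way to flag the trial-by-rapid-fire pattern in a log of (x, y, elapsed-seconds) click tuples is to look for runs of clicks that are close together both in time and in screen position. The function below is a sketch of that idea; the threshold values are illustrative assumptions, not values taken from the study.

```python
def is_rapid_fire(clicks, max_gap=0.25, max_dist=20):
    """Return True if every consecutive pair of clicks is both fast
    (inter-click gap <= max_gap seconds) and spatially tight (within
    max_dist pixels on each axis). Thresholds are illustrative only.
    clicks: list of (x, y, elapsed_seconds) tuples in time order."""
    if len(clicks) < 2:
        return False  # a single click cannot form a burst
    for (x1, y1, t1), (x2, y2, t2) in zip(clicks, clicks[1:]):
        if t2 - t1 > max_gap:
            return False  # too slow to count as rapid fire
        if abs(x2 - x1) > max_dist or abs(y2 - y1) > max_dist:
            return False  # clicks wandered away from one spot
    return True

burst = [(400, 300, 0.0), (402, 301, 0.1), (401, 299, 0.2)]
slow = [(400, 300, 0.0), (120, 80, 2.5)]
print(is_rapid_fire(burst), is_rapid_fire(slow))  # True False
```

A coder reviewing session logs could apply such a rule to each trailing window of clicks to mark where a participant switched to this last-ditch strategy.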
figure 3: One participant’s mouse clicks using the trial-by-rapid-fire method while attempting to solve the impossible task.
figure 4: Mean 1-5 numeric scores on frustration.
Discussion
Users persisted over a longer period of time and with more mouse clicks when they engaged with the text-only computer interface agent (figures 1 & 2). Both male and female participants responded that they trusted the female agent more (figure 5). Similar to other research on anxiety-inducing agents [2], these findings indicate that users were less affected and slowed down by the text-only presentation of a computerized agent. Previous studies have shown that monitoring agents improve trust in website content [2]; the same holds true for participants in this experiment. These results show a preference toward text-only interface agents without audio feedback for persistence with an impossible and frustrating task. In the post-experiment survey, male agents were marked as more frustrating by participants of both genders. Both men and women responded that the computer agent was the least frustrating, rating the female agents as slightly more frustrating than the text-only agent (figure 4). As well as being persuasive [4], agents can also elicit human emotions such as trust, as was found in the case of the female interface agent.
figure 5: Mean 1-5 numeric scores on trust.
Conclusion
These findings show a relative deviation from some prior research on the role of interface agents during a given task. The results expand upon the notion that female interface agents are viewed more favorably across genders, and upon how human-like agents are trusted more. The findings of this experiment support the notion that a text-only
agent interface is ideal for persistence despite an impossible task. A correlation between time spent on the impossible task and mouse clicks was observed, as was a preference to trust a female interface agent.
Further Research
Contrasting two differently gendered interface agents reveals a preference toward the female agent in terms of trust and less frustration in users. Overall, the text-only agent led users to persist for a longer period of time, which suggests that human-like interface agents may sometimes be a disadvantage. Further research could use different experimental methodology to examine how exactly users trust female interface agents versus male ones. In terms of frustration, the trial-by-rapid-fire approach to problem solving shows that users employed this as a final attempt to solve the impossible task. It would be interesting to research further why users take this approach in their computer interactions.
Acknowledgements Special thanks to the New School Psychology Department, the Institutional Review Board, Eli Bock, Bridget O’Hara Hale, Joey Labadia, and Ingrid Wu.
References
[1] Picard, R. W. and Klein, J. Computers that Recognize and Respond to User Emotion: Theoretical and Practical Implications. Interacting with Computers 14 (2002), 141-169.
[2] Klemmer, S.R., Thomsen, M., Phelps-Goodman, E., Lee, R. and Landay, J.A. Where do web sites come from? Capturing and interacting with design history. In Proc. CHI 2002, ACM Press (2002), 1-8.
[3] Jaksic, N., Branco, P., Stephenson, P., and Encarnação, M. L. The Effectiveness of Social Agents in Reducing User Frustration. CHI 2006 Work-in-Progress, ACM Press (2006), 917-922.
[4] Havelka, D., Beasley, F., and Broome, T. A Study of Computer Anxiety Among Business Students. Mid-American Journal of Business 19, 1 (Spring 2004), 63-71.
[5] King, W. J., and Ohya, J. The Representation of Agents: Anthropomorphism, Agency and Intelligence. CHI 1996 Short Papers, ACM Press (1996), 289-290.
[6] Krämer, N. C., Bente, G., Eschenburg, F., and Troitzsch, H. Embodied Conversational Agents. Social Psychology 40, 1 (2009), 26-36.
[7] Macaulay, M. The speed of mouse-click as a measure of anxiety during human-computer interaction. Behaviour & Information Technology 23, 6 (2004), 427-433.
[8] Wilson, K. Evaluating Images of Virtual Agents. CHI 2002 Student Poster, ACM Press (2002), 856-857.
[9] Lester, J. C., Converse, S. A., Kahler, S. E., Barlow, S. T., Stoner, B. A., and Bhogal, R. S. The Persona Effect: Affective Impact of Animated Pedagogical Agents.
[10] Sproull, L., Subramani, M., Kiesler, S., Walker, J. H., and Waters, K. When the Interface Is a Face. Human-Computer Interaction 11, 2 (1996), 97-124.
[11] Norman, D. A. The Design of Everyday Things. MIT Press (2002), Chap.