Bachelor's Thesis
A Usability Framework for mobile Web sites: How usability attributes and metrics can help in optimizing mobile Web sites. A literature review.
Sondra Lynell Duckert 30. May 2012 E-Concept Development TEAM 3, Third Semester Spring 2012
Table of Contents

Abstract .......................................................... 1
Introduction ...................................................... 1

I. Methodology
1. Problem Discussion ............................................. 2
2. Problem Formulation ............................................ 2
3. Delimitations .................................................. 2

II. Literature Review
4. Usability Theory ............................................... 3
5. Definitions and Perceptions of Usability ....................... 3
6. A Review of existing usability measurement standards and models  4
   6.1 Usability and ISO Software Quality models and definitions .. 5
       6.1.1 Quality of Use ....................................... 5
       6.1.2 Usability Assurance - Context of Use ................. 5
       6.1.3 Software Quality Evaluation .......................... 6
       6.1.4 SQuaRE Model ......................................... 6
   6.2 Usability in Traditional Software Quality models ........... 6
   6.3 Related Measurement Models ................................. 6
7. Usability Conceptual Models: A Common Thread ................... 7

III. Research and Analysis
8. Foundation of a usability framework ............................ 10
   8.1 Usability Review .......................................... 10
       8.1.1 Usability Guidelines ................................ 10
   8.2 Metric Development - Goal Question Metrics (GQM) .......... 10
   8.3 Usability Framework ....................................... 11
   8.4 Results and Discussion .................................... 12

IV. Limitations and Conclusion
9. Limitations and Exceptions ..................................... 13
10. Conclusion .................................................... 13
11. Supplement .................................................... 14

Reflections ....................................................... 14
References ........................................................ 15
Bibliography ...................................................... 16
Appendix .......................................................... 18
List of Tables and Figures
1. Table 4. A comprehensive listing of sixty-three (63) usability concepts.
2. Figure 4. ISO 9241-11 (1998) Ergonomic requirements for office work with visual display terminals (VDTs)
3. Figure 5. ISO 9126-1 (1999) Software Quality Model
4. Figure 6. ISO/IEC 9126-4 (2001) Quality in use
5. Figure 7. Relationship between Internal quality, External quality and Quality in use.
6. Figure 8. ISO/IEC 25010 (2011) SQuaRE model
7. Figure 9. McCall's Quality Model
8. Figure 10. Boehm's Quality Model
A Usability Framework for mobile Web sites: How usability attributes and metrics can help in optimizing mobile Web sites. A literature review.
Sondra L. Duckert, BSc., Multimedia Design and Communications

DOCUMENT INFO
Bachelor's Thesis, 2012
E-Concept Development TEAM 3, Third Semester
Delivery: Wednesday, 30 May 2012

ABSTRACT
Recent usability studies of the mobile Web have opened up new questions, implying that usability problems are the result of poorly designed Web sites, poorly developed applications, browser systems or devices, the users themselves, or the evaluation methods. Many of these dynamic factors can affect usability; however, it is the quality of use and the usability attributes of a product that have an impact on a development process. This thesis is an exploratory study of how usability attributes, which describe "a measure of how well actions are being performed with an interface," can improve the usability of mobile Web sites. It reviews previous studies of usability concept models, evaluation methods and metrics. The objective of this thesis is to understand the fundamentals of usability, select the best usability attributes to construct a foundation suitable for measurement, and present a goal-oriented usability framework for the mobile Web that can help in improving its optimization.
Keywords: Usability, Evaluation methods, Mobile Web sites, Optimization
Introduction
Advances in wireless technology have given users more mobility, ubiquitous connectivity, location-based services and 'anytime, anywhere' access to information. A research study conducted by Gartner (2010) estimated that smartphones and browser-equipped enhanced phones would surpass 1.82 billion units by 2013, eclipsing the total of 1.78 billion PCs.[1] This demand for mobile devices also places more emphasis on the usability of mobile devices as well as their software-intensive systems, making usability an important issue for mobile technology. Usability is the most widely used concept in determining performance issues for software quality systems. It is a recognized factor in evaluating a system's ability to satisfy users' expectations. Usability, which replaced the 1980s term 'user-friendly,' is a key concept of human-computer interaction (HCI) and a property of interactive software. In the 1990s, each new shift in technology brought with it the discovery of new usability attributes that were used in determining the 'quality of use' of a system as well as its 'context of use'. This prompted some researchers to question whether usability should be redefined as a property of systems or a property of usage. The usability of a product is said to be affected not only by the features of the product, but also by user behavior, the tasks being carried out, and the technical, organizational, and in some cases physical environment in which the product is used.[2]

Understanding usability from a property perspective, of what is usable, varies between researchers and has as a result rendered the idea of usability a continuous and expanding property with a variety of overlapping definitions. Brian Shackel (1981) was the first to give a wide definition of usability. It describes usability as, "A system's capability in human functional terms to be used easily and effectively by the specified range of users, given specified training and support, to fulfill a specified range of tasks, within the specified range of environmental scenarios."[3] His definition presented two important usability attributes, namely ease of use and effectiveness. Preece (1984) defined usability as, "A measure of the ease with which a system can be learned or used, its safety, effectiveness and efficiency, and the attitude of its users towards it."[4] Her definition overlaps Shackel's a little, yet it identifies another important attribute, safety. Usability designed for the mobile Web plays a different role than in desktop Web sites. In essence, mobile sites come with a new set of considerations, as they provide a different user experience. With the mobile Web, the importance of relevance, reliability, speed, flexibility, personalization and other variables is taking center stage over design and aesthetics. Mobile-wireless information
systems must be measured on the basis of traditional systems metrics, e.g. maintainability, minimum complexity, lack of faults, etc. (Gafni, 2008). Traditional evaluation methods such as user testing (Dumas & Redish, 1993), inspection methods like Nielsen's (1992) and Shneiderman's (1998) heuristic evaluations, thinking-aloud methods, cognitive walkthroughs, and inquiry methods such as focus groups and interviews, have all been used to measure usability for desktop Web sites. There are, however, limited studies regarding usability measurement, models and metrics strictly for mobile Web sites. Some argue that the underlying technology is the same for both the desktop and mobile Web, and that convention would suggest there is no need to test what we already know. However, the context of use, and thus the usability, of the two are inherently different. The main objective of this paper is to review usability definitions, models and attributes, identify usability standards and attributes for measurement, and create a goal-oriented framework, based on a foundation of usability guidelines and evaluations, that can help to optimize mobile Web sites.

I. Methodology
1. Problem Discussion

Mobile devices are typically used in a more dynamic context, like shopping, checking in for a later (airline) flight or using the GPS function to find the nearest coffee house. Fundamentally, 'mobile' refers to the user, and not the device or the application (Ballard, 2007, p.3). Most sites on the Web were not built with mobile use in mind; however, every day people access these sites through mobile devices. Mobile users are typically not restricted in movement; nonetheless, they are restricted by the technical limitations of the device itself and its software-intensive systems.
Mobile devices are context dependent, shaped by the interaction between users, functionality problems and software. The use of usability concept models and evaluation analysis of software systems, based on targeted user involvement, can help in identifying problems within those systems. However, since there is no agreed-upon understanding of what really influences usability, its attributes have become the main interaction points. In contrast, these constructs are sometimes difficult to quantify because they can conflict with each other. Usability per se cannot be measured directly, but metrics mapped to quality characteristics can. For example, once a system has been learned by a novice user, learnability is no longer an important attribute for testing; memorability and efficiency take its place when the system is used repeatedly. Moreover, when in the process does this take place, and what then is the classification of the user, e.g. novice, intermediate or expert? The acknowledgement that usability concept models and attributes are not consistently defined makes it difficult to carry them out in practice. Deciding which elements of usability are suitable for measurement, and how to understand the relationship between different measures, remains a challenge. In today's practice it is suggested that the selection of usability attributes seems to be a case of putting hard numbers to information we already know. This weakens usability studies, because the way usability attributes and evaluation methods are selected to justify 'quality or context of use' further implies that the definition of usability itself is subject to interpretation.
2. Problem Formulation

Considering our discussion, this thesis proposes the following questions for mobile Web sites and usability.

Main Question: How can the usability of mobile Web sites be evaluated effectively using methods that were created for desktop Web sites?

Q 1. Are there specific features of usability more important to users of traditional Web sites than to users of mobile sites? This question will explore whether users expect to have the same satisfaction regardless of how or where they access information from the Web.

Q 2. Can usability be measured only through features and attributes? This question will explore whether factors such as targeted users, tasks, technology and context of use have an impact on the selection of usability attributes for evaluation.
3. Delimitations of the study

This is not a technical document or a guide for implementation. It does not seek to define new usability standards nor give detailed advice about the economic benefits for mobile businesses. It also does not address the future of mobile Web sites or the future of usability or evaluation methods.
II. Literature Review

4. Usability Theory

Usability theory is about the human factors that either improve or hinder a product's or system's success. It includes a group of ideas about how people interact with technology and how the content and/or context of use influence the behavior of the user. In other words, usability theory studies how humans learn and remember systems, 'how the human eye processes certain images and other aspects of the user-technology experience, and determines which type of system would allow them to do so the most efficiently and with the least struggle'.[5] Some of the first studies of usability included human information processing and performance related to time and errors (e.g. reliability), human cognition and mental models (important in the foundation for the design of interfaces), plans of actions, group collaboration, dynamics and workplace context. In essence, usability theory is about language, people, communication, receivers, systems and the context in which content is delivered.
5. Definitions and Perceptions of Usability
Defining usability began back in the 1970s with human factors engineering and 'software psychology.' In the 1980s, cognitive science and engineering focused on displays, error recognition and interactions; and in the 1990s, organizational task analysis focused on group interaction, accessibility, customization and internationalization.[6] As previously mentioned in the introduction, Shackel (1981) was one of the first to offer his views about usability. He proposed that usability is a "system's capability in human functional terms that referred to a specified range of users, tasks, and contexts." Many of the top usability specialists have presented their own criteria for defining what usability is. Steve Krug (2000), who frequently states that usability is 'not rocket surgery' but a way of 'making sure that something works well', places emphasis on the ability of the user to do the task at an expected level of experience, and signifies the importance of user modeling and the need to design interactive systems that match users' experiences and skills.[7] Another usability pundit, Jakob Nielsen, defined usability as "the engineering ideal of solving a problem for the consumer," which aims to place "your customers' needs at the center of your Web strategy" (2000, p. 11-14). His definition, together with Ben Shneiderman's (1998), placed usability within a
broader interpretation, accepting a system's capabilities and recognizing the difference between usability and the utility of a system. They identified five key attributes of usability that apply to all aspects of a system that interact with users: ease of learning (learnability), speed of performance (efficiency), low error rate, retention over time (memorability), and user attitude (subjective satisfaction). These attributes are commonly used throughout a system development lifecycle.[8] Nielsen views usability as a property with several dimensions, each composed of different components. Admittedly, he recognizes that the different factors can conflict with each other. The International Organization for Standardization has several different takes on the concept of usability. The standard ISO 9241-11 (1998) refers to usability as, "The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use."[9] Effectiveness is related to "the accuracy and completeness with which users achieve specified goals", efficiency to the "resources expended in relation to the accuracy and completeness with which users achieve specified goals," and satisfaction to the "freedom from discomfort and positive attitudes towards the use of the product." Usability presented in this manner indicates the attributes are not 'absolutes' but are measured according to the context of use. In the ISO/IEC 9126-1 (2001) standard, usability is classified as one component of internal and external software quality. It is defined as, "The capability of the software product to be understood, learned, used and attractive to the user, when used under specified conditions."[10] Abran et al. (2003) proposed extending the ISO 9241-11 definition by adding two further attributes: learnability and security.
Nigel Bevan (1995) also developed a wide approach to usability as 'quality of use.' He stated, "Quality of use should be the major design objective for an interactive system: does the product enable the intended users to achieve the intended task?" His approach is said to directly link quality of use to the idea of usefulness. He contends that calling a product "usable, but not useful" makes sense only if usability is defined in terms of ease of use; from the broader, human-factors viewpoint it is a contradiction in terms. In 2000 Shneiderman introduced the term 'universal usability.' Shneiderman claims that, "Universal usability will be met when affordable, useful, and usable technology accommodates the vast majority of the global population: this entails addressing challenges of technology variety, user diversity, and gaps in user
knowledge in ways only beginning to be acknowledged by educational, corporate, and government agencies."[11] In essence, 'universal usability' is about the value of universal designs and the diversity in human capabilities, technological environments and contexts of use. This general look at the existing literature on usability reveals definitions that are overlapping and broad, and mostly exclusive to software systems. Thus, most usability concepts come across as being based on the relationship between how designers and developers can create a system that communicates its functionality to users plainly, and how users can accomplish specific tasks, based on predefined and somewhat subjective attributes, as easily as possible. If we think about usability and context of use, defined as the 'actual conditions under which a given artifact/software product is used, or will be used in a normal day to day working situation'[12], we can see how the usability paradigm has shifted over the years from 'how-to' usability rules to 'in what context' usability presents itself to users. Bevan & Macleod (1994) wrote about understanding the context of use of a product in Usability measurement in context. Thomas & Macredie (2002) argued that usability evaluations are ill-suited for emerging products and systems in their book Introduction to the new usability, and Shami, Leshed, & Klein (2005) tackled 'Context of use evaluation of peripheral displays' (CUEPD) by presenting a method that captures context of use using individualized scenario building, enactment and reflection.
It also uses two peripheral displays to evaluate and uncover important design elements.[13] These authors see context of use as an important factor in understanding new technology; by focusing on products designed around human inputs and outputs, they challenged the definitions and limitations of usability as it now stands. Many of the usability concepts presented were not designed from the same perspective, which could explain why the definitions given by researchers and experts differ. In 1984 Ken Eason stated that although the concept of usability had increasingly played an important role in HCI, it had not been well defined, as there is no universally accepted definition of usability.[14] Nevertheless, the definitions given by researchers present four common factors that could affect the usability of interactive systems, namely the user, the task, the technology and the context of use.[15]
6. A Review of existing usability measurement standards and models

Several usability definitions are placed in a framework of quality models that view usability as a vital characteristic for the evaluation of quality software systems. Product quality is difficult to assess, since there is no consensus on the meaning of quality. The International Organization for Standardization (ISO), together with the International Electrotechnical Commission (IEC), breaks quality down into several characteristics, which are then further divided into sub-characteristics. ISO/IEC 9126 (2001) defines both internal metrics, which can be measured without operating the system, and external metrics, which are measured while testing the system, and classified different types of international standards for usability based on design, environment, resource constraints, context of use, etc. The different types have been categorized according to: (1) Product effect (output, effectiveness and satisfaction at the time of use of the product); (2) Product attributes (interface and interaction); (3) Process used to develop the product; (4) Organizational capability. The standards were further consolidated into two major categories: Product oriented (ISO 9126, 2001; ISO 14598, 2001) and Process oriented (ISO 9241, 1992/2001; ISO 13407, 1999).[16] Below is a conceptual view of ISO usability standards.
[Figure 1. Categories of ISO usability standards. Life cycle process / capability of user-centered process: ISO TR 18529, ISO 13407. Development process and product (interface and interaction): ISO 9241 parts 10, 12-17; ISO 14598-1; ISO/IEC 9126-3. Product effect / quality in use (usability in context): ISO 9241-11; ISO/IEC 9126-1,4.]
The most cited standards (ISO 9241-11, ISO 14598-1, ISO/IEC 9126-1,4) listed in Figure 1 are briefly touched upon in the following sections.
6.1 Usability and ISO Software Quality models and definitions
Just as with the definition of usability, defining software quality has also been a challenge. Suryn (2003) defines it as, "The application of a continuous, systematic, disciplined, quantifiable approach to the development and maintenance of quality of software products and systems; that is, the application of quality engineering to software." The IEEE defines software quality as (1) the degree to which a system, component, or process meets specified requirements, and (2) the degree to which a system, component, or process meets customer or user needs or expectations.[17] According to Bevan (2005), 'user perceived quality' embraces the relationship between the quality of an interactive product and the needs of the user, thus providing a means of measuring the usability of a product. ISO has endorsed a variety of models that define and measure software quality and usability. The following models are related to software quality and usability measurement standards.
6.1.1 Quality of Use

Quality of use as defined by Bevan (1995) is "the extent to which a product satisfies stated and implied needs when used under stated conditions."[18] The next two models were combined to replace the ISO/IEC 9126 (1999) model. The first part describes internal and external quality; the second part is quality in use. Together they provide the framework for the evaluation of software quality.
- ISO/IEC 9126-1 (1999). This software quality model has three aspects, namely quality in use, external quality and internal quality. It defines usability as, "The capability of the software product to be understood, learned, used and attractive to the user, when used under specified conditions." Here usability refers specifically to the usability attributes of a product. This standard can be broken down into six different factors, namely functionality, reliability, usability, efficiency, maintainability, and portability.[19]

- ISO/IEC 9126-4 (2001). This standard introduces the concept of 'quality in use', which is directly related to the user and has value-based perspectives. The model provides a framework for evaluating software product quality. It is influenced by six different factors and offers a higher-order view of software quality that can be applied to every kind of software. It is broken down into four different factors, namely effectiveness (usefulness), productivity, satisfaction and safety.[20] In this model the difference between usability and 'quality in use' is a matter of 'context of use', meaning that success depends on the specific situation in which the product or system is used.
6.1.2 Usability Assurance - Context of Use

ISO 9241-11 (1998). This standard contrasts with ISO 9126 in that it refers to the outcome of the interaction in a context. It measures user performance and satisfaction (human-factor requirements) by the extent to which set goals are achieved. When this standard was created it was meant to further specify the contents of usability assurance statements based on ergonomic requirements for office work with visual display terminals (VDTs). Its purpose was to ensure that the screen has the technical attributes required to achieve quality of use. It is based on the results for effectiveness, efficiency and satisfaction, and defines usability as, "The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use."[21] The three usability attributes used for measuring are: (1) Effectiveness, how well users achieve their goals; (2) Efficiency, what resources are needed and used to achieve those goals; and (3) Satisfaction, how users feel about using the system.
[Figure 2. Usability framework (ISO/DIS 9241-11, 1994-10-17) [8]: the context of use (users, tasks, equipment, environment) and the product together determine the outcome of interaction, measured as effectiveness, efficiency and satisfaction; usability is the extent to which goals are achieved with effectiveness, efficiency and satisfaction.]

Quality of use and context of use are interrelated: the former is dependent on the nature of the users, task and environment, and the latter on the context in which the product is used, i.e. users, tasks and environments.
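In practice the three ISO 9241-11 measures are often operationalized per task: effectiveness as goal completion, efficiency as effectiveness per unit of resource (usually time), and satisfaction via a post-task rating. The sketch below illustrates one such operationalization; the data structure, the 1-5 rating scale and all session values are invented for the example and are not prescribed by the standard.

```python
# A minimal sketch of how the three ISO 9241-11 measures might be
# computed from observed task sessions. The observation format and the
# 1-5 satisfaction scale are assumptions made for this illustration.

from dataclasses import dataclass

@dataclass
class TaskObservation:
    completed: bool      # did the user achieve the task goal?
    time_s: float        # time spent on the task, in seconds
    satisfaction: int    # post-task rating, 1 (worst) to 5 (best)

def summarize(observations):
    n = len(observations)
    effectiveness = sum(o.completed for o in observations) / n
    mean_time = sum(o.time_s for o in observations) / n
    return {
        "effectiveness": effectiveness,           # share of goals achieved
        "efficiency": effectiveness / mean_time,  # effectiveness per second
        "satisfaction": sum(o.satisfaction for o in observations) / n,
    }

sessions = [
    TaskObservation(True, 42.0, 4),
    TaskObservation(True, 65.0, 3),
    TaskObservation(False, 90.0, 2),
]
m = summarize(sessions)
print(f"effectiveness {m['effectiveness']:.2f}, satisfaction {m['satisfaction']:.1f}")
# effectiveness 0.67, satisfaction 3.0
```

Note how the measures stay relative to the observed context of use: changing the users, tasks or environment changes the numbers, which is exactly the point the standard makes.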
6.1.3 Software Quality Evaluation and Measurements
ISO/IEC 14598-1 (1999).
This standard is an information technology and software product evaluation model used for measuring quality in use from the perspective of internal software quality attributes. In this model, when usability is evaluated, the focus is on improving the user interface while the context of use is treated as a given. It assumes that user needs and goals are expressed as a set of requirements for the behavior of the product in use. The requirements should be expressed as metrics that can be measured when the software is used in its intended context (Bevan et al., 1994).[22]
6.1.4 SQuaRE Model

ISO/IEC 25010 (2011) is a Systems and Software Quality Requirements and Evaluation (SQuaRE) model. It replaces the ISO 9126-4 (2001) model, a software product quality requirement and evaluation model. Currently there are three quality models in the SQuaRE series: the quality in use model, which is similar to ISO 9126-1,4; the product quality model, which is similar to ISO 9241-11; and the data quality model. The quality models provide a set of quality characteristics relevant to a wide range of stakeholders and users. The standard states that, "Usability can either be specified or measured as a product quality characteristic in terms of its sub-characteristics, or specified or measured directly by measures that are a subset of quality in use."[23] In this model the concept of quality in use has been broadened to embrace a wider range of issues than was common in usability (Bevan, 2009).

6.2 Usability in Traditional Software Quality Models (Non-ISO/IEC)

The McCall quality factors (1977) hierarchical model of software quality characteristics was produced for the US Air Force. McCall wanted to make a connection between users and developers. This model includes 'attributes' or factors that are divided into: (1) Product operation factors (correctness, integrity, reliability, usability); (2) Product revision factors (maintainability, testability, flexibility); and (3) Product transition factors (interoperability, reusability, portability).[24] The McCall quality factors are often compared to the ISO 9126 model because they share a few of the same attributes, namely reliability, usability, efficiency, maintainability and portability. The quality factors in the McCall model are considered sub-characteristics in the ISO 9126 model. The main difference between the two models is that in the McCall model one quality criterion can affect many quality factors, whereas in ISO 9126 one sub-characteristic impacts only one quality characteristic. However, Pressman (2001) notes that, "Unfortunately, many of the metrics defined by McCall et al. can be measured only subjectively," which makes this model difficult to use as a framework for setting specific quality requirements.

In 1978 improvements to the McCall model were made by Barry W. Boehm, whose goal was to define software quality as a set of attributes and metrics. His model defined three primary software requirements: (1) As-is utility, the extent to which the as-is software can be used (i.e. ease of use, reliability, and efficiency); (2) Maintainability, the ease of identifying what needs to be changed as well as the ease of modification and retesting; and (3) Portability, the ease of changing software to accommodate a new environment. Each primary requirement was partnered with quality factors that represented the next level of Boehm's model. Both models try to break down the software artifact into constructs that can be measured, and both are primarily used in a bottom-up approach to quality.[25]
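McCall's factor-criteria-metrics idea is usually operationalized as a weighted sum, Fq = c1·m1 + ... + cn·mn, where each m is a normalized criterion score and each c a weight. The sketch below illustrates this combination step; the criteria, weights and scores are invented for the example, and choosing them is precisely the subjective step Pressman criticizes.

```python
# Hypothetical illustration of McCall's factor-criteria-metrics idea:
# a quality factor is a weighted sum of normalized criterion scores,
# F_q = sum(c_i * m_i). Weights and scores here are invented.

def quality_factor(weights, scores):
    """Combine criterion scores (0..1) into one normalized factor score."""
    if set(weights) != set(scores):
        raise ValueError("every weighted criterion needs a score")
    total_weight = sum(weights.values())
    return sum(weights[c] * scores[c] for c in weights) / total_weight

# A 'usability' factor built from three example criteria. The weights
# are a judgment call, which is where the subjectivity enters.
weights = {"operability": 0.5, "training": 0.3, "communicativeness": 0.2}
scores  = {"operability": 0.8, "training": 0.6, "communicativeness": 0.9}

print(round(quality_factor(weights, scores), 2))  # 0.76
```

Because one criterion score can feed several factors in McCall's model, the same `scores` entry would simply be reused in the weighted sums of the other affected factors.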
6.3 Related Measurement Models

There are no universal measures for usability that are relevant to every software development project, and few clear guidelines are available detailing how to select or measure specific aspects of usability (Cockton et al., 2004). However, it is possible to evaluate usability directly from the usability attributes of a product, according to Bevan (2005). A typical way to specify and measure usability is to select an attribute, preferably one that will make a product usable, and then measure whether it exists in the product. The issue with this method is in specifying which attributes, as the choice will most likely depend on the context for which the product was designed. Some of the more traditional usability evaluation methods (UEMs) include: Guideline Reviews based on interaction design guidelines by Smith and Mosier (1986), Heuristic Evaluation (Nielsen & Molich, 1990; Shneiderman, 1992), Cognitive Walkthroughs (Lewis, Polson, Wharton, & Rieman, 1990), Usability Walkthroughs (Bias, 1991), Formal Usability Inspections (Kahn & Prail, 1994) and Heuristic Walkthroughs (Sears, 1997).[26] This study will not focus on UEMs as such, because they are traditionally
used for evaluating desktop sites, but will instead review examples of empirical usability metrics as possibilities for the mobile Web. There are a number of usability measurement methods in the literature that can help in measuring software quality usability. One such method is empirical testing, as proposed by Marciniak (2002). These methods focus on specific usability metrics for design and development. They include Metrics for Usability Standards in Computing (MUSiC), developed by the European MUSiC project.[27] This method evaluates specific metrics such as the effectiveness of a system in carrying out tasks, temporal efficiency, and the cost of task performance. A disadvantage of this method is that it does not reflect some important aspects of usability, such as satisfaction or learnability. Another method, the Software Usability Measurement Inventory (SUMI), developed by Kirakowski & Corbett (1993), is also part of the MUSiC method. It measures user satisfaction in specific usability areas, namely effectiveness, efficiency, helpfulness, control, and learnability. It is based on the quality of a software system from the end user's point of view and uses standardized questionnaires in which the users answer according to whether they Agree, Don't Know, or Disagree.[28] The Diagnostic Recorder for Usability Measurement (DRUM) by Macleod & Rengger (1993) is a software tool also developed for the MUSiC method of usability evaluation. It is an analysis tool that tests a product's performance based on usability metrics: 1) task time, 2) snag, help, and search times, 3) effectiveness, 4) efficiency, 5) relative efficiency and 6) productive period. The results are analyzed in real time during the recording.[29] Another example is the Skill Acquisition Network (SANe), also developed by Macleod & Rengger (1993), a measurement model for interactive devices.
Its approach is based on user interactions and defines user tasks, the device, and procedures for how to execute the tasks. This method includes both a task model and a device model with a total of 60 different metrics, 24 of which are quality metrics. Scores collected from the quality metrics are then combined to form five quality measures: efficiency, learning, adaptiveness, cognitive workload and error correction. There is no one universally accepted method for evaluating usability, just as there is no universally accepted definition of usability; it is therefore suggested that more than one method be selected to obtain the most reliable results. Reiterer and Oppermann (1993) explain that, "There is no 'single' best evaluation method. All methods have some
disadvantages, or consider only a limited number of the factors influencing an evaluation, but many of them contain useful ideas, or are very appropriate for the evaluation of a specific factor. What is needed is a combination of different evaluation methods for the different foci of an evaluation."
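To make the questionnaire-based approach concrete, the sketch below (Python) aggregates SUMI-style Agree / Don't Know / Disagree responses into per-area scores. The area names follow SUMI's published usability areas, but the scoring weights and sample data are illustrative assumptions, not the actual SUMI scoring key.

```python
# Illustrative aggregation of SUMI-style responses.
# Weights and sample data are assumptions, not the real SUMI scoring.
SCORES = {"Agree": 2, "Don't Know": 1, "Disagree": 0}

def subscale_scores(responses):
    """responses: {area: [answer, ...]} -> {area: percent of maximum score}."""
    result = {}
    for area, answers in responses.items():
        raw = sum(SCORES[a] for a in answers)
        result[area] = 100.0 * raw / (2 * len(answers))  # percent of max
    return result

sample = {
    "efficiency": ["Agree", "Agree", "Don't Know"],
    "learnability": ["Disagree", "Agree", "Agree"],
}
for area, score in subscale_scores(sample).items():
    print(f"{area}: {score:.1f}% of maximum")
```

A real instrument would also handle negatively worded items and standardized norms; this sketch only shows the basic aggregation idea.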
7. Usability Conceptual Models: A Common Thread
There is no agreed-upon definition of usability, as it cannot be expressed in one objective measure. Some of the definitions give a thorough outline, while others focus more on the elements that can influence usability. Each model is therefore a conceptual view with 'hypothetical constructs', or attributes, that can be used to measure usability. Many of the models and standards found in the literature describe usability with a different set of attributes that are exclusively limited to software systems. Attributes are features or characteristics that describe a measure of how well users' actions are carried out with a product or system. The following six models are the most cited models used to describe different usability concepts. For example, Eason's model (1984) divides usability into three areas based on the independent relationship between the platform and the task being performed. Shackel (1984, 1991) presented four important attributes, namely effectiveness, learnability, flexibility and attitude, which a system must satisfy to be considered usable. Nielsen's (1993) model focuses on the acceptance of a product by its users; his attributes are divided into practical and social acceptance: learnability, efficiency, memorability, errors, and satisfaction. ISO 9241-11 (1998), self-titled the 'baseline usability model,'[30] is concerned with the ergonomics standards of usability and has three attributes, namely effectiveness, efficiency and satisfaction, along with 82 pages of guidelines. ISO 9126 (2001), a software engineering product quality model, describes six categories, with usability defined as ease of use. Moreover, in 2011 the ISO 9126 model was replaced by ISO 25010, a software product quality requirement and evaluation model (SQuaRE). The 25010 quality standard is less abstract than its predecessor.
It states, "usability can either be specified or measured as a product quality characteristic in terms of its sub-characteristics, or specified or measured directly by measures that are a subset of quality in use."[31]
Listed in the following table is a taxonomy of usability conceptual models and comparisons.[32]
Table 1. Usability attributes in different standards, models and definitions.
Eason Model (1984)
  Task:
    Frequency: The number of times a task is performed by a user.
    Openness: The extent to which a task is modifiable.
  User:
    Knowledge: The knowledge that the user applies to the task; it may be appropriate or inappropriate.
    Motivation: How determined the user is to complete the task.
    Discretion: The user's ability to choose not to use some part of a system.
  System:
    Ease of learning: The effort required to understand and operate an unfamiliar system.
    Ease of use: The effort required to operate a system once it has been understood and mastered by the user.
    Task match: The extent to which the information and functions that a system provides match the needs of the user.

Shackel Model (1991)
  Effectiveness: The system's performance is better than some required level, by some required percentage of the specified target range of users, within some required portion of the range of usage environments.
  Learnability: The training of users within some specified time from installation of the system; it also includes the user's re-learning time and the provision of training and support systems.
  Flexibility: The variations in tasks or environments, beyond the existing ones, that the system can accommodate.
  Attitude: The acceptance of the system by users within acceptable levels of discomfort, tiredness, frustration and personal effort.

Nielsen Model (1993)
  Learnability: The system should be easy to learn and understand, so that the user can easily get their job or task executed using the software system.
  Efficiency: The efficiency of a system is directly related to its productivity; the more efficient a system is, the higher its throughput.
  Memorability: Best suited for intermittent users; the user can return to the system's previous state without starting again from the beginning.
  Errors: The error rate in any system should be low, and if an error occurs, the system should be able to recover from it.
  Satisfaction: The pleasant feeling the user gets while or after using the system; it can be observed as likeability of the system and fulfillment of the specified task.

ISO 9241-11 (1998)
  Effectiveness: The accuracy and completeness with which users achieve specified goals.
  Efficiency: The resources (such as time) expended in relation to the accuracy and completeness with which users achieve goals.
  Satisfaction: The acceptability of a system to its users in a specified context of use.

ISO 9126 (2001)
  Understandability: The capability of the software product to enable the user to understand whether the software is suitable, and how it can be used for particular tasks and conditions of use.
  Learnability: The capability of the software product to enable the user to learn its application.
  Operability: The capability of the software product to enable the user to operate and control it.
  Attractiveness: The capability of the software product to be attractive to the user.
  Usability compliance: The capability of the software product to adhere to standards, conventions, style guides, or regulations related to usability.

ISO 25010 (2011)
  Quality in use:
    Effectiveness: The accuracy and completeness with which users achieve specified goals.
    Efficiency: Resources expended in relation to the accuracy and completeness with which users achieve goals.
    Satisfaction (usefulness, trust, pleasure, comfort): The degree to which user needs are satisfied when a product or system is used in a specified context of use.
    Freedom from risk (economic, health and safety, and environmental risk mitigation): The degree to which a product or system mitigates the potential risk to economic status, human life, health, or the environment.
    Context coverage (context completeness, flexibility): The degree to which a product or system can be used with effectiveness, efficiency, freedom from risk and satisfaction in both specified contexts of use and in contexts beyond those initially identified.
  Product quality:
    Functional suitability (functional completeness, functional correctness, functional appropriateness): The degree to which a product or system provides functions that meet stated and implied needs when used under specified conditions.
    Performance efficiency (time behavior, resource utilization, capacity): The performance relative to the amount of resources used under stated conditions.
    Compatibility (co-existence, interoperability): The degree to which a product, system or component can exchange information with other products, systems or components, and/or perform its required functions, while sharing the same hardware or software environment.
    Usability (appropriateness recognizability, learnability, operability, user error protection, user interface aesthetics, accessibility): The degree to which a product or system can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use.
    Reliability (maturity, availability, fault tolerance, recoverability): The degree to which a system, product or component performs specified functions under specified conditions for a specified period of time.
    Security (confidentiality, integrity, non-repudiation, accountability, authenticity): The degree to which a product or system protects information and data so that persons or other products or systems have the degree of data access appropriate to their types and levels of authorization.
    Maintainability (modularity, reusability, analyzability, modifiability, testability): The degree of effectiveness and efficiency with which a product or system can be modified by its intended maintainers.
    Portability (adaptability, installability, replaceability): The degree of effectiveness and efficiency with which a product or component can be transferred from one hardware, software or other operational or usage environment to another.

Located in the Appendix is a comprehensive listing of sixty-three (63) usability concepts.
III. Research and Analysis
8. Foundation of a Usability Framework
This chapter is divided into three parts, identifying usability guidelines to construct a foundation suitable for measurement. The first section (8.1) will re-examine the usability standards, conceptual models and attributes for evaluation. In the second section (8.2) a metric for evaluation will be developed based on the guidelines created in section 8.1.1. To define measurable goals (guidelines), this study will use the Goal Question Metric (GQM) method (Basili et al., 1994). This method will be used to build a framework (section 8.3) that can help improve the optimization of the mobile Web. The objective will be to develop a set of metrics in a question format, thus making the questions a part of the metrics layout. The first step in the GQM approach is to identify the goal, then propose a set of questions on how to achieve the goal, and lastly use metrics to determine its value. The usability guidelines selected in the first section will act as the goals in this study, accompanied by questions that appraise each goal. Each goal will be refined into several questions to ensure that they are measurable.
8.1 Usability Review
There are a number of metrics available to measure usability; for this study, however, the most common factors were selected based on a perception of usability as 'the quality of use in a context'. Moreover, ISO 9241-11 specifically addresses the definition and measurement of usability, and has thus been selected as the foundation for this framework. This perception only covers studies related to the 'software quality of use in context,' thereby excluding all broad definitions of usability. The ISO 9241-11 standard will be used to create usability guidelines and metrics. Three quality characteristics have been selected as base guidelines for measurement, namely effectiveness, efficiency, and satisfaction, as they are the most cited characteristics in the literature.

8.1.1 Usability Guidelines
Deciding on which crucial aspects to highlight as guidelines was challenging, especially when the goal was to stay away from aspects that are not easily measurable. The method chosen to select guideline goals is a rendition based on the work presented by Ruti Gafni, who describes the metrics development process in mobile wireless information systems and applications. The selected guidelines are therefore derived from usability model characteristics that are affected by software systems.

Table 2. Usability guideline practices.
1. Accuracy: The accuracy and completeness with which users achieve specified goals.
2. Response time: A system must respond promptly.
3. Ease of learning: The system should be easy to learn and understand.
4. Task accomplishment: The time it takes to complete a task successfully.
5. Expectation: How difficult it would be to complete a task.
6. Attractiveness: The capability of the software product to be attractive to the user.
7. Error tolerant: Errors made by the user while completing a task.
8. Pleasure: The extent to which the functions that a system provides match the needs of the user.
9. Trustfulness: The trust that a user has that the software will function properly.
10. Engaging: How satisfying it is for a user to interact with a system.
11. Features: Relevant system features are available.
12. Performance: The system performance relative to the amount of resources used under stated conditions.
13. Reading: The time it takes to read a text screen.
14. Provide support/help: The system should provide assistance to those that need additional help.
15. User effort: The extent to which the software produces suitable results equal to the user's investment.

Each guideline will be the goal for our GQM model to obtain usability metrics.
8.2 Metric Development: Goal, Question, Metric (GQM) Method
The Goal, Question, Metric (GQM) method was developed for NASA (Basili & Rombach, 1988) to understand the types of changes, modifications and defects that were being made to a set of flight dynamics projects. It is a goal-driven method for developing and maintaining meaningful metrics based on three levels, namely the conceptual level (goals), 'what to accomplish'; the operational level (questions), 'how to meet the goal'; and the quantitative level (metrics), 'the metrics used to answer the questions.' Originally GQM was used to define and evaluate goals for a particular project and environment, and it is mostly used in a business-driven environment. However, its use has been extended to larger perspectives such as quality improvement, progress measurement and project planning.[33] Moreover, because GQM is used as the goal-setting method, it is ideal for defining and measuring software development systems as well as for measuring usability guidelines and metrics. The result of this approach is the specification of a measurement system targeting a particular set of issues, and a set of rules for the interpretation of the measurement data.[34] This method is ideal for this research study because it allows me to evaluate the quality of specific processes, systems and products.
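The three GQM levels described above can be organized as a simple data structure. The sketch below (Python) models a goal, its refining questions, and the metrics that answer each question; the wiring uses one guideline from this study as the goal, and all names are illustrative rather than taken from Basili's papers.

```python
from dataclasses import dataclass, field

# Minimal model of the three GQM levels: conceptual (goal),
# operational (questions) and quantitative (metrics).
@dataclass
class Question:
    text: str
    metrics: list  # names of the metrics that answer this question

@dataclass
class Goal:
    name: str                              # what to accomplish
    questions: list = field(default_factory=list)  # how to meet the goal

# Example wiring with the 'Accuracy' guideline as the goal (illustrative):
accuracy = Goal("Accuracy", [
    Question("How accurate is the site information?", ["number of errors"]),
    Question("How many links completed without site abandonment?",
             ["successful site browsing"]),
])

for q in accuracy.questions:
    print(q.text, "->", q.metrics)
```

In practice each of the six framework goals would get its own Goal instance, giving a traceable path from guideline to question to metric.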
8.3 Usability Framework
As previously stated, each guideline will be the goal for our GQM model in achieving usability metrics. Fifteen guidelines were identified, and six will act as goals for the GQM model; any non-relevant guidelines were removed.

Table 3. Usability guidelines grouped by quality characteristic and goal.
Effectiveness
  Accuracy: no errors (few errors); successful completion of tasks; accurate
  Task accomplishment: ease of inputting data; ease of system usage
  Features: support/help
Efficiency
  Performance: time response; completed tasks
Satisfaction
  Safety: while using the system; confidence in the system
  Attractiveness: user interface
Assessment questions were created for the guidelines based on each goal.
Accuracy: 1. How many links are completed without site abandonment? 2. Does the site provide balance? 3. How accurate is the site information?
Task accomplishment: 1. How easy is it to input data? 2. Is the site easy to learn? 3. Are required fields highlighted?
Features: 1. Does the site provide support or help? 2. Does the site provide search filter options? 3. Does the site provide local search functions?
Performance: 1. Does the website load quickly? 2. How much interactivity is available on the site?
Safety: 1. Are links safe for clicking? 2. Are privacy measures in place?
Attractiveness: 1. How do users perceive the website? 2. Will users remember the aesthetics?
The next step in the GQM process, after the questions are determined, is to map the metrics to each viewpoint.

Effectiveness
  Goal: Accuracy
    Questions: How many links are completed without site abandonment? Does the site provide balance? How accurate is the site information?
    Metrics: number of errors; successful site browsing
  Goal: Task accomplishment
    Questions: How easy is it to input data? Is the site easy to learn? Are required fields highlighted?
    Metrics: time taken to learn each task; number of input actions
  Goal: Features
    Questions: Does the site provide support or help? Does the site provide search filter options? Does the site provide local search functions?
    Metrics: number of features; provision of support/help; site flexibility
Efficiency
  Goal: Performance
    Questions: Does the website load quickly? How much interactivity is available on the site?
    Metrics: amount of resources used; site optimization (screen size); time it takes to connect to a network; site response time
Satisfaction
  Goal: Safety
    Questions: Are links safe for clicking? Are privacy measures in place?
    Metrics: provision of data protection
  Goal: Attractiveness
    Questions: How do users perceive the website? Will users remember the aesthetics?
    Metrics: rating scales for support, input/output, features, interface, and site flexibility

Figure 3. Framework: GQM model to evaluate mobile usability.
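At the quantitative level, the framework's three base characteristics can be turned into simple numbers collected from a test session. The sketch below (Python) computes effectiveness as the task completion rate, efficiency as completed tasks per minute, and satisfaction as a mean rating; these are common simplifications in the usability literature, and the session data is invented for illustration.

```python
# Simplified ISO 9241-11-style measures from one mobile usability test session.
# The exact formulas and the session data are illustrative assumptions.
def effectiveness(completed_tasks, attempted_tasks):
    """Task completion rate, between 0 and 1."""
    return completed_tasks / attempted_tasks

def efficiency(completed_tasks, total_minutes):
    """Completed tasks per minute of testing time."""
    return completed_tasks / total_minutes

def satisfaction(ratings):
    """Mean of post-task satisfaction ratings (e.g. on a 1-5 scale)."""
    return sum(ratings) / len(ratings)

# Hypothetical session: 8 of 10 tasks completed in 20 minutes.
print(effectiveness(8, 10))        # 0.8
print(efficiency(8, 20.0))         # 0.4 tasks per minute
print(satisfaction([4, 5, 3, 4]))  # 4.0
```

Comparing these numbers across design iterations, rather than reading them in isolation, is what makes the framework useful for optimization.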
8.4 Results and Discussion
The essence of this method lies in the confirmation that the metrics can enable objective quality evaluations and comparisons of mobile Web site usability. Not all the questions listed can be answered objectively, such as those pertaining to user satisfaction. I believe, however, that Figure 3 accurately describes a usability metric model that can be used to effectively evaluate mobile Web sites and provide valuable information for improving their optimization. The optimization of mobile-designed Web sites can benefit from this framework in the following ways. Take, for example, 'successful site browsing': it is about streamlining mobile Web site content. By streamlining content, a Web site can create a better user experience by limiting everything to items that people would most likely be searching for. With improved site interfaces and features, optimization can help ensure that the branding elements from your standard site match your mobile site, because your mobile site is a brand touchpoint and should reflect and promote your brand essence. It is essential that users who are already familiar with your company recognize a similar design, which is an important consideration for loyal customers. Limiting the number of input actions is also key when optimizing mobile Web sites: besides the obvious fact that most people suffer from 'fat-finger' syndrome, typing anything on a smartphone is difficult. Convention would suggest using dropdown menus, pre-populated fields or checklists as ways to enter data. Google has even made it easier by letting companies search keywords that are specific to mobile devices. A metric framework to help optimize mobile Web sites does not have to be difficult to create: developers can keep content simple but useful, limit the number of images, display brand elements consistently across sites, and test the site a few times using specific metrics. The GQM method used in this study has shown how one can develop usability metrics. However, the model needs to be validated in future work to verify that all the metrics created are applicable to the mobile Web.
IV. Limitations and Conclusions
9. Limitations and Exceptions
This thesis began as a usability study for mobile commerce sites; however, after conducting research on usability and noticing that there continues to be a debate in defining what 'usability' is, I changed my focus to research the conflict and why usability pundits differ so much when it comes to defining usability guidelines for testing the mobile Web. My aim was to try to connect the two by providing a workable framework that could be used for testing the mobile Web. However, my ambitions are greater than my skills, and the need for further study is acknowledged. The limitations and exceptions of this study therefore include self-reported data, as some of the researched data is either no longer valid or cannot be properly verified by a reliable source. Another limitation is that the method used to construct my usability framework for mobile Web sites called for validation to confirm that the metrics could improve the optimization of mobile Web sites; this was not done and therefore presents only a partially subjective view.
10. Conclusion
The objective of this thesis was to fully understand the fundamentals of usability, select the best usability attributes to construct a foundation suitable for measurement, and present a goal-oriented usability framework for the mobile Web that could help in improving its optimization. This study reviewed the comparisons made between the different perceptions in defining usability, and revealed that they are either vaguely defined or partially overlap one another. However, I think that the current practices for measuring usability as it applies to traditional Web sites are still viable for the mobile Web, even though there is no definitive usability definition that all can agree upon. This paper proposed a usability framework for the mobile Web to demonstrate how attributes and metrics relate to the broader issues of effectiveness, efficiency and satisfaction. This framework shows how measures of system usability depend on attributes that support different aspects of user experience. This framework could also be expanded to help in the optimization of new concepts of mobile usability as well as in the selection of relevant evaluation measures.
Main Question: How can the usability of mobile Web sites be evaluated effectively using methods that were created for desktop Web sites?
Q 1. Are there specific features of usability more important to users of traditional Web sites than to users of mobile sites?
To answer the main question of whether usability evaluations of mobile Web sites can be done effectively using methods created for the desktop Web: I believe that the limitations and challenges of mobile devices, and mobile Web sites in particular, that prevent them from providing users a mobile experience as good as their desktop experience lie in their characteristic features: small screen size, navigational obstacles, Internet access speed, battery life, quality, and context of use. This is not to say that the guidelines used to evaluate them are no longer useful, but it does bring into question whether they are still reliable. Mobile devices offer many benefits and specialty features because they are context-dependent devices. They allow users to interact with content that can change their expectations and experience around whatever they are doing with that content, in spite of any annoyances. Nevertheless, studies have shown that more users are choosing mobile devices over PCs because of their ability to offer 'anytime, anywhere' access to information.
Q 2. Can usability be measured only for features and attributes?
Usability can be measured for features and attributes based on context of use. Context, how a user interacts with a system, has always been an important element of usability principles, as it can reflect users' mobile usage patterns. 'Winning moments that matter starts with understanding what your consumers want to do with your business in mobile.' (Google playbook, pg. 6) Context is about understanding human relationships to the people, places and things in the world.
It is poised to become the key in delivering 'hyperpersonalized' experiences regardless of how one connects to the Internet.
11. Supplement
There continues to be a debate on what the definition of the 'mobile Web' should be, considering that mobile does not necessarily mean movement, nor are mobile devices so radical that a separate Web is needed. What if a mobile user is not on the go but is sitting stationary somewhere typing a short text message? Are they still considered a mobile user? Studies have shown that most people use their mobile devices while engaging with some other media, such as watching television, waiting for a flight to depart or playing video games;[35] TV ad time now equals mobile usage time. Usability pundit Jakob Nielsen has discussed how companies should create a 'separate, stripped-down' version for the 'mobile use case,' which is the notion that mobile users are always on the go, frequently distracted, 'info snacking' in ten-second sessions. He summarizes in his recently published mobile usability guidelines that, "Good mobile user experience requires a different design than what's needed to satisfy desktop users. Two designs, two sites, and cross-linking to make it all work."[36] One of Nielsen's critics, designer Josh Clark, rebuts that rather than stripping down mobile sites, the question should be how to do more with the mobile experience: "The notion that you should create a separate, stripped-down version for 'the mobile use case' might be appropriate if such a clean mobile use case existed, but it doesn't."[37] He goes on to state that a growing number of people use mobile as the only way they access the Web. If a Web site is stripped down to satisfy the limitations of mobile devices, a company could be doing its users a disservice by isolating content and possibly causing some users to miss out on key amenities. Clark compares Nielsen's idea of downsizing Web sites to fit the mobile Web to 'an author stripping out chapters from a paperback just because it's smaller.'
Another debate about mobile devices and Web sites turns to the idea of Mobile First by Luke Wroblewski (2011). Mobile First is about how companies that have yet to build a desktop Web site should first build a mobile site and then transfer that strategy over to a stationary site. Wroblewski presents several reasons for going mobile first: '..the best reason to start with a mobile web solution is that it builds on web design and development skills you already have. You don't have to wait to get started.'[37] Critics of this idea say that the Mobile First principle causes the isolation of Web sites, that it places more focus on the device instead of the user, and that 'mobile first' yields incomplete experiences. The final debate that is making noise is the idea of the W3C's 'One Web'. This debate contends that, 'There is no mobile Web. There is only The Web, which we view in different ways. There is also no Desktop Web or Tablet Web.' I included these discussions about the mobile Web to show how the new issues regarding usability and mobile devices have taken the experts on different paths, all depending on their interpretation of what usability is. Mobile devices may not have changed what usability is to the experts, but the devices and their 'mobility' have changed what usability is to its users.
Reflections
My motivation for selecting this subject is the result of my time spent as an intern at SnitkerGroup, a global user research and usability company in Copenhagen, Denmark. While at SnitkerGroup I was selected to lead a usability test project. I was in charge of finding the participants, creating task questions, selecting which usability testing program to use, analyzing the participants' testing videos and presenting a professional report with recommendations for website improvements. It was one of the highlights of my stay there. The project was a success, but the experience led me to want to know more about usability, because I could sense that my knowledge of the fundamentals of usability was lacking. Therefore, my goal for this thesis is to learn the 'story' of usability: how it became an important factor in determining the success of software products, what the issues of usability are today, and what areas in technology researchers are looking forward to investigating in the future.
References
Page 1: Footnotes (back to page link)
[1] Justin, J. (2010). Forecast: Mobile Web Access To Surpass PCs In 2013 by More Than 100 Million. MobileMarketingWatch.
[2] Usability standards. UsabilityNet.org, http://www.usabilitynet.org/tools/r_international.htm
[3] Shackel, B. & Richardson, S.J. (1991). Human Factors for Informatics Usability. Chapter two. New York, NY: Cambridge University Press.
[4] Preece, J., Rogers, Y., Sharpe, H., Benyon, D., Holland, S. & Carey, T. (1994). Human-Computer Interaction: Concepts and Design (Ics S.). Addison-Wesley, Wokingham, UK.
Page 3: Footnotes
[5] Usability Theory, http://stcbok.editme.com/Usability-Theory
[6] Carroll, John M. (2003). Virginia Tech, Introduction to Human Computer Interaction. HCI and Usability: History and Concepts.
[7] Krug, Steve (2009). Rocket Surgery Made Easy: The Do-It-Yourself Guide to Finding and Fixing Usability Problems. New Riders: Berkeley, CA, USA.
[8] Nielsen, Jakob (2000). Designing Web Usability. New Riders Publishing: Indianapolis, IN, USA.
[9] ISO 9241-11 (1998). International Standard Organization. Ergonomic requirements for office work with visual display terminals (VDTs). Part 11: Guidance on usability.
[10] Bevan, N. (2009). Extending quality in use to provide a framework for usability measurement. Proceedings of HCI International 2009. San Diego, California, USA.
Page 4: Footnotes
[11] Shneiderman, B. (1992). Designing the User Interface: Strategies for Effective Human-Computer Interaction (2nd ed.). Reading, MA: Addison-Wesley.
[12] Context of use, defined by the Interaction Design encyclopedia, http://www.interaction-design.org/encyclopedia/context_of_use.html
[13] Shami, N.S., Leshed, G. & Klein, D. (2005). Context of use evaluation of peripheral displays (CUEPD). INTERACT 2005, LNCS 3585, 579-587.
[14] Eason, K.D. (1984). Towards the experimental study of usability. Behaviour and Information Technology, Vol. 3, Issue 2, pp. 133-143.
[15] Bevan, Nigel (1995). Measuring usability as quality of use. MUSiC Method. Software Quality Journal, 4, 115-150.
[16] Abran, A., Khelifi, A. & Suryn, W. (2003). Usability Meanings and Interpretations in ISO Standards. Software Quality Journal, 11, 323-336. Kluwer Academic Publishers, The Netherlands.
[17] Suryn, W., Côté, Marc-Alexis & Georgiadou, Elli (2003). Software Quality Model Requirements for Software Quality Engineering.
Page 5: Footnotes
[18] Bevan, N. (2009). Extending quality in use to provide a framework for usability measurement. Proceedings of HCI International 2009. San Diego, California, USA.
[19] Abran, A., Khelifi, A., Suryn, W. & Seffah, A. (2003). Consolidating the ISO Usability Models. Concordia University, Montreal, Canada.
[20] Suryn, W., Côté, Marc-Alexis & Georgiadou, Elli (2003). Software Quality Model Requirements for Software Quality Engineering.
[21] ISO 9241-11 (1998). International Standard Organization. Ergonomic requirements for office work with visual display terminals (VDTs). Part 11: Guidance on usability.
Page 6: Footnotes
[22] Bevan, N. & Macleod, M. (1994). Usability measurement in context. Behaviour and Information Technology, 13, 132-145.
[23] ISO 25010 (2011) (E). International Standard Organization. Systems and software engineering - Systems and software Quality Requirements and Evaluation (SQuaRE) - System and software quality models.
[24] McCall, J.A., Richards, P.K. & Walters, G.F. (1977). Factors in Software Quality. Springfield, VA: National Technical Information Service.
[25] Boehm, B.W., Brown, J.R., Kaspar, J.R., Lipow, M.L. & MacLeod, G. (1978). Characteristics of Software Quality. New York: American Elsevier.
[26] Fitzpatrick, R. (1998). Strategies for Evaluating Software Usability Methods. Dublin Institute of Technology, School of Computing. Volume 353, Issue 1, 1998.
[27] Seffah, A., Donyaee, M., Kline, R.B. & Padda, Harkirat K. (2007). Usability measurement: A roadmap for a consolidated model.
Page 7: Footnotes
[28] Kirakowski, J. & Corbett, M. (1993). SUMI: The Software Usability Measurement Inventory. British Journal of Educational Technology, 24(3), pp. 210-212.
[29] Macleod, M. & Rengger, R. (1993). The development of DRUM: A software tool for video-assisted usability evaluation.
[30] ISO 9241-11 (1998). International Standard Organization. Ergonomic requirements for office work with visual display terminals (VDTs). Part 11: Guidance on usability.
[31] ISO 25010 (2011) (E). International Standard Organization. Systems and software engineering - Systems and software Quality Requirements and Evaluation (SQuaRE) - System and software quality models.
Page 8: Footnotes
[32] Madan, Ankita & Dubey, Sanjay. Taxonomy listing of usability models. International Journal of Engineering Science and Technology (IJEST), Vol. 4, No. 02, February 2012, p. 591.
Page 11: Footnotes
[33] Basili, V., Caldiera, G. & Rombach, H.D. (1994). The Goal Question Metric Approach. Encyclopedia of Software Engineering, p. 2.
[34] Gafni, R. (2008). Framework for Quality Metrics in Mobile-Wireless Information Systems. Interdisciplinary Journal of Information, Knowledge and Management, pp. 26-31.
Page 13: Footnotes
[35] Reisinger, Don (2011). Most people check e-mail, surf the Web when watching TV. CNET.
[36] Nielsen, Jakob (2012). Mobile Sites vs. Full Sites.
Page 14: Footnotes
[37] Clark, Josh (2012). Nielsen is wrong on mobile. .net Magazine, Opinions, April 12, 2012.
[38] Wroblewski, Luke (2011). Mobile First. A Book Apart, New York, New York (Chapter 1, p. 16).
Bibliography
Ballard, Barbara (2007). Designing the Mobile User Experience. John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex PO19 8SQ, England.
Cockton, Gilbert (2012). Usability Evaluation. In: Soegaard, Mads & Dam, Rikke Friis (eds.), "Encyclopedia of Human-Computer Interaction." Aarhus, Denmark: The Interaction-Design.org Foundation.
Constantine, L.L. & Lockwood, L.A.D. (1999). Software for Use: A Practical Guide to the Models and Methods of Usage-Centered Design. New York: Addison-Wesley.
Dumas, J. & Redish, Janice C. (1999). A Practical Guide to Usability Testing. Intellect, Ltd. (UK), 1999.
Fitzpatrick, R. & Higgins, C. (1998). Usable software and its attributes: a synthesis of software quality, European Community law and human-computer interaction.
Fitzpatrick, R. (1996). Software Quality: Definitions and Strategic Issues. MSc Computing Science (ITSM), Staffordshire University, School of Computing Report, April 1996.
Hartson, R.H., Andre, T.S. & Williges, R.C. (2000). Criteria for Evaluating Usability Evaluation Methods.
IEEE 610.12 (1990). Institute of Electrical and Electronics Engineers standard glossary of software engineering terminology. IEEE Std 610.12-1990, Los Alamitos, CA.
Krug, Steve (2000). Don't Make Me Think: A Common Sense Approach to Web Usability. New Riders Publishing: Indianapolis, IN, USA.
Lindholm, C. & Keinonen, T. (2003). Managing the Design of User Interfaces. In Lindholm, C., Keinonen, T. & Kiljander, H. (eds.), Mobile Usability: How Nokia Changed the Face of the Mobile Phone. McGraw-Hill, New York.
Marciniak, J.J. (2002). Encyclopedia of Software Engineering, Vol. 2, 2nd edn. Chichester: Wiley.
MediaPost Publications (2010). Gartner: Mobile To Outpace Desktop Web By 2013. January 14, 2010.
Molich, R. & Nielsen, J. (1990). Improving a human-computer dialogue. Communications of the ACM, 33(3), 338-348.
Nielsen Wire (2012). Double Vision: Global Trends in Tablet and Smartphone Use while Watching TV.
Powell, Kevin (2011). Why Mobile First isn't enough: Developing a better user experience. Under the Beo Studios.
Pressman, Roger S. (2005). Software Engineering: A Practitioner's Approach (Sixth, International ed.). McGraw-Hill Education, p. 388.
Sears, A. (1995). AIDE: A step toward metric-based interface development tools. Proceedings of the ACM Symposium on User Interface Software and Technology. New York: ACM Press, pp. 101-110.
Shackel, B. (1984). The Concept of Usability. In J.L. Bennett, D. Carver, J. Sandelin & M. Smith (eds.), Visual Display Terminals: Usability Issues and Health Concerns, pp. 45-88. Englewood Cliffs, NJ: Prentice-Hall.
Shackel, B. (1986). Ergonomics in design for usability. In Harrison, M.D. & Monk, A.F. (eds.), People and Computers: Proceedings of the Second Conference of the BCS HCI Specialist Group, pp. 44-64. Cambridge: Cambridge University Press.
Smith, S.L. & Mosier, J.N. (1986). Guidelines for designing user interface software. Mitre Report ESD-TR-86-278, The MITRE Corporation, Bedford, MA.
Thomas, P. & Macredie, R. (2002). Introduction to the new usability. ACM Transactions on Computer-Human Interaction (TOCHI), Volume 9, Issue 2 (June 2002), pp. 69-73.
Welie, Martijn van, van der Veer, Gerrit & Eliëns, Anton (1999). Breaking down Usability. Faculty of Computer Science, Vrije Universiteit Amsterdam.
Appendix
(back)
Table 4. Usability concept models/standards.
Researchers/Authors and their usability concepts:
Foley, J. and Van Dam, A. (1982): User interface guidelines.
Smith, S. & Mosier, J. (1984): Described usability as a product attribute.
Eason, K.D. (1984): Interrelated usability and functionality.
Gould, J.D. (1985): Defined usability in terms of learnability, usefulness, and ease of use.
Shneiderman, B. (1986): Guidelines for error prevention; discussed the system's response time and data entry within HCI.
Shackel, B. (1986): Defined usability with the factors effectiveness, learnability, flexibility and attitude.
Tyldesley, D.A. (1988): Mentioned 22 factors that could be used to build metrics and specifications.
Doll, W.J. & Torkzadeh, G. (1988): End User Computing Satisfaction Instrument (EUCSI).
Ravden, S. & Johnson, G. (1989): Presented software inspection as a usability evaluation mechanism.
Igbaria, M. & Parasuraman, S. (1989): Enjoyability is directly proportional to acceptance of a system.
Booth, P. (1989): Modified Shackel's criteria into usefulness, effectiveness, learnability and attitude.
Polson, P.G. & Lewis, C.H. (1990): Gave problem-solving strategies for novice users to interact with complex interfaces.
Holcomb, R. & Tharp, A. (1990): Presented a software usability model for system designers to decide which usability sub-attributes should be included.
Shackel, B. (1991): Elaborated the usability concept.
Mayhew, D.J. (1992): Reviewed usability principles to describe the desirable properties of the interface.
Grudin, J. (1992): Practical acceptability of the system within various categories such as cost, support, and system usefulness.
Nielsen, J. (1993): Presented usability heuristics for the inspection method of usability evaluation. He classified usability into learnability, efficiency, memorability, errors, and satisfaction.
Dumas & Redish (1993): Explained their definition of usability on the basis of a focus on users: usability means that users can use the product productively, that users are busy people trying to accomplish tasks, and that users decide when a product is easy to use.
Preece et al. (1993): Categorized usability into the sub-attributes safety, effectiveness, efficiency and enjoyableness.
Beimal et al. (1994): Principles of acceptance for usability.
Nielsen, J. and Levy (1994): Worked on user satisfaction assessment of products.
Logan (1994): Divided usability into social and emotional dimensions.
Caplan (1994): Defined apparent usability as an important consideration in the design of a software system.
Preece et al. (1995): Related usability to the overall performance of the system and user satisfaction.
Lamb (1995): Claimed usability is a wider concept which includes content usability, organizational usability and inter-organizational usability.
Guillemette (1995): Reviewed and defined usability with respect to the effective use of information systems.
Kurosu & Kashimura (1995): Divided usability into inherent usability and apparent usability.
Nielsen, J. (1995): Presented "discount usability engineering".
Botman (1996): Presented "do it yourself usability evaluation".
Butler (1996): Dealt with usability engineering.
Harrison & Rainer (1996): Reviewed a model used for computing satisfaction (EUCSI).
Kanis & Hollnagel (1997): A high degree of usability can be determined when the usability error rate is minimal.
Gluck (1997): Correlated usability to usefulness and usableness.
Tractinsky (1997): Contributed to explaining the concept of apparent usability.
Lecerof & Paterno: Declared functionality to be essential to usability.
Thomas (1998): Categorized usability sub-attributes into three categories: outcome, process, and task.
ISO 9241-11 (1998): "Guidance on usability", which discusses usability for the purpose of system requirement specifications and evaluation.
Veldof, Prasse & Mills (1999): Related usability, users' reactions and system development.
Vanderdonckt (1999): Design guidelines and principles to build an effective, user-friendly interface.
Kengeri et al. (1999): Explained usability using effectiveness, likeability, learnability and usefulness.
Squires & Preece (1999): Regarded the usability concept for its pedagogical value in e-learning systems.
Arms (2000): Aspects of usability: interface design, functional design, data and metadata, and computer systems and networking.
Alred et al. (2000): Related usability to technical/system and human factors.
Battelson et al. (2000): Explained interface design that is easy to learn, remember, and use, with few errors.
Hudson (2001): Described the concept of web usability.
Turner (2002): Illustrated a checklist for the evaluation of usability.
Blandford & Buchanan (2002): Explained usability in terms of technical, cognitive, and social design; also looked into future work on methods for analyzing usability.
Palmer (2002): Explained usability in the context of web usability.
Oulanov & Pajarillo (2002): Interface effectiveness as one of the most important aspects of interaction.
Matera et al. (2002): Gave "systematic usability evaluation".
Guenther & Pack (2003): Illustrated the difficulties in defining usability.
Campbell & Auction (2003): Explained usability as a relationship between tools and their users.
Abran et al. (2003): Referred to usability as a set of multiple concepts: performance of the system, execution time of a specified task, user satisfaction and ease of learning.
Whitney Quesenbery (2001, 2002, 2003): Presented "the five E's of usability": effectiveness, efficiency, engagement, error tolerance, and ease of learning.
Villers (2004), Dirges & Cohen (2005), Miller (2005): Expressed that usability evaluation methods should consider pedagogical factors.
Krug, S. (2006): Studied usability from the user's perspective, based on their experience.
Dee & Allen (2006): End-user interface conforms to usability principles.
Seffah, Donyaee, Kline & Padda (2006): Gave ten usability factors (efficiency, effectiveness, productivity, satisfaction, learnability, safety, trustfulness, accessibility, universality, and usefulness) associated with twenty-six usability measurement criteria.
Brophy & Craven (2007): Explained web usability.
Tom Tullis & Bill Albert (2008): Presented "Tips and Tricks for Measuring the User Experience".
Thomas S. Tullis (2009): Explained "Top Ten Myths about Usability".
Gardner-Bonneau (2010): Explained the effectiveness sustained by the software system when technical changes are made to it.
Jennifer C. Romano Bergstrom et al. (2011): Conducted iterative usability testing.
ISO 9241 (back to section 6.1.2)
ISO 9241: Ergonomic requirements for office work with visual display terminals (VDTs). The 17 parts group as follows:
General: Part 1 (General introduction), Part 2 (Guidance on task requirements), Part 11 (Usability specifications and measures), Part 12 (Information presentation), Part 13 (User guidance), Part 14 (Menu dialogues)
Material requirements: Part 3 (Visual display), Part 4 (Keyboard), Part 5 (Workstation), Part 7 (Display with reflections), Part 8 (Display colors), Part 9 (Non-keyboard input devices)
Environment: Part 6 (Environmental requirements)
Software: Part 10 (Dialogue), Part 15 (Command dialogues), Part 16 (Direct manipulation dialogues), Part 17 (Form filling dialogues)
(Source: [19])
Figure 4. ISO 9241: 17 parts.
ISO 9241 subject descriptions:
Part 1: General Introduction contains general information about the standard and provides an overview of each of the parts.
Part 2: Task Requirements discusses the enhancement of user interface efficiency and the well-being of users by applying practical ergonomic knowledge to the design of VDT work tasks.
Part 3: Display Requirements specifies requirements for visual displays and their images.
Part 4: Keyboard Requirements specifies the characteristics that determine effectiveness in accepting keystrokes from a user.
Part 5: Workstation Requirements specifies the design characteristics of workplaces in which VDTs are used.
Part 6: Environmental Requirements specifies characteristics of the working environment in which VDTs are used.
Part 7: Display Requirements with Reflections describes how to maintain usable and acceptable VDT image quality by evaluating the reflection properties of a screen and the image quality of the screen over a range of typical office lighting conditions.
Part 8: Requirements for Displayed Colors states specifications for display color images, color measurement metrics, and visual perception tests.
Part 9: Requirements for Non-keyboard Input Devices specifies requirements for the design and usability of input devices other than keyboards.
Part 10: Dialogue Principles specifies a set of high-level dialogue design principles for command languages, direct manipulation, and form-based entries.
Part 11: Guidance on Usability explains the way in which the user, equipment, task, and environment should be described, as part of the total system, and how usability can be specified and evaluated.
Part 12: Presentation of Information specifies requirements for the coding and formatting of information on computer screens.
Part 13: User Guidance specifies requirements and attributes to be considered in the design and evaluation of software user interfaces.
Part 14: Menu Dialogues provides conditional requirements and recommendations for menus in user-computer dialogues.
Part 15: Command Dialogues provides conditional recommendations for command languages.
Part 16: Direct Manipulation Dialogues provides guidance on the design of manipulation dialogues in which the user acts directly upon objects or object representations (icons) to be manipulated.
back to Software Quality - ISO 9126
ISO 9126-1 Software Quality Model: External and Internal Quality
Functionality: suitability, accuracy, interoperability, security, functionality compliance
Reliability: maturity, fault tolerance, recoverability, reliability compliance
Usability: understandability, learnability, operability, attractiveness, usability compliance
Efficiency: time behavior, resource utilization, efficiency compliance
Maintainability: analyzability, changeability, stability, testability, maintainability compliance
Portability: adaptability, installability, co-existence, replaceability, portability compliance
(Source: [18])
Figure 5. Software quality characteristics and sub-characteristics - ISO 9126-1
Quality characteristics:
Functionality
• Suitability - The capability of the software to provide an adequate set of functions for specified tasks and user objectives.
• Accuracy - The capability of the software to provide the right or agreed results or effects.
• Interoperability - The capability of the software to interact with one or more specified systems.
• Security - The capability of the software to prevent unintended access and resist deliberate attacks intended to gain unauthorized access to confidential information, to make unauthorized modifications to information or to the program so as to provide the attacker with some advantage, or to deny service to legitimate users.
Reliability
• Maturity - The capability of the software to avoid failure as a result of faults in the software.
• Fault tolerance - The capability of the software to maintain a specified level of performance in case of software faults or of infringement of its specified interface.
• Recoverability - The capability of the software to re-establish its level of performance and recover the data directly affected in the case of a failure.
Usability
• Understandability - The capability of the software product to enable the user to understand whether the software is suitable, and how it can be used for particular tasks and conditions of use.
• Learnability - The capability of the software product to enable the user to learn its application.
• Operability - The capability of the software product to enable the user to operate and control it.
• Attractiveness - The capability of the software product to be liked by the user.
Efficiency
• Time behavior - The capability of the software to provide appropriate response and processing times and throughput rates when performing its function under stated conditions.
• Resource utilization - The capability of the software to use appropriate resources in an appropriate time when the software performs its function under stated conditions.
Maintainability
• Analyzability - The capability of the software product to be diagnosed for deficiencies or causes of failures.
• Changeability - The capability of the software product to enable a specified modification to be implemented.
• Stability - The capability of the software to minimize unexpected effects from modifications of the software.
• Testability - The capability of the software product to enable modified software to be validated.
Portability
• Installability - The capability of the software to be installed in a specified environment.
• Co-existence - The capability of the software to co-exist with other independent software in a common environment sharing common resources.
• Replaceability - The capability of the software to be used in place of other specified software in the environment of that software.
• Adaptability - The capability of the software to be modified for different specified environments without applying actions or means other than those provided for this purpose for the software considered.
Quality in use - ISO/IEC 9126-4
(back)
Quality in use: effectiveness, productivity, safety, satisfaction.
Figure 6. Quality in use model. Adapted from ISO/IEC, 2001a (source: X)
How Internal Quality, External Quality and Quality in Use are linked.
[Figure 7 depicts the chain: user quality needs contribute to specifying external quality requirements, which in turn contribute to specifying internal quality requirements; internal quality is verified and indicates external quality, which is validated and indicates quality in use; quality in use feeds back, through use and feedback, into user quality needs.]
Figure 7. Relationships between the different aspects of quality. Adapted from ISO/IEC 2001a.
ISO 25010 Product Quality: characteristics and sub-characteristics
Functional Suitability: functional completeness, functional correctness, functional appropriateness
Performance Efficiency: time behavior, resource utilization, capacity
Compatibility: co-existence, interoperability
Usability: appropriateness recognizability, learnability, operability, user error protection, user interface aesthetics, accessibility
Reliability: maturity, availability, fault tolerance, recoverability
Security: confidentiality, integrity, non-repudiation, accountability, authenticity
Maintainability: modularity, reusability, analyzability, modifiability, testability
Portability: adaptability, installability, replaceability
ISO 25010 Quality in Use: characteristics and sub-characteristics
Effectiveness
Efficiency
Satisfaction: usefulness, trust, pleasure, comfort
Freedom from Risk: economic risk mitigation, health and safety risk mitigation, environmental risk mitigation
Context Coverage: context completeness, flexibility
McCall's Quality Model
(back to McCall)
Figure 9. McCall's Quality Model. Adapted from Pfleeger (2003) and McCall et al. (1977).
Boehm's Quality model (back)
Figure 10. Boehm's quality model. Adapted from Pfleeger (2003) and Boehm et al. (1976, 1978).