
The Journey of an Insight: Cost-Justifying User-Centered Design

Karan Shah, MSc Student – Faculty of Industrial Design Engineering, TU Delft

ROI (return on investment) analysis, cost-benefit analysis and business cases are commonly used to provide quantitative estimates of the potential value generated by UCD methods before they are actually implemented. Though they are comparatively better cost-justification tools, they sometimes require predicting the value of intangible variables (such as organizational image and brand perception) that cannot be readily quantified. This ambiguity makes these tools less analytically sound and poses a sense of risk to decision makers. In light of these findings, this paper proposes a 'tool', as a required component of the UCD process, that measures the progress and financial impact of UCD by mapping the journey of new insights. This tool validates, at a later stage, the assumptions made during cost-justification and thereby reinforces it, reassuring decision makers to invest in user-centered activities. It can be employed as a technique-combination alongside a business case (which carries the cost-justification data) to promote UCD.

ABSTRACT

This paper studies the effectiveness of several measures of success and tools of cost-justification in user-centered design (UCD) that companies commonly employ today. It analyzes the limitations of these approaches that create a sense of risk and skepticism among clients/directors, and investigates the factors that motivate their decisions to invest in user-centered activities. Driven by this understanding, it proposes a method that enhances the conventional paradigm of UCD by incorporating a persuasive cost-justification tool within it, making UCD self-accountable and eliminating the sense of risk and ambiguity that clients/directors tend to associate with it. The proposed method is a technique of tracing the journey of every insight as it surfaces during the UCD or other design phases, and of using randomized controlled trials to quantify the financial value generated specifically by UCD initiatives. When used in combination with a business case, it creates a system of cost-justification that is robust and trustworthy, since it later validates all the assumptions made initially in the business case.

Keywords

UCD, measures of UCD success, cost-justification, insight mapping, business case

INTRODUCTION

User-Centered Design (UCD) is a multi-disciplinary design approach that actively involves users to improve the understanding of task requirements, design iterations, product usability and evaluation [3]. Despite its proven success, clients/directors frequently feel skeptical about adopting UCD methods, given the high risk of false investments and the lack of predictability of success. They tend to compare UCD with other potential methods, and thus seek concrete quantitative estimates of the value that any UCD activity generates. As decision makers, it becomes crucial for them to demand quantitative evidence of the additional financial value generated by UCD methods before investing in them. The conventional UCD program does not include any measures of progress as a required component. While some companies apply qualitative and quantitative measures of the effectiveness of UCD, many companies have no measures in place at all [2]. All of the commonly applied measures are retrospective in nature and hence are not suitable to convince managers to make the initial investment decision.

METHODS

First, prior studies on the current state of UCD were researched. The extent to which the method was used and considered effective, and the ways in which companies implemented it, were examined. Several surveys focusing on the organizational impact and practice of UCD [3], measures of UCD effectiveness [2], the contribution of UCD methods to strategic usability [4] and the like were reviewed. Evaluative techniques like randomized controlled trials (RCTs), which are used in other processes (similar in context to UCD) to measure effectiveness [5], were also examined. After understanding the lack of standardization and credibility in the current measures of UCD effectiveness, a crucial need for quantitative evidence of returns before investing in UCD was recognized. This led to research into existing case studies of companies that had successfully used cost-justification methods like ROI analysis and cost-benefit analysis to develop business cases proposing projects in usability engineering, human factors and UCD [1][6][7][8]. An in-depth study of the implementation of cost-benefit analysis [4] was conducted to understand the minor flaws that make it incomplete as a foolproof cost-justification tool. To investigate further, the different ways in which practitioners have adapted and enhanced cost-benefit analysis and combined it with other methods to create a more holistic justification tool were researched.



RESULTS

UCD – Causes for Skepticism and Doubt

Rosenbaum et al. [4] found that the major obstacles to creating greater strategic impact with UCD methods in companies today include resource constraints, development and management doubts about the value of UCD or usability engineering, and a deficiency in usability knowledge. Insights from other similar studies reveal that many developers refrain from using usability engineering techniques because they find them intimidatingly complex, time-consuming, and too expensive to implement [2][3].

The Client/Director's Point of View

The initial investment to be made in UCD is a crucial cause of concern for the decision maker who has limited resources. Due to the lack of assurance of success and the large investment at stake, thorough justification and drivers of trust in the UCD program are necessary. Determining goals and objectives before executing a UCD activity is critical in order to focus the measurement process. Measurements taken during the user-centered activity can be analyzed and mapped back to the original goals. From the point of view of convincing a client/director, determining the financial value/benefit generated by a UCD activity should be one amongst the many goals of the UCD process. This highlights the critical need to alter the conventional UCD program and include in it measures of progress as a required component.

The most commonly stated quantitative and qualitative measures used by experienced UCD practitioners to gauge the effectiveness of UCD methods were [2]:

• External (customer) satisfaction
• Enhanced ease of use
• Impact on sales
• Reduced helpdesk calls
• Pre-release user testing/feedback
• External (customer) critical feedback
• Error/success rate in user testing
• Users' ability to complete required tasks
• Internal (company) critical feedback
• Savings in development time/costs
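As a minimal illustration (using hypothetical numbers, not data from the surveys cited above), two of these measures can be expressed numerically rather than anecdotally, for example a task success rate from user testing and the savings from reduced helpdesk calls:

```python
# Illustrative sketch with hypothetical data: quantifying two of the measures
# listed above -- task success rate from user testing, and savings from
# reduced helpdesk calls after a redesign.

# Hypothetical user-testing results: 1 = task completed, 0 = task failed
task_outcomes = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]
success_rate = sum(task_outcomes) / len(task_outcomes)

# Hypothetical helpdesk data before and after the redesign
calls_before, calls_after = 1200, 900        # calls per month
cost_per_call = 14.0                         # assumed handling cost per call (EUR)
monthly_support_savings = (calls_before - calls_after) * cost_per_call

print(f"Task success rate: {success_rate:.0%}")
print(f"Monthly support savings: EUR {monthly_support_savings:,.2f}")
```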

While measuring UCD effectiveness in these ways should help justify the cause of UCD to some extent, it does not entirely solve the problem. The main problem with this approach is that it is necessarily retrospective. It may justify the value of a completed project, but clients/directors do not always see the value delivered on a past project as helping them predict the level of value they might expect from a future one. In today's corporate culture, there is much less interest in what was accomplished yesterday than in the ability of an organization to demonstrate that it is on the critical path for the business tomorrow [1]. The evidence that justifies a new UCD project needs to assure additional financial benefits as a specific outcome of the UCD approach before the project starts.

All of the above-stated measures are retrospective in nature. Many of them involve intangible qualitative variables that are difficult to quantify, and hence add to the uncertainty of decision makers. When the same respondents were asked to cite measures of UCD success that they applied in their own companies, they ended up identifying new criteria such as acceptance of UCD by designers and design for user requirements. Several respondents reported that there were no measures in place at all [2]. In general, there is a clear lack of standardization in applied measures of UCD success, and no fixed, convincing and effective protocol was found. While customer satisfaction was mentioned by many UCD professionals as a primary measure that was tracked, there was no reference to setting satisfaction targets or comparing user results against them in the typical UCD process that they described [3]. In other words, they saw the measurement of customer satisfaction as sitting outside the UCD process. It was concluded that the typical UCD program does not include any measures of success as a required component.

Another dimension of the problem that the client/director with limited resources faces is not whether an activity is good or bad in isolation, but rather which of the potential investments (in Method A or Method B) best enables the organization to achieve its goals [1]. Should resources be invested in the UCD activity or in something else? To address this problem, the usability engineering teams of companies use business cases that help them make well-reasoned investment decisions [6]. A company tries to allocate resources to projects that will accomplish its organizational goals, and directors are forced to decide how to distribute limited resources among those goals and their benefits. This protocol can be applied in a similar manner by UCD teams as well. A business case claims to provide an objective and explicit basis for these investment decisions by employing cost-justification techniques such as return on investment (ROI) analysis and cost-benefit analysis, which produce quantitative projections of the net financial value generated by competing projects so that they can be judged against each other.
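As a rough sketch of the arithmetic such a business case rests on (all project names and figures below are hypothetical, not taken from the cited studies), each competing investment is reduced to estimated costs and quantifiable benefits, and the options are compared by net benefit and ROI:

```python
# Minimal sketch of business-case arithmetic with hypothetical figures:
# each competing project is reduced to an estimated cost and an estimated
# quantifiable benefit, then ranked by net benefit and return on investment.

projects = {
    # project name: (estimated cost, estimated quantifiable benefit)
    "UCD programme":        (60_000, 150_000),
    "Extra marketing push": (60_000, 110_000),
}

for name, (cost, benefit) in projects.items():
    net = benefit - cost
    roi = net / cost  # ROI = (benefit - cost) / cost
    print(f"{name}: net benefit = {net:,}, ROI = {roi:.0%}")
```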

Regarding the nature of preferred methods, informal, low-cost methods like iterative design, usability evaluation, task analysis, informal expert review, and field studies were observed to be the most widely used [3]. An interesting observation made by Vredenburg et al. in their survey [2] was that methods like user requirements analysis, which are usually more expensive and difficult to execute, were considered very important in practice but were mentioned by only a few practitioners as commonly used. Observations like these show that the choice of the preferred UCD methods is driven by a strong concern about the cost-benefit trade-off.



Cost Justification

In human factors cost-benefit analysis, one general approach is to compare the costs and benefits of a proposed usability-engineered product with those of a product developed as usual, without human factors. If there are several competing UCD proposals for a product's development, they can be contrasted with each other too. The goal of the analysis is to determine the cash value of the positive difference in a product due to human factors and UCD, by identifying and financially quantifying each of the expected costs (e.g., human factors and programming personnel, end users, walkthroughs and testing) and benefits (e.g., lower training costs, higher sales, increased productivity) of usability engineering [1].

In the case of UCD, estimates of all significant costs of the UCD activity and of the benefit variables over the project's life cycle are required in order to analyze the relationship between the two. While most of the costs are tangible, the benefits of UCD are a multivariate function of end-user productivity improvements, customer or company benefits, increases in developer productivity and the like. They have both tangible and intangible variables. Tangible variables can be quantified financially, whereas intangibles, such as organizational image and brand perception, are not easily measured; for example, although it is possible to measure the impact of an improved web interface on user productivity, the task of cost-justification is considerably more complex than simply measuring human performance in using the website. Methods of quantifying such intangible variables may or may not surface at a later stage of the project. Hence, a cost-benefit analysis of a given UCD project can only factor in those variables that are identifiable and measurable, and assumptions have to be made to account for the quintessential (and at times unavoidable) intangible variables.

Sassone and Schaffer (1978) claimed that because cost-benefit models and studies are usually intended to be decision aids, they must be analytically sound, accurate, credible to both the user and the audience, and tractable [4]. Upon further researching the effectiveness of cost-benefit analysis as a tool for cost-justification in usability, it became clear that the assumptions used to justify costs and benefits, and the multiplication approach to estimating value, are not always persuasive. The assumptions strongly resemble the justifications used to initiate projects that, in the directors' experience, often do not fulfill the promises initially made. Directors have realized that in this field the multiplication approach to estimating value can neglect other variables in the environment that erase gains [1]. Moreover, business managers who are unable to quantify a part of the business in a way that allows assessment and control tend to eliminate that part from their business.

DISCUSSION

The key findings from the literature research reveal that there is no single effective method or tool that measures or foretells the success of a UCD approach in its entirety, and no flawless measurement instrument that can be used to convince decision makers to invest in UCD. While the commonly applied measures of UCD effectiveness are erratic, retrospective and lack standardization, proven tools of cost-justification like cost-benefit analysis are unable to factor in the intangible variables of UCD benefits, leaving space for inaccuracy and doubt.

To build a successful measurement instrument for UCD, a framework is needed in which value for the company can be explored, measured, and analyzed from multiple perspectives. The task of convincing decision makers to invest in UCD is extremely challenging, and from the understanding acquired through the aforementioned research, two approaches seem possible:

1. Providing a convincing business case that states quantitative evidence, based on well-researched and logical assumptions, that a proposed UCD project will deliver added value.
2. Making the UCD methods self-accountable by incorporating standardized and foolproof measures of success within them.

While both of these approaches have scope for imperfections by themselves, the most effective way to convince decision makers would be a carefully formulated 'technique-combination': an ideal combination of both, which simultaneously provides a convincing business case and the assurance that the UCD method includes a foolproof measure of success that will later prove the accuracy of the assumptions made in the business case. The assurance that the proposed UCD method is self-accountable is an important ingredient that earns the investor's trust by offering psychological comfort. While part of this method is retrospective (and hence does not assure additional financial benefits as a specific outcome of the UCD approach before the project starts), its incorporation in UCD can give decision makers a sense of security (because of the self-accountability) and make it easier for them to make the investment decision.

To make the UCD method self-accountable, this paper proposes a method that gauges the benefits derived specifically from the UCD component of the design process. The net value generated by UCD (as derived from this method) should ideally match the value addition by UCD that was projected in the business case. The method draws on the very essence of any user-centered activity: the generation of meaningful insights from user experiences and feedback. Meaningful insights (with potential value) can surface during any stage of the user research or the design phases that follow.


Each one of these insights is eventually translated, in some way or other, into a tangible product attribute (be it aesthetic, functional or experiential), and it adds value to the experience of using the product by affecting the user in conscious or subconscious ways. While one insight may result in multiple product attributes, multiple insights generated during different stages of UCD may also lead to a single product attribute. The key lies in documenting every insight methodically as it emerges during the product development process and tracing it over time to the product attributes it leads to. There will be multiple links and cross-connections. The complexity of this structure can be deconstructed into the individual journeys of every insight in an insight map, as shown in Figure 1.
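One possible way to represent such an insight map in practice is sketched below; the field names and example insights are assumptions made for illustration, not part of the proposed method itself. Each insight records the phase in which it surfaced, whether that phase was a UCD activity, and the product attributes it eventually led to, so that every attribute can later be traced back to its origin (a categorization used in the trial described further below):

```python
# Sketch of one possible insight map (names and fields are illustrative
# assumptions): every insight is logged with the phase in which it surfaced
# and linked to the product attributes it leads to, so each attribute can
# later be traced back to UCD or non-UCD origins.

from dataclasses import dataclass, field

@dataclass
class Insight:
    description: str
    phase: str                     # e.g. "usability test", "styling review"
    from_ucd: bool                 # True if the phase was a UCD activity
    attributes: list = field(default_factory=list)  # product attributes it led to

insights = [
    Insight("Users misread the dosage scale", "usability test", True,
            ["high-contrast scale markings"]),
    Insight("Cap is hard to open one-handed", "contextual interviews", True,
            ["lever-style cap"]),
    Insight("Housing should match brand colours", "styling review", False,
            ["blue housing"]),
]

# Attributes tracing back to UCD-generated insights (Set 1) vs. the rest (Set 2)
set_1 = {a for i in insights if i.from_ucd for a in i.attributes}
set_2 = {a for i in insights if not i.from_ucd for a in i.attributes}
print("Set 1 (UCD-derived):", set_1)
print("Set 2 (other phases):", set_2)
```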

In situations where the client/director seeks validation, the eventual aim is to gauge the value of every insight in financial terms, so that the aggregate value of all insights that emerged from UCD can be calculated. To facilitate this, this paper proposes the use of randomized controlled trials (RCTs) [5].

To begin with, all the product attributes can be categorized into two sets:

Set 1 – Attributes that trace back to insights generated specifically through UCD techniques.
Set 2 – Attributes that trace back to insights generated during any of the design phases other than the UCD ones.

A randomized controlled trial can then be implemented using two product prototypes:

Prototype 1 – Has all the attributes in Set 2.
Prototype 2 – Has all the attributes in Set 1 and Set 2.

By introducing these two prototypes to the target user group for a planned trial duration and analyzing the sales, feedback and results, the returns per unit of Prototype 1 and Prototype 2 can be quantified. Prototype 2 should ideally generate higher returns than Prototype 1, and the difference between the two portrays, in tangible terms, the additional value generated as a consequence of performing UCD. In ideal circumstances, this additional value should match the cash value of the positive difference in the product that was estimated in the business case, and hence validate the assumptions made in it. There is scope for fallacy in this approach to validation, but if the initial business case was thorough and based on well-researched, logical assumptions, these values should match, allowing some contingency for human or unpredictable factors.
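A minimal sketch of how the trial data might be analyzed is given below; the group sizes, revenue figures and distributions are purely hypothetical. Participants are randomly assigned to one of the two prototypes, and the difference in average returns per unit is read as the additional value attributable to the UCD-derived attributes:

```python
# Hypothetical sketch of the trial analysis: users are randomly assigned to
# one of the two prototypes, per-unit returns are compared, and the gap is
# interpreted as the value attributable to the UCD-derived attributes (Set 1).
import random
import statistics

random.seed(7)
participants = [f"user_{i}" for i in range(200)]
random.shuffle(participants)
group_1, group_2 = participants[:100], participants[100:]  # Prototype 1 vs Prototype 2

# Hypothetical revenue per participant observed during the trial period
revenue_1 = [random.gauss(40, 8) for _ in group_1]  # Prototype 1: Set 2 attributes only
revenue_2 = [random.gauss(48, 8) for _ in group_2]  # Prototype 2: Set 1 + Set 2 attributes

ucd_value_per_unit = statistics.mean(revenue_2) - statistics.mean(revenue_1)
print(f"Estimated additional value per unit due to UCD: {ucd_value_per_unit:.2f}")
# In the proposed method, this figure is compared with the value addition
# projected for UCD in the business case.
```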

An added potential benefit of incorporating this method is that the prospect of eventual validation will push UCD teams to pitch more thorough and foolproof business cases to their clients/directors. It makes the entire system more robust and self-regulatory.

CONCLUSION

This method is proposed specifically to help convince clients/directors to invest in a UCD approach. In essence, the proposed technique-combination of projective cost-justification (the business case) and verification methods incorporated within UCD (mapping the journeys of insights) adds to the conventional paradigm of UCD by making it self-accountable and persuasive. It needs thorough testing before it can be deemed effective; it will have to be refined and honed over trials in diverse real-world projects, and mastered with experience. Over time, faster and more cost-effective alternatives to the randomized controlled trials may surface as the method is put into practice. Based on the research and the proposed hypothesis, it can be logically concluded that for large organizations investing colossal amounts in user research and product development, investing a little more to incorporate this method might prove worthwhile.

Figure 1. The journey of insights through prototype development and RCT to their eventual financial value.



REFERENCES

1. Lund, A. M. (1997). Another approach to justifying the cost of usability. Interactions, 4(3), 48-56.

2. Vredenburg, K., Mao, J. Y., Smith, P. W., & Carey, T. (2002, April). A survey of user-centered design practice. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: Changing Our World, Changing Ourselves (pp. 471-478). ACM.

3. Mao, J. Y., Vredenburg, K., Smith, P. W., & Carey, T. (2005). The state of user-centered design practice. Communications of the ACM, 48(3), 105-109.

4. Bias, R. G., & Mayhew, D. J. (Eds.). (2005). Cost-justifying usability: An update for the Internet age. Morgan Kaufmann.

5. Haynes, L., Goldacre, B., & Torgerson, D. (2012). Test, Learn, Adapt: Developing Public Policy with Randomised Controlled Trials. Cabinet Office Behavioural Insights Team.

6. Herman, J. (2004, April). A process for creating the business case for user experience projects. In CHI '04 Extended Abstracts on Human Factors in Computing Systems (pp. 1413-1416). ACM.

7. Graefe, T. M., Keenan, S. L., & Bowen, K. C. (2003, April). Meeting the challenge of measuring return on investment for user centered development. In CHI '03 Extended Abstracts on Human Factors in Computing Systems (pp. 860-861). ACM.

8. Marcus, A. (2005). User interface design's return on investment: Examples and statistics. In Cost-justifying usability: An update for the Internet age. San Francisco: Morgan Kaufmann.


