We Tried to Warn You: The Organizational Architecture of Failure

Peter Jones
Principal, Redesign Research
Introduction

At the IA Summit 2007, I presented a position and supporting case study illustrating the roles of IA in organizational process, and discussed the potential we have for preventing the mistakes that can lead to organizational meltdowns. That presentation, "We Tried to Warn You! Our Role in Massive Organizational Failure," portrayed organizational failure as a multidimensional, socially mediated series of mistakes that can lead to enterprise-wide breakdown. I believe these massive, internally generated failures are more prevalent in larger companies with centralized decision making. The mistakes are compounded by managers' rational decisions that fail to imagine the "organizationally unimaginable." My case study was based on two companies and projects I knew sufficiently from personal experience and from publicly available information, backed up by interview data.

There are many kinds of failure in large, complex organizations – breakdowns occur at every level of interaction, from interpersonal communication to enterprise finance. Some of these failures are everyday and even helpful, allowing us to safely and iteratively learn and improve communications and practices. Other failures – what I call large-scale – result from accumulated bad decisions, organizational defensiveness, and embedded organizational values that prevent people from confronting these issues in real time as they occur. So while it may be difficult to acknowledge your own personal responsibility for an everyday screwup, it is impossible to get in front of the train of massive organizational failure once it has gained momentum and the whole company is riding it straight over the cliff. There is no accountability for these types of failures, and usually no learning either. Leaders do not often reveal their "integrity moment" for these breakdowns. Similar failures could happen again to the same firm.

I believe we all have a role to play in detecting, anticipating, and confronting the decisions that lead to breakdowns that threaten the organization's very existence. In fact, the user experience function works closer to the real world of the customer than any other organizational role. We have a unique responsibility to detect and assess the potential for product and strategic failure. We must try to stop the train, even if we are many steps removed from the larger decision-making process at the root of these failures.

Organizations as Wicked Problems
Consider the following scenario: A $2B computer systems integrator spends most of a decade developing its next-generation platform and product, spending untold amounts on labor, licenses, contracting, testing, sales and marketing, and facilities. Due to the extreme complexity of the application (user) domain, the project takes much longer than planned. Three technology waves come and go, but are accommodated in the development strategy: proprietary client-server, Windows NT application, Internet + rich client. By the time Web Services technologies matured, the
product was finally released as a server-based, rich client application. However, the application was designed too rigidly for the flexible configurations necessary for the customer base, and the platform's performance compared poorly to the current product the project was designed to replace. Customers failed to adopt the product, and it was a huge write-off of most of a decade's worth of investment. The company recovered by facelifting its existing flagship product to embrace contemporary user interface design standards, but never developed a replacement product. A similar situation occurred with the CAD systems house SDRC, whose story ended as part two of an EDS fire sale acquisition of SDRC and Metaphase.

These failures may be more common than we care to admit. From a business and design perspective, several questions come to mind:

• What were the triggering mistakes that led to the failure?
• At what point in such a project could anyone in the organization have predicted an adoption failure?
• What did designers do that contributed to the problem? What could IA/designers have done instead?
• Were IA/designers able to detect the problems that led to failure? Were they able to effectively project this and make a case based on foreseen risks?
• If people act rationally and make apparently sound decisions, where did failures actually happen?
I suggest this situation was not an application design failure; it was a total organizational failure. In fact, it is a fairly common type of failure, and preventable. Obviously the market outcome was not the actual failure point. But release to market is the product's judgment day: the organization must recognize failure when its goals utterly fail with customers. So if this is the case, where did the failures occur?

It may be impossible to see whether and where failures will occur, for many reasons. People are generally bad at predicting the systemic outcomes of situational actions – product managers cannot see how an interface design issue could lead to market failure. People are also very bad at predicting improbable events, and failure especially, due to the organizational bias against recognizing failures. Organizational actors are unwilling to acknowledge small failures when they have occurred, let alone large failures. Business participants have unreasonably optimistic expectations for market performance, clouding their willingness to deal with emergent risks. We generally have strong biases toward crediting our own skills when things go well, and toward blaming external contingencies when things go badly. As Taleb (2007) says in The Black Swan:

"We humans are the victims of an asymmetry in the perception of random events. We attribute our success to our skills, and our failures to external events outside our control, namely to randomness. We feel responsible for the good stuff, but not for the bad. This causes us to think that we are better than others at whatever we do for a living. Ninety-four percent of Swedes believe that their driving skills put them in the top 50 percent of Swedish drivers; 84 percent of Frenchmen feel that their lovemaking abilities put them in the top half of French lovers." (p. 152)
Organizations are complex, self-organizing, socio-technical systems. Furthermore, they can be considered "wicked problems," as defined by Rittel and Webber (1973). Wicked problems require design thinking; they can be designed-to, but not necessarily designed. They cannot be "solved," at least not in the analytical approaches of so-called rational decision makers. Rittel and Webber identify 10 characteristics of a wicked problem, most of which apply to large organizations as they exist, without even identifying an initial problem to be considered:

1. There is no definite formulation of a wicked problem.
2. Wicked problems have no stopping rules (you don't know when you're done).
3. Solutions to wicked problems are not true-or-false, but better or worse.
4. There is no immediate and no ultimate test of a solution to a wicked problem.
5. Every solution to a wicked problem is a "one-shot operation"; because there is no opportunity to learn by trial-and-error, every attempt counts significantly.
6. Wicked problems do not have an enumerable set of potential solutions.
7. Every wicked problem is essentially unique.
8. Every wicked problem can be considered to be a symptom of another [wicked] problem.
9. The causes of a wicked problem can be explained in numerous ways.
10. The planner has no right to be wrong.
These are attributes of the well-functioning organization, and they apply as well to one pitched into the chaos of product or planning failure. The wicked problem frame also helps explain why we cannot trace a series of decisions to the outcomes of failure – there are too many alternative options or explanations within such a complex field. Considering failure as a wicked problem may offer a way out of the mess (as a design problem). But there will be no way to trace back to, or even learn from, the originating events that the organization might have caught early enough to prevent the massive failure chain. So we should view failure as an organizational dynamic, not as an event. By the time the signal failure event occurs (product adoption failure in the intended market), the organizational failure is ancient history.

Given the inherent complexity of large organizations, the dynamics of markets and timing products to market needs, and the interactions of hundreds of people in large projects, where do we start to look for the first cracks of large-scale failure?
Types of Organizational Failure
How do we even know when an organization fails? What are the differences between a major product failure (involving function or adoption) and a business failure that threatens the organization? An organizational-level failure is a recognizable event, one which typically follows a series of antecedent events or decisions that led to the large-scale breakdown. My working definition: “When significant initiatives critical to business strategy fail to meet their highest-priority stated goals.”
When the breakdown affects everyone in the organization, we might say the organization has failed as a whole, even if only a small number of actors are to blame. When this happens with small companies, such as the start-up I worked with early in my career as a human factors engineer, the source and the impact are obvious.
Our company of 10 people grew to nearly 20 in a month to scale up for a large IBM contract. All resources were brought into alignment to serve this contract, but after about 6 months, IBM cut the contract – a manager senior to our project lead hired a truck and carted away all our work product and computers, leaving us literally sitting at empty desks. We discovered that IBM had 3 internal projects working on the same product, and they selected the internal team that had finished first. That team performed quickly, but their poor quality led to the product's miserable failure in the marketplace. IBM suffered a major product failure, but not organizational failure. In Dayton, meanwhile, all of us except the company principals were out of work, and the firm folded within a year. Small organizations have little resilience to protect them when mistakes happen. The demise of our start-up was caused by a direct external decision, and no amount of risk management planning would have landed us softly.

I also consulted with a rapidly growing technology company in California (Invisible Worlds) which landed hard in late 2000, along with many other tech firms and start-ups. Risk planning, or its equivalent, kept the product alive – but this start-up, along with firms large and small, disappeared during the dot-bomb year. To what extent were internal dynamics to blame for these organizational failures? In retrospect, many of the dot-bombs had terrible business plans, no sustainable business models, and even less organic demand for their services. Most would have failed in a normal business climate. They floated up with the rise of investor sentiment, and crashed to reality as a class of enterprises, all of them able to save face by blaming external forces for organizational failure.
Organizational Architecture and Failure Points
Recognizing this is a journal for information architects, I'd like to extend our architectural model to include organizational structures and dynamics. Organizational architecture may have been first conceived in the 1992 HBR article "The CEO as organizational architect." The phrase has seen some academic treatment, but it is not found to a great extent in the organizational science literature or in MBA courses. Organizations are "chaordic," as Dee Hock termed it, teetering between chaotic movement and ordered structures, never staying put long enough to have an enduring architectural mapping. However, structural metaphors are useful for planning, and good planning keeps organizations from failing. So let's consider the term organizational architecture metaphorical, but valuable – giving us a consistent way of teasing apart the different components of a large organization related to decision, action, and role definition in large project teams.
Let's start with organizational architecture and consider its relationships to information architecture. The continuity of control and information exchange between the macro (enterprise) and micro (product and information) architectures can be observed in intra-organizational communications. We could honestly state that all such failures originate as failures in communications. Organizational structure and processes are major components, but the idea of "an architecture," as we should well know from IA, is not merely structural. An architectural approach to organizational design involves at least:

• Structures – Enterprise, organizational, departmental, networks
• Business processes – Product fulfillment, product development, customer service
• Products – Structures and processes associated with products sold to markets
• Practices – User experience, project management, software design
• People and roles – Titles, positions, assigned and informal roles
• Finance – Accounting and financial rules that embed priorities and values
• Communication rules – Explicit and implicit rules of communication and coordination
• Styles of interaction – How work gets done, how people work together, formal behaviors
• Values – Explicit and tacit values, priorities in decision making
Since we would need a book to describe the functions and relationships within and between these dimensions, let's see if the whole view suffices. Each of these components is a significant function in the organizational mix, all reliant on communication to maintain their role and position in the internal architecture. While we may find a single communication point (a leader) in structures and people, most organizational functions are largely self-organizing, continuously reified through self-managing communication. They will not have a single failure point identifiable in a communication chain, because nearly all organizational conversations are redundant and will be propagated by other voices and in other formats. Really bad decisions are caught in their early stages of communication, and become less bad through mediation by other players. So organizations persist largely because they have lots of backup. In the process of backup, we also see a lot of cover-up, a significant amount of consensus denial around the biggest failures. The stories people want to hear get repeated. You can see why everyday failures are easy to catch compared to royal breakdowns.

So are we even capable of discerning when a large-scale failure of the organizational system is imminent? Organizational failure is not a popular meme; employees can handle a project failure, but to acknowledge that the firm broke down – as a system – is another matter. According to Chris Argyris (1992), organizational defensive routines are "any routine policies or actions that are intended to circumvent the experience of embarrassment or threat by bypassing the situations that may trigger these responses. Organizational defensive routines make it unlikely that the organization will address the factors that caused the embarrassment or threat in the first place" (p. 164). Because of these organizational defenses, most managers will place the blame for such failure on individuals rather than on the consequences of poor decisions or other root causes, and will deflect critique of general management or decision-making processes.

Figure 1 shows a pertinent view of the case organization, simplifying the architecture (to People, Process, Product, and Project) so that differences in structure, process, and timing can be drawn. Projects are not considered part of architecture, but they reveal time dynamics and mobilize all the constituents of architecture. Projects are also where failures originate. The timeline labeled "Feedback cycle" shows how smaller projects cycled user and market feedback quickly enough to impact product decisions and design, usually before initial release. Due to the significant scale, major rollout, and long sales cycle of the Retail Store Management product, the market feedback (sales) took most of a year to reach executives. By then, the trail had gone cold.
[Figure 1 depicts the case organization's People (Organization / Management / Employees: Strategic / Exec Management, Product Management, Software Development, Marketing, UX / UI / IA – ongoing, with people changing roles), Process, and Product dimensions against a project timeline: Project A, Retail Store Management (5-6 years); Project B, Supply Pipeline (2 years); Project C, Online Education (1 year). Feedback cycles: Supply released, 2-4 months; Online Ed released, 4-8 months; Retail Product released, 8-12 months.]

Figure 1. Failure case study organization – Products and project timeframes.
Over the project lifespan of Retail Store Management, the organization:

• Planned a "revolutionary," not evolutionary, product
• Spun off and even sequestered the development team – to "innovate" undisturbed by the pedestrian projects of the going concern
• Spent years developing "best practices" for technology, development, and the retail practices embodied in the product
• Kept the project a relative secret from the rest of the company until close to initial release
• Evolved the technology significantly over time as paradigms changed, starting as an NT client-server application, then a distributed database, and finally a Web-enabled rich client interface

Large-scale failures can occur when the work domain and potential user acceptance (motivations and constraints) are not well understood. When a new product cannot fail, organizations will prohibit acknowledging even minor failures, with cumulative failures to learn building from small mistakes. This can lead to one very big failure at the product or organizational level.
We can see that this kind of situation (as shown in Figure 1) generates many opportunities for communications to fail, leading to decisions based on biased information, and so on. From an abstract perspective, modeling the intra-organizational interactions as "boxes and arrows," we may find it a simple exercise to "fix" these problems. We can recommend (in this organization) actions such as educating project managers about UX, creating marketing-friendly usability sessions to enlist support from internal competitors, making well-timed pitches to senior management with line management support, et cetera. But in reality, it usually does not work out this way.

From a macro perspective, when large projects that "cannot fail" are managed aggressively in large organizations, the user experience function is typically subordinated to project management, product management, and development. User experience – whether expressing its user-centered design or usability roles – can be perceived as introducing new variables to a set of baselined requirements, regardless of lifecycle model (waterfall, incremental, or even Agile). To make it worse (from the viewpoint of product or requirements management), we promote requirements changes from the position of high authority conferred by reliance on user data.

Under the organizational pressures of executing a top-down managed product strategy, leadership often closes ranks around the objectives. Complete alignment to strategy is expected across the entire team. Late-arriving user experience "findings" that could conflict with internal strategy will be treated as threatening, not helpful. With such large, cross-departmental projects, warning signs drawn from user data can simply be disregarded as not fitting the current organizational frame. And if user studies are performed, significant conflicts with strategy can be discounted as the analyst's interpretation. There are battles we sometimes cannot win. In such plights, user experience professionals must draw on inner resources of experience, intuition, and common sense and develop alternatives to standard methods and processes. The quality of interpersonal communications may make more of a difference than any user data.

In Part II, we will explore the factors of the user experience role, the timing dynamics of large projects, and several alternatives to the framing of UX roles and organizations today.
Part II. Failure is a Matter of Timing
In Part I of We Tried to Warn You, three themes were developed: organizations as wicked problems, the differences of failure leverage in small versus large organizations, and the description of failure points. These should be considered exploratory elements of organizational architecture, from a communications information architecture perspective.

While the organizational studies literature has much to offer about organizational learning mechanisms, we find very little about failure from the perspective of product management, management processes, or organizational communications. Researching failure is similar to researching the business strategies of firms that went out of business (e.g., Raynor, 2007). They are just not available for us to analyze; they are either covered-up embarrassments, or they become transformed, over time and at much expense, into "successes." In The Strategy Paradox, Raynor describes the "survivor's bias" of business research, pointing out that internal data is unavailable to researchers for the dark matter of the business universe, those that go under. Raynor shows how a large but unknowable proportion of businesses fail pursuing nearly perfect strategies. (Going concerns often survive
because of their mediocre strategies, avoiding the hazards of extreme strategies.) A major difference in the current discussion is that organizational failure as defined here does not bring down the firm itself, at least not directly, as a risky strategy might. But it often leads to complete reorganization of divisions and large projects, which should be recognized as a significant failure at the organizational level.

One reason we are unlikely to assess the organization as having failed is the temporal difference between failure triggers and the shared experience of observable events. Any product failure will affect the organization, but some failures are truly organizational. They may be more difficult to observe. If a prototype design fails quickly (within a single usability test period), a project starts and fails within 6 months, and a product takes perhaps a year to determine its failure – what about an organization? We should expect a much longer cycle from originating failure event to general acknowledgement of failure, perhaps 2-5 years. There are different timeframes to consider with organizational versus project or product failure. In this case study, the failure was not observable until after a year or so of unexpectedly weak sales, with managers and support dealing with customer resistance to the new product. However, decisions made years earlier set in place the processes that eventuated in adoption failure. Tracing the propagation of decisions through resulting actions, we also find huge differences in temporal response between levels of hierarchy (found in all large organizations).

Failures can occur when a chain of related decisions, based on bad assumptions, propagates over time. These micro-failures may have appeared at the time to be "mere" communication problems. In our case study, product requirements were defined based on industry best practices, guided by experts and product buyers, but excluding user feedback on requirements. Requirements were managed by senior product managers and were maintained as frozen specifications so that development decisions could be managed. Requirements became treated as if validated by their continuing existence and support by product managers. But with no evaluation by end users of the embodied requirements – no process prototype was demonstrated – product managers and developers had no insight into the dire future consequences of product architecture decisions.

Consider the requisite timing of user research and design decisions in almost any project. A cycle of less than a month is a typical loop for integrating design recommendations from usability results into an iterative product lifecycle. If the design process is NOT iterative, we see the biggest temporal gaps of all. There is no way to travel back in time to revise requirements unless the tester calls a "show-stopper," and that would be an unlikely call from an internal usability evaluator. In a waterfall or incremental development process – which remains typical for these large-scale products – usability tests often have little meaningful impact on requirements and development. This approach is merely fine-tuning foregone conclusions. Here we find the seeds of product failure, but the organization colludes to defend the project timelines, to save face, to maintain leadership confidence. Usability colludes to ensure it has a future on the job. With massive failures, everyone is partly to blame, but nobody accepts personal responsibility.
The Roles of User Experience

As Figure 1 shows, UX reported to development management, and was further subjected to product and project management directives. In many firms, UX has little independence and literally no requirements authority, and in this case it was a dotted-line report under three competing authorities. That being the case, by the time formal usability tests were scheduled, requirements and development were too deeply committed to consider any significant changes from user research. With the pressures of release schedules looming, usability was both rushed and controlled to ensure user feedback was restricted to issues contained within the scope of possible change and with minor schedule impact.

By the time usability testing was conducted, the scope was too narrowly defined to admit any ecologically valid results. Usability test cases were defined by product managers to test user response to individual transactions, and not the systematic processes inherent in the everyday complexity of retail, service, or financial work. Testing occurred in a rented facility, and not in the retail store itself. The context of use was defined within a job role, and not in terms of productivity or throughput. Individual screen views were tested in isolation, not in the context of their relationship to the demands of real work pressures – response time, database access time, ability to learn navigation and to quickly navigate between common transactions. Sequences of common, everyday interactions were not evaluated. And so on.

The product team's enthusiasm for the new and innovative (including UX in its design role) may shield it from hearing users' authentic preferences for, and skill with, a well-supported current system the new product intends to replace. And when taking a conventional approach to usability, such fundamental disconnects with the user domain may not even be observable. Many well-tested products have been released only to fail in the marketplace due to widespread user preference for maintaining their current, established, well-known system. This is especially so if the work practice requires considerable learning and use of an earlier product over time, as happened in our retail system case. Very expensive and well-documented failures abound due to user preference for a well-established installed base, with notorious examples in air traffic control, government and security, medical / patient information systems, and transportation systems.

When UX is "embedded" as part of a large team, accountable to product or project management, the natural bias is to expect the design to succeed. When UX designers must also run the usability tests (as in this case), we cannot expect the "tester" to independently evaluate the "designer's" work. The same person in two opposing roles, the UX team reporting to product, and restricted latitude for design change (due to impossible delivery deadlines) – we should consider this a design failure in the making. In this situation, it appears UX was not allowed to be effective, even if the usability team understood how to work around management to make a case for the impact of its discoveries. But the UX team may not have understood the possible impact at the time, only in retrospect after the product failed adoption. We have no analytical or qualitative tools for predicting the degree of market adoption based on even well-designed usability evaluations.
Determining the likelihood of future product adoption failure across nationwide or international markets is a judgment call, even with survey data of sufficient power to estimate the population. Because of the show-stopping
impact of advancing such a judgment, it's unlikely the low-status user experience role will push the case, even if such a case is clearly warranted from user research.

The Racket: The Organization as Self-Protection System
Modern organizations are designed not to fail. But they will fail at times when pursuing their mission in a competitive marketplace. Most large organizations that endure become resilient in their adaptation to changing market conditions. They have plenty of early warning systems built into their processes – hierarchical management, financial reports, project management and stage-gate processes. The risk of failure becomes distributed across an ever-larger number of employees, reducing risk through assumed due diligence in execution. The social networks of people working in large companies often prevent the worst decisions from gaining traction. But the same networks also maintain poor decisions if they are big enough, are supported by management, and cannot be directly challenged. Groupthink prevails when people conspire to maintain silence about bad decisions. We then convince ourselves that leadership will win out over the risks; the strategy will work if we give it time.

Argyris' organizational learning theory shows that people in large organizations are often unable to acknowledge the long-term implications of learning situations. While people are very good at learning from everyday mistakes, they don't connect the dots back to the larger failure that everyone is accommodating. In what Argyris calls "double-loop learning," the goal is to learn from an outcome and reconfigure the governing variables of the situation's pattern to avoid the problem in the future. (Single-loop learning is merely changing one's actions in response to the outcome.) Argyris' research suggests all organizations have difficulties with double-loop learning; organizations build defenses against this learning because it requires confrontation, reflection, and change of governance, decision processes, and values-in-use. It's much easier to just change one's behavior.

What can UX do about it?
User experience/IA clearly plays a significant role as an early warning system for market failure. Context-sensitive user research is perhaps the best tool available for informed judgment of potential user adoption issues. Several common barriers to communicating this informed judgment have been discussed:

• Organizational defenses prevent anyone from advancing theories of failure before failure happens.
• UX is positioned in large organizations in a subordinate role, and may have difficulty planning and conducting the appropriate research.
• UX, reporting to product management, will have difficulty advancing cases with strategic implications, especially involving product failure.
• Groupthink – people on teams protect each other and become convinced everything will work out.
• Timing – by the time such judgments may be formed, the timeframes for realistic responsive action have disappeared.
Given the history of organizations and the typical situating of user experience roles in large organizations, what advice can we glean from the case study? Let's consider leveraging the implicit roles of UX, rather than the mainstream dimensions of skill and practice development.

UX serves an Influencing role – so let's influence

User experience has the privilege of being available on the front lines of product design, research, and testing. But it does not carry substantial organizational authority. In a showdown between product management and UX, product wins every time. Product is responsible for revenue, and must live or die by the calls they make. So UX should look to its direct internal client's needs. UX should fit research and recommendations to the context of product requirements, adapting to the goals and language of requirements management.

We (UX) must design sufficient variability into prototypes to be able to effectively test expected variances in preference and work practice differences. We must design our test practices to enable determinations from user data as to whether the product requirements fit the context of the user's work and needs. We should be able to determine, in effect, whether we are designing for a product, or designing the right product in the first place. Designing the right product means getting the requirements right.

Because we are closest to the end user throughout the entire product development lifecycle, UX plays a vital early warning role for product requirements and adoption issues. But since that is not an explicit role, we can only serve that function implicitly, through credibility, influence, and well-timed communications. UX practice must continue to develop user/field research methods sensitive to detecting nascent problems with product requirements and strategy.
UX is a recursive process – which we might call socialization

User experience is highly iterative, or it fails as well. We always get more than one chance to fail, and we've built that into our practices and standards. Practices and processes are repeated and improved over time. But organizations are not flexible with respect to failure. They are competitive and defensive networks of people, often with multiple conflicting agendas. Our challenge is to encourage organizations to recurse (recourse?) more. We should do this by creating a better organizational user experience, following our own observations and learning of the organization as a system of internal users. Within this recursive system (in which we participate as a user), we can start by moving observations up the circle of care (or the management hierarchy, if you will). I like to think our managers do
care about the organization and their shared goals. But our challenge here is to learn and perform double-loop learning ourselves, addressing root causes and "governing variables" of issues we encounter in organizational user research. We do this by systematic reflection on patterns and by improving processes incrementally, not just "fixing things" (single-loop learning).

We can adopt a process of socialization (Jones, 2007), rather than institutionalization, of user experience. Process socialization was developed as a more productive alternative to top-down institutionalization for introducing UX practices into an organization's intact product development process. While there is strong theoretical support for this approach (from organizational structuration and social networks), socialization is recommended because it works better than the alternatives. Institutionalization demands that an organization establish a formal set of roles, relationships, training, and management added to the hierarchy to coordinate the new practices. Socialization instead affirms that a longer-term, better understood, and organizationally resilient adoption of the UX process occurs when people in roles lateral to UX learn the practices through participation and a gradual progression of sophistication. The practices employed in a socialization approach are nearly the opposite (in temporal order) of the institutionalization approach:
• Find a significant UX need among projects and bring rapid, lightweight methods to solve obvious problems.
• Have management present the success and lessons learned.
• Do not hire a senior manager for UX yet; lateral roles should come to accept and integrate the value first.
• Determine UX need and applications in other projects. Provide tactical UX services as necessary, as an internal consulting function.
• Develop practices within the scope of product needs. Engage customers in the field and develop user and work domain models in participatory processes with other roles.
• Build organic demand and interest in UX. Provide consulting and usability work to projects as capability expands. Demonstrate wins and lessons from field work and usability research.
• Collaborate with requirements owners (product managers) to develop a user-centered requirements approach. Integrate usability interviews and personas into requirements management.
• Integrate with Product Development. Determine development lifecycle decision points and the user information required.
• Establish User Experience as a process and organizational function.
• Provide awareness training, discussion sessions, and formal education as needed to fit the UX process.
• Assessment and renewal: staffing, building competency.
We should create more opportunities to challenge failure points and process breakdowns. Use requirements reviews to challenge the fit to user needs. Use a heuristic evaluation to bring a customer service perspective on board. In each of those opportunities, articulate the double-loop learning point: "Yes, we'll fix the design, but our process for reporting user feedback limits us to tactical fixes like these. Let's report the implications of user feedback to management as well." We can create these opportunities by looking for issues and presenting them as UX points, but in business terms, such as market dynamics, competitive landscape, feature priority (and overload), and user adoption. This will take time and patience, but then, it's recursive. In the long run we'll have made our case without major confrontations.

Conclusions
Scott Cook, Intuit's founder, famously said at CHI 2006: "The best we can hope to bat is .500. If you're getting better than that, you're not swinging for the fences. Even Barry Bonds, steroids or not, is not getting that. We need to celebrate failure." Intelligent managers actually celebrate failures – that's how we learn. If we aren't failing at anything, how do we know we're trying?

The problem is recognizing when failure is indeed an option. How do we know when a project so large – an organizational-level project – will go belly-up? How can something so huge and spectacular in its impact be so hard to call, especially at the time decisions are being made that could change the priorities and prevent an eventual massive flop?

The problem with massive failure is that there's very little early warning in the development system, and almost none at the user or market level. When product development fails to respect the user, or even the messenger of user feedback, bad decisions about user interface and architecture compound and push the product toward an uncertain reception in the marketplace. Early design decisions compound by determining architectures, affecting later design decisions, and so on through the lifecycle of development. These problems are well known in cases of services design for well-established work practices. In cases with which I have personal experience, significant problems occur when readily available domain and user knowledge is discarded in favor of new or "innovative" designs that attempt a discontinuous product within an established but complex domain.

These problems can be compounded even when good usability research is performed. When user research is conducted too late in the product development cycle, and is driven by usability questions related to the product and not the work domain, development teams are fooled into believing their design will generalize to user needs across a large market in that domain. But at this point in product development, the fundamental platform, process, and information architecture decisions will have been made, and user research is constrained from revisiting questions that were settled in earlier phases by marketing and product management.
References
Argyris, C. (1992). On organizational learning. London: Blackwell.

Howard, R. (1992). The CEO as organizational architect: An interview with Xerox's Paul Allaire. Harvard Business Review, 70(5), 106-121.

Jones, P.H. (2007). Socializing a knowledge strategy. In E. Abou-Zeid (Ed.), Knowledge Management and Business Strategies: Theoretical Frameworks and Empirical Research, pp. 134-164. Hershey, PA: Idea Group. (In press).

Raynor, M.E. (2007). The strategy paradox: Why committing to success leads to failure (and what to do about it). New York: Currency Doubleday.

Rittel, H.W.J. and Webber, M.M. (1973). Dilemmas in a general theory of planning. Policy Sciences, 4, 155-169.

Taleb, N.N. (2007). The Black Swan: The Impact of the Highly Improbable. New York: Random House.