Change Management Strategy in Action: Independent Evaluation for Learning


Informed by trends and citing predictions about the future of evaluation, this précis lays out requirements, considerations, and steps for planning and delivering change that would support the recalibration of independent evaluation so it might better serve learning.

Olivier Serrat
14/04/2018


Independent Evaluation for Learning

Many organizations (e.g., United Nations agencies, multilateral development banks, international institutions, bilateral aid agencies, national agencies, and nongovernment organizations) conduct independent evaluations that assess policies, strategies, programs, and projects—including their design, implementation, results, and business processes—to systematically and objectively determine relevance, coherence, efficiency, effectiveness, impact, and sustainability. (Evaluations of institutions are less common.) Of course, there are different types of evaluation (e.g., formative, summative, process, outcomes, and impact) but the lessons they uncover serve two distinct and irreconcilable purposes, namely, accountability and learning: the first is concerned with the provision of information to the public; the second aims to improve the development effectiveness of plans through feedback of lessons learned. The characteristics (e.g., basic aim, emphasis, clientele, and selection of topics)—and the very conduct and delivery—of evaluation for accountability and evaluation for learning are quite different, as shown in the Table below. And, while evaluation for accountability predominates, evaluation for learning is the area where most audiences find the greatest need today and tomorrow;1 however, this need remains largely unmet because the underlying political issues as well as the content and process for required change are complicated.

Table: Characteristics of Accountability and Lesson-Learning as Objectives of Evaluation Activity

Item | Accountability as the Objective | Lesson-Learning as the Objective
Basic Aim | To find out about the past | To improve future performance
Emphasis | The degree of success or failure | The reasons for success or failure
Favored By | Parliaments, treasuries, media, pressure groups | Development agencies and their staff, developing countries, research institutions, consultants
Selection of Topics | Topics are selected based on random samples | Topics are selected for their potential lessons
Status of Evaluation | Evaluation is an end-product | Evaluation is part of the project cycle
Status of Evaluators | Evaluators should be independent (and impartial) | Evaluators usually include staff members of the aid agency
Importance of Data from Evaluations | Data are only one consideration | Data are highly valued for the planning and appraising of new development activities
Importance of Feedback | Feedback is relatively unimportant | Feedback is vitally important

Adapted from Cracknell (2000).

1 The parties that can learn from evaluation for learning include the beneficiaries who are affected by the work being evaluated; the people whose work is being evaluated (including implementing agencies); the people who contribute to the evaluation (including direct stakeholders); the people who conduct the evaluation; the people who commission the evaluation; the people who are or will be planning, managing, or executing similar interventions in the future; and the wider community.

The Problématique of Independent Evaluation

The problématique behind the selection of evaluation's purpose, viz. accountability or learning, is multifaceted: at the insistent request of shareholders tasked with reporting to political leadership, taxpayers, and citizens, feedback from evaluation studies focuses on accountability (and hence provides for command, control, and finger-pointing); however, evaluation for accountability does not serve as an important foundation of learning organizations, even if evaluations are shifting to new horizons (e.g., from the project to the program to the country and sometimes regional and global levels). Hence, these days, the primary audiences for evaluations are most commonly boards of directors—the members of which, across the aforementioned agencies, are appointed by the very same shareholders—and, far less often, in-country audiences (e.g., policy makers, counterparts) or staff in the concerned agencies (e.g., senior management, policy units, line offices and departments). All the while, the growing practice of self-evaluation (by line offices and departments) means that the centralized units that are given responsibility for independent evaluation are ever more hard-pressed to increase and demonstrate value added from activities that fewer and fewer audiences beyond boards of directors refer to. In short, owing primarily to an outdated emphasis on evaluation for accountability, the contribution of evaluation units to organizational performance has plateaued: evaluation reports are hardly ever read (and are fought tooth and nail if they are).

The problématique of purpose that evaluation units face is convoluted indeed: the common framework and methodologies that boards of directors (heeding weighty advice from the Organisation for Economic Co-operation and Development) have imposed or encouraged across the aforementioned aid architecture offer insufficient elbowroom for change toward evaluation for learning despite the fast-rising demand for it. The issue is intractable because mandatory logic models that demand early determination of strategic elements (e.g., inputs, outputs, outcome, and impact) and their causal relationships, indicators, and what assumptions or risks may influence success or failure impose (before the fact) significant limitations that affect the design of evaluation systems and irremediably constrain the potential for learning.2

2 Logic models usually assume simple, linear cause–effect relationships: they overlook unintended or unplanned outcomes; do not make explicit the theory of change underlying the initiative; do not cope well with multi-factor, multi-stakeholder processes; undervalue creativity and experimentation in the pursuit of long-term, sustainable impact (the "lockframe" problem); encourage fragmented rather than holistic thinking; and require a high level of planning capacity.
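To make the "lockframe" problem concrete, the minimal sketch below—illustrative only; the class and field names are hypothetical and not drawn from any agency's template—shows how a conventional logic model hard-codes a single causal chain and its indicators before implementation begins, leaving no slot for unintended outcomes or for revising the underlying theory of change as evidence accumulates.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass(frozen=True)  # frozen: the design is fixed "before the fact"
class Element:
    description: str
    indicators: List[str] = field(default_factory=list)
    assumptions: List[str] = field(default_factory=list)  # risks that may influence success or failure

@dataclass(frozen=True)
class LogicModel:
    """A conventional, linear logframe: inputs -> outputs -> outcome -> impact."""
    inputs: Element
    outputs: Element
    outcome: Element
    impact: Element

    def causal_chain(self) -> List[Element]:
        # One linear chain only: multi-factor, multi-stakeholder processes and
        # unintended outcomes have nowhere to be recorded.
        return [self.inputs, self.outputs, self.outcome, self.impact]

# Hypothetical example of a logframe locked in at design time
irrigation_project = LogicModel(
    inputs=Element("Funds, equipment, technical assistance"),
    outputs=Element("Canals rehabilitated", indicators=["km of canal rehabilitated"]),
    outcome=Element("Higher crop yields", indicators=["t/ha at harvest"],
                    assumptions=["Rainfall remains within historical range"]),
    impact=Element("Reduced rural poverty", indicators=["poverty headcount ratio"]),
)
```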


Objectives of Evaluation Activity: The Need for Organizational Change and the Issues to Address

Originally, evaluation units formed an integral part of their respective organizations; but they were declared independent in the late 1990s following the introduction of results-based management and now report to boards of directors. The independence of the evaluation function has encouraged differentiation—not integration—and an "Us vs. Them" mentality at the (quite polar) opposite of what is needed by evaluation for learning, which hinges on partnerships (Martin, 2002). (Consequently, other offices and departments do their best to ignore the studies that evaluation units generate.) The walls that evaluation units have built around themselves since the late 1990s act as a strong disincentive to change and work against psychological commitment to any sort of change—after all, the argument goes, evaluation units must be in the right since they do the bidding of boards of directors. (And so, resistance to change in evaluation units is simultaneously blind, ideological, and political.)

In the many organizations mentioned earlier, evaluation for accountability is passé but lesson-learning as the primary objective of evaluation activity remains a pipe dream. Still, ten predictions by Gargani (2012) hinted at quickly shifting sands: in 10 years, he argued, (a) most evaluations will have become internal (and so will not be conducted by evaluation units); (b) evaluation reports will have become obsolete; (c) evaluations will have abandoned data collection in favor of data mining; (d) national registries of evaluations will have been created; (e) evaluations will be conducted in more open ways; (f) the Request for Proposals will Rest in Peace (because most evaluations will be conducted internally); (g) evaluation theories (plural) will have disappeared (because a comprehensive, contingent, context-sensitive theory will have emerged); (h) the demand for evaluators will have grown; (i) the number of training programs in evaluation will have increased; and (j) the term "evaluation" will have gone out of favor (replaced by some new term such as "social impact and management" to highlight the process of managing interventions over that of merely understanding them). [It will take longer than Gargani (2012) expected for these changes to eventuate but the points he made are valid; regarding what remains to be done, Gargani (2012) might also have called for the introduction of more participatory technologies (e.g., appreciative inquiry, learning histories, the Most Significant Change technique, and outcome mapping) and underscored the considerable potential of remote sensing technology and social media analytics.] It is poignant that relevance, coherence, efficiency, effectiveness, impact, and sustainability should now become the criteria by which one could (or perhaps should) judge the performance of evaluation units: independent evaluation needs a shot in the arm, with 70%–80% of evaluation reports given over to evaluation for learning.

Independent Evaluation for Learning: Planning and Delivering Organizational Change

At first glance, the implications for leadership are not easy to discern. In the circumstances delineated above, reforming evaluation units should call for nothing less than revolutionary change in what used to be an open system; yet, even evolutionary change is hard to contemplate: their patterns, structures, and processes are those of tightly coupled systems. Tightly coupled systems are thought easier to change but this is not the case: first, reform would have to be sanctified by the international system framed by the Evaluation Cooperation Group (and its 10 member organizations) that was established in 1996 to promote a more harmonized approach to evaluation methodology; second, past that larger system, change would fully impact evaluation units and their experts (most of whom would need training in user-centric evaluation, associated methodologies, and new techniques). (Indeed, at the level of the individual, change would assuredly have deep implications for selection, recruitment, replacement, and displacement, not forgetting coaching and counseling.)

Evaluation units would have to revisit sacrosanct assumptions about their mission, purpose, and raison d'être—primarily the notion of independence—as well as what competencies are required if the objective of evaluation activity shifts. Heeding Drucker (1994), new assumptions would have to fit emerging reality, be congruent with one another, be known and understood by all staff in the host organizations (not just staff in evaluation units), and be tested constantly. If change of such political and organizational dimensions were acceptable to both the international community and, say, a multilateral development bank, perhaps on a trial basis, the steps to take to ensure a successful implementation process for the proposed changes would—informed first by an organizational diagnosis such as Cameron and Quinn's Competing Values Framework, also known as the Organizational Culture Assessment Instrument (OCAI) (Cameron & Quinn, 2011),3 and then framed by the Burke–Litwin Model (Burke & Litwin, 1992)4—assuredly need to spring from the complete set of activities that Burke lists for discontinuous change leaders: (i) the prelaunch phase (leader self-examination; gathering information from the external environment, not forgetting the adversarial internal environment in this case; establishing the need for change; providing clarity of vision and direction); (ii) the launch phase (communicating the need for change; initiating key activities; dealing with resistance); (iii) the postlaunch phase (multiple leverage; taking the heat; consistency; perseverance; repeating the message); and (iv) sustaining the change (dealing with unanticipated consequences; momentum; choosing successors; launching yet again new initiatives) (Burke, 2014, pp. 328–329). Conspicuously in relation to establishing the need for change, providing clarity of vision and direction, initiating key activities, and dealing with resistance, consideration of the "Change Formula" would, ahead of those activities, help define and power the prelaunch and launch phases.5

3 In evaluation units, the Organizational Culture Assessment Instrument developed by Cameron and Quinn would likely reveal a hierarchy culture buttressed by elements of a clan culture. Toward evaluation for learning, a change management strategy in action would probably see adhocracy as the preferred culture, with a theory of effectiveness and quality improvement strategy informed by elements of a market culture.

4 Even if its effectiveness depends on how well each of the 12 dimensions identified is explored and put to use, the Burke–Litwin Model is one of the most comprehensive causal models of organizational performance and change and—compared with others—would permit better framing (e.g., understanding, categorizing, and interpreting) of the complex circumstances in which evaluation units operate, as shown in Figure 1. The mix of transformational factors (long-term levers), transactional factors (operational levers), and individual and personal factors (short-term levers) that the model demarcates, and the interrelationships that it flags, would provide ample and much-needed opportunity to hypothesize how the performance of evaluation units is affected by external and internal factors [including external environment, mission and strategy, leadership, organizational culture, structure, management practices, systems (policies and procedures), work unit climate, task and individual skills, individual needs and values, motivation, and individual and organizational performance]. Other models—for example, by Leavitt, Mintzberg, Nadler–Tushman, Porras, and Weisbord—integrate content and process, but they are more about organizational functioning than they are about change. Tichy's (1982) attention to technical, political, and cultural systems is welcome, critical as these are to understanding organizations in general and change in particular, but Tichy (1982) skims over the all-important psychological aspects of change, does not distinguish transformational and transactional dimensions, and, in the final analysis, overplays the need for alignment and congruence to the detriment of vivifying change. [Paradoxically, the technical, political, and cultural dynamics of Tichy's (1982) framework are lifeless.] And so, there is added reality in the Burke–Litwin Model that one looks for in vain elsewhere.
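As an illustrative aid only—the grouping follows the standard presentation of the model, while the dictionary layout and the sample finding are hypothetical conveniences rather than part of the model itself—the 12 dimensions named in the footnote above could be organized for a diagnosis roughly as follows, with each dimension eventually holding findings from interviews and surveys:

```python
# A minimal sketch of the 12 Burke-Litwin dimensions, grouped by the levers the
# footnote describes; individual and organizational performance is the outcome
# the other dimensions drive.
BURKE_LITWIN_DIMENSIONS = {
    "transformational (long-term levers)": [
        "external environment",
        "mission and strategy",
        "leadership",
        "organizational culture",
    ],
    "transactional (operational levers)": [
        "structure",
        "management practices",
        "systems (policies and procedures)",
        "work unit climate",
    ],
    "individual and personal (short-term levers)": [
        "task and individual skills",
        "individual needs and values",
        "motivation",
    ],
    "outcome": [
        "individual and organizational performance",
    ],
}

# Hypothetical scaffold for recording diagnostic findings per dimension.
diagnosis = {dim: [] for group in BURKE_LITWIN_DIMENSIONS.values() for dim in group}
diagnosis["systems (policies and procedures)"].append(
    "Mandatory logic models constrain learning-oriented evaluation design."
)
```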


Figure 1: The Burke–Litwin Causal Model of Organizational Performance and Change. Source: Burke & Litwin (1992).

Schein's (1999) insights regarding group boundary management (and criteria for inclusion and exclusion) would be relevant: indeed, the boundaries of evaluation units would need to shift substantially.6

The "Change Formula" has undergone modifications since it was developed by Gleicher in the early 1960s to underscore what key factors and relationships can affect readiness for change: in its first iteration, the Change Formula read C = (ABD) > X, where C = Change, A = Level of dissatisfaction with the status quo; B = Clear or understood desired state; D = Practical first steps to the desired state, X = "Cost" of changing. Subsequently, Dannemiller refined the Change Formula to read D x V x F > R, where D = Dissatisfaction with how things are now, V = Vision of what is possible, F = First, concrete steps that can be taken towards the vision; and R = Resistance. (Note the addition of multipliers.) With other inputs by Beckhard and Harris (1977), the Change Formula now usually reads C = D x V x F x S > R, where C = Change; D = Dissatisfaction with how things are now; V = Vision of what is possible; F = First, concrete steps that can be taken towards the vision; S = Support systems; and R = Resistance.


Based on this and the highly technical nature of evaluation work, the kind of leadership required would not be of the either–or but of the both–and variety: in other words, a leader–manager able to transform and transact would be needed, this to encourage individual and collective sense-making and turn resistance into a resource within a psychology of safety, trust, and commitment. Assuredly also, beginning in Process Consultation mode, the leader–manager would need to act in the three capacities of expert, doctor, and process consultant and continually move from one mode to another as the situation dictates (Schein, 1999). By such means, he or she would need not only to build a helping relationship with the evaluation unit but also, crucially, help that unit (re)build a helping relationship with the host organization, especially its senior management, policy units, and line offices and departments. Active inquiry and listening that balances problem-solving with appreciative inquiry; the deciphering of hidden forces and processes with due respect for intrapsychic processes (observation—emotional reaction—judgment—intervention) and the traps of misperception they are prey to; careful attention to face-to-face dynamics, levels of communication, and deliberate feedback; and a focus on both group task accomplishment and interpersonal and group management would be "par for the course".

Conclusion

This précis has argued that, to enable adaptability and resilience in the face of change, learning must now be at the core of every organization. Evaluation provides unique opportunities to learn throughout the management cycle: however, to reap these opportunities, evaluation must be designed, conducted, and followed up with learning, not accountability, in mind, for the fundamental reason that the objective of accountability is to prove while that of learning is to improve. That said, the précis makes clear that evaluation is an eminently political issue: a change management strategy to boost evaluation for learning would do well to first conduct an organizational diagnosis using Cameron and Quinn's Competing Values Framework (Cameron & Quinn, 2011); consider early all implications from the "Change Formula"; leverage the Burke–Litwin Model (Burke & Litwin, 1992) in light of the intricate patterns, structures, and processes of evaluation units (which no other causal model of organizational performance and change would help frame so well); follow through the complete set of activities that Burke lists for discontinuous change leaders (Burke, 2014, pp. 328–329); and both absorb Schein's (1999) insights regarding group boundary management—which is vitally important here—and adhere to his precepts for Process Consultation in the delivery thereof. Most likely, a plan of action across the 12 dimensions of the Burke–Litwin Model would span three years in the four phases of prelaunch, launch, postlaunch, and sustaining the change (which might call for a follow-up rolling work program). (Research associated with the application of the Organizational Culture Assessment Instrument and itemizing of the Change Formula would only take about two person-months at the onset but provide essential inputs toward the subsequent definition and refinement of the action plan.)

6 Involving stakeholders (and so broadening boundaries) is a major challenge facing learning-oriented evaluations, made all the harder by underinvestment in the architecture of knowledge management and learning. To begin to build learning into evaluations, for instance, one would have to make the drafting of the terms of reference a participatory activity that involves stakeholders, consider the utilization of the evaluation from the outset, spend time getting the evaluation questions clear and include questions about unintended outcomes, bring stakeholders into the process, ensure that the "deliverables" include learning points aimed at distinct audiences, build in diverse reporting and dissemination methods for a range of audiences, ensure there is follow-up by assigning responsibilities for implementing recommendations, and build in a review of the evaluation process.
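Purely as an illustration—the list items paraphrase the footnote above, while the structure, names, and the idea of a completeness check are hypothetical rather than a prescribed instrument—the steps for building learning into an evaluation could be tracked as a simple design checklist:

```python
from typing import List, Set

# Illustrative checklist for a learning-oriented evaluation design, following the
# steps in the footnote above; the structure and names are hypothetical.
LEARNING_DESIGN_STEPS = [
    "terms of reference drafted with stakeholders",
    "intended utilization of the evaluation considered from the outset",
    "evaluation questions clarified, including questions about unintended outcomes",
    "stakeholders brought into the evaluation process",
    "deliverables include learning points aimed at distinct audiences",
    "diverse reporting and dissemination methods planned for a range of audiences",
    "responsibilities assigned for implementing recommendations (follow-up)",
    "review of the evaluation process built in",
]

def missing_steps(completed: Set[str]) -> List[str]:
    """Return the learning-oriented design steps not yet addressed."""
    return [step for step in LEARNING_DESIGN_STEPS if step not in completed]

# Hypothetical usage at evaluation design time
done = {LEARNING_DESIGN_STEPS[0], LEARNING_DESIGN_STEPS[2]}
for step in missing_steps(done):
    print("Still to address:", step)
```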


The change management strategy in action is depicted in Figure 2. Without presupposing the outcome of research associated with the Organizational Culture Assessment Instrument and the Change Formula, it is likely that—among the 12 dimensions—a critical mass of recommendations would concern about eight of them: mission and strategy (e.g., revisiting the mission, purpose, and raison d'être of the evaluation unit—primarily the notion of independence), leadership (e.g., reframing the leadership structure and composition to place an accent on leader–manager role models), organizational culture (e.g., recasting the explicit and implied values, principles, customs, rules, and regulations that influence organizational behavior so as to encourage learning partnerships with other units of the host organization), systems (policies and procedures) (e.g., devising or adopting methodologies and techniques for user-centric evaluations for learning), work unit climate (e.g., gauging how the evaluation experts think and feel, and what they expect, about the kinds of relationships they share with other experts in their teams and with other staff in the host organization with whom they would have to work more closely), task and individual skills (e.g., devising and conducting training in user-centric evaluation, associated methodologies, and new techniques), individual needs and values (e.g., exploring levels of engagement among the evaluation experts to identify what quality factors will enrich jobs and lead to better job satisfaction), and motivation (e.g., assessing the motivation levels of the evaluation experts to determine their willingness to achieve the new mission and strategy and—where willingness is insufficient—formulating the motivational triggers necessary to deliver these).

Figure 2: Change Management Strategy in Action

[Figure 2 summarizes the strategy: an OCAI assessment (clan, adhocracy, market, and hierarchy cultures, now and preferred) and the Change Formula (C = D x V x F x S > R), taking about two person-months, inform the Burke–Litwin Model and Process Consultation (including boundary management) across the prelaunch, launch, postlaunch, and sustaining-the-change phases, spanning about three years.]


References

Beckhard, R., & Harris, R. (1977). Organizational transitions: Managing complex change. Reading, MA: Addison-Wesley.

Burke, W. (2014). Organization change: Theory and practice (4th ed.). Thousand Oaks, CA: Sage Publications.

Burke, W., & Litwin, G. (1992). A causal model of organizational performance and change. Journal of Management, 18(3), 523–545.

Cameron, K., & Quinn, R. (2011). Diagnosing and changing organizational culture: Based on the competing values framework (3rd ed.). San Francisco, CA: Jossey-Bass.

Cracknell, B. (2000). Evaluating development aid: Issues, problems, and solutions. Thousand Oaks, CA: Sage Publications.

Drucker, P. (1994). The theory of the business. Harvard Business Review, 72(5), 95–104.

Gargani, J. (2012, January 30). The future of evaluation: 10 predictions [Blog post]. Retrieved from https://evalblog.com/2012/01/30/the-future-of-evaluation-10-predictions/

Martin, J. (2002). Organizational culture: Mapping the terrain. Thousand Oaks, CA: Sage Publications.

Schein, E. (1999). Process consultation revisited: Building the helping relationship. San Francisco, CA: Addison-Wesley.

Tichy, N. (1982). Managing change strategically: The technical, political, and cultural keys. Organizational Dynamics, 11(2), 59–80.

