
Use of Evaluations and the Evaluation of their Use¹

OSVALDO N. FEINSTEIN
The World Bank, USA

Evaluation, Vol. 8(4): 433–439. Copyright © 2002 SAGE Publications (London, Thousand Oaks and New Delhi).

All evaluations have a cost but not necessarily a value. Their value does not depend on their cost but on their use, and this article discusses factors affecting the use of evaluations. These factors could be taken into account in order to increase and improve the use made of evaluations and, consequently, their value. Two key issues for the evaluation of the use of evaluations (lags and the attribution problem) are discussed, and a ‘possibilist’ approach to evaluation use is presented.

KEYWORDS: dissemination; knowledge management; learning; use; value

Introduction

The issue that this article addresses is the very limited use made of evaluations. A simple conceptual framework (including a few formulae to make these ideas as clear as possible) will be proposed to deal with this issue. Different types of evaluation use and users will be discussed, as well as some key issues regarding the evaluation of the uses of evaluations.

This article uses concepts from economics and political economy in order to consider key evaluation issues from a perspective that allows for a more rigorous (yet still simple) approach than those often used to address the issue of evaluation use (e.g. see the articles in Caracelli and Preskill, 2000), leading also to a different set of questions and answers on this topic. The explicit way in which this conceptual framework has been articulated may facilitate its further development and also the evaluation of its use.

Types of Evaluation Use

There are different types of evaluation use that have been considered in the evaluation literature (Worthen et al., 1997), such as instrumental and persuasive use, and these distinctions are useful. Another use identified in the literature is ‘enlightenment’ (Marra, 2000; Weiss, 1998), a term which may be replaced by ‘cognitive’, as the former is not very enlightening. Also important is the distinction between evaluation for accountability and evaluation for learning, which are sometimes considered as mutually exclusive options (whereas in this article these last two uses are considered to be complementary, the former creating an incentive framework for the latter). It is also worth noting a crucial distinction between apparent and actual use of evaluations: between what seems to be the use (or the lack of use) of evaluations and the way(s) in which evaluations are actually used.

The ‘authorizing environment’ (AE), a concept introduced by the John F. Kennedy School of Government at Harvard (Moore, 1995), is important within this discussion. In the context of programs or projects, the AE includes those ‘principals’ that make fundamental decisions concerning the approval or cancellation of programs. One key use of evaluations is to persuade the AE, and the public at large, that a program should continue (with a new phase) or be cancelled, ‘legitimizing or delegitimizing’ it by providing information concerning its performance and results. This use of evaluations is neither for accountability nor for learning; it is neither instrumental nor necessarily enlightening; but it can play a crucial role in terms of whatever is evaluated if decision makers are persuaded to make a decision on the basis of information provided by the evaluation, which might confirm their own views on the program, project or policy evaluated. Evaluations, like audits (though frequently in a less formal way), provide a ‘seal of approval’. In addition, this use for persuasion can build trust in activities when doubts arise regarding the value of the corresponding activity (be it part of a program, project or policy implementation). It is interesting to note that whereas the use of evaluation for persuasion has only recently been acknowledged in the evaluation literature (Kirkhart, 2000), there is a school of thought (and practice) in economics that has long highlighted the key role of persuasion (McCloskey, 1994).

Another basic distinction that helps in discussing evaluation use (and in finding ways to promote it) is between actual and potential use. What are the ‘barriers to use’? How can potential use of evaluations be transformed into actual use? These are some of the key questions to consider, rather than trying to develop a comprehensive typology of evaluation uses (that may not be used).

Cost, Value and Use of Evaluations

Given that evaluations are not subject to Say’s law (that supply creates its own demand), and only to a limited extent to ‘Yas’ law’ – the inverse of Say’s law, that demand induces supply – the gap mentioned before between potential and actual use might be very significant. An important challenge is to identify crucial factors that affect use and that can be turned into levers to promote it. Let me start then with two key factors: the relevance of the evaluations and the quality of their dissemination. Relevance has to do with the extent to which an evaluation addresses issues that are considered of importance by the ‘clients’ of the evaluation (using a wide concept of clients, including not only those that have requested the evaluation but also some possible additional audiences). The quality of dissemination is the appropriateness of the means used to facilitate access to the evaluation. One can summarize the relations between these concepts through the following relation:

(1) U = R × D




where U is use, R is relevance and D is dissemination. U, R and D can be considered as variables, and each can be rated on a four-point scale: 3 = highly satisfactory; 2 = satisfactory; 1 = partially satisfactory; and 0 = unsatisfactory. Thus, if there is no relevance or no dissemination, there is no use. Figure 1 illustrates the relation between these variables.

Relevance and dissemination can also be considered in terms of the supply of and demand for evaluations: relevance corresponds to demand and dissemination to supply. Relevant evaluations are those for which there is a demand, and if there is no dissemination, there is no supply (some additional considerations on the use of demand and supply categories in this context appear below, under the heading ‘Incentives and Capacities to Use Evaluations’).

Now we turn to the factors determining the degree of relevance and the quality of dissemination. For the former, the choice of evaluation theme and especially the timing of the evaluation are crucial, so that evaluation findings are available when decisions are taken. Involving stakeholders also increases the perceived relevance of evaluations. Finally, the evaluation’s credibility is another crucial factor determining its relevance, and this credibility depends on the methodology used and the perceived quality of the evaluation team. Summing up:

(2) R = T × C

where R is relevance, T is timeliness and C is credibility (the four-point scale can again be used for all three variables). In the case of dissemination it is important to consider the way in which evaluations are presented (their user-friendliness) and the mechanisms or channels used for their communication. For the latter, the use of a knowledge management (KM) approach, through help desks and alternative ways of packaging the information, has been found useful. Through KM, evaluations (E) are used as inputs to produce user-friendly evaluation products (E*), facilitating the conversion of information into knowledge.

[Figure 1. Relationship between Relevance (R), Dissemination (D) and Use: Use = R × D.]




These evaluation products may include, for example, brief notes on lessons learned, self-contained summaries and stories illustrating key points (Ingram and Feinstein, 2001). Schematically:

E → (KM) → E*

where E is evaluation, E* is evaluation products and KM is knowledge management. Partnering in evaluation also helps the dissemination process, both for the partners (through their involvement) and, through them, for others. Thus:

(3) D = P × M

where D is dissemination, P is the presentation of the evaluation (its user-friendliness) and M is the means of distribution: the mechanisms or channels used, including KM, help desks and other electronic means.
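Taken together, relations (1)–(3) imply U = (T × C) × (P × M). The sketch below is a minimal illustration of this multiplicative model, not part of the original article: the function names and the choice to rescale each product back to the 0–3 range are assumptions made for the example.

```python
# Minimal sketch of the multiplicative model of evaluation use:
#   U = R x D,  R = T x C,  D = P x M
# Each factor is rated on the four-point scale used in the article:
#   3 = highly satisfactory, 2 = satisfactory,
#   1 = partially satisfactory, 0 = unsatisfactory.
# Dividing each product by 3 (an assumption, not from the article)
# keeps every composite score on the same 0-3 range.

def rate(name: str, value: int) -> int:
    """Validate that a rating lies on the 0-3 scale."""
    if value not in (0, 1, 2, 3):
        raise ValueError(f"{name} must be an integer from 0 to 3")
    return value

def relevance(timeliness: int, credibility: int) -> float:
    """Relation (2): R = T x C, rescaled to 0-3."""
    return rate("T", timeliness) * rate("C", credibility) / 3

def dissemination(presentation: int, means: int) -> float:
    """Relation (3): D = P x M, rescaled to 0-3."""
    return rate("P", presentation) * rate("M", means) / 3

def use(timeliness: int, credibility: int,
        presentation: int, means: int) -> float:
    """Relation (1): U = R x D, rescaled to 0-3."""
    return (relevance(timeliness, credibility)
            * dissemination(presentation, means) / 3)

# A timely, credible evaluation (T=3, C=3) that is reasonably
# packaged (P=2) and well channelled (M=3) rates 'satisfactory' use:
print(use(3, 3, 2, 3))   # 2.0
# Any unsatisfactory factor drives use to zero:
print(use(3, 3, 3, 0))   # 0.0
```

The multiplicative form captures the point that the factors are complements rather than substitutes: a zero on any one of them cannot be compensated by high scores on the others.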

Incentives and Capacities to Use Evaluations

Another way to address the issue of evaluation use is to consider the incentives and capacities to use evaluations, as well as the demand and supply of evaluations. The framework presented above assumes an institutional environment with given incentives to use evaluation, both positive and negative (‘carrots and sticks’). Note that relevance is linked both to demand and to the capacity to supply. If an evaluation is not relevant then there will not be any demand for it, and vice versa. However, even if an evaluation is relevant, there could be no capacity to produce it (either directly or by contracting it out), or there could be no incentives to use existing capacities to produce this type of evaluation. In other words, although the relevance of an evaluation creates an incentive to use it, it is also important that there be an incentive to produce it.

On the other hand, the availability of relevant evaluations does not ensure that they will be used if the capacity to use evaluations is very limited. This evaluation-use capacity involves a capacity to search for relevant information, i.e. knowing where to search (this will be facilitated if there are good websites and portals, such as the emerging global development gateway). It also requires a capacity to use the evaluations, highlighting findings and lessons that are relevant for the specific issues being addressed (user-friendly evaluations facilitate this, but in the end use depends on the users). It is important to distinguish between the capacity to produce evaluations and the capacity to use them (in the same way as has been done for household surveys), and to assess the capacity to use evaluations, providing appropriate support for its development when needed.

Therefore, in order to better understand (and to promote) the use of evaluations it is worth focusing on the issues of incentives and capacities. Sometimes the argument is conducted in terms of the supply and demand of evaluations, but the market for evaluations is quite imperfect (among other issues, there is frequently no relevant analogy for market prices); in addition, incentives have an influence on both supply and demand, and the same is true of capacities. Though in some cases it might be useful to discuss use issues in terms of supply and demand, it might be more fruitful (and rigorous) to focus on incentives and capacities to produce, disseminate and use relevant evaluations.

Evaluation of Use

There are some typical pitfalls in the evaluation of the use of evaluations. One of them is due to the existence of lags: there is a ‘gestation period’ before use occurs. It might seem that there is no evidence of use and therefore no use, but this may just mean that the process leading from the production of the evaluation to its use takes time, and that the evaluation of evaluation use was premature. There are two risks here: the first is waiting sine die, always refraining from passing judgement because the evaluation might still be used (the apocalyptic fallacy); the other is ‘killing’ an evaluation by arguing that, because it has not been used, it is useless, whereas it might be used in the future (premature killing). A simple simulation at the end of this section illustrates this second risk.

Another source of pitfalls is the attribution problem: one can find things that were done after the evaluation was completed in a way consistent with the evaluation’s recommendations. Is this evidence of use? It seems so, but there might have been other reasons why things were done in that way, in which case this is merely apparent use. Consistency between the evaluation’s findings and recommendations and what was done afterwards is not necessarily an indication of use (the post hoc fallacy). However, it is also possible to completely neglect the role a specific evaluation played in the decision-making process and in achieving results: a particular evaluation may have played a contributing role, perhaps helping decision makers reach the ‘tipping point’ (Gladwell, 2000) through the cumulative effect of evaluations.

Finally, in evaluating the use of evaluation it is worthwhile to refer to the factors mentioned before, such as relevance and dissemination and their determinants (timeliness, credibility, quality of presentation and means of dissemination); low levels of these factors, or of the incentives and/or capacities discussed above, can act as barriers to use. Furthermore, when evaluating use it is important to consider changes in knowledge, attitudes and behavior, bearing in mind lags and the attribution problem.
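To make the lag pitfall concrete, the following sketch simulates a stock of evaluations whose use becomes visible only after a random gestation period. The specific numbers (an exponential delay with a mean of 18 months) are illustrative assumptions, not figures from the article.

```python
import random

# Illustrative simulation of the 'gestation period' pitfall: every
# evaluation here is eventually used, but its use becomes observable
# only after a random lag (assumed exponential, mean 18 months).
random.seed(1)
N = 1000
lags = [random.expovariate(1 / 18) for _ in range(N)]

def observed_use_rate(horizon_months: float) -> float:
    """Share of evaluations whose use is visible by a given horizon."""
    return sum(lag <= horizon_months for lag in lags) / N

for horizon in (6, 12, 24, 48):
    print(f"measured at month {horizon:2d}: "
          f"{observed_use_rate(horizon):.0%} appear 'used'")
# A review after 6 months would judge most evaluations 'unused'
# (premature killing), even though all of them are used eventually.
```

Measured too early, use is systematically understated; the same logic cautions against pronouncing an evaluation unused before its gestation period has elapsed.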

A ‘Possibilist’ Approach to Evaluation Use

It is generally recognized that evaluation activities generate knowledge that is significantly under-used. The distinction between actual and potential use directs attention to possible uses of evaluation that are more intensive and beneficial. One of these possible uses would be to identify what worked and what did not work in specific contexts (applying a sort of ‘realist’ approach to evaluations) in order to identify opportunities for effective interventions and to avoid those that are ineffective.



Codifying evaluation knowledge in this way facilitates learning (‘vicarious learning’) and the ‘mise en valeur’ of evaluations, using them as sources of insights for the design of interventions in similar contexts. Thus, the stock of evaluations could be a source of relevant ideas about effective actions, facilitating the process of learning from positive and negative experiences. If evaluations are conducted with this aim in mind, they can be readily used in this way. Otherwise, it could still be possible, at least for a subset of these evaluations, to codify them using a ‘realist’ framework (Feinstein, 1998). This implies making the context of application explicit, to facilitate the use of evaluations as a source through which to identify effective interventions in similar contexts.

Conclusions

The key messages of this article are:

1. the value of evaluations depends on their use;
2. the use of evaluations should not be taken for granted; and
3. there are several things that can be done to promote greater and better use of evaluations.

One of them is to develop a conceptual framework to guide our analysis of evaluation use and our actions that aim to improve it. The goal of this article has been to provide a simple and practical framework for this purpose.

Note

1. This article is a revised version of a keynote speech delivered at the IVth Annual Meeting of the Italian Evaluation Association. I wish to thank Nicoletta Stame for encouraging me to prepare this article, Luca Meldolesi for triggering the thoughts that led to a new section on the ‘possibilist’ approach, Frans Leeuw for his comment on ‘vicarious learning’ and Mita Marra for our dialogue on these issues.

References

Caracelli, V. J. and H. Preskill (eds) (2000) The Expanding Scope of Evaluation Use, New Directions for Evaluation 88. San Francisco, CA: Jossey-Bass.
Feinstein, O. N. (1998) ‘Review of “Realistic Evaluation”’, Evaluation 4(2): 243–6.
Gladwell, M. (2000) The Tipping Point: How Little Things Can Make a Big Difference. New York and London: Little, Brown and Company.
Ingram, G. K. and O. N. Feinstein (2001) ‘Learning from Evaluation: The World Bank’s Experience’, Evaluation Insights 3(1): 4–6.
Kirkhart, K. E. (2000) ‘Reconceptualizing Evaluation Use: An Integrated Theory of Influence’, in V. J. Caracelli and H. Preskill (eds) The Expanding Scope of Evaluation Use, New Directions for Evaluation 88, pp. 5–23. San Francisco, CA: Jossey-Bass.
Marra, M. (2000) ‘How Much Does Evaluation Matter?’, Evaluation 6(1): 22–36.
McCloskey, D. N. (1994) Knowledge and Persuasion in Economics. Cambridge: Cambridge University Press.
Moore, M. H. (1995) Creating Public Value. Cambridge, MA: Harvard University Press.




Weiss, C. H. (1998) Evaluation. Englewood Cliffs, NJ: Prentice Hall.
Worthen, B. R., J. R. Sanders and J. L. Fitzpatrick (1997) Program Evaluation: Alternative Approaches and Practical Guidelines, 2nd edn. New York: Longman.

OSVALDO N. FEINSTEIN is a manager of the Operations Evaluation Department at the World Bank, and an evaluator and economist with worldwide experience. He designed and supervised the Program for Strengthening Evaluation Capacity in Latin America and the Caribbean (PREVAL), and has worked as a consultant with several international organizations. He has also lectured and published on evaluation, economics and development. Please address correspondence to: 1818 H Street, NW, Washington, DC 20433, USA. [email: ofeinstein@worldbank.org]


