Efficiency Criteria Study: Methodological Debate


By Carolina Guimaraes

Purpose:
This discussion note looks at the understanding of ‘efficiency’ in the general context of development and ultimately focuses on the strategic evaluations carried out by Devco, in order to stimulate methodological discussion. The need for such research arose from a general perception among donors, later supported by studies and reports of DAC donors, that this criterion was poorly or superficially tackled in evaluation reports. […] “Several reports and observations reveal that donors and development agencies conduct efficiency analyses with insufficient frequency and quality.” (BMZ)

Introduction:
Most of the time, for accountability purposes, efficiency analysis tends to focus on quantitative indicators, demonstrating a ratio of inputs to outputs for potentially interested constituents. Also, for consultants or whoever is undertaking the analysis, such an exercise is (relatively) easy to carry out, as unit cost per output is easier to quantify and benchmark. However, this type of exercise runs the risk of being over-simplistic and of not targeting the real questions of development, such as employment rates, service quality, school attendance, etc. Attaching a monetary value to benefits or quantifying them can also be a time-consuming, complicated and controversial exercise; thus, most of the time, ratios are used, such as the number of lives saved, children vaccinated, or additional households served with electricity per thousand invested. The margin of error must always be indicated to ensure credibility.

However, recent discussion among various DAC donors has broadened the definition of efficiency to look beyond a process criterion, or at least to consider gains from activities beyond outputs. Ideally, efficiency analysis would provide informed alternative uses of the available resources to produce more and better development outcomes and/or impacts. It must be recognized that the further we move down the intervention logic (inputs → outputs → results → outcomes → impacts), the harder it becomes to measure the efficiency of activities, given that more exogenous variables and factors come into play. In this context, it becomes ever more important to question our expectations, needs and intended use before undertaking a potentially labour-intensive and costly ‘tracking’ exercise. Moreover, it requires time, skills, data and an understanding of the context to provide correct/viable solutions that could be plausible future alternatives and provide food for thought for other activities.
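As an illustration of the kind of input/output ratio described above, the following minimal sketch (in Python) computes a unit cost and an ‘outputs per thousand invested’ figure. The function name and all figures are hypothetical and are not drawn from any actual evaluation; the point is only to show how thin such a ratio is on its own, which is why the margin of error and the context always need to be reported alongside it.

```python
# Minimal sketch (illustrative only) of the unit-cost ratios discussed above.
# All names and figures are hypothetical.

def efficiency_ratios(total_cost_eur, outputs_delivered, output_label):
    """Compute cost per unit of output and outputs per 1,000 EUR invested."""
    cost_per_output = total_cost_eur / outputs_delivered
    outputs_per_thousand = outputs_delivered / (total_cost_eur / 1_000)
    return {
        "output": output_label,
        "cost_per_output_eur": round(cost_per_output, 2),
        "outputs_per_1000_eur": round(outputs_per_thousand, 2),
    }

# Hypothetical rural electrification component
print(efficiency_ratios(total_cost_eur=2_500_000,
                        outputs_delivered=4_200,
                        output_label="additional households connected"))
```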

Testing experiments before project implementation is not current practice for the European Commission; while it could be regarded as a wasteful exercise, it can also serve as a way forward to gauge the future project environment/scenario, potentially to predict final impacts, and to find out the best alternatives for reaching the proposed objective. If, in reality, experiments are not being undertaken prior to project/programme implementation, ex-post evaluation recommendations could potentially provide alternatives that should be experimented with or looked at before the next programming cycle begins. To control for external variables, the random trial methodology could be pursued. Not all projects need to be (depending on the amount allocated), or can be, experimented on, for example the construction of a highway; however, in such cases, experiments could take the shape of brainstorming exercises outlining the various routes/ways of undertaking such an endeavour, e.g. through different management and implementing bodies.

Practice:
The lack of a greater understanding of how projects fit into a strategic-level evaluation, and of how conclusions can be drawn, should be further discussed, defined and standardized among the concerned stakeholders. Project evaluations should feed into strategic conclusions, and the lack of primary information and the quality of the reports should also be recognized, since they will also be reflected in the quality of the analysis at the strategic level. Once such a discussion takes place, a second step would be to look at how project evaluations could be grouped/aggregated under a potential sector/theme to serve as the basis for strategic analysis. While it might seem obvious at first glance, the connection between project and strategy is sometimes approached ad hoc, and due to the different implementing/management bodies, differing perceptions of how to do so still exist.

Efficiency in the unit’s (Devco B2) reports is a criterion tackled in aid modality (budget support) questions; however, a true assessment of efficiency would compare how the different modalities could have been more efficient in achieving similar outcomes/impacts in that theme or country. Nevertheless, such a comparison is normally not possible, given the unique nature and goals of the different aid modalities, the lack of counterfactuals1 and the segmentation between budget allocations. Therefore, the efficiency criterion might have a better chance of presenting alternative solutions for future activities when approached in sector evaluation questions. Comparing alternatives is always a complicated endeavour, given the difficulty of forecasting the impacts of alternatives. However, by tracing back the steps taken, alternative steps can be identified, especially when comparing proposed/planned actions with carried out/accomplished actions. Also, counterfactual evaluation comparisons (with vs. without intervention) might be easier when broken down into different and more tangible elements.

1 Counterfactual experiments through random trials normally work best in small-scale projects, in comparison to large aid-funded operations and broad policy changes.
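To make the with-vs.-without comparison above concrete, here is a minimal sketch (in Python, with entirely hypothetical attendance figures) that estimates an intervention effect as a simple difference in means together with a rough margin of error. It illustrates the idea only; it is not a substitute for a properly designed randomised trial or for the context analysis discussed elsewhere in this note.

```python
# Minimal sketch (illustrative only): a with-vs.-without comparison of an
# outcome indicator. Data are hypothetical school attendance rates (%).
import math
import statistics

with_intervention = [78, 82, 75, 88, 80, 84, 79, 86]
without_intervention = [70, 74, 68, 77, 72, 75, 69, 73]

def mean_and_se(sample):
    """Return the sample mean and its standard error."""
    mean = statistics.mean(sample)
    se = statistics.stdev(sample) / math.sqrt(len(sample))
    return mean, se

m_with, se_with = mean_and_se(with_intervention)
m_without, se_without = mean_and_se(without_intervention)

effect = m_with - m_without                        # estimated gain from the intervention
se_effect = math.sqrt(se_with**2 + se_without**2)  # combined standard error

print(f"Estimated effect: {effect:.1f} percentage points "
      f"(+/- {1.96 * se_effect:.1f} at roughly 95% confidence)")
```

The explicit margin of error in the last line is deliberate: as argued in the introduction, a ratio or effect estimate reported without it loses credibility.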



Expectations:
The wide range of definitions and understandings of efficiency leads to conflicting expectations of this criterion. Thus, before matching evaluation questions with the 5 DAC criteria + the 3 Cs + EU added value, it is vital to undertake a questioning exercise in which the use of each criterion per evaluation question is justified (whatever the need or validation might be). Evaluation questions should not be randomly matched with a criterion, nor be a reflex of an automatic response to previous evaluation experience. The value of, and need for, each criterion should be questioned, so that evaluations can attempt to balance the need to meet both the accountability and the lessons-learned objectives, provided that criterion selection addresses the needs of donors and beneficiaries and attempts to minimize diverging expectations.

A recent report commissioned by BMZ, titled Tools and Methods for Evaluating the Efficiency of Development Interventions, differentiates methods of efficiency analysis by their analytical power into three sub-categories:

Level 0: the analysis is entirely descriptive and usually cannot produce well-founded recommendations. For this level of analysis, recognizing the limitations of the findings is critical to avoid proceeding with actions under a misconception of the evidence basis.
Level 1: the analysis is capable of identifying the potential for efficiency improvements within aid interventions, i.e. improving interventions operationally.
Level 2: the analysis is capable of assessing the efficiency of an aid intervention so that it can be compared with alternatives or benchmarks.

The complex nature of strategic evaluations imposes some constraints (time, resources and capacity) on tackling efficiency at level 2; however, depending on the need (of beneficiaries/donors) for tackling this criterion, one or two questions could be targeted at a more in-depth level. Also, at level 2, the concerted effort to arrive at a robust analysis also means (in theory) a lower level of subjectivity, which in turn translates into a more informed analysis with greater potential to guide users and improve future implementation.

A benchmarking approach, mentioned at level 2, is not a ‘once and for all’ recipe, due to the risk of being too general and of setting over-ambitious indicators that are not context-sensitive and flexible enough to adapt to changing situations. Developing indicators and a baseline is also a time-consuming activity, and problematic in cases where data availability is lacking. A way around this would be to use previous evaluations as an interim substitute while data capacity is built. Benchmarks, whenever possible, should reflect the singularities of the context, and should preferably be manageable/adaptable by the implementing bodies: local, national and potentially regional. In this way, fairer comparisons can be made, and a greater incentive for improvement is built in, since project beneficiaries and managers can better relate to such a context and indicators. A simple sketch of such a benchmark comparison is given below.
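As an illustration of the benchmarking idea at level 2, the sketch below derives a benchmark unit cost from a handful of comparable interventions and flags deviations beyond a chosen tolerance. All project names and figures are hypothetical, and the tolerance is simply an assumption standing in for the context sensitivity argued for above.

```python
# Minimal sketch (illustrative only) of a level 2-style benchmark comparison.
# Project names, unit costs and the tolerance are all hypothetical.
from statistics import median

# Hypothetical unit costs (EUR per household connected) of comparable
# electrification projects in the same region
comparable_unit_costs = {
    "project_A": 540.0,
    "project_B": 610.0,
    "project_C": 480.0,
    "project_D": 570.0,
}

benchmark = median(comparable_unit_costs.values())

def compare_to_benchmark(unit_cost, benchmark, tolerance=0.15):
    """Flag a unit cost that deviates from the benchmark by more than the
    tolerance (15% by default, to allow for differences in context)."""
    deviation = (unit_cost - benchmark) / benchmark
    if deviation > tolerance:
        verdict = "above benchmark: look for efficiency improvements"
    elif deviation < -tolerance:
        verdict = "below benchmark: check whether quality or scope differ"
    else:
        verdict = "within the benchmark range"
    return deviation, verdict

evaluated_unit_cost = 655.0  # hypothetical project under evaluation
deviation, verdict = compare_to_benchmark(evaluated_unit_cost, benchmark)
print(f"Benchmark: {benchmark:.0f} EUR; deviation: {deviation:+.0%} -> {verdict}")
```

Making the tolerance an explicit parameter is the point of the design: it keeps the allowance for local circumstances visible and adjustable per sector or region, rather than hidden inside the comparison.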

Also, country sector evaluations could potentially be benchmarked within regionally set indicators.

Problems/Obstacles
While not exhaustive, a few issues and inherent problems in tackling efficiency include:
- Inability to suggest/propose alternatives whenever the available resources are not pooled together under the same umbrella/budget/administrative department, etc. In such cases, evaluators do not know exactly the total amount of inputs and therefore cannot propose different alternative allocations.
- Lack of sufficient and reliable data to support alternative recommendations, a recurring problem not only for strategic evaluations but also for ROM reports.
- The inherent difficulty in measuring non-tangible benefits.

Needed Skills
A robust efficiency analysis requires a good understanding of national and regional systems in order to provide alternatives for project/strategy implementation. Acknowledging the context is vital in order to recommend alternative routes/ways/possibilities for carrying out the same activities with potentially better impacts. There must also be a combination of qualitative and quantitative data and exercises, in order to answer: what worked, and why did it work?

Steps ahead
- Better monitoring/data gathering. (The Commission is already going to great lengths to track its development activities for measuring impacts, but it should be realistic in this tracking exercise and recognize the sectors in which it must use proxy data.) There must also be a strengthening of capacity at the initial stage of tracking, as a lack of primary data will be reflected at the higher levels of analysis.
- Adjust expectations to reality. (Consultants, but also evaluation managers together with the reference group, must work together to determine their expectations of and needs for the efficiency criterion analysis; in other words, to propose the specific areas that need a detailed efficiency study.)
- Analyse efficiency from both the donor and the beneficiary points of view. In a strategic evaluation it is important to look at the donor-partner relationship, but the perspective of beneficiaries should be kept in mind in order to increase the utility of the report, especially for the budget support aid modality.
- Focused studies that could feed into/aid in answering efficiency questions in strategic evaluations (e.g. criteria studies by sector).
- Brainstorm ways to attach value to non-tangible activities (e.g. policy dialogue).

Most Important Bibliography:
Tools and Methods for Evaluating the Efficiency of Development Interventions (BMZ)
Office of Environment and Heritage (Australia, New South Wales)
Are SIDA Evaluations Good Enough? (SIDA Studies in Evaluation)
