WALKING THE TALK: REFLECTIONS ON THE USE OF INDICATORS TO MEASURE AND FOSTER LOCALLY LED DEVELOPMENT
Prepared in cooperation with PSC’s Council of International Development Companies
By Larry Cooley, Jean Gilson, and Indira Ahluwalia
Introduction
This paper builds on our prior paper, Perspectives on Localization,1 written under CIDC auspices and, to a lesser extent, on a second paper, Localization in Conflict Contexts, authored by MSI.2 Rather than repeat the arguments and lessons of those papers, it extends that discussion to specific considerations relevant to USAID’s proposed indicator for tracking its localization target #2: by 2030, 50 percent of USAID’s programming will place local communities in the lead to co-design a project, set priorities, drive implementation, and/or evaluate the impact of its programs.
The underlying goal of localization target #2 is to advance from inclusion of local actors to decision making by local actors, and to measure that progress. This objective acknowledges that local communities, entities, and stakeholders have strategic insights, context, and know-how about which solutions are scalable and sustainable, and that they have the legitimacy and standing to make lasting change. Target #2 also positions equity as the basis on which locally led development is applied; that is, by integrating local actors with international practitioners in USAID programming to jointly make decisions about program design and development.
Localization Targets
Moving forward, USAID will provide "at least a quarter of all of our funds directly to local partners" over the next four years. And by the end of the decade, "50 percent of our programming, at least half of every dollar we spend, will need to place local communities in the lead" to set priorities, design projects, and evaluate the impact of USAID programs.
Samantha Power, quotations from her speech at Georgetown University, November 4, 2021
We endorse the letter and spirit of this second localization objective. Although we come from the development consulting community and draw principally on that perspective in framing our views, this paper, like the previous two, is intended as our contribution to an ongoing discussion, not as an advocacy document. It is informed by our experience helping USAID and its partners design and implement previous indicator programs and reform efforts. The paper’s Annex offers a number of comments and case illustrations contributed by CIDC members relating to USAID’s draft list of good practices.
We are submitting this paper having followed USAID’s public statements about the forthcoming indicator but prior to seeing the proposed indicator and accompanying guidance. Some of our comments may therefore be overtaken by events. We nevertheless decided to present them now in hopes that they might have implications for the testing and modification expected to take place in the coming months.
While CIDC provided useful input based on members’ field experience, this paper has not been formally vetted by CIDC and expresses only the views of the authors.
1 https://www.pscouncil.org/a/Resources/2021/Perspectives_On_Localization.aspx
2 https://www.msiworldwide.com/sites/default/files/2022-07/Localization_in_Conflict_Contexts_FINAL.pdf
Reflections on the Choice and Use of Indicators
The Problem with Indicators Is That They Work
From our perspective, USAID’s experience suggests that indicators and performance measurement are very influential, but not always in the ways originally intended. This is partially because of the understandable tendency of respondents inside and outside the Agency to “teach to the test.” In this case, there is little doubt in our minds that the indicator selected will become the de facto, reductionist definition used by many to encompass the concepts of local voice, local ownership, and local leadership.
We have observed that performance measurement in USAID, as in other organizations, is typically designed to serve twin objectives: learning and accountability. Sometimes these functions complement one another, but more frequently they don’t; and when there are trade-offs, accountability often crowds out learning. Given the nuance, contextual complexity, and variation in current debates about “locally led” development, we believe this argues for a focus, at least in the near term, on listening and learning as our community (donors, implementing partners, and the full range of local actors) continues to refine in operational terms the ways in which local leadership, experience, and voice are most effectively manifested and supported.
The Primacy of Upstream Action and Engaged Dialogue
As contractors, we typically support USAID programming in the implementation phase. The opportunities for local leadership in this phase, while significant, are smaller and fewer than those that are potentially available “upstream,” when country priorities and strategies are established, and interventions designed. That upstream phase begins with the appropriation and budgeting processes, when decisions are taken that shape and limit the space available for local voice and prioritization; but considerable latitude also exists during program design.
We have also seen good use made of annual learning events and other participatory or country-led evaluative processes as mechanisms for local engagement and leadership, but only when those events are coupled with an appetite for incorporating the learning that emerges from such sessions back into the implementation process and future programming, possibly in the form of rolling workplans.
In USAID’s draft list of “good practices,” five of 30 suggested actions relate directly to priority setting and design. Another four relate to evaluation. At a minimum, we believe that such practices should be afforded special significance given their chances of affecting major resource allocation and programming decisions. Although earmarking, Congressional reporting, and Administration policies limit the scope for this kind of engagement to fundamentally shift resource allocation levels and priorities (in comparison to private foundations and grants-based entities such as the Inter-American Foundation), we would argue that local leadership is greatly facilitated by delegating as much authority as possible to Missions to make resource allocation decisions. Absent genuine willingness and latitude by donors to reflect and adapt to local priorities and knowledge, our experience suggests that extensive local consultation processes can result in inflated and unrealized expectations, levels of effort disproportionate to their impact, and frustrated participants.
At the implementation stage of the project cycle, we have seen a variety of cases in which efforts to demonstrate local leadership in the absence of budgetary and programming latitude yield empty and overlapping consultations. In the contracting sector, for example, this includes cases where competitive RFPs result in frantic efforts by a host of bidders to engage the same array of host country actors during a compressed proposal development timeframe. Ironically, the recent preference in some USAID procurements for engaging local partners on a non-exclusive basis has, on some occasions, compounded this problem.
Selecting an Indicator
There are important differences between localization targets #1 and #2. Most importantly, we believe there are major risks in the application of a metric for target #2 that lacks flexibility to reflect important contextual factors such as fragile versus non-fragile states, authoritarian versus democratic governments, and inclusive versus exclusive treatment of marginalized groups. It is also unrealistic to expect a static indicator to keep pace with and reflect the reality of an evolving local landscape, or to drive progress in places where local leadership and transformation are proceeding at breakneck pace.
Definitional issues surround even relatively simple concepts such as “local implementing partner.” But these challenges pale in comparison to the nuances associated with concepts such as “local communities,” “in the lead,” and “shifting power.” Underestimating this complexity could spur a cottage industry of definitional debate, further privilege the voices of entrenched elites given where power resides, and inadvertently incentivize actions that would not enjoy widespread support as strategies for enhancing local leadership.
One approach to measurement would be to use “good practices” applied by USAID during the project cycle as a proxy for assessing progress toward locally led development. Based on experience with other indicators that use the enumeration of internal administrative practices as a proxy for intended outcomes, we believe this approach has some merit in focusing USAID’s attention on concrete, manageable actions it can take unilaterally. But it could also be seen as reflecting a check-the-box mentality, failing to reflect local perspectives regarding these measures, and encouraging a reluctance to consider alternative ways forward.
Language also matters. Problems cited to us include a range of embedded and often inadvertent messages in the very language used to address equity, inclusion, agency, and the distribution of power (for example, “we place communities in the lead”).
A final and related concern is that any indicator and assessment system established and applied uniformly by a single donor runs an inherent risk of inadvertently fragmenting measurement at the country level and thereby undermining the concept of locally led development it is intended to advance.
We recognize that there are trade-offs between the considerations we raise above and the action-forcing power of a single, unambiguous indicator—particularly one for which USAID hopes to achieve global support. We also appreciate the practical obstacles to introducing an indicator that imposes additional burdens on USAID’s already-stretched staff in Washington and in partner countries.
The remainder of this paper offers our suggestions on how to maximize these upside benefits and minimize the downside risks. It incorporates lessons from, among other things, our experience working alongside USAID in the roll-out of the Development Fund for Africa, environmental impact requirements, and policies on collaborating, learning, and adapting (CLA), as well as recent efforts we have observed from GIZ and OECD/DAC.
Getting the Most Out of an Indicator and the Indicator Process
If USAID decides to adopt a single, uniform indicator that can be centrally managed and applied in a consistent manner, we believe the best options to maximize benefits and minimize risks relate to the process and the scaffolding that surround whatever indicator is selected.
Some ideas we have seen used to positive effect include:
Process
• Country-Based Coordination: If USAID is to achieve widespread acceptance of the indicator, it should take full advantage of existing consultative mechanisms established by host governments and others to limit cumbersome, donor-specific, and duplicative consultation. Where possible, these same venues can be used to establish agreed standards and examples of locally led development.
• Self-Assessment with Selective Audit: USAID’s experience underscores the fact that the more Missions and their key stakeholders are involved in designing and assessing the consultative and power-shifting processes, the greater their impact will likely be. Conversely, the more these processes are seen as centrally dictated bureaucratic requirements, the more likely they are to lack local ownership and to be ignored, or even resented, at the Mission level. In our view, the most realistic way to square that circle is to ask Missions to score themselves against established criteria, to present the rationale for that self-assessment, and to be subject to selective audit. This process has been used successfully by USAID on several previous occasions, often linked to Mission-based task forces and reform initiatives. In several of those cases, Regional Bureaus played important intermediation roles.
• Local Advisory/Review Groups: Missions should be encouraged and supported to establish stakeholder consultation processes where these do not exist and to use these processes to advise on programming and to push the envelope of opportunities for local leadership. Among other things, these groups could be asked to express their opinions on USAID’s self-assessment ratings, or perhaps provide their own ratings.
• Peer Learning: Missions could be encouraged to self-nominate to pilot innovative approaches for the measurement of locally led development, with high-level recognition and an annual process for global information sharing such as the Spring Reviews USAID used to run so successfully.
• Country-Based Learning Events: USAID should actively encourage, participate in, and (if necessary) help fund country-based learning events that feature local voices and include explicit discussion of issues of localization, inclusion, and decision making, with feedback loops or other mechanisms to ensure learning is incorporated into country strategies.
Scaffolding
• Maturity Models: Although admittedly more complicated than single, binary indicators, maturity models have been used by USAID with positive results to track and support the evolution of multifaceted and context-dependent processes such as localization. These models could be used either as a substitute for, or in addition to, a simple indicator.
• Supporting Narrative: Again, at the cost of some additional effort, Missions could be asked to provide supporting narratives describing what the Mission or its individual activities have done to support locally led development. These narratives, along with the results of simpler indicators, could form the basis for discussion inside the Mission, with local stakeholders, and with Washington.
• Case Competitions: USAID has done some of its most innovative work and advanced some of its most lasting internal reforms through peer-to-peer learning. In several recent and not-so-recent examples, the Agency has used case competitions to accelerate, enhance, and systematize this learning process. The roll-out of localization target #2 would be well suited to a process of this sort, perhaps associated with high-level recognition and more tangible benefits for the best entries.
• Indicator Reference Sheets: Considerable supplementary guidance to Missions could be embodied in more-extensive-than-usual indicator reference sheets that, in addition to the typical definitional guidance, encourage complementary actions of the sort suggested in this paper and elsewhere.
Supporting the Internal Change Process
As USAID leadership has rightly emphasized, implementing the current localization initiative has profound implications for the Agency’s programming and operations, talent acquisition and development, organizational culture, and the day-to-day work of USAID staff. Having watched and participated in previous reform efforts, we have deep respect for the demands associated with this effort, and we hope USAID’s implementing partners will find effective ways to support it.
Most of the good practices on the draft list refer to actions to be taken directly by USAID staff, and we wholeheartedly support USAID’s efforts to get Missions the staffing and other resources they need to implement the letter and spirit of the localization effort.
But there are several areas where the Agency’s partners can do more to help. In addition to the work undertaken by contractors and grantees during activity implementation, this potential support includes upstream assistance as a force multiplier in organizing consultative processes (good practices 1-5); the design and implementation of participatory and community-led monitoring systems (good practices 23-24); strengthening the readiness of local organizations to take on key leadership roles (good practices 26 and 29); and designing and implementing participatory evaluations and learning events (good practices 27, 28, and 30).
If it would be useful to USAID, we are confident that CIDC members could contribute additional examples of these practices in action. Current implementing partners can also serve as eyes and ears for USAID staff, providing observations and insights into some of the challenges being faced during roll-out of the reform.
We applaud USAID’s intensified focus on locally led development and the Agency’s recognition of the power of indicators to drive outcomes. It is our hope and belief that this impact can be substantially strengthened by incorporating additional contextual considerations and stakeholder engagement in shaping how these measures are operationalized in different settings.