
2.4 Challenges and limitations


The main analytical prism for this formative evaluation was qualitative content analysis, or textual analysis48 (of quantitative and qualitative sources), and discourse analysis (power analysis).49 The analysis of programmatic results was based on the 2018 LNOB guidance, which recommends three mutually reinforcing ‘levers’ for implementation.50 To ensure inter-rater reliability, three evaluators independently rated the extent to which each Signature Solution contributes to each lever. The assessment of programmatic results and organizational effectiveness additionally adopted a limited summative lens, retrospectively determining the extent to which UNDP is contributing to its set objectives. For this, the evaluation took a generative (or mechanism-based) approach to causality, inspired by process tracing.51 To analyse the extent and depth of gender-related approaches and results, the IEO gender results effectiveness scale was employed, classifying results as gender-negative, gender-blind, gender-targeted, gender-responsive or gender-transformative.52
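The inter-rater check described above can be illustrated with a minimal sketch: three raters each score the same set of Signature Solution–lever pairs, and agreement is averaged over all rater pairs. The rating scale and values below are hypothetical, purely for illustration, not the evaluation's actual data.

```python
# Hypothetical sketch of a simple inter-rater reliability check.
# Each evaluator rates the same (Signature Solution, lever) pairs
# on an illustrative 0-2 scale (0 = none, 1 = partial, 2 = strong).
from itertools import combinations

ratings = {
    "evaluator_a": [2, 1, 0, 2, 1, 2],
    "evaluator_b": [2, 1, 1, 2, 1, 2],
    "evaluator_c": [2, 0, 0, 2, 1, 2],
}

def pairwise_agreement(a, b):
    """Share of items on which two raters gave the same rating."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Average agreement over all rater pairs gives a crude reliability measure;
# a fuller analysis would use a chance-corrected statistic such as kappa.
scores = [pairwise_agreement(a, b)
          for a, b in combinations(ratings.values(), 2)]
mean_agreement = sum(scores) / len(scores)
```

Divergent ratings flagged by such a check would then be discussed and reconciled among the evaluators.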

The evaluation used artificial intelligence to determine, based on past evaluations, to what extent UNDP achieved results in support of the various LNOB groups identified in the LNOB marker.53 To do so, the IEO Artificial Intelligence for Development Analytics (AIDA) system identified 4,767 past UNDP evaluation reports of relevance, which were tagged and analysed through manual inductive and deductive coding and assessment.54 Keywords were agreed upon for each of the 18 LNOB groups, and results were filtered to include the period under evaluation.
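The keyword-and-period filtering step described above can be sketched roughly as follows. This is an illustrative reconstruction, not AIDA's actual implementation; the keyword sets, report records and function names are hypothetical.

```python
# Illustrative sketch of keyword-based filtering of evaluation reports
# by LNOB group and evaluation period (not AIDA's actual implementation).
import re
from dataclasses import dataclass

@dataclass
class Report:
    title: str
    year: int
    text: str

# Hypothetical keyword sets; the evaluation agreed keywords for all 18 groups.
KEYWORDS = {
    "persons_with_disabilities": ["disability", "disabilities", "accessible"],
    "migrants": ["migrant", "migration", "refugee"],
}

def matches_group(report, group, keywords=KEYWORDS):
    """True if the report text contains any agreed keyword for the group."""
    pattern = r"\b(" + "|".join(map(re.escape, keywords[group])) + r")\b"
    return re.search(pattern, report.text, flags=re.IGNORECASE) is not None

def filter_reports(reports, group, start_year, end_year):
    """Reports within the evaluation period that match the group's keywords."""
    return [r for r in reports
            if start_year <= r.year <= end_year and matches_group(r, group)]
```

Reports surfaced by a filter like this would then go to the manual inductive and deductive coding step.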

Finally, evidence was contrasted and compared, and patterns synthesized into key findings. These were distilled into lessons, higher level conclusions, and forward-looking formative recommendations.

The scoping phase of the evaluation included a basic evaluability assessment of whether LNOB integration in UNDP follows a clear and coherent logic, measured through well-articulated indicators of success, and whether data requirements have been fulfilled. Evaluability was deemed moderate, reinforcing the need for a formative approach. The evaluation worked with the best available information, but recognizes quality and coverage issues given the constrained evaluability.

In the absence of a counterfactual, the evaluation could not determine whether some of the programmatic results reported in chapter 5 were prompted by the integration of the LNOB principles and a subsequent mindset shift in UNDP, or whether results that would have occurred anyway were simply ‘rebranded’ as LNOB integration.

More disaggregated monitoring data would have been desirable to better assess LNOB and RBF integration, as well as fine-grained inputs such as meeting minutes, emails, training or presentation materials, and tracked changes on draft legislation, which were needed to assess the UNDP contribution through the process-tracing method used. Such data were not always available or accessible due to staff turnover and weak information management systems.

This evaluation was conducted during the COVID-19 pandemic, requiring considerable flexibility (e.g., substitution of team members to manage illness and well-being and to avoid delays, and exclusively remote data collection for IEO staff). To mitigate the related limitations, national evaluators were hired in focus countries for data collection. Respecting local safeguards, these national evaluators/data collectors were able to move freely within the national territory and collect data in local languages, reaching more rights-holders from various population groups.

48 See Miles, Matthew B. et al., Qualitative Data Analysis: A methods sourcebook, SAGE, Thousand Oaks, 2019.
49 See Van Dijk, Teun A., ‘Discourse, Power and Access’, in Caldas-Coulthard, Carmen Rosa et al., Texts and Practices, Routledge, London, 1995.
50 UNDP, What Does It Mean To Leave No One Behind?
51 Collier, David, ‘Understanding Process Tracing’, Political Science and Politics, vol. 44, No. 4, pp. 823-830. For a quick overview of application in evaluations, see the process tracing entry in INTRAC’s monitoring and evaluation universe.
52 UNDP Independent Evaluation Office, ‘The Gender Results Effectiveness Scale (GRES): A methodology guidance note’, UNDP, New York.
53 The 18 LNOB groups are enumerated in Table 1.
54 Inductive analysis was conducted for all 18 groups, deriving insights ground-up from past evaluations. In addition, a deductive process was followed for select groups, using a top-down approach that started from multiple premises assumed to be true, extrapolated from past thematic evaluation findings, and used as benchmarks. This type of analysis allowed the evaluation to track evidence of UNDP progress (or lack thereof) in its work with the select LNOB groups.
