ANNEX 1 METHODOLOGICAL REFLECTIONS ON MONITORING AND EVALUATING INNOVATION FOR EDUCATION
IfE provides an opportunity to reflect on the processes of monitoring and evaluating innovation. Across the projects, a number of key lessons have been identified. The first is the merit of mixed-methods research design for evaluating innovation. All of the projects used several different methods to evaluate their innovations. For the majority, this was a combination of quantitative measurement of change (for example, changes in student test scores) with qualitative exploration of the perspectives of key stakeholders. Those projects directly involved with change in the classroom also used classroom observations and video data to understand teacher and learner behaviour. While the quantitative data provide evidence of the outcomes of a particular innovation, it has often been the qualitative findings that explain why an intervention was, or was not, deemed successful. Significantly, IfE has gone beyond outcomes alone to understand the necessary conditions for innovation. By analysing the processes as well as the outcomes of the innovations, IfE offers reflections on the conditions and contexts that enable or hinder innovation implementation. Here, qualitative findings have been invaluable.
Given the varied skills and capabilities required across different research methodologies, there is an important lesson here about the need for research skills development. This was particularly evident in qualitative data collection and analysis, where the necessary skills were not widely available among local field researchers. Significant support for project monitoring and evaluation was required from the Fund Manager, both within individual projects and in their overarching management. This suggests the importance of building monitoring and evaluation capacity development into project design, so that innovation projects can demonstrate impact with the required strength of evidence. It also suggests the need to consider deploying external evaluators to undertake monitoring and evaluation of projects, although this has cost implications.
Findings from the IfE projects show that it is possible to see improvements in learning outcomes through innovation, but this tends to occur only for innovations that focus on a single variable, with tests aligned to that variable. Examples discussed here have included language and literacy. In projects where there was a general move towards more effective teaching, an impact on learning outcomes measured through examination results was often assumed. However, examination results have been shown not always to be an appropriate way to understand the impact of an innovation that seeks to improve learning and teaching.
Evidence shows that it can often take considerable time and perseverance, perhaps five years or more, for innovations to begin to have a meaningful impact on learner outcomes. This is particularly true for innovations that aim to influence the wider environment for teaching and learning, for example by inculcating new skills amongst educators, or by introducing new practices or materials that require changes in behaviour and attitudes on the part of teachers and learners before change can take place. In this respect, innovation carries risk and requires significant, long-term commitment on the part of policy makers, as well as changes in the attitudes and behaviour of key stakeholders. Although demonstrable improvements in learning outcomes are the gold standard by which innovations are ultimately evaluated, in the short term they may not be useful or reliable indicators of an innovation's longer-term potential. There is therefore a role for intermediate outcomes in evaluating the progress of innovations.