Four characteristics of a strong monitoring approach

PRACTICE BRIEF #2 | JULY 2021

This Practice Brief summarises the learning from conducting assessments of seven Girls’ Education Challenge projects’ monitoring systems, which took place during the COVID-19 school closures between mid-2020 and mid-2021. It focuses on characteristics of the monitoring systems themselves and, based on the quality of resulting data from these systems, it presents the qualities of a good monitoring approach in an education project’s monitoring and evaluation (M&E) system. The brief highlights four areas that projects and their M&E teams can consider to build better, stronger and more effective monitoring systems.

Introduction

Monitoring systems are like the taste tests a chef does as they are cooking – very different from the evaluation of the final dish by a food critic. Without taste tests, the chef cannot ensure that the food the critic tastes will reflect the recipe, ingredients and cooking techniques used. Similarly, without sound monitoring systems, the findings of an evaluation remain an unnecessary unknown and are likely to be of poorer quality.

A routine practice on the Girls’ Education Challenge (GEC) is to conduct site visits to projects. The purpose of these site visits is to provide technical support to the project teams and understand how activities are implemented in the field. A critical entry point for these technical site visits is monitoring data.

Monitoring data collected by each project’s Monitoring, Evaluation and Learning (MEL) staff are used to inform the scope and goals of the site visit. Monitoring tools, reports of monitoring data and process documentation allow GEC Fund Manager staff to assess – ahead of technical visits – the nature of the monitoring data being captured. These documents also tell us about the project’s activities, the potential impact on girls, and gaps in the approach. This oversight of monitoring data reveals areas where additional technical support may be valuable to each project. During COVID-19, when on-the-ground site visits were suspended, the Fund Manager instead conducted an assessment of monitoring systems. This exercise helped inform how monitoring data should influence the decisions made by project teams and Fund Manager technical support staff regarding project activities.

This Practice Brief summarises the learning from assessments of seven projects during the COVID-19 school closures between mid-2020 and mid-2021. It focuses on characteristics of the monitoring systems themselves and, based on the quality of the resulting data, presents the qualities of a good monitoring approach in an education project’s M&E system. These characteristics are not unique to the COVID-19 context and are considerations for building better and stronger M&E systems.


This Practice Brief identifies four characteristics of strong monitoring systems from GEC projects to build better systems going forward:

1. Include formative measurements of outcomes, especially of learning
2. Use tools that strike a balance between being easy-to-administer and yielding actionable findings
3. Incorporate classroom observations to capture changes in instructional practices, particularly the adoption of GESI and safeguarding principles
4. Leverage digital data collection platforms to further streamline an already effective data aggregation process and field protocols

[Figure: Monitoring and evaluation compared. Both report progress – monitoring on outputs and intermediate outcomes, evaluation on outcomes; monitoring is utilised for formative purposes and evaluation for summative purposes; data are collected face-to-face, digitally or through hybrid approaches; and findings feed into intervention design, evaluation questions and monitoring tool revisions.]

Monitoring and evaluation are complementary exercises. When done well, the two inform and improve each other. Together, robust monitoring and evaluation systems report progress, have complementary purposes, rely on similar modalities for data collection and, when harmonised, can inform programme design and evaluative decisions.


1. Include formative measurements of outcomes, especially of learning

A common mistake in GEC projects’ monitoring systems, including during COVID-19-related school closures, is the absence of tools that measure student learning. As learning is the primary outcome of an education project, leaving it out of routine monitoring is a missed opportunity to gain insights into what and how much students are learning. Monitoring tools may touch upon learning – by examining module completion or attendance – but still fall short of assessing actual learning. For example, monitoring indicators such as “Number of girls attending” and “Number of girls absent”, and questions on monitoring tools such as “Was the teacher on time?”, “Did every student have a desk?” and “Did the teacher use appropriate materials?”, are seen all too often. These questions can help disaggregate learning data but, alone, are too broad to yield actionable findings about ways to improve student learning.

Similar shortcomings are often observed in lesson observation tools. For example, observation tools may hint at lesson quality by recording whether ‘appropriate’ lesson materials were present. However, without consensus among observers on what ‘appropriate’ means, the resulting variation in data between classrooms may be meaningless (i.e. is the difference between classrooms due to differences in the materials available, differences in interpretation by observers, or both?). Additionally, lesson observations do not always include direct follow-up with students. After completing a lesson observation, observers have a golden opportunity to interview three to five students from the class to assess whether they have mastered the lesson content. While one-to-one interviews must follow safeguarding protocols, feedback from the ultimate beneficiaries of project activities is critical to continuous improvement and to informing evaluation questions. Follow-up with students can also be play-based: a trivia game if competition between students is appropriate, or, if individuals’ responses need to stay discreet, a game in which students indicate their answers by raising a hand or making a motion (‘raise your knee if the answer is x’).

In the absence of direct testing of students’ learning, projects sometimes rely on self-reported perceptions – a measure of limited use without additional data. For example, asking students “How confident do you feel about your maths work?”, with response options ranging from ‘very confident’ to ‘not at all confident’, provides information on neither lesson quality nor students’ actual learning achievement. Worse, if used as a proxy for learning outcomes, it can easily be misinterpreted.

Formative measures of learning can be as small-scale as a few questions after an observation, or small-group assessments conducted on a sample of students at the end of a unit or term. Or they can be more substantive, such as census administration of a standardised assessment to gauge student knowledge at crucial points in the academic cycle. The purpose and use of the data, and the availability of resources, will inform the type of formative assessment of learning that is possible. However, even with limited resources, measuring student learning as part of regular monitoring systems is critical. Even data on a small sample of students at two to three time points in the year can be instructive for programme design and upcoming evaluation cycles.
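To make this concrete, below is a minimal sketch in Python (with entirely hypothetical student IDs, scores and column names, not drawn from any GEC dataset) of how formative scores from a small sample, collected at two or three time points, could be summarised for programme staff.

```python
# Minimal sketch (hypothetical data and column names): summarising formative
# assessment scores collected from a small sample of students at a few
# time points in the school year.
import pandas as pd

# Each row is one student's score on a short end-of-unit assessment.
records = pd.DataFrame(
    {
        "student_id": ["S01", "S02", "S03", "S01", "S02", "S03"],
        "time_point": ["term1", "term1", "term1", "term2", "term2", "term2"],
        "literacy_score": [12, 18, 9, 15, 20, 14],  # out of 25
    }
)

# Average score and number of students assessed at each time point.
summary = (
    records.groupby("time_point")["literacy_score"]
    .agg(["mean", "count"])
    .rename(columns={"mean": "avg_score", "count": "n_students"})
)
print(summary)
```

Even a two-row summary like this, produced each term, gives programme staff an early signal of whether learning is moving in the expected direction before the next evaluation point.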


2. Use tools that strike a balance between being easy-to-administer and yielding actionable findings

MEL staff often opt for an easy-to-administer response option to keep monitoring forms user-friendly and straightforward, using a binary input such as yes/no, agree/disagree or present/not present. Although binary inputs are easy to administer and reduce training and programming time, the volume of data they generate does not directly translate into actionable insights.

For example, the following excerpt from a monitoring tool based on a school site visit is easy to administer but produces limited insights:

ARE THERE STUDENTS WITHOUT THE FOLLOWING DAILY LEARNING MATERIALS?
Pencils, pens    YES / NO
Paper            YES / NO
Notebook         YES / NO
Textbook         YES / NO
Storybook        YES / NO

Instead, these questions can be framed according to the project’s intended next steps. For example, if the question aims to help the project determine how many learning materials to procure, then the desired response is a number. Alternatively, if the aim is to understand the quality of the materials, then a Likert scale may be the appropriate response type.

HOW MANY STUDENTS ARE WITHOUT THE FOLLOWING DAILY LEARNING MATERIALS?

                 BOYS    GIRLS
Pencils, pens    ____    ____
Paper            ____    ____
Notebook         ____    ____
Textbook         ____    ____
Storybook        ____    ____
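As a rough illustration of how count-based responses feed directly into a next step such as procurement, here is a minimal Python sketch (schools, materials and field names are hypothetical) that totals reported shortages per item.

```python
# Minimal sketch (hypothetical field names): turning per-classroom counts of
# students without materials into a simple procurement total per item.
from collections import defaultdict

# One record per observed classroom, as captured on the monitoring form.
classroom_reports = [
    {"school": "A", "material": "Textbook", "boys_without": 4, "girls_without": 6},
    {"school": "A", "material": "Notebook", "boys_without": 1, "girls_without": 2},
    {"school": "B", "material": "Textbook", "boys_without": 3, "girls_without": 5},
]

totals = defaultdict(int)
for report in classroom_reports:
    totals[report["material"]] += report["boys_without"] + report["girls_without"]

for material, needed in totals.items():
    print(f"{material}: procure at least {needed} units")
```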

Similarly, yes/no items such as the following do not provide evidence that girls or community members can report safeguarding issues:

Is there a hotline?           YES / NO
Is the hotline being used?    YES / NO

Instead, examples of how individuals behave (e.g. how they utilise the hotline) offer insights into how each step of the reporting pathway is – or is not – being used, as in the two protocols below.

DATA COLLECTOR ASSESSMENT OF SAFEGUARDING REPORTING PATHWAYS
Place a call to the hotline number. Did the call go through? How long did you wait to speak with someone?
Describe how girls and community members are made aware of reporting options, such as posters or other communication materials.

FM INTERVIEW PROTOCOL FOR SAFEGUARDING REPORTING PATHWAYS
Describe the ways to report a safeguarding incident in your community.
What options are available to girls who cannot access a phone or a suggestion/reporting box?
Provide an estimate of the number of reports you are aware of in the past x months.
Provide a short narrative of an anonymised case that you or someone you know reported.
Describe the response received.

Another easy-to-administer strategy that does not yield actionable findings is the use of open-ended follow-up questions. As a follow-up to a yes/no item, respondents are often asked to “give an example” or “please explain”. The challenge with these follow-up questions comes when analysing the data:

1. Follow-up questions are time-consuming to analyse accurately: the follow-up is often ignored in the analysis because coding open-ended responses is far more complex and time-consuming than tallying the binary input. For example, an enumerator selected ‘no’ to the question “Does the teacher refrain from corporal punishment?” and provided this explanation in the follow-up: “The teacher only beats the students once, but not three times.” The analyst needs time to read each response and recode the binary input. Instead, utilise the binary input as a filter to obtain meaningful follow-up information – for example, “Did you observe evidence of the teacher engaging in corporal punishment?” followed by “Summarise the evidence”. This still requires time-consuming analysis, but it is more likely to provide the desired information (see the sketch after this list).

2. Follow-up questions may require specific knowledge to interpret accurately: the analyst may not have the technical expertise to assess whether a follow-up response is accurate or how to report it. Furthermore, even if the data analyst takes the time to read every follow-up response, they may not know, without additional input, what type of summary would be helpful to programme staff. Instead, work with programme staff when developing items so that the response options align with the analysis plan and the programme team’s intended use.
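To illustrate the filtering idea from point 1 in code, here is a minimal Python sketch (field names and records are hypothetical) in which only follow-up text attached to a flagged observation reaches the analyst, rather than every open-ended response.

```python
# Minimal sketch (hypothetical field names): using the binary input as a filter
# so only follow-up text tied to an observed incident reaches the analyst.
observations = [
    {"classroom": "C1", "evidence_observed": False, "summary": ""},
    {"classroom": "C2", "evidence_observed": True,
     "summary": "Observer noted the teacher hitting a student during the maths drill."},
    {"classroom": "C3", "evidence_observed": False, "summary": ""},
]

# Analysts only review records where evidence was flagged.
flagged = [o for o in observations if o["evidence_observed"]]
for record in flagged:
    print(f"{record['classroom']}: {record['summary']}")
```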


3. Incorporate classroom observations to capture changes in instructional practices, particularly the adoption of gender and social inclusion (GESI) and safeguarding principles

Classroom observations are one of the few ways that actual changes in behaviour can be measured. Knowing whether teachers embrace and use GESI and safeguarding strategies – a key focus of GEC projects – is particularly challenging to measure. During COVID-19, restrictions limited in-person observations (even when classrooms were in session); classroom observation tools were put on hold and are expected to resume as schools reopen.

The biggest shortcoming of observation tools is that they often ask one or two questions on gender and social inclusion and safeguarding but leave all the work of interpreting the question to the observers. For example, an observation tool may ask, “Does the teacher apply gender-responsive and sensitive teaching methods? Give an example.” The intention behind this question is essential to capture, but as structured it may not provide accurate information. A typical response to the item, as written, is: YES, the teacher asked a question directly to a girl; or YES, 50% of the class were girls. With such responses, the project makes a broad and inaccurate assumption and creates a record that the teacher meets GESI targets.

Instead, work with experts to develop observation items that capture GESI and safeguarding practices in the classroom with greater accuracy. The following five strategies can help improve classroom observations and the quality and utility of the data they produce:

1. Break down questions into concrete and discrete observable behaviours (see the sketch after this list). Avoid items that ask for judgement by the observer unless the criteria for assessment are laid out and the observer is trained to use them.

2. Link teacher training content to the monitoring tool. The monitoring tool should list specific teaching strategies as taught during teacher GESI training, and enumerator training should prepare observers to recognise those strategies in action.

3. Train observers on how to collect data, including the specific behaviours for which they are looking. Training observers is the cornerstone of reliable observation data, which is essential in making judgements about teaching and learning in classrooms.

4. Provide observers with support materials to check GESI performance in the classroom, including materials specifically tied to project interventions. These should include examples from real classrooms, and ongoing refresher training should give observers space to discuss what they have observed.

5. Allow the teacher to provide reflections and feedback following the observation. Findings from classroom observation tools can be complemented with teacher reflection data. Plan a 15-minute reflection session after the lesson and record the teacher’s feedback on the day’s lesson, their perspective on key challenges, strategies they found helpful or struggled with, and student engagement, among other topics. Including the teacher can enrich the project’s understanding of how classroom instruction is changing while offering teachers the opportunity to use the data for their own development.
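As a simple illustration of strategy 1, here is a minimal Python sketch of an observation checklist broken into discrete, observable behaviours; the item wording and response types are hypothetical and not drawn from any GEC tool.

```python
# Minimal sketch (hypothetical item wording): an observation checklist broken
# into discrete, observable behaviours rather than one broad judgement item.
gesi_checklist = [
    {"id": "Q1", "behaviour": "Teacher calls on girls and boys in roughly equal numbers",
     "response_type": "tally_girls_boys"},
    {"id": "Q2", "behaviour": "Teacher waits at least 3 seconds after asking a question",
     "response_type": "yes_no"},
    {"id": "Q3", "behaviour": "Teacher runs at least one group activity with mixed-sex groups",
     "response_type": "yes_no"},
]

for item in gesi_checklist:
    print(f"{item['id']}: {item['behaviour']} [{item['response_type']}]")
```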


4. Leverage digital data collection platforms to further streamline an already effective data aggregation process and field protocols

Before COVID-19, and increasingly in response to COVID-19-related travel restrictions, projects have been transitioning to digital tools for data collection. Digital tools used by GEC projects include KOBO Collect, Tangerine, Solstice, Open Data Kit (ODK) and SMS platforms. However, while digital platforms may streamline data capture, they do not guarantee robust monitoring systems. A digital platform is only one element of a strong system: it helps streamline a good data aggregation process and field protocols. Data aggregation is the flow of data from the field, through multiple users along the way, to a centralised database. Field protocols ensure data collection procedures capture a range of respondent types and data.

Evaluators of GEC projects are often encouraged by the Fund Manager to capture the most important voices – those of girls, their families and community members. This requires well-designed field protocols that can feed data through a field-to-database aggregation process. Therefore, when deciding to use a digital platform for data collection, the platform should build on an already strong data aggregation process and field protocols.

Data aggregation should be coupled with data utilisation at each step from field to central database. For example, a paper-based data collection system may require forms to be filled out by enumerators in classrooms, then compiled and entered at the district office, then checked and sent forward in Excel sheets to a central office in-country or internationally. A digital platform streamlines this process, minimises the field-to-database delay and reduces opportunities for errors.

However, an effective data collection system is one in which data utilisation is built into the process of data aggregation.

At the classroom site, some digital platforms can summarise the data immediately, which gives enumerators an opportunity to review it. It also allows enumerators to share results with teachers and students to capture their reflections and correct any errors in basic data points (e.g. student age, classroom materials present). At the district level, in place of data entry tasks, MEL staff can use some of their time savings to review the aggregated data coming in through the digital platform, reconcile missing uploads, and quickly determine next steps such as delivering materials to classrooms or following up with teachers who have not completed training. Data utilisation requires focused training on interpreting and using findings, and MEL staff at all levels must be empowered to act on the results.
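As an illustration of building utilisation into aggregation, here is a minimal Python sketch (school names and fields are hypothetical) of the kind of reconciliation a district MEL officer could run on a day's digital submissions before following up.

```python
# Minimal sketch (hypothetical names): reconciling the day's digital submissions
# against the list of schools expected to report, so gaps are followed up quickly.
expected_schools = {"Kioo Primary", "Mto Primary", "Bonde Primary"}

submissions = [
    {"school": "Kioo Primary", "girls_present": 38, "girls_enrolled": 45},
    {"school": "Bonde Primary", "girls_present": 27, "girls_enrolled": 30},
]

reported = {s["school"] for s in submissions}
missing = expected_schools - reported
print("No upload received from:", ", ".join(sorted(missing)) or "none")

# Quick attendance-rate check on the data that did arrive.
for s in submissions:
    rate = s["girls_present"] / s["girls_enrolled"]
    print(f"{s['school']}: {rate:.0%} of enrolled girls present")
```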

Field protocols should not rely solely on MEL staff to conduct site visits but should instead leverage a range of data collection methods based on respondents’ accessibility. Particularly during COVID-19-related travel restrictions, projects need data inputs from the field as directly as possible, without needing an army of data collectors to make frequent visits. For example, a few projects established data links directly with teachers and students. During school closures, projects established SMS-based collection protocols with school directors to track teachers’ attendance. Similarly, SMS was used to gather reports from girls (using parents’ or friends’ phones) regarding their participation in various project interventions. Field protocols that incorporate direct links with respondents can foster data-sharing behaviours that are sustained beyond the life of a project. More importantly, direct links to respondents reduce the risk that beneficiary subgroups are mistakenly excluded from monitoring data.
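To illustrate how such direct links might be processed, here is a minimal Python sketch that assumes a simple, hypothetical SMS message format agreed with school directors; it is not based on any specific GEC project's protocol.

```python
# Minimal sketch (hypothetical message format): parsing short SMS reports sent
# by school directors, e.g. "SCH01 TEACHERS 8/10" = 8 of 10 teachers present.
import re

inbox = ["SCH01 TEACHERS 8/10", "SCH02 TEACHERS 10/10", "SCH03 TEACHERS 6/9"]

pattern = re.compile(r"(?P<school>\S+)\s+TEACHERS\s+(?P<present>\d+)/(?P<total>\d+)")

for message in inbox:
    match = pattern.match(message)
    if not match:
        continue  # unparseable messages would be flagged for manual follow-up
    present, total = int(match["present"]), int(match["total"])
    print(f"{match['school']}: {present}/{total} teachers present ({present / total:.0%})")
```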

Conclusion

Effective monitoring systems on the GEC have illustrated that reliable and meaningful findings do not require substantial resources, but they do require thoughtful and strategic planning. This brief has highlighted four areas that can make for a more effective monitoring system. With travel restrictions forcing MEL teams to be creative about capturing robust monitoring data, quality tools and processes have become even more crucial. When monitoring systems for education projects are based on thorough planning that includes these four considerations, the resulting system is likely to yield actionable findings and, ultimately, better outcomes for marginalised girls.


This Practice Brief was prepared by Elsa van Vuuren and Hetal Thukral of the Girls’ Education Challenge (GEC) Fund Manager team.

For more information, contact: learningteam@girlseducationchallenge.org | www.girlseducationchallenge.org

The Girls’ Education Challenge is a project funded by the UK’s Foreign, Commonwealth and Development Office (“FCDO”), formerly the Department for International Development (“DFID”), and is led and administered by PricewaterhouseCoopers LLP and Mott MacDonald (trading as Cambridge Education), working with organisations including Nathan Associates London Ltd. and Social Development Direct Ltd. This publication has been prepared for general guidance on matters of interest only and does not constitute professional advice. You should not act upon the information contained in this publication without obtaining specific professional advice. No representation or warranty (express or implied) is given as to the accuracy or completeness of the information contained in this publication, and, to the extent permitted by law, PricewaterhouseCoopers LLP and the other entities managing the Girls’ Education Challenge (as listed above) do not accept or assume any liability, responsibility or duty of care for any consequences of you or anyone else acting, or refraining to act, in reliance on the information contained in this publication or for any decision based on it.
