Annex 5: Interview methodology and key findings
This annex provides an overview of the methodology applied for the endline KIIs and FGDs. It also presents the qualitative findings by core evaluation question. The specific interview schedules / topic guides used for interviews, along with disaggregated matrices of qualitative findings are available from Mott MacDonald upon request. More detailed qualitative findings for Part 1 of the synthesis report and the special studies are also available upon request.
The interview methodology
Qualitative data collection was conducted between late August and early December 2021.
The qualitative findings presented here are based on triangulated data collection from literature review, key informant interviews (KIIs) and focus group discussions (FGDs), linked to contribution analysis and examination of the Kuunika evaluation Theory of Change and hypotheses. The results presented below for core evaluation questions 1–4 are therefore grounded in information gathered from key project, health sector and partner stakeholders.
Table 11: Overview of interview sampling

Category of key informant                                                 Number
National stakeholders: MoH, representatives of partners, Kuunika staff       13
District stakeholders: District Health Office (DHO) and Council               8
Health facility stakeholders: In Charges, nurses, Data Clerks                31
International decentralization specialists                                    3
TOTAL                                                                        54
The matrices for the three broad categories of key informant interviews and focus group discussions (health facility staff, District Health Office and District Council staff and national/partner) are available as raw data from Mott MacDonald – these matrices represent the basis for analysis.
Limitations of the qualitative methodology: These included restrictions on travel due to the pandemic and the resultant challenges of conducting remote interviews through Microsoft Teams and Zoom. Connectivity problems led to several missed KIIs. Travel restrictions also meant it was not possible to undertake dedicated KIIs or FGDs with Health Surveillance Assistants (HSAs) / Disease Control Surveillance Assistants (DCSAs), or with any representatives of Health Facility, Village or other categories of health committee, as had been possible during the baseline and midline evaluations.
Summary of key qualitative findings by evaluation question
The following summary includes indicative quotes and examples only. These have been selected to illustrate the evaluators' process of contribution analysis and testing of the project theory of change and hypotheses (Annex 1). The full qualitative matrices available from Mott MacDonald contain more comprehensive quotes and examples. The findings below are organized by evaluation question.
Q1. Has data quality improved?
Key findings were as follows:
● Insufficient training and supervision: A number of Data Clerks stated that they had not received any initial or refresher training.
● Hierarchies of responsibility and collection: Several respondents at facility level indicated that there is a hierarchy involved in data entry, which is not seen as a high-status job. It was clear that some health workers continue to see themselves as outside the ‘data chain’, in terms not only of collection but also of analysis and application.
● The role of civil society in collecting and using data: As at baseline and midline, key informants acknowledged the critical role played by community organizations and Health Surveillance Assistants (now said to have been renamed Disease Control Surveillance Assistants, although HSA respondents identified themselves as such) in collecting data and supporting the management of patient access. At national level there is reference to the long-awaited and still-delayed LAHARS (Local Authority HIV and AIDS Reporting System), which is considered to provide opportunities for more user-friendly data collection.
● Data ‘super users’: As at midline, we identified some ‘data super users’. At endline these included a programme co-ordinator, a nurse and a DHO member. As illustrated by the quote below, a common emotion expressed by the super users was frustration: over lack of time, low quality of data, demands not matched by adequate support, and a desire to be able to use data more effectively to plan and deliver services.
“It is really frustrating not to be able to use data as they could and should be used: we know what could be done and we see obstacles still. I would really like to use DHIS2 to plan and deliver services and support others, here and at e.g. review meetings. The demands are placed on us, while the support is inadequate. Projects come, projects go, all with slightly different priorities - and they seldom talk enough to each other and certainly not to us or the DHO.” (Health facility staff member, Kuunika District)

● Use of EMRs and paper registers: At district and facility levels, respondents consistently reported that paper registers continue to be an essential part of data collection. Electronic Medical Records (EMRs), whatever the view of their usefulness, completeness, etc., were widely seen as too unreliable, given power outages and lack of connectivity, for any health facility to contemplate abandoning paper registers. Respondents were, however, able to identify the positives and negatives of both systems.
● Interaction with DHIS2: Respondents at facility level consistently reported that access to DHIS2 continues to be limited. Key barriers mentioned were the need for password access, the volume of data available and challenges in navigating the system. National-level key informants reported that a key issue remains how best to bring together the paper and EMR systems and combine the data in ways that are user-friendly and accessible.
“There's also the sheer volume of data - DHIS2 can overwhelm users. There isn't always a correlation between quantity and quality of data.” (National-level respondent)

● Perceptions of data quality at subnational levels (see the illustrative sketch after this list):
– Data completeness: Health facility respondents consistently referred to gaps in routine data collection systems (e.g. on ART information) and the need to retrospectively collect and reconcile the data.

“At ANC, entering data on [the EMR] - we abandoned it, because it was incomplete and poor. When women went for HIV testing, this didn't show in the [EMR] system, so we went back to the paper-based module.” (Health facility staff member, Kuunika District)

– Data accuracy: Several respondents referred to issues of human error and the fact that errors can be made in data entry - not least because many Data Clerks have not been appropriately trained, especially in the use of EMRs. Key challenges mentioned included: few opportunities for quality assurance, the volume and pressure of work at facility level, and the proliferation of different reporting systems.
– Data timeliness: Several respondents at facility and district level suggested that timeliness of reporting had deteriorated in the past two years. One reason given was the increased demand for COVID-19 data. Other frequent comments were that both paper and electronic reports need to be submitted to the district office, and that additional time is needed to backfill data.
● Whether trust in data increased as a result of Kuunika support: District-level respondents in Zomba mentioned support provided by the project that was considered to have increased trust in data. There were also perceptions of a wider range of GoM and donor partners providing support, and a degree of momentum towards more trustworthy data.

“I feel that data reliability has been increased, in part due to Kuunika supporting the setting up of the WhatsApp group, where we can triangulate, discuss, plot, track those who haven't reported - and take action. Such responsibility feels good.” (DHO staff members, Kuunika District)

● Progress on SOPs: One national-level key informant made the following observation:
“We came out with SOPs to drive the effort [to support use of digital data], but these are nowhere to be seen.” (National-level respondent)
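To make the completeness and timeliness findings above concrete, the following minimal Python sketch shows one conventional way such routine-reporting rates can be calculated: completeness as the share of expected reports received, and timeliness as the share received by a deadline. The facility names, dates and deadline are invented for illustration and do not reflect actual Kuunika, HMIS or DHIS2 data or calculations.

```python
from datetime import date

# Hypothetical monthly facility reports for one reporting period:
# (facility, period, submission date or None if never received).
reports = [
    ("Facility A", "2021-09", date(2021, 10, 10)),
    ("Facility B", "2021-09", None),                # expected but not received
    ("Facility C", "2021-09", date(2021, 10, 28)),  # received after the deadline
]

DEADLINE = date(2021, 10, 15)  # assumed submission deadline for the period

expected = len(reports)
received = [r for r in reports if r[2] is not None]
on_time = [r for r in received if r[2] <= DEADLINE]

completeness = len(received) / expected  # share of expected reports received
timeliness = len(on_time) / expected     # share received by the deadline

print(f"Completeness: {completeness:.0%}; Timeliness: {timeliness:.0%}")
# Output: Completeness: 67%; Timeliness: 33%
```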
Q2. Has data use increased?
Key findings were as follows:
● Using data at subnational levels: While some district-level officers reported some progress in data use, the general perception at facility level was that high workloads make consistent data use practices difficult.
“We have seen changes in service delivery - previously some were not able to enter details properly, now we can do that, so can review and make better use. There is faster data entry now, so that in itself improves quality of service delivery.” (Health facility staff member, Kuunika District)
“There are so many indicators and so much raw data; it is difficult for the Data Clerks to enter the mass of data and for us to look it up, make sense of it and use it. There are also so many Registers and so many demands from partners and the DHO and the DHA. So even if you're committed like me, it is hard to keep on top of the data and make best use.” (Health facility staff member, Kuunika District)
● Support for data use: Respondents at sub-national level suggested that support for data use largely hinges on intra- and inter-facility meetings. While the pandemic appears to have resulted in the widespread halting of quarterly district M&E review meetings, a small number of health facilities (2 of the 6 visited for the FGDs) mentioned that they hold their own internal data review meetings; one facility uses the meetings to develop monthly action plans. WhatsApp groups continue to function and are widely seen as positive, immediate and task-oriented, with individuals and health facilities able to confer at their site or more widely, to learn from each other and to solve problems within their scope.
Q3. Have attitudes and practices around decision-making improved?
Key findings were as follows:
● Development of a data use culture: One positive finding, from a minority of both health facility and DHO respondents, is that there continues to be some development of a ‘data culture’, indicating some movement towards improved attitudes and practices for evidence-based decision-making. The District Health Office was sometimes seen as facilitating a data use culture:
“We are guided by the DHO and also (increasingly) by the data we share and view on WhatsApp, DHIS2 + general discussions with other health workers. These have helped our health facility to be more self-sufficient in terms of planning to use HIV data - but resources always lag very far behind.” (Health facility staff members, Kuunika District)
● Dependence on data clerks: At facility level there were many references to data clerks having primary responsibility for data entry and use. For example:
“Is DHIS2 used for that? I've never used it; I rely on the Data Clerk - surely they are the ones to plan the way forward?” (Health facility staff member, Kuunika District)
“The data clerks use [DHIS2], but only one has been trained at this facility. The head of HMIS uses it most.” (Health facility staff members, Kuunika District)
● District acknowledgement of Kuunika support: Some district key informants recognised the gains from Kuunika support:
“The Department for Digital Health and Kuunika have worked very closely together. In our District the Kuunika people have been supportive in terms of DHIS2, which has helped us at the DHO with planning. Kuunika has tried to live up to its name, to support decision makers to understand data and make use of them.” (DHO staff member, Kuunika District)
● National-level perspectives on Kuunika support to the development of an evidence-based knowledge culture varied. Some key informants suggested that while more data is now available, it is not translated into decision-making, in part due to “lack of a performance management culture”. There were also concerns about: “too many partners, too many reporting formats and too many demands for data”.
Q4. Have key [HIV] service areas improved as a result?
Key findings were as follows:
● Kuunika inputs to improved service delivery: There were some perceptions that Kuunika has contributed to strengthening digital data systems across the board. A few respondents felt that such improvements are now embedded into the system.
“Those HMIS review meetings, where we look at indicators - those are useful and I think we now understand their value. Kuunika has supported us to shift towards greater use of data for planning, which can only assist service delivery improvements.” (DHO staff member, Kuunika District)
● Effects of the pandemic: A few respondents pointed out that the focus on COVID-19 data collection had resulted in reduced attention to data collection for other services. There were also concerns about recent reductions in mentoring and supportive supervision:
“I have missed mentoring during the pandemic; it encourages me to think about quality and use of data.” (Health facility staff member, non-Kuunika District)
● Sustainability of progress: Views were mixed. Zomba District respondents were the most positive, stating that the project had been proactive and problem-solving at both district and health facility levels even before the pandemic, and that the COVID-19 response had introduced a ‘push’ in data management, reporting and use. Because data collectors, analysts and users have all been aware of the paramount need for timeliness, this was said to have strengthened the whole process of data use. This attitudinal and practice shift was felt to be sustainable - with continued support.
Other views were that throughout the project, the emphasis has been on data collection and data extraction. There were concerns (especially at district level) that this did not promote sustainability in terms of data quality, ownership, and practical use of data for planning and service delivery purposes. The various meetings funded by Kuunika were beginning to expand engagement and ownership; however, the view was that these need to be re-started and sustained for the foreseeable future.
Annex 6: Lessons for evaluation methodologies
This annex reflects on the original impact evaluation design and its fitness for purpose in (i) capturing the changing activities and aims of Kuunika over its life and (ii) distilling the lessons learnt.
● The mixed-methods approach (quantitative and qualitative) remains valid. The mixed-methods evaluation approach combines simple and intuitive quantification of progress, triangulating and explaining it with deeper qualitative assessment. The complexity and multi-faceted nature of digital innovation in low-resource settings requires a depth and variety of measurement which cannot be satisfactorily achieved in any other way.
● A theory-based and contribution analysis approach to the evaluation design and implementation also remains valid. This approach can accommodate project complexity and adaptation, while providing a transparent framework for selecting the mixture of methods and for weighing and synthesising results. It also enables and invites participation and discussion that promote learning amongst stakeholders. However, there is perhaps more scope for scheduling points of joint reflection, progress updates and participatory learning (such as annual reviews).
● Building in emerging best practice: Since Kuunika began, there have been massive strides49 in the experience of digital health innovation: how to strategise it, how to do it and how to evaluate it. A lot has been learnt and shared since 2016/17 on best practice in all these areas.
● Keeping the formative research role distinct: The formative learning process (‘act fast, fail fast, learn fast’) should be explicitly wrapped into the design of the digital programme from the start, with programme documents and management processes maintained to work with, and capture, that approach.50
● Documentation of learning and course correction: While much of that formative learning is in real time, tacit and inherent in the programme and in the process of ‘learning by doing’, there is benefit in recording and disseminating not just what changes have happened and what developments have occurred, but why, and what the learning process has been to get there.
● Incorporating an evaluation utilisation approach: The above step would mean that maintaining and reviewing the key programme documents could then be linked into a light-touch, independent evaluation that employs more participative, self-assessment and scorecard approaches to assist and embed learning in the programme.
● Retaining some independence in the evaluation function is recommended, not only to provide verification and independent advice to the client and the wider digital health community, but also to help difficult issues be surfaced and potentially difficult messages delivered.
● Jointly developing some key programme design tools: for example, the project theory of change and M&E framework (which could be linked practically to a ‘digital roadmap’ for digital transformation). The particular division of labour and usable nature of these tools would need to be carefully agreed and negotiated. Consideration would need to be given to joint tools for: a) the technical back-end of the system architecture; b) organisational functions, including understanding digital confidence and innovation readiness – all contingent determinants of delivery; c) the wider contextual governance and legislative determinants and needs, including stakeholder and political economy assessments, and behavioural change and innovation adoption scoring.
● Application of standard global concepts, indicators and tools in the evaluation that have growing currency in the field of digital transformation: Global concepts such as ‘innovation readiness’, ‘digital confidence’ and ‘innovation adoption’ are now commonly defined, and measurement methods and composite indices have been developed which could be adapted for low-resource settings such as Malawi (a minimal illustrative sketch follows this list). It might also be possible to draw on in-country frameworks and methods for evaluating digital innovation (including in the health sector). Application of ‘non-adoption, abandonment, scale-up, spread and sustainability’ (NASSS) frameworks could add value to sustainability evaluations.51 Use of more standardised concepts and evaluation frameworks across digital health projects could assist comparison work and promote country-level peer-to-peer benchmarking and review.
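As a concrete illustration of the composite indices mentioned in the final bullet above, the short Python sketch below computes a weighted composite ‘readiness’ score from domain-level scores. The domains, weights and scores are hypothetical; they are not drawn from the Global Digital Health Index or any other actual index.

```python
# Minimal sketch of a weighted composite index of the kind used for
# 'innovation readiness' or 'digital confidence' scoring. All values
# below are invented for illustration.
domain_scores = {  # each domain scored 0-100
    "leadership_and_governance": 60,
    "workforce_capacity": 45,
    "infrastructure": 30,
    "services_and_applications": 55,
}
weights = {  # relative importance of each domain; sums to 1.0
    "leadership_and_governance": 0.3,
    "workforce_capacity": 0.3,
    "infrastructure": 0.2,
    "services_and_applications": 0.2,
}

# Weighted sum: 60*0.3 + 45*0.3 + 30*0.2 + 55*0.2 = 48.5
composite = sum(domain_scores[d] * weights[d] for d in domain_scores)
print(f"Composite readiness score: {composite:.1f} / 100")
```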
49 For example, the inaugural edition of the State of Digital Health report, presenting the Global Digital Health Index, was published in April 2019.
50 The original project design for Kuunika planned a two-part approach to learning and evaluation, reflecting the expected need for support to detailed formative research, rapid learning and feedback to the implementing partners, plus an independent evaluator to take an overview of the programme and the extent to which it had progressed towards its overall goal and impact. Later, Cooper/Smith took the formative research role with it into the implementation role, rightly making closer links between rapid learning and implementation.
51 Abimbola, S., Patel, B., Peiris, D. et al. The NASSS framework for ex post theorisation of technology-supported change in healthcare: worked example of the TORPEDO programme. BMC Med 17, 233 (2019). https://doi.org/10.1186/s12916-019-1463-x