Education
COVID-19 and the Education Crisis: Using the ‘A-Level Fiasco’ as a Catalyst for Regulatory Reform by
Udit Mahalingam and Michael Head
The ‘A-Level Fiasco’

In light of the Coronavirus pandemic and the cancellation of all in-person examinations, an alternative means of awarding grades was required for thousands of A-Level students in the United Kingdom (UK) this summer. However, the Department for Education and adjacent regulatory institutions failed to establish a standardisation process that was both robust and fair, a failure which caused anger and confusion for many students across England. This policy paper outlines potential remedies for these regulatory shortcomings, in the hope that the same errors in standardisation can be prevented from occurring in the future.

An Inaccurate Algorithm

At the root of the summer’s ‘A-Level fiasco’ was an algorithm created by the Office for Qualifications and Examinations
Regulation (Ofqual). In short, Ofqual’s algorithm predicted how A-Level grades should be distributed within a given school, rather than predicting results based on individual students’ prior attainment.1 To do this, it relied on data which prioritised each school’s historical performance, thus minimising the value of both GCSE results and the student rankings provided by each school after nationwide mock examinations in January 2020.2 Centre Assessed Grades, based on teachers’ predictions, were largely ignored in the construction of this algorithm. Whether it acted out of hubris or out of genuine belief in its statistical modelling, Ofqual’s confidence was contradicted by its own analysis, which revealed that the algorithm predicted inaccurate grades at least a third of the time.3 It is worth noting that Gavin Williamson, the UK’s Education Secretary, received evidence of serious flaws within the grading system prior to results day. Nevertheless, Williamson