Alison Scott

Head of Faculty – ELearning and Research Services

Leading Learning in the Age of League Tables: Disciplinary Power, Performativity and the Fight for the Ethical Self


Introduction

The schooling system in Australia has been exposed to myriad reforms and political agendas in the last decade, including the introduction of a national curriculum to mandate perceived requisite content knowledge and pedagogical approaches. Alongside this, the pressure for schools to perform on the National Assessment Program and Australia’s slide in the most recent PISA rankings (Thomson et al. 2019) have generated significant debate about the quality of both Australia’s education system and its educators and school leaders. The neoliberal policy discourses from which these changes and debates have emerged have given rise to an ‘identity crisis’, or, as Cohen (2013) contends, an ‘ontological insecurity’ for many educators, who have inevitably been shaped by the ‘surveillance’ (Sinclair 2011; Thomas 2011; Anderson & Grinberg 1998) doctrines and performativity agendas (Allix 2000; Ball 2003; Cohen 2013; Eacott 2010; Fitzgerald & Savage 2013; Grace 2000; Wrigley 2011) set forth by top-down policy decisions. A dilemma for educational leadership in this context is management of the power relations that exist in schools and how they shape teacher identity and agency, both at the macro-level in terms of neoliberal policy pressure and performative culture, and through the micro-politics of school-specific reform initiatives, internal leadership styles and community expectations. As an educational leader, it is paramount to recognise the influence these power relations have on staff when adopting and embedding digital and pedagogical innovation. Furthermore, the need to remain cognisant of the macro policy framework that governs the leadership of schools and dictates community perceptions of what is valued in education cannot be ignored in any changes that could be perceived as additional impositions on a sometimes change-fatigued and tyrannised teaching fraternity. It seems, all too often, that there is a tension between what is ‘right’ in terms of pedagogy and learning outcomes and what is ‘good’ for school testing results and league tables. At times, these two concepts seem completely incongruous. As someone trying to lead other educators, it is incumbent upon me to encourage staff to exist within these two concepts simultaneously, rather than continually seeing them in opposition to one another; this is crucial for change. When the development and delivery of rigorous, meaningful and innovative learning experiences is juxtaposed against a content-focused curriculum and the high-stakes nature of current testing regimes, teachers can feel overwhelmed by their roles, and it seems that, more often than not, the tests, and the league tables and media attention that arise from them, become the focus of, and reason for, school reform. Specific to the Queensland context, we embarked upon an entirely new system for the senior phase of learning in 2019, and this has sometimes meant that staff are reluctant or unable to engage at a deeper level with innovative practice in order to expand their repertoire of pedagogical and digital skills.

The ‘busyness’ of teachers is certainly not a new phenomenon, and the many and varied roles teachers are asked to take on certainly influence their ability to exhibit agency in their profession and develop a collective sense of self-efficacy; however, research suggests that staff reluctance to engage in innovative practice and technological change might not emerge from a dogmatic refusal to change practice, but could be the result of macro influences and the associated power dynamics exerted upon them, which shape their sense of ability and agency to enact change. Therefore, understanding and remaining sensitive to the identities teachers construct about themselves and their profession in the context of neoliberal policy agendas, alongside other macro and micro power relations that have shaped these identities, are key to encouraging a culture shift and exhibiting successful educational leadership. In examining these influences, discussions of Foucault’s disciplinary power, in conjunction with Bourdieu’s habitus, provide a useful entry point into the ways, whether consciously or unconsciously, we are shaped by ‘socially constituted dispositions’ (Hayes et al. 2004, p. 521) and how these inevitably conspire to change and challenge our identity, both as teacher and leader. Performativity – an exemplar of this power – exists as a ‘policy prop’ that, as explained by Fitzgerald and Savage (2013, p. 127), ‘ensure[s] a level of adherence to and compliance with the efficient and effective implementation of the wider reform agenda’ – an agenda that has a direct impact on teachers’ individual and collective self-efficacy and sense of agency over their profession.

The panopticism of the postmodern world

In order to comprehend and challenge teachers’ ‘identity crisis’, it is necessary to understand the power relations at work in constructing our identity in the first place. Cohen’s ‘In the back of our minds always’: Reflexivity as resistance for the performing principal (2013) speaks directly to the forces at work in this space and proves useful in articulating a foundational principle for leadership challenges: that wider societal pressures and neoliberal policy have been internalised by teachers, and this has affected their ability to exhibit agency over their pedagogy in ways that are harmful to all stakeholders. Cohen’s examination of Foucault’s disciplinary power is epiphanic in its assertion that these pressures and policies are ‘a modest, suspicious power, which [function] as a calculated but permanent economy’ (Cohen 2013, p. 7). Historically, Foucault described the progression to the now ubiquitous internalisation of neoliberal agendas as the move away from ‘sovereign power through torture’ to the more subtle, but perhaps equally insidious, ‘constant supervision’ model that emerged alongside capitalist society (Cohen 2013, p. 6). Postmodern society has therefore normalised conformity to macro neoliberal ideals through the panoptic-like infiltration of our thoughts and identities. As Anderson and Grinberg (1998, p. 334) so eloquently describe, ‘surveillance is permanent in its effect, even if it is discontinuous in its action’. The idea of policy as an omnipresent force is again stated by Allix (2000, p. 14), who describes Burns’ model of reframing teacher ‘wants’ as institutional ‘needs’ which are ‘socialised, collective and objective phenomena, derived from the environment’. In this model, the judgements of the teacher are subservient to the macro-level policy agendas imposed on schools from outside.

Eacott (2010) and Hayes et al. (2004) shed further light on this concept by explicating the theories of Pierre Bourdieu with regard to the influence neoliberal policy can exert over our socially constructed selves. Eacott (2010, p. 268) focuses on Bourdieu’s concept of strategy to articulate how our choices are not always our own; they are ‘not conscious, individual rational choice[s], rather appropriate actions taken without conscious reflection…Strategy or the feel for the game entails moves in the game that are based on mastery of its logic, acquired through experience, part of habitus’. Hayes et al. (2004, p. 521) build upon this notion of habitus as ‘the internalisation of the social structure so that its practices seem familiar, ‘taken for granted’, and common sense’. As educational leaders, then, our actions must be understood from within the social space in which they take place. ‘[T]here is a need to understand the context of the situation in relation to historical events’ (Eacott 2010, p. 276) and macro-level policy frameworks before we can hope to subvert their influence on us and our teachers.

Performativity

One such framework that continues to exert ever-increasing influence on the lives and work of educators is the rise of an accountability-driven culture. The research on performativity deftly articulates the pressures that educational leaders and teachers face to conform to standards and ‘yardsticks’ set by macro-level neoliberal agendas. First, Ball (2003, p. 144) provides a definition of performativity: ‘Performativity is a technology, a culture and a mode of regulation that employs judgements, comparisons and displays as means of incentive, control, attrition and change – based on rewards and sanctions (both material and symbolic).’ He goes on to explain that ‘[t]he act of teaching and the subjectivity of the teacher are both profoundly changed within the new management panopticism (of quality and excellence) and the new forms of entrepreneurial control (through marketing and competition)’ (Ball 2003, p. 146). In other words, our teachers are co-opted into adhering to the neoliberal agendas of the day through fear of sanction for lack of compliance, or worse still, poor student outcomes. The ‘meaningfulness of what they do’ is forsaken and ‘what is important is what works’ in order to ‘produce measurable and ‘improving’ outputs and performances’ (Ball 2003, p. 150). The notion of ‘what works’ in an accountability-driven culture is picked up in the writings of Eacott (2010) and Fitzgerald and Savage (2013), who both contend that the emergent reductive ethos permeating educational policy rationalises education down to a set of ‘skills, knowledge and professional practices that teachers and school leaders must possess’ (Fitzgerald & Savage 2013, p. 132). Grace (2000), in his work Research and the Challenges of Contemporary School Leadership: The Contribution of Critical Scholarship, opines that the neoliberal rise of a market and management culture in 1980s England and the consequent performativity agendas precipitated the demise of a ‘plurality of views’ and ‘professional autonomy’. The introduction of the National Professional Qualification for Headteachers (NPQH) as the ‘required gateway’ to school leadership seemed to seal this fate (Grace 2000, p. 233). Unfortunate parallels can be drawn with Fitzgerald and Savage (2013, p. 130), who speak of the AITSL Australian Professional Standards for Teachers as the ‘official language of leadership scripts’. Choice and agency seem to have been taken away from the profession in myriad ways, and, in their stead, we have uniformity, compliance and league tables.

Teacher identity, self-efficacy and agency

The cumulative effect of omniscient disciplinary power and macro-level policy frameworks such as performativity is the gradual erosion of teacher identity. Ontological insecurity (Ball 2003; Cohen 2013; Fitzgerald & Savage 2013; Sinclair 2011; Thomas 2011) emerges when teachers and educational leaders are stripped of their power, knowingly and unknowingly – when the disciplinary power becomes so strong that they ‘willingly subject themselves’ to its dictates (Cohen 2013, p. 7). Fitzgerald and Savage (2013, p. 129) provide further insight into the way macro-level policy can impugn the self-efficacy of those in the teaching profession when teachers and school leaders are seen as ‘objects to be trained and therefore reduced to roles as high-level technicians certificated to implement dictates and objectives decided by experts far removed from the everyday realities of schools, staffrooms and classrooms’. Trust in the professional judgement of educators is eroded and replaced by unwavering belief in top-down policy – a concern raised far too often in my own context and across the teaching profession. Performativity at the expense of teacher judgement is also questioned by Gunter (2001, p. 98), who stresses that ‘[it] demands ICT, human, and evidence-based auditing and communications systems that alter the meaning of teaching from professional ethics to statistical calculations about a teacher’s worth’. Wrigley (2011) corroborates Gunter’s argument in asserting that micro-level school improvement initiatives are undermined by macro-level performativity policy, where schools seeking to develop democratic participation are attempting to do so within an education system based on command and control. Indeed, Wrigley cites Gunter’s work within his own to articulate the challenge of supporting teacher self-efficacy and agency in a neoliberal context:

The neo-liberal version of the performing school requires teachers and students to be followers, but to feel good about it … The problems of education have been laid at the door of teachers while their capacity for finding solutions has been taken away. The rhetoric has been of empowerment, participation and teams, but the reality is that teachers have had to continue to do what they have always done – be empowered to do what they have been told to do (Gunter 2001).

In being ‘told what to do’ by accountability demands from school leaders and wider policy frameworks, teachers have been subjected to judgement, comparison and surveillance on a hitherto unprecedented scale, undermining their self-efficacy and leading to a high degree of uncertainty and instability. As practitioners and school leaders, ‘[w]e become ontologically insecure: unsure whether we are doing enough, doing the right thing, doing as much as others, or as well as others, constantly looking to improve, to be better, to be excellent. And yet it is not always very clear what is expected of us’ (Ball 2003, p. 148).

Conclusion

Nevertheless, despite the concerns outlined above, there is hope. The literature clearly defines a way forward for me, as a school leader, and for teachers, to reframe and redress the effects of disciplinary power and performativity on teacher identity and agency. Cohen’s (2013) ‘critical consciousness’ and Fitzgerald and Savage’s (2013) ‘abandoning the script’ light the way for educators. They speak of the need for educational leaders to be aware of the power that macro-level policy frameworks can exert upon us; we must therefore interact with neoliberal discourse in order to offset its influence. As educational leaders, our role ‘…simultaneously involves a conscious and unconscious contestation of hegemonic attempts to codify and bureaucratise leadership. Leaders in schools need to move beyond the blind conformity that scripts of ‘today’ and ‘tomorrow’ enunciate and critically engage with the spaces in which educational leadership is embedded’ (Fitzgerald & Savage 2013, p. 137). Hattam et al. (2010) put their faith in instructional leadership as a tool for changing the status quo. They support a marriage of sorts between macro-level accountability measures and micro-political school reform initiatives, where school leaders who practise instructional leadership are the conduit between performativity, classroom practice and improved student learning outcomes. ‘Where [leadership] is operating effectively, people understand what their role is as a leader and know the difference between leadership and management, and can focus at least a bit on the instructional leadership’ (Hattam et al. 2010, p. 60). They proffer that good support and clear messaging from leadership about ‘expectations and accountability for improvement in student achievement’ involve effective, targeted use of data that is made explicit and ‘public’ to staff. Additionally, Heffernan (2019) conceptualises her advice to educational leaders as the ‘punk rock’ approach to leadership. O’Hara (cited in Heffernan 2019, p. 120) defines the concept thus: ‘Punk is the ideal that people should think for themselves, be themselves, create their own rules and live their own lives, beyond what society has offered.’ As Heffernan (2019, p. 120) notes, this concept was born of a time when youths were rallying against what they saw as ‘bleak, oppressive futures’, and the parallels with the current performativity demands and societal pressure on teachers do not go unnoticed. Punk rock leadership, then, ‘…challenges dominant discourses about leadership and alters traditional notions of power within the school setting’ (Heffernan 2019, p. 117). All in all, the literature reveals that, as leaders and teachers, we must stand up and be counted. Be aware of the status quo, but do not be afraid to challenge it, and fight for the ethical self.

References

Allix, N 2000, ‘Transformational Leadership: Democratic or Despotic?’, Educational Management & Administration, vol. 28, no. 1, pp. 7-20.
Anderson, G & Grinberg, J 1998, ‘Educational Administration as a Disciplinary Practice: Appropriating Foucault’s Views of Power, Discourse and Method’, Educational Administration Quarterly, vol. 34, no. 3, pp. 329-353.
Ball, S 2005, Education Policy and Social Class: The Selected Works of Stephen J. Ball, Routledge, Florence.
Cohen, M 2014, ‘In the back of our minds always: reflexivity as resistance for the performing principal’, International Journal of Leadership in Education: Theory and Practice, vol. 17, no. 1, pp. 1-22.
Eacott, C 2010, ‘Bourdieu’s strategies and the challenge for educational leadership’, International Journal of Leadership in Education: Theory and Practice, vol. 13, no. 3, pp. 265-281.
Fitzgerald, T & Savage, J 2013, ‘Scripting, ritualising and performing leadership: interrogating recent policy developments in Australia’, Journal of Educational Administration and History, vol. 45, no. 2, pp. 126-143.
Grace, G 2000, ‘Research and the Challenges of Contemporary School Leadership: The Contribution of Critical Scholarship’, British Journal of Educational Studies, vol. 48, no. 3, pp. 231-247.
Gunter, H 2001, ‘Critical approaches to leadership in education’, Journal of Educational Enquiry, vol. 2, no. 2, pp. 94-108.
Hattam, R, Kerkham, L, Walsh, J, Barnett, J, Bills, D & Lietz, P 2010, ‘South Australian SILA Project Evaluation Report No 1’, Centre for Research in Education, University of South Australia, pp. 1-71.
Hayes, D, Christie, P, Mills, M & Lingard, B 2004, ‘Productive leaders and productive leadership: Schools as learning organisations’, Journal of Educational Administration, vol. 42, no. 5, pp. 520-538.
Heffernan, A 2019, ‘The ‘punk rock principal’: a metaphor for rethinking educational leadership’, Journal of Educational Administration and History, vol. 51, no. 2, pp. 117-132.
Sinclair, A 2011, Being Leaders: Identity and Identity Work in Leadership, Melbourne Business School, University of Melbourne, viewed 15 February 2020, http://works.bepress.com/amanda_sinclair/9
Thomas, R 2011, ‘Critical management studies on identity: mapping the terrain’, in M Alvesson, T Bridgman & H Willmott (eds), The Handbook of Critical Management Studies, Oxford University Press, Oxford, pp. 166-185.
Thomson, P 2009, ‘Headteacher autonomy: a sketch of a Bourdieuian field analysis of position and practice’, Critical Studies in Education, vol. 51, no. 1, pp. 5-20.
Thomson, S, De Bortoli, L, Underwood, C & Schmid, M 2019, PISA 2018: Reporting Australia’s Results. Volume I Student Performance, Australian Council for Educational Research, viewed 15 February 2020, https://research.acer.edu.au/ozpisa/35
Wrigley, T 2011, ‘Paradigms of School Change’, Management in Education, vol. 25, no. 2, pp. 62-66.


Vicki Strid

Head of Faculty – Mathematics

F.R.A.M.E.

The Underlying Principles of Teaching and Learning Mathematics at St Margaret’s Anglican Girls School

I can trace the origins of F.R.A.M.E. to a specific moment in time five years ago. I was giving back test papers to my Year 9 class. I came to GK and gave her back her paper. I was disheartened by the look in her eyes as she gazed upon her E+ before hurriedly turning her paper face down on her desk. This was not the first time I had handed back a paper to a student with a low grade, and this certainly wasn’t the first time GK had received a result of this standard, but it was a point where I thought ‘this is not good enough’. Einstein reputedly defined insanity as doing the same thing repeatedly and expecting a different result, so, as a new Head of Mathematics, I decided to change what we were doing to try to produce more positive outcomes. After consulting with the Principal, it was decided to trial second chance testing with the Year 9 cohort. So, after another week, we gave the students a similar test. Most students improved, but not all; GK got a B-. I will never forget the look on her face when she got her paper back. When her peers found out, they applauded her, she started to cry and the positive emotion in the room was palpable. While GK did not turn into a child prodigy in mathematics, she did pass senior Mathematics A quite comfortably.

This initial trial was followed by a Teachers as Researchers Project with Independent Schools Queensland. The research undertaken gave credibility to the simple idea of offering a second chance. Bloom (1973) and Guskey (1980 and 2003) are both strong advocates for second chance testing. They see the first test as providing valuable feedback to both the teacher and the student on the strengths and weaknesses of both teaching and learning. The emphasis is framed around mastery as opposed to performance. Hattie (2019) attributes an effect size of 0.53 to second and third chance testing and an effect size of 0.57 to mastery learning, which indicates significant and positive effects on learning. Pekrun (2014) gave us valuable insight into the emotions that were clearly visible in the classroom that day and the important role they played in the learning process. Parallels can be seen between Dweck’s (2008) growth and fixed mindsets and Pekrun’s (2014) work on mastery goals and performance goals. Norman Doidge (2011) led us to some basic research on neuroscience, which meant commonly held beliefs about predisposition to mathematics had to be challenged.

Many students showed improvement in the second test; some students moved from D to B, or, in isolated cases, even from D to A. There were also cases where there was no improvement in results. The most interesting findings related to the results of a student engagement survey. More than 90 per cent of the cohort believed that second chance testing was helping improve their confidence in mathematics and gain a better understanding of the subject. Almost 70 per cent claimed to be working harder because of second chance testing. While we have been encouraged by the improvement, there are still some students who do not improve or only produce marginal gains in results. We had initially assumed that offering a second chance would provide hope and motivation, which would be catalysts for increased effort, which in turn would produce better results. The comments we write on reports are deliberate in their focus on improvement over grades. This is an attempt to enculturate the attitude that with effort you can improve – ergo, a growth mindset.
Pekrun (2014) states that students who set mastery goals are more resilient in their learning than students who are orientated towards performance goals. In brief, performance goals relate to achievement relative to peers. While second chance testing has provided the motivation to apply effort for some students, it has not been the catalyst for engagement for others. Eccles and Wigfield (2002) reflect on many theories as to why students choose to engage in learning. These theories indicate there are many factors associated with a student’s environment and historical, social and emotional influences that impact on a student’s motivation. Inherent in the theories is a confidence that students can learn the skills required to develop self-regulation. The acquisition of these skills helps students design and implement a specific set of tasks to achieve a predetermined outcome. As the level of complexity of outcomes increases, so does the level of self-efficacy, indicating that an important catalyst to improving motivation to learn is the teaching of self-regulation skills.

F.R.A.M.E. is the evolution of our thinking and practice. It encapsulates our previous research and processes around second chance testing, but also acknowledges that we need to address issues of student motivation and we need to teach students how to learn mathematics. Zimmerman (2002, p. 65) states: ‘Self-regulation is not a mental ability or an academic performance skill; rather it is the self-directed process by which learners transform their mental abilities into academic skills.’ Zimmerman (2002) also states that it is possible to teach students skills to develop their ability to self-regulate their learning. He also warns that teachers run the risk of undermining the student’s ability to develop these skills by attempting to identify the student’s limitations; Zimmerman (2002, p. 65) states students ‘must possess the self-awareness and strategic knowledge to take corrective action’. Zimmerman (1998) informs us that an increased ability to self-regulate learning improves motivation and achievement.

Figure 1: The F.R.A.M.E. model – FEEDBACK + REFLECTION + ACTION + MASTERY = EMPOWERED. Feedback: check your understanding (Did you understand the lesson? Could you do your homework? How did you perform on the quiz/exam?). Reflection: record level of confidence, identify areas of confusion, plan for mastery. Action: practice makes permanent; challenge your deep understanding. Empowered: the more you know, the more you can learn.

Figure 1 is our model for teaching and learning mathematics at St Margaret’s. It is an extension of Bloom’s model for mastery learning (Figure 2) and Zimmerman’s cyclical model for self-regulation (Figure 3). Our goal is aspirational beyond teaching mathematics: it now encompasses a focus on developing students’ self-efficacy and self-regulatory skills, with the hope that the motivation to engage in learning will extend to a greater number of students than second chance testing alone. The principles incumbent in the F.R.A.M.E. process rate very highly in John Hattie’s (2019) effect size list:

• Self-regulation strategies [Effect size 0.52]
• Second/third chance programs [Effect size 0.53]
• Mastery learning [Effect size 0.57]
• Feedback [Effect size 0.70]
• Evaluation and reflection [Effect size 0.75]
• Effort [Effect size 0.77]
• Deliberate practice [Effect size 0.79]
• Self-efficacy [Effect size 0.92]
• Prior ability [Effect size 0.94]
• Collective teacher efficacy [Effect size 1.57]

‘Collective teacher efficacy is the collective belief of teachers in their ability to positively affect students.’ (Hattie 2019)

Figure 2: Bloom’s Mastery Learning Process [Source: Guskey 2005]

Figure 3: Zimmerman’s cyclical model of self-regulated learning. Retrieved from https://serc.carleton.edu/sage2yc/self_regulated/what.html

What we do starts with the belief that we can positively impact on the learning of all students. I believe that if we put effort into developing students’ self-regulatory skills, then students who lack motivation and learning skills can improve their learning outcomes in mathematics.

The F.R.A.M.E. cycle exists within four domains: the classroom; the revision quiz; the first of the two tests; and then, finally, the second test.

The Classroom

At the beginning of each unit students are given a table outlining the learning intentions for the unit. This is referred to as their Revision and Reflection Spreadsheet (RRS) (Figure 4). It exists in a printed form and an electronic form. The printed form is either the first page of a booklet specially written for the unit of work or a separate sheet which students paste into their exercise books. As each learning intention is covered, the students are asked to rate their level of confidence and shade this on their RRS. Later in the teaching cycle, students are given diagnostic tests, which test many learning intentions. These generally take the form of online quizzes through the platforms provided by Cambridge Press, HotMaths and Mathspace. These are marked online and provide students with immediate feedback on the gaps in their knowledge and understanding. HotMaths provides links back to the textbook for each question, which means students can easily revisit the work tested. Mathspace provides feedback to students on the individual lines of their working, whereas HotMaths only provides opportunities to submit their final answer. Students are encouraged to reflect on the feedback they receive and re-evaluate their level of confidence based on these quizzes. This is done by shading the level of confidence grid another colour.

The level of confidence grid also provides the teacher with feedback on how each student is progressing. It is often the case that their performance on the quizzes is lower than their original evaluation. This shows that while students may understand work when it is covered, unless they continuously practise, that learning will be lost. The quizzes also serve as a means of spaced practice.
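To make the RRS workflow concrete, the sketch below models, in Python, one way a learning-intention record and the ‘confidence versus quiz result’ check could be represented. It is a minimal illustration only: the class and function names, the 1–5 confidence scale and the sample data are hypothetical, not part of the actual spreadsheet used at the school.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class LearningIntention:
    """One row of a hypothetical Revision and Reflection Spreadsheet (RRS)."""
    number: str                         # numbering matching the unit booklet, e.g. "9.2"
    description: str
    confidence: Optional[int] = None    # self-rating 1 (low) to 5 (high), shaded after the lesson
    quiz_score: Optional[float] = None  # diagnostic quiz result as a percentage

def needs_revision(intentions: List[LearningIntention]) -> List[LearningIntention]:
    """Flag intentions where quiz performance sits below the student's own rating,
    mirroring the observation that quiz results are often lower than the original evaluation."""
    flagged = []
    for li in intentions:
        if li.confidence is None or li.quiz_score is None:
            continue
        # Map the 1-5 confidence scale onto a rough percentage for comparison.
        if li.quiz_score < li.confidence * 20:
            flagged.append(li)
    return flagged

# Invented example data: confidence rated after the lesson, online quiz result recorded later.
unit = [
    LearningIntention("9.1", "Expand binomial products", confidence=4, quiz_score=55.0),
    LearningIntention("9.2", "Factorise quadratic trinomials", confidence=3, quiz_score=80.0),
]
for li in needs_revision(unit):
    print(f"Plan for mastery: {li.number} {li.description}")
```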

The Revision Quiz

The revision quiz (Figure 5) is the third opportunity for the students to reflect on their knowledge and understanding. It is an overview of the material covered on the test. The learning intentions are explicitly stated, and the numbering matches their revision and reflection spreadsheets and their original unit booklets. A range of questions is given for each learning intention, so students clearly understand the success criteria for each learning intention. The degree of difficulty ranges from simple familiar and simple applications through complex familiar to complex unfamiliar, as do the questions on the test. We recommend the students map their level of understanding on the revision and reflection page created for the test, and use this as a guide to direct their revision time for the test. As can be seen in Figure 4, the RRS has a list of resources alongside each learning intention. These resources range from internet links to exercises in the textbook. The work is also differentiated between core and challenge.

We are moving away from giving students large quantities of often randomly assembled revision sheets, especially ones without answers. This does not encourage the students to use their time effectively and does not require them to make decisions about what they need to focus on. While this process has been trialled in previous years, this is the first year that it is an expectation across the Years 7 to 11 cohorts. Initial observations indicate there is a high level of take-up from the higher achieving students. One student’s evaluation of the revision quiz was ‘these are gold Ms’.

Figure 5: Extract from revision quiz, Year 9

Round 1 and Round 2 Tests

Both tests have four parts:

• C/D knowledge and understanding: The questions in this section are simple familiar. They are similar on both papers, with the numbers changed. This is a deliberate attempt to engage the lowest achieving students and show that with effort they can achieve learning. The goal posts are not shifted, and students can trust that any learning they do should pay off.

• C/D problem solving and reasoning: This section involves simple applications of the skills taught. The focus of these questions is on mathematical literacy and numeracy. There are slight changes in context, but not so far as to make these unfamiliar.

• A/B knowledge and understanding: The questions in this section test the more difficult concepts and generally involve a combination of concepts. These questions are not the same in both papers. The second paper may link different concepts in a question.

• A/B problem solving and reasoning: Over time this section has evolved the most. It contains unseen or unfamiliar questions. These questions are designed to challenge the students’ ability to formulate solutions rather than regurgitate learned procedures. These questions are different on both papers and may cover different concepts. This is the ‘pit’ of the test, the struggle point.

It is important to note that the first three sections of the paper reflect the learning intentions listed in the revision and reflection document, which reflect the lessons taught. While only the numbers are changed in the questions in the C/D knowledge and understanding section, the focus of the A/B knowledge and understanding section is the application of higher-level skills and conceptual understanding. All the work tested reflects the clearly documented learning intentions and the elaborations of these in the revision quiz. This approach is aligned with the work of Guskey (2003), which emphasises the need for tests to assess what has been taught and not to try to outsmart students. A test should provide feedback to students on the effectiveness of their learning.

While the A/B problem solving and reasoning section has been referred to as the ‘pit’, students are taught strategies in class and given opportunities to engage in tasks which require them to formulate their own solutions.

At each feedback point – the lesson, the diagnostic quiz, the revision quiz and the Round 1 test – students are asked to reflect on their learning. They are required to list their areas of weakness and plan for mastery over the gaps in their understanding. The goal is that these reflections appear throughout their exercise book or unit booklet and become habitual in nature.

The test reflection spreadsheet (Figure 6) is the fourth opportunity for students to reflect on the gaps in their understanding. This spreadsheet is only available electronically to the students but is linked to the revision spreadsheet for the test. The students enter their results after the first round and compare these results to the marks allocated for each question. They can then adjust their confidence levels on their previous spreadsheet based on this new feedback. They are also required to enter notes on what they must do to improve in the next round.

Figure 6: Extract from test reflection spreadsheet

For me, the potentially most powerful aspects of this spreadsheet are:

• the visual representation of improvement, or lack thereof, provided by the bar graphs – green representing Round 1 and blue representing Round 2 results over the two assessment criteria
• the reflection enabled by the questions – What do I need to do? What did/didn’t I do? What was the outcome? At this point, they are not reflecting on the gaps in their understanding but on the effectiveness of their plan for improvement and their action.

Hattie and Timperley’s (2007) research on feedback and its importance in learning has prompted an increased emphasis on feedback in schools. In my opinion, the interpretation by schools has been to place greater emphasis on the teacher to provide detailed feedback and analysis of students’ work. In this mode the student is passive and the responsibility for successful outcomes after the feedback lies with the teacher. As mentioned earlier, Zimmerman warns us that, in our zest to help students, we need to be careful that we are empowering them and not disempowering them.

The emphasis of F.R.A.M.E. is that students are responsible for interpreting their feedback at predetermined intervals, planning and taking action with the goal of mastering the material. The teacher’s role includes setting up the checkpoints, giving students time to reflect on the feedback from the checkpoints, checking the students are recording their level of confidence and planning for mastery, and using classroom language which promotes self-efficacy and self-regulation.

As explained, F.R.A.M.E. is in its first year of implementation and the degree of take-up across teachers is varied. It is an extension of second chance testing, which has been in operation for five years. The improvement in results indicated in the following graphs and data is more likely a legacy of second chance testing than of this new initiative. The following results are those of the Year 9 cohort in Semester 1, 2019. There is a strong focus on F.R.A.M.E. at this level.

Figure 7 shows the improvement in average percentages for the assessment criteria – knowledge and understanding, and problem solving and reasoning. The average reflects an increase in knowledge and skill of over 10 per cent for knowledge and understanding and an increase of over 8 per cent for problem solving and reasoning for each student. Interestingly, the percentage improvement from the Round 1 test to the Round 2 test is more than double for knowledge and understanding and has increased by 3.5 per cent for problem solving and reasoning in Term 2. This could indicate that students were making better use of F.R.A.M.E. processes in Term 2.

Figure 7: Percentage improvement, First Test to Best Result, Year 9, St Margaret’s Anglican Girls School, Semester 1, 2019.
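As a rough illustration of the calculation behind Figure 7, the sketch below computes an average percentage-point improvement per assessment criterion from Round 1 marks to best results. The function name and all marks are invented for the example; they are not the 2019 cohort data.

```python
# A minimal sketch (not the school's actual analysis) of the Figure 7 calculation:
# average improvement from the Round 1 test to the best result, per criterion.

from typing import List

def average_improvement(round1: List[float], best: List[float]) -> float:
    """Mean of (best - round1) across students, both expressed as percentages."""
    assert len(round1) == len(best), "one mark per student in each list"
    gains = [b - r for r, b in zip(round1, best)]
    return sum(gains) / len(gains)

# Hypothetical marks (%) for four students on each criterion.
knowledge_round1 = [48.0, 62.0, 71.0, 55.0]
knowledge_best   = [60.0, 70.0, 78.0, 68.0]
problem_round1   = [40.0, 58.0, 66.0, 50.0]
problem_best     = [47.0, 65.0, 75.0, 59.0]

print("Knowledge and understanding:",
      average_improvement(knowledge_round1, knowledge_best))   # 10.0
print("Problem solving and reasoning:",
      average_improvement(problem_round1, problem_best))       # 8.0
```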

Figure 8 shows the breakdown of the number of each of the grades A, B, C, D and E for each assessment criterion, for the first and second tests in both Term 1 and Term 2. It also identifies the change in the number of each grade. It can be seen that the number of As is increasing and the number of lower grades is decreasing. It is obvious from the data that the number of Es and Ds did not change much in Term 1 and, although the change in these numbers was greater in Term 2, the final numbers for grades D and E were similar in both terms. The predominant movement seems to be from the students achieving grades C or better.

Figure 8: Number of students achieving grades A to E in Round 1 and Best Result, Year 9, St Margaret’s Anglican Girls School, Semester 1, 2019.

Finally, Figure 9 shows graphically the results of mastery learning (Guskey 2007, p. 14). The difference between this model and the outcomes we want to achieve for our model is in the initial standard normal curve. We have built in multiple interventions which require students to reflect on their understanding, plan and act to achieve mastery. Graphically, we would be aiming for a negatively skewed distribution after the first round and an even greater negative skew after the second round.

Figure 9: The distribution of achievement in mastery learning classrooms [Source: Guskey 2007, p. 14]
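For readers who want to check the shape of a grade distribution against this aim, the sketch below computes a simple skewness coefficient for hypothetical Round 1 and Round 2 grade counts; a negative value indicates a tail of low grades with most students at the top. This is not the school’s analysis: the grade-to-number mapping and the counts are invented.

```python
# A sketch (not the school's analysis) of checking the direction of skew in a grade
# distribution. Grades are mapped to numbers (E=1 ... A=5); the counts are invented.

from typing import Dict, List

def skewness(values: List[float]) -> float:
    """Fisher-Pearson coefficient of skewness (population form)."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((x - mean) ** 2 for x in values) / n) ** 0.5
    return sum((x - mean) ** 3 for x in values) / (n * std ** 3)

grade_value: Dict[str, int] = {"E": 1, "D": 2, "C": 3, "B": 4, "A": 5}
round1_counts = {"E": 4, "D": 10, "C": 30, "B": 25, "A": 11}   # hypothetical cohort
round2_counts = {"E": 3, "D": 8, "C": 22, "B": 28, "A": 19}    # hypothetical cohort

for label, counts in (("Round 1", round1_counts), ("Round 2", round2_counts)):
    scores = [grade_value[g] for g, c in counts.items() for _ in range(c)]
    print(label, "skewness:", round(skewness(scores), 3))  # more negative = stronger left tail
```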

While the evidence presented is not conclusive and requires much more exploration into many facets, such as the comparative level of difficulty of the tests and an evaluation of the perceptions of students, I am encouraged by the results. The second chance testing component of F.R.A.M.E. has always provided hope for students and an opportunity to improve, which many students have embraced. Our next challenge is to improve their self-regulation skills and get them to sustain this practice until it becomes second nature.

References

Bloom, BS 1973, ‘Recent developments in mastery learning’, Educational Psychologist, vol. 10, no. 2, pp. 53-57.
Bloom, BS 1968, Learning for Mastery. Instruction and Curriculum, Topical Papers and Reprints, Number 1, Regional Education Laboratory for the Carolinas and Virginia, Durham, N.C.; Chicago University, Ill., Dept. of Education.
Doidge, N 2011, The brain that changes itself, Madman Entertainment, Collingwood, Victoria.
Dweck, C 2008, ‘Mindsets and Math/Science Achievement’, Carnegie Corporation of New York, New York.
Eccles, J & Wigfield, A 2002, ‘Motivational Beliefs, Values, and Goals’, Annual Review of Psychology, vol. 53, no. 1, pp. 109-132.
Guskey, T 1980, ‘Mastery learning: Applying the theory’, Theory into Practice, vol. 19, no. 2, pp. 104-111.
Guskey, T 2003, ‘How Classroom Assessments Improve Learning’, Educational Leadership, vol. 60, pp. 6-11.
Guskey, T 2007, ‘Closing Achievement Gaps: Revisiting Benjamin S. Bloom’s “Learning for Mastery”’, Journal of Advanced Academics, vol. 19, no. 1, pp. 8-31.
Visible Learning 2019, Collective Teacher Efficacy (CTE) according to John Hattie, viewed 19 July 2019, https://visible-learning.org/2018/03/collective-teacher-efficacy-hattie/
Hattie, J 2019, Hattie effect size list – 256 Influences Related To Achievement, viewed 19 July 2019, https://visible-learning.org/hattie-ranking-influences-effect-sizes-learning-achievement/
Hattie, J & Timperley, H 2007, ‘The Power of Feedback’, Review of Educational Research, vol. 77, no. 1, pp. 81-112.
Marzano, R & Kendall, J 2007, The New Taxonomy of Educational Objectives (2nd edn), Hawker Brownlow Education, Heatherton, Victoria.
Pekrun, R 2014, ‘Emotions and Learning’, International Bureau of Education, Educational Practices Series 24.
Zimmerman, B 1989, ‘A social cognitive view of self-regulated academic learning’, Journal of Educational Psychology, vol. 81, no. 3, pp. 329-339.
Zimmerman, B 2002, ‘Becoming a Self-Regulated Learner: An Overview’, Theory into Practice, vol. 41, no. 2, pp. 64-70.
SAGE 2YC 2017, What is Self-Regulated Learning?, viewed 28 July 2019, https://serc.carleton.edu/sage2yc/self_regulated/what.html
