
by Hannah Galbraith

The positives pupils gave for revising with a stylus on a tablet were diverse; the most frequently cited, raised by five pupils, was that the medium offered greater freedom with layout.

A narrower range of negatives was raised by pupils, including the dissimilarity of revising with a stylus on a tablet to their actual vocabulary tests, the method being less helpful than pen on paper for committing characters to memory, and pupils’ work being less neat on the smoother surface.

Conclusion

The quantitative study found that the revision medium, be it pen on paper or stylus on tablet, had no effect on vocabulary test scores. This result contrasts with both Gerth et al. (2016), who conclude that children find the smooth surface of a tablet screen a challenge for writing, and Osugi et al. (2019), who suggest that the smoother surface of a tablet screen compared to paper is easier to use for those already familiar with a digital pen.

However, vocabulary test scores aligned with the mean baseline ability of each group, so more effective control of baseline ability would be needed to draw a firm conclusion. This was a significant limitation of the study; if repeated, it would employ the control initially planned, grouping pupils so that each test medium included a representative spread of MidYIS scores.

The qualitative study found that, one term after the quantitative study, pupils employed a variety of revision strategies across both media.

Pupils liked pen and paper for its simulation of real test conditions, and felt that this medium was more helpful for memorising vocabulary, even if this was not shown to be true in their vocabulary test results. However, pupils found that revising using pen on paper was slow.

The greatest benefit pupils cited for revising with a stylus on a tablet was the freedom of layout on the page, in OneNote and other applications. Some pupils described the infinite space available as helpful to their revision. Greater speed was another commonly cited positive of this medium.

While further studies are recommended to investigate these findings, this study has shown that pupils find positives and negatives in both media. With no discernible benefit to vocabulary learning shown in the data gathered, pupils can be allowed to choose the revision medium which best suits their needs and preferences.

BIO

Hannah Galbraith is Head of Mandarin at Berkhamsted School. She has an interest in digital learning and its interface with the physical classroom.

How Can Digital Technology be Used to Support Effective Assessment and Feedback Without Increasing Teacher Workload?

1. Rationale behind the research

The use of technology in the classroom is not a new phenomenon. Whilst at secondary school in the early 2000s, I clearly remember the unveiling of the newest technological resource to arrive in classrooms set to revolutionise learning: the interactive whiteboard. Skip forward to 2015 and the same IWBs were being removed from the walls at my PGCE placement school, no longer being the trending tool of the time. Five years later, I find myself in a breakout room with colleagues during a national lockdown discussing the potential of Virtual Reality in bringing First World War trenches to life for our students stuck at home in their bedrooms.

Clearly, teachers have been navigating this changing ‘EdTech’ landscape for many years, but undoubtedly the biggest bump in the road came in March 2020 when schools across the country were closed overnight and online learning commenced. For the class of 2020 (and 2021), the physical gap between teachers and students had never been greater, yet the potential for technology to provide solutions to bridge the lockdown learning gap for what the media has dubbed the ‘lost generation of students’ seemed similarly great.

For this Action Research project, I chose to focus specifically on digitising effective assessment and feedback for my Key Stage 4 and 5 exam classes, considering this to be more crucial than ever in the absence of the physical cues teachers typically rely upon in the classroom when assessing progress. After problematising exactly what it was I wanted to assess virtually, I came to realise this was the same for distance learning as it had previously been back in the classroom, that is: the completion of and engagement with tasks set for classwork (albeit at times now carried out asynchronously); knowledge retention and recall; and skills in exam technique.

Whilst using technology to facilitate effective assessment and feedback was my primary focus for this project, I did not want this to come at the expense of increased teacher workload. In the 2016 DfE Workload Challenge survey, 53% of respondents reported that the excessive nature, depth and frequency of marking was burdensome (DfE, 2016). It was in this area that I felt confident there was scope to take advantage of technological opportunities to streamline time spent marking, removing the burden of personalised written comments for individual students and instead redeploying energy into improving the timeliness and efficacy of whole-class feedback to accelerate progress for all.

Many teachers may feel they are well justified in their position as ‘tech-sceptics’, having seen gimmicky innovations come and go throughout the course of their careers. However, recent research reveals a positive correlation between the effective use of technology and improved outcomes (Higgins et al., 2013), suggesting that, when used well, technology has an important role to play in classrooms of the future. Furthermore, there is emerging evidence that the specific impact of traditional marking in the form of lengthy written comments is minimal (Elliott et al., 2016). When these findings are considered in the context of an education workforce exhausted after more than a year of COVID disruption, with 70% reporting increased workload over the last 12 months and one in three teachers planning to quit within five years (Weale, 2021), it seems the right time to reflect and ensure we are making IT work for us.

When asked to visualise the stereotypical overworked teacher, we might envisage a lone individual working late into the evening, rooted in the relentless practice of ‘tick-and-flick’. If this is the reality, perhaps the most pressing concern from a pedagogical perspective is that the time it would take for a teacher to see, let alone read, every page of work their students produce is wildly disproportionate to the outcomes it would lead to. Therefore, for the first tier of my three-tiered assessment and feedback model, my aim was to utilise technology to create an alternative system for acknowledging effort and holding students to account for classwork, without establishing a culture where written comments were expected on an individual basis for every page completed.

Much discussion has been focused on the second tier of my assessment and feedback model, namely knowledge retention and recall. The benefits of regular testing have been well-documented (Roediger et al., 2014) and many technological tools have been designed to serve this specific purpose. Retrieval practice platforms such as Quizlet provide students with real-time results, mimicking live marking yet requiring no input from the teacher. Therefore, for this second tier of the assessment and feedback model, I planned to explore the opportunities available for using technology to frequently test security of knowledge amongst students in my exam classes, empowering them to self-assess their retention and recall in a low-stakes format.

Finally, for the third tier of my assessment and feedback model, I wanted to ensure that each of my students knew precisely how to improve their exam technique. Research has shown that encouraging reflection and resubmission, even in the absence of a grade, has high efficacy in promoting student progress (Walker, 2021). Therefore, rather than focusing on individual What Went Wells and Even Better Ifs, I instead hoped to carry out task-based analysis and identify overall patterns of strengths and weaknesses, before delivering whole-class feedback (Picardo, 2017). This process would require the careful use of technology to employ peer work as a formative assessment tool to model expectations clearly and, at times, to facilitate students working collaboratively in well-matched groups (McGill, 2017). It was in this area that I wanted to dedicate most energy, maximising the time I had gained from ‘outsourcing’ the tracking of classwork and knowledge testing to technology.

3. Procedure

Below is a visual representation of my three-tiered assessment and feedback model (see Figure 1), followed by summaries of the different strategies I used to rigorously assess students and return timely, high-quality feedback, making the most of technological opportunities to reduce workload.

Figure 1. Three-tiered assessment and feedback model with most emphasis placed on exam technique

A Completion of and engagement with classwork

To avoid falling into the trap of ‘ticking-and-flicking’, I sought to be disciplined in spending as little time as possible assessing this area of student work. I implemented the following coding system, which effectively ‘graded’ students’ OneNote pages from 1 to 4 (see Figure 2). It established a channel of communication with each of my students without my having to write a word, and allowed individuals to interpret the feedback at a glance (see Figure 3). I also chose to record this data in my mark book, which gave me a clear picture of completion of and engagement with classwork and homework, prompting me to intervene early when issues arose (see Figure 4).

Figure 2. Marking codes used to assess classwork and some homework tasks

Figure 4. Coding data inputted into my mark book
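To show how lightweight this kind of tracking can be, the sketch below illustrates one way coded mark-book data could be scanned for pupils who may need early intervention. It is a minimal illustration, not the system itself: the pupil names, the meaning attached to each code and the two-low-codes-in-a-row rule are assumptions made for the example.

```python
# Illustrative sketch: flag pupils whose recent classwork codes (1-4) suggest
# early intervention is needed. Pupil names, code meanings and the threshold
# below are hypothetical, not taken from the actual mark book.

mark_book = {
    "Pupil A": [4, 4, 3, 4],
    "Pupil B": [3, 2, 1, 1],   # two low codes in a row
    "Pupil C": [2, 3, 3, 2],
}

LOW_CODE = 2           # codes at or below this value count as a concern
CONSECUTIVE_LIMIT = 2  # how many low codes in a row trigger a flag

def needs_intervention(codes, low=LOW_CODE, limit=CONSECUTIVE_LIMIT):
    """Return True if the pupil's codes include `limit` low codes in a row."""
    run = 0
    for code in codes:
        run = run + 1 if code <= low else 0
        if run >= limit:
            return True
    return False

for pupil, codes in mark_book.items():
    if needs_intervention(codes):
        print(f"Check in with {pupil}: recent codes {codes}")
```

The point of the sketch is simply that once the codes exist as data rather than as ticks on a page, spotting a downward pattern takes seconds rather than a trawl through exercise books.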

B Knowledge retention and recall

To assess knowledge retention and provide feedback on accuracy of recall, I refined my use of online quizzing platforms to ensure my classes were retrieving knowledge on a weekly basis and that I employed spaced practice as part of this testing schedule. For each unit of the course covered, I created an accompanying bank of flashcards on Quizlet. Each week for homework, students were asked to make use of the different functions available on Quizlet to learn the flashcard bank. Helpfully, Quizlet provides detailed analytics on individuals, allowing me to gain insight into the ways in which different students were working (see Figure 5).

Figure 5. Analytics provided by Quizlet on individual students' use of platform
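To make the idea of a spaced testing schedule concrete, the short sketch below shows one possible rotation in which each week’s quiz pairs the current unit with a unit covered a few weeks earlier. The unit names and the three-week gap are invented for illustration and do not describe the actual scheme of work.

```python
# Illustrative sketch of a spaced-practice testing rotation: each weekly quiz
# covers the current unit plus a unit first tested a few weeks earlier.
# The unit names and the three-week gap are hypothetical.

units = ["Unit 1", "Unit 2", "Unit 3", "Unit 4", "Unit 5"]
GAP = 3  # revisit material first covered this many weeks ago

def weekly_quiz_plan(week):
    """Return (new unit, revisited unit) for a 1-indexed week number."""
    current = units[(week - 1) % len(units)]                        # cycle once all units are covered
    revisited = units[(week - 1 - GAP) % len(units)] if week > GAP else None
    return current, revisited

for week in range(1, 9):
    current, revisited = weekly_quiz_plan(week)
    print(f"Week {week}: quiz on {current}" + (f", revisit {revisited}" if revisited else ""))
```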

Students were then tested on their recall of this knowledge, and of prior learning as a form of spaced practice, by completing a self-marking Microsoft Forms quiz each week. Although these initial steps required time to be invested in creating the bank of flashcards and test questions, the reward was a personalised, live feedback loop through which each student could self-assess their performance, together with holistic snapshots of class performance that informed my future teaching on a weekly basis (see Figure 6). Common misconceptions or gaps in learning were clearly diagnosed in this weekly data-gathering process. Once again, I chose to record this data in my mark book, adding another layer to the picture I was building up of the progress each individual student was making over time (see Figure 7).

Figure 6. Holistic picture of class performance in self-marking Microsoft Forms quiz

Figure 7. Data from weekly quizzes inputted into my mark book
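Turning the weekly quiz data into whole-class feedback is largely a matter of looking at each question’s success rate. The fragment below is a hedged sketch of that step, assuming the responses have already been exported into a simple question-by-question table; the question labels, results and 60% threshold are invented for the example rather than drawn from the actual Forms data.

```python
# Illustrative sketch: identify questions the class struggled with in a weekly
# self-marking quiz, so they can be retaught as whole-class feedback.
# The question labels, outcomes and threshold are hypothetical.

quiz_results = {
    "Question 1": [True, True, True, False, True],
    "Question 2": [False, False, True, False, False],
    "Question 3": [True, False, True, True, True],
}

THRESHOLD = 0.6  # flag questions answered correctly by fewer than 60% of the class

for question, outcomes in quiz_results.items():
    success_rate = sum(outcomes) / len(outcomes)
    if success_rate < THRESHOLD:
        print(f"Reteach next lesson: {question} ({success_rate:.0%} correct)")
```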

C Skills in exam technique

For the final tier of my assessment and feedback model, I wanted to focus explicitly on students’ skills in exam technique. My aim was to generate formative feedback after identifying overall patterns of strengths and weaknesses in class responses to exam-style questions. Again, I sought to use technology to streamline this process and avoid providing individual What Went Wells and Even Better Ifs for each student.

The first method I trialled was using peer work as a formative assessment tool to model expectations clearly. In one example, students were asked to reflect on their own answers after reading an example belonging to a peer and to generate their own written feedback (see Figure 8). On other occasions, students were required to go further than simply self-reviewing and instead redrafted precise portions of their answers, implementing strengths they had seen modelled in the work of their peers (see Figure 9).

Figure 8. Students generating their own written feedback after engaging with examples of best practice from peers

Figure 9. Example of redrafting exercise after engaging with examples of best practice from peers

A second method I experimented with was to provide a holistic ‘checklist of ingredients’ required in a successful exam-style response and ask students to annotate their own work to show where they were (or were not) including the necessary elements (see Figure 10). Another way of achieving this was to ask students to colour-code their answers to highlight where they were demonstrating certain skills in their writing, such as analysis and evaluation (see Figure 11). Both strategies led students to identify their own strengths and weaknesses, arguably a much more desirable outcome when compared with the passive engagement with a teacher-generated WWW and EBI.

Figure 10. Student self-assessment against ‘checklist of ingredients’ required for successful response

Figure 11. Student self-assessment through colour-coding of exam-style response

I also sought at times to use feedback as a basis for collaborative exercises carried out during lessons. OneNote’s Collaboration Space proved a useful platform for this purpose, and I was easily able to arrange students into well-matched groupings according to their level of progress in mastering exam technique. In one example, students worked together to apply their knowledge to make synoptic links to other topics, annotating and improving a sample paragraph in groups (see Figure 12). I also trialled differentiated feedback activities, designing four separate tasks for a class of 20. This allowed the students who had attained the highest marks in the assessment to be stretched and challenged with harder questions, whilst I worked with the students who had attained the lowest marks to redraft work with the support of prompting scaffolds (see Figure 13).

Figure 12. Examples of student collaboration in improving sample paragraphs by annotating with synoptic links

Figure 13. Examples of differentiated feedback tasks for students collaborating in groups according to attainment
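For readers curious how such a four-way grouping might be generated, the sketch below sorts a class of 20 by assessment mark and splits them into four equal groups, echoing the differentiated tasks described above. The marks, student labels and task names are all hypothetical.

```python
# Illustrative sketch: sort a class of 20 into four feedback groups by
# assessment mark. Marks, student labels and group names are hypothetical.

marks = {f"Student {i}": mark for i, mark in enumerate(
    [12, 18, 9, 15, 20, 7, 14, 16, 11, 19, 8, 13, 17, 10, 6, 15, 12, 18, 9, 14], 1)}

ranked = sorted(marks, key=marks.get, reverse=True)   # highest attainers first
group_size = len(ranked) // 4

groups = {
    "Stretch and challenge (harder questions)": ranked[:group_size],
    "Independent redraft":                      ranked[group_size:2 * group_size],
    "Guided redraft":                           ranked[2 * group_size:3 * group_size],
    "Teacher-supported redraft (scaffolds)":    ranked[3 * group_size:],
}

for task, students in groups.items():
    print(task, "->", ", ".join(students))
```

In practice the grouping decision will also draw on the teacher’s knowledge of the class, but a quick sort like this gives a sensible starting point for allocating the differentiated tasks.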
