Conference programme summary & abstracts


Conference Programme Summary

Time   Session                                                Room

09.30  Registration                                           Hotel Foyer
09.30  Refreshments                                           Hotel Foyer
09.55  Welcome: Dr. Nicola Reimann, Durham University         Piccadilly Suite
10.00  Keynote & Discussion: Professor David Carless          Piccadilly Suite
10.40  Refreshments
11.00  Choice of parallel sessions

Session 1: Evaluation or Research Presentations (Chair: Professor Kay Sambell), Piccadilly Suite
• An institutional case study: promoting student engagement in assessment (Carmen Thomas, University of Nottingham, UK)
• Can student-staff interviews about how staff use feedback when preparing their work for publication help develop student feedback literacy? (Jenny Marie and Nick Grindle, University College London, UK)
• Feedback Footprints: Using Learning Analytics to support student engagement with, and learning from, feedback (Naomi Winstone and Dr Emma Medland, University of Surrey, UK)

Session 2: Evaluation or Research Presentations (Chair: Jess Evans), Meeting Room 3
• Dialogic feedback and the development of professional competence among further education pre-service teachers (Justin Rami and Francesca Lorenzi, Dublin City University, Republic of Ireland)
• Systematic responses to grand challenges in assessment: REVIEW at UNSW (Daniel Carroll, University of New South Wales, Australia)
• An evaluation of peer-to-peer feedback using adaptive comparative judgement (Jill Barber, The University of Manchester, UK)


Session 3: Evaluation or Research Presentations (Chair: Linda Thompson), Meeting Room 5
• The Pedagogy of Interteaching: engaging students in the learning and feedback process (Nick Curtis and Keston Fulcher, James Madison University, Virginia, USA)
• Study Player One: Gamification, Student Engagement, and Formative Feedback (Errol Rivera, Edinburgh Napier University, UK)
• Providing and receiving feedback: implications for students' learning (Georgeta Ion, Cristina Mercader Juan, Aleix Barrera Corominas and Anna Diaz Vicario, Universitat Autònoma de Barcelona, Spain)

Session 4: Evaluation or Research Presentations (Chair: Dr. Rita Headington), Meeting Room 7
• Contextual Variables in Written Assessment Feedback in a University-level Spanish program (Ana Maria Ducasse, RMIT University, Melbourne, Australia; Kathryn Hill, La Trobe University, Melbourne, Australia)
• VACS - Video Assessment of Clinical Skills (Mark Glynn, Evelyn Kelleher, Adele Keough, Anna Kimmins and Patrick Doyle, Dublin City University; Colette Lyng, Beaumont Hospital, Dublin, Republic of Ireland)
• Foregrounding feedback in assessment as learning: the case of the processfolio (Jayne Pearson, University of Westminster, UK)

Session 5: Round Table Presentations (Chair: Dr. Peter Holgate), Meeting Room 11
• Exploring student perceptions of a feedback trajectory for first-year students at the University of Leuven (Anneleen Cosemans, Leuven Engineering and Science Education Center; Carolien Van Soom, Greet Langie and Tinne De Laet, University of Leuven, Belgium)
• Students' feedback has two faces (Serafina Pastore, Giuseppe Crescenzo, Michele Chiusano and Vincenzo Campobasso, University of Bari, Italy)
• Feedback for Postgraduate Taught Students (Fay Julal, University of Birmingham, UK)
• 'Change is one thing. Acceptance is another': Managing stakeholder participation to support the introduction of institution-wide electronic management of assessment (Emma Mayhew and Madeleine Davies, University of Reading, UK)
• Feedback, a shared responsibility (Annelies Gilis, Karen Van Eylen and Elke Vanderstappen, KU Leuven; Joke Vanhoudt, Study Advice Service, KU Leuven; Jan Herpelinck, General Process Coordination, KU Leuven, Belgium)
• Feedback: What lies within? (Jane Rand, York St John University, UK)

12.00  Break


Session 6: Micro Presentations (Chair: Professor Pete Boyd), Piccadilly Suite
• Crisis, what crisis? Why we should stop worrying about NSS scores (Alex Buckley, University of Strathclyde, UK)
• Transforming feedback practice in the practical environment using digital technologies (Moira Maguire, Dundalk Institute of Technology, Republic of Ireland)
• Stressful feedback post paramedic simulation training (Enrico Dippenaar, Anglia Ruskin University, UK)
• Conversation consideration: Can the ideal speech situation be used in the creation of an ideal feedback situation? (Kimberly Wilder, Edinburgh Napier University, UK)
• Development and Construct Validation of a Formative Assessment of Writing Instrument: A Confirmatory Study on Iranian EFL students (Elaheh Tavakoli, Høgskolen/Hakim Sabzevari University, Norway)
• Developing and implementing a model of intensive spoken teacher and peer feedback with learner control (Gordon Joughin, Deakin University, Australia; Helena Gaunt, Guildhall School of Music and Drama, UK)
• Mapping the Assessment Journey: Student evaluations of a visual representation of their assessments (Anke Buttner, University of Birmingham, UK)

13.10  Lunch
14.00  Choice of parallel sessions

Session 7: Evaluation or Research Presentations (Chair: Assistant Professor Natasha Jankowski), Piccadilly Suite
• Developing student feedback literacy using educational technology and the reflective feedback conversation (Kathryn Hill, La Trobe University, Melbourne, Australia; Ana Maria Ducasse, RMIT University, Melbourne, Australia)
• Diverse orientations towards assessment and feedback internationally (Sally Brown, Independent Consultant and Emerita Professor at Leeds Beckett University, UK; Kay Sambell, Edinburgh Napier University, UK)
• Frequent rapid feedback, feed-forward, and peer learning for enhancing student engagement in an online portfolio assessment (Theresa Nicholson, Manchester Metropolitan University, UK)


Session 8: Evaluation or Research Presentations (Chair: Associate Professor Geraldine O'Neil), Meeting Room 3
• Student learning through feedback: the value of peer review? (Charlie Smith, Liverpool John Moores University, UK)
• In the shoes of the academics: Inviting undergraduate students to apply their assessment literacy to assessment design (Anke Buttner, Andrew Quinn, Ben Kotzee and Joulie Axelithioti, University of Birmingham, UK)
• Feedback to the Future (Erin Morehead and Andrew Sprake, University of Central Lancashire, UK)

Session 9: Evaluation or Research Presentations (Chair: Jess Evans), Meeting Room 5
• 'Feedback interpreters': The role of Learning Development professionals in overcoming barriers to university students' feedback recipience (Karen Gravett and Naomi Winstone, University of Surrey, UK)
• Investigating Chinese Students' Perceptions of and Responses to Teacher Feedback: Multiple Case Studies in a UK University (Fangfei Li, University of Bath, UK)
• Action on Feedback (Teresa McConlogue, Jenny Marie and Clare Goudy, University College London, UK)

Session 10: Evaluation or Research Presentations (Chair: Dr. Peter Holgate), Meeting Room 7
• The Cinderella of UK assessment? Feedback to students on re-assessments (Marie Stowell, University of Worcester, Liverpool John Moores University; Harvey Woolf, formerly University of Wolverhampton, UK)
• A critique of an innovative student-centred approach to feedback: evaluating alternatives in a high-risk policy environment (Judy Cohen and Catherine Robinson, University of Kent, UK)
• Build a bridge and get over it – exploring the potential impact of feedback practices on Irish students' engagement (Angela Short, Dundalk Institute of Technology; Gerry Gallagher, Institute of Technology, Republic of Ireland)


Session 11: Round Table Presentations (Chair: Dr. Rita Headington), Meeting Room 11
• Dialogue and Dynamics: A time-efficient, high-impact model to integrate new ideas with current practice (Lindsey Thompson, University of Reading, UK)
• From peer feedback to peer assessment (Josje Weusten, Maastricht University, Netherlands)
• Large student cohorts: the feedback challenge (Jane Collings and Rebecca Turner, University of Plymouth, UK)
• Dynamic feedback – a response to a changing teaching environment (Martin Barker, University of Aberdeen, UK)
• A comparison of response to feedback given to undergraduates in two collaborative formative assessments: wiki and oral presentations (Iain MacDonald, University of Cumbria, UK)

15.00  Break
15.10  Choice of parallel sessions

Session 12: Evaluation or Research Presentations (Chair: Professor Pete Boyd), Piccadilly Suite
• Feedback, Social Justice, Dialogue and Continuous Assessment: how they can reinforce one another (Jan McArthur, Lancaster University, UK)
• Assessing, through narratives, the feedback needs of black women academics in higher education (Jean Farmer, Stellenbosch University, South Africa)
• Building a national resource for feedback improvement (David Boud, Deakin University/University of Technology Sydney, Australia)

Session 13: Evaluation or Research Presentations (Chair: Dr. Nicola Reimann), Meeting Room 3
• 'Who Am I?' Exploration of the healthcare learner 'self' within feedback situations using an interpretive phenomenological approach (Sara Eastburn, University of Huddersfield, UK)
• Trainee teachers' experiences of classroom feedback practices and their motivation to learn (Zhengdong Gan, University of Macau, China)
• Proposing a model for the incremental development of peer assessment and feedback skills: a case study (Laura Costelloe and Arlene Egan, Dublin City University, Republic of Ireland)


Session 14: Evaluation or Research Presentations (Chair: Professor Sally Jordan), Meeting Room 5
• Feedback – what students want (Susanne Voelkel, University of Liverpool, UK)
• Developing as a peer reviewer: Enhancing students' graduate attributes (Rachel Simpson and Catherine Reading, Durham University, UK)
• Does screencast feedback improve student engagement in their learning? (Subhi Ashour, The University of Buckingham, UK)

Session 15: Evaluation or Research Presentations (Chair: Dr. Amanda Chapman), Meeting Room 7
• Feedback literacy in online learning environments: Engaging students with feedback (Teresa Guasch, Anna Espasa and Rosa M. Mayordomo, Open University of Catalonia, Catalonia)
• Teaching staff's views about e-assessment with e-authentication (Alexandra Okada, Denise Whitelock, Ingrid Noguera, Jose Janssen, Tarja Ladonlahti, Anna Rozeva, Lyubka Alexieva, Serpil Kocdar and Ana-Elena Guerrero-Roldán, The Open University, UK)
• Students' responses to learning-oriented exemplars: towards sustainable feedback in the first year experience? (Kay Sambell, Edinburgh Napier University; Linda Graham and Peter Beven, Northumbria University, UK)

Session 16: Round Table Presentations (Chair: Professor Mark Huxham), Meeting Room 11
• Transforming Written Corrective Feedback: Moving Beyond the Learner's Own Devices (Britney Paris, University of Calgary, Canada)
• Increasing assessment literacy on MA translation modules to improve students' understanding of and confidence in assessment processes (Juliet Vine, University of Westminster, UK)
• Developing an identity as a knowing person: examining the role of feedback in the Recognition of Prior Learning (RPL) (Helen Pokorny, University of Westminster, UK)
• Students as partners in co-creating a new module: Focusing on assessment criteria (Maria Kambouri-Danos, University of Reading, UK)
• Assessment - Evaluating the Workload on Staff and Students (Mark Glynn, Clare Gormley and Laura Costelloe, Dublin City University, Republic of Ireland)
• Hearing Voices: First Year Undergraduate Experience of Audio Feedback (Stephen Dixon, Newman University, UK)

16.10  Plenary & ask the audience: Professor Kay Sambell
16.30  Close


Author Abstracts

Session 1: Evaluation or Research Presentations (Chair: Professor Kay Sambell)
Piccadilly Suite

An institutional case study: promoting student engagement in assessment
Speaker: Carmen Thomas, University of Nottingham, UK

Engaging students in assessment in meaningful ways to develop their evaluative judgement (Tai et al. 2017) is an area of practice that still requires much development. This presentation describes a case of institution-wide promotion. Practices that engage students actively in assessment (e.g. guided marking, assessment of exemplars or other formative work) have been promoted, and a programme of dissemination, trials and follow-up evaluations has been conducted to encourage adoption from the ground up. Fourteen trials have been run in eleven Schools/Departments. The institutional case will show:
• The context: background evaluations reveal the most typical practices deployed to support students in advance of assessments. Two-way and interactive activities such as peer assessment are rare across the institution.
• Promotion strategy: support to individuals, dissemination events, and a consistent programme evaluating the effectiveness of the tasks/sessions.
• Evaluation of the impact of peer-assessment-type activities on students: responses from more than 1,000 students have been collected using the same core questionnaire. All trials have been evaluated considering:
  o the effectiveness of peer assessment in promoting self-efficacy
  o student perception of the value of the activities
  Additional meta-analyses have considered which task factors affect student perceptions; this exploratory analysis will be discussed for its implications for practice.
• From trial to embedded practice: all initial trials have led to adoption of the practice.
• Persisting challenges and next steps: the initial bottom-up promotion phase will be followed by a strategy using quality assurance mechanisms (School Reviews) to lend further institutional support.

The case illustrates the need for a multipronged institutional approach combining bottom-up and top-down strategies. The institutional evaluation also shows a positive impact on students' perceptions and self-efficacy, lending further support to the value of continuing to increase the base and reach of such practices. The ongoing challenges and slow progress will be discussed.

References
Tai, J., Ajjawi, R., Boud, D., Dawson, P., & Panadero, E. (2017). Developing evaluative judgement: enabling students to make decisions about the quality of work. Higher Education. doi: 10.1007/s10734-017-0220-3

Can student-staff interviews about how staff use feedback when preparing their work for publication help develop student feedback literacy?
Speakers: Jenny Marie, University College London; Nick Grindle, University College London

In recent years, scholars have argued that feedback needs to be a dialogic process if students are to engage with it in a way that aids their learning (Nicol and Macfarlane-Dick, 2006). Xu and Carless (2017) question whether students are equipped to do this effectively, suggesting that students first need to develop 'feedback


literacy'. But how can students become feedback literate? The learning that needs to occur is part of the process of acculturating to academic life and norms, but as Sutton (2012) points out, there are numerous cultures within a single university, and even within a discipline.

Our response was to adapt a tried-and-tested activity called 'Meet the Researcher', which is used to help students acculturate to the research-based culture of higher education. 'Meet the Researcher' consists of student-led interviews of academic staff about their research. The aim, first described by Cosgrove (1981), is 'to draw upon the experiences of the staff as students, graduates, researchers and teachers in such a way that the staff would mediate between [the discipline] as it had been presented to the students in their other courses and the larger debates … which are presented in the literature'. Following Cosgrove's pattern, most 'Meet the Researcher' activities require students to work in groups and read an academic paper written by the researcher, which then forms the basis for their interview (Downie 2010). We adapted this activity so that, in addition to reading a published paper, students would also be shown an earlier draft of the paper and the feedback it had received from the journal's editor(s) and reviewers. The aim was to expose students to the ways that researchers seek feedback, the different forms in which it is given, and the impact it has on a researcher's work. The interviews would then focus on the researcher's feelings about the feedback, their response to it, and what impact it had on the work as finally published. We conducted small-scale pilots of this adaptation with three different groups of students in two departments, sending briefing notes to the staff about the aims of the activity and a briefing they could adapt for their students.
The activity ran slightly differently in each of the pilots, resulting in three case studies of how this concept can be realised in practice. Staff partners were asked to complete a short questionnaire about how they had implemented the activity. Students were asked to answer a single question to enable us to understand what they had learnt from the experience. At the conference we will present an analysis of the responses. We will also compare the findings with data drawn from an earlier study of more generic 'Meet the Researcher' activities in the same university (Grindle 2017). Analysis of the current findings, and comparison with the earlier study, will help us illustrate how far we have succeeded in developing student feedback literacy. We will also identify areas that require further thought and development.

References
Cosgrove, Denis. 1981. "Teaching geographical thought through student interviews." Journal of Geography in Higher Education 5 (1):19-22.
Downie, Roger. 2010. "A Postgraduate Researcher — Undergraduate Interview Scheme: Enhancing Research-Teaching Linkages to Mutual Benefit." Bioscience Education 16 (1):1-6. doi: 10.3108/beej.16.c2.
Grindle, Nicholas. 2017. "Meet the Researcher: What can staff and students learn from engaging in dialogue about research?" HEA Annual Conference, University of Manchester, 6 July 2017.
Nicol, David and Debra Macfarlane-Dick. 2006. "Formative assessment and self-regulated learning: a model and seven principles of good feedback practice." Studies in Higher Education 31 (2):199-218.
Sutton, Paul. 2012. "Conceptualizing feedback literacy: knowing, being, and acting." Innovations in Education and Teaching International 49 (1):31-40. doi: 10.1080/14703297.2012.647781.
Xu, Yueting and David Carless. 2017. "'Only true friends could be cruelly honest': cognitive scaffolding and social-affective support in teacher feedback literacy." Assessment & Evaluation in Higher Education 42 (7):1082-1094.
doi: 10.1080/02602938.2016.1226759.

Feedback Footprints: Using Learning Analytics to support student engagement with, and learning from, feedback
Speakers: Naomi Winstone, University of Surrey; Emma Medland, University of Surrey

Student satisfaction with assessment and feedback has been described as the sector's "Achilles' Heel" (Knight, 2002, p. 107). Many students express dissatisfaction with the utility of feedback (Medland, 2016), citing difficulties in understanding comments, knowing how to take action, and connecting feedback from different modules and assignments (Jonsson, 2013; Winstone, Nash, Rowntree & Parker, 2017). In the 'new paradigm' of feedback practice (Carless, 2015), emphasis is placed not on feedback as comments (i.e. the 'old paradigm'), but on feedback as dialogue, where the impact of feedback on students' learning is of key concern. This approach should lead educators to reflect on where students are enabled to enact feedback in a dialogic cycle, and to find ways of supporting students' self-regulatory development by providing opportunities for them to experience the impact of implementing feedback. As researchers and practitioners, a key difficulty in understanding students' engagement with feedback is that once marked work is returned, we know very little about what students actually do with feedback information. There is a real, but largely untapped, potential for learning analytics to illuminate this 'hidden recipience'. Where feedback is given to students via Virtual


Learning Environments (VLEs), learning analytics can provide insight into when and how students engage with feedback (e.g. Zimbardi et al., 2017). Furthermore, student-facing analytics can inform students about their own engagement, thus supporting the development of self-regulated learning.

Here, we report on an evaluation of a HEFCE-funded project through which we worked in partnership with students to develop a VLE-embedded feedback portfolio to support students in synthesising and acting upon feedback (https://www.youtube.com/playlist?list=PL1eqx09MUhok3O-PDHvKPfVB7cMl9gA8Y). The portfolio first includes a 'feedback review' tool through which students can extract key messages from feedback, which they can then categorise into a comprehensive set of academic skills. Also housed within the portfolio is a large resource bank, aligned with each of these identified academic skills. The tool synthesises feedback from multiple assignments so that students can see a visual representation of the areas in which they are performing well, and the areas that may need some development. Finally, students can use an action-planning tool to set goals for their use of the resources, and engage in dialogue with their personal tutor. The portfolio incorporates a student-facing analytics dashboard to enable students to track their engagement with feedback, and the impact of this engagement.

In this project, we employed a co-design method through which students, researchers, and learning technologists worked in partnership to develop the feedback portfolio. We will present the key messages emerging from our evaluation, which employed self-report measures (e.g. orientation to feedback, assessment literacy, academic self-efficacy), quantitative analysis of learning gain and learning analytics, and qualitative analysis of user experience.
Crucially, our data demonstrate disciplinary differences in engagement with the portfolio and also provide further insight into the challenges students face when using feedback to develop as autonomous, self-regulated learners. The evaluation also demonstrates that students see digital tools as possessing great potential in bringing meaningful dialogue into the feedback process.

References
Carless, D. (2015). Excellence in university assessment: Learning from award-winning practice. London: Routledge.
Jonsson, A. (2013). Facilitating productive use of feedback in higher education. Active Learning in Higher Education, 14(1), 63-76.
Medland, E. (2016). Assessment in higher education: drivers, barriers and directions for change in the UK. Assessment & Evaluation in Higher Education, 41(1), 81-96.
Winstone, N. E., Nash, R. A., Rowntree, J., & Parker, M. (2017). 'It'd be useful, but I wouldn't use it': barriers to university students' feedback seeking and recipience. Studies in Higher Education, 42(11), 2026-2041.
Winstone, N. E., Nash, R. A., Parker, M., & Rowntree, J. (2017). Supporting learners' agentic engagement with feedback: A systematic review and a taxonomy of recipience processes. Educational Psychologist, 52(1), 17-37.
Zimbardi, K., Colthorpe, K., Dekker, A., Engstrom, C., Bugarcic, A., Worthy, P., & Long, P. (2017). Are they using my feedback? The extent of students' feedback use has a large impact on subsequent academic performance. Assessment & Evaluation in Higher Education, 42(4), 625-644.

Session 2: Evaluation or Research Presentations (Chair: Jess Evans)
Meeting Room 3

Dialogic feedback and the development of professional competence among further education pre-service teachers
Speakers: Justin Rami, Dublin City University; Francesca Lorenzi, Dublin City University, Republic of Ireland

Improving students' learning experience is closely connected with the promotion and implementation of an assessment strategy whose effectiveness relies on the quality of its formative aspect. Assessment can promote or hinder learning; it is therefore a powerful force to be reckoned with in education. The literature on assessment makes it quite clear that assessment shapes and drives learning in powerful, though not always helpful, ways (Ramsden, 1997). A number of authors (Steen-Utheim et al., 2017; Merry et al., 2013; Carless, 2013, 2016; Hyatt, 2005; Juwah et al., 2004; Bryan & Clegg, 2006; Swinthenby, Brown, Glover, Mills, Stevens & Hughes, 2005; Nicol, 2010; Torrance & Prior, 2001) have advocated the encouragement of dialogue around learning and assessment as a means to enhance the formative aspect of assessment. Pedagogical dialogue and formative assessment share common principles such as the emphasis on the process


(MacDonald, 1991); the need for negotiation of meaning and a shared understanding of assessment criteria (Boud, 1992; Chanock, 2000; Harrington & Elander, 2003; Harrington et al., 2005; Sambell & McDowell, 1998; Higgins, Hartley & Skelton, 2001; Norton, 2004; Price & Rust, 1999; O'Donovan, Price & Rust, 2000; Rust, Price & O'Donovan, 2003); and the development of reciprocal commitment between assessors and assessees (Hyland, 1998; Taras, 2001) based on trust (Carless, 2016). We argue, with Koponen et al. (2016), that a strong dialogic feedback culture and the developmental role of feedback are part of future working-life skills, and that their importance warrants greater integration into higher education curricula as part of the development of expertise. This paper presents the outcomes of introducing an assessment portfolio for the module/learning unit 'Curriculum Assessment', informed by dialogical principles and aimed at the development of professional competence among pre-service further education teachers. The enquiry used a qualitative research approach with several groups of pre-service FE(T) teachers in Ireland. The paper outlines the results of this small-scale research, which sought to evaluate the impact of the module/learning unit. The findings led to the identification of three key outcomes: firstly, the development of a shared understanding of assessment criteria; secondly, the establishment of a mutual relationship between assessors and assessees based on commitment and trust; and thirdly, a heightened self-awareness in both personal (efficacy) and professional (competence) terms. The study demonstrates a dialogical assessment model that enables students to make sense of knowledge through reflection, professional decision-making and engagement. Furthermore, it demonstrates how a dialogical approach to assessment and feedback can initiate reflective processes which may equip student teachers with knowledge transferable to professional practice.
References
Merry, S., Price, M., Carless, D., & Taras, M. (Eds.) (2013). Reconceptualising feedback in higher education: Developing dialogue with students. London: Routledge.
Carless, D. (2013). Trust and its role in facilitating dialogic feedback. In D. Boud & L. Molloy (Eds.), Effective Feedback in Higher and Professional Education. London: Routledge.
Carless, D. (2016). Feedback as dialogue. Encyclopedia of Educational Philosophy and Theory, pp. 1-6. http://link.springer.com/referenceworkentry/10.1007/978-981-287-532-7_389-1
Koponen, J., Kooko, T., Perkiö, A., Savander-Ranner, A., Toivanen, T., & Utrio, M. (2016). Dialogic feedback culture as a base for enhancing working life skills in higher education. Journal of Finnish Universities of Applied Sciences. Available from https://uasjournal.fi/in-english/dialogic-feedback-culture-as-a-base-forenhancing-working-life-skills-in-higher-education
Nicol, D. (2010). From monologue to dialogue: improving written feedback processes in mass higher education. Assessment & Evaluation in Higher Education, 35(5), 501-517. doi: 10.1080/02602931003786559
Steen-Utheim, A., & Wittek, A. (2017). Dialogic feedback and potentialities for student learning. Learning, Culture and Social Interaction, 15, 18-30.

Systematic responses to grand challenges in assessment: REVIEW at UNSW
Speaker: Daniel Carroll, University of New South Wales, Australia

The grand challenges, and some of our persistent failings in delivering on the assessment for learning (AfL) agenda, are well understood: many admit that systemic improvement of the student experience of assessment still challenges the higher education sector. While the AfL movement and research (Stiggins, 2002) have informed and improved assessment policy and staff development in many institutions, the assessment systems and tools currently available still under-deliver in support of the AfL agenda and a better student experience of assessment.
As part of a large faculty Assurance of Learning project in 2011, the UNSW Business School adopted REVIEW, software developed by academics for academics at the University of Technology Sydney. REVIEW provides an online platform for criteria-based assessment and feedback and visually connects short-term assessment tasks with the development of longer-term learning goals, skills or competencies. Our experience with REVIEW has seen a systematic and widespread improvement in assessment experiences for both staff and students, based on the increased clarity of assessment through mandated use of criteria, observed reductions in staff marking times, the focus on feedback related to the judgement criteria, and the increased use of student self and peer assessment. In six years, usage has grown from 4 pilot courses to over 100 courses with more than 10,000 student enrolments per term (platform analytics). The presentation will discuss how an online system provides a platform to operationalise good practice en masse and helps to systematically meet some of the big challenges facing assessment.


Presentation Overview: A three-minute introduction provides a brief visual illustration of core platform elements of REVIEW (e.g., criteria-based marking screen, task-to-degree goal mapping, student self and peer assessment interfaces, staff assessment reports and analytics interface). The talk then outlines how planning, staff support and the platform contribute meaningfully to a systematic institutional response to some of the grand challenges of assessment. Elements of the presentation are supported by ethics-approved research, and the presentation will be followed by a question and discussion section.

Challenge 1: Assessment is often poorly described and recorded.
Challenge 2: Individual assessments are often atomistic and poorly connected to learning and learning improvement.
Challenge 3: Development of student judgement is not supported as an inherent element of assessment processes (students are passive in assessment).
Challenge 4: The effect of feedback given to students is unknown (receipt of feedback is not trackable, 'lost' or not acted on).
Challenge 5: We haven't systematically designed a learner-centric (personalised) approach to assessment for students.

References
Carroll, D. (2014). Benefits for students from achieving accuracy in criteria-based self-assessment. IAEA Conference. https://www.researchgate.net/publication/264041914_Benefits_for_students_from_achieving_accuracy_in_criteria-based_self-_assessment
Carroll, D. (2015). Win, Win, Win: Future assessment systems. EDEN Conference. https://www.researchgate.net/publication/281008509_WIN_WIN_WIN_Future_assessment_systems
Stiggins, R. J. (2002). Assessment crisis: The absence of assessment for learning. Phi Delta Kappan, 83(10), 758-765.

An evaluation of peer-to-peer feedback using adaptive comparative judgement
Speaker: Jill Barber, The University of Manchester, UK

Adaptive Comparative Judgement (ACJ) is an alternative to conventional marking in which the assessor (or judge) merely compares two (anonymous) answers and chooses a winner (Pollitt, 2012). The use of a suitable sorting algorithm means that repeated comparisons lead to scripts sorted in order of merit; grade boundaries are then determined by a separate review of scripts. The method does much to remove subjective bias and allows "hawks" and "doves" to mark successfully in a team (Daly et al., 2017). Our preliminary findings indicate, however, that it is at best time-neutral when assignments are marked by staff.

We have now marked nearly 20 assessments using adaptive comparative judgement, and have found its use in peer assessment especially powerful. Students find it difficult to assign marks to one another's work, but are usually skilled at simple comparisons. Further, they can learn a great deal from seeing examples of other students' work. As part of an ongoing evaluation of adaptive comparative judgement, we asked students to leave detailed structured feedback for one another, using a 2,000-word third-year Pharmacy assignment (150 students) as an example. In a typical assessment a student will evaluate 10-15 of their peers' scripts, so leaving feedback on three or four aspects of the assignment is manageable. In this example, students were given detailed instructions about the preparation of feedback for their peers, and their feedback was found to be of high quality and to correspond well to staff feedback on the same assignment. Feedback was readily disseminated to students using Smallvoice (Ellis and Barber, 2016), but other mail-merge options may be used.
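The comparison-only workflow described above can be sketched in code. The following is an illustrative toy, not Pollitt's actual ACJ procedure (which uses Rasch modelling and adaptive pair selection): it simply shows how a standard sorting algorithm, fed only a judge's pairwise winners, yields scripts ordered by merit. The script names and hidden quality scores are invented for the demonstration.

```python
from functools import cmp_to_key

def acj_rank(scripts, judge):
    """Rank scripts using only pairwise judgements.

    `judge(a, b)` returns the winner of a single comparison; the
    sorting algorithm decides which pairs to present, so the judge
    never assigns a mark, only chooses a winner each time.
    """
    def compare(a, b):
        # a sorts before b (i.e. ranks higher) when the judge picks a
        return -1 if judge(a, b) == a else 1
    return sorted(scripts, key=cmp_to_key(compare))

# Toy demonstration with a perfectly consistent judge: each script has
# a hidden quality, and the judge always picks the higher-quality one.
quality = {"s1": 62, "s2": 48, "s3": 71, "s4": 55}
judge = lambda a, b: a if quality[a] >= quality[b] else b

ranking = acj_rank(list(quality), judge)
print(ranking)  # → ['s3', 's1', 's4', 's2'] (best first)
```

With real judges the comparisons are noisy, which is why ACJ relies on many repeated judgements and a statistical model rather than a single deterministic sort; the sketch only conveys the "compare two, pick a winner" principle.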
Interim conclusions are that peer-to-peer assessment and feedback using adaptive comparative judgement allow students to benefit from both preparing and receiving much more detailed feedback than staff could reasonably provide. Staff oversee the process by providing detailed instructions for the preparation of feedback and by moderating the assessment process. It is encouraging that the group of students who took part in this exercise have requested the opportunity to take part in adaptive comparative judgement practice exercises to help them prepare for other assessments.

References

Pollitt, A. (2012). The method of Adaptive Comparative Judgement. Assessment in Education: Principles, Policy & Practice, 19, 281-300.


Daly, M., Salmonson, Y., Glew, P. J., & Everett, B. (2017). Hawks and doves: The influence of nurse assessor stringency and leniency on pass grades in clinical skills assessments. Collegian, 24, 449-454.
Ellis, S., & Barber, J. (2016). Expanding and personalising feedback in online assessment: A case study in a school of pharmacy. Practitioner Research in Higher Education, Special Assessment Issue, 10, 121-129.

3

Evaluation or Research Presentations Session Chair: Linda Thompson

Meeting Room 5

The Pedagogy of Interteaching: engaging students in the learning and feedback process
Speakers: Nick Curtis, James Madison University; Keston Fulcher, James Madison University, Virginia, USA

This presentation will introduce participants to the pedagogy of interteaching. Interteaching is an active learning paradigm grounded in behaviour-analytic methods (Goto & Schneider, 2010; Saville, Zinn, Neef, & Ferreri, 2006). There is a great deal of research suggesting that active learning methods, interteaching particularly, are more effective learning tools than traditional lecture methods for many topics (Barkley, 2009; Davis, 2009; Fink, 2013; Miller, Groccia, & Miller, 2001).

The instructor of a class is not the 'holder' of all knowledge: students can quite easily obtain all of the information contained in most classes from a research article, a book, or online. The job of an instructor is to facilitate the process of constructing that information and to help students apply it in ways that make sense. Thus, it is crucial that students' perspectives, fields of study, and prior knowledge are considered in the processing of the information. Interteaching allows an instructor to partner with students in the process of teaching and learning to do exactly that.

This presentation will highlight and describe the key components of interteaching: preparatory guides, in-class peer discussions, student feedback and reflection, targeted lectures, and learning probes. Explicit examples of each component, based on a course at our university, will be provided to participants. Special focus will be given to interteaching's innovative and sustainable process of incorporating student feedback, the redistribution of power between students and teacher in the classroom, and how interteaching works to increase learner agency, involvement, and volition. Participants will interact with each other and the presenters by participating in a mock interteaching session.
Participants will leave with thought-provoking ideas for their own teaching practice.

References

Barkley, E. F. (2009). Student engagement techniques: A handbook for college faculty. John Wiley & Sons.
Davis, B. G. (2009). Tools for teaching. John Wiley & Sons.
Fink, L. D. (2013). Creating significant learning experiences: An integrated approach to designing college courses. John Wiley & Sons.
Goto, K., & Schneider, J. (2010). Learning through teaching: Challenges and opportunities in facilitating student learning in food science and nutrition by using the interteaching approach. Journal of Food Science Education, 9(1), 31-35.
Miller, J. E., Groccia, J. E., & Miller, M. S. (2001). Student-Assisted Teaching: A Guide to Faculty-Student Teamwork. Anker Publishing Company.
Saville, B. K., Zinn, T. E., Neef, N. A., Norman, R. V., & Ferreri, S. J. (2006). A comparison of interteaching and lecture in the college classroom. Journal of Applied Behavior Analysis, 39(1), 49-61.

Study Player One: Gamification, Student Engagement, and Formative Feedback
Speaker: Errol Rivera, Edinburgh Napier University, UK

Gamification is the use of game attributes in a non-game context. It is not a new concept, nor is it a panacea for bored students (Deterding, 2012). Gamification does not turn learning into a game; rather, it generates meaning by shifting the context of an experience from one that is not game-like to one that is. Being a process that operates on how we experience a situation, gamification relies on the premise that experiences can be constructed (Werbach, 2014), and so lends itself to the constructivist foundations of common pedagogical practices in a variety of potentially useful ways. For example: what if gamification could provide a toolkit for the rigorous design and effective delivery of formative assessment?
Formative assessment offers a number of challenges to the educational practitioner, who can often be torn between policy and practice (Jessop, El Hakim, & Gibbs, 2017), with students caught in the middle. Formative assessment has its own


unique strengths, such as the role of feedback and the sense of safety that comes from an absence of marks or grades (Dweck, 1999), and these strengths can inform a lecturer's design methodology. However, there might be just as much to be gained from understanding the potential weaknesses of formative assessment. When formative assessment is taken as a complex proposition, one where a teacher and a learner mutually undertake an involved activity to produce transformations that make learning objectives achievable (Black & Wiliam, 1998), it may arguably be vulnerable to a student's level of engagement, and dependent on an inextricable relationship between student engagement and formative feedback, both of which could be supported through gamification.

Unfortunately, there is no standardised methodology for identifying and embedding specific game attributes in alignment with theory-based teaching or assessment. However, as gamification research moves forward and its concepts become more sophisticated, testable theoretical frameworks have begun to develop. This PhD study attempts to reconcile a theory of gamified learning with theoretically based formative assessment practice, bridged by a multi-dimensional framework for understanding student engagement (Kahu, 2013). This consolidation is used as the basis for the design of a formal technique for gamifying formative assessment, which will be used in several biological science modules and monitored for its impact on students' engagement with formative assessment. Drawn from the initial findings of PhD research into gamification, this presentation first demonstrates the common ways that gamification occurs in the 'wilds' of teaching and learning.
Participants will then follow the challenges of consolidating existing theoretical constructs of formative assessment and student engagement with that of gamification, and see the formulation of a potential model for concertedly applying gamification in a targeted, technical manner to impact student engagement with formative assessment. This presentation aims to provide grounds for a discussion that confronts our expectations of gamification, highlights the vital role that engagement plays in feedback, and challenges lecturers to better understand the relationship between what students think and what students feel, and ultimately how we learn.

References

Black, P., & Wiliam, D. (1998). Assessment and Classroom Learning. Assessment in Education: Principles, Policy & Practice, 5. https://doi.org/10.1080/0969595980050102
Deterding, S. (2012). Gamification: Designing for Motivation. Interactions, 19(4), 14. https://doi.org/10.1145/2212877.2212883
Dweck, C. S. (1999). Self-theories: Their role in motivation, personality, and development. Essays in Social Psychology, 214. https://doi.org/10.1007/BF01544611
Jessop, T., El Hakim, Y., & Gibbs, G. (2017). The whole is greater than the sum of its parts: a large-scale study of students' learning in response to different programme assessment patterns. Assessment & Evaluation in Higher Education. https://doi.org/10.1080/02602938.2013.792108
Kahu, E. R. (2013). Framing student engagement in higher education. Studies in Higher Education, 38(5), 758-773. https://doi.org/10.1080/03075079.2011.598505
Werbach, K. (2014). (Re)Defining Gamification: A Process Approach. Persuasive Technology, 8462, 266-272.
https://doi.org/10.1007/978-3-319-07127-5_23

Providing and receiving feedback: implications for students' learning
Speakers: Georgeta Ion, Cristina Mercader Juan, Aleix Barrera Corominas & Anna Diaz Vicario, Universitat Autònoma de Barcelona, Spain

Our study addresses the following issue: in the context of peer feedback, which role, i.e., assessor or assessee, is perceived as more beneficial to learners? To answer this research question, this study examines the relationship between students' perception of learning while 'providing' and 'receiving' feedback, with an emphasis on the following areas:

1. cognitive and metacognitive learning;
2. the development of discipline-related and professional academic skills; and
3. academic emotions and other affective aspects.

Considering the Vygotskian concept of scaffolded learning (Vygotsky, 1978), we designed an online questionnaire, 'Peer evaluation strategies and feedback', using the SurveyMonkey platform. 188 students enrolled in the teacher education bachelor degree at Universitat Autònoma de Barcelona (Spain) answered the survey. The questionnaire was administered to the students who were in class at the time, which allowed for the attainment of a representative sample. Once the data were gathered, univariate and multivariate statistical analyses were performed using SPSS v.20 and SPAD_N v.5.6.

Results indicate that students perceived that other students benefitted more from the feedback they provided than they themselves benefitted from receiving feedback. According to the univariate analysis, which was performed to describe the application of peer feedback, the experience was (1) a useful learning strategy (M=4.68; SD=1.497) and (2) significantly improved their assignments (M=4.61; SD=1.446). Students believed that, despite its significance, the feedback was more useful in improving the tasks of others (M=5.11; SD=1.245) than in improving the tasks performed by their own group (M=4.89; SD=1.460). In addition, the difference between providing and receiving feedback was analysed according to the overall assessment of each action, i.e. providing and/or receiving, and we compared both indices by performing a paired-samples t-test. The t-value was positive, indicating that the first condition (providing feedback; M=4.75; SD=.090) had a higher mean than the second condition (receiving; M=4.63; SD=1.145); thus, we may conclude that providing feedback yielded significantly more reported benefits than receiving feedback (t(183)=2.504; p=.013).

The present study clearly supports the role of students in their own learning. As most participants recognised, more learning occurred when providing feedback, which is a clear indicator that students want to assume an active role in their own learning and consider their involvement critical in the design of teaching and learning experiences.
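The paired-samples comparison reported above can be illustrated with a small worked example. The data below are invented for illustration (the study's raw responses are not available); the function implements the standard paired t statistic, t = mean(d) / (sd(d) / sqrt(n)), computed on each student's difference between the two conditions.

```python
import math

def paired_t(x, y):
    """Paired-samples t-test: t = mean(d) / (sd(d) / sqrt(n)), d = x - y."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((v - mean_d) ** 2 for v in d) / (n - 1)  # sample variance
    t = mean_d / math.sqrt(var_d / n)
    return t, n - 1  # t statistic and degrees of freedom

# Hypothetical per-student ratings (1-6 scale) of learning gained from
# providing vs. receiving feedback -- not the study's actual data.
providing = [5, 4, 6, 5, 4, 5, 6, 4, 5, 5]
receiving = [4, 4, 5, 5, 3, 5, 5, 4, 4, 5]
t, df = paired_t(providing, receiving)
print(round(t, 3), df)  # → 3.0 9 (positive t: providing rated higher)
```

A positive t with a small p-value, as in the study's t(183)=2.504, p=.013, indicates that the first condition (providing) had the higher mean across paired observations.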
However, to enhance this benefit, classroom experiences should facilitate students' deep involvement in all learning and assessment processes, so as to develop students' future professional competencies as assessors.

References

Ion, G., Cano, E., & Fernández, M. (2017). Enhancing Self-Regulated Learning through Using Written Feedback in Higher Education. International Journal of Educational Research, 85, 1-10.
Panadero, E., Jonsson, A., & Botella, J. (2017). Effects of self-assessment on self-regulated learning and self-efficacy: Four meta-analyses. Educational Research Review, 22, 74-98.
Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.

4

Evaluation or Research Presentations Session Chair: Dr. Rita Headington

Meeting Room 7

Contextual Variables in Written Assessment Feedback in a University-level Spanish program
Speakers: Ana Maria Ducasse, RMIT University, Melbourne, Australia; Kathryn Hill, La Trobe University, Melbourne, Australia

A number of researchers have highlighted the 'situated' nature of assessment and feedback, making context, along with teacher and students, a key factor to be taken into account in any consideration of assessment for learning. This paper reports on a collaborative dialogue (Scarino, 2016) between a teacher and a researcher regarding the impact of context on written feedback in a Spanish as a foreign language program at an Australian university. The study investigated two research questions:

1. How does the teaching context influence the nature of feedback provided to students?
2. How does the teaching context influence learner responses to feedback?

Following Turner and Purpura (2015), context is understood as comprising both micro- and macro-level factors. Macro-level factors include the political, social and cultural environment for assessment and feedback. Micro-level factors include institutional factors (e.g., assessment policy), the teaching and learning 'infrastructure' (e.g. curriculum and teaching environments), individual teacher and learner attributes (Andon, Dewey & Leung, 2017) and, following Norris (2016), the assessment task itself. Participants comprised a language assessment researcher, an 'expert' Spanish as a foreign language lecturer and 15 students from beginner (CEFR A1), intermediate (CEFR B1), and advanced (CEFR C) levels in a university-level Spanish program. Data comprised written feedback on final writing tasks for each of the three levels, collected over a 12-week semester, as well as recordings and transcripts of discussions between teacher and researcher regarding feedback decisions. Data were analysed using thematic content analysis. Analysis of teacher feedback focused on the effect of context on the type and focus of feedback. Analysis of learner responses focused on


contextual influences on affective responses, dispositions towards feedback, and uptake. While some general themes could be identified, the results serve to highlight the complexity and variability of the interaction between teacher intentions, learner orientations, and various dimensions of context.

References

Andon, N. J., Dewey, M., & Leung, C. (2017). Tasks in the Pedagogic Space: using online discussion forum tasks and formative feedback to develop academic discourse skills at Master's level. In TBLT as a Researched Pedagogy (Task-Based Language Teaching). John Benjamins Publishing Company.
Norris, J. M. (2014, October). Some reflections on Learning Oriented Assessment. Presentation at the Roundtable on Learning-Oriented Assessment in Language Classrooms and Large-Scaled Contexts, Teachers College, Columbia University, New York.

VACS - Video Assessment of Clinical Skills
Speakers: Mark Glynn, Dublin City University; Evelyn Kelleher, Dublin City University; Colette Lyng, Beaumont Hospital, Dublin; Adele Keough, Dublin City University; Anna Kimmins, Dublin City University; Patrick Doyle, Dublin City University, Republic of Ireland

This paper describes an innovative assessment introduced in response to the educational and logistical challenges of identifying appropriate strategies for assessing practical skills with large cohorts of students. The aim of the innovation was to replace a face-to-face practical exam with online submission of a video recording of the student performing the practical skill. Every year we have in excess of 200 first-year undergraduate students. Each student must demonstrate competency in a variety of practical clinical skills, and each skill can take up to 10 minutes to demonstrate. Because students must demonstrate each skill individually, a lecturer must sit through over 2,000 minutes (33+ hours) of individual student assessment for each skill that must be assessed.
With five different skills required of first-year students, the logistics of managing the assessment are very challenging. This normally involved up to ten different staff members supporting the assessment over a week-long period. Each lecturer would use a paper-based rubric to assess the students and then hand their evaluations back to the module coordinator so feedback could be centrally issued to the students. From a student's perspective, they are given a time and date and have a one-off opportunity to perform. We therefore needed a more effective and efficient way of assessing these key clinical skills.

Face-to-face practical exams present many challenges: they are inflexible and resource- and time-intensive. Furthermore, they cause pressure, nerves, anxiety and stress for students; fatigue and loss of concentration for examiners; and inconsistency and errors in marking, leading to disputed results. Online video submission is a simple approach that can overcome many of these challenges. Video has been used successfully in education for many years; however, this approach of using online video submission to replace face-to-face practical exams appears to be a new innovation.

This new assessment method has transformed not only the way we assess but also the way students learn their clinical skills. We have moved from assessment of learning to assessment of and for learning. Instead of a one-off performance in front of a lecturer, students pair up with a colleague and use their own phone to record themselves performing the skill, thereby also introducing peer learning and feedback into the process. We first implemented this assessment method for first-year students in 2013/14 and have used it every year since. Every year, based on feedback, we make slight changes to optimise the process.
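The assessor workload behind these figures is simple arithmetic, sketched below. The per-skill figure (2,000 minutes) is quoted in the abstract; the five-skill total is our extrapolation from the numbers given.

```python
# Back-of-envelope assessor workload implied by the abstract's figures.
students = 200          # first-year cohort ("in excess of 200")
minutes_per_skill = 10  # each demonstration can take up to 10 minutes
skills = 5              # skills required of first-year students

per_skill_minutes = students * minutes_per_skill
total_hours = per_skill_minutes * skills / 60
print(per_skill_minutes, round(total_hours, 1))  # → 2000 166.7
```

That is roughly 2,000 minutes (33+ hours) of face-to-face assessment per skill, and on the order of 167 assessor-hours across all five skills, which is why up to ten staff members were needed for a week.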
Preliminary evaluation of the first cohort of students to participate in this innovation suggests that the majority of them preferred the online submission format and that it did enhance their learning. This paper outlines current, more in-depth research to assess the full benefits and potential of this assessment method.

References

Jones, D. J., Anton, M., Gonzalez, M., Honeycutt, A., Khavjou, O., Forehand, R., & Parent, J. (2015). Incorporating Mobile Phone Technologies to Expand Evidence-Based Care. Cognitive and Behavioral Practice, 22(3), 281-290.
Zick, A., et al. (2007). First-year medical students' assessment of their own communication skills: A video-based, open-ended approach. Patient Education and Counseling, 68(2), 161-166.
Patri, M. (2002). The influence of peer feedback on self- and peer-assessment of oral skills. Language Testing, 19(2), 109-131.
Nicol, M., et al. (1998). Assessment of clinical skills: a new approach to an old problem. Nurse Education Today, 18(8), 601-609.


Foregrounding feedback in assessment as learning: the case of the processfolio
Speaker: Jayne Pearson, University of Westminster, UK

Research has shown that teacher feedback, even when high-value (Hounsell, 2007), can be disempowering for students if it is one-directional 'telling' (Sadler, 2010) or is interpreted as directives despite teachers' efforts to be facilitative (Richardson, 2000). It is also demotivating for staff to have time-consuming feedback disregarded by students when revising their work, or to receive low scores for assessment and feedback in internal and external evaluation.

This presentation will report on an alternative assessment of academic writing on a high-stakes preparatory course for international students at a UK university, which was designed and implemented over three iterative cycles as part of an action research project. The processfolio assessment was a reaction to performance- and product-orientated assessments and an exam culture which were preventing students from conceptualising themselves as developing writers transitioning to academic discourse communities (Gourlay, 2009) and constraining students' individual and social agency. The project was an empirical impact study on the washback and social impact of assessment, in the context of tensions between the UK's drive for internationalisation and attempts to maintain linguistic and academic standards through immigration policies.

The processfolio, an adaptation of the traditional portfolio concept, allows students to present their journey of creating one research essay. The project contained many elements, but this presentation will focus on the incorporation of three levels of feedback: their teachers' advice, through written comments and recorded tutorials; their peers' responses, through an asynchronous Moodle wiki forum; and their own evaluation, through criteria application and a short reflection.
As part of the project, students chose for themselves which of the feedback activities were useful, and why, by including them as artefacts within their folios. By foregrounding feedback as integral to the assessment, rather than a post-hoc event as in most formative assessment practices, the processfolio incorporates different voices in the assessment process (Boud and Falchikov, 2006), emphasising a shared responsibility for feedback.

Benefits derived from the folio were an increased sense of self-efficacy and control over writing-assessment events, and an increased ability in self-regulated or procedural autonomy leading to a growing critical agency (Sambell, 2006), evidenced by questioning of criteria and of accepted norms of academic writing and assessment practices. However, the self-evaluative aspect of the folio was the least successful and the most resisted by students. Therefore, if the processfolio is to be applied in different contexts, it must not be assumed that students will automatically take agency; the curriculum supporting the processfolio should incorporate activities which facilitate an understanding of the constraining and enabling potential of assessment practices in twenty-first-century higher education.

References

Boud, D., & Falchikov, N. (2006). Aligning assessment with long-term learning. Assessment & Evaluation in Higher Education, 31(4), 399-413.
Gourlay, L. (2009). Threshold practices: becoming a student through academic literacies. London Review of Education, 7(2), 181-192.
Hounsell, D. (2007). Towards more sustainable feedback to students. In Falchikov, N., & Boud, D. (Eds.), Rethinking Assessment for Higher Education, 101-113.
Richardson, S. (2000). Students' conditioned response to teachers' response: portfolio proponents take note! Assessing Writing, 7(2), 117-141.
Sadler, D. R. (2010). Beyond feedback: developing students' ability in complex appraisal. Assessment and Evaluation in Higher Education, 35(5), 535-550.
Sambell, K., McDowell, L., & Sambell, A. (2006). Supporting diverse students: developing learner autonomy via assessment. In Bryan, C., & Clegg, S. (Eds.), Innovative Assessment in Higher Education, 158-167.

5

Round Table Presentations Session Chair: Dr. Peter Holgate

Meeting Room 11

Exploring student perceptions of a feedback trajectory for first-year students at the University of Leuven
Speakers: Anneleen Cosemans, Leuven Engineering and Science Education Center; Carolien Van Soom, Greet Langie & Tinne De Laet, University of Leuven, Belgium

The University of Leuven is a Flemish university with an open-admission system. Against this background, the University of Leuven strongly invests in study orientation, guidance, and counselling. During the round table,


we will zoom in on an innovative feedback trajectory in the Science and Engineering faculties aimed at first-year students. This trajectory consists of a series of feedback moments throughout the academic year: individualised learning dashboards (Broos, Peeters, et al., 2017; Broos, Verbert, & De Laet, 2018; Broos, Verbert, Van Soom, Langie, & De Laet, 2017), on-campus group sessions, one-on-one counselling, an optional positioning test before enrolment (Vanderoost et al., 2015) followed by individualised feedback, etc.

Furthermore, we will propose our research questions for investigating student perceptions of the feedback trajectory. We want to find out how the learning dashboards, with situated information on individual study skills and results, are perceived by our first-year students, following one of Evans' (2013) suggestions for moving research-informed practice in assessment feedback in HE forward. Do students feel these tools support their individual learning process? How experienced are they in seeking and receiving feedback? How does feedback coming from the learning dashboards fit in with other types of feedback, e.g. within their coursework, or from peers? Our ultimate goal is to optimise the feedback trajectory to make it more effective, i.e. to have an impact on learning behaviour.

References

Evans, C. (2013). Making Sense of Assessment Feedback in Higher Education. Review of Educational Research, 83(1), 70-120. doi:10.3102/0034654312474350
Broos, T., Peeters, L., Verbert, K., Van Soom, C., Langie, G., & De Laet, T. (2017). Small Data as a Conversation Starter for Learning Analytics: Exam Results Dashboard for First-year Students in Higher Education. Journal of Research in Innovative Teaching & Learning (minor revision).
Broos, T., Verbert, K., & De Laet, T. (2018). Multi-institutional Positioning Test Feedback Dashboard for Aspiring Students. Submitted to the LAK 2018 conference.
Broos, T., Verbert, K., Van Soom, C., Langie, G., & De Laet, T. (2017). Dashboard for Actionable Feedback on Learning Skills: How Learner Profile Affects Use. In Springer Lecture Notes in Computer Science (LNCS) series (Proceedings of the ECTEL 2017 conference, ARTEL workshop; to be published).
Vanderoost, J., Van Soom, C., Langie, G., Van den Bossche, J., Callens, R., Vandewalle, J., & De Laet, T. (2015, June). Engineering and science positioning tests in Flanders: powerful predictors for study success? Paper presented at the 43rd Annual SEFI Conference. Retrieved from https://www.researchgate.net/publication/281590094_Engineering_and_science_positioning_tests_in_Flanders_powerful_predictors_for_study_success

Students' feedback has two faces
Speakers: Serafina Pastore, University of Bari, Italy; Giuseppe Crescenzo, Michele Chiusano, University of Bari, Italy; Vincenzo Campobasso, student at the University of Bari, Italy

Over the years, there have been remarkable efforts to outline a different kind of assessment that is more sustainable and useful, in order to guarantee the active participation of students as 'full members of the academic community' (Jungblut, Vukasovic and Stensaker, 2015). Reform packages, studies, and projects are therefore moving towards the revision of traditional assessment practice, the identification of alternative forms of assessment, and the analysis of the conceptions that teachers and students have of assessment. In this framework, student evaluation surveys have become the largest and most frequently used data sources for quality assurance in higher education across Europe (Klemencic and Chirikov, 2015). However, in Italy, students' compliant behaviour in completing end-of-module questionnaires and a strong sense of disaffection present significant challenges to the quality assurance system.
In order to understand what hindrances, conceptions, and representations students have of the quality assurance system, a round of informal interviews was carried out during the 2017 fall semester with a sample of student representatives drawn from the 15 departments of the University of Bari. Despite the methodological limitations of this study, the data analysis confirms the deep misconceptions students have about assessment and quality assurance.

References

Jungblut, J., Vukasovic, M., & Stensaker, B. (2015). Student perspectives on quality in higher education. European Journal of Higher Education, 5(2), 157-180.
Klemenčič, M., & Chirikov, I. (2015). On the use of student surveys. In Pricopie, R., Scott, P., Salmi, J., & Curaj, A. (Eds.), Future of Higher Education in Europe, Volumes I and II. Dordrecht: Springer.


Feedback for Postgraduate Taught Students
Speaker: Fay Julal, University of Birmingham, UK

Relative to undergraduate and postgraduate research students, little is known about the feedback experiences and expectations of postgraduate taught (PGT) students. PGT students are expected to adjust quickly to new assessment and feedback practices, which is particularly challenging for international students (Tian & Lowe, 2013). PGT students are expected to show greater independence and initiative, which may imply fewer instances of feedback and a greater expectation that they will be able to use it effectively. PGT students are presumed to be 'HE experts' and the transition is expected to be smooth; however, this view has been challenged (see McPherson et al., 2017). The PGT population is diverse, and students bring different expectations about assessment and feedback, likely shaped in part by their undergraduate experiences. Institutions have limited time to acculturate new students to their practices, and students have relatively little experience with the new practices before they evaluate their assessment and feedback experience on the PTES. With the future inclusion of taught postgraduate provision in the TEF, it is important that the needs of PGT students are better understood. This presentation describes a research project that specifically explores the feedback experiences and expectations of postgraduate taught students.

'Change is one thing. Acceptance is another': Managing stakeholder participation to support the introduction of institution-wide electronic management of assessment
Speakers: Emma Mayhew, University of Reading; Madeleine Davies, University of Reading, UK

The adoption of online submission, feedback and grading is increasing each year within the sector.
This change is underpinned by common drivers, including a desire to improve the staff marking experience and to deliver an improved student assessment experience which supports student engagement, attainment and satisfaction (University of Reading, 2017). Drawing on a range of project reports, project blogs, conference presentations and wider material published in the last six years, this paper argues that one of the most difficult challenges faced by institutions moving towards institution-wide online assessment is stakeholder management. Markers can be understandably concerned about making major changes to assessment processes given the high value of marking and feedback, an activity that Newland et al. (2014) describe as “mission critical” to institutions. Ferrell’s study of Keele University outlines how it has taken time for academics to change the marking practices that they have followed for years (Ferrell, 2014). One participant in the 2014 annual Heads of eLearning Forum survey commented, “Never underestimate the effort involved with winning hearts and minds of colleagues” (Newland, Martin & Ringlen, 2014). Given this context, the presentation will move on to consider how some institutions have created the right conditions for successful stakeholder management by focusing on inclusive consultation in project design, particularly in the earlier stages of major change, drawing in stakeholder groups and moving away from a ‘top-down’ approach. The e-AFFECT project at Queen’s University Belfast has adopted a methodology of ‘Appreciative Inquiry’, involving a non-judgemental review of current practice and collaborative forward planning with stakeholders (Queen’s University, 2014). Others have been particularly mindful that they are working with a broad range of academic and professional colleagues, from those who are uncomfortable with the introduction of online assessment to those who are ‘super enthusiasts’ (Ellis & Reynolds, 2013).
In response, some institutions have relied on organic, non-directive change based on the recruitment of internal School champions, the effective dissemination of impact using short, regular top tips, longer evidence-based assessment and feedback case studies, and a broad range of supporting resources, disseminated through approaches ranging from major events, conferences, webinars and posters to online resources and explanatory screencasts. This presentation will explore some of the best examples of strong stakeholder management across the sector to support major, institution-wide change in assessment and feedback practice. References Ellis, C., & Reynolds, C. (2013). EBEAM Final Report. http://jiscdesignstudio.pbworks.com/w/file/fetch/66830875/EBEAM%20Project%20report.pdf Ferrell, G. (2014). Technology supporting assessment and feedback at Keele. Retrieved from https://ema.jiscinvolve.org/wp/2014/08/06/technology-supporting-assessment-and-feedback-at-keele/


Newland, B., Martin, L., & Ringlen, N. (2014). Electronic management of assessment: Critical success factors in institutional change. Presented at the ED-MEDIA World Conference, Finland, 2014. University of Keele STAF Project Final Report (n.d.). Retrieved from https://www.webarchive.org.uk/wayback/archive/20140614073219/http://www.jisc.ac.uk/whatwedo/programmes/bcap/keele.aspx Queen’s University Belfast. (2014). E-Assessment and Feedback for Effective Course Transformation: Final Project Report 2014. http://jiscdesignstudio.pbworks.com/w/page/50671059/e-AFFECT%20Project The University of Reading. (2017). About the Programme: EMA Programme. Retrieved from http://www.reading.ac.uk/internal/ema/ema-about.aspx

Feedback, a shared responsibility Speakers: Annelies Gilis, Karen Van Eylen, Elke Vanderstappen, KU Leuven, Belgium; Joke Vanhoudt, Study Advice Service, KU Leuven, Belgium; Jan Herpelinck, General Process Coordination, KU Leuven, Belgium ‘Feedback is one of the most powerful influences on learning and achievement’ (Hattie & Timperley, 2007). The core concept of KU Leuven’s education philosophy is students’ disciplinary future self (DFS): that part of students’ future identity they want to create by choosing a programme of study. Research indicates that programmes that address students’ DFS stimulate motivation and deeper learning. A good implementation is therefore essential: how can we stimulate students to reflect on who they are, where they stand and where they want to go? Until now, feedback has been rather fragmented: professors give feedback on students’ assignments, while study counsellors give feedback on students’ study progress. To facilitate students’ learning and DFS, feedback should be implemented in a more aligned way in order to empower students. To realise this change at institutional level, we launched a project with two foci:

What does feedback look like when approached from different angles (curricula, students, career counselling)? What is the continuum of actors to involve? What does transformed feedback look like in practice? A pilot project will be launched in one faculty.

We will provide you with an insight into KU Leuven’s transformation of feedback and share our experiences. References Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112. https://www.kuleuven.be/onderwijs/english/education/policy/vision/consideration

Feedback: What lies within? Speaker: Jane Rand, York St John University, UK Assessment feedback is an area of disillusionment: there is a mismatch between what students want to receive and what lecturers want to give (Tapp, 2015). Dominant assessment dualisms like ‘peer/tutor’, and more particularly ‘formative/summative’, contribute to disillusionment because they disguise what lies within the binary limits. This round table presentation introduces a model, Dimensions of knowing©, as a technique to initiate a dialogue of educational criticism (Leonardo, 2016) and open up conversations between students and lecturers about what kind(s) of work the representation of feedback does. As one of the most powerful single influences on student learning, and as an integral element of a social epistemology, feedback must both foster purpose within students and be useful to them. Using the Dimensions of knowing model as both a physical and conceptual focus, I invite colleagues to reflect, interact, and begin to discuss the ways in which we might re-see, reframe, and transform feedback.

References Leonardo, Z. (2016). Educational criticism as a new specialization. Research in Education, 96(1), 87–92. Tapp, J. (2015). Framing the curriculum for participation: A Bernsteinian perspective on academic literacies. Teaching in Higher Education, 20(7), 711–722.


6

Micro Presentations Session Chair: Professor Pete Boyd

Piccadilly Suite

Crisis, what crisis? Why we should stop worrying about NSS scores Speaker: Alex Buckley, University of Strathclyde, UK

Assessment and feedback are often discussed in tones of crisis, and the National Student Survey frequently seems to be the cause. Students’ ratings of assessment and feedback are, on average, the weakest of all the areas covered by the NSS, and it has become a commonplace to justify a focus on assessment and feedback by citing the national NSS scores. Nevertheless, it is a mistake: the NSS does not provide any evidence that assessment and feedback is a particular problem for the UK higher education sector. It is generally unwise to draw conclusions from comparisons between survey items, as different levels of positivity can be acceptable for different aspects of students’ experiences. Given the element of criticism inherent in assessment and feedback processes, there are specific reasons to expect that those aspects of students’ experiences would be less positive than other areas of the questionnaire. Having your work assessed, and receiving feedback about your performance, can be emotive and challenging in a way that other elements covered by the questionnaire - teaching, organisation, learning resources - are not. There may be a crisis in assessment and feedback, and there may be good evidence for it, but it doesn’t come from the NSS.

Transforming feedback practice in the practical environment using digital technologies Speaker: Moira Maguire, Dundalk Institute of Technology, Republic of Ireland

In Science and Health disciplines, undergraduate students spend significant time in practical sessions. Here, they work in groups to learn technical competencies in combination with group work, data analysis and interpretation, communication skills and peer/self-assessment. The TEAM project (Technology Enhanced Assessment Methods in science and health practical settings) aims to improve assessment in practical sessions via the use of digital technologies. To date, four thematic areas have been identified, namely: (i) pre-practical preparation (videos, online/app-based quizzes), (ii) electronic laboratory notebooks and ePortfolios, (iii) digital feedback and (iv) rubrics. Fifty pilots involving these technologies have been implemented with 1,481 students and are currently being evaluated. While improvements to the assessment of practicals have been central to the project, enhancements to feedback delivery within practical sessions have also become a focus. For example, in the sciences, feedback traditionally comprises hand-written comments on submitted laboratory reports. With the innovative TEAM project (http://www.teamshp.ie), students are receiving their feedback via audio recordings, online quiz-generated responses, electronic lab notebook feedback widgets, graphics tablets and rubrics, amongst others. Here, we present the student evaluation of these innovative and transformative feedback technologies, which can positively engage and empower students in the science practical environment.

Stressful feedback post paramedic simulation training Speaker: Enrico Dippenaar, Anglia Ruskin University, UK

Simulation-based education has been around for some time in clinical education. These sessions have historically been very didactic, with little to no feedback or debriefing post simulation. In recent years, recognition of the significance of feedback format and its positive impact on learning has influenced practice across all forms of clinical training. Paramedic training, however, is distinctive in that the ‘job’ is unpredictable and focused on high-stress, high-intensity situations, often life or death. Simulation-based education is ideal for trying to create the same stressors on students as they will face in ‘real life’. Feedback and debrief following such simulations can be extremely difficult:


adrenaline is ‘pumping’, emotions are high, physiologically you exhibit signs of stress, and mentally you are drained. How do we use feedback in a constructive manner in such an unstructured environment? This micro presentation will highlight lessons learned from this unusual situation that may be of value in other assessment contexts.

Conversation consideration: Can the ideal speech situation be used in the creation of an ideal feedback situation? Speaker: Kimberly Wilder, Edinburgh Napier University, UK

Over the last two decades many studies in higher education settings have asked the question, ‘What is good feedback?’ These studies have focused on everything from what students want from their feedback (Beaumont, O’Doherty, and Shannon 2011; Hounsell et al., 2008) to the best approaches to giving feedback (Carless et al. 2011; Gibbs and Simpson, 2004) to guiding principles that demonstrate good feedback (Nicol and Macfarlane-Dick 2006). What many of these models do is assume that if all of the conditions of the model are met, then the student should be satisfied with the feedback they receive. Using Habermas’ Ideal Speech Situation as a comparison point to models of good feedback, this paper addresses the fundamental flaws in the assumptions made by the Ideal Speech Situation and their similarity to the assumptions made in the models of ‘good’ feedback. I propose a new approach to feedback in which we move away from the notion of using ‘good’ feedback models and focus on creating an environment where students understand the purpose of feedback. This will allow feedback provided in any form to be useful to a student, regardless of whether it is classified as a ‘good’ model or not. References Beaumont, C., O’Doherty, M., & Shannon, L. (2011). Reconceptualising assessment feedback: A key to improving student learning? Studies in Higher Education, 36(6), 671–687. http://doi.org/10.1080/03075071003731135 Blair, A., Curtis, S., & McGinty, S. (2013). Is peer feedback an effective approach for creating dialogue in Politics? European Political Science, 12(1), 102–115. http://doi.org/10.1057/eps.2012.2 Hounsell, D., McCune, V., Hounsell, J., & Litjens, J. (2008). The quality of guidance and feedback to students. Higher Education Research & Development, 27(1), 55–67. http://doi.org/10.1080/07294360701658765 Nicol, D. J., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199–218. http://doi.org/10.1080/03075070600572090 Price, M., Handley, K., & Millar, J. (2011). Feedback: Focusing attention on engagement. Studies in Higher Education, 36(8), 879–896. http://doi.org/10.1080/03075079.2010.483513

Development and Construct Validation of a Formative Assessment of Writing Instrument: A Confirmatory Study on Iranian EFL Students Speaker: Elaheh Tavakoli, Høgskolen/Hakim Sabzevari University, Norway

Despite the crucial role of formative assessment in developing students’ language learning, second language (L2) writing has tended to focus more on summative assessment. To establish what the assessment trend is in Iran as an EFL (English as a Foreign Language) context, the current study was undertaken, as part of a bigger project, to develop a Formative Assessment of Writing (FAoW) instrument and validate its construct. Following a comprehensive review of the literature, 30 Likert-scale items were devised to identify EFL students’ experience of formative assessment practices for writing assignments. Each item measured two scales: students’ experience of FAoW and their attitude towards its use to improve writing. The items were classified based on the theoretical framework proposed by Wiliam and Black (2009). The underlying construct for the development of the items was the five components in


the framework (i.e. clarifying criteria, evidence of students’ learning, feedback to move learners forward, peer-assessment and autonomy) across three main stages (i.e. “Where the learner is going”/Pre-writing, “Where the learner is right now”/Writing and “How to get there”/Post-writing). The construct was validated both qualitatively (through a focus group interview with three EFL domain experts and a think-aloud protocol with three EFL learners) and quantitatively through confirmatory factor analysis. Internal consistency of the FAoW instrument was estimated at α = .91 for the experience scale and α = .87 for the attitude scale. Assessment of model fit through confirmatory factor analysis on a sample of 255 EFL learners showed that the five-factor structure of the instrument could be statistically supported. More particularly, the absolute fit indices exceeded the minimum cut-off points, meaning that the instrument has the potential for measuring the construct of FAoW.

Developing and implementing a model of intensive spoken teacher and peer feedback with learner control Speaker: Gordon Joughin, Deakin University, Australia; Helena Gaunt, Guildhall School of Music and Drama, UK

In music and drama education, feedback on students’ performances and presentations in the classroom, on stage and in formal assessments is so central to teaching and learning that it has been characterised as a signature pedagogy. Traditional forms of feedback can easily diminish learner control and student growth. At the Guildhall School of Music and Drama an innovative approach to feedback has been developed, implemented and evaluated through an HEA-funded project on transformational feedback for empowering students in their courses and future careers. The Critical Response Process pioneered in dance education has been modified and enhanced through coaching-mentoring principles to develop a genuinely dialogic approach to feedback, co-constructed between students, their peers and their teachers. The four-step process, closely aligned with Boud and Molloy’s Feedback Mark II, focuses on the meaning of the work for its audience; the student’s request for specific feedback; questions peers and teachers pose to prompt the student’s reflection; and the student’s control over hearing evaluative opinions. The systematic embedding of the process across the Guildhall School has resulted in a significant community of informed feedback practitioners. This process may resonate with colleagues from other disciplines, who may also offer insightful challenges to the approach. Web link: Guildhall School of Music and Drama (2017). Empowering artists of the future through a transformational feedback model: a strategic initiative funded by the HEA, 2015-16. https://www.gsmd.ac.uk/about_the_school/research/research_areas/transformational_feedback/ Key reference Lerman, L., & Borstel, J. (2003). Liz Lerman’s Critical Response Process. Takoma Park: Liz Lerman Dance Exchange.

Mapping the Assessment Journey: Student evaluations of a visual representation of their assessments Speaker: Anke Buttner, University of Birmingham, UK

Assessment is central to students’ experience, and is associated with a large volume of information about different aspects of the assessment process. This ranges from basic procedural details (e.g. deadlines, formats) to sophisticated assessment literacy-related principles, which students need to understand to succeed at their assessments and make the most of their feedback (Price, Rust, O’Donovan, & Handley, 2012). To maximise the utility of feedback, students also need to understand the interconnections between different assessments on their programmes, but transfer of feedback from one context to another is often difficult (Schwartz, Bransford, & Sears, 2005). To assist with this, we developed a calendar-based assessment map, but while this was fairly well received by the students, what they considered important did not match the priorities of academics, and the evidence suggested that assessments were still viewed in isolation (Buttner & Pymont, 2015). The aim of this paper is to present students’ evaluations of a more visually intuitive representation of the


interconnections between assessments. A flow chart assessment map outlining the connections between first-, second-, and third-year assessments was made available to students and evaluated using a questionnaire and focus groups. The findings are presented here and implications for future developments in assessment mapping are discussed. References Buttner, A. C., & Pymont, C. (2015). Charting the assessment landscape: Preliminary evaluations of an assessment map. Paper presented at the 5th Assessment in Higher Education Conference, Birmingham. Price, M., Rust, C., O’Donovan, B., & Handley, K. (2012). Assessment Literacy: The Foundation for Improving Student Learning. Oxford: The Oxford Centre for Staff and Learning Development. Schwartz, D. L., Bransford, J. D., & Sears, D. (2005). Efficiency and innovation in transfer. In Transfer of Learning from a Modern Multidisciplinary Perspective, 1-51.

Evaluation or Research Presentations Session Chair: Assistant Professor Natasha Jankowski

7

Piccadilly Suite

Developing student feedback literacy using educational technology and the reflective feedback conversation Speakers: Kathryn Hill, La Trobe University, Melbourne, Australia; Ana Maria Ducasse, RMIT University, Melbourne, Australia

The potential for feedback to promote learning has been well documented. However, as any teacher will attest, feedback does not automatically lead to improvement (Sadler, 2010), and this has led to a large volume of advice to teachers about how to make their feedback more effective. However, quality notwithstanding, feedback can only promote learning to the extent that it is accessed, understood and acted on by learners (Gibbs & Simpson, 2004-5). Research has found that students do not always share the same views about the role and importance of feedback as their teachers (Price, Handley & Millar, 2011). They often experience difficulty with understanding feedback (Weaver, 2006) and with knowing how to act on it (Gibbs, 2006; Poulos & Mahony, 2008). Moreover, students are not necessarily receptive to the feedback provided (Andon, Dewey & Leung, 2017), particularly if it fails to align with their personal learning goals and beliefs about language learning (Ducasse & Hill, 2016, 2018a). There is an increasing recognition of the centrality of the learner in assessment for learning (Andrade, 2010), and these findings underscore the importance of including the learner perspective in any examination of feedback practices. This paper describes an intervention which used the ‘reflective feedback conversation’ (Cantillon & Sargeant, 2008) and educational technology (PebblePad) to encourage learners to take a more active role in the feedback process. The ‘reflective feedback conversation’, first proposed for use in clinical education, involves asking students to identify two to three specific areas they would like feedback to focus on and to evaluate their own performance in these areas. The teacher then provides her own evaluation and asks the student to reflect on specific ways to improve. The teacher then elaborates on the student’s response and checks understanding. PebblePad was used to enable the documentation and tracking of these ‘feedback conversations’ over time.
The aim of the intervention was to promote:
• shared goals and expectations
• a shared understanding of expected quality and standard, and
• shared responsibility for developing the knowledge and skills necessary for bridging the gap between current and desired performance.

Participants were 50 students enrolled in Level 3 (pre-intermediate) of a university-level Spanish program. Data included student questionnaires, teacher and student interviews (n=5) and


documentation (via PebblePad) of feedback ‘conversations’ over time. Questionnaire data were analysed using descriptive and inferential statistics, while interview and documentation data were analysed using thematic content analysis. The findings have implications for improving the uptake of feedback, increasing student agency and self-regulation, and promoting sustainable feedback practices. References Andon, N. J., Dewey, M., & Leung, C. (2017). Tasks in the pedagogic space: Using online discussion forum tasks and formative feedback to develop academic discourse skills at Master’s level. In TBLT as a Researched Pedagogy (Task-Based Language Teaching). John Benjamins Publishing Company. Cantillon, P., & Sargeant, J. (2008). Giving feedback in clinical settings. BMJ, 337, 1292-4. doi: https://doi.org/10.1136/bmj.a1961 Ducasse, A. M., & Hill, K. (in press). Advancing written feedback practice through a teacher-researcher collaboration in a university Spanish program. In M. E. Poehner & O. Inbar-Lourie (Eds.), Toward a Reconceptualization of L2 Classroom Assessment: Praxis and Researcher-Teacher Partnership. Springer Educational Linguistics series.

Diverse orientations towards assessment and feedback internationally Speakers: Sally Brown, Independent consultant, Emerita Professor at Leeds Beckett University, UK; Kay Sambell, Edinburgh Napier University, UK

International practice in assessment, and particularly feedback, is diverse globally, and expectations of how feedback is presented, its scope and its extent are highly variable. Additionally, Killick (2018) argues that in an intercultural context, feedback on assignments can be subject to ‘misinterpretation across cultural boundaries’ (p. 149). There is also substantial variation in the extent to which students feel their teachers are responsible for their academic success through the guidance and feedback they offer (Palfreyman and McBride, 2007), with potential ensuing issues around student and staff expectations. In this short account, using emergent data from an ongoing project, Kay Sambell and Sally Brown will showcase examples of different approaches and practices around feedback from at least five nations in which they have worked and presented on assessment and feedback. Our findings to date, from the literature in the field and from working internationally ourselves, suggest that: 1. Students from some Confucian-heritage nations have higher expectations of deep and extended support on assignments prior to submission, in the form of feedback on drafts (Brown and Joughin, 2007); 2. In some Southern European nations (including Italy and Spain), extensive feedback pre- or post-submission is neither offered to students nor expected by them; 3. In the Netherlands, oral assessment (such as viva voce examinations) is used far more extensively in undergraduate studies than elsewhere, and feedback is frequently provided on unseen time-constrained exams, unlike normal practice in, for example, the UK and Australia; 4. Multiple-choice questions and other forms of computer-based assessment are often more widely used in nations including the US and Singapore than elsewhere, and frequently these do not include integral feedback; 5. In the UK, Australia and Ireland, greater use of inter- and intra-peer assessment is made in the assessment of group work than in nations including Spain, the US and Italy, with concomitant requirements for training and support to be provided for students on giving feedback to one another.


We nevertheless recognise the importance of the findings of a comparison between UK and Australian assessment practices (Winstone and Boud, 2017, p. 3) which suggest that ‘major differences in feedback culture may not be located in national approaches, but in local disciplinary, institutional and pedagogic contexts’. Our work in the area is ongoing, and within the discussion element of this presentation we will seek participants’ collegial contributions to our research. References Brown, S., & Joughin, G. (2007). Assessing international students: Helping clarify puzzling processes. In S. Brown & E. Jones (Eds.), Internationalising Higher Education. London: Routledge. Carless, D., Joughin, G., & Ngar-Fun, L. (2006). How Assessment Supports Learning: Learning-oriented Assessment in Action. Hong Kong: Hong Kong University Press. Carroll, J., & Ryan, J. (2005). Teaching International Students: Improving Learning for All. Abingdon: Routledge SEDA series. Killick, D. (2018). Developing Intercultural Practice: Academic Development in a Multi-cultural and Globalizing World. London and New York: Routledge. Palfreyman, D., & McBride, D. L. (2007). Learning and Teaching Across Cultures in Higher Education. Basingstoke: Palgrave Macmillan. Winstone, N., & Boud, D. (2017). Supporting students’ engagement with feedback: The adoption of student-focused feedback practices in the UK and Australia. SRHE Conference.

Frequent rapid feedback, feed-forward, and peer learning for enhancing student engagement in an online portfolio assessment Speaker: Theresa Nicholson, Manchester Metropolitan University, UK

This paper presents outcomes from a three-year initiative to enhance student engagement through weekly summative and formative feedback. The research concerns a first-year undergraduate, tutorial-supported, academic skills module with an assessed online portfolio. In phases 1 and 2, half the student cohort submitted a printed portfolio, while half completed an online portfolio. In the final phase, all students completed an online portfolio (c. 190 students). The research design has enabled a robust comparison of the influence of assignment mode on the efficacy of marking and feedback, attainment, student engagement, and tutorial management. The aims were to enhance student engagement, improve digital literacy among staff and students (Clarke and Boud, 2016), and raise employability awareness by developing a showcase of skills for prospective employers (Simatele, 2015). The online mode challenges the orthodox approach, in which the classroom is the focus of learning, in favour of a student-regulated mode in which learning is facilitated by tutors’ regular, individual, written feedback. Thus, frequent and interactive feedback, including elements of feed-forward and peer learning, helps develop students’ thinking and learning skills (Clark, 2012). Qualitative analysis reveals that the provision of rapid and regular feedback on work is the aspect most valued by students. Tutors valued the ability to track students’ progress by accessing online portfolios and providing rapid feedback on completed work. Feedback and progress tracking are easy to give and to receive online (Heinrich et al., 2007), but also create an accountability that is often absent in relatively remote institutional monitoring systems (Stork and Walker, 2015). Some tutors found the marking and feedback process easier online and reported a positive impact on tutorials, while others found the process more challenging.
The online approach had an adverse effect on face-to-face meetings for some, highlighting the need for guidance on tutorial management. Quantitative analysis of student grades tentatively indicates higher attainment levels in the online mode, where progress tracking and regular feedback occur. There are some tensions between the desire to provide very regular, rapid feedback and the associated practical constraints. Barriers were sometimes presented by non-engagement of learners, likely influenced by an array of external as well as internal pressures. Nevertheless, engagement on the whole was much improved. There were also constraints due to limited digital literacy and tutors’ workload pressures. The findings suggest that personalised progress tracking, prompt, regular feedback on tasks, and


multiple opportunities for group-based discussion of feedback, can promote student engagement in both self-regulated and face-to-face learning activities. References Clarke, J. L., Boud, D. (2016). Refocusing portfolio assessment: Curating for feedback and portrayal. Innovations in Education and Teaching International. http://dx.doi.org/10.1080/14703297.2016.1250664, 1-8. Clark, I. (2012). Formative assessment: assessment is for self-regulated learning. Educational Psychology Review, 24(2), 205–249. Heinrich, E., Bhattacharya, M., Rayudu, R. (2007). Preparation for lifelong learning using ePortfolios. European Journal of Engineering Education, 32(6), 653–663. Simatele, M. (2015). Enhancing the portability of employability skills using e-portfolios. Journal of Further and Higher Education 39(6), 862–874. Stork, A., Walker, B. (2015). Becoming an Outstanding Personal Tutor. Critical Publishing, Northwich.

8

Evaluation or Research Presentations Session Chair: Associate Professor Geraldine O’Neil

Meeting Room 3

Student learning through feedback: the value of peer review? Speaker: Charlie Smith, Liverpool John Moores University, UK

The design review – also known as a crit or jury – is a long-standing cornerstone of design education. It is a forum through which students present their coursework, and then receive formative feedback from their teachers and guest reviewers – often external practitioners or teachers from other institutions. However, its perceived value and status within the pedagogic process mean that its methods and effectiveness often go unquestioned. (1) In his seminal 1852 book 'The Idea of a University', Newman argues that, “A university training … is the education which gives a man [sic] a clear conscious view of his own opinions and judgements, a truth in developing them, an eloquence in expressing them, and a force in urging them.” (2) Given the format and power-dynamic of the conventional design review – a nervous student standing in front of their work, facing a panel of critics seated before them, beyond whom other students passively observe the proceedings – it is questionable as to what extent it meets any of Newman’s objectives. What, if anything, does the review offer as a means through which to foreground the student, and toward developing their critical and reflective thinking? This paper interweaves two methodologies to critique student peer review – where students become critics in place of their teachers. Firstly, and theoretically, by interrogating it in the context of contemporary pedagogic research, to debate its value, strengths and weaknesses in terms of learning gain and skills development, and its suitability to the contemporary higher education environment. 
Secondly, and empirically, through a primary qualitative research project that evaluates the experiences of Architecture students involved in a series of peer reviews, as both reviewers and reviewees; this provides insights and understanding of how students feel about being placed in the position of critics to their peers, and what they think they take from the process – both in terms of feedback on their work and in developing their critical thinking. This is significant, because whilst it might have pedagogic benefits, research has found instances where students resented evaluating their peers’ work. (3) The paper will discuss to what extent peer reviews are an effective means through which to deepen students’ creative and critical thinking, in establishing a meaningful dialogue between learners in which they are contributors as opposed to mere passive recipients, and in augmenting their participation and agency in the learning enterprise. For as Collini argues, preparing students for autonomy is a key objective of higher education. References Collini, S. (2012). What Are Universities For? London: Penguin Books.
Newman, J. (1886). The Idea of a University, 6th edn. London: Longmans, Green. Salama, A.M. (2015). Spatial Design Education: New Directions for Pedagogy in Architecture and Beyond. Farnham, Surrey: Ashgate. Wilson, M.J., Diao, M. and Huang, L. (2014). ‘“I’m Not Here to Learn How to Mark Someone Else’s Stuff”: An Investigation of an Online Peer-to-Peer Review Workshop Tool.’ Assessment and Evaluation in Higher Education, Vol. 40 No. 1, pp. 15-32.

In the shoes of the academics: Inviting undergraduate students to apply their assessment literacy to assessment design Speakers: Anke Buttner, Andrew Quinn, Ben Kotzee, Joulie Axelithioti, University of Birmingham, UK

In higher education, students’ feelings about and responses to assessment are a perennial concern. Much of the discourse in this area relates to students’ satisfaction and engagement with feedback; however, an awareness has been growing that universities need to understand students’ ‘assessment literacy’ - their 'understanding of the rules surrounding assessment in their course context, their use of assessment tasks to monitor or further their learning, and their ability to work with the guidelines on standards in their context to produce work of a predictable standard.’ (Smith et al., 2013: 46) Recently, both the HEA (2012) and the QAA (2013) have encouraged universities to improve students’ assessment literacy to boost student satisfaction, protect and improve standards, and increase confidence in the system of assessment. Yet, to date, few empirical studies have addressed students' assessment literacy in the university sector. We combined questionnaire and workshop data to study students’ feelings about assessment as well as their understanding of the assessment design process. We asked students to design the assessment they felt most suited the domain to be assessed. We recorded students’ thinking, responses and solutions to see whether students can ‘place themselves in the position of academics who design assessment’. In this paper, we present results from our study.

Feedback to the Future Speakers: Erin Morehead, University of Central Lancashire; Andrew Sprake, University of Central Lancashire, UK

Providing students with feedback is a key part of the academic’s role (Angelo & Cross, 1993; Richardson, 2005). Facilitating students to access and engage with feedback can be challenging. Students also report a lack of understanding around feedback; in particular, on written pieces of work (Richardson, 2005). In addition to this, research has shown that feedforward plays an important role in the development of learners and that self-reflection is an integral part of the learning process (Ferrell & Gray, 2016; HEA, 2012). Therefore, it was identified that there was a need to improve students’ understanding of feedback and to encourage them to make use of this feedback in future assignments. Two small independent cohorts of students from the Faculty of Health & Wellbeing were selected for this pilot study, in order to evaluate the outcomes of two similar feedforward strategies. After receiving feedback on academic work, students were encouraged to access the comments on their assignment. Once the student had reviewed the feedback received, they were asked to complete either Action Plan A (Health Sciences students) or Action Plan B (Sports & Wellbeing students). Action Plan A asked the students to document significant feedback points from one specific assignment. The students were asked to identify both areas for improvement and areas of good practice. Following on from this, students were asked to identify how they improved or maintained these areas in the next piece of work being submitted. The students were required to submit the action plan appended to their next written assignment. Action Plan B encouraged students to document significant feedback points from multiple assignments and to identify how they could improve on this in future pieces of work. Action Plan B also asked students to perform a self-assessment of the mark they would award their own piece of work. 
Students were encouraged to submit this alongside their next assignment, but it was not a formal requirement. Overall marks improved on the next piece of work submitted for both cohorts
studied, and in a number of cases by an entire grade band. As Action Plan A was a requirement, engagement with the exercise was higher than with Action Plan B. Unsurprisingly, students who reported not engaging with their feedback did not see an improvement in their mark. Overall, the students evaluated the exercise as a positive experience and reported a greater understanding of how feedback can be used in other assessments. The pilot study revealed that although different approaches to feedforward were utilised, the overall outcome of improvement in marks was observed in both cohorts. References Angelo, T. and Cross, P. (1993) Classroom Assessment Techniques: A handbook for college teachers. San Francisco: Jossey-Bass. Ferrell, G. and Gray, L. (2016) Feedback and feed forward: Using technology to support students’ progression over time. https://www.jisc.ac.uk/guides/feedback-and-feed-forward Higher Education Academy. (2012) Feedback Toolkit: 10 feedback resources for your students. https://www.heacademy.ac.uk/system/files/resources/10_feedback_resources_for_your_students2.pdf Richardson, J. (2005) Instruments for obtaining student feedback: a review of the literature. Assessment & Evaluation in Higher Education, 30(4), 387-415.

9

Evaluation or Research Presentations Session Chair: Jess Evans

Meeting Room 5

‘Feedback interpreters’: The role of Learning Development professionals in overcoming barriers to university students’ feedback recipience Speakers: Karen Gravett, University of Surrey; Naomi Winstone, University of Surrey, UK.

Understanding how students engage with feedback is a critical area for higher education professionals. One of the difficulties faced by those wishing to better understand student engagement with feedback is the ‘hidden recipience’ of feedback; once students access the feedback that they have been given, we know very little about what they subsequently do with it. This is problematic because we then do not have insight into the difficulties and challenges students face when trying to put feedback into practice. To date, the literature has not represented the views of a core group of professionals who have strong insight into the difficulties students face when attempting to implement feedback: student learning development professionals. Until now the role of learning development staff within this feedback landscape has not been considered, despite these individuals providing valuable support to students when interpreting and implementing feedback. Through semi-structured interviews with nine learning developers working in a UK university, we explored their insights into the barriers students confront when engaging with feedback. Our research reveals that, while many challenges do exist for staff and students in the context of assessment feedback, learning development professionals are able to provide a meaningful source of guidance, and are able to promote students’ development through constructive, dialogic, student-teacher interactions. By working with students to interpret, discuss and implement feedback, learning development professionals can be viewed as occupying a multiplicity of identities: working as interpreter, coach, intermediary, listener, and as a source of feedback dialogue. This presentation will examine these identities and explore how this critical area of support enables students to achieve at university. 
Hitherto these interactions have not been fully explored, yet they provide a powerful depiction of the hidden process of feedback recipience, as well as the complex learning environment experienced by today’s higher education students.
References Carless, D. 2006. ‘Differing perceptions in the feedback process’. Studies in Higher Education, 31 (2), 219-233. Evans, C. 2013. ‘Making Sense of Assessment Feedback in Higher Education’. Review of Educational Research, 83 (1), 70-120. Nash, R., and Winstone, N. 2017. ‘Responsibility-Sharing in the Giving and Receiving of Assessment Feedback’. Frontiers in Psychology, 8. Nicol, D. 2010. ‘From monologue to dialogue: improving written feedback processes in mass higher education’. Assessment and Evaluation in Higher Education, 35 (5), 501-517. doi: 10.1080/02602931003786559. Shield, S. 2015. ‘“My Work is Bleeding”: Exploring Students’ Emotional Responses to First-year Assignment Feedback’. Teaching in Higher Education, 20 (6), 1-11. doi: 10.1080/13562517.2015.1052786 Winstone, N. E., Nash, R. A., Rowntree, J., and Parker, M. 2017. ‘“It’d be useful, but I wouldn’t use it”: Barriers to university students’ feedback seeking and recipience’. Studies in Higher Education, 42 (11): 2026-2041. doi: 10.1080/03075079.2015.1130032.

Investigating Chinese Students' Perceptions of and Responses to Teacher Feedback: Multiple Case Studies in a UK University Speaker: Fangfei Li, University of Bath, UK

With more and more attention being drawn to the agency of students in the feedback process in recent studies, researchers reconceptualise the notion of feedback, in relation to its efficacy in scaffolding the dialogical learning process, from 'unilateral' (i.e. information is transmitted from the teacher to the student) to 'multilateral' (i.e. students subjectively construct feedback information and seek to inform their own judgements through drawing on information from various others (Nicol, 2010; Boud and Molloy, 2013)). Perspectives from students on their engagement with the feedback process in terms of how they perceive and respond to feedback are important to the development of teacher feedback but are comparatively under-researched (Carless, 2006). More particularly, there is yet very little research on how Chinese overseas students, who come from a feedback-sparse undergraduate educational background, engage with teacher feedback in the context of UK higher education. The study reported in this presentation therefore aimed to investigate and shed new light on how Chinese postgraduate students perceive and engage with teacher feedback in a UK university. The research questions which guided this study were: 1. How do Chinese students perceive teacher feedback in UK higher education? 2. How do they respond to teacher feedback in UK higher education? A qualitative case study, employing semi-structured, stimulated-recall and retrospective interviews, was conducted with five Chinese postgraduate students. Data collection covered two phases – the pre-sessional language programme and the first term of their Master's degree programme. The data collected were analysed thematically and findings revealed that participants perceived teacher feedback as constituting socio-affective, cognitive, interactive and experiential dimensions. 
Findings indicated that participants' various perceptions of teacher feedback reflected their different understandings of the purpose of feedback. On arriving at the UK university, participants in some cases viewed feedback as a kind of explicit instruction, whereas in other cases they tried to relate feedback to the long-term development of their learning. What is more, their responses to teacher feedback were manifested in a range of ways, including accepting and using the feedback with critical consideration, rejecting the feedback for their own reasons, and interpreting the feedback to make it align with their own judgements, beliefs or preferences. Findings showed that students constructed feedback information in various ways by discussing the feedback with others (e.g. peers and teachers), consulting learning materials, and connecting it with
their previous understanding that was shaped by their prior learning experience in China. The significance of the study is that it will enable university teachers in the UK to have a comprehensive understanding of how students may interpret and respond to the feedback they provide, as well as to gain insights into how to prepare and train Chinese overseas students to engage effectively with feedback in UK higher education. References Boud, D. & Molloy, E. (2013) Rethinking Models of Feedback for Learning: the Challenge of Design, Assessment & Evaluation in Higher Education, 38:6, pp. 698-712. Carless, D. R. (2006) Differing Perceptions in the Feedback Process, Studies in Higher Education, 31:2, pp. 219-233. Nicol, D. (2010) From Monologue to Dialogue: Improving Written Feedback Processes in Mass Higher Education, Assessment & Evaluation in Higher Education, 35:5, pp. 501–517.

Action on Feedback Speakers: Teresa McConlogue, University College London; Jenny Marie, University College London; Clare Goudy, University College London, UK

This presentation reports on an evaluation of an initiative to improve teacher and student understanding of good quality feedback. The Action on Feedback initiative is part of a 5-year project to review assessment and feedback at UCL, a large research-intensive university. We identified feedback as a priority issue from student satisfaction surveys, which indicated dissatisfaction with the timeliness, quantity and quality of feedback. Drawing on recent research and reconceptualisations of feedback (Boud and Molloy 2013, Nicol et al 2014, Xu and Carless 2017) we aimed to shift teachers from thinking about feedback as essentially transmissive (Sadler 2010) to understanding how feedback can develop students’ evaluative judgment (Tai et al 2018, Boud and Molloy 2013) and support students to become self-regulated learners (Nicol et al 2014). We also wanted to address the lack of consistency in the quantity and quality of teacher feedback, by providing professional development for teachers across the institution. This is particularly important for large programmes with very large teaching teams and also for inter-disciplinary and cross-faculty programmes. We have worked to improve feedback by developing resources (especially the Quick Guides to Assessment and Feedback https://www.ucl.ac.uk/teaching-learning/teaching-resources/teaching-toolkits/assessment-feedback ) and designing a two-hour interactive workshop for academics, linking current theoretical work on feedback with ideas for practical implementation. The workshop was offered across the institution with a target of 500 participants in the first year, and a further 500 in the following year. We anticipated that this ‘saturation Continuing Professional Development’ would ensure that teaching teams developed a common understanding of good quality feedback, and that this understanding would be shared across faculties. 
The response to the workshop this year has been excellent and we are on course to meet or exceed our target of 500 participants. The workshop is evaluated through an immediate short questionnaire; one question asks participants what changes they intend to make. Two actions are suggested. The first is that teachers should carry out peer review of feedback within their programme team. A case study of how to carry out peer review of feedback is presented in the workshop and participants are encouraged to go back to their teams and collaborate with colleagues to review feedback on their programmes. The second action is that teachers develop students’ understanding of feedback and hone their evaluative judgment through use of exemplars and dialogic feedback (Carless and Chan 2017). Appropriate activities are presented in the workshop (e.g. Guided Marking https://www.ucl.ac.uk/teaching-learning/teaching-resources/teaching-toolkits/assessment-feedback). Participants are encouraged to embed similar activities in their programmes. In this presentation, the evaluation of the impact of the Action on Feedback initiative will be reported through analysis of initial evaluations of the workshop and a follow-up impact questionnaire. We will report on whether participants
implemented the two suggested actions, on lessons learned and our plans for the second year of the initiative. References Boud, D. and Molloy, E., 2013. Rethinking models of feedback for learning: the challenge of design. Assessment & Evaluation in Higher Education, 38(6), pp.698-712. Carless, D. and Chan, K.K.H., 2017. Managing dialogic use of exemplars. Assessment & Evaluation in Higher Education, 42(6), pp.930-941. Nicol, D., Thomson, A. and Breslin, C., 2014. Rethinking feedback practices in higher education: a peer review perspective. Assessment & Evaluation in Higher Education, 39(1), pp.102-122. Sadler, D. R. 2010. Beyond Feedback: Developing Student Capability in Complex Appraisal. Assessment & Evaluation in Higher Education 35(5), pp. 535–550.

10

Evaluation or Research Presentations Session Chair: Dr. Peter Holgate

Meeting Room 7

The Cinderella of UK assessment? Feedback to students on re-assessments Speakers: Marie Stowell, University of Worcester, Liverpool John Moores University; Harvey Woolf, ex University of Wolverhampton, UK

The recent publication of Stefan Sellbjer’s ‘Have you read my comments?’ was a stark reminder of how little has been written about re-assessment in general and feedback practices relating to re-assessment in particular. Published material on re-assessment concentrates on processes and procedures, purposes and/or outcomes. In large part this is because re-assessment takes place outside the mainstream of institutions’ assessment schedules, without formalised processes for managing student preparation and feedback. A further reason for the Cinderella status of re-assessment is that it is only a minority of students who have to be re-assessed. It is, however, a sizeable minority: nearly 25% of all the students who progressed to Level 5 in the 2016 NUCCAT-SACWG research on progression from year one had to be re-assessed in order to progress from Level 4 to Level 5. This under-represents the total number who were re-assessed, as there would have been students who took Level 4 re-assessments but did not qualify for progression. We do not have hard data on the volume of re-assessment at Levels 5 and 6; however, anecdotal evidence suggests that re-assessment remains at a significant level even when students are studying at these levels. Given the overall number of students involved in re-assessment and the institutional resource invested in the process, we might consider how learning through effective feedback can be given greater importance. Over the past three years NUCCAT and SACWG have been engaged in a project to explore re-assessment practices in UK higher education. The latest phase of the project is designed to understand the extent to which the management of initial assessments varies from that of re-assessments in terms of institutional requirements and practice. One element of this is how feedback is provided to students, not least in relation to the use of capped grades for re-assessments. 
The presentation will examine the outcomes of the research, highlighting the context in which feedback on re-assessments operates, detailing the variations in the ways in which re-assessment feedback is given compared with feedback on initial assessments, and considering the implications for how feedback on re-assessments can be enhanced. References Sellbjer, Stefan. 2018. “‘Have you read my comments? It is not noticeable. Change!’ An analysis of feedback given to students who have failed examinations.” Assessment & Evaluation in Higher Education 43 (2): 163-174. DOI: 10.1080/02602938.2017.1310801. Pell, Godfrey, Katherine Boursicot and Trudie Roberts. 2009. "The Trouble with Resits …" Assessment & Evaluation in Higher Education 34 (2): 243-51.
Ricketts, Chris. 2010. "A New Look at Resits: Are They Simply a Second Chance?" Assessment & Evaluation in Higher Education 35 (4): 351-56. Proud, Steven. 2015. "Resits in Higher Education: Merely a Bar to Jump over, or Do They Give a Pedagogical ‘Leg Up’?" Assessment & Evaluation in Higher Education 40 (5): 681-97. To what extent do re-assessment, compensation and trailing support student success? The first report of the NUCCAT-SACWG Project on the honours degree outcomes of students progressing after initial failure at Level 4, NUCCAT, 2016.

A critique of an innovative student-centred approach to feedback: evaluating alternatives in a high risk policy environment Speakers: Judy Cohen, University of Kent; Catherine Robinson, University of Kent, UK

This paper evaluates an approach to developing dialogic feedback through team-based learning (TBL). The aim was to address student underperformance via a collaborative and student-centred learning space (Cohen & Robinson, 2017). While this initial study showed improvements in student performance, satisfaction with TBL was mixed, and early indications are that these results will be confirmed by the second study. Additionally, staff reported concerns over teaching spaces and workload, while observing that overall student satisfaction with the module dropped slightly in response to the flipped mode of content delivery and high levels of peer interaction. While student-centred approaches may be equated with ‘teaching excellence’ in the minds of teaching staff, our findings suggest that student-centredness and the push towards self-regulated learning are complex and not necessarily favoured by students, particularly those novice learners holding a didactic/reproductive approach to learning (Lee & Branch, 2017). In an era of market-driven provision of education, a focus on choice and cost may be expected to spotlight student ratings and value for money. However, this focus introduces a conflict for staff aiming to deliver a quality higher education experience while facing high expectations of accountability in the current HE environment. Staff may weigh up teaching innovations not only by the potential to enhance transformative learning, but by student satisfaction with the innovation. This phenomenon of pedagogic frailty (Kinchin et al., 2016) is included in our evaluation, and we examine ways to provide higher education which is characterised by the acquisition of troublesome knowledge and requires student effort over time (Land et al., 2010), while addressing student satisfaction. Using a conceptual framework of teaching excellence (Cohen & Robinson, 2017), we provide an evaluation of the implementation of TBL for enhancing student engagement with and use of feedback. 
In particular, we evaluate student use of feedback in their transition from novice to expert learner and we explore alternative ways of facilitating this developmental journey. We consider the bearing of pedagogic frailty (Kinchin et al., 2016) on our own practice and the implications for trialling innovative teaching practices within a competitive Business School environment. Findings from our research project are discussed in relation to the methodologies used and the appropriate methods of implementing practices supporting student engagement and peer feedback. Participants will be able to formulate an action plan and evaluate their own innovative practice with a critical eye to pedagogic frailty and student voice. References Cohen, J. and C. Robinson (2017) Enhancing Teaching Excellence through Team Based Learning, Innovations in Education and Teaching International, 13 October, 2017, https://doi.org/10.1080/14703297.2017.1389290 Kinchin, I.M., E. Alpay, K. Curtis, J. Franklin, C. Rivers & N.E. Winstone (2016) Charting the elements of pedagogic frailty, Educational Research, Vol. 58, No. 1, 1-23, http://dx.doi.org/10.1080/00131881.2015.1129115 Land, R., Meyer, J.H.F. & Baillie, C. (2010) ‘Editors’ Preface’, in Threshold Concepts and Transformational Learning, Land, R., Meyer, J.H.F. & Baillie, C. eds. Rotterdam: Sense Publishers.
Lee, S.J. & Branch, R.M. (2017) ‘Students’ beliefs about teaching and learning and their perceptions of student-centred learning environments.’ Innovations in Education and Teaching International. http://dx.doi.org/10.1080/14703297.2017.1285716

Build a bridge and get over it – exploring the potential impact of feedback practices on Irish students’ engagement Speakers: Angela Short, Dundalk Institute of Technology; Gerry Gallagher, Institute of Technology, Republic of Ireland

Student engagement (SE) has been defined as participation in educationally effective practices, both inside and outside the classroom, which lead to a range of measurable outcomes. The SE concept “implies a series of conceptual commitments, teaching strategies and behavioural orientations” (Macfarlane & Tomlinson, 2017, p.7) that is key to learning gain and success in higher education. The salience of social relationships in the building of student engagement is no longer questioned, with the depth and quality of the teacher-student relationship acting as a buffer for students in times of crisis. However, despite the accepted importance of interpersonal relationships in providing a gateway to learning, there is a paucity of research on the nature and impact of the teacher-student relationship in higher education (Hagenauer and Volet 2014). Studies have confirmed that undergraduate students perceive their teachers as caring when they encourage and respond to questions, and give good feedback (Teven 2001). Good quality feedback, formative in nature and dialogic in delivery, which is intentionally integrated throughout the teaching process and focused on improving student learning, can bolster student confidence and motivation, provide guidance and direction and, importantly, foster a stronger student-teacher relationship (Winstone and Nash 2016). This, in turn, can promote learning and enhance the effectiveness of the feedback by increasing student engagement with the process (Yang and Carless 2013). The Irish Survey of Student Engagement (ISSE) is a compulsory national survey of higher education students. Introduced in 2013 and the first system-wide survey of its type in Europe, the ISSE seeks to document Irish higher education students’ experience. Primarily consisting of fixed-choice closed questions, the survey ends by posing two open-ended questions: 1. 
What does your institution do best to engage students in learning? and 2. What could your institution do to improve students’ engagement in learning? Systematic analysis of the free-text response data collected from students in the first five years of the ISSE underlines the importance for students of the teacher-student relationship and of the perception that teachers care about them and, more importantly, about their learning. In addition to examining what students have to say about feedback in the ISSE, this paper will explore the potential of teacher feedback to foster the teacher-student relationship when underpinned by principles of good practice. Dialogic teacher-student feedback interactions focus primarily on the individual student and their learning and, thus, can reinforce the perceived caring and approachability of the teacher. The paper will further argue that, although relational feedback practices are challenged by the prevalence of large group teaching in higher education today, these practices may be supported through the thoughtful use of appropriate digital technologies (Y1Feedback 2016). References Hagenauer, G. and Volet, S.E. (2014). Teacher-student relationship at university: an important yet under-researched field. Oxford Review of Education, 40(3), pp.370-388. Macfarlane, B. and Tomlinson, M. (2017). Critiques of Student Engagement. Higher Education Policy, 30, pp. 5-21. Teven, J.J. (2001). The relationships among teacher characteristics and perceived caring. Communication Education, 50(2), pp.159-169.
Winstone, N. & Nash, R. (2016). The Developing Engagement with Feedback Toolkit (DEFT). Available at: https://www.heacademy.ac.uk/knowledge-hub/developing-engagement-feedback-toolkit-deft Yang, M. and Carless, D. (2013). The Feedback Triangle and the Enhancement of Dialogic Feedback Processes. Teaching in Higher Education, 18(3), pp. 285-297. Y1Feedback (2016). Technology-Enabled Feedback in the First Year: A Synthesis of the Literature. Available at: http://y1feedback.ie/wp-content/uploads/2016/04/SynthesisoftheLiterature2016.pdf

11

Round Table Presentations Session Chair: Dr. Rita Headington

Meeting Room 11

Dialogue and Dynamics: A time efficient high impact model to integrate new ideas with current practice Speaker: Lindsey Thompson, University of Reading, UK

There has been a proliferation of discussion related to student progress in higher education, reflected throughout the HEA, HESA and the TEF. Progress can be attributed to a variety of factors (Hattie and Timperley, 2007), one of the most significant being feedback, which is consistently the lowest scoring question on the NSS (IPSOS Mori, 2006). The literature has identified a range of barriers to effective feedback in terms of dissemination and reception. A key concern of students is the consistency of feedback, their ability to interpret it and how to use it (Ajjawi and Boud 2017, Winstone et al., 2017). Lecturers, by contrast, highlight student engagement as the most important barrier. In addition, they are universally concerned by the time devoted to detailed marking as student numbers continue to rise. Recent research has suggested that effective feedback takes the form of a shared dialogue between the giver and the receiver. This model has also been shown to be preferred by students (Mulliner and Tucker, 2017). The model proposed in this paper integrates current expertise with the newer demands for dialogue and the idea that feedback should be a dynamic process. Students receive individual reports and targeted ‘next steps’. Reports can be stored and accessed easily to support revision and further work. Feedback summaries enable self-evaluation, and cohort comparisons at the lecturer level. Finally, a range of dynamic ‘next steps’ are provided online, or face-to-face dialogue is booked and recorded. The model is illustrated using a simple Excel spreadsheet separated into various worksheets. Initial discussions with students and lecturers have been very positive, with 100% of Foundation year students saying they prefer the system over any of their current models. They report that they are more likely to read the feedback, understand what to do and use it in the future. 
Staff feel it will ease their workload while providing high impact feedback that will drive student progress. References Ajjawi, R. and Boud, D. (2017) Researching feedback dialogue: an interactional analysis approach, Assessment and Evaluation in Higher Education, 22, 252-265. Hattie, J. and Timperley, H. (2007) The Power of Feedback, Review of Educational Research, 77(1), 81-112. Mulliner, E. and Tucker, M. (2017) Feedback on feedback practice: perceptions of students and academics, Assessment and Evaluation in Higher Education, 42, 266-288. Winstone, N.E., Nash, R.A., Rowntree, J. & Parker, M. (2017) ‘It'd be useful, but I wouldn't use it’: barriers to university students’ feedback seeking and recipience, Studies in Higher Education, 42:11, 2026-2041.


From peer feedback to peer assessment Speaker: Josje Weusten, Maastricht University, Netherlands

One of our main goals in the Bachelor Arts and Culture programme at Maastricht University is to help our students develop into autonomous, critical thinkers and self-regulated learners. We offer a programme that is student-centered and we work according to the principles of Problem Based Learning. Peer feedback forms the backbone of our programme: it helps students to engage with their academic development, and the act of giving feedback deepens their learning experience. As coordinator of a new skills training, I would like to take peer feedback one step further and actively involve students in both formative and summative assessment. Rather than making the assessment completely teacher-led, we aim to make it more student-centered (with the teacher still holding final responsibility). Involving students in their assessment has been shown to increase their ability to become empowered, autonomous, engaged, creative learners (Spiller, 2016; Fleishmann, 2016; Peter and Bartholomew, 2016). Students will be involved in formative and summative assessment of parts of each other’s work and will co-construct parts of the assessment criteria against which they will judge that work.

Large student cohorts: the feedback challenge
Speakers: Jane Collings, University of Plymouth; Rebecca Turner, University of Plymouth, UK

At the heart of assessment is the feed-in, feedforward, feedback model (Brown, 2007). This is constantly emphasised in TLS staff sessions, including the workshop ‘Smarter Faster Feedback’. The aim of the workshop is to improve student learning and reduce staff workload when delivering feedback. At the University there are many examples of good feedback practice, including the use of generic feedback, the Dental School delivering same-day oral feedback on all assessments and the Law School conducting post-examination individual feedback sessions. Student evaluative feedback in such programmes is much improved. Web pages with feedback resources have been developed for staff, and a funded project on examination feedback produced a toolkit to assist feedback design (Sutton & Sellick, 2016). However, an outstanding challenge remains: to improve feedforward and feedback in programmes with large student cohorts. Fast turnaround of generic feedback is usually cited as the solution. However, generic feedback is often met with dissatisfaction from students, who request specific feedback with points for improvement. We know we are not alone in facing this challenge. In this round table discussion, we will ask participants to share good practice and work collaboratively to develop solutions.
References
Brown, S. (2007) ‘Feedback and Feed-Forward’, Centre for Biosciences Bulletin, 22 (Autumn 2007).
Sutton, C. & Sellick, J. (2016) Examination Toolkit. University of Plymouth.

Dynamic feedback – a response to a changing teaching environment
Speaker: Martin Barker, University of Aberdeen, UK

The teaching environment is evolving rapidly due to many factors, including increasing student diversity, escalating expectations from students, and a changing student-centered pedagogy. At the same time, emerging technology has opened up a dialogue between academics and their students. Part of the response to these challenges and opportunities is the use of dynamic feedback. This can include an increased emphasis on feedback that is formative and interactive, rather than purely summative and didactic. Various models of dynamic feedback are available, including closing the feedback loop (Pitts, 2003; Quality Assurance Agency for Higher Education, 2006) and assessment dialogues (Carless, 2006). The purpose of this presentation is to strengthen the case for dynamic feedback. I will use examples involving iterative feedback (Barker & Pinard, 2014) and experience of an online formative assessment tool (BluePulse2, e.g. Abertay TLE, 2016). My observations are that students appreciate a chance to clarify the feedback that they receive, and an


opportunity to demonstrate that they have engaged with it. At the same time, their teachers are motivated by the knowledge that their feedback has been understood, or that they can clarify feedback that has been misunderstood. This communication can be instantaneous, so that course adjustments can be made in real time. Students can thus become immediate beneficiaries of dynamic feedback, rather than simply providing feedback as a legacy for future students (who may in any case have different needs). The emphasis of the talk is on practical techniques that can be readily deployed in a changing teaching environment.
References
Abertay TLE. 2016. “Reading your module’s pulse with Bluepulse2 – the tool to capture anonymous student feedback”. Abertay University Teaching and Learning Enhancement. WordPress, December 14. https://abertaytle.wordpress.com/2016/12/14/reading-your-modules-pulsewith-bluepulse-2-the-tool-to-capture-anonymous-student-feedback/
Barker, M. & Pinard, M. A. 2014. “Iterative feedback between tutor and student in coursework assessments.” Assessment & Evaluation in Higher Education 39(8): 899-915.
Carless, D. 2006. “Differing Perceptions in the Feedback Process.” Studies in Higher Education 31(2): 219–233.
Pitts, S. E. 2003. “‘Testing, Testing …’. How Do Students Use Written Feedback?” Active Learning in Higher Education 6(3): 218–229.
Quality Assurance Agency for Higher Education. 2006. “Code of Practice for the Assurance of Academic Quality and Standards in Higher Education”, Section 6: Assessment of Students. Gloucester: The Quality Assurance Agency for Higher Education.

A comparison of response to feedback given to undergraduates in two collaborative formative assessments: wiki and oral presentations
Speaker: Iain MacDonald, University of Cumbria, UK

This study is prompted by increasing opportunities for group formative assessment afforded by virtual learning environments. Two methods of computer-supported collaborative assessment were used by second-year undergraduates – a ‘familiar’ MS PowerPoint presentation and a ‘novel’ wiki, a web communication and collaboration tool. Both were used in a formative assessment context. Using grounded theory, student outcome measures were explored, including response to the feedback given during the two assessments. An online survey and six in-depth student interviews provided data for the study. Findings demonstrated that all 32 students had previous experience of MS PowerPoint; however, the wiki was new to them. Feedback was provided verbally by the tutor for the MS PowerPoint presentation; for the wikis it was written feedback together with peer review. Verbal feedback after presentations was seen as less useful, and frequently not comprehended by students due to anxiety. For the wiki feedback, peer review was valued by the majority of the students and written feedback was useful as it allowed subsequent review. This study demonstrates that feedback can be delivered in alternative forms, taking into account the assessment chosen, and this should be an important factor in deciding the overall approach to delivery of assessment.
References
Parker, K. and Chao, J., 2007. Wiki as a Teaching Tool. Interdisciplinary Journal of Knowledge and Learning Objects, 3, pp. 57-72.
Tanaka, A., Takehara, T. and Yamauchi, H., 2006. Achievement goals in a presentation task: Performance expectancy, achievement goals, state anxiety, and task performance. Learning and Individual Differences, 16(2), pp. 93-99.


12

Evaluation or Research Presentations Session Chair: Professor Pete Boyd

Piccadilly Suite

Feedback, Social Justice, Dialogue and Continuous Assessment: how they can reinforce one another Speaker: Jan McArthur, Lancaster University, UK

This paper builds on previous work on assessment for social justice (McArthur, 2016, 2018) to consider the particular qualities of feedback if one is committed to greater social justice within and through higher education. Drawing on the critical theory of Axel Honneth (2014) and the importance of mutual recognition to social justice, the paper explores the ways in which feedback should engender in students their own sense of control over their abilities and achievements. Such a position is developed from a radical interpretation of feedback as dialogue, which compels an understanding of the importance of students being in dialogue with their own learning in its social context (McArthur & Huxham, 2013). A second feature of this paper is the focus on continuous assessment. Data will be drawn from a multi-partner study being funded by the ESRC/HEFCE as part of the Centre for Global Higher Education. The ‘Understanding Student Knowledge and Agency’ project is a longitudinal study of Chemistry and Chemical Engineering students in the UK and South Africa. This paper will present data from the completed first year of the UK side of the study. Initial analysis suggests that these students make a strong connection between assessment and learning, and this in turn impacts on their feedback attitudes and experiences. Furthermore, a particular feature is that these students undertake a large amount of continuous assessment, in the form of class tests, along with lab reports and larger pieces of work. The challenge of how to give feedback in such cases will be explored. Typical approaches to feedback in these assessment-intensive contexts include the use of devices such as tick box sheets or generic feedback. But responses from the students interviewed for this project suggest that such approaches are unsatisfactory. 
Many of these students have already identified the link between assessment and learning that is at the heart of good feedback, but are let down by formulaic or partial offerings. However, the time demands on their lecturers to provide more detailed feedback in a system of continuous assessment cannot be overlooked. This paper will explore how the practical solution to this dilemma lies in the philosophical commitment to social justice, as already outlined. Strategies for embedding feedback as dialogue and for nurturing in students the capacity to evaluate their own work will be discussed.
References
Honneth, A. (2014). The I in We: Studies in the Theory of Recognition. Cambridge: Polity Press.
McArthur, J. (2016). Assessment for Social Justice: the role of assessment in achieving social justice. Assessment and Evaluation in Higher Education, 41(7), 967-981.
McArthur, J. (2018). Assessment for Social Justice. London: Bloomsbury.
McArthur, J., & Huxham, M. (2013). Feedback Unbound: From Master to Usher. In S. Merry, D. Carless, M. Price & M. Taras (Eds.), Reconceptualising Feedback in Higher Education. London: Routledge.

Assessing, through narratives, the feedback needs of black women academics in higher education
Speaker: Jean Farmer, Stellenbosch University, South Africa

South Africa’s political past means that black women may require different considerations in assessment for post-graduate advancement. The continued underrepresentation of black women in academia provides appropriate conditions for in-depth investigation of a global phenomenon. This paper is based on an autoethnography tracking my socio-cultural-educational trajectory as a black woman, from Apartheid “gutter” education to working and studying in post-1994 South Africa at a previously “whites-only” university. I also employ the narratives of five other black women of similar age in academic positions. In South Africa, more black women have acquired undergraduate degrees


than any other group, yet they remain underrepresented in the acquisition of post-graduate degrees, and hence also in senior academic positions. “[P]olytemporality” gives passage between past and present so that we can make sense of our world (Lather, 2007). Framed theoretically within critical intersectional feminism (Crenshaw, 1989; hooks, 1994), this study uses experiential story-telling and uniquely addresses the question of awareness of experiences of enablers and constraints, perceptions of others, and influences on our sense of agency. This interplay is crucial in determining who we are as academic students and teachers, and the effect this has on advancement in academia (Leibowitz, Garraway and Farmer, 2015). The act of re-collecting past experiences is key to understanding our relationships with these socio-cultural-educational elements. I assigned themes for data analysis to show how the relationship between individual and context is central to understanding the reasons that black women remain on their educational trajectory (or not). The preliminary results of this study point to silences around the need for a caring environment. They indicate an underdeveloped vocabulary for describing the experiences of black women, a lack of a sense of belonging, and silencing and other “microaggressions” inflicted either by ourselves or by others (Sue, 2015; Henkeman, 2016a). Relevant transformational features cannot be adequately addressed, much less achieved, if the spaces to navigate these discussions are not radically owned equally by all.

Building a national resource for feedback improvement
Speaker: David Boud, Deakin University/University of Technology Sydney, Australia

If we are to improve feedback practices we need to move beyond the general impressions of students and staff to understand what specifically is working well and what is not. This was the premise of a national teaching development project (funded by the Office for Learning and Teaching) designed to improve feedback practice across Australian universities, Feedback for Learning: Closing the Assessment Loop. The study recognised that feedback is not the transmission of information by teachers to students, but a process involving many players that can occur prior to or following the submission of formal assessment tasks. The aim of a feedback process is to close the loop and improve students’ performance. The extensive literature on feedback in higher education has resulted in a surfeit of models, frameworks, principles and strategies, with little guidance or research on what works in diverse contexts and how to choose amongst them. The project addressed this need through a large-scale mixed-method study (4,514 students, 406 staff) to identify what strategies are reportedly working, how they operate to be effective, and what conditions support those practices over time. In the project, feedback was defined as “a process in which learners make sense of information about their performance and use it to enhance the quality of their work or learning strategies.” A novel feature was that students were asked to identify a situation in which they received particularly effective feedback during their current program of study. They were asked to elaborate on what the activity was and explain what they thought was effective about it and what enabled it to occur. The research team collated information about which units were the most frequently mentioned and undertook detailed case studies on a selection of them.
The case studies drew on interviews with the key staff involved in each, a selection of student interviews and examination of the formal documentation setting out the feedback processes. The set of seven cases finally published on the project website was chosen to reflect a diversity of feedback practices and disciplines. Most importantly, the set emphasised practices that could be scaled up and used in large classes and with multiple tutors. The empirical data was also deployed to illuminate ideas from the literature and generate a framework for effective feedback which identifies conditions for feedback success and discusses them in terms of three main categories: building capacity for feedback, designing for feedback and establishing a feedback culture. The presentation provides an overview of the major activities and findings of the project, introduces the main resources developed and discusses some of the challenges to be faced in feedback improvement. Full details of the project can be found at: www.feedbackforlearning.org


References
Boud, D., & Molloy, E. (2013). Rethinking models of feedback for learning: the challenge of design. Assessment & Evaluation in Higher Education, 38(6), 698-712.
Hattie, J., & Timperley, H. (2007). The Power of Feedback. Review of Educational Research, 77(1), 81-112.

13

Evaluation or Research Presentations Session Chair: Dr. Nicola Reimann

Meeting Room 3

‘Who Am I?’ Exploration of the healthcare learner ‘self’ within feedback situations using an interpretive phenomenological approach
Speaker: Sara Eastburn, University of Huddersfield, UK

This paper presents the findings of doctoral-level research. This empirical research explored learning through the lens of situated learning theory and utilised this theoretical position to investigate the concept of feedback within healthcare education. The research is framed within the work of Bourdieu (1977), who advocates the socially and culturally constructed nature of learning. Furthermore, it draws on the work of Bandura (1986), who situates learning within the processes of social observation, making a case that the environment, the individual and the social behaviours of those involved in a learning situation affect its outcome. From these theoretical foundations, this research explores healthcare students’ learning from feedback experiences through the particular lens of communities of practice. Adopting an interpretive phenomenological approach, this paper explores the lived experience of feedback through the “voice” of the students engaged in feedback situations. Belonging to a community of practice with a common purpose appears to be a challenge for healthcare learners. In particular, though not exclusively, this paper explores the experiences of Dawn, a pre-registration healthcare student, who highlights the tensions and ill fit between her own learning and the purpose of healthcare delivery. This paper presents insight into the emotional burden of learning from feedback and it highlights the “unengaged alignment” (Wenger, 1998) of Dawn within a community of practice and the distance that she perceives between herself and other members of the community.
Dawn’s use of language will be examined to understand her perception of “fit” and “worthiness” within a healthcare learning community of practice and the impact that this appears to have on how she uses feedback to support her learning. To complement the challenges perceived by the learner, data gathered from university and practice-based educators will also be drawn on. The learner “self” will be considered in relation to learner identity and feedback literacy (Sutton, 2012; Winstone et al., 2016, 2017). This paper will present suggestions as to how the learner “self” might be better enabled to learn from feedback experiences, such as the role that newly qualified educators might adopt in supporting learner engagement within a community of practice and how a questioning approach to feedback might support a learner in developing and taking ownership of their learning from feedback.
References
Bandura, A. (1986). Social Foundations of Thought and Action: A Social Cognitive Theory. Prentice-Hall.
Bourdieu, P. (1977). Outline of a Theory of Practice. Cambridge: Cambridge University Press.
Sutton, P. (2012). Conceptualizing feedback theory: knowing, being and acting. Innovations in Education and Teaching International, 49(1), 34-40.
Wenger, E. (1998). Communities of Practice: Learning, Meaning and Identity. New York: Cambridge University Press.
Winstone, N. E., Nash, R. A., Parker, M., & Rowntree, J. (2017). Supporting Learners' Agentic Engagement With Feedback: A Systematic Review and a Taxonomy of Recipience Processes. Educational Psychologist, 52(1), 17-37.
Winstone, N. E., Nash, R. A., Rowntree, J., & Parker, M. (2016). ‘It'd be useful, but I wouldn't use it’: barriers to university students’ feedback seeking and recipience. Studies in Higher Education, 1-16.


Trainee teachers’ experiences of classroom feedback practices and their motivation to learn
Speaker: Zhengdong Gan, University of Macau, China

Classroom feedback has been shown to be a key mediating factor both in the learning process and in learners’ performance. While different theoretical frameworks have been proposed to characterize the nature of feedback and the conditions under which feedback contributes to student learning, few empirical studies have examined the usefulness of these theoretical frameworks for understanding student feedback experiences in classrooms, particularly in a pre-service context. This article explores what classroom feedback practices trainee teachers experience, and how their feedback experiences relate to their learning motivation. The paper presents the results of questionnaire surveys conducted with 276 pre-service ESL trainee teachers in a BEd English education programme. Confirmatory factor analysis of the questionnaire responses produced four factors of feedback practices: longitudinal-development feedback, activity-based feedback, peer and self feedback, and teacher evaluation, all informed by the constructs of feedback described in the literature. Results showed that the trainee teachers experienced predominantly activity-based feedback and teacher evaluation feedback.
A major pattern of correlation between feedback types and motivational factors was that, among the four types of classroom feedback, activity-based feedback, peer/self feedback and longitudinal-development feedback were significantly positively correlated with the two positive motivational factors, i.e. linguistic self-confidence and attitudes towards the English education course, but were either negligibly or negatively correlated with the remaining negative motivational factor, i.e. L2 classroom anxiety. This suggests that the more the trainee teachers experience these three types of classroom feedback practice, the more likely they are to experience positive motivational processes. The remaining feedback type, teacher evaluation, was significantly positively correlated only with linguistic self-confidence, not with attitudes towards the English education course, suggesting that teacher evaluation feedback may have some positive impact on students’ confidence in a particular piece of work, but may not significantly help to improve students’ attitudes towards the whole course. Multiple regression analyses of the data further showed that, among the four types of feedback practice, peer/self feedback was the most significant positive predictor of linguistic self-confidence and attitudes towards the English education course. Longitudinal-development feedback also significantly positively predicted student attitudes towards the English education course. In summary, the trainee teachers engaged with activity-based feedback and teacher evaluation most often in the English education course. It is likely that this predominance of teacher-directed feedback may result in a culture of over-dependency on teacher feedback among students, at the expense of other student-led feedback practices.
Nevertheless, in keeping with the international rise of student-centred pedagogy and Assessment for Learning policies, students are legitimate sources of feedback. Consequently, an implication of the results is that peer/self feedback and longitudinal-development feedback practices may have the potential to support students to become motivated and self-regulated learners if such feedback practices are contextualised as part of the trainee teachers’ development as pre-service ESL teachers or in consideration of lifelong learning skills.
References
Gan, M. J. S., and J. Hattie. 2014. “Prompting secondary students’ use of criteria, feedback specificity and feedback levels during an investigative task.” Instructional Science 42: 861–878.
Guilloteaux, M. J., and Z. Dörnyei. 2008. “Motivating language learners: a classroom-oriented investigation of the effects of motivational strategies on student motivation.” TESOL Quarterly 42(1): 55-77.
Hattie, J., and H. Timperley. 2007. “The power of feedback.” Review of Educational Research 77(1): 81–112.
Nicol, D. J., and D. Macfarlane-Dick. 2006. “Formative assessment and self-regulated learning: A model and seven principles of good feedback practice.” Studies in Higher Education 31(2): 199–218.
Price, M., Handley, K., Millar, J. and O’Donovan, B. 2010. “Feedback: all that effort, but what is the effect?” Assessment & Evaluation in Higher Education 35(3): 277-289.
Shute, V. J. 2008. “Focus on formative feedback.” Review of Educational Research 78(1): 153–189.

Proposing a model for the incremental development of peer assessment and feedback skills: a case study
Speakers: Laura Costelloe, Dublin City University; Arlene Egan, Dublin City University, Republic of Ireland

Literature suggests that a crucial element of peer assessment is feedback; through giving and receiving feedback, peer assessment works to engage student learning on a deeper level (Liu and Carless, 2006; Topping,


1998). Equally, the ability to give and receive feedback and to critique has been recognised as an important life skill beyond the classroom, applicable to work contexts where 360-degree appraisals are now a prominent part of professional life. Given this reality, learning how to give constructive feedback should be viewed as ‘an essential generic skill’ (Cushing et al., 2011: 105). This presentation reports on a model of an incremental trajectory for building confidence and competence in peer assessment and feedback among Higher Education learners. The model was developed from a case study of a postgraduate programme in an Irish Higher Education context. Arising from a small-scale study incorporating a combination of student feedback and teacher observations, and informed by relevant literature (for more detail on the methodology underpinning the development of the model see Egan and Costelloe, 2016), the model recognises that giving and receiving peer feedback is not an innate skill and that learners require a scaffolded approach to develop the requisite skills (Adachi et al., 2018; Cassidy, 2006). This presentation focuses specifically on the ‘peer feedback’ component of the proposed model and outlines how the model might support incremental skill development, particularly (i) the ability to assess others, (ii) the ability to give and receive feedback and (iii) the ability to make judgements. The model suggests that learners should first become comfortable engaging in self-assessment tasks, which should incorporate a form of feedback from a more competent other. From here, combined self-assessment and peer-assessment should commence, to allow the learner to understand how elements of assessment and feedback may be perceived differently by a peer. Following this, group-to-group peer assessment and feedback is encouraged, as this can enhance confidence in judgement and in the communication of feedback. From this point, one-to-one and one-to-group peer assessment and feedback can commence.
We argue that such an approach encourages the use of peer assessment as and for learning, whereby students are gradually scaffolded, through various formative “low stakes” assessment tasks and activities, to develop the ability to provide formative and constructive peer feedback. We also briefly consider how this model of incremental skill development might facilitate the development of generic skills which are required for the modern workplace. While the model requires further testing and validation, it offers practitioners a pathway for the incremental development of peer assessment and feedback skills.
References
Adachi, C., Hong-Meng Tai, J. and Dawson, P. (2018) ‘Academics’ perceptions of the benefits and challenges of self and peer assessment in higher education’, Assessment & Evaluation in Higher Education, 43(2): 294-306.
Cassidy, S. (2006) ‘Developing employability skills: peer assessment in higher education’, Education + Training, 48(7): 508-517.
Cushing, A., Abbott, S., Lothian, D., Hall, A. and Westwood, M.R. (2011) ‘Peer feedback as an aid to learning – what do we want? Feedback? When do we want it? Now!’, Medical Teacher, 33(2): e105-e112.
Egan, A. and Costelloe, L. (2016) ‘Peer assessment of, for and as learning: a core component of an accredited professional development course for Higher Education teachers’, AISHE-J: The All-Ireland Journal of Teaching and Learning in Higher Education, 8(3): 2931-29313.
Liu, N. and Carless, D. (2006) ‘Peer feedback: the learning element of peer assessment’, Teaching in Higher Education, 11(3): 279-290.
Topping, K. (1998) ‘Peer assessment between students in colleges and universities’, Review of Educational Research, 68(3): 249-276.

14

Evaluation or Research Presentations Session Chair: Professor Sally Jordan

Meeting Room 5

Feedback – what students want Speaker: Susanne Voelkel, University of Liverpool, UK

Numerous publications discuss the best way to provide feedback to students (e.g. Nicol and Macfarlane-Dick, 2006). We know that feedback should be timely, that it should clarify what is expected of students and that it should tell them what they need to do to improve their work. However, we still seem to be unsure how to put this advice into practice in the context of our own disciplines. Although many staff working hours are spent writing feedback on assignments, every year the NSS results and other student evaluations give an indication that students are not happy with the


feedback we provide. This study therefore aimed to improve the quality and consistency of written feedback to students in the School of Life Sciences (University of Liverpool). The study was funded by a University of Liverpool Learning and Teaching Fellowship (awarded to SV) and ethics approval was granted by the University’s ethics committee. As part of the project, we wanted to identify which attributes, if any, are common to ‘good’ feedback, i.e. feedback that students find useful. We focused on written assignments from two large year 2 and year 3 UG modules and asked students to send us their feedback if they thought it was helpful. In addition, we asked them to complete a short questionnaire about the feedback. We received 26 and 27 responses from year 2 and year 3 students, respectively. The type and depth of the feedback comments were then analysed following Glover and Brown (2006). Surprisingly, we found that ‘good’ feedback had relatively few common characteristics. For example, the quantity of the ‘good’ feedback varied, as did the percentage of motivational comments (praise). The questionnaires revealed two strong themes, though: 87% of respondents said that good feedback is ‘specific’ and 96% said it is ‘useful for future assignments’. We then performed semi-structured interviews with 7 students, focusing on students’ perception of feedback and allowing us to explore the themes uncovered by the questionnaires. Finally, we conducted a nominal focus group with 6 students (Varga-Atkins et al., 2015). The focus group covered ‘what does your ideal feedback look like?’ and ‘your top wish-list for written feedback’. The interviews and focus group were recorded and transcribed. Transcripts were analysed using thematic analysis (Braun and Clarke, 2006). The results clearly show students’ top three priorities for ‘good feedback’: 1. justification of the marks, 2. how to improve, 3. positive reinforcement.
Overall, feedback should be detailed and specific as well as honest and constructive. The results of the study allowed us to design a staff guidance booklet with concrete feedback examples and straightforward instructions that are all the more powerful because they come directly from the students’ own mouths.
References
Boud, D. & Falchikov, N. (2006). Aligning assessment with long-term learning. Assessment & Evaluation in Higher Education, 31(4), 399–413.
Gourlay, L. (2009). Threshold practices: becoming a student through academic literacies. London Review of Education, 7(2), 181–192.
Hounsell, D. (2007). Towards more sustainable feedback to students. In Falchikov, N. & Boud, D. (Eds.), Rethinking Assessment in Higher Education, 101-113.
Richardson, S. (2000). Students’ conditioned response to teachers’ response: portfolio proponents, take note! Assessing Writing, 7(2), 117-141.
Sadler, D.R. (2010). Beyond feedback: developing students’ ability in complex appraisal. Assessment and Evaluation in Higher Education, 35(5), 535-550.
Sambell, K., McDowell, L. & Sambell, A. (2006). Supporting diverse students: developing learner autonomy via assessment. In Bryan, C. & Clegg, S. (Eds.), Innovative Assessment in Higher Education, 158-167.

Developing as a peer reviewer: Enhancing students’ graduate attributes
Speakers: Rachel Simpson, Durham University; Catherine Reading, Durham University, UK

This presentation will explore students’ use of peer review as a formative assessment strategy in Higher Education. It aims to align with research in this area (Cho & MacArthur, 2011; Nicol, Thomson & Breslin, 2014) by moving beyond the benefits of receiving feedback and towards exploring the value of producing and giving feedback to peers. With a focus on sustainable learning, a key question for consideration is: can developing as a peer reviewer enhance students’ graduate attributes?
Against the backdrop of students’ dissatisfaction with feedback in Higher Education, an action research project was undertaken in 2016-17, involving university tutors and seventy-four first-year students on a BA Education university course in North East England. The
project aimed for the students to develop the skill of producing evaluative judgements about both their peers’ academic work and aspects of teaching practice during a professional work placement. Students’ written records and reflections were analysed, alongside the outcomes of semi-structured interviews conducted with a focus group of six students. Findings and analyses of individuals’ responses to a series of scaffolded peer review tasks indicated that reviewing peers’ work may contribute towards the development of some graduate or professional attributes. First, when producing feedback, students formed evaluative judgements based on their understanding of quality, as discussed by Sadler (2010), and then used these judgements to improve their own academic work and professional teaching skills. Second, the professional attribute of being able to communicate feedback that was timely, relevant, accurate and understood by the recipient was also evident. There were also strong indications of increases in students’ resilience when receiving feedback. Such attributes could be considered paramount to these students’ future careers as active members of professional learning communities (Stoll & Louis, 2007). However, in line with increasing student diversity in Higher Education (Bryson, 2014), students demonstrated different responses to becoming peer reviewers. Individuals faced a range of challenges as they became peer reviewers, primarily a lack of the subject knowledge, skills and confidence needed to produce and communicate accurate reviews. There were also indications that proof of its success was needed before further investment would be made. Although not conclusive, this study provides insight into the introduction of a student-centred feedback system, its potential to achieve some of the graduate aims of universities, and the diversity of developments and challenges experienced by students as peer reviewers. 
Implications for future research and practice will be considered, including the balance between student autonomy and tutor input. Examples of the peer review tasks will be given during the presentation, to demonstrate some of the eight principles for peer review design identified by Nicol (2014), and the scaffolded approach which gradually built up students’ confidence and skills to become peer reviewers. Alongside this, the significance of involving the students in understanding the process of peer review from the beginning of their course will be discussed, aiming for active and informed participants in this formative feedback process from the outset. References Bryson, C. (2014). Understanding and Developing Student Engagement. Oxon: Routledge. Cho, K. & MacArthur, C. (2011). Learning by reviewing. Journal of Educational Psychology, 103, 73-84. Nicol, D. (2014). Guiding principles for peer review: Unlocking learners’ evaluative skills. In C. Kreber, C. Anderson, N. Entwistle & J. McArthur (Eds.), Advances and Innovations in University Assessment and Feedback (pp. 197-224). Edinburgh: Edinburgh University Press. Nicol, D., Thomson, A., & Breslin, C. (2014). Rethinking feedback in higher education: A peer review perspective. Assessment and Evaluation in Higher Education, 39, 102-122. Sadler, R. (2010). Beyond feedback: Developing student capability in complex appraisal. Assessment & Evaluation in Higher Education, 35, 535–550. Stoll, L., & Louis, K. (2007). Professional Learning Communities: Divergence, depth and dilemmas. England: McGraw-Hill. Does screencast feedback improve student engagement in their learning? Speaker: Subhi Ashour, The University of Buckingham, UK Good quality feedback, delivered in a timely manner, is considered an essential component of learning and teaching in higher education (Chickering & Gamson, 1989; Reitbauer et al, 2013). Such feedback is expected to enhance and promote learning. 
However, concerns have been raised about the value of feedback following the noticeably low levels of student engagement with feedback reported in diverse contexts (Ali et al, 2017; Bailey & Garner, 2010; Crisp, 2007). These concerns are shared in my teaching context. To address them, an action research project was designed to explore whether a multimedia screencast-based approach to feedback is likely to improve student engagement with feedback in particular and their learning in general. The project focused on foundation level students from two main streams: Business and Law. An intervention was introduced in which a randomly selected group of students received screencast feedback instead of the traditional written feedback for a whole academic term. Semi-structured interviews were conducted with the students who received the new type of feedback in order to understand their views and perceptions of screencast feedback and to determine whether, from their own perspectives, their engagement in their learning had improved. The findings from the interviews reveal a predominantly positive attitude towards screencast feedback compared to previous types of feedback. Participants identified positive aspects of screencast feedback, such as reassurance that their assessed work does matter to their tutor and being able to understand feedback better. Unlike other types of feedback the participants had experienced, screencast feedback made it possible for them to act upon the suggestions and recommendations it contained, such as avoiding a mistake in a future assignment or improving an aspect of their work as suggested in the feedback. Despite the advantages, consideration needs to be given to the sustainability of this type of feedback. Tutors involved in this project felt that, when under pressure, they were more likely to revert to traditional types of feedback in order to avoid losing time to technical issues associated with screencast feedback. Some student participants also made the point that they were worried about losing the face-to-face feedback opportunity if screencast feedback were to be rolled out. References Ali, N., Ahmed, L. & Rose, S. (2017). Identifying predictors of students' perception of and engagement with assessment feedback. Active Learning in Higher Education. Bailey, R. & Garner, M. (2010). Is the feedback in higher education assessment worth the paper it is written on? Teachers' reflections on their practices. Teaching in Higher Education, 15(2), 187-198. Chickering, A. W. & Gamson, Z. F. (1989). Seven principles for good practice in undergraduate education. Biochemical Education, 17(3), 140-141. Crisp, B.R. (2007). Is it worth the effort? How feedback influences students’ subsequent submission of assessable work. Assessment & Evaluation in Higher Education, 32(5), 571–81. Reitbauer, M., Schumm Fauster, J., Mercer, S., & Campbell, N. (2013). Feedback Matters: Current Feedback Practices in the EFL Classroom. Frankfurt am Main: Peter Lang.


Evaluation or Research Presentations Session Chair: Dr. Amanda Chapman

Meeting Room 7

Feedback literacy in online learning environments: Engaging students with feedback Speakers: Teresa Guasch, Open University of Catalonia; Anna Espasa, Open University of Catalonia; Rosa M. Mayordomo, Open University of Catalonia, Catalonia We share the view that feedback is only feedback if students take some action on it (Boud and Molloy, 2013; Carless, 2015). However, research shows that teachers invest a great deal of time and effort in providing feedback, yet students do not know how to implement it or what to do with it (O’Donovan, Rust and Price, 2016; Sutton, 2012; Winstone et al, 2017). In previous studies we have provided evidence on the characteristics of feedback in online learning environments, where communication is mainly written and asynchronous, as well as on the effect of feedback type on learning. More recently, we have reported on the factors that influence the choice of feedback medium (audio, video and written) and its relation to teachers’ workload. We therefore have evidence on the feedback processes necessary for designing feedback in this specific environment. But we have identified that, in general, students do not know what they should do with the feedback they receive. The literature identifies different reasons for this, e.g. unclear feedback, differing student expectations, students untrained in the self-regulation of learning, the moment feedback is provided, misconceptions about feedback, or students’ level of feedback literacy. This 2-year project seeks to answer: Why do students not use, implement, or make decisions about the feedback received in online learning environments? What strategies would engage students to use, implement and make decisions about (in short, be more active with) the feedback received? What strategies are sustainable in massive contexts to promote feedback literacy? To what extent do different strategies of feedback literacy influence student engagement? References Boud, D. & Molloy, E. (2013). 
Rethinking models of feedback for learning: the challenge of design, Assessment & Evaluation in Higher Education, 38 (6), 698-712. doi: 10.1080/02602938.2012.691462 Carless, D. (2015). Excellence in University Assessment. Learning from award-winning practice. London: Routledge. O’Donovan, B., Rust, C., & Price, M. (2016). A scholarly approach to solving the feedback dilemma in practice. Assessment & Evaluation in Higher Education. 41 (6), 938–949. doi: 10.1080/02602938.2015.1052774 Sutton, P. (2012). Conceptualizing feedback literacy: knowing, being, and acting, Innovations in Education and Teaching International, 49 (1), 31-40, doi: 10.1080/14703297.2012.647781


Winstone, N.E., Nash, R.A., Parker, M., & Rowntree, R. (2017) Supporting Learners’ Agentic Engagement With Feedback: A Systematic Review and a Taxonomy of Recipience Processes. Educational Psychologist, 52(1), 17–37. doi:10.1080/00461520.2016.1207538 Teaching staff’s views about e-assessment with e-authentication Speakers: Alexandra Okada, Denise Whitelock, Ingrid Noguera, Jose Janssen, Tarja Ladonlahti, Anna Rozeva, Lyubka Alexieva, Serpil Kocdar, Ana-Elena Guerrero-Roldán, The Open University UK Checking the identity of students and the authorship of their online submissions is a major concern in Higher Education due to the increasing amount of plagiarism and cheating using the Internet. Currently, various approaches to identity and authorship verification in e-assessment are being investigated. However, the literature is very limited on the effects of e-authentication systems for teaching staff because it is not a widespread practice at the moment. A considerable gap lies in understanding teaching staff’s views, concerns and practices regarding the use of e-authentication instruments and how these impact their trust in e-assessment. To address this gap, this study presents cross-national data on distance education teaching staff’s views on e-assessment using authentication and authorship verification technology. This investigation focuses on the implementation and use of an adaptive trust-based e-assessment system known as TeSLA - an Adaptive Trust-based e-authentication System for e-Assessment. Currently, TeSLA is developed by an EU-funded project involving 18 partners across 13 countries. TeSLA combines three types of instruments to enable reliable e-assessments: biometric, textual analysis and security. The biometric instruments refer to facial recognition, voice recognition, and keystroke dynamics. The textual analysis instruments include anti-plagiarism and forensic analysis (writing style) to verify the authorship of written documents. 
The security instruments comprise digital signature and time-stamp. This qualitative study examines the opinions and concerns of teaching staff who used the TeSLA instruments. Data include pre- and post-questionnaires and focus group sessions in six countries: the UK, Catalonia, the Netherlands, Bulgaria, Finland, and Turkey. The findings revealed some technological and pedagogical issues related to accessibility, security, privacy, e-assessment design, and feedback. References Apampa, K., Wills, G. & Argles, D. (2010). User Security Issues in Summative e-assessment. International Journal of Digital Society (IJDS). Ardid, M., Gomez-Tejedor, J., Meseguer-Duenas, J., Riera, J., & Vidaurre, A. (2015). Online exams for blended assessment. Study of different application methodologies. Computers & Education, 81, 296-303. Baneres, D., Baró, X., Guerrero-Roldán, A. & Rodríguez, M. (2016). Adaptive e-assessment system: A general approach. International Journal of Emerging Technologies in Learning. IPPHEAE (2013). Impact of Policies for Plagiarism in Higher Education. http://plagiarism.cz/ippheae/ Okada, A., Mendonca, M. & Scott, P. (2015). Effective web videoconferencing for proctoring online oral exams: a case study at scale in Brazil. Open Praxis, 7(3), 227-242. Whitelock, D. (2010). Activating Assessment for Learning: are we on the way with Web 2.0? In M. J. W. Lee & C. McLoughlin (Eds.), Web 2.0-Based E-Learning: Applying Social Informatics for Tertiary Teaching (pp. 319-342). IGI Global.

Students’ responses to learning-oriented exemplars: towards sustainable feedback in the first year experience? Speakers: Kay Sambell, Edinburgh Napier University; Linda Graham, Northumbria University; Peter Beven, Northumbria University, UK This paper presents findings from a three-year action-research project that focused on enhancing first-year students’ engagement with exemplars. The exemplars were explicitly designed as pedagogic tools located within the taught curriculum (Boud and Molloy, 2013) to engage students proactively in feedback processes based on formative activities around their developing subject knowledge. Despite the large numbers of students (n=100+ per iteration), activities were carefully designed to offer dialogic exemplars-based opportunities (To and Carless, 2016) for students to make sense of information from varied sources, which they could then use to enhance the quality of their work and learning strategies, as opposed to providing students with conventional teacher-directed feedback/feedforward comments. Importantly, the selected samples (representing a quality range) took the form of formative writing-to-learn exercises, in marked contrast with the more conventional use of samples drawn from summative tasks. This had a major bearing on the teachers’ confidence in focusing the samples and associated dialogic activities on feedback processes that were directly linked to evaluating relevant subject knowledge in advance of summative testing. Students were required, in preparation for the workshop, to undertake the same task, and activities based on comparing their work with the samples formed the basis of a two-hour teaching session. In effect, then, this positioned the exemplars as pre-emptive formative assessment (Carless, 2007), contrasting strongly with the approach reported in many exemplars-based studies, where samples are often selected for student analysis and discussion to represent the genre of the imminent summative assessment, but on different topic areas, for fear of imitation or of stifling students’ creativity (Hawe et al., 2016). In sympathy with sustainable feedback (Carless et al., 2011), the teachers’ focus throughout was on trying to develop first-year students’ evaluative judgement within the discipline (Tai et al., 2017). Findings, based on surveys, interviews and reflective diaries, revealed, however, that in the early iterations of the three-year action research cycle over half of the students became confused and distracted by what Hawe et al. (2017) refer to as the ‘mechanics’ of assessment. Thus, in the final cycle the same activities were reframed to promote the development of evaluative judgement even more explicitly, by involving students in the prior co-construction of rubrics. Results revealed dramatic changes in the ways in which students interpreted the same activities, given this further shift towards pedagogic discourse (Tai et al., 2017). 
These will be reported and linked to the literature, offering critical insights into the growing body of work on students’ responses to exemplars and sustainable feedback processes. References Boud, D. & Molloy, E. (2013). Rethinking models of feedback for learning: the challenge of design. Assessment & Evaluation in Higher Education, 38(6), 698–712. Carless, D. (2007). Conceptualizing Pre-emptive Formative Assessment. Assessment in Education: Principles, Policy & Practice, 14(2), 171–184. Carless, D., Salter, D., Yang, M., & Lam, J. (2011). Developing sustainable feedback practices. Studies in Higher Education, 36(4), 395-407. Hawe, E., Lightfoot, U. & Dixon, H. (2017). First-year students working with exemplars: promoting self-efficacy, self-monitoring and self-regulation. Journal of Further and Higher Education, 1–15. Tai, J., Ajjawi, R., Boud, D., Dawson, P., & Panadero, E. (2017). Developing evaluative judgement: Enabling students to make decisions about the quality of work. Higher Education, 1-15. To, J. & Carless, D. (2016). Making Productive Use of Exemplars: Peer Discussion and Teacher Guidance for Positive Transfer of Strategies. Journal of Further and Higher Education, 40(6), 746–764.


Round Table Presentations Session Chair: Professor Mark Huxham

Meeting Room 11

Transforming Written Corrective Feedback: Moving Beyond the Learner’s Own Devices Speaker: Britney Paris, University of Calgary, Canada Written Corrective Feedback (WCF) is a means of providing language learners with feedback on their writing, with the aim of improving written proficiency, and is commonly used across disciplines in higher education. However, if learners are left to their own devices, without any instruction on how to make the best use of the feedback, the feedback has little effect on their writing proficiency (Elwood & Bode, 2014; Ferris, 1995). To address this problem, I have developed an online learning module which guides learners through strategies for effectively making use of the WCF they receive in a variety of contexts. The aim of this round table session is to share this online resource and to gain feedback in order to improve the tool for broadened use in higher education. References Elwood, J. A., & Bode, J. (2014). Student preferences vis-à-vis teacher feedback in university EFL writing classes in Japan. System, 42(1). http://doi.org/10.1016/j.system.2013.12.023 Ferris, D. R. (1995). Student reactions to teacher response in multiple-draft composition classrooms. TESOL Quarterly, 29(1), 33–53. http://doi.org/10.2307/3587804


Increasing assessment literacy on MA translation modules to improve students’ understanding of and confidence in assessment processes Speaker: Juliet Vine, University of Westminster, UK

Assessment is of importance to translators, not only as students but also throughout their careers. Translators need to be able to self-assess their work, as well as to assess the work of peers in professional contexts. Therefore, as part of their MA studies, students are often asked to assess their own and their peers’ work, but they are rarely offered the opportunity to think explicitly about the assessment criteria or the level descriptors that are being applied. In recent research into students’ attitudes to assessment at our University, we discovered that students expressed concerns about the levels of subjectivity involved in the assessment process (Huertas Barros & Vine, 2017). This lack of confidence in the assessment process has been linked to low levels of assessment literacy (Elkington, 2016). In order to increase assessment literacy, and therefore equip students with the assessment skills they will need as students and professionals and improve their confidence in the assessment process, I devised and critically evaluated a learning activity which focuses on helping students understand and use the assessment criteria. References Elkington, S. (2016). HEA Transforming Assessment in Higher Education Summit 2016 Final Report. Higher Education Academy. Retrieved July 20, 2017 from https://www.heacademy.ac.uk/system/files/downloads/hea_assessment_summit_2016_0.pdf Huertas Barros, E., & Vine, J. (2017). Current trends on MA translation courses in the UK: changing assessment practices on core translation modules. The Interpreter and Translator Trainer (ITT). Special Issue ‘New Perspectives in Assessment in Translator Training’, 12(1). 
Developing an identity as a knowing person: examining the role of feedback in the Recognition of Prior Learning (RPL) Speaker: Helen Pokorny, University of Westminster, UK This feedback case study is located within a joint venture between a University and a College of Further Education with an explicit mission to promote part-time education. It provides a review and evaluation of the successful development of an undergraduate programme in leadership and professional development, two-thirds of which is awarded through the Recognition of Prior Learning (RPL), thus reducing the cost to the students. This cost reduction is important. Figures from the Higher Education Statistics Agency show that part-time student numbers in England have fallen by 56% since 2010, with the most rapid decline taking place after the government raised the cap on part-time fees to £6,750 per year in 2012, doubling or tripling the cost of many courses (Fazackerley, 2017). RPL has its roots in the widening participation and social justice initiatives of the early 1990s but has not achieved mainstream status in the UK, despite most universities having RPL processes enshrined within their regulatory frameworks. Harris’s (2000) observation that the onus was on the RPL student to take the initiative and to negotiate a process she described as a lone one would still hold true for many students attempting to access this process in universities today. This view of the isolated learner is at odds with the thrust in mainstream higher education to develop learning communities and to promote feedback environments rich in peer dialogue (Boud and Molloy, 2013). In this case study feedback was key to making the RPL process both welcoming and transparent. The programme provides a rich feedback environment, which in turn provides a strong sense of identity for learners. 
This supports Whittaker et al.’s (2006) argument that RPL has the potential to alter social identities in a transformative sense through the recognition participants can get from others as well as from assessors. Evaluation feedback from the students is consistently positive, and highlights the key roles played by the tutors and the peer group in navigating the demands of the process and maintaining motivation when the process felt overwhelming. This sense of belonging and transformation was achieved through the development of the feedback tasks and activities provided, which allowed the tutors to deal with images of “otherness” in learners’ minds of what being a student means (me/not like me), and, most importantly for RPL, through the feedback-rich dialogic environment it was possible to create: a community of learners developing together, for assessment, images of what knowledge is and where it comes from. References Boud, D. and Molloy, E. (2013) ‘Rethinking models of feedback for learning: the challenge of design’, Assessment and Evaluation in Higher Education, 38(6): 698-712. Fazackerley, A. (2017) ‘Part-time student numbers collapse by 56% in five years’, Guardian online, 2nd May https://www.theguardian.com/education/2017/may/02/part-time-student-numbers-collapseuniversities. Harris, J. (2000) RPL: Power, Pedagogy and Possibility, Conceptual and Implementation Guides, Pretoria: HSRC Press. Whittaker, S., Whittaker, R. and Cleary, P. (2006) ‘Understanding the transformative dimensions of RPL’ in P. Andersson & J. Harris (Eds) Re-theorising the Recognition of Prior Learning, Leicester: NIACE, pp. 301-321. Students as partners in co-creating a new module: Focusing on assessment criteria Speaker: Maria Kambouri-Danos, University of Reading, UK The project described here focused on developing staff-student partnerships, with the objective of engaging students as partners in co-designing assessment and criteria for the new module. In more detail, the project aims to a) go beyond student feedback by listening to the ‘student voice’ while developing the curriculum, leading to a more inclusive experience, and b) co-develop effective and student-friendly assessment criteria by engaging a diverse group of students, leading to a more inclusive pedagogy. The new module is part of a work-based programme, as part of which students are required to work in a relevant workplace for at least two and a half days per week. Most students are mature students with family responsibilities who are working and studying full-time. Due to the cohort’s particular characteristics, engaging these students with the University has been challenging in the past. 
Through this project it was essential to engage students in four partnership workshops during which staff and students 1) discussed the aims of the project and reviewed existing modules, 2) explored some of the available literature in order to stimulate discussions about designing assessment, 3) finalised details of the assessment design and discussed assessment-related vocabulary, and 4) reflected on the whole process. Students were asked to complete a short survey at the beginning and at the end of the project, aiming to identify students’ perspectives/attitudes towards student engagement in curriculum design before and after taking part in the partnership sessions. The results indicate that actively engaging students in such activities helps to promote a sense of belonging and to create and sustain positive staff-student partnerships. References Black, P. & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy and Practice, 5(1), 7-74. Available at http://search.proquest.com.idpproxy.reading.ac.uk/docview/204052267?accountid=13460 Bloom, B. S., Hastings, J. T. & Madaus, G. F. (1971). Handbook on formative and summative evaluation of student learning. New York: McGraw-Hill. Bloxham, S. & Boyd, P. (2007). Developing effective assessment in higher education: a practical guide. Maidenhead: Open University Press. Coates, H. (2005). The value of student engagement for higher education quality assurance. 11(1), 25-36. Fullan, M. & Scott, G. (2009). Turnaround leadership for higher education. San Francisco: Jossey-Bass. Harper, S.R. & Quaye, S.J. (Eds.) (2009). Student engagement in higher education: Theoretical perspectives and practical approaches for diverse populations. New York and London: Routledge. Race, P., Brown, S. & Smith, B. D. (2005). 500 tips on assessment. 2nd ed. London: Routledge. 
Assessment - Evaluating the Workload on Staff and Students Speakers: Mark Glynn, Dublin City University; Clare Gormley, Dublin City University; Laura Costelloe, Dublin City University, Republic of Ireland The session will discuss the analysis of assessment workload for staff and students and our journey beyond the surface of obvious metrics to reveal the real impact of assessment on the student experience. It has been pointed out that ‘assessment is probably the most important determinant of the character of a course and of student learning within the course’ (Scott, 2015: 700). Indeed, as Biggs and Tang (2011: 196) point out, ‘assessment is the senior partner in learning and teaching. Get it wrong and the rest collapses’. Thus, mindful of assessment as a key ‘driver’ of student learning (Race, 2007) and of the importance of better understanding student and staff experiences of assessment, this session will report on the outcomes of a research project at Dublin City University which sought to profile summative assessment workloads within programmes, examining the assessment experience of students and staff within programmes and across disciplines. This study aims to investigate the extent of the assessment overload issue for both staff and students and is being undertaken to encourage programme teams to adopt a programmatic approach to assessment, including taking data-informed decisions to enhance the student experience. Hearing Voices: First Year Undergraduate Experience of Audio Feedback Speaker: Stephen Dixon, Newman University, UK Recent changes to the UK higher education sector, including the growth and diversification of the student body, greater modularisation with fewer coursework assignments, and less staff-student contact time, have presented numerous challenges. The parallel rise in the use of digital technologies in professional practice can often be seen to exacerbate the perceived dehumanising effect of this massification. The focus of this short presentation centres on the use of one such technology – that of digital audio feedback with first year undergraduates at Newman University, Birmingham. 
Drawing on the findings of a longitudinal phenomenological study conducted for a doctorate, the presentation will stress the importance of moving beyond any technologically deterministic view, and the need to set any understanding in the wider context of students’ own interpretation of the feedback process. Whilst the use of audio feedback is seen to alleviate the failures of communication often identified in feedback, the findings are also significant in terms of means of access, use of learning technologies, dialogic perception, and studentship and engagement. In particular, the use of audio feedback is seen as facilitating a shift from statement to discourse and the possibility of establishing more meaningful learning relationships with students.

