7th International Assessment in Higher Education Conference Programme

7TH INTERNATIONAL ASSESSMENT IN HIGHER EDUCATION CONFERENCE

26 & 27 June 2019, Manchester, UK

Conference Programme

AHE: Leading Assessment for Learning in Higher Education



Welcome Colleagues

On behalf of the executive committee, welcome to Manchester for the biennial two-day Assessment in Higher Education (AHE) conference of 2019. We know that assessment is a powerful driver of student approaches to learning. In the form of low-stakes formative assessment with thoughtful, supportive feedback, this drive can work towards powerful learning for students. However, this year our two keynote speakers consider some of the other challenges that face us in relation to this powerful influence of assessment. Bruce Macfarlane argues that student performativity in our age of measurement may be crushing the space and element of risk that learners require in order to thrive. He argues that students need 'freedom to learn'. Phill Dawson tackles head on the issue of student cheating. He considers the 'assessment conservatism' responses of some universities and suggests alternatives.

The master classes at the conference are facilitated by internationally leading academic developers and researchers. The presentations of research, innovation and evaluation of practice cover a wide range of assessment-related issues, including: assessment for learning; peer and self-assessment; academic thinking, speaking and writing; academic integrity and cheating; using technology in assessment and feedback; writing, speaking and engaging with feedback and feedforward; design, learning outcomes and freedom to learn; authentic assessment; assessment strategies; leading change in assessment practice; equity, inclusion and assessment accommodation; technologies and assessment; academic standards, using exemplars, grading and social moderation; and other distinctive individual studies.

Many programme teaching teams work hard to develop dialogic feedback so that it more effectively supports student learning. At the AHE conference we are equally keen to provoke dialogue, for learning and for networking. We plan for 50% discussion time and chair the conference to avoid speakers running over. It is traditional in the north of England to talk to people in informal situations, such as at the bus stop or on a train, and we try to create this kind of friendly, informal atmosphere at the conference. Please make an effort at every opportunity during the conference to say hello and talk to colleagues between sessions, in the queue for coffee or lunch, at the exhibitor stalls, at dinner and in the bar. The committee members will work hard to welcome you and to introduce you to colleagues. The AHE conference has a reputation for sharing high-quality research and innovation in assessment, but the committee is also keen to maintain its reputation as a friendly and dialogic conference.

Within this conference programme, please check the information on the post-conference webinar (see page 162), as well as looking ahead to the one-day AHE conference planned for 2020 (page 163) and the call for papers for a special assessment issue of the Practitioner Research in Higher Education journal (page 165). We hope you will enjoy the conference, learn something new, contribute to new thinking and make connections for future collaboration.

Pete Boyd, Conference Chair, pete.boyd@cumbria.ac.uk
Linda Shore, AHE Event Manager, linda.shore@cumbria.ac.uk



Delegate Information

AHE Executive Committee
Pete Boyd (Conference Chair), University of Cumbria
Amanda Chapman, University of Cumbria
Jess Evans, The Open University
Linda Graham, University of Sunderland
Peter Holgate, Northumbria University
Mark Huxham, Edinburgh Napier University
Rita Headington, Educational Consultant
Geraldine O'Neil, University College Dublin
Natasha Jankowski, University of Illinois
Sally Jordan, The Open University
Nicola Reimann, Durham University
Kay Sambell (President), Edinburgh Napier University
Linda Shore (Events Manager), University of Cumbria
Rebecca Westrup, University of East Anglia

Networking
We hope the conference will provide you with an excellent opportunity to make connections and discover shared interests in higher education assessment with colleagues from across the UK and beyond.

Evaluation and commentary
We actively encourage you to make use of the conference hashtag #AssessmentHEconf on Twitter to share ideas, respond to sessions, ask questions and make connections. There will be an online evaluation after the conference, but please feel free to share any comments or suggestions with members of the AHE Executive Committee whilst you are here.

Interesting places to visit
Manchester is a vibrant city with many interesting places to visit. Go to http://www.visitmanchester.com/ for plenty of ideas.

Wi-Fi Access
For Wi-Fi access at The MacDonald Manchester Hotel, go to 'Wi-Fi settings' on your laptop or mobile device and select 'MacDonald WiFi'; this will enable you to access the service.

AHE Registration Desk
For assistance during the conference, please visit the AHE Registration Desk, located to the right of the entrance by the MacDonald Manchester Hotel Main Reception.


AHE Conference Programme Summary

DAY 1
08.30  Registration & Refreshments (Hotel Reception Foyer)
09.30  Pre-conference Master Classes
11.20  Parallel Session 1
12.00  Parallel Session 2
12.30  Lunch (Steak House Restaurant)
13.30  Welcome (Piccadilly Suite)
13.40  Keynote Speaker (Piccadilly Suite)
14.45  Parallel Session 3
15.15  Refreshments
15.40  Poster & Pitch and Round Table Presentations
16.50  Parallel Session 4
17.30  Parallel Session 5
18.00  Close
19.30  Drinks Reception (Piccadilly Suite)
20.00  Conference Dinner (Piccadilly Suite)

DAY 2
08.30  Registration & Refreshments (Hotel Reception Foyer)
09.30  Parallel Session 6
10.10  Parallel Session 7
10.40  Refreshments
11.00  Micro Presentations
12.10  Parallel Session 8
12.40  Lunch (Steak House Restaurant)
13.30  Parallel Session 9
14.10  Keynote Speaker (Piccadilly Suite)
15.15  Plenary and Poster & Pitch Award (Piccadilly Suite)
15.30  Refreshments
15.45  Close


Keynote Address
Professor Bruce Macfarlane, University of Bristol, UK

Assessment, student performativity and the freedom to learn

The student engagement movement has become a worldwide phenomenon and national student engagement surveys are now well established internationally. Curriculum initiatives and assessment practices closely associated with student engagement policies include compulsory attendance requirements, class contribution grading, group and team working assignments, and reflective exercises often linked to professional and experiential learning. These types of assessment practices often grade students for their 'time and effort' and commitment to active and participatory approaches to learning. They are justified by reference to active learning as a new pedagogic orthodoxy, along with the improvement of retention rates and achievement levels at an institutional level. However, many of these assessment practices constrain the extent to which higher education students are free to make choices about what to learn, when to learn and how to learn. Forms of student performativity – bodily, participative and emotional – have been created that demand academic non-achievements be acted out in a public space. A higher education is, almost by definition, intended to be about adults engaging in a voluntary activity, but the performative turn in the nature of student learning is undermining student rights as learners – to non-indoctrination, reticence, choosing how to learn, and being trusted as an adult – and perverting the true Rogerian meaning of 'student-centred'. This lecture will be based on arguments presented in my 2017 book, Freedom to Learn (Routledge).

Bruce Macfarlane is Professor of Higher Education and Head of the School of Education at the University of Bristol, UK, and Distinguished Visiting Professor at the University of Johannesburg, South Africa. He has previously held chairs at a number of universities in the UK and Hong Kong. Bruce's publications have developed concepts related to values in higher education, such as academic freedom, the ethics of research and teaching, the service role, and academic leadership. His books include Freedom to Learn (2017), Intellectual Leadership in Higher Education (2012), Researching with Integrity (2009), The Academic Citizen (2007) and Teaching with Integrity (2004).


Keynote Address
Associate Professor Phillip Dawson, Deakin University, Melbourne, Australia

Why you should cheat: Building an evidence base to resist 'assessment conservatism'

Recent media coverage could lead us to believe that there has been exponential growth in the use of 'contract cheating' websites by students. These services provide bespoke assignments for students – for a fee – in as little as a few hours. Contract cheating websites often claim that this type of academic dishonesty is undetectable, and aside from instances where students are careless or stupid, there is evidence that routine marking does not detect contract cheating. Left unchecked, this poses a serious threat to the integrity of higher education courses, with flow-on effects for students and public safety. To combat contract cheating, and other new threats to academic integrity, many institutions are becoming increasingly conservative in their assessment practices. In particular, invigilated pen-and-paper examinations and remotely proctored online examinations are being touted as necessary solutions to rampant cheating. But are conservative assessment approaches actually more secure than authentic take-home tasks? Or is a shift towards surveilled assessment types another case of 'security theatre', which at great cost to learning provides little improvement to integrity? Are there alternatives that balance learning and integrity? This keynote brings together evidence from studies where researchers try to hack or cheat to understand the relative security of different types of assessment. It encourages thinking about assessment design from the perspective of someone who might want to break things. Most importantly, it sets out challenges the field of assessment for learning must meet in order to counter assessment conservatism. If assessment for learning does not own this conversation, we risk it being colonised by those who are more risk averse, or others who think the worst of students.

Phillip (Phill) Dawson is an Associate Professor and Associate Director of the Centre for Research in Assessment and Digital Learning (CRADLE), Deakin University, Melbourne, Australia. He holds degrees in education, artificial intelligence and cybersecurity. Phill leads CRADLE's research agenda on academic integrity, with a focus on experimental studies and new technologies. He has published some of the first experimental studies on contract cheating detection and computer-based exam hacking. He is currently engaged in research on different approaches to detect and deter contract cheating, including assessment designs and technologies. He also has a keen interest in how academics make decisions in assessment design.

Blog: http://philldawson.com
Twitter: @phillipdawson
Email: p.dawson@deakin.edu.au


Conference Themes

The conference focuses on six overlapping themes. However, this is not intended to exclude innovative and boundary-crossing presentations:

1. Assessment for learning and the meaning and role of authentic assessment
2. Leading change in assessment and feedback at programme and institutional level
3. Addressing challenges of assessment in mass higher education
4. Integrating digital tools and technologies for assessment
5. Developing academic integrity and academic literacies through assessment
6. Assessment: learning communities, social justice, diversity and well-being

Conference Programme
Wednesday 26 June 2019

08:30 - 09:30  Registration (Hotel Foyer)

09:30 - 11:00  Pre-conference Master Classes

Master Class: David Boud (Piccadilly Suite)
1 | Developing evaluative judgement within courses
D. Boud, University of Technology, Sydney, Australia

Master Class: Peter Hartley (Room 3)
2 | Programme Assessment Strategies: Learning from a decade of PASS
P. Hartley, Independent Educational Consultant, Ormskirk, United Kingdom

Master Class: Sally Brown (Room 4)
3 | Making assessment work for you: Pragmatic ways of assessing students in large classes
S. Brown, Independent Consultant, Leeds, United Kingdom

Master Class: Geraldine O'Neil (Room 5)
4 | Authentic Assessment: Concept, Continuum and Contested
G. O'Neil, University College Dublin, Ireland

Master Class: Phill Dawson (Room 7)
5 | Detecting contract cheating
P. Dawson, Deakin University, Melbourne, Australia

Master Class: David Carless (Room 11)
6 | Developing staff and student feedback literacy in partnership
D. Carless, University of Hong Kong, Hong Kong


Conference Programme Day 1 (contd.)

11:20 - 11:50  Parallel Session 1

Theme 1 | Session Chair: Fiona Meddings (Piccadilly Suite)
7 | A Methodology that Makes Self-Assessment an Implicit Part of the Answering Process – Results from a Year-Long Study
P. McDermott, University of East Anglia, Norwich, United Kingdom
R. Jenkins, University of East Anglia, Norwich, United Kingdom

Theme 1 | Session Chair: Linda Graham (Room 2)
8 | Improving Cross-Disciplinary Assessment Literacy through the use of Rubric Conversations
A. Chapman, University of Cumbria, Lancaster, United Kingdom
S. Ruston, University of Cumbria, Lancaster, United Kingdom

Theme 6 | Session Chair: Juuso Nieminen (Room 3)
9 | Exploring group-work and physical space: a case study in factors influencing student success
J. Cohen, University of Kent, Canterbury, United Kingdom
A. Dean, University of Kent, Canterbury, United Kingdom

Theme 2 | Session Chair: Jess Evans (Room 4)
10 | Establishing a university-wide Community of Practice for Exemplars at Harper Adams University
J. Headley, Harper Adams University, Newport, United Kingdom
H. Pittson, Harper Adams University, Newport, United Kingdom

Theme 4 | Session Chair: Sally Jordan (Room 5)
11 | Online tools to enhance students' experience: assessment and feedback
M. Marsico, University of Exeter, Exeter, United Kingdom

Theme 5 | Session Chair: Nicola Reimann (Room 7)
12 | What went wrong? Students' and lecturer reflection on why face-to-face feedback was ineffective
T. Harvey, University of Cumbria, Carlisle, United Kingdom

Theme 2 | Session Chair: Hilary Constable (Room 9)
13 | Assessment as learning: developing student-teacher peer feedback and formative assessment practice
N. Quirke-Bolt, Mary Immaculate College (MIC), Thurles, Ireland
M. Daly, Mary Immaculate College (MIC), Thurles, Ireland

Theme 3 | Session Chair: Natasha Jankowski (Room 10)
14 | Is it time to moderate moderation? UK academic staff perceptions of the effectiveness and location of different moderation strategies
A. Lloyd, Cardiff University, Cardiff, United Kingdom


Conference Programme Day 1 (contd.)

11:20 - 11:50  Parallel Session 1 (contd.)

Theme 3 | Session Chair: Kimberly Ondo (Room 11)
15 | Responsibility sharing in the feedback process: Perspectives of educators
E. Pitt, University of Kent, Canterbury, United Kingdom
N. Winstone, University of Surrey, Guildford, United Kingdom

12:00 - 12:30  Parallel Session 2

Theme 1 | Session Chair: Maria Valero (Piccadilly Suite)
16 | Bringing Accounting to life through iterative curriculum design and assessment for learning: A case study in enhancing student performance
J. Cohen, University of Kent, Canterbury, United Kingdom

Theme 4 | Session Chair: Jill Barber (Room 2)
17 | Introducing marked rubrics to enhance the student experience: One programme's journey to improve consistency
J. Taylor, University of Cumbria, Carlisle, United Kingdom
A. Charters, University of Cumbria, Carlisle, United Kingdom

Theme 6 | Session Chair: Peter Holgate (Room 3)
18 | Informing Change Through Quality Assurance and Co-Curricular Assessment
A. Babcock, Northcentral University, San Diego, USA

Theme 2 | Session Chair: Jack Walton (Room 4)
19 | Students' survey reloaded: An attempt for the Italian higher education system
S. Pastore, University of Bari, Bari, Italy

Theme 6 | Session Chair: Sara Eastburn (Room 7)
20 | Learning the language of uncertainty: Assessing use of epistemic markers in academic writing within Higher Education
C. Wilson, University of Cumbria, Lancaster, United Kingdom

Theme 2 | Session Chair: Emma Gillaspy (Room 9)
21 | Refining Re-assessment
H. Woolf, University of Worcester, Worcester, United Kingdom
W. Turnbull, Liverpool John Moores University, Liverpool, United Kingdom
M. Stowell, University of Worcester, Worcester, United Kingdom

Theme 4 | Session Chair: Rita Headington (Room 10)
22 | Using technology to provide feedback to large classes
S. Voelkel, University of Liverpool, Liverpool, United Kingdom


Conference Programme Day 1 (contd.)

12:00 - 12:30  Parallel Session 2 (contd.)

Theme 3 | Session Chair: Eileen O'Leary (Room 11)
23 | Academic standards in professional and vocational programmes in Higher Education: marking cultures in a post-1992 university
J. Dermo, University of Salford, Salford, United Kingdom

12:30 - 13:30  Lunch (Steak House Restaurant)

13:30 - 13:40  Welcome: Pete Boyd (Piccadilly Suite)

13:40 - 14:40  Keynote: Bruce Macfarlane (Piccadilly Suite)
Introduction: Nicola Reimann
24 | Assessment, student performativity and the freedom to learn
B. Macfarlane, University of Bristol, Bristol, United Kingdom

14:45 - 15:15  Parallel Session 3

Theme 1 | Session Chair: Philip Denton (Room 2)
24 | From Essay to Evaluative Conversation: exploring the use of viva voce assessments to facilitate students' engagement with feedback
F. Arico, University of East Anglia, Norwich, United Kingdom

Theme 6 | Session Chair: Tina Harvey (Room 3)
25 | Student perceptions of assessment accommodations in higher education: An analysis of power
J. Nieminen, University of Helsinki, Helsinki, Finland

Theme 2 | Session Chair: Hilary Constable (Room 4)
26 | Rethinking the Crit: Changing Assessment in Architecture Schools
P. Flynn, TU Dublin, Dublin, Ireland
M. Dunn, University of Limerick, Limerick, Ireland
M. O Connor, CIT, Cork, Ireland
M. Price, UCD, Dublin, Ireland

Theme 4 | Session Chair: Sally Jordan (Room 5)
27 | Retaining Students and Designing for Success with Interactive Technologies
J. Hvaal, International College of Management Sydney, Sydney, Australia
V. Quilter, International College of Management Sydney, Sydney, Australia


Conference Programme Day 1 (contd.)

14:45 - 15:15  Parallel Session 3 (contd.)

Theme 5 | Session Chair: Kimberly Ondo (Room 7)
28 | 'When feedback fails': an exploration of the use of feedback literacies and the utility of language within healthcare education
S. Eastburn, University of Huddersfield, Huddersfield, United Kingdom

Theme 2 | Session Chair: Sam Elkington (Room 9)
29 | An alternative model to assessment grading tools: The Continua model of a Guide to Making Judgments
P. Grainger, University of the Sunshine Coast, Sippy Downs, Australia

Theme 6 | Session Chair: Silke Lange (Room 10)
30 | Talking to the Teachers: how they observe gender at play in group work
C. Sheedy, Dundalk Institute of Technology, Dundalk, Ireland

Theme 1 | Session Chair: Dave Darwent (Room 11)
31 | Learning from rejection: Academics' experiences of peer reviewer feedback and the development of feedback literacy
K. Gravett, University of Surrey, Guildford, United Kingdom

15:15 - 15:40  Refreshments (Break Out Space 1 & 2 & Piccadilly Suite)

15:40 - 16:40  Poster & Pitch and Round Table Presentations

Themes 2 & 3 | Poster & Pitch Session | Session Chair: Geraldine O'Neil (Room 3)
32 | Issues in Assessing English as a Foreign Language Speaking Skills: A Case of Saudi University Students
N. Almuashi, Bangor University, Bangor, United Kingdom
33 | Making your marking, 'it's a messy business'
F. Meddings, University of Bradford, Bradford, United Kingdom
34 | Perceptions of 'effective' assessment and feedback: a micro student-led study to investigate Postgraduate perceptions of effective assessment and feedback practice at a leading Russell Group Business School
N. Forde-Leaves, Cardiff University, Cardiff, United Kingdom
35 | Summative assessment workload management & implications for teaching practice: Dipping our toes into the depths of the murky waters that represent 'Assessment' in Higher Education
N. Forde-Leaves, Cardiff University, Cardiff, United Kingdom


N. Usher, University of Oxford, Oxford, United Kingdom
37 | Introducing digital technologies for formative assessment and learning support: A reflection from the University of Bath
M. Valero, University of Bath, Bath, United Kingdom

Theme 1 | Round Table Session | Session Chair: Kay Sambell (Room 5)
38 | Assessment of real-world learning: a case study
U. Cullen, Falmouth University, Penryn, United Kingdom
39 | Saving the planet through assessment: using foundation year assessment to communicate climate and environmental issues to young children
K. Winter, Northumbria University, Newcastle upon Tyne, United Kingdom
40 | Abreaction Catharsis Release Self-Acceptance: Critical Reflective Student Stories in Consumer Behaviour as Emotional Psychodynamic Therapy
U. Sundaram, University of East Anglia, Norwich, United Kingdom
41 | Promoting deep approach to learning and self-efficacy by changing the purpose of self-assessment: A comparison of summative and formative models
J. Nieminen, University of Helsinki, Helsinki, Finland

Themes 2 & 3 | Round Table Session | Session Chair: Jess Evans (Room 7)
42 | How to approach 'assessment as learning' in educational development and design? A viewpoint from an Educational Development Unit's practice
I. Rens, KU Leuven, Leuven, Belgium
43 | Listening to the students' voice to orient instructions. The ongoing evaluation of an Assessing as Learning experience in higher education
A. Bevilacqua, University of Verona, Verona, Italy
44 | Evolving assessment tools and processes to support the scaling-up of external assessors (mentors, supervisors, preceptors, clinicians etc.) in formative and summative assessment
R. Bacon, University of Nottingham, Nottingham, United Kingdom
D. Holmes, PebblePad, Telford, United Kingdom


Conference Programme Day 1 (contd.)

15:40 - 16:40  Poster & Pitch and Round Table Presentations (contd.)

Themes 2 & 4 | Round Table Session | Session Chair: Pete Boyd (Room 9)
45 | Students as partners in fostering a culture of assessment for learning at a research-intensive university
C. Samuel, McGill University, Montreal, Canada
M. Tovar, McGill University, Montreal, Canada
46 | Mind the Gap: Strategies for self-monitoring. A conceptual framework and new findings
J. van der Linden, HAN University of Applied Sciences, Nijmegen, Netherlands
47 | Breaking down barriers: Developing strategies for feedback uptake through self-directed online study
B. Paris, University of Calgary, Calgary, Canada
48 | A flexible and fair web-based Group Marking Tool that combines both staff and student (peer-review) scores
S. Ajit, University of Northampton, Northampton, United Kingdom
49 | Using digital tools to facilitate peer review and enhance feedback and assessment
K. Wheeler, University of Essex, Colchester, United Kingdom

Themes 1 & 2 | Poster & Pitch Session | Session Chair: Linda Graham (Room 11)
50 | The use of an Inter-Professional Simulation-based Education (IPSE) task as an authentic formative assessment: an Action Research project
J. Coleman, University of Cumbria, Carlisle, United Kingdom
A. Noblett, University of Cumbria, Carlisle, United Kingdom
51 | Developing the Scientific Reporting Skills of Chemistry Students through Dialogic Assessment-Feedback Cycles and use of Journal Articles as Paradigms of Professional Practice
D. McGarvey, Keele University, Keele, United Kingdom
52 | Maximizing impact of peer feedback on students' learning: A longitudinal study in Teacher Education
G. Ion, Universitat Autònoma de Barcelona, Barcelona, Spain
53 | Joining the dots: a qualification-based approach to developing student assessment strategies in undergraduate engineering
A. Goodyear, The Open University, Milton Keynes, United Kingdom
C. Morris, The Open University, Milton Keynes, United Kingdom
54 | Impact of Feedback on Assessment of Final Examinations at Institutional Level
D. Rakha, University of Health Sciences, Lahore, Pakistan


Conference Programme Day 1 (contd.)

15:40 - 16:40  Poster & Pitch Session (contd.)

Themes 1 & 2 | Poster & Pitch Session | Session Chair: Linda Graham (Room 11)
55 | Exploratory study of the implementation of institutional assessment programs in higher education
A. Remesal, Universidad de Barcelona, Barcelona, Spain
56 | Assessment in engineering programmes: a systematic review of the literature
J. Halls, University of Nottingham, Nottingham, United Kingdom

16:50 - 17:20  Parallel Session 4

Theme 1 | Session Chair: Amanda Chapman (Piccadilly Suite)
57 | Assessment-as-portrayal: strategic negotiation of persona in assessment
D. Boud, University of Technology, Sydney, Australia

Theme 1 | Session Chair: Dave Darwent (Room 2)
58 | Feedforward: a systematic review of a concept
I. Sadler, Liverpool John Moores University, Liverpool, United Kingdom
N. Reimann, Durham University, Durham, United Kingdom
K. Sambell, Edinburgh Napier University, Edinburgh, United Kingdom

Theme 6 | Session Chair: Silke Lange (Room 3)
59 | Making the Case for Mindful Assessment Design
S. Elkington, Teesside University, Middlesbrough, United Kingdom

Theme 2 | Session Chair: Edd Pitt (Room 4)
60 | Second-class citizens? Using Social Identity Theory to explore students' experiences of assessment and feedback
N. Lent, University of Edinburgh, Edinburgh, United Kingdom

Theme 4 | Session Chair: Maria Rosaria Marsico (Room 5)
61 | Changes in Technology-assisted Assessment and Feedback in UK Universities
J. Fang, University of Hertfordshire, Hatfield, United Kingdom

Theme 5 | Session Chair: Marie Stowell (Room 7)
62 | Evaluative Judgement in Chemistry Practical Project Modules
A. Bertram, University of Nottingham, Nottingham, United Kingdom
C. Tomas, University of Nottingham, Nottingham, United Kingdom

Theme 3 | Session Chair: Jill Barber (Room 10)
64 | Family feedback: exploring the experiences of 'commuter students'
R. Headington, Independent, Canterbury, United Kingdom


Conference Programme Day 1 (contd.)

16:50 - 17:20  Parallel Session 4 (contd.)

Theme 1 | Session Chair: Eileen O'Leary (Room 11)
65 | Critical reflections on the implementation of 'The Puzzle', an authentic assessment task designed for academics enrolled in a professional higher education course on assessment
L. Dison, Wits University, Johannesburg, South Africa
K. Padayachee, Wits University, Johannesburg, South Africa

17:30 - 18:00  Parallel Session 5

Theme 1 | Session Chair: Geraldine O'Neil (Piccadilly Suite)
66 | Students' Enactment of Feedback Through a Series of Graded Low-Stakes Learning-Oriented Presentations
Z. Stone, University of Kent, Canterbury, United Kingdom

Theme 1 | Session Chair: Mary McGrath (Room 2)
67 | Validation of a Feedback in Learning Scale: behaviourally anchoring module performance
M. Jellicoe, The University of Liverpool, Liverpool, United Kingdom

Theme 6 | Session Chair: Peter Holgate (Room 3)
68 | DADA: A toolkit to design and develop alternative assessment
S. Colaiacomo, University of Kent, Canterbury, United Kingdom
P. Hanesworth, Advance HE, Edinburgh, United Kingdom
K. Lister, Open University, Milton Keynes, United Kingdom
B. Watson, University of Kent, Canterbury, United Kingdom
T. Ashmore, University of Kent, Canterbury, United Kingdom

Theme 2 | Session Chair: Helen Pittson (Room 4)
69 | Leading change with equity in mind: An institutional view of learning design
N. Jankowski, University of Illinois, Champaign, USA

Theme 4 | Session Chair: Naomi Winstone (Room 5)
70 | Students and assessors in conversation about authentic multimodal assessment
M. Vogel, King's College London, London, United Kingdom

Theme 5 | Session Chair: Pete Boyd (Room 7)
71 | 'Excited' yet 'Paralysing': The highs and lows of the feedback process
E. Medland, University of Surrey, Guildford, United Kingdom

Theme 2 | Session Chair: Jane Headley (Room 9)
72 | Inclusive assessment, a response to the experience of Students with Dyslexia
J. Morrow, University of Chester, Chester, United Kingdom


Conference Programme Day 1 (contd.)

17:30 - 18:00  Parallel Session 5 (contd.)

Theme 1 | Session Chair: Jack Walton (Room 11)
73 | Feedback, feedforward: evaluating the effectiveness of an oral peer review exercise amongst postgraduate students
H. Dickson, King's College London, London, United Kingdom

19:30 - 20:00  Drinks Reception & Book Launch with Sally Brown & Kay Sambell (Piccadilly Suite)
Designing Effective Feedback Processes: A Learning-Focused Approach by Naomi Winstone and David Carless
Innovative Assessment in Higher Education: A Handbook for Academic Practitioners edited by Cordelia Bryan & Karen Clegg

20:00 - 23:00  Conference Dinner (Piccadilly Suite)

Conference Programme Day 2
Thursday 27 June 2019

08:30 - 09:30  Registration (Piccadilly Suite)

09:30 - 10:10  Parallel Session 6

Theme 1 | Session Chair: Laura Dison (Piccadilly Suite)
74 | 'We learned to control what students read rather than what they said!' Practitioners' shifting views of their role in the feedback process: an action research project
K. Sambell, Edinburgh Napier University, Edinburgh, United Kingdom
L. Graham, Sunderland University, Sunderland, United Kingdom

Theme 1 | Session Chair: Natasha Jankowski (Room 2)
75 | Assessment as a process, looking at an optional assessment retake strategy as a learner-centred approach to feedback, learning and assessment
S. McCarthy, University College Cork, Cork, Ireland
E. O'Leary, Cork Institute of Technology, Cork, Ireland

Theme 3 | Session Chair: Rita Headington (Room 3)
76 | Gaining Faculty and Staff Buy-In to New Assessment Feedback Quality Expectations
T. Lehan, Northcentral University, San Diego, CA, USA
A. Babcock, Northcentral University, San Diego, CA, USA

Theme 3 | Session Chair: Geraldine O'Neil (Room 4)
77 | Peer feedback to support student learning in large classes with the use of technology
A. Steen-Utheim, BI Norwegian Business School, Oslo, Norway
H. Threlkeld, BI Norwegian Business School, Oslo, Norway
O. Gardener, BI Norwegian Business School, Oslo, Norway


Conference Programme Day 2 (contd.)

09:30 - 10:10  Parallel Session 6 (contd.)

Theme 4 | Session Chair: Fiona Meddings (Room 5)
78 | Assessment of Developmental Students' Work in the Era of Learning Management Systems: Professors' Experiences with Benefits, Limitations, and Institutional Support
A. Lewis, Community College of Philadelphia, Philadelphia, USA

Theme 3 | Session Chair: Karen Gravett (Room 7)
79 | Assessment shock: Chinese international students' first year in Australian universities
C. Deneen, The University of Melbourne, Melbourne, Australia

Theme 2 | Session Chair: Pete Boyd (Room 9)
80 | Developing a coherent assessment strategy from established research findings: from model-building to practical implementation
C. Moscrop, BPP University, London, United Kingdom
P. Hartley, Edge Hill University, Ormskirk, United Kingdom

Theme 4 | Session Chair: Sally Jordan (Room 10)
81 | Assessment and Feedback strategies: An evaluation of academic and student perspectives of various assessment and feedback tools piloted as part of the LEAF project in TU Dublin
L. Bellew, TU Dublin, Co. Dublin, Ireland

Theme 5 | Session Chair: Peter Holgate (Room 11)
82 | Literature reviews as formative learning for PhD students in Education
H. Constable, University of Cumbria, Carlisle, United Kingdom

10:10 - 10:40  Parallel Session 7

Theme 3 | Session Chair: Jill Barber (Piccadilly Suite)
83 | Marginal gains to enhance feedback processes
N. Winstone, University of Surrey, Guildford, United Kingdom
D. Carless, University of Hong Kong, Hong Kong

Theme 1 | Session Chair: Charlie Smith (Room 2)
84 | Using dialogic feedback and feedback action plans to develop professional literacies in undergraduate speech and language therapy students
A. Mallinson, Plymouth Marjon University, Plymouth, United Kingdom
L. Parrott, Plymouth Marjon University, Plymouth, United Kingdom
J. Harvey, Plymouth Marjon University, Plymouth, United Kingdom

Theme 1 | Session Chair: Ana Remesal (Room 3)
85 | Dialogic feedback and the development of professional competence among further education pre-service teachers
J. Rami, Dublin City University, Dublin, Ireland


Conference Programme Day 2 (contd.)

10:10 - 10:40  Parallel Session 7 (contd.)

Theme 2 | Session Chair: Andy Lloyd (Room 4)
86 | Enquiring Practices: Leading Institutional Assessment Enhancement at University of the Arts London (UAL)
S. Orr, University of the Arts London, London, United Kingdom
S. Lange, University of the Arts London, London, United Kingdom

Theme 4 | Session Chair: Kimberly Ondo (Room 5)
87 | Goals, Benefits and Challenges: Implementing Digital Assessment at Brunel University London
S. Lytgens Skovfoged, UNIwise, Aarhus, Denmark
R. Tolstrup Blok, UNIwise, Aarhus, Denmark

Theme 2 | Session Chair: Susanne Voelkel (Room 7)
88 | Peer Assessment in Irish Medical Science Education – the experiences and opinions of educators
M. Mc Grath, Galway-Mayo Institute of Technology, Galway, Ireland

Theme 2 | Session Chair: Jess Evans (Room 9)
89 | Rebels with a cause – can we disrupt assessment practice in professional education?
E. Gillaspy, University of Central Lancashire, Preston, United Kingdom
J. Keeling, University of Central Lancashire, Preston, United Kingdom

Theme 2 | Session Chair: Patrick Flynn (Room 10)
90 | The Efficacy of Audio Feedback: An inter-institutional investigation
P. Miller, New College Durham, Durham, United Kingdom
M. Clarkson, Newcastle College, Newcastle, United Kingdom
D. Murray, Newcastle College, Newcastle, United Kingdom

Theme 1 | Session Chair: Amanda Chapman (Room 11)
91 | How can we transform peer-assessment into a formative and self-regulative process? An experience of assessment criteria transparency
N. Cabrera, Universitat Oberta de Catalunya, Barcelona, Spain

10:40 - 11:00  Refreshments (Break Out Space 1 & 2 & Piccadilly Suite)


Conference Programme Day 2 (contd.)

11:00 - 12:00  Micro Presentations
Session Chair: Pete Boyd (Piccadilly Suite)
92 | Academic feedback and performance of students in institutions of higher education: Who is in control? How does our feedback impact students?
A. Musgrove, Sheffield Hallam University, Sheffield, United Kingdom
D. Darwent, Sheffield Hallam University, Sheffield, United Kingdom
93 | Move over Prometheus: Reconceptualising feedback through the flame metaphor
P. Denton, Liverpool John Moores University, Liverpool, United Kingdom
94 | EvalCOMIX®: a web-based programme to support assessment as learning and empowerment
G. Rodríguez-Gómez, University of Cadiz, Cadiz, Spain
95 | Supporting Asynchronous, Multi-Institution, Student Learning, through Peer-Assessment and Feedback, Using PeerWise in Third-Level Chemistry
E. O'Leary, Cork Institute of Technology, Cork, Ireland
96 | Developing students' evaluative judgement: A challenge to our assessment practices?
D. Boud, University of Technology, Sydney, Australia
97 | Improving assessment by aligning with better learning outcomes
S. Brown, Independent Consultant, Leeds, United Kingdom
98 | Feedback designs for large classes
D. Carless, University of Hong Kong, Hong Kong
99 | Future-proofing our assessment strategies: challenges and opportunities
P. Hartley, Independent Education Consultant, United Kingdom

12:10 - 12:40  Parallel Session 8

Theme 1 | Session Chair: Linda Graham (Piccadilly Suite)
100 | 'What do they do with my feedback?' A study of how undergraduate and postgraduate Architecture students perceive and use their feedback
C. Smith, Liverpool John Moores University, Liverpool, United Kingdom

Theme 3 | Session Chair: Tina Harvey (Room 3)
101 | Reflecting on quality with first-year undergraduate students
E. Whitt, University of Nottingham, Nottingham, United Kingdom


Conference Programme Day 2 (contd.)

12:10 - 12:40  Parallel Session 8 (contd.)

Theme 2 | Session Chair: Hilary Constable (Room 4)
102 | Effecting change in feedback practices across a large research-intensive institution
T. McConlogue, UCL, London, United Kingdom
J. Marie, UCL, London, United Kingdom

Theme 4 | Session Chair: Sally Jordan (Room 5)
103 | An International Comparison on the Use of New Technologies in Teaching Economics
D. Paparas, Harper Adams University, Newport, United Kingdom

Theme 3 | Session Chair: Geraldine O'Neil (Room 7)
104 | Student peer feedback and assessment: Progress using adaptive comparative judgement
J. Barber, University of Manchester, Manchester, United Kingdom

Theme 2 | Session Chair: Naomi Winstone (Room 9)
105 | Implications of social legitimation for changing assessment practices: Learnings from the Australian higher music education sector
J. Walton, Griffith University, Brisbane, Australia

Theme 1 | Session Chair: Serafina Pastore (Room 10)
106 | Assessment as learning and empowerment: A formative mediation model
G. Rodríguez-Gómez, University of Cadiz, Cadiz, Spain

Theme 3 | Session Chair: Amy Lewis (Room 11)
107 | Reflection, realignment and refraction: Using Bernstein's evaluative rules to support the markers of the summative assessment of reflective practice
J. Gibbons, University of York, York, United Kingdom

12:40 - 13:30  Lunch (Steak House Restaurant)

13:30 - 14:00  Parallel Session 9

Theme 1 | Session Chair: Pete Boyd (Piccadilly Suite)
108 | Making students and instructors aware of authentic emotional and metacognitive processes underlying assessment: a first-year pre-grads experience
A. Remesal, Universidad de Barcelona, Barcelona, Spain

Theme 1 | Session Chair: Rita Headington (Room 2)
109 | Investigating teachers' use of exemplars: Difficulties in managing effective dialogues
P. Smyth, The University of Hong Kong, Hong Kong
D. Carless, The University of Hong Kong, Hong Kong



Conference Programme Day 2 (contd.)

13:30 - 14:00  Parallel Session 9 (contd.)

Theme 3 | Session Chair: Mira Vogel (Room 3)
110 | Evaluating students' self-assessment in large classes
J. Rämö, University of Helsinki, Helsinki, Finland
J. Häsä, University of Helsinki, Helsinki, Finland

Theme 1 (Room 4)
111 | Developing the self-regulation capacity of learners in a competence-based Masters' program
N. Cabrera, Universitat Oberta de Catalunya, Barcelona, Spain

Theme 1 | Session Chair: Amanda Chapman (Room 5)
112 | Theorising Alternative Pathways for Feedback in Assessment for Learning: The Triple-F Approach
G. Kehdinga, Mangosuthu University of Technology, Durban, South Africa

Theme 5 | Session Chair: Alexandra Mallinson (Room 7)
113 | Academic Integrity through e-authentication and authorship verification for e-assessment: impact study
D. Okada, The Open University, Milton Keynes, United Kingdom
P. Whitelock, The Open University, Milton Keynes, United Kingdom

Theme 2 | Session Chair: Justin Rami (Room 9)
114 | PASSES at Durham: Perceptions of Assessment from Students and Staff in Earth Sciences at Durham
M. Funnell, Durham University, Durham, United Kingdom

Theme 1 | Session Chair: Jenny Gibbons (Room 10)
115 | 10 Things No Student Wants to Hear from their Instructor
K. Ondo, Purdue University Global, Chicago, USA

Theme 3 | Session Chair: Jess Evans (Room 11)
116 | What underlies students' relative difficulties in recalling future-oriented feedback?
R. Nash, Aston University, Birmingham, United Kingdom

14:10 - 15:10  Keynote: Phill Dawson (Piccadilly Suite)
Introduction: Kay Sambell
138 | Why you should cheat: Building an evidence base to resist assessment conservatism
P. Dawson, Deakin University, Melbourne, Australia

15:10 - 15:15  Poster & Pitch Award: Rita Headington (Piccadilly Suite)

15:15 - 15:30  Plenary: Jess Evans & Geraldine O'Neil (Piccadilly Suite)

15:30 - 15:45  Refreshments & Close (Break Out Space 1 & 2 & Piccadilly Suite)


Author Abstracts

Master Class: David Boud
Time: 9:30 - 11:00 | Date: 26th June 2019 | Location: Piccadilly Suite

1 - Developing evaluative judgement within courses
David Boud, University of Technology, Sydney, Australia

Abstract
When students graduate they move into a world in which their work is not assessed in the ways experienced in educational institutions. For the most part, they have to judge for themselves whether their own performance (and that of their immediate colleagues) is good enough for the tasks they do. Unfortunately, the conventional assessment practices of universities mostly fail to equip students for what they have to enact for themselves in everyday work. This session conceptualises and explores ways in which courses can be designed to develop the capacity of students to make evaluative judgements, that is, to make informed decisions about the quality of their own work and that of others. While a range of familiar activities can be deployed (identifying criteria, self- and peer assessment, feedback, etc.), it is the way in which they are put together across a curriculum that has a useful effect.

Master Class: Peter Hartley
Time: 9:30 - 11:00 | Date: 26th June 2019 | Location: Room 3

2 - Programme Assessment Strategies: Learning from a decade of PASS
Peter Hartley, Independent Educational Consultant, Ormskirk, United Kingdom

Abstract
There is growing interest in programme assessment strategies: developing assessment practices across a course or programme which provide students with a more coherent grasp of overall learning outcomes. Several UK universities have made commitments towards more integrated programme-level assessment to improve both student performance and satisfaction and the staff experience of assessment. These initiatives have often used resources and case studies from the PASS project, the NTFS Group Project which aimed to define and evaluate what we called 'programme-focused assessment'. We start by revisiting the issues which PASS identified back in 2010 as typical consequences of modular systems and question whether these issues remain the most significant we have to resolve. We then evaluate how far specific practical and policy developments (influenced by PASS and related projects like TESTA) provide a solid basis for future development. Participants will gain a critical overview of programme assessment alongside practical techniques and guidelines which they can apply to their own context and practice.


Master Class: Sally Brown
Time: 9:30 - 11:00 | Date: 26th June 2019 | Location: Room 4

3 - Making assessment work for you: Pragmatic ways of assessing students in large classes
Sally Brown, Independent Consultant, Leeds, United Kingdom

Abstract
Increasingly, staff in universities and colleges are finding assessment workloads difficult to manage, as cohort sizes rise and there is pressure to provide students with more and better feedback. Formative feedback that is developmental and supportive, and given at the right stage so that it guides future performance, can be exceptionally powerful in improving achievement and retention. Moreover, feedback and 'feed-forward' must be integral to student learning programmes, rather than something that students opt into, so they need to sit within live or virtual face-to-face interaction. However, designing and delivering good assessment systems can be time-consuming. Recognising that it is challenging to assess and give feedback well, this workshop will explore a range of viable and pragmatic ways to do so effectively and efficiently, and to encourage students to trust and make good use of the feedback they receive.

Master Class: Geraldine O'Neil
Time: 9:30 - 11:00 | Date: 26th June 2019 | Location: Room 5

4 - Authentic Assessment: Concept, Continuum and Contested
Geraldine O'Neil, University College Dublin, Ireland

Abstract
Authentic assessment is a form of assessment which involves students conducting 'real world' tasks in meaningful contexts. With competing demands on students' time, the relevance of the assessment process is one solution to student engagement. However, authentic assessment can be a contested term: What is 'real' for students? Can life outside of the higher education context be genuinely replicated? Authentic assessment can be presented as a continuum: at one end are work-based, or similar 'real world', assessments, and at the other end is an applied question in a traditional exam (National Forum, 2017). This workshop facilitates participants to:

- explore the concept and disciplinary examples of authentic assessment,
- debate the continuum of authentic assessment,
- discuss the challenges and enablers to developing their own authentic assessments.

Key References
National Forum for the Enhancement of Teaching and Learning (2017) Authentic Assessment in Irish Higher Education. teachingandlearning.ie


Master Class: Phill Dawson
Time: 9:30 - 11:00 | Date: 26th June 2019 | Location: Room 7

5 - Detecting contract cheating
Phill Dawson, Deakin University, Melbourne, Australia

Abstract
Contract cheating happens when a third party completes assessed work on behalf of a student. The term has become synonymous with 'essay mills' that can produce seemingly any type of assignment for students in a matter of hours. Estimates of prevalence range from 3% to 15% in terms of the proportion of higher education students admitting to having used a commercial contract cheating site at some stage. Many contract cheating sites claim that contract cheating is undetectable. In this masterclass you will put that claim to the test by attempting to detect contract-cheated work. You will also learn how to run a workshop for educators on detecting contract cheating, using the workshop design in a recent paper (Dawson & Sutherland-Smith, 2018). On completing that workshop, markers in the study detected contract cheating 82% of the time. How will you fare?
https://www.tandfonline.com/doi/abs/10.1080/02602938.2018.1531109

Master Class: David Carless
Time: 9:30 - 11:00 | Date: 26th June 2019 | Location: Room 11

6 - Developing staff and student feedback literacy in partnership
David Carless, University of Hong Kong, Hong Kong

Abstract
Feedback processes should be designed to promote student uptake of feedback. This goal implies that both teachers and students need feedback literacy. Teachers need to design assessment and feedback sequences which promote student capacities in generating and using feedback. Students need the skills and dispositions to make the most of the feedback possibilities that are available. In this Masterclass, we will consider the different capacities that teachers and students require to be feedback literate and how they might develop them further. We will discuss how teachers and students could work in partnership to narrow different perceptions of feedback. We will analyse the potential of programme-based approaches in enabling staff and student feedback literacy over the longer term. Emphasis will be placed on feedback strategies that do not increase teacher workloads and are applicable with large classes.

Professor David Carless works in the Faculty of Education, University of Hong Kong. His current research is focused on students' experiences of feedback in different disciplines. He is co-authoring with Naomi Winstone a book for Routledge provisionally titled Designing for Student Uptake of Feedback. He is a Principal Fellow of the Higher Education Academy. He tweets about feedback research @CarlessDavid.


Parallel Session 1
Chair: Fiona Meddings | Time: 11:20 - 11:50 | Date: 26th June 2019 | Location: Piccadilly Suite

7 - A Methodology that Makes Self-Assessment an Implicit Part of the Answering Process – Results from a Year-Long Study
Paul McDermott (1), Robert Jenkins (1), Mohamed Dungersi (2), Fabio Arico (1)
(1) University of East Anglia, Norwich, United Kingdom. (2) Peterborough and Stamford Hospitals NHS Trust, Peterborough, United Kingdom

Conference Theme
Assessment for learning: feedback and the meaning and role of authentic assessment

Abstract
For a number of years we have employed an active learning approach (Team-Based Learning) that involves a multiple choice quiz (MCQ) on material that will have been studied prior to class. The answer format for this MCQ has been designed so that students distribute 5 marks across the answer options in a strategic manner based on their confidence, rather than ticking a single box (there is only one correct answer). It is our hypothesis that each answer strategy gives a clear, implicit rather than explicit, indication of a student's confidence, whereas the student's primary motivation in this scenario is to maximise their grade. The methodology employed here allows us to gather self-assessment data without explicitly asking our test subjects. This opens up the opportunity to embed pedagogical research into our teaching and gather large amounts of data simply by adapting conventional MCQ tests. In our classes, we also use instant feedback assessment technique (IF-AT) scratch cards to facilitate group discussions immediately after the individual MCQ tests. This strategy has been derived from Team-Based Learning (TBL) pedagogy [1]. In a year-long study we initially validated this approach as a way to measure learners' self-assessment accuracy by comparing results from a series of undergraduate workshops to those of a conventional Dunning-Kruger study [2]. We then applied this approach to the development of clinical decision-making skills in a cohort of pre-registration trainee pharmacists. We will share insights gained from our quantitative analysis of test answers as well as qualitative evaluation of student experiences gathered from focus groups. We will contextualise our results through a discussion of the way we have used repeated formative assessment and item-level feedback [3,4,5] alongside our confidence-marking methodology to improve learning outcomes. We will also discuss the merits of this approach in developing learners' metacognitive skills by encouraging reflection on the calibration between confidence and actual measured performance.
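To make the marking scheme concrete, here is a minimal sketch in Python of how such confidence-distribution scoring could work. It is an illustration only: the function names are hypothetical, and the only detail taken from the abstract is the 5-mark allocation with a single correct answer; it is not the authors' actual implementation.

```python
# Minimal sketch of confidence-distribution MCQ marking (hypothetical,
# not the authors' implementation). Each student spreads 5 marks across
# the options; their score is whatever they staked on the correct option,
# and the spread doubles as an implicit self-assessment signal.

from typing import Dict

MARKS_PER_QUESTION = 5  # assumption: the 5 marks named in the abstract

def score_question(allocation: Dict[str, int], correct_option: str) -> int:
    """Marks earned = the marks the student placed on the correct option."""
    if sum(allocation.values()) != MARKS_PER_QUESTION or min(allocation.values()) < 0:
        raise ValueError("Distribute exactly 5 non-negative marks across the options.")
    return allocation.get(correct_option, 0)

def implicit_confidence(allocation: Dict[str, int]) -> float:
    """Implicit confidence: share of marks on the student's top choice.
    1.0 means a single confident bet; lower values mean a hedged answer."""
    return max(allocation.values()) / MARKS_PER_QUESTION

# Example: a student hedges between options A and C, where A is correct.
allocation = {"A": 3, "B": 0, "C": 2, "D": 0}
print(score_question(allocation, "A"))    # 3 marks earned
print(implicit_confidence(allocation))    # 0.6 -> moderately confident
```

Scoring each question this way yields both a grade and a per-question confidence estimate, which is what allows self-assessment data to be gathered without asking students for it explicitly.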


Key References
[1] Michaelson, L. K., Bauman-Knight, A. and Dee Fink, L., Team Based Learning: A Transformative Use of Small Groups in Teaching, Stylus Publishing, 2004, ISBN: 157922086X.
[2] Kruger, J., and Dunning, D. 1999. Unskilled and unaware of it: How difficulties in recognizing one's own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77, 1121-1134.
[3] Callender, A. A., Franco-Watkins, A. M., and Roberts, A. S., Improving Metacognition in the Classroom through Instruction, Training and Feedback, Metacognition Learning, 2016, 11, 215-235.
[4] Renner, C. H. and Renner, M. J., But I Thought I Knew That: Using Confidence Estimation as a Debiasing Technique to Improve Classroom Performance, Applied Cognitive Psychology, 2001, 15, 23-32.
[5] Huff, J. D., and Nietfield, J. L., Using Strategy Instruction and Confidence Judgements to Improve Metacognitive Monitoring, Metacognition Learning, 2009, 4, 161-176.

Parallel Session 1
Chair: Linda Graham | Time: 11:20 - 11:50 | Date: 26th June 2019 | Location: Room 2

8 - Improving Cross-Disciplinary Assessment Literacy through the use of Rubric Conversations
Amanda Chapman, Sarah Ruston
University of Cumbria, Lancaster, United Kingdom

Conference Theme
Leading change in assessment and feedback at programme and institutional level

Abstract
As one of the approaches for improving transparency and grading consistency across the University, programme teams have been encouraged to consider adopting traditional matrix-style levelled rubrics. This follows the success of rubric adoption across the Psychology programme. Development sessions were held to encourage and facilitate initial rubric planning. The Community of Practice conversations between programme teams are interesting in themselves and can highlight inconsistencies and varied approaches to practice. When the rubric is cross-disciplinary, these issues can be more acute. Inter-professional learning is an important aspect of many programmes but can also be challenging, particularly around assessment. Cross-disciplinary rubrics may be an answer to issues of fairness and equity. The research data were gathered through observation in the rubric development sessions, with particular emphasis on those conversations that were cross-disciplinary. The results show that these assessment conversations are a worthwhile and important activity that can help reinforce and improve assessment literacy and can consolidate the Community of Practice. This is particularly important for new academic staff, for whom the marking process can be a daunting one. Whilst the development of the rubric itself is the collective goal of these sessions, we argue that it is the conversation and professional discussion that brings the most benefits to improving the assessment literacy of staff, and ultimately students.

Key References
Bennett, C. (2016). Assessment rubrics: Thinking inside the boxes. Learning and Teaching, 9(1), 50-72.
Bharuthram, S. (2015). Lecturers' perceptions: The value of assessment rubrics for informing teaching practice and curriculum review and development. Africa Education Review, 12(3), 415-428.
Cockett, A., & Jackson, C. (2018). The use of assessment rubrics to enhance feedback in higher education: An integrative literature review. Nurse Education Today, 69, 8-13.



Dawson, P. (2017). Assessment rubrics: Towards clearer and more replicable design, research and practice. Assessment & Evaluation in Higher Education, 42(3), 347-360. Grainger, Christie, Thomas, Dole, Heck, Marshman, & Carey. (2017). Improving the quality of assessment by using a community of practice to explore the optimal construction of assessment rubrics. Reflective Practice, 18(3), 410-422. Howell, Rebecca J. (2014). Grading rubrics: Hoopla or help? Innovations in Education & Teaching International, 51(4), 400-411. Parallel Session 1 Chair: Juuso Nieminen Time: 11:20 - 11:50 Date: 26th June 2019 Location: Room 3 9 - Exploring group-work and physical space: a case study in factors influencing student success Judy Cohen, Alison Dean University of Kent, Canterbury, United Kingdom Conference Theme Assessment: learning community, social justice, diversity and well-being Abstract This study explores the links between student confidence and learning perspective for managing group work for high-stakes assessments at final year under-graduate level. During a regular peer review of teaching in 2018 in a final year undergraduate module on Strategy, a link seemed evident between student performance in the group work task and the physical location of the groups within the teaching space (a ‘digital classroom’ of selfselected groups seated around a large shared monitor). Each group has the same layout of table and monitor, with each monitor facing inwards and towards the centre-front of the room. Everyone can see each screen, with two groups facing the front of the room and one to each side. Groups had to prepare an interactive presentation of a case review of a company. Students are assessed on both the quality of the review and the sophistication of their presentations. Students could choose both where they sat in the room and which students they worked with. The use of the digital classroom in this module reflects the implementation of a teaching innovation involving use of technology to teach the curriculum, research methods and presentation skills. In addition, the concept of the digital classroom is relatively new at the University, and there are only two rooms of this type available. Informal comments indicate that students appreciate the facilities of the room, but have reservations about working in groups. Lee and Branch (2017) demonstrate that student expectations of the learning environment impacts engagement, particularly where student-centred approaches to learning are used. While this affective response may be subconscious, it may also influence ‘their attitudes and abilities to adjust themselves to higher education environments’ (Lee & Branch 2017 p2. As indicated by Cohen and Robinson (2017) ‘novice’ students (with a didactic approach to learning) may not fully benefit from teaching innovations, particularly those relying on technology. By contrast, students with an ‘expert’ approach to learning thrive on studentcentred teaching, including technology and, by implication, are likely to be high achievers in group tasks.



Drawing on these ideas, this study aims to investigate whether a novice/expert perspective on learning may be reflected in students’ choice of group to join and their learning in the module. Early results indicate that student confidence is linked to group, and it is anticipated that final data will illuminate links between student confidence, learning perspective and classroom space.
Key References
Cohen, J. & Robinson, C. (2017). Enhancing teaching excellence through team-based learning. Innovations in Education and Teaching International, 55:2, 133-142. DOI: 10.1080/14703297.2017.1389290
Lee, S. J. & Branch, R. M. (2017). Students’ beliefs about teaching and learning and their perceptions of student-centred learning environments. Innovations in Education and Teaching International, 55:5, 585-593. DOI: 10.1080/14703297.2017.1285716

Parallel Session 1 Chair: Jess Evans Time: 11:20 - 11:50 Date: 26th June 2019 Location: Room 4
10 - Establishing a university-wide Community of Practice for Exemplars at Harper Adams University
Jane Headley, Helen Pittson
Harper Adams University, Newport, United Kingdom
Conference Theme
Leading change in assessment and feedback at programme and institutional level
Abstract
Sally Jordan, summarising the thoughts of Sally Brown and Kay Sambell, observed that “committed, passionate and convincing change agents achieve more than top-down directives” (Jordan, 2016). This practice exchange will demonstrate how staff members can work together from the bottom up, visibly championing a topic and seeking opportunities to promote it internally, to drive sustainable institutional change. Harper Adams University is a small, specialist institution where staff have led change by forming a university-wide Community of Practice for Exemplars (CoPfE) to support student transition, assessment and feedback. The CoPfE began in 2016 and currently meets monthly. In that time, 24 of the University’s 125 staff have engaged, drawn from across the five academic departments and the English language support and educational development teams. This professional learning community provides support and guidance for the use of exemplars within the institution and advances understanding about effective practice (Hudson et al., 2013). The University was awarded TEF Gold, and the CoPfE was included in the university’s submission as an example of innovation supporting teaching. Building on the work of Hendry et al. (2016), members of the CoPfE have been using shared questions to gather a university-wide body of evidence detailing undergraduate students’ responses to the use of exemplars. Results of this novel longitudinal exploration have found that students value the opportunity to learn from exemplars, noting improved understanding and the ability to self-critique work. In addition, a university-wide mapping exercise has been conducted to gather information on academic staff members’ use of exemplars in their teaching practice. Findings from staff and students will be presented. A recent visitor from Keele University commented, “the collegiate and non-hierarchical nature of this group benefits from being both practitioner-led and practitioner-focused.


This enthusiastic group creates a place where both the experienced and uninitiated can discuss their work on an equal footing, with no dominant force or voice.”
Key References
Hendry, G. D., White, P. and Herbert, C. (2016). Providing exemplar-based ‘feedforward’ before an assessment: the role of teacher explanation. Active Learning in Higher Education, 17(2), pp. 99-109.
Hudson, P., Hudson, S., Gray, B. and Bloxham, R. (2013). Learning about being effective mentors: Professional learning communities and mentoring. Procedia – Social and Behavioural Sciences, 93, 1291-1300.
Jordan, S. (2016). Keys to transforming assessment at institutional level: selected debates from AHE2016. [Online]. Available from: http://www.open.ac.uk/blogs/SallyJordan/?p=1763

Parallel Session 1 Chair: Sally Jordan Time: 11:20 - 11:50 Date: 26th June 2019 Location: Room 5
11 - Online tools to enhance students’ experience: assessment and feedback
Maria Rosaria Marsico
University of Exeter, Exeter, United Kingdom
Conference Theme
Integrating digital tools and technologies for assessment
Abstract
A variety of technologies is essential to make the learning process more dynamic and interesting, and to engage the new generation of students in the modern challenges of Higher Education (Marsico, 2015). However, teaching and learning with technology is not just about staying current on the latest tools; it is about successfully incorporating the best tools into teaching when and where they support learning (Faculty Focus, 2014; COLLABORATE, 2011-13). Using technology can translate into individualism (Buemi, 2014); it is therefore vital to combine online learning tools with activities aimed at developing a collaborative learning environment, for instance by facilitating group/tutorial sessions or by moderating group discussions. This work reflects on and evaluates the effectiveness of in-house developed online tools to assess students’ learning and to provide individual feedback. Online tools for formative and summative assessments were tested over five cycles and different programmes/modules. The in-house online tools have the following features: a) they are computer-marked and can be used for self-assessment; b) they provide personalised feedback; c) they are flexible (shuffling, generating ‘infinite’ questions, etc.); d) they can be used for surveys, for example peer-to-peer review in group working; e) they have an embedded system that compares students’ performance and attendance. Since the integration of these tools into the engineering programmes at the University of Exeter, the number of students failing modules in years 1 and 2 has dropped by 76%. With a large cohort of students (e.g. >300), the online tools have proved very effective: they enable instant, high-quality feedback to students so that they have time to assimilate and improve. Tailor-made feedback is based on the student’s academic performance, addressing specific needs and supporting their learning and progression.
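The Exeter tools themselves are in-house and not published with this abstract, so the following Python fragment is only a minimal sketch of the behaviour described in features a) to c): parameterised (‘infinite’) question generation, shuffling, and computer marking with instant, personalised feedback. Every name and the arithmetic item format are hypothetical, not details of the actual tools.

    import random

    def make_question(rng):
        # Feature (c): parameterised items give an effectively 'infinite' bank.
        a, b = rng.randint(2, 9), rng.randint(2, 9)
        return {"prompt": f"What is {a} x {b}?", "answer": a * b}

    def mark_attempt(question, response):
        # Features (a) and (b): computer marking with instant, tailored feedback.
        if response == question["answer"]:
            return True, "Correct - well done."
        return False, f"Not quite: the expected answer was {question['answer']}."

    rng = random.Random(2019)
    quiz = [make_question(rng) for _ in range(5)]
    rng.shuffle(quiz)  # feature (c): question order can differ between students

    correct, feedback = mark_attempt(quiz[0], 42)
    print(quiz[0]["prompt"], "->", feedback)

The same pattern (generate, shuffle, mark, feed back) can also be pointed at a survey-style item bank, as in feature d).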


Key References
A concise list of references is presented below. Additional references will be provided to the audience during the session.
Marsico, M. R. (2015). University of Exeter Education Strategy 2015-2020. http://www.exeter.ac.uk/about/vision/educationstrategy/aims/researchinspiredinquiry-ledlearning/drmariarosariamarsico/
COLLABORATE: working with employers and students to design assessment enhanced by the use of digital technologies. JISC Grant Funding 5/11 - Programme – Strand A, £456,926.44 (2011-2013).
Faculty Focus: Higher education teaching strategies from Magna Publications, professional development for higher education, a series of articles, 2014.
Buemi, S. (2014). Taking the tech out of technology. In The Teaching Professor, 28.5, Magna Publications.

Parallel Session 1 Chair: Nicola Reimann Time: 11:20 - 11:50 Date: 26th June 2019 Location: Room 7
12 - What went wrong? Students’ and lecturer reflection on why face-to-face feedback was ineffective
Tina Harvey
University of Cumbria, Carlisle, United Kingdom
Conference Theme
Developing academic integrity and academic literacies through assessment
Abstract
Publications from researchers such as Nutbrown, Higgins and Beesley (2016), Winstone et al. (2017) and Carless and Boud (2018) are still trying to determine the best methods of providing assessment feedback and what influences student engagement with the process. There is also an increasing amount of literature demonstrating a need to move away from feedback dialogue being a one-way process towards one that involves the student as an equal partner (Ajjawi & Boud, 2017). Sambell et al. (2017) passionately defend their belief that students need to be effectively engaged with feedback practices if they are to become successful and autonomous learners, and the authors claim this is possible through participation in active dialogue with tutors. This claim is further supported by Chalmers, Mowat and Chapman (2018), who identify the benefits of marking and providing feedback face-to-face with students. A cohort of 23 second-year undergraduate students were invited to receive individual, face-to-face summative assessment feedback from the module tutor. All 23 students participated, and all 23 expressed the opinion that this process was much more useful than receiving electronic written feedback (the current pedagogical practice). Students stated that it was beneficial to be able to ask questions and ensure that they understood the advice being given. However, for a high percentage of these students, the following assessment did not indicate that the face-to-face feedback advice had been applied. The failure rate was significantly greater than had been anticipated, and the researcher was left wondering whether this was a result of low academic literacy skills. To gain a clearer understanding of the unexpected failure rate, a qualitative study was undertaken to elicit the evidence needed to explain why this approach appeared not to be as effective as the findings of Chalmers, Mowat and Chapman (2018) and the claim of Sambell et al. (2017) suggested. This presentation will discuss the findings of individual interviews undertaken


with the students, reflect on the diverse perspectives of dialogic feedback, and analyse where and why the process of face-to-face feedback was ineffective. Finally, the presentation will highlight the next steps in this ongoing project and how improvements can be made.
Key words
Feedback dialogue; Assessment feedback; Engagement with feedback.
Key References
Ajjawi, R. & Boud, D. (2017). ‘Researching feedback dialogue: an interactional analysis approach.’ Assessment & Evaluation in Higher Education, 42(2), pp. 252-265.
Carless, D. & Boud, D. (2018). ‘The development of student feedback literacy: enabling uptake of feedback.’ Assessment & Evaluation in Higher Education, 43(8), pp. 1315-1326.
Chalmers, C., Mowat, E. & Chapman, M. (2018). ‘Marking and providing feedback face-to-face: Staff and student perspectives.’ Active Learning in Higher Education, 19(1), pp. 35-45.
Nutbrown, S., Higgins, C. and Beesley, S. (2016). ‘Measuring the Impact of High Quality Instant Feedback’, Practitioner Research in Higher Education, 10(1), pp. 130-139.
Sambell, K., Brown, S., & Graham, L. (2017). Professionalism in practice: Key directions in higher education learning, teaching and assessment. London: Palgrave Macmillan.
Winstone, N. E., Nash, R. A., Rowntree, J., & Parker, M. (2017). ‘It'd be useful, but I wouldn't use it’: barriers to university students’ feedback seeking and recipience. Studies in Higher Education, 42(11), pp. 2026-2041.

Parallel Session 1 Chair: Hilary Constable Time: 11:20 - 11:50 Date: 26th June 2019 Location: Room 9
13 - Assessment as learning: developing student-teacher peer feedback and formative assessment practice
Nigel Quirke-Bolt, Molly Daly
Mary Immaculate College (MIC), Thurles, Ireland
Conference Theme
Addressing challenges of assessment in mass higher education
Abstract
This research study investigated the practice of peer feedback and peer review as an approach to assessment as learning. Peer feedback in this context is a process where students analyse and evaluate the work of their peers and provide and share feedback information with each other, as part of their coursework assessment. This study evaluated the work of a cohort of eighty-one second-year undergraduate post-primary student-teachers on a four-year concurrent initial teacher education (ITE) course, studying a compulsory education module. The students were asked to complete a component of their assessed coursework from this module by engaging in a peer review process. The module lecturers arranged for all the student-teachers to meet together, face-to-face, a week before the deadline for the assignment and facilitated the exchange of the students’ assignments with their peers. This resulted in each student both giving and receiving multiple peer reviews. The student-teachers had a further seven days in which to act and reflect on their peers’ feedback, combine it with their own reflections from having seen and assessed the work of their peers, and make adjustments to improve their


own work, before submitting their assignment to the faculty. On conclusion of this process, the student-teachers were asked to critically reflect on what they had learnt from their peer assessment experience. The results from this study were positive, and gave much food for thought. It was judged that the students’ learning was enhanced through both giving and receiving peer feedback reviews (Nicol, Thomson & Breslin, 2014). The students also found that the feedback they received from each other was often easier to understand and more helpful than the feedback they received from their lecturers (Topping, 1998; Falchikov, 2005). The peer feedback obtained by the students from groups of peers was found to be of particular benefit (Topping, 1998). The quantity and variety of information provided by multiple peer review conversations gave students a greater chance of receiving the quality feedback that they needed, which was relevant to them, and which they could usefully draw upon to improve their work (Cho & MacArthur, 2011). The insights gained from this research study provide an example of how the practice of assessment as learning can enhance students’ learning and how it can be adopted into educational modules on an initial teacher education programme.
Key References
Cho, K. & MacArthur, C. (2011). Learning by Reviewing. Journal of Educational Psychology, 103(1): 73–84.
Falchikov, N. (2005). Improving Assessment through Student Involvement. London: Routledge–Falmer.
Nicol, D., Thomson, A. & Breslin, C. (2014). Rethinking feedback practices in higher education: a peer review perspective. Assessment & Evaluation in Higher Education, 39(1): 102-122.
Topping, K. (1998). Peer Assessment between Students in Colleges and Universities. Review of Educational Research, 68(3): 249–276.

Parallel Session 1 Chair: Natasha Jankowski Time: 11:20 - 11:50 Date: 26th June 2019 Location: Room 10
14 - Is it time to moderate moderation? UK academic staff perceptions of the effectiveness and location of different moderation strategies
Andy Lloyd
Cardiff University, Cardiff, United Kingdom
Conference Theme
Addressing challenges of assessment in mass higher education
Abstract
The role played by different moderation methods in helping to secure and protect the academic standards of taught degrees is an under-researched area of practice within UK higher education (Bloxham, Hughes and Adie, 2015). A range of different methods are regularly and routinely used, although comparatively little attention appears to have been given to the choice of methods used and their potential effectiveness. This paper will present and discuss data collected on academics’ perceptions of the different methods of moderation and the related processes used to assure and safeguard academic standards. The data were collected from an exercise undertaken by participants who attended the professional development course on the role and nature of external examining, the course being led by Advance HE as part of the Office for Students (OfS) sponsored ‘Degree


Standards Project’. The project was commissioned to identify ways of strengthening the role of the external examiner, a key and distinctive element of the UK quality assurance system, in light of evidence that indicated a need for external examiners to be better placed to safeguard academic standards (HEFCE, 2015). Specifically, the paper will present the outcomes from an exercise in which course participants worked in small groups to position cards on a grid to indicate their views on a) the relative ‘effectiveness’ of different methods and b) their ‘location’ (i.e. whether they are ‘internally’ focused, or involve ‘externality’). The 21 different processes listed on the cards were then grouped into six categories, using an adaptation of the framework set out by Bloxham, Hughes and Adie (2015). The paper will present the results from this exercise and a preliminary analysis of the main findings. It will conclude by considering what the outcomes from this exercise might mean, both for the moderation processes that could best be utilised across the sector to help reduce some of the variation in academic standards identified between academic staff, both within and between different institutions (Bloxham and Price, 2015), and for what further support external examiners might benefit from to improve their assessment literacy (Medland, 2015).
Key References
Bloxham, S. and Price, M. (2015). External examining: fit for purpose? Studies in Higher Education, 40(2), pp. 195-211.
Bloxham, S., Hughes, C. & Adie, L. (2015). What’s the point of moderation? A discussion of the purposes achieved through contemporary moderation practices. Assessment and Evaluation in Higher Education, 41(4), pp. 638-653.
Higher Education Academy (2015). A review of external examining arrangements across the UK: Report to the UK higher education funding bodies.
Medland, E. (2015). Examining the assessment literacy of external examiners. London Review of Education, 13(3), pp. 21-33.
Smith, C. (2012). Why should we bother with assessment moderation? Nurse Education Today, 32(6): e45-8.

Parallel Session 1 Chair: Kimberly Ondo Time: 11:20 - 11:50 Date: 26th June 2019 Location: Room 11
15 - Responsibility sharing in the feedback process: Perspectives of educators
Edd Pitt1, Naomi Winstone2, Rob Nash3
1 University of Kent, Canterbury, United Kingdom. 2 University of Surrey, Guildford, United Kingdom. 3 Aston University, Birmingham, United Kingdom
Conference Theme
Addressing challenges of assessment in mass higher education
Abstract
The impact of feedback on learning is driven by what students, not only educators, do (Carless, 2015). Proposing a culture of shared responsibility in the giving and receiving of feedback, Nash and Winstone (2017) argued that students need to be empowered to take more proactive roles in feedback processes. However, educator-centred models of feedback continue to dominate practice (Winstone & Boud, 2018). Shifting practice towards student-centred models of feedback demands a better understanding of how educators view their own and their students’ responsibilities. In total, 216 lecturers from UK universities


answered two open-ended questions concerning their beliefs about (1) the responsibility of the educator and (2) the responsibility of the student in the feedback process. Content analysis of their responses revealed five themes representing perceptions of educators’ responsibilities: grade justification; provision of comments; facilitation of students’ development; affective awareness; and following policy and procedures. Furthermore, there were six themes representing educators’ perceptions of students’ responsibility: process comments; follow guidelines; engage in reflection; enact comments; seek clarification; and engage in dialogue. By comparing the prevalence of these codes, we found a predominance of educator-centred over student-centred models of feedback. In particular, responses that conveyed transmission-focused perceptions of educators’ responsibility—focused on the mere provision of comments—were significantly more common than responses that conveyed the student-focused model of facilitating students’ development. Similarly, when considering students’ responsibility, educators made reference to the basic processing of comments significantly more often than to the enactment of comments. We supplemented this with a linguistic analysis of the words these educators used when describing their own and their students’ responsibilities in the feedback process. This analysis, conducted using LIWC (Linguistic Inquiry and Word Count; Pennebaker et al., 2015) software, revealed that when describing their own responsibilities in the feedback process, educators’ language was characterised by more certain, emotionally positive, power-related and causal language than when they described students’ responsibilities. Taken together, these findings indicate a predominance of transmission-focused models of feedback processes among university educators. When describing students’ responsibilities, educators used tentative language, and they were more likely to identify the importance of students’ basic processing of comments than to mention their proactive enactment of comments. Facilitating student-centred approaches to feedback may benefit from educators and their students engaging in dialogue about student enactment of comments, in order to develop a sense of shared responsibility in the feedback process.
Key References
Carless, D. (2015). Excellence in University Assessment: Learning from Award-Winning Practice. Abingdon, UK: Routledge.
Nash, R. A., and Winstone, N. E. (2017). Responsibility-sharing in the giving and receiving of assessment feedback. Frontiers in Psychology, 8, 1519.
Pennebaker, J. W., Boyd, R. L., Jordan, K., and Blackburn, K. (2015). The development and psychometric properties of LIWC2015. Austin, TX: University of Texas at Austin.
Winstone, N. E., & Boud, D. (2018). Exploring cultures of feedback practice: The adoption of learning-focused feedback practices in the UK and Australia. Higher Education Research and Development. DOI: 10.1080/07294360.2018.1532985

Parallel Session 2 Chair: Maria Valero Time: 12:00 - 12:30 Date: 26th June 2019 Location: Room 9
16 - Bringing Accounting to life through iterative curriculum design and assessment for learning: A case study in enhancing student performance
Godfred Afrifa, Judy Cohen
University of Kent, Canterbury, United Kingdom
Conference Theme
Assessment for learning: feedback and the meaning and role of authentic assessment


Abstract
It had become apparent that student performance and satisfaction in a compulsory introductory accounting module were poor. Module evaluations of teaching were low, and it was evident from performance in the second year that students were not adequately learning the introductory material. A review of the curriculum showed that, due to the volume of material in this introductory course, practice in a key learning outcome was not introduced until around week 20 of the 24-week module. This meant that students received inadequate practice in a key outcome of the module; notably, one which receives around 50% of the exam weighting. As this is a core module in a professionally accredited programme, there is little flexibility to adjust assessment weighting or curriculum content. Taking inspiration from a similarly professionally accredited programme with a dense curriculum (Pharmacy), it was decided to apply principles of the spiral curriculum (Bruner, 1960; Harden & Stamper, 1999) and to introduce the key outcome of producing financial statements much earlier in the module (week 5). This allows students to become familiar with the idea, and to experience the task building in complexity as more and more module content is taught. Weekly seminar tasks now provide earlier and regular practice in these key tasks, which are constructively aligned to the learning outcomes and the main summative assessment. Immediate feedback on each task is provided during the teaching session, and students have the opportunity to discuss with peers as well as teaching staff, thus meeting elements of the assessment for learning principles put forward by Gibbs and Simpson (2004). By applying a spiral and constructively aligned curriculum (Harden & Stamper, 1999; Biggs, 2003), it is anticipated that students will acquire and retain the curriculum content through regular, meaningful practice and repetition throughout the module. Early results indicate that student performance has improved; whether this is confirmed will be seen in the final data available after the end of the module. By analysing and comparing student performance (adjusted for entry grades) by cohort, the effectiveness of the revised curriculum delivery will be examined and further illuminated by qualitative and survey data. Future work will explore the roles of the spiral curriculum and formative assessment in enhancing confidence and performance.
Key References
Biggs, J. B. (2003). Teaching for quality learning at university. Buckingham: The Open University Press.
Bruner, J. S. (1960). The Process of Education. Cambridge: Harvard University Press.
Gibbs, G. & Simpson, C. (2004). Conditions under Which Assessment Supports Students’ Learning. Learning and Teaching in Higher Education, 1, pp. 3-31.
Harden, R. M. & Stamper, N. (1999). What is a spiral curriculum? Medical Teacher, 21: 141–3.

Parallel Session 2 Chair: Jill Barber Time: 12:00 - 12:30 Date: 26th June 2019 Location: Room 2
17 - Introducing marked rubrics to enhance the student experience: One programme’s journey to improve consistency
Julie Taylor1, Andrea Charters1, Elizabeth Bates2
1 University of Cumbria, Carlisle, United Kingdom. 2 University of Cumbria, Carlisle, United Kingdom



Conference Theme
Leading change in assessment and feedback at programme and institutional level
Abstract
In the transition to online marking, our use of paper-based marking grids was temporarily lost. This loss coincided with reduced student satisfaction reports on the fairness, transparency and consistency of our assessment and feedback procedures. Additionally, colleagues reported that the burden of marking had increased. To remedy the situation, the psychology staff and academic technicians began a research project seeking to develop a marking process to address these issues. The process was designed around the quantitative Turnitin rubric tool, augmented by reference to best-practice research findings from the literature. The literature suggested that, to be effective, rubrics needed to be task-specific, explicit and part of the teaching and learning experience (Andrade, 2009; Fraile, 2017; Panadero, 2014; Reddy, 2010; Rezaei & Lovorn, 2010; Sundeen, 2014). Over the past two years we have developed a series of developmental, level- and task-specific rubrics and a standardised approach to feedback. In parallel we have collected and responded to student, staff and external examiner feedback. This presentation describes our current process and the challenges and opportunities faced by staff and students from its inception to our present position. The data reported were collected during focus groups, in-class activities, questionnaires and institutional quality procedures. Initial findings suggest that, provided the rubrics are used consistently and as part of a programme-level teaching and learning strategy, they are positively received by students and staff alike, enhancing student confidence in procedures and reducing the burden of marking for staff. Moreover, there have been several unintended positive consequences, for example their utility in academic tutorials and in reflective peer and self-assessment tasks. However, an unintended negative outcome was the response of students who were introduced to rubrics part-way through their academic journey. The implications for future research and development will also be discussed.
Key References
Andrade, H. (2009). Promoting Learning and Achievement Through Self-Assessment. Theory Into Practice, 48(1), 12-20.
Fraile, J. (2017). Co-creating rubrics: The effects on self-regulated learning, self-efficacy and performance of establishing assessment criteria with students. Studies in Educational Evaluation, 53, 69-77.
Panadero, E. (2014). To rubric or not to rubric? The effects of self-assessment on self-regulation, performance and self-efficacy. Assessment in Education: Principles, Policy & Practice, 21(2), 133-149.
Reddy, Y. M. (2010). A review of rubric use in higher education. Assessment & Evaluation in Higher Education, 35(4), 435-449.
Rezaei, A. R. & Lovorn, M. (2010). Reliability and validity of rubrics for assessment through writing. Assessing Writing, 15(1), 18-39.
Sundeen, T. H. (2014). Instructional rubrics: Effects of presentation options on writing quality. Assessing Writing, 21, 74-89.
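By way of illustration only, a quantitative, matrix-style rubric of the kind described above reduces to a weighted mapping from the level chosen for each criterion to an overall mark. The criteria, weights and level marks in this Python sketch are invented; they are not taken from the Turnitin tool or from the programme’s own rubrics.

    # Entirely hypothetical criteria, weights (summing to 1.0) and level marks.
    RUBRIC = {
        "argument":     {"weight": 0.40, "levels": [0, 40, 55, 65, 80]},
        "evidence":     {"weight": 0.35, "levels": [0, 40, 55, 65, 80]},
        "presentation": {"weight": 0.25, "levels": [0, 40, 55, 65, 80]},
    }

    def rubric_mark(selected):
        # selected maps each criterion to the level (0-4) chosen by the marker;
        # the overall mark is the weighted sum of the corresponding level marks.
        return round(sum(spec["weight"] * spec["levels"][selected[crit]]
                         for crit, spec in RUBRIC.items()))

    # e.g. level 3 for argument, 2 for evidence, 4 for presentation -> 65
    print(rubric_mark({"argument": 3, "evidence": 2, "presentation": 4}))

Making the weights and level marks explicit in this way is part of what supports the consistency and transparency the presenters describe.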



Parallel Session 2 Chair: Peter Holgate Time: 12:00 - 12:30 Date: 26th June 2019 Location: Room 3
18 - Informing Change Through Quality Assurance and Co-Curricular Assessment
Ashley Babcock, Tara Lehan
Northcentral University, San Diego, USA
Conference Theme
Leading change in assessment and feedback at programme and institutional level
Abstract
Co-curricular departments such as learning centers can play an important role in students attaining academic success (Waryas, 2015). To that end, to show a relation to teaching and learning on the curricular side, it is important to assess student learning both inside and outside the classroom (Busby, 2015). However, learning centers can differ greatly across institutions. This lack of a consistent identity makes assessing student outcomes across learning centers difficult (Lehan, Hussey, & Shriner, 2018). One way to combat the lack of consistency in learning center assessment is to align it with curricular assessment. By focusing on the same outcomes and using a similar instrument, this alignment can help to inform continuous improvements as well as hold faculty and learning center professionals accountable for consistent student interactions (Zilvinskis, 2015). At a learning center at one completely online university, academic coaches support student learning in written communication and quantitative reasoning in various forms, including live one-to-one sessions. In order to assess competence during academic coaching, a data-collection instrument was created to capture the skills being coached, the strategies used and the next steps that should take place, as well as identifying the Bloom’s Taxonomy level of competence for each skill. Building on the instrument, a quality assurance protocol was developed and implemented to ensure that the academic coaches engage in meaningful interactions with students in a consistent way. A rubric jointly developed by center leadership and academic coaches highlights strengths and growth areas to assist with the development of additional training. Collaborative rubric development can ensure that agreed-upon outcomes are consistently evaluated among educators (Willett, Iverson, Rutz, & Manduca, 2014), as well as designed with intention and a clear purpose that aligns with larger institutional outcomes (Jenkins & Allen, 2017). These processes are sufficiently flexible that they can easily be adapted and implemented at another learning center or department at any higher education institution. In this session, we will briefly describe the expectations of academic coaches, including how they should engage with students to promote learning and success and document what occurred in a session. Moreover, we will discuss how the data are employed towards continuous improvement in the learning center. The majority of the session will focus on the quality assurance protocol, particularly how it is leveraged at both the individual and the aggregate level.
Key References
Busby, K. (2015). Co-curricular outcomes assessment and accreditation. New Directions for Institutional Research, 2014(164), 39–50. doi: 10.1002/ir.20114
Jenkins, D. M., & Allen, S. J. (2017). Aligning instructional strategies with learning outcomes and leadership competencies. New Directions for Student Leadership, 2017(156), 43–58. doi: 10.1002/yd.20270
Lehan, T. J., Hussey, H. D., & Shriner, M. (2018). The influence of academic coaching on persistence in online graduate students.
Mentoring & Tutoring: Partnership in Learning, 26(3), 289-304. doi: 10.1080/13611267.2018.1511848


Waryas, D. E. (2015). Characterizing and assessing co-curricular activities for graduate and professional-school students: Exploring the value of intentional assessment planning and practice. New Directions for Institutional Research, 2014(164), 71–81.
Willett, G., Iverson, E. R., Rutz, C., & Manduca, C. A. (2014). Measures matter: Evidence of faculty development effects on faculty and student learning. Assessing Writing, 20, 19–36. doi: 10.1016/j.asw.2013.12.001
Zilvinskis, J. (2015). Using authentic assessment to reinforce student learning in high-impact practices. Assessment Update, 27(6), 7–13. doi: 10.1002/au30040

Parallel Session 2 Chair: Jack Walton Time: 12:00 - 12:30 Date: 26th June 2019 Location: Room 4
19 - Students’ survey reloaded: An attempt for the Italian higher education system
Serafina Pastore, Amelia Manuti
University of Bari, Bari, Italy
Conference Theme
Leading change in assessment and feedback at programme and institutional level
Abstract
This paper reports on a study that aimed to review the questionnaire currently used in the national students’ survey in Italy and to propose a new version more closely aligned with the international framework of quality assurance. Student surveys are designed and implemented to gather valid and reliable evidence on the strengths and weaknesses of higher education institutions. Despite unresolved theoretical and psychometric issues (e.g., the focus on teacher efficacy, student perceptions, or student satisfaction), student surveys can serve different assessment purposes, for different stakeholders, and at different levels. The most recent reforms in the higher education field encourage teachers to incorporate a range of assessment practices that can respond to accountability and quality assurance requests and be responsive to students’ learning needs. In Italy, despite the importance of gathering students’ feedback to provide information about what students have gained through their engagement with the higher education system, the quality assurance process has remained mostly unchanged. Student compliance behaviour and a strong sense of disaffection represent some of the growing malpractices in the Italian quality assurance system. In view of the theoretical assumptions and methodological issues related to the use of students’ surveys, the present study aimed to develop a new questionnaire. Starting from a preliminary literature review, a comparative and contrastive analysis of the main national students’ surveys was carried out; more specifically, the National Student Survey (UK) and the Course Experience Questionnaire (Australia) were considered. This analysis highlighted the need for a questionnaire that could balance organizational and managerial needs with those relating to the most innovative teaching-learning aspects (such as, for example, reference to the Dublin Descriptors during the instructional design phase or in the assessment of learning outcomes).


The new questionnaire is made up of 33 items related to five different dimensions:
- Organization and teaching;
- Assessment and quality assurance;
- Learning support;
- Learning resources;
- Dublin Descriptors.
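As an illustration of the internal-consistency check reported in the validation paragraph that follows, this Python sketch computes Cronbach's alpha for the items of one dimension. The responses and the number of items per dimension are invented, not drawn from the study's data.

    import numpy as np

    def cronbach_alpha(scores):
        # scores: one row per respondent, one column per item in a dimension.
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]
        item_variances = scores.var(axis=0, ddof=1).sum()
        total_variance = scores.sum(axis=1).var(ddof=1)
        # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
        return (k / (k - 1)) * (1 - item_variances / total_variance)

    # Invented Likert-style responses (6 students x 4 items) for one dimension.
    organisation_and_teaching = [
        [4, 5, 4, 4],
        [3, 3, 4, 3],
        [5, 5, 5, 4],
        [2, 3, 2, 3],
        [4, 4, 5, 4],
        [3, 4, 3, 3],
    ]
    print(f"alpha = {cronbach_alpha(organisation_and_teaching):.2f}")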

The validation process for the new questionnaire involved 572 students enrolled in different Bachelor's and Master's degree courses. A correlational analysis and a principal component analysis were performed, and adequate internal consistency was observed for all the dimensions. Notwithstanding its intrinsic limitations, this study represents an important step in shedding light on the use of students’ surveys for quality assurance. These results can be helpful for better understanding incongruences and criticalities in the quality assurance process.
Key References
Alderman, L., S. Towers, and S. Bannah (2012). Student Feedback Systems in Higher Education: A Focused Literature Review and Environmental Scan. Quality in Higher Education, 18(3): 261-280.
Callender, C., P. Ramsden, and J. Griggs (2014). Review of the National Student Survey. Bristol: Higher Education Funding Council for England.
Richardson, J. T. E. (2013). The National Student Survey and Its Impact on UK Higher Education. In S. Mahsood & S. N. Chenicheri (Eds.), Enhancing Student Feedback and Improvement Systems in Tertiary Education (pp. 76-84). CAA Quality Series (5). Abu Dhabi: Commission for Academic Accreditation, UAE.

Parallel Session 2 Chair: Sara Eastburn Time: 12:00 - 12:30 Date: 26th June 2019 Location: Room 7
20 - Learning the language of uncertainty – assessing use of epistemic markers in academic writing within Higher Education
Charlotte Wilson
University of Cumbria, Lancaster, United Kingdom
Conference Theme
Developing academic integrity and academic literacies through assessment
Abstract
Assessment criteria in H.E. place varying weight on the structure, style and overall presentation of academic writing. These components may be typically weighted at 15%, or appear only within grade descriptors under a demonstrated ability to communicate effectively in writing. Assessment criteria and formalised expectations of linguistic expression would benefit from greater consensus in approach in order to deliver consistency and transparency for learners. One aspect of academic writing is the use of epistemic markers – these terms are most relevant where knowledge is disputed and subject to interpretation. The most commonly used markers – adverbs, modal verbs and lexical verbs – set conditions on knowledge claims, and can be analysed according to their frequency, range, clustering and level of commitment to propositions (Vandenhoek, 2018). The use of the epistemic modality – moderating views by either hedging (weakening) or


boosting (strengthening) a knowledge claim – is associated with greater sophistication in building argument, particularly in discursive writing at more advanced levels (Hyland, 1997). However, less is known about their expression in authentic student texts, or about skill acquisition from level 3 to level 4 or throughout undergraduate study (Aull & Lancaster, 2014). The extent to which tutors are aware of the epistemic modality in learners’ written work, or orientate to these expressions, is also unknown. A corpus research study is discussed that explores the use of epistemic markers in balanced samples of work from students of different academic levels and non-native speaker status, in order to improve the rigour of assessment within H.E.
Key References
Aull, L. & Lancaster, Z. (2014). Linguistic Markers of Stance in Early and Advanced Academic Writing: A Corpus-based Comparison. Written Communication. https://doi.org/10.1177/0741088314527055
Hyland, K. (1997). Qualification and certainty in L1 and L2 students’ writing. Journal of Second Language Writing, 6(2), pp. 183-205.
Vandenhoek, T. (2018). Epistemic markers in NS and NNS academic writing. Journal of Academic Writing, 8(1), pp. 72-91.

Parallel Session 2 Chair: Emma Gillaspy Time: 12:00 - 12:30 Date: 26th June 2019 Location: Room 9
21 - Refining Re-assessment
Harvey Woolf1, Wayne Turnbull2, Marie Stowell1
1 University of Worcester, Worcester, United Kingdom. 2 Liverpool John Moores University, Liverpool, United Kingdom
Conference Theme
Leading change in assessment and feedback at programme and institutional level
Abstract
Over the last 30 years the Student Assessment and Classification Working Group (SACWG) and the Northern Universities Consortium (NUCCAT) have been researching various aspects of an all too often overlooked element of institutions’ assessment strategies – re-assessment.[1] The two most recent phases of the investigation focus on the management of re-assessment and the implications of that management for assessment strategies. In many respects institutions manage initial assessment and re-assessment in the same way. However, there are three significant differences:

- Assessment tasks: A different task might be set. Where setting different re-assessment tasks is permitted, this was generally interpreted as requiring students to undertake a different type of assessment – a report rather than a presentation, for example.
- Scrutiny of the assessment process: Re-assessment receives less intense oversight, especially from external examiners, than initial assessment.
- Marking, moderation and feedback windows: The processes and timescales for completing the marking and moderation of re-assessed work, and for providing feedback to re-assessed students, are typically much more telescoped than those for initial assessment.



These differences may be the inevitable consequences of the (summer) scheduling of re-assessment, which imposes pragmatic logistical constraints on the management of the re-assessment process. Such constraints make it impossible to manage re-assessment in exactly the same way that initial assessment is managed, and it is unrealistic to think that the management of re-assessment could completely mirror initial assessment. However, it may be that the purpose and nature of re-assessment are fundamentally different from the purpose and nature of initial assessment, with different assumptions and requirements in such areas as marking, moderation and feedback. If this is the case, then it is right that the management of re-assessment not only will, but should, differ from the management of initial assessment. The UK Quality Code, Advice and Guidance: Assessment[2] draws no distinction between initial assessment and re-assessment. The findings of our research question that advice. The presentation will explore the implications of these two explanations for assessment strategies and regimes overall, and for re-assessment specifically.
Key References
Stowell, Marie, Marie Falahee and Harvey Woolf. 2016. “Academic Standards and Regulatory Frameworks: Necessary Compromises?” Assessment & Evaluation in Higher Education 41(4): 515-31.
Turnbull, Wayne and Harvey Woolf. 2016. "To What Extent Do Re-Assessment, Compensation and Trailing Support Student Success?" NUCCAT, available at https://tinyurl.com/y9kfqhqq (see footnote 3 of that report for the meagre journal literature on re-assessment, to which the following can be added: Ostermaier, Andreas, Philipp Beltz and Susanne Link. 2013. “Do University Policies Matter? Effects of Course Policies on Performance.” Beiträge zur Jahrestagung des Vereins für Socialpolitik 2013: Wettbewerbspolitik und Regulierung in einer globalen Wirtschaftsordnung - Session: University Enrollment and Student Achievement, No. A07-V2; and Tafreschi, Darjusch and Petra Thiemann. 2016. "Doing It Twice, Getting It Right? The Effects of Grade Retention and Course Repetition in Higher Education." Economics of Education Review 55: 198-219.)
Turnbull, Wayne and Harvey Woolf. 2017. “Winning the progression lottery owes more to luck than academic judgement: consequences for students of regulatory variation in the UK HE sector”. NUCCAT, available at https://tinyurl.com/y9ldopjs.
[2] Published 29 November 2018. Available at http://www.qaa.ac.uk/quality-code.

Parallel Session 2 Chair: Rita Headington Time: 12:00 - 12:30 Date: 26th June 2019 Location: Room 10
22 - Using technology to provide feedback to large classes
Susanne Voelkel
University of Liverpool, Liverpool, United Kingdom
Conference Theme
Addressing challenges of assessment in mass higher education
Abstract
Formative assessment in combination with feedback can have a profound impact on student learning (Hattie and Timperley, 2007). To be effective, feedback needs to be timely and specific and provide information to the student on how to close gaps between current and


desired performance (Nicol and Macfarlane-Dick, 2006). However, large classes often make it difficult for teachers to provide opportunities for formative assessment followed by high-quality feedback. The following proposes two methods by which formative assessment can be successfully implemented in large classes. Firstly, in-class quizzes using students’ mobile phones can help to engage students before, during and after class and provide instant feedback to students about their level of understanding. Based on the results of quizzes, teachers can adapt their teaching accordingly. These popular phone polls increase student satisfaction, as expressed in student evaluations and attendance rates (Voelkel and Bennett, 2014). Different audience response systems, such as Poll Everywhere and Kahoot!, are compared. The second approach consists of online tests, which have the advantage that they can be disseminated quickly and can be accessed by students in their own time. They are marked automatically, and therefore the results can be made available immediately. Online tests are not without problems, though. Tests that are entirely formative (i.e. not credit-bearing) tend to be ignored by many students. Summative tests, on the other hand, may lead to student collusion if feedback is provided immediately. A two-stage online test that combines formative with summative testing may be the answer. For each test, students first complete a formative stage, which can be repeated multiple times and provides instant feedback. A second, credit-bearing stage only becomes available once students have reached a certain pre-set level in the first stage. This second stage does not provide instant feedback, but encourages students to complete the test. This test format, where the formative and summative stages are separate but connected through the pre-set threshold, combines the benefits of the formative (plenty of feedback, multiple attempts) with the summative (ensuring a high participation rate). A significant increase in the mean module mark was observed after the introduction of this popular test regime (Voelkel, 2013). The two-stage online test design has proved versatile and can be used across many disciplines.
Key References
Hattie, J., Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81-112.
Nicol, D. J., Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: a model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199-218.
Voelkel, S. (2013). Combining the formative with the summative: the development of a two-stage online test to encourage engagement and provide personal feedback in large classes. Research in Learning Technology, 21, 1-18.
Voelkel, S., Bennett, D. (2014). New uses for a familiar technology: introducing mobile phone polling in large classes. Innovations in Education and Teaching International, 51(1), 46-58.
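The gating logic of such a two-stage test is simple to state in code. The Python sketch below is only an illustration of the design described above; the threshold value, the best-attempt rule and the function names are assumptions for the example, not details of Voelkel's (2013) implementation.

    PASS_THRESHOLD = 0.8  # illustrative; the abstract's pre-set level is not specified

    def best_stage1_score(attempts):
        # Stage 1 is formative: unlimited attempts with instant feedback,
        # and (here) the best score counts towards unlocking stage 2.
        return max(attempts, default=0.0)

    def stage2_unlocked(attempts):
        # Stage 2 (credit-bearing, no instant feedback) opens only once the
        # student's best formative score reaches the pre-set threshold.
        return best_stage1_score(attempts) >= PASS_THRESHOLD

    attempts = [0.45, 0.70, 0.85]  # one student's repeated formative tries
    if stage2_unlocked(attempts):
        print("Summative stage released.")
    else:
        print(f"Keep practising - best so far {best_stage1_score(attempts):.0%}.")

Keeping the two stages separate but linked through the threshold is what delivers both plentiful formative feedback and a high summative participation rate.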


Parallel Session 2 Chair: Eileen O'Leary Time: 12:00 - 12:30 Date: 26th June 2019 Location: Room 11
23 - Academic standards in professional and vocational programmes in Higher Education: marking cultures in a post-1992 university
John Dermo
University of Salford, Salford, Manchester, United Kingdom
Conference Theme
Addressing challenges of assessment in mass higher education
Abstract
The Quality Code for higher education requires that universities have "sufficient appropriately qualified and skilled staff to deliver a high-quality academic experience ... [including] consideration of their knowledge and expertise in assessment" (QAA, 2018:3). In recent decades universities have sought to provide opportunities for academic staff to develop expertise and skills in assessment and feedback practice, but assessment literacy among university teaching staff has been identified as an area in need of improvement (Price et al., 2012; Carless, 2015; Medland, 2016). With regard to marking and maintaining academic standards, university teachers need to be aware of issues around professional judgement, keep up to date with developments in good marking practice, and constantly reflect on their experience (Bloxham, 2012). This can be particularly challenging in an aspect of teaching that is all too often “in part tacit and bordering on the ineffable" (Yorke, 2011:268). This presentation critically discusses initial findings from research investigating how experienced academics working in a vocationally oriented university perceive the development of their assessment literacy through their academic careers. Qualitative data have been collected through semi-structured interviews with experienced academic staff, and a critical thematic analysis has been carried out of the different influences on their current and developing practice in assessment and feedback, with specific reference in this presentation to issues around marking and academic standards. The study investigates the main influences on the development of assessment literacy around marking-related issues, to explore how teaching staff have learned to mark the way they do. The study focuses on a number of key themes constructed from the interview data, including the following influences on marking practice:
- student needs and concerns
- pressures, processes and policy from institutions and the wider sector
- the teacher’s own learning, prior experience, personality and beliefs
- more formal training, development, educational theory and literature
- professional discipline, identity, and external bodies
- collaboration with colleagues, developing a community of marking and overcoming conservatism and tradition.
The presentation concentrates specifically on how marking cultures develop through different forms of engagement between academic and professional colleagues, discussing these influences within the context of a developing model which attempts to explain these marking cultures, moving from personal individual influences, through disciplinary and departmental factors, to institutional, national and global forces in higher education policy and practice.
Key References
Bloxham, S. (2012). ‘You can see the quality in front of your eyes’: grounding academic standards between rationality and interpretation. Quality in Higher Education, 18:2, 185-204.
Carless, D. (2015). Excellence in University Assessment: learning from award-winning practice. Abingdon: Routledge.
Medland, E. (2016). Assessment in higher education: drivers, barriers and directions for change in the UK. Assessment & Evaluation in Higher Education, 41:1, 81-96.
Price, M., Rust, C., O’Donovan, B., Handley, K. and Bryant, R. (2012). Assessment Literacy: the foundation of improving student learning. Oxford: OCSLD.


Quality Assurance Agency (2018). UK Quality Code for Higher Education, Advice and Guidance: Assessment. Available online: http://www.qaa.ac.uk/en/quality-code/advice-and-guidance/assessment
Yorke, M. (2011). Summative assessment: dealing with the ‘measurement fallacy’. Studies in Higher Education, 36:3, 251-273.

Parallel Session 3 Chair: Philip Denton Time: 14:45 - 15:15 Date: 26th June 2019 Location: Room 2
24 - From Essay to Evaluative Conversation: exploring the use of viva voce assessments to facilitate students’ engagement with feedback
Fabio R. Aricò1, Naomi Winstone2
1 University of East Anglia, Norwich, United Kingdom. 2 University of Surrey, Guildford, United Kingdom
Conference Theme
Assessment for learning: feedback and the meaning and role of authentic assessment
Abstract
Facilitating student engagement with feedback processes begins at the point of assessment design (Winstone & Carless, 2019). Given their dialogic nature, viva voce assessment designs have the potential to support student learning through feedback exchanges. Whilst viva voce examinations are common in doctoral assessment across a range of disciplines, this type of assessment is seldom adopted to assess undergraduate students. There are some examples of successful application of this practice to undergraduate assessment in diverse disciplines such as nursing (Davis and Engward, 2018), dentistry (Ganji, 2017), business studies (Pearce and Lee, 2009), and education (Carless, 2002). However, the topic of viva voce assessment appears to be under-researched and not fully conceptualised (Dobson, 2008) in the education literature. This paper presents the preliminary findings from an evaluation of an innovative assessment design aimed at developing students’ critical evaluation and debating skills in a History of Economic Thought (HET) module running in the School of Economics at the University of East Anglia. The assessment structure of the HET module consists of three pieces of summative assessment: (i) a group video-presentation; (ii) a critical essay; and (iii) an evaluative conversation (akin to a viva voce). Whilst the group submission constitutes a stand-alone component of assessment, the critical essay and evaluative conversation are inter-linked: the evaluative conversation is designed to enable students to demonstrate how they have acted upon the feedback received on their critical essay. This assessment design resonates with the work of Carless (2002), where the viva voce served as an appraisal of the essay piece, after its submission but prior to marking. However, our design generates more structured feed-forward dynamics, because students’ engagement with the feedback received on their essay assignment directly affects their performance in the final evaluative conversation. A preliminary analysis of module evaluation data suggests that the dialogic learning experience fostered in this module was impactful and engaging. Students also recognised that the support provided through the process enabled them to perform well in the viva. Ultimately, they felt that their viva performance was aligned with their expectations for this assessment. These preliminary results demonstrate the potential for the use of evaluative conversations in undergraduate assessment designs.
Research to explore student agency over the feed-forward mechanism is currently underway, which will inform our understanding of students’ ability to engage with feedback as a process and opportunity for further learning, rather than as an end outcome.


Key References
Carless, D. R. (2002). The 'Mini-Viva' as a tool to enhance assessment for learning. Assessment and Evaluation in Higher Education, 27(4), pp. 353-363.
Davis, G., and Engward, H. (2018). In defence of the viva voce: Eighteen candidates’ voices. Nurse Education Today, 65, pp. 30-35.
Dobson, S. (2008). Theorising the academic viva in higher education: The argument for a qualitative approach. Assessment and Evaluation in Higher Education, 33(3), pp. 277-288.
Ganji, K. (2017). Evaluation of reliability in structured viva voce as a formative assessment of dental students. Journal of Dental Education, 81(5), pp. 590-596.
Pearce, G., and Lee, G. (2009). Viva voce (oral examination) as an assessment method: Insights from marketing students. Journal of Marketing Education, 31(2), pp. 120-130.
Winstone, N., and Carless, D. (2019, in press). Designing for student uptake of feedback in higher education. Routledge.

Parallel Session 3 Chair: Tina Harvey Time: 14:45 - 15:15 Date: 26th June 2019 Location: Room 3
25 - Student perceptions of assessment accommodations in higher education: An analysis of power
Juuso Nieminen
University of Helsinki, Helsinki, Finland
Conference Theme
Assessment: learning community, social justice, diversity and well-being
Abstract
This study investigates the issues of power that underlie assessment accommodations in higher education. Assessment accommodations, such as extended testing time or a personal room during testing, are commonly referred to simply as a 'menu of services'. However, though often based on warm-hearted intentions, these accommodations are rarely built on evidence-based practice. Also, since they are known to be potentially controversial and even discriminatory, there is a need for analysis of the power structures that underlie them. Three contrasting notions of power (sovereign power, epistemological power and disciplinary power) were used to analyse the experiences of the students themselves. In this study, ten mathematics students with learning and/or mental disabilities shared their experiences of testing accommodations in narrative interviews. A concept-driven qualitative content analysis and a further data-driven coding process followed. According to the results, the students had experienced unfair and shameful moments while participating in modified testing situations, a clear manifestation of unilateral sovereign power. Epistemological power could be identified in the ways in which the students normalised the idea of how mathematical knowledge should be tested. Also, disciplinary power could be seen in the ways in which assessment accommodations helped to construct exclusion through discourse rather than working as inclusive practices enabling equal access to assessment. This study suggests that it is crucial to hear the voices of the students who use the assessment accommodations administered to them, in order to shed light on the power structures that might create inequity and injustice; a process that could be identified in these ten student interviews. To conclude, it is argued that there is a need to further understand the power relations underlying assessment accommodations, rather than framing them as simple, objective practices.



Key References
Barnard-Brak, L., Lechtenberger, D., & Lan, W. L. (2010). Accommodation strategies of college students with disabilities. The Qualitative Report, 15(2), 411–429.
Cohen, A. S., Gregg, N., & Deng, M. (2005). The role of extended time and item content on a high-stakes mathematics test. Learning Disabilities Research and Practice, 20(4), 225–233.
Denhart, H. (2008). Deconstructing barriers: Perceptions of students labeled with learning disabilities in higher education. Journal of Learning Disabilities, 41(6), 483–497.
Foucault, M. (1977). Discipline and punish: The birth of the prison. New York: Vintage Books.
Hanafin, J., Shevlin, M., Kenny, M., & McNeela, E. (2007). Including young people with disabilities: Assessment challenges in higher education. Higher Education, 54(3), 435–448.
Kurth, N., & Mellard, D. (2006). Student perceptions of the accommodation process in postsecondary education. Journal of Postsecondary Education and Disability, 19(1), 71–84.

Parallel Session 3
Chair: Hilary Constable Time: 14:45 - 15:15 Date: 26th June 2019 Location: Room 4

26 - Rethinking the Crit: Changing Assessment in Architecture Schools
Patrick Flynn1, Miriam Dunn2, Maureen O'Connor3, Mark Price4
1TU Dublin, Dublin, Ireland. 2University of Limerick, Limerick, Ireland. 3CIT, Cork, Ireland. 4UCD, Dublin, Ireland

Conference Theme
Leading change in assessment and feedback at programme and institutional level

Abstract
Assessment in architecture and creative arts schools has traditionally adopted a 'one size fits all' approach by using the 'crit', where students pin up their work, make a presentation and receive verbal feedback in front of peers and academic staff. In addition to increasing stress and inhibiting learning, which may have a greater impact depending on gender and ethnicity, the adversarial structure of the 'crit' reinforces power imbalances and thereby ultimately contributes to the reproduction of dominant cultural paradigms. This paper critically examines the role of the 'crit' as the standard method of assessment for architectural students internationally. It examines the pedagogical theory underlying this approach, discusses recent critiques of this hundred-year-old method, and analyses the reality of the 'crit' through an examination of practice. This leads into a discussion of a semester-long piece of action research, conducted this academic year, in which academic staff have piloted new methods of formative and summative student-centred assessment without a 'crit'. Our research project adds blended learning to new assessment methods in a radical approach challenging the dominant pedagogical theory and practice in architecture internationally. Feedback on the pilot from students, academic staff and external examiners has been extremely positive. We are now in the process of expanding this pilot across four Higher Education Institutes, reviewing emergent best practice abroad, and we aim to bring in international experts to evaluate and develop the approach. While our focus will be on architecture, it will be relevant to other creative disciplines which use the 'crit' method. We will also explore digital approaches to support student reflection. This approach has the potential to give students greater agency, enhanced critical faculties, professional skills and resilience, supporting transitions into and out of third-level study. Our first pilot has shown that this new feedback and assessment method uses staff time in a more efficient and effective manner, with the student becoming central to the learning process.

Key References
Anthony, Kathryn (1991). Design Juries on Trial. London: Van Nostrand Reinhold.
Buchanan, Peter (2012). The Big Rethink: Rethinking Architectural Education.
Stevens, Garry (1998). The Favored Circle. MIT Press.
Till, Jeremy (2005). Lost Judgement. EAAE Writings in Architectural Education no. 26.
Wenger, Etienne (1999). Communities of Practice: Learning, Meaning and Identity. Cambridge University Press.
Webster, Helena (2008). Architectural Education after Schön: Cracks, Blurs, Boundaries and Beyond. Journal for Education in the Built Environment, 3(2), December 2008, pp. 63-74.

Parallel Session 3
Chair: Sally Jordan Time: 14:45 - 15:15 Date: 26th June 2019 Location: Room 5

27 - Retaining Students and Designing for Success with Interactive Technologies
Jonathan Hvaal, Victoria Quilter
International College of Management Sydney, Sydney, Australia

Conference Theme
Integrating digital tools and technologies for assessment

Abstract
Retaining students to successful outcomes in higher education is complex and challenging; however, the fundamentals are well documented in the research. The International College of Management Sydney (ICMS) has developed a set of Student Success Levers to lift student outcomes, drawing on decades of literature on the first-year experience, transition pedagogies and the factors most likely to support students successfully through their studies. As part of the Student Success Pilots Project, sessional academics at ICMS were supported in piloting the Success Levers through the planning and implementation of a set of targeted interventions that aimed to reduce failure rates in their courses. This working paper will highlight the importance of interactive technologies, such as H5P and Socrative, as ideal tools for building these interventions. The affordances of interactive content, including H5P's documentation tool, hotspots, multiple-choice quizzes, and interactive video, were leveraged to give students more opportunity to succeed through practices such as: receiving clear guidance on assessment completion; seeing illustrated samples of work aligned to standards-based assessment rubrics; interacting with quizzes and activities that scaffolded learning and feedback; and watching worked examples of common questions to aid preparation for a final exam.



Interim results have revealed successes in all subjects, with reductions of 10% or more in failure rates across some assessments. Despite these successes, the intervention process had some drawbacks, especially in terms of the sustainability of its impact for the small ICMS L&T team. Therefore, this paper will also discuss the importance of building capability in the sessional academic 'champions', helping them to understand why and how the H5P tools can be embedded appropriately, and enabling the champions' sustained support of others at the College.

Key References
Chickering, A. W., & Gamson, Z. F. (1987). Seven Principles for Good Practice in Undergraduate Education. AAHE Bulletin, 39(7), 8-12.
Kift, S., et al. (2010). Transition Pedagogy: A third generation approach to FYE – A case study of policy and practice for the higher education sector. The International Journal of the First Year in Higher Education, 1(1), 1-20.
Lizzio, A. and K. Wilson (2013). Early intervention to support the academic recovery of first-year students at risk of non-continuation. Innovations in Education and Teaching International, 50(2).
McNeill, M., Hvaal, J., Quilter, V., Ridge, L., Iraninejad, M. (2018). Steps toward excellence: Retention pilots for student success. In 3rd Annual TEQSA Conference: A Selection of Papers from the Combined TEQSA Conference & HEQ Forum (pp. 133-158). Retrieved from https://www.hes.edu.au/sites/default/files/uploadedcontent/field_f_content_file/a_selection_of_papers_from_the_combined_teqsa_conference_heq_forum_2018.pdf
Nelson, K., Creagh, T., Kift, S. & Clarke, J. (2014). Transition Pedagogy Handbook: A Good Practice Guide for Policy and Practice in the First Year Experience at QUT. 2nd Edition. Retrieved from http://fyhe.com.au/wp-content/uploads/2012/11/Transition-Pedagogy-Handbook-2014.pdf
Spies, M., McNeill, M., Rekhari, S., Woo, K. (2018). [Re]evaluating practice: measuring what is important, to inform improvements to student outcomes. Proceedings of the Higher Education Research and Development Society of Australasia (HERDSA), Adelaide, 2-5 July.

Parallel Session 3
Chair: Kimberly Ondo Time: 14:45 - 15:15 Date: 26th June 2019 Location: Room 7

28 - "When feedback fails": an exploration of the use of feedback literacies and the utility of language within healthcare education
Sara Eastburn
University of Huddersfield, Huddersfield, United Kingdom

Conference Theme
Developing academic integrity and academic literacies through assessment

Abstract
The 2012 work by Paul Sutton established the specific concept of feedback literacy within the broader context of academic literacies (Sutton, 2012). This work emphasised the significance of the socially constructed nature of the learning-from-feedback situation and explored learner agency within it. The work involved students and academics from the social science disciplines and reported that learners may be unable to read, make sense of and/or act on feedback in the way that was intended by the feedback provider. More recent work by Winstone et al. (2017) and Carless and Boud (2018) has developed the detail of learner agency more explicitly and offers insight into the interplay between the key components of the concept. In particular, the conceptual work by Carless and Boud (2018) suggests that learners who are able to a) understand the meaning, importance and breadth of feedback, b) make their own judgements about the quality of work, and c) appropriately manage the affective aspects of feedback are, consequently, better able to engage effectively with feedback. Within healthcare, effective engagement with feedback is fundamental to lifelong learning and maintaining professional registration. This paper presents empirical data from doctoral-level research. The research adopted an interpretive phenomenological approach and the data gives voice to the "actors" engaged within authentic feedback situations. From a socially constructed theoretical position, this paper will present examples of qualitative data from learners, university-based educators and practice-based educators to explore the concept and utilisation of overt and covert feedback literacies within healthcare education. It will specifically explore the use of and problems associated with language within feedback, the notion of "game playing" within the practice of feedback that challenges the transparency of feedback literacy, and the practical dichotomies faced by educators. This paper will present data which suggests how and why feedback might fail the healthcare learner in developing the agentic feedback literacy skills presented by Carless and Boud (2018) that are fundamental to professional practice, and will offer suggestions as to how these failings might be addressed.

Key References
Carless, D. and Boud, D. (2018) "The development of student feedback literacy: enabling uptake of feedback", Assessment and Evaluation in Higher Education, 43(8), pp. 1315-1325.
Sutton, P. (2012) "Conceptualizing feedback literacy: knowing, being and acting", Innovations in Education and Teaching International, 49(1), pp. 31-40.
Winstone, N. E., Nash, R. A., Parker, M. and Rowntree, J. (2017) "Supporting Learners' Agentic Engagement with Feedback: A Systematic Review and a Taxonomy of Recipience Processes", Educational Psychologist, 52(1), pp. 17-37.

Parallel Session 3
Chair: Sam Elkington Time: 14:45 - 15:15 Date: 26th June 2019 Location: Room 9

29 - An alternative model to assessment grading tools: The Continua model of a Guide to Making Judgments
Peter Grainger
University of the Sunshine Coast, Sippy Downs, Australia

Conference Theme
Leading change in assessment and feedback at programme and institutional level

Abstract
Assessment is a key driver in fostering student learning and "is one of the most significant influences on students' experiences of higher education and all that they gain from it" (Boud and Associates, 2010, p. 1). A major source of student dissatisfaction in higher education relates to assessment, and assessment rubrics are often criticised by tertiary students as being vague, subjective and difficult to understand. Despite the emphasis on assessment practices in higher education in the last decade in particular, and a focus on the use of grading tools such as rubrics, "there is a dearth of empirical research on the quality of rubrics as assessment instruments" (Humphry & Heldsinger, 2014, p. 254). Some research (Humphry & Heldsinger, 2014) has indicated that many rubrics inadvertently compromise the integrity of the assessor's judgment. Although there is considerable research about rubrics and their impact, there is little systematic research about either the design aspects of rubrics (Panadero & Jonsson, 2013) or the development process used to establish their quality (Reddy & Andrade, 2010). The most commonly used model of an assessment rubric in tertiary education, and hence the most commonly criticised model, is the matrix model. There are, however, many different models of what I refer to as grading tools: for example, marking guides, criteria sheets, Guides to Making Judgments (GTMJ), holistic rubrics, or simply rubrics. All of these are grading tools used by an assessor to evaluate (summative) student performance and assign a grade. This research reports on the efficacy of the Continua model of a Guide to Making Judgments (GTMJ). To my knowledge this model is used in just a handful of universities in Queensland and is restricted to teacher education programs, and there is little published research about its efficacy. The point of difference lies in its design features: specifically, its focus on nestedness, its use of poles of quality rather than fixed points, its ability to define a different number of standards for each criterion independently, and its focus on identifying only discerning or discriminating behaviours using action verbs (standards descriptors) for each of the criteria identified in any specific assessment task. The research reported here suggests there is utility in considering and researching the efficacy of alternative models of assessment grading tools such as the Continua model of a Guide to Making Judgments.

Key References
Brookhart, S. M. and Chen, F. (2015). The quality and effectiveness of descriptive rubrics. Educational Review, 67(3), 343–368. http://dx.doi.org/10.1080/00131911.2014.929565
Humphry, S. and Heldsinger, A. (2014). Common Structural Design Features of Rubrics May Represent a Threat to Validity. Educational Researcher, 43(5), pp. 253–263. DOI: 10.3102/0013189X14542154
Grainger, P. and Weir, K. (2016). An alternative grading tool for enhancing assessment practice and quality assurance in higher education. Innovations in Education & Teaching International, 53(1), pp. 73-83. http://dx.doi.org/10.1080/14703297.2015.1022200
Panadero, E., and Jonsson, A. (2013). "The Use of Scoring Rubrics for Formative Assessment Purposes Revisited: A Review." Educational Research Review 9: 129–144. doi:10.1016/j.edurev.2013.01.002.

Parallel Session 3
Chair: Silke Lange Time: 14:45 - 15:15 Date: 26th June 2019 Location: Room 10

30 - Talking to the Teachers: how they observe gender at play in group work
Caroline Sheedy
Dundalk Institute of Technology, Dundalk, Ireland

Conference Theme
Assessment: learning community, social justice, diversity and well-being

Abstract
This work presents an introduction to a series of discussions with lecturers focusing on their experiences and observations of how gender impacts group work for students. The lecturers work primarily with computing students in hegemonically male groups, and it is common for the groups to have no women by final year. Increasingly, assessment of these students is moving towards group assessments. Additionally, industry feedback places emphasis on so-called 'soft skills', which are developed to a large degree through group interactions in these assessments. The lecturers are asked to reflect upon the importance of communication within these groups and, from their experience, the impact that the gender imbalance can have. The question is posed as to whether they observe that communication is altered by the presence of a woman in a group, and if so, how. They are then asked to consider the value of talking within these groups, and the dynamics they observe in student groups regarding individuals' desire to have a voice. In particular, they are asked to consider whether there are any gendered aspects to the interactions at play when giving a voice to one another within groups. The lecturers are then asked to consider the implications of hypothetically assigning groups to achieve a gender balance in each group, particularly at final year, and how this would be received by students. The impact of diversity on academic success, which is measured via assessment, is also discussed. Finally, they are asked to reflect on whether they experience any personal impact of working with these hegemonically male groups, and if so, what that is. This study aims to understand how the hegemonic masculinity present in many STEAM courses at third level is observed by those leading the learning: the lecturers. The study focuses in particular on those who observe the interaction of students assessed in capstone group projects, which represent a significant aspect of their overall degree. It is part of a continuing series of empirical work focusing on the implications of the masculinisation of technical subjects and the resulting far-reaching impact this has.

Key References
American Psychological Association (2018). "Guidelines for psychological practice with boys and men."
Sheedy, Caroline (2018). Hegemony and assessment: the student experience of being in a male homogenous higher education computing course. Practitioner Research in Higher Education, 11(1), pp. 59-69.
Pryor, John (1995). "Gender Issues in Groupwork — a case study involving work with computers." British Educational Research Journal, 21(3), 277-288.

Parallel Session 3
Chair: Dave Darwent Time: 14:45 - 15:15 Date: 26th June 2019 Location: Room 11

31 - Learning from rejection: Academics' experiences of peer reviewer feedback and the development of feedback literacy
Karen Gravett
University of Surrey, Guildford, United Kingdom

Conference Theme
Assessment for learning: feedback and the meaning and role of authentic assessment

Abstract
Writing for publication is a key part of an academic career. However, this practice can be fraught with difficulties, particularly at the peer review stage, where journal rejection rates are often high and the publication process competitive. Peer reviewers' feedback can be particularly problematic as it will often elicit an affective response, may be inconsistent between reviewers, and may also be difficult to decipher or to action. Yet with publication being so important to academics' career progression, as well as to the development of their scholarly identities, how staff manage, learn, and develop from these experiences becomes an interesting area to examine.


The issues surrounding peer review and academics' experiences of rejection have been addressed in recent literature from a number of approaches across the disciplines. Yet, while the literature has attended to the debilitating consequences of this aspect of academic life, it has not adequately explored how individuals or departments may use rejection as a learning event that can contribute to the development of feedback literacy skills or of adaptive capacity within individuals. Furthermore, academics' development of feedback literacy in this area of their work will also impact upon their work supporting students' feedback literacy development, and the relationship between these two areas is also of interest. Through concept-map-mediated interviews with six academics working in a UK university, we explored their insights after receiving publication rejections and peer reviewers' feedback. Our research shows that academics have developed effective strategies to manage and to learn from feedback over time. These experiences also impact upon academics' work with students, enabling staff to empathise with the emotive aspects of feedback, as well as motivating staff to invest time in creating constructive and dialogic feedback experiences. However, our interviews also highlighted unresolved tensions within the peer-review process, particularly with the monologic method of delivery of reviewer feedback. Ultimately, this presentation will examine how this area of feedback impacts upon both staff and student feedback literacies, and will consider the parallels we might draw between staff and student experiences of feedback recipience.

Key References
Carless, D. and Boud, D. (2018) 'The development of student feedback literacy: enabling uptake of feedback'. Assessment & Evaluation in Higher Education, 43(8), 1315-1325. DOI: 10.1080/02602938.2018.1463354.
Dobele, A. R. (2015) 'Assessing the quality of feedback in the peer-review process'. Higher Education Research & Development, 34(5), 853-868. DOI: 10.1080/07294360.2015.1011086.
Hattie, J. and Timperley, H. (2007). 'The Power of Feedback'. Review of Educational Research, 77(1), 81-112. DOI: 10.3102/003465430298487.
Hyland, K. (2011). 'Welcome to the machine: Thoughts on writing for scholarly publication'. Journal of Second Language Teaching and Research, 1(1), 58–68. DOI: 10.5420/jsltr.01.01.3319.
Horn, S. (2016). 'The social and psychological costs of peer review: stress and coping with manuscript rejection'. Journal of Management Inquiry, 25(1), 11-26.
Kinchin, I. M. and Winstone, N. E. (2017). Pedagogic frailty and resilience in the university. Rotterdam: Sense.

Poster & Pitch Session
Chair: Geraldine O'Neill Time: 15:40 - 16:40 Date: 26th June 2019 Location: Room 3

32 - Issues in Assessing English as a Foreign Language Speaking Skills: A Case of Saudi University Students
Nawal Almuashi
Bangor University, Bangor, United Kingdom

Conference Theme
Addressing challenges of assessment in mass higher education



Abstract
Speaking is one of the most essential skills of language usage. In general, teaching and testing speaking ability is both difficult and time consuming. This study focuses on the different methods used to assess English as a foreign language (EFL) speaking ability in Saudi universities. It starts by presenting a brief summary of the methods of testing speaking skills used in Saudi universities. It then discusses several of the problems and difficulties associated with assessing EFL speaking skills and describes some of the practical methods used; for example, it explains how computers can be used effectively when assessing EFL speaking ability. In general, the purpose of this research is to help university instructors both to design adequate speaking tests and to improve their testing methods.

Key Words
Assessing, English as a Foreign Language, Speaking.

33 - Making your marking, 'it's a messy business'
Fiona Meddings
University of Bradford, Bradford, United Kingdom

Conference Theme
Addressing challenges of assessment in mass higher education

Abstract
The Quality Assurance Agency's (QAA) role in safeguarding academic standards and quality in higher education resulted in a publication devoted entirely to assessment (QAA 2012). It contains information on the implications of the decisions lecturers make regarding the assessment to be deployed, from design to delivery, with Knight (1995) asserting that there could be 50 different artefact types used to evidence student achievement. There remains an absence of information or instruction on how a lecturer might undertake the process of marking and grading assessment artefact submissions. The research aimed to discover what occurs during the process of marking and grading assessment artefacts, learning how lecturers carry out this component of their role: identifying the lecturer thoughts and actions which lead to the evaluation and generation of a mark and grade for an assessment artefact, using written submissions only. Without knowledge of how to approach evaluation of the product of assessment appropriately, there could be a threat to the integrity, validity and reliability of the assessment processes used to quantify student achievement. There is a growing body of literature which explores the concept of communities of practice in higher education, some of which explicitly explores the induction and socialisation of new academics into their work context (Trowler and Knight 2000; Garrow and Tawse 2009). The relevance to marking and grading activity is based around what Gascoigne and Thornton (2014) define as knowledge of the unspoken rules ('know how'), and practices which shape the application of the 'know that' and are unique to that community of practice. Sadler (2009) proposes, in relation to marking and grading, that not all measures used to undertake assessment artefact evaluation are communicable. He proposes the existence of a tacit level of knowledge, which can come to be known by members of a group who have shared experiences and is difficult to convey to those who are outside of this collective context. The exact mechanisms by which lecturers undertake the evaluation of an assessment artefact can only be revealed and communicated through observation and close contact.


Research outputs, following interviews with 26 heterogeneous health academics from four institutions, revealed several sub-themes within the major theme of the 'messiness of marking and grading'. A number of these were tacit practices, and others were externally driven, but all exerted influence on lecturer approaches to working with student written assessment artefacts; these are the focus of this presentation.

Key References
Garrow, A. and Tawse, S. (2009) An exploration of the assessment experiences of new academics as they engage with a community of practice in higher education. Nurse Education Today, 29(6), 580-584.
Gascoigne, N. and Thornton, T. (2014) Tacit knowledge. Routledge.
Knight, P. (1995) Assessment for learning in higher education. Psychology Press.
QAA (2012) Understanding assessment: its role in safeguarding academic quality and standards in Higher Education. A guide for early career staff (978 1 84979 390 2). Gloucester: Quality Assurance Agency for Higher Education. http://www.qaa.ac.uk/Publications/InformationAndGuidance/Pages/understandingassessment.aspx
Sadler, D. R. (2009) Indeterminacy in the use of preset criteria for assessment and grading. Assessment & Evaluation in Higher Education, 34(2), 159-179.
Trowler, P. and Knight, P. T. (2000) Coming to Know in Higher Education: Theorising faculty entry to new work contexts. Higher Education Research & Development, 19(1), 27-42.

34 - Perceptions of 'effective' assessment and feedback: a micro student-led study to investigate Postgraduate perceptions of effective assessment and feedback practice at a leading Russell Group Business School
Natalie Forde-Leaves
Cardiff University, Cardiff, United Kingdom

Conference Theme
Addressing challenges of assessment in mass higher education

Abstract
The micro-study was a small student-led 'Student Education Innovation Project' that aimed to gain insight into assessment and feedback practice at a leading Russell Group Business School. The study looked to shed light on 'what works' in assessment and feedback practices that a) constitute best practice in line with extant pedagogy, and b) are considered 'best practice' by students. The objectives of the study were to understand staff and student perceptions of varying 'effective' assessment and feedback practices and to gain insight into student performance across varying modes of assessment. An undergraduate student led the study, conducting a mini literature review which proposed that effective assessment (as extensively covered by Bloxham & Boyd, 2007), most suitably represented by sustainable assessment practice (Boud & Soler, 2016) and learning-oriented assessment (Carless, 2007), incorporates many facets, including: groupwork (Entwistle & Waterston, 1988), self-assessment (Boud, 2013), peer-assessment (Liu & Carless, 2006), using exemplar banks (Handley & Williams, 2011), and formative assessment and assessment criteria (Nicol & Macfarlane‐Dick, 2006; Nicol, 2007). These, along with many other constructs, may be deemed key to effective assessment in contemporary Higher Education. Regarding effective feedback: feedback should be seen as a process, not a product (Boud & Molloy, 2012), and as dialogue (Nicol, 2009). Innovative techniques like audio feedback (Voelkel, Mello & Varga-Atkins, 2018) or face-to-face feedback (Chalmers, Mowat & Chapman, 2018) have also been deemed effective in the literature; hence these methods warranted further discussion. The study was a micro-study exploring questionnaire responses from a small sample of Postgraduate students (n=35) and interviews with a small number of academic colleagues (n=4). Secondary data comprising PTES results across 13 programmes and assessment results across 144 modules and 254 assessments were used as inputs to a multiple regression model within the study. Key findings indicate that students' perceptions of effective assessment are in line with pedagogic discourse and include the desire for one-to-one feedback, the use of exemplars, and more formative/feed-forward opportunities. Staff perceptions of effective assessment were largely aligned with student perspectives, though constrained by institutional factors including large student numbers and workload. Findings also suggest that multiple low-stakes assessments result in higher overall module marks, and that implementation of group work initiatives also contributes to enhanced average module marks. Although not large in scale, this micro-study is valuable as it balances qualitative and quantitative findings to present a 'local', 'through the keyhole' perspective on current assessment and feedback practice.

Key References
Bloxham, S. and Boyd, P. (2007). Developing Effective Assessment in Higher Education: A Practical Guide. McGraw-Hill Education (UK).
Boud, D. (2013). Enhancing learning through self-assessment. Routledge.
Boud, D. and Soler, R. (2016). Sustainable assessment revisited. Assessment & Evaluation in Higher Education, 41(3), pp. 400-413.
Carless, D. (2007). Learning‐oriented assessment: conceptual bases and practical implications. Innovations in Education and Teaching International, 44(1), pp. 57-66.
Handley, K. and Williams, L. (2011). From copying to learning: Using exemplars to engage students with assessment criteria and feedback. Assessment & Evaluation in Higher Education, 36(1), pp. 95-108.
Liu, N. F. and Carless, D. (2006). Peer feedback: the learning element of peer assessment. Teaching in Higher Education, 11(3), pp. 279-290.
Nicol, D. J. and Macfarlane‐Dick, D. (2006). Formative assessment and self‐regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), pp. 199-218.



35 - Summative assessment workload management & implications for teaching practice: Dipping our toes into the depths of the murky waters that represent 'Assessment' in Higher Education
Natalie Forde-Leaves1, Irina Harris1, George Theodorakopoulos1, Phillip Renforth2
1Cardiff University, Cardiff, United Kingdom. 2Heriot-Watt University, Edinburgh, United Kingdom

Conference Theme
Addressing challenges of assessment in mass higher education

Abstract
Awash in a sea of adjectives, concepts such as broken, unfit for purpose and a cause of dissatisfaction tend to breach the rolling tide of assessment discourse and emerge as a constant. Undertones that Higher Education (HE) assessment practice requires reassessment, revisiting or potentially revolutionising ripple through this sea of discontent. Whilst the authors acknowledge the current macro-level challenges faced by assessment in HE (embodied in the rise of neoliberalism, managerialism, credentialism and consumerism), we do not embark on a lengthy critique of contemporary practice. Rather, this poster considers summative assessment workload management at a leading UK HE institution and focuses on the related implications for assessment scheduling and workload. This focus is mobilised via an enhanced understanding of the issues faced by those primarily subjected to institutional summative assessment regimes – the students – and their perspectives on managing summative assessment workload. The research aims to understand student approaches to assessment management and the associated barriers and enablers regarding impending summative assessment deadlines. It is envisaged that the study will contribute to current assessment discourse by accentuating institutional hindrances such as assessment clustering and 'over-assessment'. This study sets out to illuminate the strategies employed by students in their struggles to manage the multiplicity of assessment burdens HE places upon them. The study involved administration of a questionnaire and student focus groups held with 59 students across three different Schools within the Russell Group institution. The findings suggest that proximity between the distribution of coursework and teaching-related 'content' is a pertinent factor, with typical preferred coursework 'set' dates being as early as week 3 of the semester. For low-stakes assessment, findings suggest students impose staged deadlines involving periods of significant under- and over-adjustment, as opposed to an equal, coterminous spread of work; for larger credit-bearing assessments, students engage in multitasking and buffering to increase the 'spread' of work. Commonly reported 'barriers' to study were themed as 'Assessment Clarity' and 'Time Management', whilst 'Peer Communications', 'Effective Scheduling' and 'Learning Support' emerged as reported 'enablers'. In light of these findings, this poster visualises workload management strategies and further problematises the vast consequential implications for teaching practice. Ultimately, this research stirs the undercurrents of the perpetual assessment scheduling dilemma and reviews how students manage summative workload pressures. Outputs of this study include the prototype development of an assessment scheduler tool as a proposed institutional solution to facilitate summative assessment workload management.



Key References
Boud, D. and Soler, R. (2016). Sustainable assessment revisited. Assessment & Evaluation in Higher Education, 41(3), pp. 400-413.
Bowyer, K. (2012). A model of student workload. Journal of Higher Education Policy and Management, 34(3), pp. 239-258.
Carless, D. (2007). Learning‐oriented assessment: conceptual bases and practical implications. Innovations in Education and Teaching International, 44(1), pp. 57-66.
Kember, D., Ng, S., Tse, H., & Wong, E. T. T. (1996). An examination of the interrelationships between workload, study time, learning approaches and academic outcomes. Studies in Higher Education, 21, 347–358.
Scully, G. and Kerr, R. (2014). Student workload and assessment: Strategies to manage expectations and inform curriculum development. Accounting Education, 23(5), pp. 443-466.
Whitelock, D., Thorpe, M. and Galley, R. (2015). Student workload: a case study of its significance, evaluation and management at the Open University. Distance Education, 36(2), pp. 161-176.

36 - Using holistic assessment to develop students' evaluative judgement: scaling up research-based writing workshops for first-year undergraduates
Natalie Usher
University of Oxford, Oxford, United Kingdom

Conference Theme
Developing academic integrity and academic literacies through assessment

Abstract
Assessment for learning should empower students both to understand what makes good work and to recognise it in practice. Developing and applying such evaluative judgement is important for independent, self-regulated, and lifelong learning, as students can make better-informed decisions about work (Tai et al., 2018). Sadler (2009, 2010) proposes holistic assessment as a way of learning about the quality of complex work. In holistic assessment, criteria are not pre-specified. Instead, students weigh up what most contributes to and detracts from the quality of work, then use these observations to develop an overall impression and possible criteria. This is potentially more challenging than criterion-based approaches. The poster will examine the practical promise of holistic assessment for academic writing, using insights from two iterations of design-based research. First, I will share the design of holistic assessment workshops in which students assessed and discussed example essays and also undertook peer assessment. The participants were 21 first-year English Language and Literature undergraduates, who were learning to write a new essay genre. A risk of holistic assessment is that student-generated criteria may not capture what staff value in work. To investigate the validity of holistic assessment, I coded students' comments on example essays, peer essays and their own writing to identify the underlying criteria they were using. These criteria codes were mapped and compared with the Faculty's criteria. There was a high degree of overlap. In fact, students went beyond Faculty criteria to comment on other aspects of writing, including genre features such as introductions and conclusions. In follow-up interviews and writing a month later, eight of ten case study writers had adapted their approach, and many changes were related to student rather than Faculty criteria.


Finally, the poster will present preliminary evaluation and reflections on a scaled-up workshop run in Spring 2019. The whole year group (approx. 240 students) was offered a two-hour workshop on three example essays, with further support on the VLE. As discussion was identified as important both in the previous iteration and in the literature (Carless and Chan, 2017), students took the workshop in small groups led by graduate students. The scaled-up workshop presents numerous opportunities and challenges. In particular, the poster will address the support offered to graduate tutors to develop their own evaluative judgement. The workshop can be readily applied to any discipline where students are developing writing in a new genre, and the poster will share lessons learned.

Key References
Carless, D. and Chan, K. K. H. (2017). Managing dialogic use of exemplars. Assessment & Evaluation in Higher Education, 42(6), 930–941.
Sadler, D. R. (2009). Transforming holistic assessment and grading into a vehicle for complex learning. In G. Joughin (ed.) Assessment, Learning and Judgement in Higher Education, pp. 45-63. Dordrecht: Springer.
Sadler, D. R. (2010). Beyond feedback: developing student capability in complex appraisal. Assessment & Evaluation in Higher Education, 35(5), 535–550.
Tai, J., Ajjawi, R., Boud, D., Dawson, P., and Panadero, E. (2018). Developing evaluative judgement: enabling students to make decisions about the quality of work. Higher Education, 76(3), 467-481.
Usher, N. (2018). Learning about academic writing through holistic peer assessment. Unpublished doctoral thesis, University of Oxford.

37 - Introducing digital technologies for formative assessment and learning support: A reflection from the University of Bath
Maria Valero, Benjamin Metcalfe, Yvonne Moore, Rowan Cranwell, Jo Hatt
University of Bath, Bath, United Kingdom

Conference Theme
Integrating digital tools and technologies for assessment

Abstract
The total number of UK Higher Education (HE) students in 2017/18 stood at over 2 million, with Postgraduate taught (PGT) and overseas student numbers increasing significantly. This growth means that HE facilitators face the challenges of an increasing and more diverse cohort in terms of backgrounds, learning styles and needs. Integrated assessment practices, combining summative and formative assessment, can enable more meaningful learning, with students receiving regular feedback that provides the self-check and scaffolding for ongoing learning and understanding. This paper presents the initial experience and reflections of pilot teams at the University of Bath (UoB) in a bid-winning pilot project(*). This is a collaborative project involving multidisciplinary teams with staff from the Department of Electronic and Electrical Engineering, the Technology Enhanced Learning team and the Centre for Learning and Teaching at the UoB. Digital technologies are embedded in formative assessment and feedback, and used as a platform for information sharing, creation and collaborative knowledge building, thus enabling a more interactive experience.



This project targets Undergraduate (UG) and PGT students from different programmes across the UoB, exploring the use of technology-enhanced teaching for:
A. Diagnostic testing for (formative) assessment
++ Exam practice and self-testing are carried out using online assessment tools (e.g. Inspera), which offer richer question types, thus allowing a wider range of skills to be assessed and promoting deeper learning.
++ In-class evaluation of key concepts taught is facilitated by online voting systems (e.g. Turning Point and eVoting), namely through quizzes.
B. Learning support and feedback capture
++ Use of online video platforms (e.g. Panopto) for real-time capture of feedback;
++ Use of online voting systems for immediate feedback on core concepts to both the students and the academic;
++ Use of digital annotation (e.g. Webwhiteboard) for the capture of feedback in diverse formats, ranging from written notes to diagrams and equations.
Preliminary data from student focus groups have provided excellent feedback, showing an increase in student engagement and improvement in accessibility at all levels.

Student Testimonials
"The use of technology such as eVoting and Digital Annotation makes a massive difference to our lectures. The voting technology highlights key points and the digital annotations make sure that everyone has a chance to engage." – K. Shopland
"The annotation technology is brilliant, during meetings the lecturer annotates our work directly, then just e-mails a PDF with all the notes on. No more forgetting what was discussed!" – G. Rossides

*Santander Technology Fund

Key References
Higher Education Student Statistics: UK, 2017/18. Available at: https://www.hesa.ac.uk/news/17-01-2019/sb252-higher-education-student-statistics
Boud, D. & Falchikov, N. (2006). Aligning assessment with long-term learning. Assessment & Evaluation in Higher Education, 31(4), 399-413.
Nicol, D. and Macfarlane-Dick, D. (2006). 'Formative assessment and self-regulated learning: A model and seven principles of good feedback practice'. Studies in Higher Education, 31(2), pp. 199-218.
Irons, A. (2007). Enhancing learning through formative assessment and feedback. Routledge.
Gikandi, J. W., Morrow, D. and Davis, N. E. (2011). Online formative assessment in higher education: A review of the literature. Computers & Education, 57(4), pp. 2333-2351.

Round Table Session
Chair: Kay Sambell Time: 15:40 - 16:40 Date: 26th June 2019 Location: Room 5

38 - Assessment of real world learning: a case study
Ufi Cullen
Falmouth University, Penryn, United Kingdom

Conference Theme
Assessment for learning: feedback and the meaning and role of authentic assessment



Abstract
This research paper aims to discuss the process and outcomes of assessing real-world learning within real-world settings, in the context of an undergraduate-level entrepreneurship module. The paper introduces the features and phases of the assessment, the way in which it is structured and embedded into the module curriculum, and finally the strategy adopted to position students within this setting as one of the key stakeholders of the assessment process. The feedback obtained from the stakeholders of the assessment process, and quantitative outcomes such as student marks, show that this kind of assessment approach breaks down the barriers between academia and industry and enables both to work together towards improving students' entrepreneurial capabilities and reducing the risk of immediate business failure. From the students' perspective, although this approach is challenging in various ways, it gives them a precious opportunity to test their entrepreneurial capacity and a reliable reality check, without undertaking any risk of business failure.

Key References
Bandura, A., 1971. Social Learning Theory. New York: General Learning Press.
Gilmore, T., Krantz, J. & Ramirez, R., 1986. Action Based Modes of Inquiry and the Host-Researcher Relationship. Consultation, 5(3), pp. 160-176.
Hattie, J., 1999. Influences on Student Learning. Auckland: University of Auckland.
HEA, 2018. New to Teaching in HE. [Online] Available at: https://www.heacademy.ac.uk/training-events/new-teaching-higher-education-0
Looney, J., 2009. Assessment and Innovation in Education. Paris: OECD Publishing.
Morris, E., 2018. Re-assessing innovative assessment. [Online] Available at: https://www.heacademy.ac.uk/blog/re-assessing-innovative-assessment
Bamber, V. & Jones, A., 2015. Challenging students. In: H. Fry, S. Ketteridge & S. Marshall, eds. A Handbook for Teaching and Learning in HE. New York: Routledge, pp. 152-166.
Butcher, C., 2015. Describing what students should learn. In: H. Fry, S. Ketteridge & S. Marshall, eds. A Handbook for Learning and Teaching in HE. New York: Routledge, pp. 80-92.
Kingsbury, M., 2015. Encouraging Independent Learning. In: H. Fry, S. Ketteridge & S. Marshall, eds. A Handbook for Teaching and Learning in HE. New York: Routledge, pp. 169-179.
Kolb, D., 1984. Experiential Learning. New Jersey: Case Western Reserve University.
Csikszentmihalyi, M., 1990. Literacy and Intrinsic Motivation. American Academy of Arts & Sciences, 119(2), pp. 115-140.

39 - Saving the planet through assessment: using foundation year assessment to communicate climate and environmental issues to young children
Joe Shimwell, Matthew Pound, Kate Winter
Northumbria University, Newcastle upon Tyne, United Kingdom

Conference Theme
Assessment for learning: feedback and the meaning and role of authentic assessment

Abstract
As urban areas expand and "screen-time" increases, people are having less contact with natural environments (Soga and Gaston, 2016). Over the last 20 years, this has led to a cycle of disaffection toward natural environments and discourages positive emotion and action towards environmental issues, including climate change (Soga and Gaston, 2016). The lack of public understanding of the importance of climate issues is stark: even though younger generations have grown up in an era of greater environmental awareness, there is not a generational difference in terms of how often climate change is thought about, which is partly due to educational background and opportunities (Fisher, Fitzgerald, and Poortinga 2018). More needs to be done to engage society at a younger age with these important issues, to prevent future adult generations from continuing the cycle of disaffection. Civic engagement is an important factor in the personal growth and identity formation of university students transitioning to work (Flanagan and Levine 2010). The opportunity for students to engage with society regarding their topics of study is rarely embedded at module level. It is also known that students engage with assessment and learning at a greater level when confronted with educationally purposeful activities (Kuh 2003). With these factors in mind, a novel method of assessing foundation year students in Geography and Environmental Sciences was developed. Students on an 'Understanding and Communicating Environmental Issues' module were assessed on their ability to communicate to primary school children by creating a leaflet based upon current environmental research. Students demonstrated their understanding of their topic through a critical literature review, which they used to develop a leaflet appropriate for children in primary schools. The literature review, the leaflet, group engagement, and reflection on the feedback from the young children who read the leaflets were used to formally assess attainment in the module. This novel model of assessment was perceived as challenging by the foundation students, but engagement was good, and it enabled them to develop their skills in communication, academic rigour and group working, whilst embedding the importance of civic engagement as a tool for effecting change. For the foundation students, it also promoted deeper engagement with and understanding of complex environmental issues through the literature review and the opportunity to describe the research to a different, non-academic audience. Feedback from teachers showed that positive engagement and a deepened understanding of environmental issues had taken place amongst the children.

Key References
Flanagan, Constance, and Peter Levine. 2010. "Civic Engagement and the Transition to Adulthood." The Future of Children 20(1): 159–179. doi:10.1353/foc.0.0043.
Fisher, S., R. Fitzgerald, and W. Poortinga. 2018. Climate Change: Social Divisions in Beliefs and Behaviour. British Social Attitudes: The 35th Report.
Kuh, George D. 2003. "What We're Learning About Student Engagement From NSSE: Benchmarks for Effective Educational Practices." Change: The Magazine of Higher Learning 35(2): 24–32. doi:10.1080/00091380309604090.
Moser, Susanne C. 2014. "Communicating Adaptation to Climate Change: The Art and Science of Public Engagement When Climate Change Comes Home." Wiley Interdisciplinary Reviews: Climate Change 5(3): 337–358. doi:10.1002/wcc.276.
Schlemer, Lizabeth, John Oliver, Katherine Chen, Sofia Rodriguez Mata, and Eric Kim. 2012. "Work in Progress: Outreach Assessment: Measuring Engagement: An Integrated Approach for Learning." In 2012 Frontiers in Education Conference Proceedings, 1–2. IEEE. doi:10.1109/FIE.2012.6462341.
Soga, Masashi, and Kevin J. Gaston. 2016. "Extinction of experience: the loss of human–nature interactions." Frontiers in Ecology and the Environment 14(2): 94-101. doi:10.1002/fee.1225.



40 - Abreaction Catharsis Release Self-Acceptance: Critical Reflective Student Stories in Consumer Behaviour as Emotional Psychodynamic Therapy
Usha Sundaram
University of East Anglia, Norwich, United Kingdom

Conference Theme
Assessment for learning: feedback and the meaning and role of authentic assessment

Abstract
This paper draws from the summative assessment experiences of students on a final-year UG module on Consumer Behaviour, writing a critical reflective personal story on a consumption scenario/context of their choice. In the process of choosing a topic, writing, completion, and post-reflection, students journey through different levels of abreaction, catharsis, release, and self-acceptance. This suggests that critical reflective assessment in management education can function as a form of meaning-making psychodynamic therapy, offering huge potential to emotionally enrich management education through customisation of cross-disciplinary learning and assessment. Reflective writing is viewed as empowering, promoting self-development and self-discovery (Chandler 2009, Dyment and O'Connell 2010, Harris 2005). Beyond the instrumental, there is much potential for analysing and theorising linkages between critical reflection and emotions in effecting complex, transformative learning (Swan and Bailey 2004), providing greater legitimacy for management education in equipping students to deal with complex, emotive real-life issues (Cunliffe and Easterby-Smith 2004). This module – delivered to UG students on Business Management and Marketing degrees – is structured around themes of Buying, Having, Being, exploring interdisciplinary perspectives on consumption behaviours and marketplace cultures beyond just shopping, acquisition, and possession of material objects. Students have complete freedom in choosing their consumption contexts, interrogating their own motives and consumer behaviours by navigating dimensions of self and consumption as expressions of identity. Initially students struggle with the available unstructured creative freedom, reluctant and resistant to embark on the emotional challenge of self-discovery. Slowly, however, as they immerse themselves in lived experiences, they traverse a journey of abreaction, relive memories through words, re-experience emotions as cathartic and therapeutic (Khoo and Oliver 2013), and find emotional release through confessional stories. The writing process enables students to make sense of life at a crossroads, struggling with issues of personal identity, extended adolescence, and fears about the future. Stories draw from a range of dramaturgical identity narratives of body image, aesthetics, sexuality, orientation, gender norms, socio-cultural conflicts, class consciousness, mental health, and substance abuse, while dealing with dark stories of the past and confessional behaviours from the present. The explorations draw from aspects of self not immediately accessible to conscious minds (Phillips and Rolfe 2016), but the essay itself is a safe holding place for material that students find difficult to manage otherwise. Both the writing process and the written artefact help students construct personal, experiential, emotional learning and meaning making (Dirkx 2001, 2006) as a bridge leading to measured self-understanding and self-acceptance.



Key References
Cunliffe, A. L., and Easterby-Smith, M. (2004) From Reflection to Practical Reflexivity: Experiential Learning as Lived Experience. In: Organizing Reflection (eds. Michael Reynolds and Russ Vince). London: Routledge, Chapter 3.
Dirkx, J. M. (2001) The Power of Feelings: Emotion, Imagination, and the Construction of Meaning in Adult Learning. New Directions for Adult and Continuing Education, 89, Spring, pp. 63-72.
Dirkx, J. M. (2006) Engaging Emotions in Adult Learning: A Jungian Perspective on Emotion and Transformative Learning. New Directions for Adult and Continuing Education, 109, April, pp. 15-26.
Khoo, G. S. and Oliver, M. B. (2013) The Therapeutic Effects of Narrative Cinema through Clarification: Re-examining Catharsis. Scientific Study of Literature, 3(2), January, pp. 266-293.
Phillips, L. and Rolfe, A. (2016) Words that Work: Exploring Client Writing in Therapy. Counselling and Psychotherapy Research, 16(3), September, pp. 193-200.
Swan, E. and Bailey, A. (2004) Thinking with Feeling: The Emotions of Reflection. In: Organizing Reflection (eds. Michael Reynolds and Russ Vince). London: Routledge, Chapter 7.

41 - Promoting deep approach to learning and self-efficacy by changing the purpose of self-assessment: A comparison of summative and formative models
Juuso Henrik Nieminen, Henna Asikainen, Johanna Rämö
University of Helsinki, Helsinki, Finland

Conference Theme
Leading change in assessment and feedback at programme and institutional level

Abstract
Self-assessment has often been portrayed as a way to promote lifelong learning in higher education. While most of the previous literature builds on the idea of self-assessment as a formative tool for learning, some scholars have suggested using it in a summative way. However, many scholars have argued that self-assessment, when counting towards grades, leads to a surface approach to learning and to cheating. Formative assessment, incorporating self- and peer-assessment, has been introduced as a way to promote a deeper kind of learning. In this study, we empirically compared two different models for self-assessment (N = 299) within the same course context. One model used self-assessment in a formative way, while the course grade was still based on an exam; the other model introduced self-assessment as a summative, future-driven act by letting the students decide their final grade after formatively practising self-assessment skills. This comparative study design used latent profile analysis as a person-oriented way to observe student subgroups in both of the models in terms of deep and surface approaches to learning. The results show that the student profiles varied between the two self-assessment models, even though the learning environment was exactly the same for both groups, with the only exception being the final grading method. The students taking part in the summative self-assessment group were overrepresented in the deep-oriented student profile. Summative self-assessment was also related to an increased level of self-efficacy. The study implies that summative self-assessment, based on self-grading but also on formative elements, is a possible way to promote a deep approach to learning and greater self-efficacy in certain educational contexts.



Key References
Boud, David, and Nancy Falchikov. 2006. “Aligning Assessment With Long-Term Learning.” Assessment & Evaluation in Higher Education 31 (4): 399–413.
Panadero, Ernesto, G. T. Brown, and J. W. Strijbos. 2016. “The Future of Student Self-Assessment: A Review of Known Unknowns and Potential Directions.” Educational Psychology Review 28 (4): 803–830.
Sadler, P. M., and Eddie Good. 2006. “The Impact of Self- and Peer-Grading on Student Learning.” Educational Assessment 11 (1): 1–31.
Tan, K. H. 2009. “Meanings and Practices of Power in Academics’ Conceptions of Student Self-Assessment.” Teaching in Higher Education 14 (4): 361–373.
Taras, Maddalena. 2015. “Situating Power Potentials and Dynamics of Learners and Tutors Within Self-Assessment Models.” Journal of Further and Higher Education 40 (6): 846–863.

Round Table Session
Chair: Jess Evans
Time: 15:40 - 16:40
Date: 26th June 2019
Location: Room 7

42 - How to approach ‘assessment as learning’ in educational development and design? A viewpoint from an Educational Development Unit’s practice
Ine Rens, Karen Van Eylen
KU Leuven, Leuven, Belgium

Conference Theme
Leading change in assessment and feedback at programme and institutional level

Abstract
Future-oriented education is one of the strategic goals of KU Leuven. More than before, KU Leuven wants to focus on active learning, with a strong focus on the disciplinary future self. To develop their competencies to the maximum, students need a learning environment in which assessment is an integrated part of, and not just an endpoint in, the learning process. This brings the current institutional use of assessment into question and challenges us to shift our focus from assessment of learning to a more dialogic approach in which feedback and assessment for and as learning are central (Dochy & Segers, 2018; Evans, 2013; McLean, 2018; Schneider & Preckel, 2017). How can we, at an institutional level, promote assessment for and as learning? We will discuss two different approaches in our Educational Development Unit to answer this question:

1. Design of courses and support of educational staff: we use the ABC method as developed by University College London (UCL) (Young & Perovic, 2016) and are further exploring its possibilities in the Erasmus+ funded project “ABCtoVLE” (Perovic, 2018). In this hands-on workshop, teaching teams create a storyboard outlining the sequence of learning activities throughout their course, and the teaching as well as the (formative and summative) assessment methods to support these learning activities.

2. Promote the use of specific tools in our Learning Management System (Blackboard): we enable teachers to use the available tools not only in a summative but also in a formative way, e.g. the ePortfolio as a tool for assessment as learning, in which support of the learning process is key.



Moreover, we are interested to learn how other institutions tackle the question of transition in assessment approaches, and curious to know how our answers could work for them.

Key References
Dochy, F., & Segers, M. (2018). Creating impact through future learning. The High Impact Learning that Lasts (HILL) model. London: Routledge Publishers.
Evans, C. (2013). Making sense of assessment feedback in higher education. Review of Educational Research, 83(1), 70-120.
McLean, H. (2018). This is the way to teach: insights from academics and students about assessment that supports learning. Assessment & Evaluation in Higher Education, 43(8), 1228-1240.
National Forum for the Enhancement of Teaching and Learning in Higher Education (2017). Expanding our understanding of assessment and feedback in Irish higher education. Retrieved from https://www.teachingandlearning.ie/publication/expanding-our-understanding-of-assessment-and-feedback-in-irish-higher-education/
Perovic, N. (2018, July 13). ABC LD – the next steps. Retrieved from https://blogs.ucl.ac.uk/abc-ld/abc-ld-next-steps/
Schneider, M., & Preckel, F. (2017). Variables associated with achievement in higher education: A systematic review of meta-analyses. Psychological Bulletin, 143(6), 565-600.
Young, C., & Perovic, N. (2016). Rapid and creative course design: As easy as ABC? Procedia – Social and Behavioral Sciences, 228, 390-395.

43 - Listening to the students’ voice to orient instructions. The ongoing evaluation of an Assessing as Learning experience in higher education
Alessia Bevilacqua
University of Verona, Verona, Italy

Conference Theme
Leading change in assessment and feedback at programme and institutional level

Abstract
The Assessment as Learning model (Earl, 2014) is considered an appropriate strategy in higher education for actively involving students in their learning paths through self-assessment and peer-assessment tools. It significantly supports students in achieving greater awareness and autonomy concerning their learning processes (Boud et al., 2010), if implemented constantly by the students throughout the course (Sambell et al., 2012). Within the bachelor’s degree in Organizational Training at the University of Verona (Italy), an educational experience characterized by the AaL model has been implemented in the course “Methodology of the pedagogical research”. In order to support students dealing with high-level cognitive processes, the teacher provided a formative assessment path (Stiggins, 2002) based on moments of peer-assessment of authentic tasks, where the functions of giving and receiving feedback are perceived as crucial (Nicol et al., 2014), as well as on self-assessment regarding the individual achievement of micro-objectives. This proposal is aimed at enabling students working in a large class to better monitor their own learning processes, as well as their outcomes, to gain awareness and consequently be able to make changes in their learning style if necessary. To orient the teacher’s actions, students were asked to provide feedback throughout the course, answering some reflective questions on the Moodle platform concerning different aspects of the educational path. The adoption of the students’ voices is encouraged (Grion & Cook-Sather, 2013) both to introduce the views of those who daily experience the concrete effects of educational policies and strategies and to promote a tangible action of democracy at school. The question that oriented the researcher was: what kind of feedback can be proposed by the students to the teacher to orientate a course based on the Assessment as Learning model? The research has been contextualized within the ecological paradigm, since it allows the researcher to grasp, through the use of qualitative research methodologies, the essence and the qualities of the reality being studied. The document analysis has been carried out through an inductive content analysis to elaborate a model which deals deeply with the meanings, the intentions, the consequences and the context in which the phenomenon is located. The expected outcomes include benefits regarding students’ learning outcomes, the strengthening of their transversal skills (especially the “learning to learn” key competence), as well as individual well-being.

Key References
Boud, D. and Associates (2010). Assessment 2020: Seven propositions for assessment reform in higher education. Sydney: Australian Learning and Teaching Council.
Earl, L.M. (2014). Assessment as Learning. Using Classroom Assessment to Maximize Student Learning. Cheltenham (Vic): Hawker Brownlow.
Grion, V., Cook-Sather, A. (eds.) (2013). Student voice: prospettive internazionali e pratiche emergenti in Italia [Student voice: international perspectives and emerging practices in Italy]. Milano: Guerini e Associati.
Nicol, D., Thomson, A. and Breslin, C. (2014). Rethinking feedback practices in higher education: a peer review perspective. Assessment & Evaluation in Higher Education, 39(1), 102-122.
Sambell, K., McDowell, L., and Montgomery, C. (2012). Assessment for learning in Higher Education. London: Routledge.
Stiggins, R. J. (2002). Assessment Crisis: The Absence of Assessment for Learning. Phi Delta Kappan, 83(10), 758–765.

44 - Evolving assessment tools and processes to support the scaling-up of external assessors (mentors, supervisors, preceptors, clinicians etc.) in formative and summative assessment
Rachel Bacon1, Debbie Holmes2
1 University of Nottingham, Nottingham, United Kingdom. 2 PebblePad, Telford, United Kingdom

Conference Theme
Addressing challenges of assessment in mass higher education

Abstract
The use of portfolios to support clinical assessment, enhance reflection and better enable the monitoring of progress is a growing area of practice (Haggerty and Thompson 2017). External assessors (mentors, supervisors, preceptors) are becoming increasingly involved in using electronic systems to assess and validate students’ learning in clinical practice settings. Maintaining the validity and authenticity of the assessment will be central to the presentation. This session aims to explore the challenges in implementing electronic clinical assessment at scale and will draw on a number of examples in the UK. The session is not about particular software solutions but is instead a discussion of the challenges facing universities as they attempt to scale up the use of electronic clinical assessment. Key issues, directly related to external assessment, will be explored within the presentation. These will include the illustrative examples below:



Consistency of the assessment documentation across placement areas that support students from different HEIs
One example will be from a consortium of universities in the North of England who collaborated to produce a single electronic document that the student owns and is able to share for assessment with an approved assessor from the clinical area. The Shape of Caring review (Willis, 2015) highlights the current inconsistency in the practice assessment requirements between HEIs, which are open to differing interpretation; the above approach provides consistency for the placement assessors.

Security and privacy and adhering to GDPR
Managing who has access to which student’s work, to protect confidentiality whilst maintaining reliability and authenticity, is an ongoing challenge, and more so now in the light of GDPR in the UK (GOV.UK, 2019). A move to a reliable and well-tested software solution can improve on existing paper processes with regard to security and GDPR adherence. We will explore an example from Queen Margaret University that was driven by a need to move the responsibility for the storage of data from the University to the student.

Providing support and guidance for students, mentors, supervisors, preceptors and others engaged in supporting and assessing learning
In another example, the University of Nottingham is encouraging students to support and teach other students to use the electronic document as its use scales up. It is a major challenge to educate all the assessors in placement too (McIntosh, Gidman, and Smith, 2014), and there are elements of practice that we can share that the University of Nottingham and others have used during the implementation process.

Key References
GOV.UK. (2019). Guide to the General Data Protection Regulation. [online] Available at: https://www.gov.uk/government/publications/guide-to-the-general-data-protection-regulation [Accessed 20 Jan. 2019].
Haggerty, C. and Thompson, T. (2017). The challenges of incorporating portfolio into an undergraduate nursing programme. Open Praxis, 9(2), April–June 2017, pp. 245–252.
McIntosh, A., Gidman, J., & Smith, D. (2014). Mentors’ perceptions and experiences of supporting student nurses in practice. International Journal of Nursing Practice, 20, 360-365.
Willis, P. (2015). Raising the Bar. Shape of Caring: A Review of the Future Education and Training of Registered Nurses and Care Assistants. London, UK: Royal College of Nursing.

Round Table Session
Chair: Pete Boyd
Time: 15:40 - 16:40
Date: 26th June 2019
Location: Room 9

45 - Students as partners in fostering a culture of assessment for learning at a research-intensive university
Carolyn Samuel, Mariela Tovar
McGill University, Montreal, Canada

Conference Theme
Leading change in assessment and feedback at programme and institutional level



Abstract
For several years, our teaching support unit has been actively involved in promoting a culture of assessment for learning (Sambell, McDowell, & Montgomery, 2012) at our research-intensive university. Our goal is for assessment to be viewed not only as a tool to grade and rank students, but also as a strategy integral to supporting student learning. To promote this culture, we have been leading a university-wide faculty community of practice (CoP) (Lave & Wenger, 1991) with faculty, students and staff (http://www.mcgill.ca/tls/teaching/assessment/afg). The goal of the CoP is to engage instructors in considering creative and effective assessment strategies to help improve students’ learning and motivation. We have also developed a program of brown-bag sessions, workshops, webinars, and most recently, a symposium entitled Beyond Grading: Effective Assessment Strategies for Better Learning (http://mcgill.ca/tls/events/assessment-symposium-2018). The purpose of this session is to illustrate the value of incorporating student perspectives in promoting a culture of assessment (Peseta et al., 2016) and, more specifically, to provide examples of how we have implemented this strategy at our institution to promote a culture of assessment for learning. Ultimately, since students are the ones who experience instructors’ assessment decisions, they are well positioned to provide instructors with insight about the effects of these decisions. Students can take on a variety of roles when it comes to making changes in higher education (Bovill, Cook-Sather, & Felten, 2011; Healey, Flint, & Harrington, 2014). What are the different roles that students can take as partners in fostering a culture of assessment for learning? Examples from our institution include:


• Students as active members of CoPs (see https://teachingblog.mcgill.ca/2018/11/22/one-students-role-in-improving-university-assessment-and-feedback-practices/)
• Students as bloggers (see https://teachingblog.mcgill.ca/2018/10/11/assessment-for-learning-questions-a-feedback-practice-we-learned-from-socrates/)
• Students as stakeholders (see https://youtu.be/psyifRBHFM4)
• Students as co-organizers of events
• Students as panelists and presenters (see http://mcgill.ca/tls/events/assessment-symposium-2018)
• Students as coaches of other students

During the session, we will share in greater detail how student involvement in these roles had a beneficial impact on instructors, as well as on the students themselves. We would also like to learn from others how students might further be engaged in fostering a culture of assessment for learning.

Key References
Bovill, C., Cook-Sather, A., & Felten, P. (2011). Students as co-creators of teaching approaches, course design, and curricula: Implications for academic developers. International Journal for Academic Development, 16(2), 133-145.
Healey, M., Flint, A., & Harrington, K. (2014). Engagement through partnership: Students as partners in learning and teaching in higher education. York: The Higher Education Academy. Retrieved January 14, from https://www.heacademy.ac.uk/sites/default/files/resources/Engagement_through_partnership.pdf
Lave, J. & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. Cambridge: Cambridge University Press.
Peseta, T., Bell, A., Clifford, A., English, A., Janarthana, J., Jones, C., Teal, M., & Zhang, J. (2016). Students as ambassadors and researchers of assessment renewal: Puzzling over the practices of university and academic life. International Journal for Academic Development, 21(1), 54-66. https://doi.org/10.1080/1360144X.2015.1115406
Sambell, K., McDowell, L., & Montgomery, C. (2012). Assessment for learning in higher education. New York: Routledge.

46 - Mind the Gap: Strategies for self-monitoring. A conceptual framework and new findings
Jeroen van der Linden, Tamara van Schilt-Mol, Harry Stokhof
HAN University of Applied Sciences, Nijmegen, Netherlands

Conference Theme
Leading change in assessment and feedback at programme and institutional level

Abstract
Preparing for knowledge assessments is an important learning activity for students in Higher Education. However, how and when students monitor their learning, and which resources they use, is not the focus of most studies. Previous research shows that not all students monitor their learning and that students who fail knowledge assessments often have several characteristics in common: they do not monitor their learning and do not have a clear understanding of task demands (Broekkamp & Van Hout-Wolters, 2007). Research shows that students who are aware of the way they learn perform better than unaware students (Butler, 1998). For example, Zohar and Peled (2008) found that explicit teaching of metastrategic knowledge (which may include the ability to evaluate, and thus monitoring) was valuable for both high and low achievers, but especially so for the latter. How students monitor their learning is not well known. The focus of most studies pertaining to metacognition has been on methods of assessing the impact of monitoring activities and not on the actual cognitive skills involved in monitoring (Garrett, Alman, Gardner, & Born, 2007). Nelson and Narens (1990) proposed a theoretical framework for metacognition which provides insight into the relationship between metacognition (meta-level) and cognition (object-level), and into the interaction between monitoring and the regulation process. The judgement-of-learning (JOL) from this framework plays an important role in our own framework. Within the object-level, cues arise (Koriat, 1997), for example ease of learning, which influence awareness at the meta-level. In this study, a conceptual framework is established which attempts to describe the (meta-cognitive) monitoring process of students when learning for knowledge assessments. The bases for this model were various theoretical frameworks, combined with qualitative data from four pilot interviews with second- and third-year pre-service teachers. Transcripts of the interviews were analysed using a Grounded Theory approach in ATLAS.ti 8. This resulted in a conceptual framework which can be used to map the monitoring of learning for knowledge assessments. The expectation is that the model will support us in making students and teachers more aware of the monitoring processes when learning for knowledge assessments. By raising awareness of the different cues and their possible consequences, students will be able to study more efficiently and effectively. During the round table session, both the results from the literature analysis and the interviews, as well as the conceptual framework, will be discussed.



Key References
Broekkamp, H., & Van Hout-Wolters, B. H. A. M. (2007). Students' adaptation of study strategies when preparing for classroom tests. Boston: Springer. doi:10.1007/s10648-006-9025-0
Butler, D. L. (1998). The strategic content learning approach to promoting self-regulated learning: A report of three studies. doi:10.1037//0022-0663.90.4.682
Garrett, J., Alman, M., Gardner, S., & Born, C. (2007). Assessing students' metacognitive skills. United States: American Journal of Pharmaceutical Education. doi:10.5688/aj710114
Koriat, A. (1997). Monitoring one's own knowledge during study: A cue-utilization approach to judgments of learning. doi:10.1037//0096-3445.126.4.349
Nelson, T. O., & Narens, L. (1990). Metamemory: A theoretical framework and new findings. Elsevier Science & Technology.
Zohar, A., & Peled, B. (2008). The effects of explicit teaching of metastrategic knowledge on low- and high-achieving students. Elsevier Ltd. doi:10.1016/j.learninstruc.2007.07.001

47 - Breaking down barriers: Developing strategies for feedback uptake through self-directed online study
Britney Paris
University of Calgary, Calgary, Canada

Conference Theme
Leading change in assessment and feedback at programme and institutional level

Abstract
A significant amount of work in the area of formative assessment has focused on how teachers can better deliver corrective feedback (e.g., Hattie & Timperley, 2007; Leighton, Chu, & Seitz, 2013; Shute, 2008), and some more recent work has focused on the barriers learners experience when trying to understand and implement feedback (Winstone, Nash, Rowntree, & Parker, 2017); however, little research has focused on how learners might develop the competencies and strategies they need to truly render feedback useful. Carless and Boud (2018) have begun to discuss how both learners and teachers might begin to simultaneously develop feedback literacy, but we need to take the conversation further into how feedback literacy might be practically developed by learners in their own learning contexts. In this round table discussion, I will present an online module that was developed with the intent of scaffolding learner development of feedback literacy in the form of specific feedback strategies, which learners can undertake either independently or within a classroom environment. The intent of this discussion is to gather feedback on the online module, and even on the concept of using online learning modules in this context, before it is launched in the classroom context as part of a research project on the development of instructional strategies to improve learner uptake of feedback. While the online module has been developed for use in an English for Academic Purposes context, the theory behind the module is relevant to all disciplines. This round table discussion is relevant to the conference theme of leading change in assessment and feedback at programme and institutional level, as the aim is to discuss possible interventions that improve learner feedback literacy at the programme level. The intent of the discussion is to critically engage with the research in the field by evaluating a practical application that I developed based on this research.


Key References
Carless, D., & Boud, D. (2018). The development of student feedback literacy: Enabling uptake of feedback. Assessment & Evaluation in Higher Education, 43(8), 1315–1325. https://doi.org/10.1080/02602938.2018.1463354
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112. https://doi.org/10.3102/003465430298487
Leighton, J. P., Chu, M.-W., & Seitz, P. (2013). Errors in student learning and assessment: The Learning Errors and Formative Feedback (LEAFF) model. In R. Lissitz (Ed.), Informing the Practice of Teaching Using Formative and Interim Assessment: A Systems Approach (pp. 185–208). Information Age Publishing.
Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research, 78(1), 153–189. https://doi.org/10.3102/0034654307313795
Winstone, N. E., Nash, R. A., Rowntree, J., & Parker, M. (2017). ‘It’d be useful, but I wouldn’t use it’: Barriers to university students’ feedback seeking and recipience. Studies in Higher Education, 42(11), 2026–2041. https://doi.org/10.1080/03075079.2015.1130032

48 - A flexible and fair web-based Group Marking Tool that combines both staff and student (peer-review) scores
Suraj Ajit1, Paul Vossen2, Andrew Dean1
1 University of Northampton, Northampton, United Kingdom. 2 Independent Researcher, Stuttgart, Germany

Conference Theme
Integrating digital tools and technologies for assessment

Abstract
Various forms of peer, collaborative or group activities are increasingly used within university courses to help students meet a variety of learning outcomes. Scoring methods/models for assessing such group activities by involving peer feedback/rating and deriving individual grades have been a contentious issue across higher education (Gibbs, 2009; Lejk and Wyvill, 2001). Problems include inconsistent marking processes and potentially unfair scoring/grading methods. Our study reviewed group marking processes across several courses, including software engineering, computing, business computing, web technology and security, within one university, and also some of the prominent tools and methods used by other universities in the UK. We then implemented a web-based tool based on a novel scoring model (Vossen and Kennedy, 2017; Vossen and Ajit, 2018) for assessing group activities. We discuss preliminary results of evaluating this tool within a software engineering course/module containing over one hundred students in one university.

Key References
Vossen, P.H. and Ajit, S. (2018) Fuzzy scoring theory applied to team-peer assessment: additive vs. multiplicative scoring models on the signed or unsigned unit interval. Advances in Intelligent Systems and Computing. ISSN 2194-5357, Springer.
Vossen, P.H. and Kennedy, I. (2017) A fair group marking and student scoring scheme based upon separate product and process assessment responsibilities. Paper presented at Assessment in Higher Education, Manchester, UK.
Gibbs, G. (2009) The assessment of group work: Lessons from the literature. The Assessment Standards Knowledge Exchange, Centre for Excellence in Teaching and Learning in Higher Education, Oxford Brookes University.
Johnston, L. and Miles, L. (2004) Assessing contributions to group assignments. Assessment & Evaluation in Higher Education, 29:6, 751-768. DOI: 10.1080/0260293042000227272.
Freeman, M. & McKenzie, J. (2002). SPARK, a confidential web-based template for self and peer assessment of student teamwork: benefits of evaluating across different subjects. British Journal of Educational Technology, 33(5), 551–569. Wiley Online Library.
Lejk, M. and Wyvill, M. (2001) Peer Assessment of Contributions to a Group Project: A comparison of holistic and category-based approaches. Assessment & Evaluation in Higher Education, 26:1, 61-72. DOI: 10.1080/02602930020022291.
Lejk, M., Wyvill, M. & Farrow, S. (1996) A Survey of Methods of Deriving Individual Grades from Group Assessments. Assessment & Evaluation in Higher Education, 21:3, 267-280. DOI: 10.1080/0260293960210306.

49 - Using digital tools to facilitate peer review and enhance feedback and assessment
Katy Wheeler
University of Essex, Colchester, United Kingdom

Conference Theme
Integrating digital tools and technologies for assessment

Abstract
Feedback is a crucial part of the university student experience, yet one that students and teachers often report frustrations with (Sadler, 2010). Students value timely and forward-oriented feedback, as well as diversity of feedback styles (QAA, 2018). But embedding such feedback within module design and assessment can be challenging (Boud and Molloy, 2013; Sadler, 2010). Using peer feedback is one way of enhancing the learning of students (Boud et al., 1999; Nicol et al., 2014) and of diversifying modes of feedback. Moodle and other digital platforms have been successfully used as a means of facilitating peer review, though they carry specific challenges (Wilson et al., 2015). This talk will introduce a specific assessed peer-review task that was implemented within an MA course on a module on sustainable consumption. Students were asked to compose a short blog and to post the blog to a small number of their peers by a selected date, using the Moodle Forums platform. The students were asked to include a request for feedback alongside their blog. Each small group then had 7 days in which to give constructive feedback to their peers. Once the feedback process was completed, students were given another 7 days in which to revise their blog and to write a short reflective ‘response to their reviewers’, indicating what feedback they had taken on board and why. The instructor then marked the final blog, the reflective statement and the quality of the feedback given. Students reported enjoying this activity and highlighted how the act of giving and receiving feedback improved their work. Students were then encouraged to develop their blog topic into a full essay, meaning there was also an opportunity for feedforward from the instructor. In the session, we can reflect on different ways of embedding feedback within learning design, the usefulness of digital platforms in facilitating the process of peer review, preparing students to give constructive feedback, and the importance of engaging students in the process of giving and receiving feedback for their own development. My contribution to this debate is that peer review is indeed a valuable way of meeting learning outcomes and enhancing learning through feedback, but that it requires time and the development of students’ evaluative skills.

Key References
Boud, D. and Molloy, E. (2013) Rethinking models of feedback for learning: The challenge of design. Assessment and Evaluation in Higher Education 38(6): 698–712. DOI: 10.1080/02602938.2012.691462.
Boud, D., Cohen, R. and Sampson, J. (1999) Peer learning and assessment. Assessment and Evaluation in Higher Education 24(4): 413–426. DOI: 10.1080/0260293990240405.
Nicol, D., Thomson, A. and Breslin, C. (2014) Rethinking feedback practices in higher education: a peer review perspective. Assessment and Evaluation in Higher Education 39(1): 102–122. DOI: 10.1080/02602938.2013.795518.
QAA (2018) Focus on: Feedback from Assessment. Available at: https://www.qaa.ac.uk/scotland/focus-on/feedback-from-assessment.
Sadler, D.R. (2010) Beyond feedback: Developing student capability in complex appraisal. Assessment and Evaluation in Higher Education 35(5): 535–550. DOI: 10.1080/02602930903541015.
Wilson, M.J., Diao, M.M. and Huang, L. (2015) ‘I’m not here to learn how to mark someone else’s stuff’: an investigation of an online peer-to-peer review workshop tool. Assessment and Evaluation in Higher Education 40(1). Routledge: 15–32. DOI: 10.1080/02602938.2014.881980.

Poster & Pitch Session
Chair: Linda Graham
Time: 15:40 - 16:40
Date: 26th June 2019
Location: Room 11

50 - The use of an Inter-Professional Simulation-based Education (IPSE) task as an authentic formative assessment: an Action Research project
Jayne Coleman, Ann Noblett
University of Cumbria, Carlisle, United Kingdom

Conference Theme
Assessment for learning: feedback and the meaning and role of authentic assessment

Abstract
Inter-Professional Education (IPE) has, over the last decade, taken a more prominent position in the curriculum of medical and health-related degree courses (Lawlis et al., 2014). Within this, Experiential Learning theory has become an accepted underpinning educational model, with students supported to learn ‘from, with and about each other’ whilst practically engaging in a meaningful activity (Fewster-Thuente, 2018). As part of this more situated approach, assessment tasks that require students to respond to realistic recreations of clinical scenarios have become more prevalent (Baker et al., 2008). The increasing inclusion of Inter-Professional Simulation-based Education (IPSE) tasks reflects this, as well as the emerging role and use of technology to enhance learning (Gough et al., 2012). Guided by the above trends, an innovative action research project was undertaken to explore the use of an IPSE task as an authentic formative assessment for IPE learning outcomes: communication, teamwork and understanding of the role of other professions (Interprofessional Education Collaborative, 2011). Emphasis was placed upon exploring IPSE from both the student’s and the tutor’s perspective, as the current evidence base often takes an exclusively student viewpoint. As part of the research project, Physiotherapy (PT) and Social Work (SW) degree programmes collaborated to run an Inter-Professional Learning event, during which students completed an IPSE task. Students from across PT, SW and Occupational Therapy worked together to complete a risk assessment, and then propose a management strategy for a service user following a home visit completed at a simulation property. Analysis of student feedback gathered after the event was consistent with a recent systematic review (Olson & Bialocerkowski, 2014). However, whilst tutors also recognised the value of IPSE to student gain, previously under-considered issues regarding authenticity, collaboration logistics and Higher Education barriers were highlighted. This suggests that future exploration of the use of IPSE as an authentic formative assessment tool within IPE needs to take a more inclusive perspective. The perceptions and gains of tutors need to be better understood, so that the drivers of, and restrainers to, effective IPE assessment can be identified. It is now the intention to develop this project further: to explore the use of IPSE as an authentic summative assessment tool, to consider what variables (e.g. a wider representation of professions) influence simulation authenticity, and to continue to add to the evidence base with regard to the perceived value of IPSE from both the student’s and the tutor’s perspective.

Key References
Baker, C., Pulling, C., McGraw, R., Dagnone, J.D., Hopkins‐Rosseel, D. and Medves, J. (2008) Simulation in interprofessional education for patient‐centred collaborative care. Journal of Advanced Nursing, 64(4), pp.372-379.
Fewster-Thuente, L. (2018) Kolb's Experiential Learning Theory as a Theoretical Underpinning for Interprofessional Education. Journal of Allied Health, 47(1), pp.3-8.
Gough, S., Hellaby, M., Jones, N. and MacKinnon, R. (2012) A review of undergraduate interprofessional simulation-based education (IPSE). Collegian, 19(3), pp.153-170.
Interprofessional Education Collaborative (2011) Team-based competencies: Building a shared foundation for education and clinical practice. Washington, DC: Interprofessional Education Collaborative.
Lawlis, T.R., Anson, J. and Greenfield, D. (2014) Barriers and enablers that influence sustainable interprofessional education: a literature review. Journal of Interprofessional Care, 28(4), pp.305-310.
Olson, R. and Bialocerkowski, A. (2014) Interprofessional education in allied health: a systematic review. Medical Education, 48(3), pp.236-246.

51 - Developing the Scientific Reporting Skills of Chemistry Students through Dialogic Assessment-Feedback Cycles and use of Journal Articles as Paradigms of Professional Practice
Natalie Capel, Laura Hancock, Katherine Haxton, Martin Hollamby, Richard Jones, Daniela Plana, David McGarvey
Keele University, Keele, United Kingdom

Conference Theme
Assessment for learning: feedback and the meaning and role of authentic assessment



Abstract
A ubiquitous component of assessment in chemistry degree courses is the ‘laboratory report’, a vehicle for reporting the outcomes of laboratory experiments/investigations. However, the term ‘laboratory report’ encompasses a wide variety of styles/formats, many of which are not emulated by professional chemists in academia or other professional situations; this poses questions about the authenticity of such assessments. Hanson and Overton (2010) have identified ‘report writing skills’ amongst the skills that UK chemistry graduates would have liked more opportunity to develop in their undergraduate degrees, and subsequent studies confirm that such generic skills are perceived as valuable for future employment. It is therefore important that ‘report writing’ has value/credibility from the student perspective, and one way to approach this is to emphasise the generic (professional) skills involved within an authentic context. In this work we have used journal articles as paradigms of professional conventions and practice, recognising that report writing that mirrors journal articles presents opportunities for students to develop a range of generic skills (e.g. written communication, numeracy and computational skills, data analysis and interpretation, information retrieval, critical thinking). Our aim is therefore to develop the generic skills of undergraduate chemistry students that are associated with scientific reporting. To achieve this we have developed an approach that draws upon journal articles as paradigms of professional conventions and practice, coupled with an assessment-feedback strategy that spans a full academic year. The strategy incorporates many aspects of recent thinking surrounding effective assessment-feedback practice (O’Donovan, Price and Rust, 2004; Beaumont, O’Doherty and Shannon, 2011; Nicol, 2010; Winstone et al., 2017; Carless and Boud, 2018), placing strong emphasis on the development of students’ assessment literacy and meaningful use of feedback. The approach is characterised by a series of iterative assessment-feedback cycles that are supported by assessment briefing sessions and a range of dialogic formative and collaborative learning activities (e.g. use of exemplars, use of assessment guidance and marking criteria, self/peer-review of draft work, use of journal articles, self-reporting of use of feedback). From our evaluation of the approach over a number of years, we find that students recognise and appreciate its rationale, show good engagement with the associated learning activities and, providing they fully engage across the year, produce work that evidences acquisition of reporting skills to a high standard for early undergraduate students. The approach is flexible and adaptable to local contexts and academic disciplines.

Key References
Beaumont, C., O’Doherty, M. and Shannon, L. (2011). ‘Reconceptualising assessment feedback: a key to improving student learning?’, Studies in Higher Education, Vol. 36, pp. 671–687.
Carless, D. and Boud, D. (2018). ‘The development of student feedback literacy: enabling uptake of feedback’, Assessment & Evaluation in Higher Education, 43, 1315-1325.
Hanson, S. and Overton, T. (2010). Skills required by new chemistry graduates and their development in degree programmes. (http://www.rsc.org/learn-chemistry/resources/business-skills-and-commercial-awareness-for-chemists/docs/skillsdoc1.pdf, accessed January 2019)
Nicol, D. (2010). ‘From monologue to dialogue: Improving written feedback processes in mass higher education’, Assessment & Evaluation in Higher Education, Vol. 35, pp. 501–517.
O’Donovan, B., Price, M. and Rust, C. (2004). ‘Know what I mean? Enhancing student understanding of assessment standards and criteria’, Teaching in Higher Education, Vol. 9, pp. 325–35.
Winstone, N. E., Nash, R. A., Parker, M. and Rowntree, J. (2017). ‘Supporting Learners' Agentic Engagement with Feedback: A Systematic Review and a Taxonomy of Recipience Processes’, Educational Psychologist, Vol. 52 No. 1, pp. 17–37.

52 - Maximizing impact of peer-feedback on students’ learning: A longitudinal study in Teacher Education
Georgeta Ion
Universitat Autònoma de Barcelona, Barcelona, Spain

Conference Theme
Assessment for learning: feedback and the meaning and role of authentic assessment

Abstract
Feedback has been defined as a “process through which learners make sense of information from various sources and use it to enhance their work or learning strategies” (Carless and Boud, 2018: 1). This perspective places the emphasis on student engagement with feedback, in terms of shorter-term or longer-term engagement and through different feedback loops and feedback spirals (Carless, 2018; Winstone et al., 2016; Ion, Sánchez and Agud, 2018). The main aim of this communication is to investigate the impact of feedback on students’ learning in two experiences with different learning designs involving different peer-feedback loops. Data were collected through a questionnaire comprising three learning dimensions (cognitive skills, metacognitive skills and inter/intrapersonal skills) and including 42 items with a 7-point Likert scale (for both giving and receiving feedback). The survey was administered at the end of each academic course, once the activity involving peer-feedback had been completed. Firstly, the findings suggest that students involved in both experiences are highly satisfied with their learning, especially in association with the feedback they provided (M1 = 4.80, SD1 = 1.50; M2 = 4.08, SD2 = 1.63) compared with the feedback they received (M1 = 3.67, SD1 = 1.40; M2 = 3.68, SD2 = 1.43). Secondly, students perceive learning in all its components (cognitive and metacognitive skills and inter- and intrapersonal skills) above the mean in both experiences. The findings further suggest that students perceive they learn more in the long-term feedback loop than in the short-term feedback loop experience, both for feedback received and for feedback given (M1 = 5.80, SD1 = 1.50; M2 = 5.28, SD2 = 1.63). When analysing interpersonal and intrapersonal factors, we found that all items received means above the mean (“Giving feedback improved my team work skills”: M1 = 5.12, SD1 = 1.38; M2 = 5.01, SD2 = 1.40; “Improved the communication between my teammates”: M1 = 5.00, SD1 = 1.43; M2 = 5.00, SD2 = 1.52). In conclusion, the findings reveal the potential of peer-feedback to improve students’ learning, with higher scores in the case of giving feedback compared to receiving feedback. In the long-term feedback loop the scores are higher in all the learning dimensions. Being involved in peer-feedback experiences during two consecutive courses does not by itself predict better learning; rather, elements configuring the learning context, such as the number of feedback loops, appear to be associated with a higher perception of learning.



Key References
Carless, D. & Boud, D. (2018). The development of student feedback literacy: enabling uptake of feedback. Assessment & Evaluation in Higher Education. DOI: 10.1080/02602938.2018.1463354
Carless, D. (2018). Feedback loops and the longer-term: towards feedback spirals. Assessment & Evaluation in Higher Education. DOI: 10.1080/02602938.2018.1531108
Ion, G., Sánchez Martí, A. & Agud Morell, I. (2018). Giving or receiving feedback: which is more beneficial to students’ learning? Assessment & Evaluation in Higher Education. DOI: 10.1080/02602938.2018.1484881
Vygotsky, L. S. (1978). Mind in Society: The Development of Higher Psychological Processes. Cambridge, MA: MIT Press.
Winstone, N., Nash, R., Parker, M. & Rowntree, J. (2016). Supporting Learners' Agentic Engagement With Feedback: A Systematic Review and a Taxonomy of Recipience Processes. Educational Psychologist, 52:1, 17-37.

53 - Joining the dots: a qualification-based approach to developing student assessment strategies in undergraduate engineering
Alec Goodyear, Carol Morris
The Open University, Milton Keynes, United Kingdom

Conference Theme
Leading change in assessment and feedback at programme and institutional level

Abstract
We report on the implementation of a qualification-based approach to assessment for undergraduate engineering students, which formed part of a curriculum reconfiguration. Engineering students at the Open University are part-time, distance learners, bringing with them a wide variety of prior learning and work experience. A third of the students have previous educational qualifications which would preclude them from conventional university entry. The reconfiguration gave the engineering qualification team an unprecedented opportunity to review student retention and progression within and across modules, to develop assessment strategies across the whole qualification, and to design assessment tasks which enhanced student engagement with the module materials. We established an assessment working group consisting of module team leaders and qualification leads. This group developed a set of principles to inform the formulation of assessment tasks and reached a shared understanding of the purpose and importance of assessment to student learning before any module materials were written. This maximised the opportunities to develop students’ engagement with the materials through assessment, as each module team had a clearly defined assessment structure which contributed to the overall strategy. Modules are studied sequentially, enabling an assessment strategy which ensures that assessment tasks gradually build in difficulty as students progress through an individual module, and in type as they progress from one module to the next. Module teams were required to submit their draft assessment tasks and tutor marking guides to the working group, which reviewed them for fit with the strategy and the intended Learning Outcomes. Specific assessment tasks which make up assignments link clearly to learning activities within each module, and students are guided to do the tasks as they progress instead of leaving them until just before assignment cut-off dates. An ‘Engineering Assessment Guide’ for students has been developed which does not repeat the rules and regulations of the University Assessment Handbook, but gives academic support on undertaking assessment and clear guidance on ‘process’ words so that students understand what is required for each task. The working group are in the process of developing an assessment authoring guide and resources for academic staff to aid their writing of assessment tasks. Enhanced staff development is considered an important aspect of the working group. Students who were part of the first cohort on the reconfigured qualification report increased satisfaction with assessment and feedback, and their results indicate a significant increase in retention and progression over previous cohorts.

Key References
Biggs, J., & Tang, C. (2011). Teaching for Quality Learning at University. McGraw-Hill Education (UK). ISBN 0335242758.
Brown, S. (2005). Assessment for Learning. Learning and Teaching in Higher Education, Issue 1, pp. 81-89. ISSN 1742-240X. Available at: http://eprints.glos.ac.uk/3607
Boud, D. & Falchikov, N. (2007). Assessment and Evaluation in Higher Education, Volume 31, Issue 4. Available at: http://www.tandfonline.com/doi/abs/10.1080/02602930600679050
Coats, Maggie; Dillon, Christopher; Hodgkinson, Linda and Reuben, Catherine (2005). ‘Learning outcomes and their assessment: putting Open University pedagogical practice under the microscope’. 1st International Conference on Enhancing Teaching and Learning through Assessment, July 2005, Hong Kong, China. Available at: http://oro.open.ac.uk/5557/
Gibbs, G. and Simpson, C. (2005). ‘Conditions Under Which Assessment Supports Students’ Learning.’ Learning and Teaching in Higher Education, Issue 1, pp. 3-31. ISSN 1742-240X. Available at: http://eprints.glos.ac.uk/3609
Morris, C. and Goodyear, A. (2017). ‘Transforming assessment practices for undergraduate engineering students in a distance-learning environment’. In Transforming Assessment in Higher Education, a case study series, HEA, pp. 94-98, December 2017. Available at: https://www.heacademy.ac.uk/system/files/hub/download/Transforming%20Assessment%20in%20Higher%20Education%20-%20A%20Case%20Studies%20Series.pdf

54 - Impact of Feedback on Assessment of Final Examinations at Institutional Level
Dr. Allah Rakha, Mr. Waqas Latif
University of Health Sciences, Lahore, Lahore, Pakistan

Conference Theme
Leading change in assessment and feedback at programme and institutional level

Abstract
Assessment is an integral part of the teaching and learning process that describes students’ best performance across time (Biggs, 1996; Hattie & Jaeger, 1998). Such assessment provides useful feedback to students on how to improve their performance, but it does not provide feedback to medical institutes about how to modify the teaching techniques in those subjects in which students’ performance was not up to the mark. In medical education, assessment of medical students is measured using a final examination at the end of a course or period of learning. The final examination of medical students comprises practical and oral demonstrations of learning and written examinations in the different subjects of basic medical sciences and clinical sciences. Final examinations only assess what students have learnt and give feedback to students on learning gaps to improve their learning. These exams do not provide feedback regarding gaps at institutional level. The University of Health Sciences, Lahore performed an audit and analysed various components of the anatomy, biochemistry and physiology 1st professional examination of the MBBS programme in 2017 and shared the feedback with the principals of all the medical colleges. The audit and analysis indicated that more effective teaching efforts were required on the part of teachers and in the practical examinations. All principals were directed to look into the matter and discuss with the concerned professors how to rectify the learning gaps and improve students’ learning. After one year, the result of the 1st professional MBBS annual examination 2017 was compared with the result of the 1st professional MBBS annual examination 2018. The results indicate that there was an improvement in the pass percentage of students in the 1st professional MBBS annual examination 2018 as compared to 2017. Feedback at institutional level thus enhances students’ performance, as well as the reputation of the institution and the confidence of stakeholders.

Key References
Biggs, J. (Ed.) (1996) Testing: to educate or to select? Education in Hong Kong at the crossroads. Hong Kong: Hong Kong Educational Publishing Company.
Hattie, J. & Jaeger, R. (1998) Assessment and classroom learning: a deductive approach. Assessment in Education: Principles, Policy and Practice, 5(1), pp. 111–122.

55 - Exploratory study of the implementation of institutional assessment programs in higher education
Ana Remesal, María José Rochera, Núria Juan
Universidad de Barcelona, Barcelona, Spain

Conference Theme
Leading change in assessment and feedback at programme and institutional level

Abstract
The objective of this exploratory case study is to analyze how two university professors implement an institutional assessment program, identifying the main factors that shape those implementations. Two highly experienced professors and their pre-graduate students, who were in their last semester carrying out their Final Bachelor Project, participated in the study. The students must perform a small research or educational implementation project bringing evidence of the professional competence they have developed throughout the four previous courses. The institutional assessment program foresees a formative assessment strategy over the whole semester and expects four deliveries of progressive advancement by the students. Moreover, the institution expects blended supervision of the students’ work, offering its own technological instruments to the tutors. In the first delivery the students must propose the topic of research/implementation along with objectives/research questions; in the second delivery the students must present advancement in their written project; in the third delivery the final written report must be presented to the tutor; and eventually, in the fourth stage, the students must defend their work in an oral exam before a tribunal formed by two professors. Throughout the process the university tutors are supposed to offer formative feedback to the students, in order to improve their respective deliveries. In this poster we present results concerning the tutors as feedback providers. Data were collected following the temporal course of this final assignment. An initial questionnaire was presented to the tutors to ascertain their knowledge about feedback. Later, three process questionnaires, along with each of the students’ partial deliveries, were also gathered, addressing the tutors’ personal evaluation of the use their students were making of their feedback. Finally, after the final delivery the tutors answered a semi-structured interview. Also, all feedback messages offered by both tutors were collected. Feedback was provided by a variety of means: personal interviews with students, group meetings, Google Drive, and the collective forum in the institutional LMS (Moodle). The results show that different implementation formats, that is, different programmatic assessment practices, take form under the same institutional umbrella. Factors intervening in the tutors’ action were, for instance, their ideas about assessment and feedback. Moreover, we identified limitations in the institutional policies regarding the assessment program for the development of this final Bachelor assignment, such as the tutors’ workload, their preparation for guiding this kind of final competence project, and the distribution of students with regard to thematic expertise.

Keywords
Higher education; formative feedback; Bachelor Final Project.

Key References
Baartman, L. K., Bastiaens, T. J., Kirschner, P. A., & Van der Vleuten, C. P. (2006). The wheel of competency assessment: Presenting quality criteria for competency assessment programs. Studies in Educational Evaluation, 32(2), 153-170.
Carless, D., Bridges, S. M., Chan, C. K. Y., & Glofcheski, R. (Eds.). (2017). Scaling up assessment for learning in higher education. Springer.
Evans, C. (2013). Making Sense of Assessment Feedback in Higher Education. Review of Educational Research, 83(1), 70-120.
Nicol, D.J. (2010). From monologue to dialogue: improving written feedback processes in mass higher education. Assessment & Evaluation in Higher Education, 35, 501-517.
Price, M., Handley, K., Millar, J. & O’Donovan, B. (2010). Feedback: all that effort, but what is the effect? Assessment and Evaluation in Higher Education, Vol. 35, No. 3, 277-289.
Van der Vleuten, C. P., Schuwirth, L. W. T., Driessen, E. W., Dijkstra, J., Tigelaar, D., Baartman, L. K. J., & van Tartwijk, J. (2012). A model for programmatic assessment fit for purpose. Medical Teacher, 34(3), 205-214.

56 - Assessment in engineering programmes: a systematic review of the literature
Jonathan Halls1, Carmen Tomas2, John Owen1, Kamel Hawwash3
1 University of Nottingham, Nottingham, United Kingdom. 2 University of Nottingham, Nottingham, United Kingdom. 3 University of Birmingham, Birmingham, United Kingdom

Conference Theme
Leading change in assessment and feedback at programme and institutional level

Abstract
Reported here is a systematic review of assessment practice in engineering, the aim of which is to identify specific characteristics of assessment practice in engineering. This work is part of a larger collaborative project between two universities seeking to review and improve their assessment practices in Engineering. Using systematic review methodology (Gough, 2007), 104 sources were identified based on a broad set of keywords and inclusion criteria. These articles were coded for a range of factors, including research methods and different stages in the assessment life-cycle, which were derived from the University of Nottingham’s Assessment Framework (Tomas & Scudamore, 2014). Studies on assessment practices in engineering education, and their key findings, appear well aligned with the broader literature and existing guidance on good assessment practice. However, there is still little work on the problem of how to cross the divide between research and practice, to better support those in engineering education settings. The literature review has provided insights into the main emphases in engineering education research relating to assessment: it has identified a great emphasis on accreditation, automated marking, designing MCQ tests and assessment of group work. In contrast with this range of topics concerning how to implement good practice, much less research was found on programme-level review. Only ten per cent of papers explored programme-level review of assessment practices, suggesting a renewed focus is needed in research and practice on this important area. In terms of research methodology used, seventy-five per cent of papers are comparisons or case studies, where ideas from the broader higher education literature on assessment have been adopted and applied in specific engineering settings. Such examples serve as guides for staff to implement changes. Studies with robust comparisons of approaches and methods of assessment are rare (thirteen per cent). It is suggested that more robust evaluation of competing approaches in real-world contexts might be highly desirable to continue advancing this field. The methods used in this project and the lessons learnt should be beneficial to those in other disciplines wishing to a) improve their assessment practices and b) explore discipline-specific insights in relation to good assessment practice.

Key References
Gough, D. (2007) Weight of evidence: a framework for the appraisal of the quality and relevance of evidence. Applied and Practice-Based Research, 22(2), 213-228.
Tomas, C. and Scudamore, R. (2014) Using an assessment conceptual framework to facilitate institutional transformation of assessment. EARLI Assessment and Evaluation SIG, Madrid, August 2014.

Parallel Session 4
Chair: Amanda Chapman
Time: 16:50 - 17:20
Date: 26th June 2019
Location: Piccadilly Suite

57 - Assessment-as-portrayal: strategic negotiation of persona in assessment
Rola Ajjawi1, David Boud2, David Marshall1
1 Deakin University, Melbourne, Australia. 2 University of Technology, Sydney, Australia

Conference Theme
Assessment for learning: feedback and the meaning and role of authentic assessment



Abstract
Assessment serves multiple purposes, such as certification, guiding intended learning and preparing students to operate effectively in the world through being able to make judgements of the quality of their work (Tai, Ajjawi, Boud, Dawson, & Panadero, 2018). We propose in this paper that assessment should also be used to shape how learners come to think about themselves and their practice and present themselves to the world. There has been a shift towards 'assessment-as-portrayal', where students are encouraged to portray their achievements, as aligned to intended learning outcomes, through formats such as portfolios (Clarke & Boud, 2016). Drawing on the body of work known as persona studies enables us to extend present notions of assessment-as-portrayal to benefit from how portrayal is used in worldly settings. Persona can be considered a strategic identity: "a fabricated reconstruction of the individual that is used to play a role that both helps the individual navigate their presence and interactions with others and helps the collective to position the role of the individual in the social" (Marshall & Henderson, 2016, p. 1). We propose here that the concept of persona can provide productive ways to understand the contemporary configuration of identity in assessment tasks. By introducing assessment portrayals we seek to understand how students can strategically construct their persona through different media and for different audiences through meaning making in relation to both the worlds of work and study. Reimagining assessment-as-portrayal – weaving in a strategic portrayal of self as well as achievements to particular audiences – challenges traditional notions of assessment on a number of fronts. First, it introduces an explicitly subjective dimension that assessment practices have attempted to obscure (Bloxham, den-Outer, Hudson, & Price, 2016). Second, strategic portrayals of self by students focus us on recognising that multiple representations can be equally valid. Portrayal of achievements can be validated by the course and institution, but portrayals of self and context prompt a rethink of how we might judge the quality of portrayal for future work. Third, rather than discounting the self, students can strive to be agentic in the construction of who they are and who they are becoming as well as what is required of them in defining their stance (Vu & Dall'Alba, 2014), hence building the necessary judgements appropriate for the social world they wish to join. Persona studies offers the possibility of reimagining assessment to authentically engage students' multiple identities.
Key References
Bloxham, S., den-Outer, B., Hudson, J., & Price, M. (2016). Let's stop the pretence of consistent marking: exploring the multiple limitations of assessment criteria. Assessment & Evaluation in Higher Education, 41(3), 466-481. doi:10.1080/02602938.2015.1024607
Clarke, J. L., & Boud, D. (2016). Refocusing portfolio assessment: Curating for feedback and portrayal. Innovations in Education and Teaching International, 1-8. doi:10.1080/14703297.2016.1250664
Marshall, P. D., & Henderson, N. (2016). Political Persona 2016 - an Introduction. Persona Studies, 2(2), 1-18.
Tai, J., Ajjawi, R., Boud, D., Dawson, P., & Panadero, E. (2018). Developing evaluative judgement: enabling students to make decisions about the quality of work. Higher Education, 76(3), 467-481. doi:10.1007/s10734-017-0220-3
Vu, T. T., & Dall'Alba, G. (2014). Authentic Assessment for Student Learning: An ontological conceptualisation. Educational Philosophy and Theory, 46(7), 778-791. doi:10.1080/00131857.2013.795110


Parallel Session 4
Chair: Dave Darwent
Time: 16:50 - 17:20
Date: 26th June 2019
Location: Room 2
58 - Feedforward: a systematic review of a concept
Ian Sadler1, Nicola Reimann2, Kay Sambell3
1 Liverpool John Moores University, Liverpool, United Kingdom. 2Durham University, Durham, United Kingdom. 3Edinburgh Napier University, Edinburgh, United Kingdom
Conference Theme
Assessment for learning: feedback and the meaning and role of authentic assessment
Abstract
Recent debates about feedback in higher education emphasise student agency: their sense-making, uptake and action taken (Carless and Boud 2018). Some have argued that the term 'feedforward' may distract from attempts to reconceptualise feedback or inhibit new paradigms, and that learning-oriented conceptualisations of feedback already subsume ideas associated with feedforward (Tai et al. 2017). However, feedforward is a term that is not only increasingly used and valued by practitioners but, perhaps surprisingly, also features in a wide range of educational literature. Before dismissing it as unhelpful, a further examination of this literature seems timely to help reveal the varying ways in which feedforward is understood and framed, taking account of the widest possible range of sources and paradigms. This paper will report the methodology and initial findings of a systematic review of peer-reviewed literature. This interpretivist review (Gough et al. 2017) is somewhat unusual as it focuses on the way in which the concept has been employed, rather than on the 'effectiveness' of feedforward interventions and an aggregation of relevant empirical results, as might be expected in a systematic review. It has been guided by the following review questions: How is feedforward framed in publications which focus on pedagogic practices that authors identify as feedforward? How is feedforward conceptualised? What practices are associated with feedforward? How is their effectiveness evaluated/measured? A search of British Education Index, ERIC, Web of Science, Scopus and PsycInfo was undertaken which resulted in 543 publications between January 2007 and September 2018 (duplicates removed). In order to iteratively refine the review questions and inclusion/exclusion criteria and to calibrate the reviewers' eligibility judgements, a random sample of 30 was selected for initial consideration. Our first reading found many different usages of the term feedforward: as a label for a specific pedagogic practice, as a label for one selected element of a practice, as an analytical category discussed in detail and frequently based on Hattie and Timperley's (2007) feed up, feed back, feed forward model, or as a term simply mentioned in passing but not discussed in detail and with reference to relevant literature. Following stage 1 of the screening process, taking account of titles, abstracts and keywords, full texts are currently being screened for eligibility. The paper will focus on both the process of this 'conceptual review' and provisional findings derived through thematic synthesis (Thomas and Harden 2008).
Key References
Carless, D., and D. Boud. 2018. "The development of student feedback literacy: enabling uptake of feedback." Assessment & Evaluation in Higher Education 43 (8): 1315–1325.
Gough, D., S. Oliver and J. Thomas. 2017. An introduction to systematic reviews. London, Thousand Oaks, California, New Delhi and Singapore: Sage.
Hattie, J., and H. Timperley. 2007. "The Power of Feedback." Review of Educational Research 77 (1): 81-112.



Tai, J., R. Ajjawi, D. Boud, P. Dawson, and E. Panadero. 2018. "Developing evaluative judgement: enabling students to make decisions about the quality of work." Higher Education 76 (3): 467-481.
Thomas, J., and A. Harden. 2008. "Methods for the thematic synthesis of qualitative research in systematic reviews". BMC Medical Research Methodology 8 (45). Available at: https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/1471-2288-8-45 [last accessed 20/1/2019].
Parallel Session 4
Chair: Silke Lange
Time: 16:50 - 17:20
Date: 26th June 2019
Location: Room 3
59 - Making the Case for Mindful Assessment Design
Sam Elkington
Teesside University, Middlesbrough, United Kingdom
Conference Theme
Addressing challenges of assessment in mass higher education
Abstract
There is no doubt that many students and staff would prefer university assessment to be different to what they currently experience. However, if we take the position that assessment needs to be seen as an indispensable aspect of a lifewide attitude to learning, the question then is what kind of learning should be assumed in and through our teaching and assessments. Our collective attention when it comes to teaching and assessment tends to be on educational effectiveness and efficiency rather than on the extent to which our approaches and strategies of choice support diverse and sustainable ways of knowing that honour and empower learner experience and development. Increasingly important are considerations surrounding student wellbeing in curriculum design; particularly minimising the negative consequences of stress and anxiety for our students in relation to expectations around assessment on their courses – something that has very real implications for academic attainment, retention and employability. This paper is informed by ongoing interdisciplinary action research that draws on the principles of mindful learning and current thinking and evidence in promoting self-regulatory practices to reframe learner-centred assessment as a primary vehicle for authentically personalised learning. Such 'mindful assessment', as it is conceptualised here, necessarily embraces a more encompassing and flexible view of student learning development; one that attends to a number of affective self-regulatory elements that ought to be considered at the point of design, namely: student sensitivity to context and new perspectives, intentionality of attention, managing personal responses to feedback, and resilience. Finally, the paper explores a range of mindfulness interventions to prevent distress and boost performance in relation to student assessment, in an attempt to shift and advance our collective attention on to the practical implications of mindful assessment design in higher education.



Parallel Session 4
Chair: Edd Pitt
Time: 16:50 - 17:20
Date: 26th June 2019
Location: Room 4
60 - Second-class citizens? Using Social Identity Theory to explore students' experiences of assessment and feedback
Neil Lent, Jill MacKay, Kirsty Hughes, Hazel Marzetti, Susan Rhind
University of Edinburgh, Edinburgh, United Kingdom
Conference Theme
Leading change in assessment and feedback at programme and institutional level
Abstract
This presentation is based on thematic analysis of all National Student Survey (NSS) free-text responses in one year (2016) at a Scottish pre-1992 university. The initial driver was to use an under-used data source (relative to the fixed-response data from the NSS) and to attempt to gain insights into the lived experience of students at the end of their undergraduate degree. What we found were accounts of experiences that could easily be understood in terms of an 'us versus them' dynamic (students versus staff). We used Social Identity Theory (Tajfel and Turner, 1979) to explore these themes and concluded that assessment can act as a barrier between staff and students, especially where students are not given effective feedback. Where respondents seemed to feel that assessment practices were excluding them from a group to which both students and staff could belong (such as their discipline), they expressed dissatisfaction and frustration in ways that suggested assessment and feedback were 'done' to them rather than something they took part in. The feeling of 'us versus them' fits poorly with the evidence that assessment and feedback are best viewed as dialogic processes enabling students to be independent, self-regulated learners (e.g. Nicol & Macfarlane-Dick, 2006) rather than as a commodity with a particular use value (Ajjawi and Boud, 2017). This lack of agency is at odds with self-regulation and calls into question some institutional responses to enhancing assessment and feedback. Such measures tend to focus on structure and often treat feedback as a commodity rather than a process, which reinforces a consumerist model of education. This focus can easily be seen as reinforcing inter-group differences between students (consumers) and teaching staff (service providers). This study adds to the growing body of work encouraging a dialogic approach to ensure students are able to make the best use of feedback, and suggests it may also have the encouraging side-effect of improving student satisfaction through contributing to a sense of belonging to a new discipline or programme community. We argue that the approach taken to the NSS data can be applied more widely in terms of students' experience of assessment and feedback, and can provide insights into how the wider programme/course context can be improved to enhance dialogic feedback.
Key References
Ajjawi, R., & Boud, D. (2017). Researching feedback dialogue: an interactional analysis approach. Assessment and Evaluation in Higher Education, 42(2), 252–265.
Carless, D. (2015). Exploring learning-oriented assessment processes. Higher Education, 69(6), 963-976.
Nicol, D. J., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199–218.


Tajfel, H., & Turner, J. (1979). An integrative theory of intergroup conflict. In The Social Psychology of Intergroup Relations (pp. 33–47).
Parallel Session 4
Chair: Maria Rosari Marsico
Time: 16:50 - 17:20
Date: 26th June 2019
Location: Room 5
61 - Changes in Technology-assisted Assessment and Feedback in UK Universities
Jinrong Fang1,2, Gwyneth Hughes2
1 University of Hertfordshire, Hatfield, United Kingdom. 2University College London, London, United Kingdom
Conference Theme
Integrating digital tools and technologies for assessment
Abstract
British higher education institutions have experienced changing expectations associated with the quality of the student learning experience and waves of technological innovation. Ferrell and Sheppard (2013) highlight the importance of the adoption of technology in the pursuit of high quality learning, teaching and assessment. Technology-enhanced assessment (TEA) describes the use of technology to extend or add value to the assessment and feedback processes (JISC 2010). This longitudinal study identifies opportunities, benefits and challenges of technology-enhanced assessment and feedback for improving student academic performance in two universities between 2013 and 2018. Interviews were conducted with eleven tutors and learning technologists from two UK universities. The findings reveal a significant increase in the number of new technologies applied, and that the application of technologies in assessment activities varies with the number of students on the course. Over this period, the main drivers for implementing new technology in assessment activities were efficiency and improving student learning. The findings also indicate that TEA could both enhance student learning and help tutors improve efficiency. Meanwhile, tutors' awareness of the pedagogic benefits of TEA has increased over this period. Among the general challenges that emerged in this study are the continued resistance of some teaching staff to the adoption of new technology in teaching and assessment and the pressure on technical staff to keep on top of new developments. A major and as yet unresolved challenge is the inconsistent application of technology in different modules of the same course. Furthermore, the data indicate that the two universities investigated do not have a culture which allows tutors to fail in trying new technologies in formative and summative assessment, because failure may affect NSS and module feedback scores. Using new technologies in assessment is a challenge for some tutors as it requires them to take risks. The findings of this study can inform the development of policies and procedures in universities for the application of new assessment technologies, so that staff can be supported and motivated to develop TEA effectively. It also argues for consistency in the choice and application of teaching technologies across modules within the same course.
Key References
Ferrell, G. & Sheppard, M. 2013. Supporting assessment and feedback practice with technology: a view of the UK landscape.


JISC. 2010. Effective Assessment in a Digital Age: A guide to technology-enhanced assessment and feedback [Online]. www.jisc.ac.uk/elearningprogramme: JISC. [Accessed 2nd Oct 2014]
Parallel Session 4
Chair: Marie Stowell
Time: 16:50 - 17:20
Date: 26th June 2019
Location: Room 7
62 - Evaluative Judgement in Chemistry Practical Project Modules
Anna Bertram, Carmen Tomas
University of Nottingham, Nottingham, United Kingdom
Conference Theme
Developing academic integrity and academic literacies through assessment
Abstract
A practical module in the School of Chemistry was redesigned to incorporate a year-long approach to the development of students' evaluative judgement. Evaluative judgement is students' ability to make judgements about their own work and that of others (Boud et al 2018). The redesigned module consists of an integrated approach to providing information and engaging students in assessment, rooted in models of self-regulation (Zimmerman 2000; Panadero and Broadbent 2018). Activities designed as part of this integrated approach included:
• Providing rubrics to students
• Engaging students in understanding criteria
• Engaging students in making judgements about the work of others (co-assessment): marking of sample work or marking of others
• Self-assessment: engaging students in assessing themselves and making action plans

The conceptual work on evaluative judgement (Boud et al 2018) provides a useful and integrative conceptual framework for instruction and learning. The implementation of these various strategies in an integrated manner is less well understood; for example, how many activities and how often are questions that future studies need to investigate. This case study outlines our plan for the sustained development of students' evaluative judgement. The third year practical module involves students working in teams to undertake two mini-research projects, one in each semester. The format and assessment of both projects are the same but the chemistry differs.
Semester 1, first two weeks. Activities designed to help students prepare for the first laboratory project included:
• Identifying skills already developed and those additional skills which would need further development.
• Identifying appropriate assessment criteria - student generated and staff generated, followed by discussion with peers and academics.
• Group peer reviews of a range of example reports and discussion with academic staff.
• Peer assessed presentation for formative feedback.
Semester 2, first two weeks. Activities aimed to help students review their performance, reflect, learn and plan further action that they can apply to a second project:
• Students self-assess and review their feedback from the first project.
• Discuss feedback with the assessor.
• Write a development plan for Project 2.

Early evaluations show a positive impact on students' ability to understand criteria and expectations. The evaluation is ongoing through the year to establish the impact on learning and to understand the student experience of these various activities.
Key References
Boud, D., Ajjawi, R., Dawson, P., & Tai, J. (2018). Developing evaluative judgement in higher education. Abingdon, Oxon: Routledge.
Panadero, E., & Broadbent, J. (2018). Developing evaluative judgment: self-regulated learning perspective. In D. Boud, R. Ajjawi, P. Dawson, & J. Tai (Eds), Developing evaluative judgement in higher education: Assessment for knowing and producing quality work, pp. 81-89. London: Routledge.
Zimmerman, B.J. (2000). Attaining self-regulation – a social cognitive perspective. In M. Zeidner, P.R. Pintrich and M. Boekaerts (Eds), Handbook of self-regulation, pp. 14-19. San Diego, CA: Academic Press.
Parallel Session 4
Chair: Jill Barber
Time: 16:50 - 17:20
Date: 26th June 2019
Location: Room 10
64 - Family feedback: exploring the experiences of 'commuter students'
Rita Headington
Independent, Canterbury, United Kingdom
Conference Theme
Addressing challenges of assessment in mass higher education
Abstract
The UK's move to mass higher education, with rising tuition fees and accommodation costs, has increased the number of 'commuter students' (Donnelly and Gamsu, 2018) who, particularly in metropolitan areas, live in their parents' home while attending university. Chua et al. (2011) argued that kinship/family formed a 'specialised tie' that provided long-term emotional support across a range of circumstances, with physical proximity acting to preserve close relationships. At school level, studies have highlighted parental involvement in students' academic achievement, identifying parents' high expectations and communications with students about activities (Castro et al., 2015). There is also evidence that parents are increasingly playing a role in the practical and financial aspects of students' university choices (UPP, 2019). While feedback has been recognised as essential to learning, the availability, timeliness and personalisation of 'formal' tutor feedback in mass higher education remain problematic. Additionally, feedback recipients need to trust the knowledge and the accuracy of feedback givers' judgements (Carless, 2015) and have the opportunity to enter into meaningful dialogue with them (Nicol, 2010). Headington (2018) asserted that 'informal' peer feedback supplemented tutor feedback. Developed through proximity, it enabled dialogue based upon students' shared values and experiences. Whether the informal feedback of family members offers similar opportunities to 'commuter students' remains an unanswered question.



This paper reports on the findings of a longitudinal investigation of students' informal feedback networks, based upon a cohort of c.100 student teachers across the three years of a UK primary education degree programme at a metropolitan university. Responses from three cohort surveys and the diary and interview data of several 'commuter students' provided insights into these students' informal feedback relationships with family members. The research found that, while family members lacked expert subject knowledge, their feedback was firmly based on prior knowledge of the individual. It provided 'commuter students' with an accessible, trusted source that could scaffold development and aid continuity. However, it also required students to share their knowledge and understanding, and to identify and then nurture appropriate feedback sources within the family while managing close interpersonal relationships.
Key References
Carless, D. (2015) Excellence in University Assessment, Abingdon: Routledge.
Castro, M., Expósito-Casas, E., López-Martin, E., Lizasoain, L., Navarro-Asencio, E. and Gaviria, J. L. (2015) 'Parental involvement on students' academic achievement: A meta-analysis', in Educational Research Review, 14: 33-46.
Chua, V., Madej, J., & Wellman, B. (2011) 'Personal Communities: The World According to Me', in J. Scott, & P. J. Carrington (Eds.), The SAGE Handbook of Social Network Analysis. London: SAGE.
Donnelly, M. and Gamsu, S. (2018) Home and Away: Social, ethnic and spatial inequalities in student mobility, London: The Sutton Trust.
Headington, R. (2018) 'Students' informal peer feedback networks' in Practitioner Research in Higher Education: Special Assessment Issue, 11: 4-14.
Nicol, D. (2010) 'From monologue to dialogue: improving written feedback processes in mass higher education', in Assessment and Evaluation in Higher Education, 35(5), 501-517.
University Partnerships Programme (UPP) (2019) Annual Student Experience Survey 2017, available at https://www.upp-ltd.com/student-survey/, last accessed 21 January 2019.
Parallel Session 4
Chair: Eileen O'Leary
Time: 16:50 - 17:20
Date: 26th June 2019
Location: Room 11
65 - Critical reflections on the implementation of 'The Puzzle', an authentic assessment task designed for academics enrolled in a professional higher education course on assessment
Laura Dison, Kershree Padayachee
Wits University, Johannesburg, South Africa
Conference Theme
Assessment for learning: feedback and the meaning and role of authentic assessment
Abstract
The key purpose of the Postgraduate Diploma in Higher Education at Wits University is to prepare academics to teach and assess in ways that engage, challenge and transform student learning. The Assessment Course makes use of performance-based assessments that involve 'real world application of knowledge and skills' (McMillan 2015, 55). The curriculum of the Assessment Course covers a number of areas relevant to academics. It is based on theoretically grounded and practical strategies which cover a range of learning-oriented assessment theories and concepts (Carless 2015). Assessment strategies in the course foster


course participants'[1] capacity to reflect on and apply assessment concepts and tools in their own disciplinary contexts. This paper analyses a core assessment task known as 'The Puzzle', which draws on principles of constructive alignment (Biggs and Tang, 2011), course congruence (Ashwin et al, 2015) and sustainable feedback practices (Bloxham et al, 2016; Carless, 2015). Participants engage with a specific assessment challenge that they face in their own teaching context and are required, through various tasks linked to the Puzzle, to constantly reflect on the difficult issues that arise as new knowledge is presented. Our evaluation of how students have responded to the Puzzle assessment over the past three years illustrates their shift to viewing assessment as an intrinsic part of the learning process, closely aligned to their everyday realities. Furthermore, by involving participants in the co-construction of the Puzzle task rubric, the course presenters have been able to model 'good' teaching and assessment practices through the design of assessment criteria and rubrics using conceptual tools like the SOLO Taxonomy (Biggs and Collis, 1989). We argue that this performance-based reflective task, centred around a core assessment issue, has the potential to transform participants' understanding and application of assessment principles as they grapple with a disorientating assessment dilemma, viewing it through multiple lenses and formulating solutions to the problem. They experience the complexity of formative assessment as they participate in critical dialogues with their facilitator and peers to justify their assessment choices. This study explores the possibilities of the Puzzle improving the quality of assessment criteria and rubrics as well as the level of student engagement with assessment tasks in different disciplinary contexts. It has created an authentic learning experience that enhances participants' capacity to bridge the gap between theory and practice.
[1] In the PGDipHE, students are referred to as participants.
Key References
Ashwin, P., D. Boud, K. Coate, F. Hallet, E. Keane, K. Krause, B. Leibowitz, I. Maclaren, J. McArthur, V. McCune and M. Tooher. 2015. Reflective teaching in higher education. London: Bloomsbury.
Biggs, J. and C. Tang. 2011. Teaching for quality learning at university. Great Britain: Society for Research into Higher Education & Open University Press.
Bloxham, S., Den-Outer, B., Hudson, J., Price, M. (2016). Let's stop the pretence of consistent marking: exploring the multiple limitations of assessment criteria. Assessment and Evaluation in Higher Education, 41(3), 466-481.
Boud, D. and Molloy, E. (Eds) (2013) Feedback in Higher and Professional Education. Ch. 1: What is the problem with feedback? London: Routledge.
Carless, D. 2015. Excellence in University Assessment: Learning from award-winning practice. Ch. 10: Promoting student engagement with feedback, pp. 189-207. London: Routledge.
McMillan, J. 2014. Establishing high quality classroom assessments. Chap. 3 in Classroom Assessment: Principles and Practices for Effective Standards-based Instruction, pp. 51-79. London: Pearson.



Parallel Session 5
Chair: Geraldine O'Neil
Time: 17:30 - 18:00
Date: 26th June 2019
Location: Piccadilly Suite
66 - Students' Enactment of Feedback Through a Series of Graded Low-Stake Learning-Oriented Presentations
Zita Stone1, Joanna Tai2, Edd Pitt1
1 University of Kent, Canterbury, United Kingdom. 2Deakin University, Melbourne, Australia
Conference Theme
Assessment for learning: feedback and the meaning and role of authentic assessment
Abstract
For feedback to be effective, students must have opportunities to take action based on the information provided. We explored students' enactment of feedback and enhanced learning behaviours through a series of graded low-stake learning-oriented presentations. 70 final year International Business undergraduate students at a UK university were assessed by peers and staff on eight group presentations, each contributing 2.5% to their final grade. This allowed for dialogic peer feedback, presentation skill improvement, knowledge exchange, critique and enactment of feedback in subsequent presentations (Carless, 2013; Bearman et al, 2014; Nicol et al, 2014). Central to the dialogic nature of the peer feedback interactions was the meaning making and the influence this had upon students' future learning behaviours (Ajjawi & Boud, 2017; Yang & Carless, 2013). Students were surveyed about their experience at the beginning, mid-point and end of the module. Data were thematically analysed. At the beginning, students were positive about the challenge of working in groups, but concerned about the amount of work required and their peers' work ethic. At the midpoint, students reported that their knowledge had deepened. At the end of the module, students felt their presentation skills and confidence had improved. However, they felt the presentations required a lot of work for a small weighting. The students' perceptions of peer feedback fluctuated. At the beginning, students were positive about the potential for using peer feedback for improvement in their next presentation. At the midpoint and end of the module, the vast majority said they had used peer feedback to address identified weaknesses in their next presentation. A minority suggested peer feedback was superficial and not useable. Most students reflected that the assessment had deepened their approach to learning and their use of feedback in similar assessments in other modules. A few students reported that the feedback was not transferable to other assessments. Overall, it appears that many students welcomed the incentive to work throughout the module on multiple, low-stakes presentations, enacted the peer feedback and perceived that their learning had improved. The students' grade outcomes support these contentions. After presentation one the average grade was 62.6% (±8.26), at the mid-point of the module the average grade was 70.1% (±8.69), and on the final presentation the average grade was 75.1% (±9.8). These results indicate that, despite the relatively low weighting, an integrated feedback and assessment regime helps students to enact feedback, increases their learning and improves performance.
Key References
Bearman, M., Dawson, P., Boud, D., Hall, M., Bennett, S., Molloy, E., & Joughin, G. (2014). "Guide to the Assessment Design Decisions Framework." http://www.assessmentdecisions.org/guide/.



Carless, D. (2013). Trust and Its Role in Facilitating Dialogic Feedback. In Feedback in Higher and Professional Education: Understanding It and Doing It Well, edited by D. Boud & E. Molloy, 90–103. London: Routledge.
Carless, D. (2015). Excellence in University Assessment: Learning from Award-Winning Practice. London: Routledge.
Nicol, D., Thomson, A., & Breslin, C. (2014). Rethinking Feedback Practices in Higher Education: A Peer Review Perspective. Assessment & Evaluation in Higher Education, 39(1), 102–122.
Yang, M., & Carless, D. (2013). The Feedback Triangle and the Enhancement of Dialogic Feedback Processes. Teaching in Higher Education, 18(3), 285–297.

Parallel Session 5
Chair: Mary McGrath
Time: 17:30 - 18:00
Date: 26th June 2019
Location: Room 2
67 - Validation of a Feedback in Learning Scale; behaviourally anchoring module performance
Mark Jellicoe, Alexandra Forsythe
The University of Liverpool, Liverpool, United Kingdom
Conference Theme
Assessment for learning: feedback and the meaning and role of authentic assessment
Abstract
Research attention has seen a move away from feedback delivery mechanisms towards those that support learners to receive feedback well (Winstone, Nash, Parker, & Rowntree, 2017). Recognising feedback, and the action necessary to take the next steps, is vital to self-regulated task performance (Panadero, 2017; Zimmerman, 2000). The evaluative judgements which support these mechanisms are vital forces that support academic endeavour and lifelong learning (Ajjawi, Tai, Dawson, & Boud, 2018). Whilst the measurement of such mechanisms is well developed in occupational settings (Boudrias, Bernaud, & Plunier, 2014), how these mechanisms relate to self-regulated gains in learning is less well understood (Forsythe & Jellicoe, 2018). Two groups of psychology undergraduates at a university in the north-west of England endorsed perspectives associated with feedback integration. Here we refined a measure of feedback integration from the occupational research domain (Boudrias, Bernaud, & Plunier, 2014) and considered its application to gainful learning in Higher Education. The measure examines process characteristics including message valence, source credibility, and the challenge associated with feedback interventions. Action characteristics included acceptance of feedback, awareness, motivational intentions, and the desire to make behavioural changes and undertake development activities as a result of feedback. Structural equation modelling was used to examine the nature of feedback integration. Exploratory factor analysis in the first cohort revealed that undergraduate learners endorsed a single process feedback factor, which we termed credible challenge. Message valence was not endorsed by learners and was therefore dropped. From the action characteristics, learners endorsed four factors: acceptance of feedback, awareness, and motivational intentions, while the desire to make behavioural changes and participate in development activities was collapsed into a single factor. The structure of the instrument



was confirmed through confirmatory factor analysis. Both models achieved mostly good, and at least acceptable, fit measures, endorsing the robustness of the measure in these participants. The results confirmed here in two samples of undergraduate psychology students increase understanding in a number of ways. Firstly, they increase our understanding of how students relate to feedback: the results suggest that a credible challenge may lead to greater student acceptance and awareness resulting from feedback. Together, these may lead to greater motivation to make self-regulated gains during learning. These promising results, whilst cross-sectional, also have implications for programmes. Further research employing this instrument is necessary to understand changes in learner attitudes in developing the beneficial self-regulated skills that support both programmes of study and graduates in their careers.
Key References
Ajjawi, R., Tai, J., Dawson, P., & Boud, D. (2018). Conceptualising Evaluative Judgement for Sustainable Assessment in Higher Education. In Developing Evaluative Judgement in Higher Education (pp. 23–33). Routledge.
Boudrias, J.-S., Bernaud, J.-L., & Plunier, P. (2014). Candidates' Integration of Individual Psychological Assessment Feedback. Journal of Managerial Psychology, 29(3), 341–359.
Forsythe, A., & Jellicoe, M. (2018). Predicting gainful learning in Higher Education; a goal-orientation approach. Higher Education Pedagogies, 3(1), 82–96.
Panadero, E. (2017). A review of self-regulated learning: six models and four directions for research. Frontiers in Psychology, 8, 422.
Winstone, N. E., Nash, R. A., Parker, M., & Rowntree, J. (2017). Supporting learners' agentic engagement with feedback: A systematic review and a taxonomy of recipience processes. Educational Psychologist, 52(1), 17–37.
Zimmerman, B. J. (2000). Attaining Self-Regulation: A Social Cognitive Perspective. In Handbook of Self-Regulation (pp. 13–39). Elsevier.
Parallel Session 5
Chair: Peter Holgate
Time: 17:30 - 18:00
Date: 26th June 2019
Location: Room 3
68 - DADA: A toolkit to design and develop alternative assessment
Silvia Colaiacomo1, Pauline Hanesworth2, Kate Lister3, Ben Watson1, Tracey Ashmore1
1 University of Kent, Canterbury, United Kingdom. 2Advance HE, Edinburgh, United Kingdom. 3Open University, Milton Keynes, United Kingdom
Conference Theme
Assessment: learning community, social justice, diversity and well-being
Abstract
The DADA (Design and Develop Alternative Assessment) project results from a collaboration between the University of Kent, the Open University and Advance HE. DADA aims to develop and disseminate a framework and a toolkit that can support staff involved in assessment design to map assessment types and to identify and develop alternative inclusive assessment formats in line with the principles of constructive alignment (Biggs and Tang, 2011). DADA offers a toolkit, guidance and examples to support informed decisions about assessment design and development across disciplines. Throughout the proposed session, participants will familiarise themselves with the toolkit and will develop a better understanding of inclusive assessment (Hanesworth et al., 2018; Keating et al., 2012; Waterfield and West, 2006) through discussion and hands-on experience.


DADA is based on a pilot run in collaboration with academics, academic and student support staff, and students across a variety of subject areas in different HE institutions in the UK. An essential aspect of the project is its 'distributed leadership' (Jones and Harvey, 2017), as it was only made possible by the cooperation of different teams and participants, each contributing their specific area of expertise. As alternative assessment starts from an awareness and understanding of students' needs, the role of students as partners (Healey et al., 2010) in the project has been essential. DADA brings together case studies and lived experiences of colleagues working with various student populations and within different policy contexts and regulations (including assessment accredited by professional bodies). The project is ongoing and open to new participants.

Key References
Biggs, J. and Tang, C. (2011), Teaching for quality learning at university, 4th ed. Maidenhead: Open University Press
Hanesworth, P., Bracken, S. and Elkington, S. (2018), A typology for a social justice approach to assessment: learning from universal design and culturally sustaining pedagogy, in Teaching in Higher Education, DOI: 10.1080/13562517.2018.1465405
Healey, M., Mason O'Connor, K. and Broadfoot, P. (2010) Reflecting on engaging students in the process and product of strategy development for learning, teaching and assessment: an institutional example, in International Journal for Academic Development, 15 (1), 19–32
Jones, S. and Harvey, M. (2017), A distributed leadership change process model for higher education, in Journal of Higher Education Policy and Management, 39:2, 126-139
Keating, N., Zybutz, T. and Rouse, K. (2012), Inclusive Assessment at Point-of-Design, in Innovations in Education and Teaching International, 49 (3): 249–256
Waterfield, J. and West, B. (2006), Inclusive Assessment in Higher Education: A Resource for Change, Plymouth: University of Plymouth
Parallel Session 5
Chair: Helen Pittson
Time: 17:30 - 18:00
Date: 26th June 2019
Location: Room 4
69 - Leading change with equity in mind: An institutional view of learning design
Natasha Jankowski1,2, Erick Montenegro1
1 University of Illinois, Champaign, USA. 2National Institute for Learning Outcomes Assessment, Champaign, USA
Conference Theme
Leading change in assessment and feedback at programme and institutional level
Abstract
Institutions of postsecondary education throughout the United States employ systematic processes to examine the learning of individual students across their institution and at the completion of a programme, review the data collected from these processes, and then determine what changes to make to advance student learning within an individual module as well as collectively across the programme and institution. While a wide variety of data are collected from student learning in the form of assignments, tasks, and demonstrations, there is little institution-wide use of the data to make changes that lead to actual enhancements in student learning outside of individual modules. Two approaches have been undertaken to address this gap in the use of evidence of student learning to lead change: the first addresses making changes to educational and instructional design in partnership


with faculty development efforts; the other works through predictive learning analytics to better filter students into the learning experiences in which they may be most successful. In this discussion, we present a brief introduction to both approaches where assessment has been utilized to lead change, and then argue that both have failed to fully address issues of equity and social justice. In leading change efforts around educational design, if equity of learning and issues of social justice are not explicitly part of the meaning-making process around data and its subsequent use, then changes that increase barriers, hinder particular students and their learning, and increase injustices can become ingrained or prevalent (or unchecked) in educational redesign efforts. We argue for the inclusion of various student populations in the analysis and meaning-making of any institutional or programmatic data as a mechanism to address issues of equity and social justice. In addition, we provide examples from the landscape analysis of the National Institute for Learning Outcomes Assessment (NILOA) in the United States on how institutions are attempting to utilize learning frameworks and spaces of learning beyond the traditional curriculum to address wide validation of learning from various non-academic sources. We will then discuss implications for practice and areas of further research before moving into group discussion.
Key References
Aronson, B., & Laughter, J. (2016). The theory and practice of culturally relevant education: A synthesis of research across content areas. Review of Educational Research, 86(1), 163-206.
Bal, A., & Trainor, A. A. (2016). Culturally responsive experimental intervention studies: The development of a rubric for paradigm expansion. Review of Educational Research, 86(2), 319-359.
Dowd, A. G., & Bensimon, E. M. (2015). Engaging the "race question": Accountability and equity in U.S. higher education. New York, NY: Teachers College at Columbia University.
Ladson-Billings, G. (1995). Toward a theory of culturally relevant pedagogy. American Educational Research Journal, 32(3), 465-491.
McArthur, J. (2016). Assessment for Social Justice: the role of assessment in achieving social justice. Assessment and Evaluation in Higher Education, 41(7), 967-981.
Montenegro, E., & Jankowski, N. A. (2017, January). Equity and assessment: Moving towards culturally responsive assessment. (Occasional Paper No. 29). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA).
Parallel Session 5
Chair: Naomi Winstone
Time: 17:30 - 18:00
Date: 26th June 2019
Location: Room 5
70 - Students and assessors in conversation about authentic multimodal assessment
Mira Vogel
King's College London, London, United Kingdom
Conference Theme
Integrating digital tools and technologies for assessment
Abstract
Responding to societal changes, research-based curricula tend to foreground students as authors and makers galvanised by potential audiences. At Lincoln the seminal 'Student as Producer' strategy (1) emphasised "the role of students as collaborators in the production of knowledge", while at UCL the 'Connected Curriculum' framework (2) envisaged "a catalyst for making better connections between academics, students and 'real world' communities".


Consequently, what is recognised as an academically valid student 'essay' has expanded beyond the typed page and now encompasses emerging forms of academic communication. High camera phone ownership and an abundance of free web platforms for user-generated content have not only enabled this expansion, they have demanded it. In 1996 the New London Group (3) presciently called for 'multiliteracy pedagogies' which would promote students' "understanding and competent control of representational forms that are becoming increasingly significant in the overall communications environment, such as visual images and their relationship to the written word". Today's students are producing assessed work in a range of digital modes and voices, including blogs, videos, infographics, and podcasts. This session shares findings from a 2016-17 UCL Connected Curriculum Fellowship project which explored digital multimodal assessment across disciplines. A literature review generated questions for semi-structured interviews which brought pairs or threes of students and their assessors together in video-recorded conversations. Transcripts were inductively analysed to yield themes addressing the following questions. How can this kind of work be both authentic and also academic? Working with new technologies in new forms, how can peers sustain each other to achieve more than each could alone? How can tutors guide students' efforts appropriately between form and content (4)? How can assessors come to judgments about diverse interpretations of the same assessment brief, often exhibiting diverse skills? How are assessment criteria evolving to recognise what is distinctively valuable in these emerging communication forms (5)? The project report took the form of a series of thematic videos in which students and assessors shed light on these questions in their own words, illustrated with examples of their work (https://wiki.ucl.ac.uk/x/LUq_Aw).
Key References
Lincoln University, 2010. Student as Producer. https://studentasproducer.lincoln.ac.uk/ .
Fung, D., 2017. A Connected Curriculum for Higher Education. UCL Press. https://doi.org/10.14324/111.9781911576358 .
New London Group, 1996. A pedagogy of multiliteracies: Designing social futures. Harvard Educational Review 66, 60–92. http://newarcproject.pbworks.com/f/Pedagogy+of+Multiliteracies_New+London+Group.pdf
DePalma, M.-J., Alexander, K.P., 2015. A Bag Full of Snakes: Negotiating the Challenges of Multimodal Composition. Computers and Composition 37, 182–200. https://doi.org/10.1016/j.compcom.2015.06.008
Sorapure, M., 2006. Between modes: assessing student new media compositions. Kairos 10. http://english.ttu.edu/kairos/10.2/coverweb/sorapure/between_modes.pdf
Parallel Session 5
Chair: Pete Boyd
Time: 17:30 - 18:00
Date: 26th June 2019
Location: Room 7
71 - 'Excited' yet 'Paralysing': The highs and lows of the feedback process
Emma Medland
University of Surrey, Guildford, United Kingdom
Conference Theme
Developing academic integrity and academic literacies through assessment



Abstract
Feedback is fundamental to student learning, yet is consistently identified as the most problematic aspect of the HE experience internationally. Research has identified misaligned perceptions of feedback between markers and students, and tends to focus on espoused rather than actual experiences of feedback. Evidence tends to concentrate on students' or markers' perceptions of feedback, rather than the interaction between them and how they interpret the same piece of feedback. Consequently, research focusing on actual feedback practices within the natural pedagogic environment has been called for, as a means of illuminating the 'decentralised, subject-specific decision-making processes' (Bloxham et al., 2011: 655) underpinning the construction of meaning through written feedback. In response, a naturalistic inquiry utilising between-methods triangulation to generate three sources of qualitative data was adopted. Three participant markers worked with three students (n=9; N=12). Data were collected using:
1. Think aloud, to illuminate the lived experiences of the markers producing the feedback and the students receiving it;
2. Individual semi-structured interviews with markers;
3. Stimulated recall-based joint interviews (Arksey, 1996) between the researcher, marker and a student.
Thematic analysis of the data resulted in two overarching but interrelated themes: 1. Affective Filter; and 2. Expectations. Affective Filter focused on student and marker emotional reactions to the feedback process, and the impact these emotions have on behaviour. The Expectations theme was typified by the 'struggle' that students and markers have in understanding each other's expectations, and how feedback can help and hinder this process, dependent upon whether expectations are met. In an extension to Carless and Boud's (2018) conceptualisation of feedback literacy, this research calls for greater emphasis on the affective dimension as a filter for engagement with the feedback process for both students and markers. This affective filter can prevent dialogic feedback, due to disengagement with written feedback and students' concerns to detach emotions from the learning process. The research encouraged students to engage with their feedback and future development, and supported dialogue around the feedback process. This facilitated the translation of feedback comments, the development of shared understanding, and the externalisation of the tacit knowledge underpinning the experiences of producing and receiving feedback. This adds weight to the call for a shift from transmission-based views of feedback towards dialogic feedback, in which the relational and affective dimensions of the relationship are central to the sustainability of the feedback process (e.g. Ajjawi & Boud, 2018; Dawson et al., 2018; Pitt & Norton, 2016).
Key References
Ajjawi, R., & Boud, D. (2018) Examining the nature and effects of feedback dialogue. Assessment and Evaluation in Higher Education, 43(7), 1106-1119.
Arksey, H. (1996) Collecting Data through Joint Interviews. Social Research Update, Issue 15 [online]. Available: http://sru.soc.surrey.ac.uk/SRU15.html
Bloxham, S., Boyd, P., & Orr, S. (2011) Mark my Words: the role of assessment criteria in UK higher education grading practices. Studies in Higher Education, 36(6), 655-670.
Carless, D., & Boud, D. (2018) The development of student feedback literacy: enabling uptake of feedback. Assessment & Evaluation in Higher Education, 43(8), 1315-1325. https://doi.org/10.1080/02602938.2018.1463354



Dawson, P., Henderson, M., Mahoney, P., Phillips, M., Ryan, T., Boud, D., Molloy, E. (2018) What makes for effective feedback: staff and student perspectives. Assessment and Evaluation in Higher Education: https://doi.org/10.1080/02602938.2018.1467877
Pitt, E., & Norton, L. (2016) 'Now that's the feedback I want' Students' reactions to feedback on graded work and what they do with it. Assessment and Evaluation in Higher Education, 42(4), 499-516.
Parallel Session 5
Chair: Jane Headley
Time: 17:30 - 18:00
Date: 26th June 2019
Location: Room 9
72 - Inclusive assessment, a response to the experience of Students with Dyslexia
John Morrow
University of Chester, Chester, United Kingdom
Conference Theme
Assessment: learning community, social justice, diversity and well-being
Abstract
This paper is based on research conducted within the School of Law of one higher education institution on the experience students with Dyslexia have of assessment. It will consider current academic assessment methods used within Higher Education and the implications for students with Dyslexia. This will lead to reflection on inclusive assessment and the growing discussion around it. Qualitative data from the research will be used to argue that traditional assessment practices (timed examinations and written coursework) are placing students with Dyslexia at a disadvantage. These forms of assessment still dominate the assessment programmes of many degrees. Accordingly, section 23 of the Equality Act 2010 is engaged: it creates a duty that, where disadvantage is created for students with Dyslexia (or any disability), reasonable adjustments must be provided to alleviate it. The paper will consider the current approach to reasonable adjustments commonly taken by Higher Education Institutions, in light of interview data. It will take the view that adjustments are provided in a standardised manner designed to meet the requirements of the law without meaningful consideration of the needs of the student (Williams et al., 2014). The adjustments are therefore failing to overcome the disadvantage that students with Dyslexia are experiencing (Mortimore, 2012). Similarly, it will be argued that traditional assessment is not in the best interest of the overall student collective (Potter & Williams, 2007) or reflective of best pedagogic practice (Phillips et al. 2010). Based on these concerns, consideration will be given to a growing body of academic research on inclusive assessment, arguing that higher education institutions need to go beyond providing reasonable adjustments, focusing instead on the need to formulate assessment appropriate to all students in a manner which does not distinguish or penalise students with Dyslexia (Waterfield & West, 2010). This can be done whilst maintaining academic standards and marking reliability (Irwin & Hepplestone, 2012). Practical options for achieving this will be discussed, centred on the principle that a move away from the assumption that the traditional assessment routes of examination and written coursework are necessary will benefit students both with and without Dyslexia.
Key References
Irwin, B., & Hepplestone, S. (2012). Examining increased flexibility in assessment formats. Assessment & Evaluation in Higher Education, 37(7), 773-785.


Mortimore, T. (2012). Dyslexia in Higher Education: Creating a Fully Inclusive Institution. Journal of Research in Special Educational Needs, 13(1), 38–47.
Phillips, E., Clarke, S., Crofts, S., & Laycock, A. (2010). Exceeding the boundaries of formulaic assessment: innovation and creativity in the law school. The Law Teacher, 44(3), 334-364.
Potter, G., & Williams, C. (2007) Two birds, one stone: Combining student assessment and socio‐legal research. The Law Teacher, 41(1), 1-18.
Waterfield, J., & West, B. (2010). Inclusive Assessment, Diversity and Inclusion, the Assessment Challenge. Programme Assessment Strategies.
Williams, P., Wray, J., Farrall, H., & Aspland, J. (2014) Fit for purpose: traditional assessment is failing undergraduates with learning difficulties. Might eAssessment help? International Journal of Inclusive Education, 18(6), 614-625.

Parallel Session 5
Chair: Jack Walton
Time: 17:30 - 18:00
Date: 26th June 2019
Location: Room 11
73 - Feedback, feedforward: evaluating the effectiveness of an oral peer review exercise amongst postgraduate students
Hannah Dickson, Joel Harvey, Nigel Blackwood
King's College London, London, United Kingdom
Conference Theme
Assessment for learning: feedback and the meaning and role of authentic assessment
Abstract
Assessment for learning approaches such as peer review exercises may improve student performance in summative assessments and increase their satisfaction with assessment practices. We conducted a mixed methods study to evaluate the effectiveness of an oral peer review exercise among postgraduate students. We asked former students to give their oral presentations to a new student cohort so that the new cohort could see the 'typical' structure and content of this assessment. In order to encourage dialogue between peers, we asked the new students to grade the former students' oral presentations and to provide feedback using the standardised marking criteria. This was followed by the former students discussing the feedback and grades that they had received from the new student cohort, and the feedback and grade that they had received from examiners in the previous year. We examined: (1) final assessment grades among students who did and did not take part in the peer review exercise; (2) student perceptions of the impact of the peer review exercise; and (3) student understanding of, and satisfaction with, this new assessment practice. Results indicated that students who took part in the exercise had a significantly higher mean grade in a subsequent summative oral presentation assessment than students who did not take part. Students gained a better understanding of assessment and marking criteria and expressed increased confidence and decreased anxiety about completing the summative assessment. This study is one of the few to use a 'no peer feedback' comparison group, which helps to address a major limitation of existing research in this field. In this study, we addressed two issues associated with the successful implementation of peer


First, students observing the peer presenters were able to give their feedback anonymously by completing a structured feedback sheet. Ensuring that peer feedback is anonymous avoids the problems of social desirability bias and of students feeling apprehensive and unsure about reviewing fellow students’ work. Second, the time-consuming nature of peer review exercises for both students and academic staff is well documented. The peer review exercise in the present study was run as a group session, was quick to run, required limited preparation from peer presenters only and can easily be adapted for large student cohorts. Overall, our findings show that adopting an assessment for learning approach improves academic attainment and the learning experience of postgraduate students.

Key References
Davies, P. 2006. "Peer assessment: judging the quality of students’ work by comments rather than marks." Innovations in Education and Teaching International 43 (1): 69-82.
Mitchell, V.-W., and C. Bakewell. 1995. "Learning without Doing: Enhancing Oral Presentation Skills through Peer Review." Management Learning 26 (3): 353-66.
Nicol, D., A. Thomson, and C. Breslin. 2014. "Rethinking feedback practices in higher education: a peer review perspective." Assessment & Evaluation in Higher Education 39 (1): 102-22.
Rust, C., M. Price, and B. O'Donovan. 2003. "Improving Students' Learning by Developing their Understanding of Assessment Criteria and Processes." Assessment & Evaluation in Higher Education 28 (2): 147-64.
Snowball, J. D., and M. Mostert. 2013. "Dancing with the devil: formative peer assessment and academic performance." Higher Education Research & Development 32 (4): 646-59.
Strijbos, J.-W., and D. Sluijsmans. 2010. "Unravelling peer assessment: Methodological, functional, and conceptual developments." Learning and Instruction 20 (4): 265-9.

Parallel Session 6 Chair: Laura Dison Time: 9:30 - 10:10 Date: 27th June 2019 Location: Piccadilly Suite

74 - ‘We learned to control what students read rather than what they said!’ Practitioners’ shifting views of their role in the feedback process: an action research project
Kay Sambell1, Linda Graham2
1 Edinburgh Napier University, Edinburgh, United Kingdom. 2Sunderland University, Sunderland, United Kingdom

Conference Theme
Assessment for learning: feedback and the meaning and role of authentic assessment

Abstract
Practitioner-researchers (Arnold and Norton, 2018) on a first-year undergraduate programme developed pre-emptive formative assessment (Carless, 2007) opportunities to help students gain sightings of their own progress in relation to an important threshold concept (Land et al., 2016). This paper focuses on the outcomes of their enquiry into an exemplars-based workshop which they embedded in the content of the taught curriculum. The workshop was designed to act as a means of creating enabling environments for active learner participation in the feedback process (Carless and Boud, 2018) in the first few weeks of undergraduate study. In the first iteration of the workshop, the teaching team were confident they had established the conditions for coaching their students (n=91) to make effective qualitative judgments about their own work in this domain.


Informed by analysis of the growing literature on exemplars, they devised complex, tightly-structured workshop activities which revolved around supporting learners to analyse student samples of formative work. Processes involved engaging students with a bank of teacher feedback comments and scaffolding student peer review of the samples by using the teacher-developed criteria and applying the feedback comments. Subsequent analysis of the individual worksheets students completed during the workshop activities revealed a surprising gap between what the teachers assumed was happening and the reality of the students’ approaches, which the paper will illuminate. Reflecting on data gathered as part of the action-research process, the teachers radically reviewed and reconsidered their role in the feedback process. Better appreciating the challenges faced by students in this specific context raised questions and challenged and changed practitioners’ underpinning assumptions for practice development. They reconfigured the workshop to exercise much less control over the criteria and removed the emphasis on teacher feedback-commenting practices. Instead, they exercised greater control over the range of exemplars and how they were used as the basis for scaffolding students’ judgments and feedback generation. In this iteration, workshop activities became based on student-generated criteria and student-generated feedback comment-banks. This markedly improved the students’ capacity to make evaluative judgments and, hence, to recognise the importance of threshold concepts to their learning. Key findings will be related to salient concepts in the literature. Particular theoretical attention will be drawn to developing the notion of inner feedback, student agency and the value of peer review processes (Nicol, 2018) for student learning. Implications of the emerging feedback-related counter-narrative will be explored and opened up to debate during the session.

Key References
Arnold, L., & Norton, L. (2018). HEA action research: sector case studies. HEA.
Carless, D., & Boud, D. (2018). The Development of Student Feedback Literacy: Enabling Uptake of Feedback. Assessment & Evaluation in Higher Education, 43(8), 1315-1325.
Carless, D. (2007). Conceptualizing Pre-emptive Formative Assessment. Assessment in Education: Principles, Policy & Practice, 14(2), 171-184.
Land, R., Meyer, J., & Flanagan, M. (2016). Threshold concepts in practice (Educational futures: rethinking theory and practice; volume 68). Rotterdam, The Netherlands: Sense.
Nicol, D. (2018). Unlocking generative feedback via peer reviewing. In Grion, V., & Serbati, A. (Eds.), Assessment of Learning or Assessment for Learning? Towards a culture of sustainable assessment in HE (pp. 73-85). Italy: Pensa.

Parallel Session 6 Chair: Natasha Jankowski Time: 9:30 - 10:10 Date: 27th June 2019 Location: Room 2

75 - Assessment as a process: looking at an optional assessment retake strategy as a learner-centred approach to feedback, learning & assessment
Suzanne McCarthy1, Eileen M O'Leary2
1 University College Cork, Cork, Ireland. 2Cork Institute of Technology, Cork, Ireland

Conference Theme
Assessment for learning: feedback and the meaning and role of authentic assessment


Abstract
Undertaking assessment tends to focus the mind and gives students dedicated time to become aware of their own knowledge and their knowledge gaps. A learner-centred approach to assessment can promote a sense of ownership in learning [1]. Enabling students to regularly check their progress, to receive feedback and to have the opportunity to improve is a powerful tool to improve student learning [2]. We hypothesised that early-stage low-stakes assessment, where students have an opportunity to repeat the test, would encourage students to engage with curriculum content. We proposed that the opportunity to repeat was important as the students’ attention to detail and engagement with the concepts and ideas would be heightened in an exam-based situation. Empowering the students by letting them self-assess their knowledge and use feedback through the re-assessment process would enhance their engagement and buy-in. Effectively, this would lead to the evolution of a self-created ‘zone of proximal development’ [3]. The assessment for learning is inherent in the combination of ‘knowledge-gap identification’ followed by focused, directly useful study to better conquer the content. This fulfils Knowles’ principles relating to adult learners, that experience (including mistakes) provides the basis for the learning activities. Knowles also alludes to adult learners as being ‘relevancy oriented’ [4]. The relevancy comes with application of the core knowledge, mastered through repeat testing opportunities, in authentic role-based assessment. Establishing assessment as a process allows us to structure the assessment in an incremental, holistic fashion, establishing strong foundations on which to build towards the often more challenging, multifaceted authentic assessment in a safe, supportive and rewarding manner. The structured approach is essential to encourage confidence so that the deeper learning and connection-making can occur in the high-stakes authentic assessment. This mastery-of-learning approach, enabled by the retesting strategy, creates opportunities for students to grow and perform at different rates, and students value and benefit from the option [5]. Herein, we will share our assessment process and some preliminary results. We will discuss why we offered a repeat assessment early in term, how many students availed of the opportunity and how many improved in the second assessment. We will discuss and demonstrate how we built the holistic assessment as a process and why we believe the repeat assessment opportunity enhances the foundations on which we structured the authentic culminating performance-based assessment.

Key References
Rich, J. (2011). An experimental study of differences in study habits and long-term retention rates between take-home and in-class examinations. Int. J. Univ. Teach. Faculty Dev., 2, 1-10.
Lowry, S. (1993). Assessment of students. BMJ, 306, 51-54.
Pea, R. D. (2004). The Social and Technological Dimensions of Scaffolding and Related Theoretical Concepts for Learning, Education, and Human Activity. The Journal of the Learning Sciences, 13, 423-451.
Knowles, M. S., Holton III, E. F., & Swanson, R. A. (2005). The Adult Learner: The Definitive Classic in Adult Education and Human Resource Development (6th ed.). Amsterdam; Boston: Elsevier.
Paff, B. A. (2012). The Effect of Test Retakes on Long-Term Retention. A Master’s paper, University of Wisconsin – River Falls.



Parallel Session 6 Chair: Rita Headington Time: 9:30 - 10:10 Date: 27th June 2019 Location: Room 3

76 - Gaining Faculty and Staff Buy-In to New Assessment Feedback Quality Expectations
Tara Lehan, Ashley Babcock, Theresa Meeks
Northcentral University, San Diego, CA, USA

Conference Theme
Addressing challenges of assessment in mass higher education

Abstract
Lovitts (2005) argued that faculty members often assess student learning based on their own implicit standards, making holistic judgments after reviewing the work instead of evaluating it using a rubric (or even a mental checklist). A lack of explicit standards relating to meaningful feedback in the context of assessment of and for learning might negatively impact student success. Student dissatisfaction is often related to insufficient guidance in the feedback they receive (Al Wahbi, 2014). As a result, increased discourse, standardization of psychometrically sound measures assessing feedback quality, and ongoing training might be warranted (Lehan, Hussey, & Mika, 2016). Nevertheless, convincing faculty and staff members to change the way that they teach can be difficult due to a reported lack of time, incentives, and training (Brownell & Tanner, 2012). However, even when these factors are in place, many faculty and staff members are unwilling and/or unable to change the way that they provide feedback (Brownell & Tanner, 2012). After surveying provosts to explore the state of assessment of student learning, Kuh and Ikenberry (2009) reported that “[g]aining faculty involvement and support remains a major challenge…”, adding that “about four-fifths of provosts at doctoral research universities reported greater faculty engagement as their number one challenge” (p. 24). In general, individuals tend to be reluctant to implement changes that require them to challenge their currently held assumptions and develop new skills (Swanger, 2016). If school and department leaders want efforts to improve the quality of assessment feedback provided by faculty and staff members to be successful, they should take into account the aforementioned tendencies, which must be overcome to increase the likelihood of buy-in. At one completely online university, the processes and outcomes associated with efforts to increase the consistency and quality of assessment feedback (first among dissertation reviewers in the Graduate School, then among academic coaches in the university learning center) were examined with the goal of promoting student learning and success more effectively. Furthermore, quality assurance protocols were developed to examine the extent to which faculty and staff met expectations and to inform future continuous improvement initiatives. In this micro-presentation, we will focus on the challenges and opportunities, lessons learned, and various strategies that we used to overcome reluctance, which allowed us to gain buy-in among most faculty and staff members to change the way that they deliver assessment feedback, even though it required them to challenge their assumptions and learn new skills.

Key References
Al Wahbi, A. (2014). The need for faculty training programs in effective feedback provision. Advances in Medical Education and Practice, 5, 263-268.
Brownell, S. E., & Tanner, K. D. (2012). Barriers to faculty pedagogical change: Lack of training, time, incentives, and…tensions with professional identity? CBE—Life Sciences Education, 11(4), 339-346.
Caligiuri, P., & Thomas, D. C. (2013). From the editors: How to write a high-quality review.
Journal of International Business Studies, 44, 547-553.



Kuh, G., & Ikenberry, S. (2009). More than you think, less than we need: Learning outcomes assessment in American higher education. National Institute for Learning Outcomes Assessment. Retrieved from http://learningoutcomesassessment.org/NILOAsurveyresults09.htm
Lovitts, B. E. (2005). How to grade a dissertation. Academe, 91(6), 18-23.
Swanger, D. (2016). Innovation in higher education: Can colleges really change? Retrieved from http://www.fmcc.edu/about/files/2016/06/Innovation-in-Higher-Education.pdf

Parallel Session 6 Chair: Geraldine O'Neil Time: 9:30 - 10:10 Date: 27th June 2019 Location: Room 4

77 - Peer feedback to support student learning in large classes with the use of technology
Anna Steen-Utheim, Haley Threlkeld, Olaug Gardener
BI Norwegian Business School, Oslo, Norway

Conference Theme
Addressing challenges of assessment in mass higher education

Abstract
In the first-year base courses at BI Norwegian Business School, nearly 4000 students can complete a course in the first semester. Our lecturers are known for being dynamic and engaging, and many students choose this school based on that reputation. At the same time, Norway’s yearly survey of student satisfaction shows that BI students are dissatisfied with the amount of individual feedback they receive during the run of a course. Considering the positive effect feedback can have on student learning (Evans 2013; Black and Wiliam 1998; Hattie and Timperley 2007), we wanted to explore how we could support students’ learning not only with individual feedback but also through a peer assessment activity. We therefore designed a pilot project focusing on this issue. However, implementing these forms of formative assessment activities for large classes is a challenge our faculty face (Broadbent, Panadero, & Boud, 2018). For this purpose, we used an app called PeerGrade. PeerGrade facilitates peer feedback by taking uploaded assignments, anonymously distributing them amongst peers, and enabling each student to provide feedback on one another’s assignments based on a rubric of feedback criteria. During the autumn of 2018, a course with 4200 students registered across four campuses (online and on campus), taught by seven faculty members, was chosen as an empirical case. The course consists of one obligatory course requirement essay and one final exam. We designed the course requirement to include a mandatory peer feedback activity. We included guidance on effective feedback practices (Carless, Salter, Yang and Lam 2011) with support of the TAG feedback model inspired by Hattie and Timperley (2007). In line with suggestions from previous research (Broadbent et al., 2018), we used rubrics to communicate the assessment criteria. After the students completed the assignment, we conducted group interviews with a sample of students. We were interested in their reflections concerning their experiences and perceptions of receiving and giving feedback, including how and whether they had applied the feedback to improve their work. Using thematic analysis (Braun and Clarke 2014), our preliminary findings show that students value the activity: they say it makes them reflect upon their work and that it enables opportunities to engage with notions of quality in the discipline. Going forward, we are looking to implement peer feedback activities within the institution to explore how peer feedback can support and enhance student learning in more courses at BI.



Key References
Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy and Practice, 5(1), 7-74.
Braun, V., Clarke, V., & Terry, G. (2014). Thematic analysis. Qual Res Clin Health Psychol, 24, 95-114.
Broadbent, J., Panadero, E., & Boud, D. (2018). Implementing summative assessment with a formative flavour: a case study in a large class. Assessment and Evaluation in Higher Education, 43(2), 307-322.
Carless, D., Salter, D., Yang, M., & Lam, J. (2011). Developing sustainable feedback practices. Studies in Higher Education, 36(4), 395-407.
Evans, C. (2013). Making sense of assessment feedback in higher education. Review of Educational Research, 83(1), 70-120.
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81-112.

Parallel Session 6 Chair: Fiona Meddings Time: 9:30 - 10:10 Date: 27th June 2019 Location: Room 5

78 - Assessment of Developmental Students' Work in the Era of Learning Management Systems: Professors' Experiences with Benefits, Limitations, and Institutional Support
Amy Lewis
Community College of Philadelphia, Philadelphia, USA

Conference Theme
Integrating digital tools and technologies for assessment

Abstract
Universities and colleges are embracing and utilizing technology to a rapidly increasing extent, responding to its cost-effectiveness and efficiency as well as the regularity with which 21st century students rely upon it in their everyday lives. Chief amongst the technologies used in higher education are Learning Management Systems (LMS), such as Blackboard, Sakai, and CANVAS. Urban community colleges have also embraced LMS, but with student bodies that often lack regular access to or extensive experience with using technology for socio-economic or generational reasons, the outcomes from using LMS can be very different to those experienced at four-year institutions that generally serve a more affluent, traditionally-aged demographic. In particular, students in developmental courses (those designed for individuals who could not test into college-level courses) can be especially challenged when it comes to using LMS in their studies, as it is an additional component to which they must acculturate in higher education whilst attempting to rectify their skills deficiencies. For faculty teaching developmental students, how best to serve this demographic raises many questions about the role of LMS in the classroom, and effective assessment practices are chief amongst these concerns. This qualitative, interpretivist, grounded theory study uses interviews with urban community college professors who use or reject the college’s LMS (i.e. CANVAS) to varying extents in their assessment practices, non-participant observations of the course components that professors who use CANVAS post online, and course artifacts to examine and reflect upon professors’ experiences with employing or eschewing LMS in their assessment of student coursework. In the end, the constructivist and pragmatist lenses shed light on professors’ responses about transparency, interpersonal connection, growth facilitation, and institutional support in the digital age.



Key References
Bennett, S., & Oliver, M. (2011). Talking back to theory: The missed opportunities in learning technology research. Research in Learning Technology, 19(3), 179-189.
Ertmer, P. A., & Ottenbreit-Leftwich, A. T. (2010). Teacher technology change: How knowledge, confidence, beliefs, and culture intersect. Journal of Research on Technology in Education, 42(3), 255-284.
Girvan, C., & Savage, T. (2010). Identifying an appropriate pedagogy for virtual worlds: A communal constructivism case study. Computers and Education, 55, 342-349.
Murphy, E., & Rodriguez-Manzanares, M. A. (2008). Using activity theory and its principle of contradictions to guide research in educational technology. Australasian Journal of Educational Technology, 24(4), 442-457.
Rogers, E. M. (2003). Diffusion of innovations (5th ed.). New York, NY: Simon & Schuster, Inc.
Russell, D. L., & Schneiderheinze, A. (2005). Understanding innovation in education using activity theory. Educational Technology & Society, 8(1), 38-53.

Parallel Session 6 Chair: Karen Gravett Time: 9:30 - 10:10 Date: 27th June 2019 Location: Room 7

79 - Assessment shock: Chinese international students' first year in Australian universities
Jiming Zhou1, Christopher Deneen2, Phillip Dawson3, Joanna Tai3
1 Fudan University, Shanghai, China. 2The Centre for the Study of Higher Education at The University of Melbourne, Melbourne, Australia. 3Deakin University, Melbourne, Australia

Conference Theme
Addressing challenges of assessment in mass higher education

Abstract
The first year of undergraduate and postgraduate programs is a transitional period; students must adapt to new learning cultures (Scott et al., 2014). Transitions can be especially challenging for international students (Baik, Naylor, and Arkoudis, 2015). They may enter with specific, culturally-informed understandings of assessment. For students coming from Confucian-heritage cultures, such as Chinese international students (CIS), perceptions and practices of assessment may manifest in ways quite different from those encountered in Western universities (Zhou & Deneen, 2016). CIS comprise a large percentage of international student populations in the UK, USA, Australia and New Zealand. Most research and approaches centering on assisting CIS focus on language gaps. Few studies and programs directly confront the ‘shock’ experienced transitioning from one assessment culture to another. Considering the power of assessment and its importance to students in the curriculum (Boud, 2010), it is imperative that universities better understand CIS’ negotiation of ‘assessment shock’ during their first year. This study addresses three research questions: 1) What patterns emerge around CIS’ experience with assessment in their first year in Australian universities? 2) How do CIS interpret and negotiate expectations associated with assessment tasks? 3) In what way do CIS’ prior understandings of assessment interact with their experiences and negotiations? Students were recruited from the four broad disciplinary fields most frequently studied by CIS in Australian universities: 1) management and commerce, 2) engineering and related technologies, 3) information technology and 4) society and culture. Six focus groups were conducted, each with five first-year CIS (6x5). Four tutors were interviewed, one from each discipline unit in which student participants were enrolled.



Data were analyzed using an inductive, sequential coding procedure (Miles and Huberman, 1999). Findings suggest prior assessment experience powerfully mediates participants’ perception of assessment tasks and culture. Students described learning as accumulating discrete ‘bricks’ of knowledge; assessment was perceived as a system of checking these bricks. Tutors reported assessment interactions centered around clarification of instructions and ‘revealing’ correct paths for achievement. Axial coding of student-tutor data demonstrated participants’ resistance to approaching assessment tasks as dialogic, formative and self-regulated. Participants’ prior cultural conceptions seemingly overrode intended acculturation processes. Findings are discussed in terms of how understanding assessment shock carries the potential to inform CIS development in a transitional period. Also addressed is the potential to inform broader mandates of serving international student needs, enhancing the first-year experience for all students and better focusing programs for academic literacy.

Key References
Baik, C., Naylor, R., & Arkoudis, S. (2015). The First Year Experience in Australian Universities: Findings from Two Decades, 1994-2014. Melbourne Centre for the Study of Higher Education, The University of Melbourne.
Boud, D., & Associates. (2010). Assessment 2020: Seven propositions for assessment reform in higher education. Sydney, Australia: Australian Learning and Teaching Council.
Miles, M. B., & Huberman, A. M. (1999). Qualitative Research: An Expanded Sourcebook (2nd ed.). Thousand Oaks, CA: Sage Publishing.
Scott, D., Hughes, G., Burke, P., Evans, C., Watson, D., & Walter, C. (2014). Learning transitions in higher education. England: Palgrave MacMillan.
Zhou, J., & Deneen, C. C. (2016). Chinese award-winning tutors’ perceptions and practices of classroom-based assessment. Assessment & Evaluation in Higher Education, 41(8), 1144-1158.

Parallel Session 6 Chair: Pete Boyd Time: 9:30 - 10:10 Date: 27th June 2019 Location: Room 9

80 - Developing a coherent assessment strategy from established research findings: from model-building to practical implementation
Claire Moscrop1, Peter Hartley2
1 BPP University, London, United Kingdom. 2Edge Hill University, Ormskirk, United Kingdom

Conference Theme
Addressing challenges of assessment in mass higher education

Abstract
If we were to design a coherent programme assessment strategy based on all the current research evidence available on assessment feedback, what would it look like? What would be the important underpinning principles? And what would practical implementation involve? This presentation aims to answer these questions, offering both principles and practical suggestions for course/programme teams. We start from a summary thematic analysis of the literature on assessment feedback, undertaken in order to identify the key factors affecting feedback engagement. Synthesis of the issues identified in this thematic analysis yielded eight overarching factors affecting feedback engagement.
These were:
1. Students not being able to make sense of feedback or to apply it.
2. Problems with feedforward.
3. Problems with assessment criteria and feedback.
4. Students and tutors not being explicitly trained to develop and use criteria and apply feedback.
5. Lack of dialogue around feedback.
6. Impacts of modularisation and course design on feedback engagement.
7. Psychological factors affecting feedback engagement.
8. Lack of student self-assessment and self-regulation and its effect on feedback use and engagement.
These issues are pervasive throughout the literature, but research studies have tended to focus on only one or a few of them. However, as Handley and Williams (2009) suggest, the problems with assessment feedback are closely related – they overlap and affect each other. This means, as suggested by Price et al. (2011), that separating them into individual ‘problems’ and ‘solutions’ is not likely to be effective. We offer an integrated approach – conceptualising the problems or ‘factors’ through their inter-relationships and generating a concept map. This offers a ‘roadmap’ which can point towards the ‘holy grail’ of self-regulated student assessment and feedback engagement. We show how the map was developed and demonstrate how it identifies both underlying principles and practical steps for programmes to consider. We suggest that this approach can deliver outcomes identified as most critical by major researchers (e.g. Carless et al., 2011; Evans, 2013; Orsmond et al., 2013; Price et al., 2011). They do not see the lack of self-regulation as the ‘student problem’, but argue that the main issue is the lack of development and empowerment of students to be able to self-regulate through pedagogic processes (Winstone, 2018). Our concept mapping reflects this principle and generates suggestions for practical changes at institutional and programme level, with emphasis on how assessment practices need to develop over the life cycle of a programme.

Key References
Carless, D; Salter, D; Yang, M and Lam, J (2011) Developing sustainable feedback practices. Studies in Higher Education, 36:4, 395-407.
Evans, C (2013) Making Sense of Assessment Feedback in Higher Education. Review of Educational Research, 83:1, 70-120.
Handley, K and Williams, L (2009) From copying to learning: using exemplars to engage students with assessment criteria and feedback. Assessment & Evaluation in Higher Education, 36:1, 95-108.
Orsmond, P; Maw, S; Park, K; Gomez, S and Crook, A (2013) Moving feedback forward: theory to practice. Assessment & Evaluation in Higher Education, 38:2, 240-252.
Price, M; Handley, K and Millar, J (2011) Feedback: focusing attention on engagement. Studies in Higher Education, 36:8, 879-896.
Winstone, N (2018) From transmission to transformation: Maximising the impact of assessment feedback through staff-student partnerships. Focus On: Feedback from Assessment, QAA Scotland. 22nd March 2018. Glasgow.



Parallel Session 6 Chair: Sally Jordan Time: 9:30 - 10:00 Date: 27th June 2019 Location: Room 10

81 - Assessment and feedback strategies: An evaluation of academic and student perspectives of various assessment and feedback tools piloted as part of the LEAF project in TU Dublin
Louise Bellew1, Greg Byrne1, Geraldine Gorham1, Leanne Harris2, Natalie Hopkins2, Anne Hurley2, Ziene Mottiar1
1 TU Dublin, Co. Dublin, Ireland. 2TU Dublin, Dublin, Ireland

Conference Theme
Integrating digital tools and technologies for assessment

Abstract
As Sadler (2010, p. 536) asserts, ‘feedback is central to the development of effective learning, partly because assessment procedures play a key role in shaping learning’, and while much time and effort is vested in assessment and feedback on the part of both students and academics, challenges remain. They include issues such as the timeliness, frequency (summative or formative) and quality of the feedback; students may find the academic terminology of feedback difficult to understand (Carless 2015), fail to act on feedback received (Pitt & Norton 2017) or fail to feed forward for future learning and close the feedback loop (Boud & Molloy 2013). Furthermore, in the context of the massification of higher education, increased student numbers and an increasingly diverse student population (Boud & Molloy 2013; Evans 2013; Carless 2017), the time and effort required of staff in the provision of feedback may create barriers to the feedback process. It is in this context that the LEAF (Learning from and Engaging with Assessment and Feedback) project was conducted in Technological University Dublin. This project used literature, interviews with key informants, staff surveys, analysis of programme quality documentation and student surveys to identify the key issues in terms of assessment and feedback in the institution, and then piloted more than 10 tools in an attempt to address some of these challenges. The challenges included dealing with larger student groups, ensuring consistency in assessment feedback, shifting from a monologue to a dialogue approach, ensuring quality feedback, encouraging effective ‘use’ of feedback, providing students with early indication of their performance and student management of assessment requirements. The tools used included peer assessment, early feedback and low-weighted early assessments, audio and video feedback, assessment calendars, rubrics, TESTA and successive assessment weighting. This paper presents the results of the mixed methods research which evaluated both student and academic perspectives of their experiences. The findings are important for academics considering new approaches to assessment and feedback and also in terms of identifying how assessment and feedback can be used to address key challenges in third-level learning in the current times.

Key References
Boud, D. & Molloy, E. (2013) Rethinking models of feedback for learning: the challenge of design. Assessment and Evaluation in Higher Education, 38(6), pp. 698-712.
Carless, D. (2015) Student feedback: can do better - here's how. Times Education Supplement, 14 October 2015.
Evans, C. (2013) Making Sense of Assessment Feedback in Higher Education. Review of Educational Research, 83(1), pp. 70-120.
Pitt, E. & Norton, L. (2017) ‘Now that’s the feedback I want!’ Students’ reactions to feedback on graded work and what they do with it. Assessment & Evaluation in Higher Education, 42(4), pp. 499-516.

Parallel Session 6 Chair: Peter Holgate Time: 9:30 - 10:10 Date: 27th June 2019 Location: Room 11

82 - Literature reviews as formative learning for PhD students in Education
Hilary Constable
University of Cumbria, Carlisle, United Kingdom

Conference Theme
Developing academic integrity and academic literacies through assessment

Abstract
Reviewing relevant literature is part of almost all aspects of higher education and one which makes high demands on tutors for pertinent feedback in both formative and summative assessment. In the case of PhDs, what the literature review entails and how it should be guided and assessed is fraught with variation, often unacknowledged, both within and between areas of scholarship (Bengtsen and Barnett, 2017). There are numerous published guides, including tailored course materials, for students on this aspect of higher education; however, approaches to supervision mirror wider variations in what constitutes a PhD (Kandiko and Kinchin, 2013). Literature reviews therefore pose challenges for supervisors and students in all aspects of assessment, and especially in the provision of formative feedback as part of the supervisory process of guiding learning (Wisker, Robinson and Bengtsen, 2017). Surprisingly, whilst texts give advice on what (graduate) students should do, they are less explicit in relation to what learning is envisaged as taking place as a result, or in the levels of learning achievable. In particular, the work of a literature review in framing enquiry is not always problematised. Hence the criteria and methods of assessing student work can become mired in what has been done rather than what has been learned. This paper presents initial findings from a meta-review study which examines texts supporting PhD students in education in writing their appraisals of literature. A preliminary framework of analysis has been elaborated with which to scrutinise these texts in relation to the teaching and assessment of literature reviews in masters and PhD supervision (Arksey and O'Malley, 2005). Preliminary findings suggest that although there is superficial agreement about the common goals of literature reviews, deeper exploration of the texts reveals that they vary in their aims, their procedural advice and their recommendations. Furthermore, the learning that is expected, can happen and can be assisted to happen is not prominent in these texts. The conclusions of this study are that there is a need for students and supervisors to consider the nature of the work of literature reviews as intrinsically problematic and to integrate such discussion explicitly within a process of formative assessment as part of the supervision process. This presentation will argue that the formative assessment of literature reviews is an important component of the types of transformative learning which are expected in PhD work.



Key References
Arksey, H. and O'Malley, L., 2005. Scoping studies: towards a methodological framework. International Journal of Social Research Methodology, 8(1), pp. 19-32.
Bengtsen, S. and Barnett, R., 2017. Confronting the dark side of higher education. Journal of Philosophy of Education, 51(1), pp. 114-131.
Carr, W. and Kemmis, S., 2003. Becoming critical: education, knowledge and action research. Routledge.
Kandiko, C.B. and Kinchin, I.M., 2013. Developing discourses of knowledge and understanding: longitudinal studies of PhD supervision. London Review of Education, 11(1), pp. 46-58.
Wisker, G., Robinson, G. and Bengtsen, S.S., 2017. Penumbra: Doctoral support as drama: From the ‘lightside’ to the ‘darkside’. From front of house to trapdoors and recesses. Innovations in Education and Teaching International, 54(6), pp. 527-538.

Parallel Session 7 Chair: Jill Barber Time: 10:10 - 10:40 Date: 27th June 2019 Location: Piccadilly Suite

83 - Marginal gains to enhance feedback processes
Naomi Winstone1, David Carless2
1 University of Surrey, Guildford, United Kingdom. 2University of Hong Kong, Hong Kong, Hong Kong

Conference Theme
Addressing challenges of assessment in mass higher education

Abstract
Within a modularised, marketised, and massified higher education system, the design of effective assessment and feedback processes poses a significant challenge (Rand, 2017). Students are vociferous in expressing their dissatisfaction with feedback comments they perceive to be insufficiently personalised and returned too late to be of use. In parallel, staff express frustration when the effort they expend on the provision of comments does not seem to be reciprocated by student engagement and uptake of their advice. Effective transformation of feedback processes requires us to look beyond teacher-driven, transmission-focused models of feedback, to student-focused approaches where student engagement and action are of primary importance (Carless, 2015). However, many staff perceive that they do not have the time or expertise to enact student-focused feedback practices (Winstone & Carless, 2019). In this paper, we argue that facilitating a shift towards student-focused feedback practices does not necessarily require substantial change; rather, positive outcomes are likely to come from the combined overall impact of multiple small changes (Carless & Zhou, 2016). We draw upon the aggregation of marginal gains to make this case, which is most famously illustrated by the approach of Sir David Brailsford, the performance director for British Cycling. In the 2012 London Olympics, the exceptional performance of Team GB was ascribed not to dramatic changes to their preparation, but to the combination of multiple marginal improvements in training and athlete care. In this paper we present a case for marginal gains in feedback processes, based upon two sources of data. We conducted thematic analysis of semi-structured interviews with 28 UK academics, and use these data to illustrate how an approach based on marginal gains overcomes perceived barriers to the development of student-focused feedback practices. We also collated eight cases of feedback designs from educators in the UK, Hong Kong, Australia, and Taiwan. Through inductive analysis of these cases we have identified significant examples of marginal gains in feedback processes, and enablers of successful innovative practice.


Synthesising these two sources of data, we propose a framework for marginal gains in feedback processes that offers evidence-based, realistic practices across eight dimensions: developing feedback literacy; fostering engagement with feedback; technology-enabled feedback processes; assessment and feedback design; facilitating dialogue; interweaving internal and external feedback; peer feedback; and the relational dimension of feedback. We argue that significant shifts towards a student-focused feedback culture can be realised through the aggregation of marginal and manageable changes to practice.

Key References
Carless, D. (2015). Excellence in university assessment: Learning from award-winning practice. Abingdon, UK: Routledge.
Carless, D., & Zhou, J. (2016). Starting small in assessment change: short in-class written responses. Assessment and Evaluation in Higher Education, 41(7), 1114-1127.
Rand, J. (2017). Misunderstandings and mismatches: The collective disillusionment of written summative assessment feedback. Research in Education, 97(1), 33-48.
Winstone, N., & Carless, D. (2019, in press). Designing for student uptake of feedback in higher education. Routledge.

Parallel Session 7 Chair: Charlie Smith Time: 10:10 - 10:40 Date: 27th June 2019 Location: Room 2

84 - Using dialogic feedback and feedback action plans to develop professional literacies in undergraduate speech and language therapy students
Alexandra Mallinson, Lynsey Parrott, Jonathan Harvey
Plymouth Marjon University, Plymouth, United Kingdom

Conference Theme
Assessment for learning: feedback and the meaning and role of authentic assessment

Abstract
Following the body of research into the development of academic and feedback literacies in university students, we have expanded our definition to consider the concept of professional literacies on an undergraduate pre-registration speech and language therapy programme in the UK. Professional literacy necessarily encompasses overlapping aspects of assessment and academic literacies as described elsewhere in the literature (Lea, 2004; Sutton, 2012) but seeks to address these skills as foundational for professional practice beyond graduation, with roots in the concept of sustainable assessment feedback (Boud & Molloy, 2013; Carless, Salter, Yang, & Lam, 2011). Supporting students to develop their feedback-receiving and feedback-seeking behaviours will also inform their developing clinical (and academic) skills of feedback-giving. Our hypothesis is that in order to develop effective skills of facilitating behaviour change with others through giving and receiving feedback, individuals need to be consciously and explicitly engaged with their own experiences of feedback (Sutton, 2009), and that the undergraduate environment is a prime opportunity to foster that engagement. Additionally, given the constraints of programme design, and most especially in the first year, students frequently require specific and explicit links to be made between what can (erroneously) be seen as purely academic elements of the course and the potential for transference of skills and knowledge into professional and clinical environments.


Accepting this “challenge of critical pedagogy” (McArthur, 2013: 88), a pilot study is to be undertaken with a group of first-year students in one module from January 2019. The design of the module aims to explicitly address the students’ awareness and development of critical inquiry skills alongside novel content teaching and assessment. Through the use of feedback action plans (Ivers et al., 2014), students are to be offered the opportunity to reflect on previous feedback on assessed work and to develop their engagement with and use of feedback in a series of case-based learning activities. Three dialogic feedback opportunity points are built into the module, in which students will bring written feedback on a series of seminar-linked formative pieces of work designed to support not only their summative assessment for this module, but also their developing professional and clinical skills.

Key References
Boud, D., & Molloy, E. (2013). Rethinking models of feedback for learning: The challenge of design. Assessment and Evaluation in Higher Education, 38(6), 698-712. https://doi.org/10.1080/02602938.2012.691462
Carless, D., Salter, D., Yang, M., & Lam, J. (2011). Developing sustainable feedback practices. Studies in Higher Education, 36(4), 395-407. https://doi.org/10.1080/03075071003642449
Ivers, N. M., Sales, A., Colquhoun, H., Michie, S., Foy, R., Francis, J. J., & Grimshaw, J. M. (2014). No more ‘business as usual’ with audit and feedback interventions: towards an agenda for a reinvigorated intervention. Implementation Science, 9(1), 14. https://doi.org/10.1186/1748-5908-9-14
Lea, M. R. (2004). Academic literacies: a pedagogy for course design. Studies in Higher Education, 29(6). https://doi.org/10.1080/0307507042000287230
McArthur, J. (2013). Rethinking Knowledge within Higher Education.
Sutton, P. (2009). Towards dialogic feedback. Critical and Reflective Practice in Education, 1(1), 1-10.
Sutton, P. (2012). Conceptualizing feedback literacy: knowing, being, and acting. Innovations in Education and Teaching International, 49(1), 31-40. https://doi.org/10.1080/14703297.2012.647781

Parallel Session 7 Chair: Ana Remesal Time: 10:10 - 10:40 Date: 27th June 2019 Location: Room 3

85 - Dialogic feedback and the development of professional competence among further education pre-service teachers
Justin Rami, Francesca Lorenzi
Dublin City University, Dublin, Ireland

Conference Theme
Assessment for learning: feedback and the meaning and role of authentic assessment

Abstract
Improving the student learning experience is closely connected with the promotion and implementation of an assessment strategy whose effectiveness relies on the quality of its formative aspect. Assessment can promote or hinder learning and is therefore a powerful force to be reckoned with in education. The literature on assessment makes it quite clear that assessment shapes and drives learning in powerful, though not always helpful, ways (Ramsden, 1997).



A number of authors (Steen-Utheim et al. 2017; Merry et al. 2013; Carless 2013, 2016; Hyatt 2005; Juwah et al. 2004; Bryan & Clegg 2006; Swinthenby, Brown, Glover, Mills, Stevens & Hughes 2005; Nicol 2010; Torrance & Pryor 2001) have advocated the encouragement of dialogue around learning and assessment as a means to enhance the formative aspect of assessment. Pedagogical dialogue and formative assessment share common principles such as the emphasis on the process (MacDonald, 1991); the need for negotiation of meaning and shared understanding of assessment criteria (Boud 1992; Chanock 2000; Harrington & Elander 2003; Harrington et al. 2005; Sambell & McDowell 1998; Higgins, Hartley & Skelton 2001; Norton 2004; Price & Rust 1999; O’Donovan, Price & Rust 2000; Rust, Price & O’Donovan 2003); and the development of reciprocal commitment between assessors and assessees (Hyland 1998; Taras 2001) based on trust (Carless, 2016). We argue with Kopoonen et al. (2016) that a strong dialogic feedback culture, together with the developmental role of feedback, is part of future working-life skills, and that their importance warrants greater integration into higher education curricula as part of the development of expertise. This paper presents the outcomes of the introduction of an assessment portfolio for the module ‘Curriculum Assessment’, informed by dialogical principles and aimed at the development of professional competence among pre-service further education teachers. The evaluation of the module led to the identification of three key outcomes: firstly, the development of a shared understanding of assessment criteria; secondly, the establishment of a mutual relationship between assessors and assessees based on commitment and trust; and thirdly, a heightened self-awareness in both personal (efficacy) and professional (competence) terms. This study demonstrates a dialogical assessment model that enables students to make sense of knowledge through reflection, professional decision-making and engagement. Furthermore, it demonstrates how a dialogical approach to assessment and feedback can initiate reflective processes which may equip student teachers with knowledge transferable to professional practice.

Key References
Merry, S., Price, M., Carless, D., & Taras, M. (Eds.) (2013). Reconceptualising feedback in higher education: Developing dialogue with students. London: Routledge.
Carless, D. (2013). Trust and its role in facilitating dialogic feedback. In D. Boud & L. Molloy, Effective Feedback in Higher and Professional Education. London: Routledge.
Carless, D. (2016). Feedback as dialogue. Encyclopedia of Educational Philosophy and Theory, pp. 1-6, http://link.springer.com/referenceworkentry/10.1007/978-981-287532-7_389-1.
Nicol, D. (2010). From monologue to dialogue: improving written feedback processes in mass higher education. Assessment & Evaluation in Higher Education, 35(5), 501-517. doi:10.1080/02602931003786559
Steen-Utheim, A., & Wittek, A. (2017). Dialogic feedback and potentialities for student learning. Learning, Culture and Social Interaction, 15, 18-30.



Parallel Session 7 Chair: Andy Lloyd Time: 10:10 - 10:40 Date: 27th June 2019 Location: Room 4

86 - Enquiring Practices: Leading Institutional Assessment Enhancement at University of the Arts London (UAL)
Susan Orr, Silke Lange
University of the Arts London, London, United Kingdom

Conference Theme
Leading change in assessment and feedback at programme and institutional level

Abstract
The assessment of creative practices raises interesting challenges for staff in universities and art colleges (Orr and Shreeve, 2018). There is an underlying tension between making the assessment requirements clear and allowing for unanticipated, open-ended, creative outputs (Kleiman, 2017). ‘Assessment for Learning at UAL’ refers to the ongoing review process of UAL’s assessment practices and its generic assessment criteria at undergraduate and postgraduate level. The review of our assessment criteria is a direct response to the need to develop inclusive teaching, assessment and curriculum to challenge exclusivity and address differential attainment across diverse student profiles. Assessment is a key influencing factor in student behaviour (Brown and Race, 2012) and consequently our review seeks to ensure that our assessment criteria set out the values, knowledges, behaviours and practices we seek to inculcate in our students. The criteria need to tell the students what our institution values as an arts university with a commitment to diversity at the centre of creativity. As a creative arts university, we wanted to underpin this change project with co-design approaches that are commonly employed in the creative industries. Working with staff and students, we applied the principles of the iterative design process itself to identify criteria which complement learning in our arts disciplines and ultimately help to create discursive spaces. Carless (2015) observes that assessments are often poorly designed; we decided to situate design at the heart of this work. We employed a graphic designer to design our criteria to promote strong visual messages that support student learning. We are carefully considering the look and feel of the digital interface for students and staff. This underlines for us that the project of assessment change goes beyond the students and academics - at UAL we have involved colleagues from academic support, digital learning, language, quality and registry. In our presentation, we will explore how co-design methodologies were applied during the review of our assessment criteria and how they can be used as models for future development and reviews of policies and practices in other educational contexts. Carless (2015) and Price et al. (2012) underline the importance of meaning making and active engagement with learning outcomes and assessment criteria for both students and staff. Their research has informed our project, which places dialogue and debate at its centre.

Key References
Brown, S and Race, P (2012) Using Effective Assessment to Promote Learning, in University Teaching in Focus: a learning-centred approach, Hunt, L and Chalmers, D (eds), Australian Council for Educational Research and Routledge, pp. 74-91.
Carless, D. (2015). 'Promoting student engagement with quality' in Excellence in University Assessment. Routledge, London.
Kleiman, P. (2017). Transforming Assessment in Higher Education – A Case Study Series. Case study 2: ‘We don’t need those learning outcomes’: assessing creativity and creative assessment (pp. 21-29). Higher Education Academy, UK.
Orr, S., Shreeve, A. (2018). Art and Design Pedagogy in Higher Education. London: Routledge.
Price, M., Rust, C., O’Donovan, B., Handley, K., and Bryant, R. (2012). Assessment literacy: The foundation for improving student learning. Oxford, UK: Oxford Centre for Staff and Learning Development.

Parallel Session 7 Chair: Kimberly Ondo Time: 10:10 - 10:40 Date: 27th June 2019 Location: Room 5

87 - Goals, Benefits and Challenges: Implementing Digital Assessment at Brunel University London
Steffen Lytgens Skovfoged, Rasmus Tolstrup Blok
UNIwise, Aarhus, Denmark

Conference Theme
Integrating digital tools and technologies for assessment

Abstract
In terms of progress within digital assessment and the use of a BYOD (bring your own device) strategy for digital exams, Brunel University London is currently one of the leading universities in the UK. Since the 2015/2016 exam period, the College of Engineering, Design and Physical Sciences at Brunel University has gradually digitised its exam modules within different departments one by one, rapidly approaching a fully digitised exam and assessment process. Implementing a digital assessment platform alters procedures for many stakeholders and introduces the need for an entirely new skillset for many. It requires minute attention to the institution’s existing exam and assessment practices (Vergés Bausili, 2018), while clear-cut objectives to strive and plan for become keystones in implementing digital assessment successfully. Brunel University chose a gradual implementation with an initial pilot programme, rather than a “big-bang” implementation. Studies suggest that this procedure can be beneficial in the successful implementation of learning technology (Deeley, 2018). Mapping Brunel’s progress from initial implementation to the current status of its digital assessment project, this research project attempts to deduce some best practices and to describe the benefits received and challenges experienced in implementing digital assessment at the institution. Brunel University met the challenge of implementation with a defined set of goals:
• Improving the student experience
• Making marking easier
• Standardisation and consistency of managing assessments
• Increasing security
• Providing opportunities for analytics
Brunel University experienced “immediately identifiable and compelling” benefits for both their students and the staff. Foremost was ease of use in procedures, high security and administrative advantages for invigilators, exam administrators and markers. The project also indicated challenges pertaining to the nature of certain subjects and the state of educational technology at this point in time, as complex text, such as mathematical formulae and chemical equations, is currently more time-consuming to type out digitally than writing by hand.


To accommodate this specific challenge, we incorporated an electronic solution that allows the administrative benefits of digital assessment to continue, while not impeding the academic working speed of students sitting exams.

Key References
Deeley, Susan J. "Using technology to facilitate effective assessment for learning and feedback in higher education." Assessment & Evaluation in Higher Education 43.3 (2018): 439-448.
Vergés Bausili, Anna. "From piloting e-submission to electronic management of assessment (EMA): Mapping grading journeys." British Journal of Educational Technology 49.3 (2018): 463-478.

Parallel Session 7 Chair: Susanne Voelkel Time: 10:10 - 10:40 Date: 27th June 2019 Location: Room 7

88 - Peer Assessment in Irish Medical Science Education – the experiences and opinions of educators
Mary Mc Grath1,2, Lloyd Scott2, Pauline Logue-Collins1
1 Galway-Mayo Institute of Technology, Galway, Ireland. 2Technological University Dublin, Dublin, Ireland

Conference Theme
Leading change in assessment and feedback at programme and institutional level

Abstract
The approach to, and the type of, assessment(s) that a Higher Education (HE) programme employs can be a key factor in the effectiveness of assessment as a tool of learning. Peer assessment (PA) has the potential to develop the evaluative competence of students in HE. In the Republic of Ireland there are three Institutes that each deliver a professionally validated honours degree programme in Medical/Biomedical Science. The aim of this paper is to report on the experiences and opinions of the academic staff involved in these three programmes with respect to assessment. Presented here is one aspect of a larger study into assessment practices in the education of Irish Medical Scientists, with the overall aim being the development of a framework for the structured inclusion of PA. An insight into the current practices, experiences and opinions of staff is an essential step in the development of an effective framework. All academic staff (n=80) involved in the three programmes were invited to complete an online anonymous survey. Employing a mixed methods design, the survey included closed questions, e.g. subject area, years of experience and formal teaching qualifications, and open questions, including staff’s understanding of terms associated with assessment, whether they use PA, their reasons for choosing PA and the challenges they may have encountered. Thirty-five staff responded to the survey; all three institutes were represented. The thematic analysis of the qualitative data demonstrated that staff generally see assessment as a ‘measure’ (grade or mark) of understanding and knowledge. The distinction between formative and summative assessment was not clear for all staff; 19/33 staff described summative assessment as an ‘end of module’ exam and 13/33 staff referred to formative assessment as being ‘continuous’ or ‘ongoing’. There was an obvious lack of use of terms associated with assessment, such as “assessment as, of and for learning”. Twelve of the respondents use PA in their module(s); they reported the positives and challenges of PA as they experienced them, e.g. increased student engagement and the importance of student preparation. A key finding from this phase of the research confirms a gap in ‘assessment literacy’ among respondents. The experiences of staff with respect to the use of PA demonstrate there is much to be done in regard to building a framework to support academics.


of PA demonstrate that there is much to be done to build a framework to support academics. The next phase of the research will explore in more depth the challenges experienced by academics, with a view to establishing best practice.

Key References
Adachi, C., Tai, J.H.-M. and Dawson, P. (2018). Academics’ perceptions of the benefits and challenges of self and peer assessment in higher education. Assessment & Evaluation in Higher Education, 43(2), 294-306.
Brown, S. (2004). Assessment for learning. Learning and Teaching in Higher Education, 1(1), 81-89.
Brunton, J., Brown, M., Costello, E. and Walsh, E. (2016). Designing and developing a programme-focused assessment strategy: a case study. Open Learning: The Journal of Open, Distance and e-Learning, 31(2), 176-187.
Deneen, C. and Boud, D. (2014). Patterns of resistance in managing assessment change. Assessment & Evaluation in Higher Education, 39(5), 577-591.
Jessop, T. and Tomas, C. (2017). The implications of programme assessment patterns for student learning. Assessment & Evaluation in Higher Education, 42(6), 990-999.
Mc Grath, M.F., Scott, L. and Logue-Collins, P. (2017). Peer Assessment in Medical Science: An exploration of one programme's approach to peer assessment, including staff and student perceptions. AISHE-J: The All Ireland Journal of Teaching and Learning in Higher Education, 9(2).
Tai, J., Ajjawi, R., Boud, D., Dawson, P. and Panadero, E. (2017). Developing evaluative judgement: enabling students to make decisions about the quality of work. Higher Education, 1-15.

Parallel Session 7
Chair: Jess Evans
Time: 10:10 - 10:40
Date: 27th June 2019
Location: Room 9

89 - Rebels with a cause – can we disrupt assessment practice in professional education?
Emma Gillaspy, Joanne Keeling
University of Central Lancashire, Preston, United Kingdom

Conference Theme
Leading change in assessment and feedback at programme and institutional level

Abstract
Professional education allows learners to meet the required standards of proficiency in their chosen future profession, for example as a nurse, social worker or teacher. Therefore, much of the content and the associated proficiencies are defined by the relevant professional body. In this session, we will share our experiences of developing the assessment strategy for Pre-registration Nursing programmes at the University of Central Lancashire and initiate round table discussions on disruptive innovation of assessment practice.

Our newly designed programme reflects an active learning approach in which students are engaged in meaningful activity that puts them at the centre of their own learning. Assessment for/as learning, rather than merely testing memory (Medland, 2016), is a key feature of this design and has required us to reflect on what the future nurse will be required to do in a professional context. To develop the assessment strategy, we have engaged intensively with a wide range of students, staff, patients and users/carers, which has been a messy, complex process. By bringing a range of stakeholder groups together to make collective decisions, tensions


will inevitably surface and require attention to resolve (Lock et al., 2018). We have taken a coaching, co-creative approach to inspire the culture shift from traditional to contemporary approaches to learning and teaching. As Flavin and Quintero (2018) demonstrate in their strategy review, universities are “more likely to pursue sustaining or efficiency than disruptive innovation”. This is similarly true of professional bodies; creating a range of flexible, inclusive assessments has therefore challenged us to find and walk the line between creativity and constraint. We are optimistic that the resulting authentic assessments will prepare graduates for working and living in their global, digitally enabled future. We also believe this change process has increased staff confidence and capability in disrupting assessment practice.

Key References
Flavin, M., & Quintero, V. (2018). UK higher education institutions’ technology-enhanced learning strategies from the perspective of disruptive innovation. Research in Learning Technology, 26.
Lock, J., Kim, B., Koh, K., & Wilcox, G. (2018). Navigating the Tensions of Innovative Assessment and Pedagogy in Higher Education. The Canadian Journal for the Scholarship of Teaching and Learning, 9:1.
Medland, E. (2016) Assessment in higher education: drivers, barriers and directions for change in the UK. Assessment & Evaluation in Higher Education, 41:1, 81-96.

Parallel Session 7
Chair: Patrick Flynn
Time: 10:10 - 10:40
Date: 27th June 2019
Location: Room 10

90 - The Efficacy of Audio Feedback: An inter-institutional investigation
Phillip Miller1, Mark Clarkson2, David Murray2
1 New College Durham, Durham, United Kingdom. 2 Newcastle College, Newcastle, United Kingdom

Conference Theme
Leading change in assessment and feedback at programme and institutional level

Abstract
It is widely agreed that high-quality assessment feedback can have a large impact on student learning; however, for this impact to be fully realised, provision of feedback alone is not enough, as students must also engage with it. The situation of students not collecting, reading or acting upon feedback is common, both anecdotally in our own practice and as evidenced within the literature (Jollands et al., 2009; Buswell & Matthews, 2004). A variety of reasons for this lack of engagement have been proposed, including what Carless (2006) has described as a mismatch between teachers' intentions and students' perceptions of the value of feedback. The provision of audio feedback has often been proposed as an effective solution to the issue of engagement, with students being ten times more likely to open audio files than to read a written script (Lunt & Curran, 2009), and several studies, including Cann (2014), reporting improved engagement with the feedback itself.

This presentation will report on an inter-institutional study run by lecturing staff and a student research intern, aimed at assessing the efficacy of audio feedback on summatively assessed work with cohorts of students who have not previously received this kind of feedback. One hundred sports science undergraduate students were targeted across two mixed economy colleges to receive audio assessment feedback for a minimum of one piece of assessed work from a semester one module. Feedback was provided in line with standardised guidance influenced both by the recommendations of Cann (2014) and


previous experience within one of the participating colleges. Two weeks following the provision of this feedback, a survey was distributed to students to assess their engagement and satisfaction with the audio files in comparison to other feedback methods. This initial survey will then be followed up within semester two, to allow data to be gathered on whether these audio files are revisited in order to benefit further assessed work.

Key References
Buswell, J. and Matthews, N. (2004) Feedback on feedback! Encouraging students to read feedback: A University of Gloucestershire case study. Journal of Hospitality, Leisure, Sport & Tourism Education 3(1).
Cann, A. (2014) Engaging Students with Audio Feedback. Bioscience Education, 22:1, 31-41.
Carless, D. (2006) Differing perceptions in the feedback process. Studies in Higher Education 31(2), 219–233.
Jollands, M., McCallum, N. and Bondy, J. (2009) If students want feedback why don't they collect their assignments? In Proceedings of the 20th Australasian Association for Engineering Education Conference 2009, University of Adelaide, pp. 735–740.
Lunt, T. and Curran, J. (2009) Are you listening please? The advantages of electronic audio feedback compared to written feedback. Assessment and Evaluation in Higher Education 35(7), 759–769.

Parallel Session 7
Chair: Amanda Chapman
Time: 10:10 - 10:40
Date: 27th June 2019
Location: Room 11

91 - How can we transform peer-assessment into a formative and self-regulative process? An experience of assessment criteria transparency
Maite Fernández-Ferrer1, Nati Cabrera1, Laura Pons-Seguí2, Elena Cano3
1 Universitat Oberta de Catalunya, Barcelona, Spain. 2 Universitat de Barcelona, Barcelona, Spain. 3 Universitat de Barcelona, Barcelona, Spain

Conference Theme
Addressing challenges of assessment in mass higher education

Abstract
Peer-assessment is a strategy to develop students’ evaluative judgement (Boud et al., 2018) and self-regulation capacity (Panadero et al., 2018). However, students need to understand and appropriate the task’s aims and assessment criteria in order to provide quality feedback (Carless & Boud, 2018). This contribution is framed within the project “XXX”, in which six higher education institutions participate. The aim of this contribution is to analyse how strategies of assessment criteria transparency contribute to the development of students’ evaluative judgement.

The experiences developed within this project consist of elaborating a complex task of which different drafts are submitted. Students have to peer-assess these drafts, providing qualitative comments aligned to the assessment criteria. In order to help students to understand the assessment criteria, different appropriation strategies are planned depending on students’ evaluative judgement level. The feedback provided has to meet the characteristics of quality feedback. Students have to reflect on the feedback received and state, within one week, how they have integrated it. Concurrently, teachers provide qualitative comments on the feedback provided to peers and on the final version of the task. The impact of this experience is analysed through Pintrich’s (1991) MSLQ questionnaire and through students’ feedback, reflections and marks.



In this contribution, the results obtained in the subject XXX (2nd year, 1st semester) from the XXX degree of the University of XXX are presented. A total of 56 students participated in the experience. In this subject, assessment criteria were presented in a written format and were discussed with the group. The task developed was an innovation project with three submissions (two drafts and the final version). This task was weighted at 40% of the final mark of the subject (of which 60% was for the task and 40% for the feedback provided). In line with previous studies (Boud et al., 2018; Panadero et al., 2018), the preliminary results indicate that students perceive that providing feedback helps them to be more critical with their task (x̄ = 4.33 out of 5), more aware of the processes they should keep and foster (x̄ = 4.27) and to make well-founded judgements (x̄ = 4.24). In general, students believe that this experience has contributed to the development of learning-to-learn (x̄ = 4.44) and teamwork (x̄ = 4.36) competences. The analysis of the type of feedback provided, of how it has been used, and of the qualifications obtained will allow us to know whether the strategies of assessment criteria transparency had a positive impact on students’ evaluative judgement and learning-to-learn competence.
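
On a literal reading of the weighting above (an editorial worked example, assuming the 60/40 split applies multiplicatively within the 40% component; the authors do not spell out this calculation), the two components contribute to the final mark as follows:

```latex
% Worked reading of the stated weighting (editorial illustration only):
\[
\underbrace{0.40 \times 0.60}_{\text{task drafts and final version: } 24\%}
\;+\;
\underbrace{0.40 \times 0.40}_{\text{feedback provided to peers: } 16\%}
\;=\; 0.40 \text{ of the final mark}
\]
```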

Key References
Boud, D., Ajjawi, R., Dawson, P., Tai, J. (Eds.) (2018). Developing Evaluative Judgment in Higher Education: Assessment for Knowing and Producing Quality Work. New York: Routledge.
Carless, D. & Boud, D. (2018). The development of student feedback literacy: enabling uptake of feedback. Assessment & Evaluation in Higher Education. DOI: 10.1080/02602938.2018.1463354
Dunlosky, J., Rawson, K. (2015). Do students use testing and feedback while learning? A focus on key concept definitions and learning to criterion. Learning and Instruction, 39, 32-44. Retrieved from: http://www.sciencedirect.com/science/article/pii/S0959475215300037
Panadero, E., Andrade, H., & Brookhart, S. (2018). Fusing self-regulated learning and formative assessment: a roadmap of where we are, how we got here, and where we are going. The Australian Educational Researcher, 45(1), 13-31. DOI: 10.1007/s13384-018-0258-y
Pintrich, P. R. (2003). A Motivational Science Perspective on the Role of Student Motivation in Learning and Teaching Contexts. Journal of Educational Psychology, 95(4), 667-686.
Price, M., Handley, K., Millar, J. & O'Donovan, B. (2010). Feedback: all that effort, but what is the effect? Assessment & Evaluation in Higher Education, 35(3), 277-289.

Micro Presentations
Chair: Pete Boyd
Time: 11:00 - 12:00
Date: 27th June 2019
Location: Piccadilly Suite

92 - Academic feedback and performance of students in institutions of higher education: Who is in control? How does our feedback impact students?
Amy Musgrove, Dave Darwent
Sheffield Hallam University, Sheffield, United Kingdom

Conference Theme
Leading change in assessment and feedback at programme and institutional level


Abstract
Attending higher education institutions (HEIs) and achieving academic success is associated with positive outcomes which are valued at both the individual and societal levels, such as reduced levels of unemployment and poverty and increased levels of civic participation. Thus, many studies have focused on enhancing learning by examining the factors that affect students' performance. One such factor is academic feedback, the teaching behaviour most strongly related to academic success (Bellon et al., 1991). Within this research framework, this study explores the relationship between academic performance and different components of academic feedback in a population of students from different degrees within the Faculty of Social Sciences and Humanities.

The feedback students received was taken from the Dissertation module, which had two assessment points. Feedback on the Interim Framework Report (IFR) was analysed using SPSS to examine changes in students' grade point average between the first summative submission of the IFR (T1) and the final dissertation (T2). Hierarchical linear and logistic regressions assess the impact of students' performance on the structure and content of feedback, as well as the extent to which the content of the T1 feedback can affect T2 assessment improvements and grade increases. We measured the quality of the feedback against the Stanford model to see how many elements of the model were used. We have two strong preliminary findings: certain types of feedback appear more successful in increasing grades between T1 and T2, and particular ways of structuring T1 feedback are more successful for increasing T2 grades. This presentation will give lecturers and the wider academic community the opportunity to reflect on their own practice and to self-empower, not only to create and promote academic excellence among students, but also to shape their own professional futures.
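
The analyses described above were run in SPSS. Purely as an illustration of the modelling step, a rough equivalent in Python is sketched below; the file and variable names (feedback_study.csv, t1_grade, n_stanford_elements, has_action_points) are hypothetical stand-ins, not the study's actual measures.

```python
# Illustrative sketch only: the study itself used SPSS. Variable names are
# hypothetical stand-ins for the feedback measures described in the abstract.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("feedback_study.csv")  # hypothetical data file

# Grade change between the IFR submission (T1) and the dissertation (T2).
df["grade_change"] = df["t2_grade"] - df["t1_grade"]
df["improved"] = (df["grade_change"] > 0).astype(int)

# Linear model: do the content and structure of T1 feedback predict the
# size of the T1-to-T2 change, controlling for T1 performance?
linear = smf.ols(
    "grade_change ~ n_stanford_elements + has_action_points + t1_grade",
    data=df,
).fit()
print(linear.summary())

# Logistic model: do the same predictors affect the odds of any increase?
logit = smf.logit(
    "improved ~ n_stanford_elements + has_action_points + t1_grade",
    data=df,
).fit()
print(logit.summary())
```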
Key References
Bellon, J.J., Bellon, E.C. & Blank, M.A. (1991). Teaching from a Research Knowledge Base: a Development and Renewal Process. Facsimile edition. Prentice Hall, New Jersey, USA.
Bloxham, S. and Boyd, P. (2007) Developing Effective Assessment in Higher Education: a practical guide. Open University Press, England.
Brookhart, S.M. (2008) How to Give Effective Feedback to Your Students. Association for Supervision and Curriculum Development, Alexandria, VA, USA.

93 - Move over Prometheus: Reconceptualising feedback through the flame metaphor
Philip Denton, Casey Beaumont, David McIlroy
Liverpool John Moores University, Liverpool, United Kingdom

Conference Theme
Assessment for learning: feedback and the meaning and role of authentic assessment

Abstract
Tutors’ efforts to refine the feedback information that they return to students have not led to sweeping enhancements in learning, with national surveys continuing to record attendant dissatisfaction (Nicol, Thomson & Breslin, 2014). Feedback is characterised as information used, misconceived as information received, and it should be an integral consideration within programme design (Boud & Molloy, 2013). Disparities between student interpretations and lecturers’ intentions of feedback have provoked collective disillusionment (Rand, 2017), and it remains a bothersome issue. This presentation argues
that the academy will be receptive to a unified conception of feedback expressed through analogy, an evocative teaching instrument that is as old as the hills. Thibodeau and Boroditsky (2011) have demonstrated how metaphors in particular can have a profound influence on both information gathering and thinking during opinion forming. The flame metaphor follows from a rudimentary analogy drawn between the prerequisites for creating fire and those for feedback (Denton & McIlroy, 2018). Feedback is a flame, and therefore has an associated challenge. The academy’s recognised fixation with the performance information returned to students reflects our natural preoccupation with the material over the intangible, flames being untouchable. The metaphor characterises this information as learning fuel that is insufficient in itself to constitute feedback. It also recognises how a purposefully designed curriculum can provide the oxygen of opportunity for students to use this information, relayed efficiently within an atmosphere geared for learning. Accordingly, even high-quality performance information will fail to illuminate within hypoxic learning environments. The final component of feedback embodied within the metaphor is ignition: students applying their knowledge, skills and attributes to take optimal advantage of the opportunities afforded to them. This intuitive spark is students’ assessment literacy: their understanding of the purposes of assessment and their application of associated methodologies. The flame metaphor is readily extended to sustainable feedback practices (Carless et al., 2011), an exclusive reliance on performance information from tutors in contemporary higher education being redolent of a fossil fuel economy. An ongoing two-year study into the analogy’s impact on students’ perceptions of assessment for learning will be outlined. It is concluded that the flame metaphor has a number of merits, elegantly rebuffing the classical myth that feedback/fire is a material gift presented from the one to the many.

Key References
Boud, D., Molloy, E. (2013) Rethinking models of feedback for learning: the challenge of design. Assessment & Evaluation in Higher Education, 38:6, 698-712.
Carless, D., Salter, D., Yang, M., Lam, J. (2011). Developing sustainable feedback practices. Studies in Higher Education, 36, 395–407.
Denton, P., McIlroy, D. (2018) Response of students to statement bank feedback: the impact of assessment literacy on performances in summative tasks. Assessment & Evaluation in Higher Education, 43:2, 197-206.
Nicol, D., Thomson, A., Breslin, C. (2014) Rethinking feedback practices in higher education: a peer review perspective. Assessment & Evaluation in Higher Education, 39:1, 102-122.
Rand, J. (2017) Misunderstandings and mismatches: The collective disillusionment of written summative assessment feedback. Research in Education, 97:1, 33–48.

94 - EvalCOMIX®: a web-based programme to support assessment as learning and empowerment
María Soledad Ibarra-Sáiz, Gregorio Rodríguez-Gómez
University of Cadiz, Cadiz, Spain

Conference Theme
Integrating digital tools and technologies for assessment

Abstract
For many years, assessment strategies and practices have emphasised, on the one hand, the importance of integrating assessment and learning and, on the other, the need to develop technological tools that facilitate this relationship and integration.



The authors of this article are currently focused on the development and application of the concept of assessment as learning and empowerment (Rodríguez-Gómez and Ibarra-Sáiz, 2015), characterised by three main challenges: achieving the participation of students in the assessment process, incorporating self-assessment, peer assessment and shared assessment; feedforward, understood as strategies that provide proactive information on students’ progress and results so that they can participate in their improvement; and high-quality tasks, i.e. challenging tasks that are motivational and related to daily life. The implementation of these three challenges allows university students to self-regulate their learning process and provides empowerment within their personal, professional and working environments.

According to the concept of assessment as learning and empowerment, technological tools must be integrated within high-quality tasks, encourage the participation of students in their own assessment process, and provide useful and relevant information on their progress so that they can take appropriate decisions in order to improve their work and performance. It is with the express intention of connecting and consolidating these various propositions that the EVALfor Research Group has been developing the EvalCOMIX® web-based programme, which we present in this micro-presentation.

EvalCOMIX® is a web-based programme (http://evalcomix.uca.es) that supports the creation and implementation of assessment tools (rubrics, grading scales, mixed instruments, etc.) and their use in the assessment process by both tutors and students. Consequently, it demands their active participation in the assessment process. In this micro-presentation, we will first describe the EvalCOMIX® web service and then present the opinions of university tutors and students who have used this web service in their courses.

We conclude that EvalCOMIX® is actually more than just a web-based programme for assessment. Through its use, on the one hand, it can encourage student participation in the assessment process, by selecting or defining criteria and by building tools and processes used in self-assessment and peer assessment. In addition, students receive valuable and relevant information about their performance and progress, so that improvements can be incorporated both in their learning process and in the results they achieve.

Key References
Dawson, P., & Henderson, M. (2017). How does technology enable scaling up assessment for learning? In D. Carless, S. M. Bridges, C. K. Y. Chan, & R. Glofcheski (Eds.), Scaling up Assessment for Learning in Higher Education (pp. 209–222). Singapore.
Bennett, S., Dawson, P., Bearman, M., Molloy, E., & Boud, D. (2017). How technology shapes assessment design: Findings from a study of university teachers. British Journal of Educational Technology, 48(2), 653–671.
Ibarra-Sáiz, M. S., & Rodríguez-Gómez, G. (2017). EvalCOMIX®: A web-based programme to support collaboration in assessment. In T. Issa, P. Kommers, T. Issa, P. Isaías, & T. B. Issa (Eds.), Smart Technology Applications in Business Environments (pp. 249–275). Hershey, PA: IGI Global.
Rodríguez-Gómez, G., & Ibarra-Sáiz, M. S. (2015). Assessment as learning and empowerment: Towards sustainable learning in higher education. In M. Peris-Ortiz & J. M. Merigó Lindahl (Eds.), Sustainable learning in higher education. Developing competencies for the global marketplace (pp. 1–20). Cham: Springer International Publishing.



Rodríguez-Gómez, G., & Ibarra-Sáiz, M.S. (2016). Towards Sustainable Assessment: ICT as a Facilitator of Self- and Peer Assessment. In M. Peris-Ortiz, J. Alonso-Gómez, F. Vélez-Torres & C. Rueda-Armengot (Eds.), Education Tools for Entrepreneurship (pp. 55–71). Cham: Springer International Publishing.

95 - Supporting Asynchronous, Multi-Institution Student Learning through Peer-Assessment and Feedback, Using PeerWise in Third-Level Chemistry
Eileen M O'Leary1, Barry Ryan2, Gary Stack3, Elaine O'Keeffe1, Laura Crowe1, Cormac Quigley4
1 Cork Institute of Technology, Cork, Ireland. 2 Technological University Dublin, Dublin, Ireland. 3 Athlone Institute of Technology, Athlone, Ireland. 4 Galway-Mayo Institute of Technology, Galway, Ireland

Conference Theme
Integrating digital tools and technologies for assessment

Abstract
PeerWise is a freely available online platform that allows students to interact through creating Multiple Choice Questions (MCQs) for peers, as well as answering, commenting on and rating these MCQs. Previous research concludes that engaging with PeerWise has numerous benefits for students, including: a deeper fundamental subject knowledge through question creation; a greater enjoyment of learning through elements of gamified learning, including badges and peer-rating; and a shared community of practice that encompasses a safe and supported online learning environment. Previous PeerWise studies have focused entirely on a single case; in our study we sought to investigate whether these benefits could be extended beyond a single classroom or institution. Our research question was: can PeerWise be used to support asynchronous, multi-institution student learning in third-level foundation chemistry?

An action research methodology was utilised to address this primary research question and aligned sub-questions relating to the benefits and concerns that varied with multi-institutional, and hence multi-cohort, engagement. This presentation will outline the findings from the first action research cycle and how these informed the second cycle, focusing primarily on the impact on student learning. The triangulated data set includes the lived experience of the academics involved in the process and their learning through the design, execution and assessment phases. The value of peer feedback and first-year students’ perceptions of peer learning emerged as key themes, and will thus be explored. Emerging from the two action research cycles, we will present lessons learned that concentrate primarily on the differences in expectation between academics and students, and outline a new approach to managing students’ expectations.

The creation of a multi-institutional collaborative PeerWise space was based on the perceived academic benefits of PeerWise, including: ‘just-in-time learning’; student-centred revision; using student-authored questions to gauge understanding; and allowing us to tailor our teaching to advance student progress. However, these benefits may not have been fully realised in this study due to the number of variables simultaneously presenting: different lecturers, and hence different teaching styles and emphases; different student cohorts, some focusing solely on chemistry and some taking chemistry as a service module (i.e. on a biology, pharmacy, agricultural or engineering stream); and students from different social and economic backgrounds.
The question we will pose in our discussion is: why did we do it, and did it retain and accentuate the advantages of individual PeerWise classes?
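
PeerWise itself is a hosted platform, so its internals sit outside this study. Purely to make concrete the kind of artefact it coordinates, here is a minimal illustrative model of a student-authored MCQ carrying peer ratings and comments; the class, fields and example content are editorial assumptions, not PeerWise's actual data schema or API.

```python
# Editorial sketch of a student-authored MCQ with peer ratings and comments;
# not PeerWise's actual data model or API. Requires Python 3.9+.
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class PeerMCQ:
    author: str                # student who wrote the question
    institution: str           # cohort marker, for multi-institution use
    stem: str
    options: list[str]
    correct: int               # index into options
    ratings: list[int] = field(default_factory=list)   # peer ratings, e.g. 0-5
    comments: list[str] = field(default_factory=list)  # peer feedback

    def rate(self, score: int, comment: str = "") -> None:
        """Record one peer judgement, optionally with written feedback."""
        self.ratings.append(score)
        if comment:
            self.comments.append(comment)

    @property
    def mean_rating(self) -> float:
        return mean(self.ratings) if self.ratings else 0.0

# Hypothetical usage: a foundation-chemistry question rated by a peer.
q = PeerMCQ(
    author="student_042",
    institution="institution_A",
    stem="Which functional group defines a carboxylic acid?",
    options=["-OH", "-COOH", "-NH2", "-CHO"],
    correct=1,
)
q.rate(4, "Clear stem; the third distractor is too easy.")
print(q.mean_rating)  # 4.0
```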



Key References
Somekh, B. (1995) ‘The Contribution of Action Research to Development in Social Endeavours: a position paper on action research methodology’, British Educational Research Journal, 21(3), 339.
McDonald, C. (2012) ‘Understanding Participatory Action Research: A Qualitative Research Methodology Option’, Canadian Journal of Action Research, 13(2), 34-50.
Kay, A.E., Hardy, J. and Galloway, R.K. (2018) ‘Learning From Peer Feedback On Student-Generated Multiple Choice Questions: Views Of Introductory Physics Students’, Phys. Rev. Phys. Educ. Res. 14, 010119.
Rivers, J., Higgins, D., Mills, R., Maier, A.G., Howitt, S.M. (2017) ‘Asking and Answering Questions: Partners, Peer Learning, and Participation’, International Journal for Students as Partners, 1(1).
Ryan, B., Mac Raighne, Casey, M., Howard, R. ‘Student Attitudes to an Online, Peer-instruction, Revision Aid in Science Education’, accessed 21/01/2019: https://arrow.dit.ie/cgi/viewcontent.cgi?article=1175&context=schfsehart
Ryan, B. ‘Line Up, Line Up: Using Technology to Align and Enhance Peer Learning and Assessment in a Student Centred Foundation Organic Chemistry Module’, accessed 21/01/2019: doi: 10.14297/jpaap.v3i1.135.

96 - Developing students’ evaluative judgement: A challenge to our assessment practices?
David Boud
University of Technology, Sydney, Australia

Presentation
Micro Presentation

Abstract
Courses naturally focus on particular learning outcomes associated with disciplinary or professional subject matter, along with addressing a range of generic learning outcomes (working with others, etc.). However, can we regard a course that meets all these requirements as sufficient preparation for a graduate in the 21st century? This presentation suggests not. In addition to these substantive outcomes, a graduate also needs the capacity to know what they know and don’t know, and what they can do and cannot do. Without this achievement at a reasonable level, we would regard a graduate as a poor specimen. Why would anyone want to employ someone who wasn’t aware of the scope of their expertise? Such graduates would be dangerous, as they wouldn’t know what they could rely on. They need the capacity to make such evaluative judgements of their own performance in order to both work and learn effectively. They need also to be able to make such judgements of others; without this capability they wouldn’t know who or what to trust. Evaluative judgement is defined as “the capability to make informed decisions about the quality of work of oneself and others” (Tai et al., 2018).

Such a viewpoint challenges conventional notions of assessment. It is not sufficient for us to determine what a student can and cannot do. There is a risk that if we focus attention on making our judgements of students more effective, we could in the process de-skill them from making effective judgements for themselves. They may become dependent on official assessment judgements, and their own ability can atrophy if they are not given sufficient opportunity to develop these skills for themselves.



It will be suggested that not only should it be a goal of all courses to develop students’ evaluative judgement, but that we need to ensure that assessment practices do not inadvertently undermine the achievement of this goal through an excessive emphasis on unilateral judgement by others. It is not sufficient to develop this capacity through teaching and learning activities; the assessment regime needs also to contribute to achieving such a fundamental purpose of higher education.

Key References
Boud, D., Ajjawi, R., Dawson, P., Tai, J. (Eds.). (2018). Developing Evaluative Judgement in Higher Education. London: Routledge.
Tai, J., Ajjawi, R., Boud, D., Dawson, P. and Panadero, E. (2018). Developing evaluative judgement: enabling students to make decisions about the quality of work. Higher Education, 76, 467–481.

97 - Improving assessment by aligning with better learning outcomes
Sally Brown
Independent Consultant, Leeds, United Kingdom

Abstract
Learning outcomes have become the ‘go-to’ building blocks of curriculum design and the basis of effective assessment. But they are not universally popular and often do not support effective assessment. In earlier times, what students were to be assessed on was presented in the form of a reading list; later the term ‘syllabus’ began to be used widely, outlining what was to be taught. Subsequently, building particularly on the work of Mager (1962) on preparing instructional objectives, learning objectives began appearing in very structured formats. Through this, and the work of Biggs (1996) advocating constructive alignment, learning outcomes became the norm through which we describe what we want students to know and be able to do, hence shaping assessment tasks and judgements. Stefani (2019) proposes that meaningful learning outcomes should encompass ‘a wide range of student attributes and abilities both cognitive and affective’ (p. 42), but too often only the former are assessed or considered assessable. Phil Race (2018) would suggest that we need to go beyond learning outcomes when considering criteria, by thinking also about learning incomes (all the things learners are bringing to the learning situation), emergent learning outcomes (those things we didn’t, and maybe couldn’t, predict) and intended learning outgoings (helping students link the intended learning outcomes to the wider world of future learning and employment).

We are often urged to base assessment on learning outcomes that are Specific, Measurable, Achievable, Realistic and Time-constrained, but I propose designing criteria linked to VASCULAR learning outcomes, which are:


Verifiable: Can we tell when they’ve been achieved? And can students? Action orientated: Do they lead to real and useful activity? Singular: i.e. not portmanteau outcomes combining two or more into one, making it difficult to assess if differently achieved, but readily matchable to student work produced?


   

Constructively aligned? (so that there is clear alignment between aims (What do students need to be able to know and do?), what is taught/ learned, how these are assessed and evaluated); Understandable i.e. using language codes that are meaningful to all stakeholders? Level-appropriate? Suitable and differentiable between1st year, 2nd year, 3rd year, Masters, other PG? Affective-inclusive i.e. not just covering actions but capabilities in the affective domain? Regularly reviewed? Not just stuck in history and always fit-for-purpose.

Key References
Biggs, J., 1996. Enhancing teaching through constructive alignment. Higher Education, 32(3), pp. 347-364.
Mager, R.F., 1962. Preparing instructional objectives.
Race, P. (2018) https://phil-race.co.uk/2018/05/beyond-learning-outcomes/ (accessed March 2019)
Stefani, L., 2009. ‘Curriculum Design and Development’, in Fry, H., Ketteridge, S. and Marshall, S. (eds.) A handbook for teaching and learning in higher education: Enhancing academic practice. Routledge.
AHE Conference 2019: Presentation https://oxabs-file-hosting-production-us-west2.s3.amazonaws.com/e85573a8-d280-4d04-9e17-99e6c105b333.pdf

98 - Feedback designs for large classes
David Carless
University of Hong Kong, Hong Kong, Hong Kong

Abstract
The main challenges for feedback processes are that one-way transfer of information seldom leads to student engagement or uptake; students often lack strategies to use feedback; and the structural barrier of end-of-module assessment tasks results in information coming too late for student action. A way forward lies in feedback designs: conceptualizing feedback processes as part of course design, not an episodic mechanism delivered by teachers to learners (Boud & Molloy, 2013). This short presentation proposes that effective feedback designs involve the judicious implementation of some, but not necessarily all, of the following features:

• interactivity and bi-directionality;
• peer tutoring and peer feedback;
• technology use predicated on student involvement;
• student participation in making and refining academic judgments;
• student development of feedback literacy.

Well-designed feedback processes make large classes less of a barrier than might first be thought, because they mainly emphasize activating the student role. A key point is to avoid approaches that are based predominantly on teacher telling (Sadler, 2010, 2015). From a small collection of feedback designs with large classes of 200-500 students at the University of Hong Kong, five main strategies are identified from a range of different disciplines:


1. Peer tutoring and peer learning, e.g. the deployment of senior students as teaching assistants and facilitators of tutorials;
2. The strategic use of exemplars to clarify expectations, provide guidance and illustrate the nature of quality in the discipline;
3. Automated feedback systems to provide instant feedback, including hints and teaching materials for tackling problems;
4. Group projects as sites for peer interaction and peer feedback, aligned with strategies to minimize free-riding;
5. Sequences of rich assessment tasks that motivate students to learn independently and develop capacities for evaluative judgment.

Practices for a specific module usually focused on two or three of these strategies, selected according to contextual and disciplinary factors. A possible inference is that the challenge of large class sizes promotes careful consideration of assessment and feedback designs. Implications for the development of teacher and student feedback literacy are also discussed.

Key References
Boud, D., & Molloy, E. (2013). Rethinking models of feedback for learning: The challenge of design. Assessment and Evaluation in Higher Education, 38(6), 698-712.
Sadler, D. R. (2010). Beyond feedback: Developing student capability in complex appraisal. Assessment and Evaluation in Higher Education, 35(5), 535-550.
Sadler, D. R. (2015). Backwards assessment explanations: Implications for teaching and assessment practice. In D. Lebler, G. Carey, & S. Harrison (Eds.), Assessment in Music Education: From Policy to Practice (pp. 9-19). Cham: Springer.

99 - Future-proofing our assessment strategies: challenges and opportunities
Peter Hartley
Independent Education Consultant, United Kingdom

Abstract
In the final round of NTFS Group Projects some ten years ago, two projects explored issues of assessment across courses/programmes rather than focusing on modules or assignments: PASS and TESTA (links to current websites below). The PASS workshop at this conference explores detailed issues and developments arising from these very different initiatives, both of which have achieved lasting impact. This session will raise a number of ‘higher-level’ questions about assessment strategies. These keep surfacing at PASS workshops, and we need to resolve them if we are going to develop and implement assessment strategies which are genuinely future-proof:

1. Do our assessment traditions help or hinder progress?
If we interrogate specific ‘taken-for-granted’ assessment practices in HE, then we find a number of ‘traditions’ whose origins and underlying assumptions seem to be ‘lost-in-time’. For example, we use percentage marks and degree classifications which have a complex and sometimes mysterious history. What are the key traditions which we need to challenge/update?

2. How can we integrate what we know about assessment to achieve a more coherent and future-proof approach?



We know a great deal about assessment at modular level (e.g. David Carless, 2015), and we now have innovative strategies to improve some of the essential components of assessment, such as feedback (e.g. the FEATS project and the work of Naomi Winstone at the University of Surrey; Winstone and Carless, 2019). But how can/do we combine these ‘micro-initiatives’ into a comprehensive, integrative approach for the whole course/programme?

3. Where do/should assessment strategies figure in the curriculum design process?
The literature on curriculum development provides various recipes to achieve ‘effective’ curriculum design, including approaches which are challenged by some sessions at this conference (e.g. see Sally Brown on learning outcomes). But we also have evidence of a significant gap between what we ‘should be doing’ according to the established literature and what actually happens on the ground in the curriculum design process (e.g. Binns, 2017). How do we resolve these disparities?

Key References
Binns, C. (2017) Module Design in a Changing Era of Higher Education: academic identity, cognitive difference and institutional barriers. Palgrave Macmillan.
Carless, D. (2015). Excellence in University Assessment: Learning from award-winning practice. Routledge.
FEATS - https://www.surrey.ac.uk/department-higher-education/learning-lab/feedback-engagement-tracking-surrey
PASS - https://www.bradford.ac.uk/pass/
TESTA - http://www.testa.ac.uk
Winstone, N., & Carless, D. (2019). Designing effective feedback processes in higher education: A learning-focused approach. Routledge.

Parallel Session 8
Chair: Linda Graham
Time: 12:10 - 12:40
Date: 27th June 2019
Location: Piccadilly Suite

100 - ‘What do they do with my feedback?’ A study of how undergraduate and postgraduate Architecture students perceive and use their feedback
Charlie Smith
Liverpool John Moores University, Liverpool, United Kingdom

Conference Theme
Assessment for learning: feedback and the meaning and role of authentic assessment

Abstract
It has been argued that comments made about students’ work only become feedback when students use them to make improvements to their work or their learning strategies (Carless and Boud, 2018). However, research suggests that students often do not understand their feedback, or struggle to utilise it to improve their work (Boud and Molloy, 2013a; Hattie and Timperley, 2007). The entire Architecture pedagogy is structured around iterative feedback dialogue, with students receiving comments in a variety of formats: verbal formative comments in weekly tutorials, verbal formative comments in design reviews, and written summative comments following assessments. Yet the NSS shows satisfaction with assessment and feedback to be lower than average for Architecture programmes, indicating that shortcomings still exist despite this rich combination of commentaries.



This presentation discusses the outcomes of a primary research project investigating how students perceive, comprehend and utilise feedback about their coursework. It questions which types of feedback (e.g. verbal, written, peer) they find most and least useful (and why), what makes them more likely to act on feedback, and whether they ever revisit feedback. A questionnaire distributed to all cohorts in the undergraduate and postgraduate Architecture programmes captures a broad cross-section of the student voice at all levels of study.

Boud and Molloy (2013a) highlight that feedback evaluations should consider what actions students take as a result of receiving it, and that without understanding how feedback has been used, teachers are blind to the consequences of their actions and cannot act to improve learning (2013b). The objective of this study was to better understand how students make sense of their feedback. It will provide delegates with an awareness of the factors that influence whether, and how, students act on commentary they receive, and which sources of comments they consider to be most useful. It will be of interest to those looking to understand how comments are utilised by students so that they become meaningful feedback.

Key References
Boud, D. and Molloy, E. (2013a), ‘What is the problem with feedback?’, in Feedback in Higher and Professional Education, edited by David Boud and Elizabeth Molloy, pp. 90-103. Abingdon: Routledge.
Boud, D. and Molloy, E. (2013b), ‘Rethinking models of feedback for learning: the challenge of design’, Assessment & Evaluation in Higher Education, 38(6), pp. 698-712.
Carless, D. and Boud, D. (2018), ‘The development of student feedback literacy: enabling uptake of feedback’, Assessment & Evaluation in Higher Education.
Hattie, J. and Timperley, H. (2007), ‘The power of feedback’, Review of Educational Research, 77(1), pp. 81-112.

Parallel Session 8
Chair: Tina Harvey
Time: 12:10 - 12:40
Date: 27th June 2019
Location: Room 3

101 - Reflecting on quality with first-year undergraduate students
Emma Whitt, Carmen Tomas, Jonathan Halls
University of Nottingham, Nottingham, United Kingdom

Conference Theme
Addressing challenges of assessment in mass higher education

Abstract
We are establishing a framework to structure activities that aim to develop the assessment literacy of first-year psychology students. In particular, we are attempting to develop students’ evaluative judgement: the ability to understand quality, to judge one's own and others’ work, and to learn from those judgements (Ajjawi, Tai, Dawson, & Boud, 2018). First-year university students have experience of being assessed; however, the criteria used for assessment at university are much more open than those they may have encountered previously, and this freedom can either increase students’ sense of agency or lead them to feel uncertain (Ritchie, 2016). It is important, therefore, to ensure that first-year students are supported through this transition.

Understanding quality is a key feature of evaluative judgement (Ajjawi et al., 2018) and as such is a focus of this work. One of the very first activities run with students as part of the framework was to ask them to reflect on what makes an excellent piece of work. Even in the first few weeks of their first year, students


were able to articulate what makes good quality work. We followed this with a marking activity, the aim of which was to give students practice in assessing others’ work. After this, students were asked again to reflect on their thoughts around excellent work. Student comments on the reflection activity were collected and coded according to the criteria outlined in the marking rubric. Data from two cohorts of students were combined (N = 269) and the change in responses after the marking activity was analysed. The results show a significant increase in the proportion of responses relating to each criterion outlined in the rubric, with changes of between 17% and 30%. A large number of responses focused on demonstration of knowledge and understanding, with critical reflection included in only about half of the responses. These initial analyses show a positive impact of the activities on students’ articulation of criteria and highlight areas (i.e., critical reflection) on which we can concentrate to further develop students’ understanding.

Key References
Ajjawi, R., Tai, J., Dawson, P., & Boud, D. (2018). Conceptualising evaluative judgement for sustainable assessment in higher education. In D. Boud, R. Ajjawi, P. Dawson, J. Tai (Eds.), Developing evaluative judgment in higher education: Assessment for knowing and producing quality work (pp. 7-17). Abingdon, UK: Routledge.
Ritchie, L. (2016). Fostering self-efficacy in higher education students. London, UK: Palgrave.

Parallel Session 8
Chair: Hilary Constable
Time: 12:10 - 12:40
Date: 27th June 2019
Location: Room 4

102 - Effecting change in feedback practices across a large research intensive institution
Teresa McConlogue, Jenny Marie
UCL, London, United Kingdom

Conference Theme
Leading change in assessment and feedback at programme and institutional level

Abstract
This presentation reports on the evaluation of an initiative to change feedback practices across an institution. The impetus for the initiative was student concerns about the quality, quantity, consistency, timeliness and usefulness of feedback. The vehicle for change is a two-hour workshop on ways of giving good quality feedback. The workshop is evidence-led, drawing on current literature and on research on student perspectives in the institution. The workshop explores the value of dialogic feedback, the importance of developing students’ evaluative judgement, and developing understandings of academic standards (Boud et al. 2018, Carless 2006, Sadler 2010). A feedback profiling tool (Hughes et al. 2015) is used to analyse samples of feedback, and ways of helping students understand feedback are discussed. Since 2017, the workshop has been offered to staff across the institution in order to achieve saturation Continuing Professional Development (CPD) (Bloxham 2016, Forsyth et al. 2015).

The institution is a large research-intensive university in London with around 40,000 students and over 7,000 academic and research staff. In the 2014 REF exercise, 43% of the institution’s submitted research was classified as 4*; the institution scored silver in the 2017 TEF. Our target was to reach 1,000 staff in the first two years (we plan to continue offering the workshop for four years in total) in order to widely disseminate the workshop messages.



To date, 650 staff have attended the workshop: a mix of senior teachers, departmental and faculty directors of education, programme leaders, lecturers, researchers, professional services staff and teaching assistants. We want to discover what impact, if any, this saturation CPD has on staff and student attitudes towards feedback. The impact of the workshop is being evaluated through mixed methods longitudinal research (several iterations over four years) using a paper evaluation at the end of the workshop, a follow-up impact survey, interviews with staff, and focus groups with students in three sample departments which represent a range of disciplines, sizes (e.g. large/small departments) and high and low student satisfaction with feedback. This presentation will report on an analysis of the data set, focussing on lessons learned so far, the effectiveness of the evaluation strategy and how we intend to take this work forward.

Key References
Bloxham, S. (2016) Central challenges in transforming assessment at departmental and institutional level. Keynote. AHE Seminar Day, 30th June, Manchester.
Boud, D., Ajjawi, R., Dawson, P., & Tai, J. (Eds.). (2018). Developing Evaluative Judgement in Higher Education: Assessment for Knowing and Producing Quality Work. Routledge.
Carless, D. (2006) Differing perceptions in the feedback process. Studies in Higher Education, 31(2), 219–233.
Forsyth, R., Cullen, R., Ringan, N. & Stubbs, M. (2015) Supporting the development of assessment literacy of staff through institutional process change. London Review of Education, 13(3).
Hughes, G., Smith, H. & Creese, B. (2015) Not seeing the wood for the trees: developing a feedback analysis tool to explore feed forward in modularized programmes. Assessment & Evaluation in Higher Education, 40:8, 1079-1094.
Sadler, R. (2010) Beyond feedback: developing student capability in complex appraisal. Assessment & Evaluation in Higher Education, 35:5, 535-550.

Parallel Session 8
Chair: Sally Jordan
Time: 12:10 - 12:40
Date: 27th June 2019
Location: Room 5

103 - An International Comparison on the Use of New Technologies in Teaching Economics
Dimitrios Paparas
Harper Adams University, Newport, United Kingdom. British University Egypt, Cairo, Egypt

Conference Theme
Integrating digital tools and technologies for assessment

Abstract
According to the literature on educational economics, thinking like an economist requires not only analytical and problem-solving skills but also creative skills. Motivated by the current trend of adopting technology in teaching and learning, we trialled “flipping” our classroom with the help of an online tool called EDpuzzle, used to track students’ preparation prior to the class. Meanwhile, a student response system called Kahoot! was adopted to engage students in in-class activities. Furthermore, online quizzes were also used to check students’ understanding after the class. However, our sample of students is very homogeneous and, according to the literature, a comparison is needed of how students from different cultures use these technologies before our empirical results can be generalised.

The aim of this project is to make an international comparison between the students of two institutions (Harper Adams University and the British University in Egypt) in order to investigate the impact of those technologies on students’ attendance, engagement, attention,


learning outcomes and perception of learning in economics classes. In order to ensure the validity and reliability of our results, we will deploy a number of different data collection methods: mixed methods research. Mixed methods research is a methodology for conducting research that involves collecting, analysing and integrating quantitative (e.g. experiments, surveys) and qualitative (e.g. focus groups, interviews) research. This approach is used when such integration provides a better understanding of the research problem than either approach alone.

Key References
Dill, E. (2008). Do clickers improve library instruction? Lock in your answers now. The Journal of Academic Librarianship, 34(6), 527–529.
Elliott, C. (2003) Using a personal response system in economics teaching. International Review of Economics Education, 1(1).
El-Rady, J. (2006). To click or not to click: That’s the question. Innovate Journal of Online Education, 2(4).
Freeman, M., Bell, A., Comerton-Forder, C., Pickering, J., & Blayney, P. (2007). Factors affecting educational innovation with in-class electronic response systems. Australasian Journal of Educational Technology, 23(2), 149–170.
Gibbs, G., & Jenkins, A. (1992). Teaching large classes in higher education: How to maintain quality with reduced resources. London: Kogan Page.
Greer, L., & Heaney, P. J. (2004). Real-time analysis of student comprehension: An assessment of electronic student response technology in an introductory earth science course. Journal of Geoscience Education, 52(4), 345–351.

Parallel Session 8
Chair: Geraldine O'Neil
Time: 12:10 - 12:40
Date: 27th June 2019
Location: Room 7

104 - Student peer feedback and assessment: progress using adaptive comparative judgement
Jill Barber, Steven Ellis
University of Manchester, Manchester, United Kingdom

Conference Theme
Addressing challenges of assessment in mass higher education

Abstract
There is a wealth of literature pointing to the value of student peer assessment and feedback. Nicol et al. (2014) are among those who note that students benefit from both giving and receiving feedback. The language used by fellow students tends to be accessible (Falchikov, 2005). Bloxham and West (2004) indicate that peer feedback can help students to understand the assessment process. Peer assessment is, however, rather difficult to implement. Students are reluctant to pass judgement on their peers and are inclined to award indistinguishable, and usually high, marks regardless of the quality of work.

Adaptive comparative judgment (ACJ) provides an alternative to conventional marking (Pollitt, 2012): assessors (judges) are required merely to make repeated judgements as to which of a pair of scripts (or artworks, videos, etc.) is better. A suitable sorting algorithm provides a rank order from which marks can be calculated (a minimal illustration of this ranking step is sketched below). The simple process of judging between two scripts was predicted to be less daunting for students than the assignment of marks. An initial trial of peer assessment by ACJ in a formative assessment, with 70 students submitting work and acting as judges, was very encouraging, yielding a “reliability” score (a measure of agreement between assessors) of 0.92 (above 0.9 being regarded as excellent).
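
To illustrate the ranking step described above (pairwise judgements in, rank order out), the sketch below fits a simple Bradley-Terry model to "winner versus loser" judgements using a fixed-point iteration. It is an editorial illustration of the general idea, not the algorithm implemented in any particular ACJ package, and the judgement data are invented.

```python
# Illustrative only: a minimal Bradley-Terry-style fit that turns pairwise
# "script X beat script Y" judgements into a rank order, as ACJ systems do.
from collections import Counter

# Each judgement records (winner, loser) for one pairwise comparison.
judgements = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "A"), ("A", "B")]

scripts = sorted({s for pair in judgements for s in pair})
wins = Counter(winner for winner, _ in judgements)
pairs = Counter(frozenset(pair) for pair in judgements)  # comparisons per pair

# Fixed-point (Zermelo) iteration; assumes every script has at least one
# win and one loss, otherwise the maximum-likelihood estimate diverges.
strength = {s: 1.0 for s in scripts}
for _ in range(200):
    for s in scripts:
        denom = sum(
            pairs[frozenset((s, t))] / (strength[s] + strength[t])
            for t in scripts
            if t != s and pairs[frozenset((s, t))]
        )
        if denom:
            strength[s] = wins[s] / denom
    mean_strength = sum(strength.values()) / len(scripts)
    strength = {s: v / mean_strength for s, v in strength.items()}  # normalise

ranking = sorted(scripts, key=strength.get, reverse=True)
print(ranking)  # A ranks first: it wins most of its comparisons
```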


Feedback was, however, quite superficial. We have therefore provided detailed guidelines on the preparation of peer feedback in subsequent exercises. Over a total of seven further assessments (some formative, some low-stakes summative), feedback has been rich and detailed, and in broad agreement with staff feedback. In one unit, students requested additional assignments to judge (Barber, 2018; Demonacos et al., 2019). Reliability scores have, however, been very variable, ranging from 0.4 to above 0.9. Hierarchical marking schemes, in which one aspect of the work is judged and other aspects considered only if there is a tie, are helpful, but do not provide a complete solution to the inconsistency of peer judgements.

In conclusion, software to permit ACJ has been available for only a few years and we have not yet mastered its full potential, especially for peer assessment. We are moving towards the development of robust protocols to yield the known advantages of student peer assessment and feedback.

Key References
Barber, J. (2018) Five go marking an exam question: the use of Adaptive Comparative Judgement to manage subjective bias. Practitioner Research in Higher Education, 11, 94-100.
Bloxham, S., West, A. (2004) Understanding the rules of the game: marking peer assessment as a medium for developing students' conceptions of assessment. Assessment & Evaluation in Higher Education, 29, 721-733.
Demonacos, C., Ellis, S., Barber, J. (2019) Student peer assessment using adaptive comparative judgment: Grading accuracy versus quality of feedback. Submitted for publication.
Falchikov, N. (2005) Improving assessment through student involvement. London: RoutledgeFalmer.
Nicol, D., Thomson, A., Breslin, C. (2014) Rethinking feedback practices in higher education: a peer review perspective. Assessment and Evaluation in Higher Education, 39, 102-122.
Pollitt, A. (2012). The method of Adaptive Comparative Judgement. Assessment in Education: Principles, Policy & Practice, 19, 281-300.

Parallel Session 8
Chair: Naomi Winstone
Time: 12:10 - 12:40
Date: 27th June 2019
Location: Room 9

105 - Implications of social legitimation for changing assessment practices: Learnings from the Australian higher music education sector
Jack Walton
Griffith University, Brisbane, Australia

Conference Theme
Leading change in assessment and feedback at programme and institutional level

Abstract
The purpose of this presentation is to consider how learnings from a doctoral research project about assessment in Australian higher music education might productively inform the wider discussion around changing assessment practices in higher education. In particular, I explore some ways in which framing assessment as a social practice (Broadfoot, 2012; Filer, 2000) can be used to generate practicable insights about the social realities of practising assessment.


While much theoretical progress has been made within the field of educational assessment over the last 30 years, scholars agree that assessment practices in higher education have been comparatively slow to change (Boud et al., 2016). This is particularly true for higher music education, where assessment continues to be heavily influenced by traditional practices, and the emphasis often remains on the assessment of learning, rather than assessment for or as learning (Partti, Westerlund & Lebler, 2015). In the interest of facilitating change, scholars such as Boud et al. (2016) have identified a need for alternative ways of framing assessment as an object of research, where the emphasis is shifted from what assessment ‘should do’ to what those involved in it actually ‘do’. One approach that has been advocated is to foreground assessment as a social practice, where the focus is on the knowledge and experiences of those who actually participate in assessment (see, for example, Boud et al., 2016; Broadfoot, 2012; Shay, 2008). The central premise of the project I discuss here was to unpack how legitimacy could be distributed to ‘ways of practising’ assessment, as well as to the ‘knowledge’ and ‘dispositions’ that underlie such practices. To this end, a sociological framework, Legitimation Code Theory (LCT; see Maton, 2014), was used to explore interview data collected from a diverse group of 25 participants, which included a range of faculty and students involved in assessment at six different institutions of higher music education in Australia. The findings elucidated ways in which the distribution of legitimacy to particular ways of knowing and acting reflected different codes of practice. These codes were seen to proclaim rules for succeeding in assessment, where ‘success’ was not limited to students’ achievement but extended to the ways in which assessment is enacted and otherwise participated in. Using examples from this project, I consider how the distribution of legitimacy to different practices, knowledge, and dispositions by those actors involved in assessment meaningfully frames the challenge of changing practices in higher education.

Key References
Boud, D., Dawson, P., Bearman, M., Bennett, S., Joughin, G., & Molloy, E. (2016). Reframing assessment research: Through a practice perspective. Studies in Higher Education, 43(7), 1107–1118. doi:10.1080/03075079.2016.1202913
Broadfoot, P. (2012). Assessment, schools and society. Abingdon, England: Routledge.
Filer, A. (Ed.). (2000). Assessment: Social practice and social product. London, England: RoutledgeFalmer.
Maton, K. (2014). Knowledge and knowers: Towards a realist sociology of education. Abingdon, England: Routledge.
Partti, H., Westerlund, H., & Lebler, D. (2015). Participatory assessment and the construction of professional identity in folk and popular music programs in Finnish and Australian music universities. International Journal of Music Education, 33(4), 476–490. doi:10.1177/0255761415584299
Shay, S. (2008). Researching assessment as social practice: Implications for research methodology. International Journal of Educational Research, 47(3), 159–164. doi:10.1016/j.ijer.2008.01.003



Parallel Session 8
Chair: Serafina Pastore
Time: 12:10 - 12:40
Date: 27th June 2019
Location: Room 10

106 - Assessment as learning and empowerment: A formative mediation model
María Soledad Ibarra-Sáiz, Gregorio Rodríguez-Gómez
University of Cadiz, Cadiz, Spain

Conference Theme
Assessment for learning: feedback and the meaning and role of authentic assessment

Abstract
In recent years many differing approaches to assessment in higher education have been developed. Among the key ones have been sustainable assessment (Boud & Soler, 2016), assessment as learning (Carless, 2007) and assessment as advice for action (Whitelock, 2010). Building on these approaches, Rodríguez-Gómez and Ibarra-Sáiz (2015) have developed the concept of assessment as learning and empowerment, which encompasses a series of variables that interact to enhance students' strategic learning and their ability to transfer this learning to other contexts. The seven variables that interact in this theoretical model are: the participation of students, feedback, the quality of assessment tasks, self-regulation, empowerment, strategic learning and transference. The key aim of this study is to verify empirically, based on students' perceptions, the interrelations between the set of variables that characterise assessment as learning and empowerment and the nature of the mediation exercised by some of these variables. The specific objectives of the study are: a) to design and validate an instrument that can collate the perceptions of university students about the various elements that characterise assessment as learning and empowerment; and b) to assess the interrelationships of these variables through the verification of the relational hypotheses established between them. Students in the final year of a Business Administration and Management degree were invited to voluntarily complete the ALEC_Q (Assessment as Learning and Empowerment Climate Questionnaire). It was decided from the start to adopt a formative measurement model, so that each of the indicators captures a specific aspect of the domain of the construct. The study involved 464 students from the Faculty of Economic and Business Sciences of a Spanish university; 56.3% were female and 43.7% male. These students took modules in Human Resources Management (HR), Operations Management (OP), Project Management (PM) and Market Research (MR), taught in the fourth year of the Business Administration and Management degree. The Partial Least Squares Structural Equation Modelling (PLS-SEM) technique was used to analyse the data (Hair, Sarstedt, Ringle, & Gudergan, 2018). This micro-presentation presents an evaluation of the formative measurement model, and an evaluation of the structural model of the causal relationships between the set of seven variables. The results confirm the relationship between the variables and how feedback,


empowerment, participation and self-regulation all play a role as mediating variables between assessment tasks and strategic learning and transference.

Key References
Boud, D., & Soler, R. (2016). Sustainable assessment revisited. Assessment & Evaluation in Higher Education, 41(3), 400–413.
Carless, D. (2007). Learning oriented assessment: conceptual bases and practical implications. Innovations in Education and Teaching International, 44(1), 57–66.
Hair, J. F., Sarstedt, M., Ringle, C. M., & Gudergan, S. P. (2018). Advanced issues in partial least squares structural equation modeling. London: Sage.
Rodríguez-Gómez, G., & Ibarra-Sáiz, M. S. (2015). Assessment as learning and empowerment: Towards sustainable learning in higher education. In M. Peris-Ortiz & J. M. Merigó Lindahl (Eds.), Sustainable learning in higher education: Developing competencies for the global marketplace (pp. 1–20). Cham: Springer International Publishing.
Whitelock, D. (2010). Activating assessment for learning: Are we on the way with Web 2.0? In M. J. W. Lee & C. McLoughlin (Eds.), Web 2.0-based e-learning: Applying social informatics for tertiary teaching (pp. 319–342). Hershey, PA: IGI Global.

Parallel Session 8
Chair: Amy Lewis
Time: 12:10 - 12:40
Date: 27th June 2019
Location: Room 11

107 - Reflection, realignment and refraction: Using Bernstein’s evaluative rules to support the markers of the summative assessment of reflective practice
Jenny Gibbons
University of York, York, United Kingdom

Conference Theme
Addressing challenges of assessment in mass higher education

Abstract
Reflective practice is an essential aspect of experiential learning and teaching approaches and, as such, it is increasingly being incorporated into undergraduate and postgraduate programme design (Ashwin, 2015). However, the extent to which reflective writing tasks should be set as summative assessments is contentious (Hobbs, 2007). For many students a reflective writing style is unfamiliar, and reflective writing tasks are perceived as insufficiently ‘academic’ to be a legitimate component of their suite of assessments (Clarkeburn & Kettula, 2012). For the markers of reflective writing assessments, the potential breadth of content can be overwhelming, leading to a lack of clarity about the specifics of what should be awarded credit (Dyment & O'Connell, 2011). Despite this, constructive and supportive conversations between the markers of reflective writing tasks in different disciplines are rare. This practice exchange is an opportunity for the markers of reflective writing tasks to discuss the challenges they face and the strategies they use when awarding credit for reflective practice. The presentation will be based on research conducted at York Law School, where the undergraduate law programme is delivered using a problem-based learning model that incorporates a range of summative reflective writing tasks. The presenter will explain the utility of using Bernstein’s evaluative rules as a theoretical frame in a research project she undertook to analyse qualitative data from a survey of the markers of reflective writing tasks (Bernstein, 2000; Gibbons, 2018). Following an explanation of the findings that ‘reflection’, ‘realignment’ and ‘refraction’ are three of the potential effects of the summative assessment


of reflective practice, she will facilitate a discussion about the utility of these terms and the potential for future collaborative research into this issue.

Key References
Ashwin, P. (Ed.). (2015). Reflective teaching in higher education. London: Bloomsbury.
Bernstein, B. (2000). Pedagogy, symbolic control and identity: Theory, research, critique (Revised ed.). London: Routledge.
Clarkeburn, H., & Kettula, K. (2012). Fairness and using reflective journals in assessment. Teaching in Higher Education, 17(4), 439-452.
Dyment, J., & O'Connell, T. (2011). Assessing the quality of reflection in student journals: A review of the research. Teaching in Higher Education, 16(1), 81-97.
Gibbons, J. (2018). Reflection, realignment and refraction: Bernstein's evaluative rules and the summative assessment of reflective practice in a problem-based learning programme. Teaching in Higher Education, 1-16.
Hobbs, V. (2007). Faking it or hating it: can reflective practice be forced? Reflective Practice: International and Multidisciplinary Perspectives, 8(3), 405-417.

Parallel Session 9
Chair: Pete Boyd
Time: 13:30 - 14:00
Date: 27th June 2019
Location: Piccadilly Suite

108 - Making students and instructors aware of authentic emotional and metacognitive processes underlying assessment: a first year pre-grads experience
Ana Remesal, Sareh Attareivani, Zahra Parham, Abolfazl Khanbeiki
Universidad de Barcelona, Barcelona, Spain

Conference Theme
Assessment for learning: feedback and the meaning and role of authentic assessment

Abstract
Assessment of and for learning is an emotionally loaded event for students, especially when entering university, as the new context presents a new assessment culture. Students need to make thoughtful, strategic decisions second by second in order to present their best self-image and their best performance in assessment situations. In this research, we designed a specific research strategy to bring emotional and metacognitive processes during classroom assessment to the fore and make them accessible to inquiry. Prior research on the impact of emotions on assessment has mainly relied on post-hoc self-report questionnaires, finding negative emotions in anticipation and both negative and positive emotions in retrospective reflection. In contrast with this prior work, we present results from 158 first-year teacher-education students (covering kindergarten through to primary and secondary education levels) who participated in this new assessment proposal. The students had to decide in situ on a grading strategy resulting from an adaptation of CBM (confidence-based marking) and criterion-based grading, which led them to evaluate their own self-competence and declare their feelings about it. We collected data in three consecutive steps. At the beginning of the course, the students responded to a questionnaire on their conceptions of assessment and individual learning approaches (motivation and learning strategies). By the end of the term (trimester), the students sat an exam with open-ended complex questions aimed at evaluating their developed competence in the curricular area. After the respective instructor had marked the exams and the students received their results, a sub-sample of 12 students entered the third and last data collection phase, consisting of an in-depth interview, which was later transcribed and content-analysed.


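For readers unfamiliar with confidence-based marking, the sketch below illustrates one widely cited CBM scoring rule (Gardner-Medwin's certainty-based marking, referenced below); the adaptation used in this study may differ, so treat the exact numbers as illustrative.

```python
# One widely cited certainty-based marking rule (after Gardner-Medwin, 2008).
# The study adapted CBM, so its exact scheme may differ from this table.
CBM_MARKS = {
    # confidence level: (mark if correct, mark if wrong)
    1: (1, 0),    # low confidence: small reward, no penalty
    2: (2, -2),   # medium confidence
    3: (3, -6),   # high confidence: large reward, large penalty
}

def cbm_score(correct: bool, confidence: int) -> int:
    """Mark a single answer under the scheme above."""
    reward, penalty = CBM_MARKS[confidence]
    return reward if correct else penalty

# A correct answer at high confidence earns 3 marks; the same answer,
# wrong, costs 6 marks, so it only pays to claim confidence you have.
print(cbm_score(True, 3), cbm_score(False, 3))   # -> 3 -6
```

The point of such a rule is that the expected score is maximised only by reporting one's true confidence, which is what obliges students to evaluate their own competence honestly.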
The results are deeply informative with respect to the great variety of emotional processes underlying assessment. In addition, students' explained decisions reveal a wide variety of cognitive processes and high metacognitive awareness. Both kinds of reaction, affective and cognitive, appeared to be closely linked to students' conceptions of assessment and learning approaches. This new open window onto the individual cognitive and emotional processes linked to assessment provides a whole new perspective on how feedback can best be adjusted to students' learning processes.

Key References
Barr, D. A., & Burke, J. R. (2013). Using confidence-based marking in a laboratory setting: A tool for student self-assessment and learning. Journal of Chiropractic Education, 27(1), 21-26.
Brown, G., Gebril, A., Michaelides, M., & Remesal, A. (2018). Assessment as an emotional practice: Emotional challenges faced by L2 teachers within assessment. In J. D. Martínez Agudo (Ed.), Emotions in second language teaching: Theory, research, and teacher education. Springer.
Gardner-Medwin, A. (2008). Certainty-based marking: rewarding good judgment of what is or is not reliable. In (Proceedings) Innovation 2008: The Real and the Ideal. London.
Kember, D., Biggs, J., & Leung, D. Y. (2004). Examining the multidimensionality of approaches to learning through the development of a revised version of the Learning Process Questionnaire. British Journal of Educational Psychology, 74(2), 261-279.
Morris, J. D. (1995). Observations: SAM: the Self-Assessment Manikin; an efficient cross-cultural measurement of emotional response. Journal of Advertising Research, 35(6), 63-68.
Panadero, E., Klug, J., & Järvelä, S. (2016). Third wave of measurement in the self-regulated learning field: when measurement and intervention come hand in hand. Scandinavian Journal of Educational Research, 60(6), 723-735.

Parallel Session 9
Chair: Rita Headington
Time: 13:30 - 14:00
Date: 27th June 2019
Location: Room 2

109 - Investigating teachers’ use of exemplars: Difficulties in managing effective dialogues
Philip Smyth, David Carless
The University of Hong Kong, Hong Kong, Hong Kong

Conference Theme
Addressing challenges of assessment in mass higher education

Abstract
Exemplars are sample texts chosen to illustrate levels of quality or competence (Sadler, 2005). They are typically used to help develop student understanding of criteria and standards of assessment (Hendry, Bromberger, & Armstrong, 2011) and have been shown to improve the quality of subsequent work (Rust, Price, & O'Donovan, 2003; Wimshurst & Manning, 2013). A common issue cited in the literature, however, is that students often see exemplars as models to follow, and teachers fear that students might copy unproductively (Handley & Williams, 2011). Despite the benefits of exemplars, relatively little is known about how teachers share and use exemplars in their teaching. The aim of this research was to explore how teachers managed the use of exemplars with their students. The study adopted a constructivist grounded theory methodology to theorise how teachers manage the use of exemplars. The study involved observations and pre/post interviews with twelve


participants, all of whom were teaching English for Academic Purposes (EAP) at a university in Hong Kong. The findings revealed a wide range of practices with regard to exemplar sharing. There were noticeable differences in exemplar decisions, including the source of the exemplars, whether or not to modify them, and the number used. There were also differences in the extent and manner in which assessment criteria were explicitly highlighted before or after students analysed exemplars. Iterative cycles of data analysis enabled the construction of a typology of three exemplar-sharing approaches: a structured approach, an exploratory approach and a dialogic approach. Irrespective of approach, all the participants found dialogues difficult to manage, the most dominant practice in the findings being dialogue as closed questioning. The implications for practice revolve around the issue of managing dialogue about exemplars. In order to reduce teachers' concerns about copying, a dialogic approach is recommended (Carless & Chan, 2017). Yet tensions exist between the effective utilisation of pre-set assessment criteria and the promotion of co-constructed criteria with students, and around the choice and sequencing of exemplars. Dialogue is likely to be enhanced if pre-set criteria are withheld until after students have co-constructed criteria with their teachers, and if students have already written an outline or draft of an assignment before they analyse and discuss exemplars.

Key References
Carless, D., & Chan, K. K. H. (2017). Managing dialogic use of exemplars. Assessment & Evaluation in Higher Education, 42(6), 930-941.
Handley, K., & Williams, L. (2011). From copying to learning: Using exemplars to engage students with assessment criteria and feedback. Assessment & Evaluation in Higher Education, 36(1), 95-108.
Hendry, G., Bromberger, N., & Armstrong, S. (2011). Constructive guidance and feedback for learning: The usefulness of exemplars, marking sheets and different types of feedback in a first year law subject. Assessment & Evaluation in Higher Education, 36(1), 1-11.
Rust, C., Price, M., & O'Donovan, B. (2003). Improving students' learning by developing their understanding of assessment criteria and processes. Assessment & Evaluation in Higher Education, 28(2), 147-164.
Sadler, D. R. (2005). Interpretations of criteria-based assessment and grading in higher education. Assessment & Evaluation in Higher Education, 30(2), 175-194.
Wimshurst, K., & Manning, M. (2013). Feed-forward assessment, exemplars and peer marking: Evidence of efficacy. Assessment & Evaluation in Higher Education, 38(4), 451-465.

Parallel Session 9
Chair: Mira Vogel
Time: 13:30 - 14:00
Date: 27th June 2019
Location: Room 3

110 - Evaluating students' self-assessment in large classes
Johanna Rämö1, Jokke Häsä1, Viivi Virtanen1,2
1 University of Helsinki, Helsinki, Finland. 2 Aalto University, Espoo, Finland

Conference Theme
Addressing challenges of assessment in mass higher education

Abstract
This study is part of an ongoing larger project concerning student self-assessment skills in university courses. We have developed a model called DISA (Digital Self-assessment) that


enables large cohorts of students to assess their own learning outcomes and to give their own course grades with the help of an automatic verification system. This paper explores the question of accuracy, namely, whether the self-assessed grades correspond to the students' actual skills, and how well the automatic system can pick up issues in the self-assessment. The ability to judge the quality of one's own work is one of the core skills that should be developed during university studies. Self-assessment has been viewed as a valuable assessment process through which students can learn to understand the criteria used in assessment and, further, to regulate their own learning and acquire skills for lifelong learning (Falchikov & Boud, 1989; Kearney, Perkins, & Kennedy-Clark, 2016). In the DISA model, students evaluate the quality of their learning outcomes frequently, receive feedback on their performance, and finally decide their own grades according to particular criteria. The intended learning outcomes are made transparent through a detailed rubric. The emphasis is on developing students' capability to make evaluative judgements (Ajjawi et al., 2018) and building their metacognitive skills (Mok et al., 2006). We fill a gap in the research by showing how the problems posed by the large-class setting were resolved using digital, automatic verification and feedback. The research questions are:

1. How do the students' evaluations compare with evaluations performed by the automatic verification system?
2. How does an expert judge a student's acquired skills in cases where the automatic verification disagrees with the student's self-assessment?

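To make the first question concrete, here is a small hypothetical sketch of the kind of comparison involved: pairing each student's self-assessed grade with the automatically verified grade and flagging disagreements for expert review. The function name, grade scale and tolerance are invented for illustration; the DISA system's internals are not described here.

```python
def flag_disagreements(self_grades, verified_grades, tolerance=0):
    """Return the indices of students whose self-assessed grade differs
    from the automatically verified grade by more than `tolerance`;
    these are the cases an expert might inspect more closely."""
    return [
        i for i, (s, v) in enumerate(zip(self_grades, verified_grades))
        if abs(s - v) > tolerance
    ]

# e.g. on a 0-5 grade scale: the second student self-assessed two
# grades above what the automatic verification suggested.
print(flag_disagreements([3, 5, 4], [3, 3, 4]))   # -> [1]
```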
The participants were 158 students taking an undergraduate mathematics course following the DISA model. We compared the grades students gave themselves in the final self-assessment with the results of the automatic verification of that self-assessment. For Research Question 2, two students were chosen for closer inspection. The students' self-assessments agreed well with the automatic verification. This is not surprising, as previous studies have shown that explicit criteria support self-assessment, as do frequent practice and feedback (Andrade & Du, 2007; Kearney et al., 2016). Based on an expert's evaluation of the skills of two students, we conclude that although for the most part the model works as intended, there are some cases where neither the self-assessment nor the computer verification seems to be accurate.

Key References
Ajjawi, R., Tai, J., Dawson, P., & Boud, D. (2018). Conceptualising evaluative judgement for sustainable assessment in higher education. In Developing evaluative judgement in higher education (pp. 23–33). Routledge.
Andrade, H., & Du, Y. (2007). Student responses to criteria-referenced self-assessment. Assessment & Evaluation in Higher Education, 32(2), 159–181.
Falchikov, N., & Boud, D. (1989). Student self-assessment in higher education: A meta-analysis. Review of Educational Research, 59(4), 395–430.
Kearney, S., Perkins, T., & Kennedy-Clark, S. (2016). Using self- and peer-assessment for summative purposes: analysing the relative validity of the ASSL (Authentic Assessment for Sustainable Learning) model. Assessment & Evaluation in Higher Education, 41(6), 843–861.



Mok, M. M. C., Lung, C. L., Cheng, D. P. W., Cheung, R. H. P., & Ng, M. L. (2006). Self-assessment in higher education: experience in using a metacognitive approach in five case studies. Assessment & Evaluation in Higher Education, 31(4), 415–433.

Parallel Session 9
Chair: John Dermo
Time: 13:30 - 14:00
Date: 27th June 2019
Location: Room 4

111 - Developing the self-regulation capacity of learners in a competence-based Masters’ program
Nati Cabrera, Maite Fernández-Ferrer, Montse Vall-llovera
Universitat Oberta de Catalunya, Barcelona, Spain

Conference Theme
Addressing challenges of assessment in mass higher education

Abstract
This paper presents the results of a reflective assessment activity on students' competence development, an experience carried out in the competence-based Master's Degree in Quality Management and Evaluation in Higher Education at the Open University of Catalonia. This transversal activity asks students to reflect on whether, and to what extent, their learning in each subject of the programme has contributed to developing the expected competences. The activity has two goals: (1) to improve the accuracy of the assessment of students' competences by providing teachers with valuable information from the perspective of the students themselves; and (2) to develop students' ability to reflect on their own learning. The self-regulation activity, "My competence development", is carried out throughout the whole semester and across all the subjects of the programme. Through it, students reflect on the competence level reached with each activity and identify the evidence (resolution of cases, problems, projects, publications, contributions in the classroom or in a debate, among others) which, in their opinion, proves that they have developed the competences. Throughout this self-regulated learning process, students have teachers' support and feedback, especially in the final period of preparing the reflection and assessment document. Moreover, at the end of the semester, the teacher assesses this activity using an evaluation rubric covering the three key elements being assessed: a) the student's reflection, b) the acquisition and development of competences, and c) the evidence. This reflective learning and assessment activity carries a specific weight in the overall qualification for the subject: 10% of the final grade. To evaluate the experience, two questionnaires were developed, one for students and another for teachers, in order to collect qualitative and quantitative information on the proposed objectives. These questionnaires were administered online during February 2019. The analysis of the data and the results obtained are framed in an exploratory study aiming to identify the perceptions that teachers and students have of self-regulated learning processes with teachers' support and feedback, and to describe possible divergences and convergences in perception between the two groups surveyed. This scenario offers two distinct advantages: (a) encouraging critical, self-reflective lifelong learning; and (b) gathering evidence of broad competences that may enhance future employment prospects.



Key References
Baartman, L. K. J., Bastiaens, T. J., Kirschner, P. A., & van der Vleuten, C. P. M. (2007). Evaluating assessment quality in competence-based education: A qualitative comparison of two frameworks. Educational Research Review, 2, 114-129. DOI: 10.1016/j.edurev.2007.06.001
Boekaerts, M., Pintrich, P. R., & Zeidner, M. (Eds.). (2000). Handbook of self-regulation. San Diego: Academic Press.
Kim, Y., & Yazdian, L. (2014). Portfolio assessment and quality teaching. Theory into Practice, 53(3), 220-227.
Pintrich, P. R. (2000). Multiple goals, multiple pathways: The role of goal orientation on learning and achievement. Journal of Educational Psychology, 92(3), 544-555.
Viechnicki, K., Barbour, N., Shaklee, B., Rohrer, J., & Ambrose, R. (1993). The impact of portfolio assessment on teacher classroom activities. Journal of Teacher Education, 44(5), 371-377.
Zimmerman, B. J. (2001). Theories of self-regulated learning and academic achievement: An overview and analysis. In B. J. Zimmerman & D. H. Schunk (Eds.), Self-regulated learning and academic achievement: Theoretical perspectives (pp. 1-37). Mahwah, NJ: Erlbaum.

Parallel Session 9
Chair: Amanda Chapman
Time: 13:30 - 14:00
Date: 27th June 2019
Location: Room 5

112 - Theorising Alternative Pathways for Feedback in Assessment for Learning: The Triple-F Approach
George Kehdinga
Mangosuthu University of Technology, Durban, South Africa

Conference Theme
Assessment for learning: feedback and the meaning and role of authentic assessment

Abstract
Assessment for learning is critical in determining the kind of educational encounters students have and how these encounters shape their learning. Feedback is a critical part of assessment for learning (Angelo & Cross, 2012), and how feedback is delivered determines whether or not learning actually takes place. Price, Handley, Millar, and O'Donovan (2010) argue that the five broad roles attributed to feedback play a critical role in determining how learning actually takes place, both now and in the future. Since assessment is an inferential process (Bennett, 2011), feedback needs to be rigorous if the inference is to drive learning as intended. McCarthy (2017) supports this when she argues that feedback can only be useful when it is sufficient in frequency and detail: the more rigorous feedback is, the more effective it will be in driving learning. Stiggins (2002) takes this further when he argues that assessment practices often assume all students are the same and provide generic feedback that fails to take into consideration the individuality of students. This paper reports on the findings of a qualitative case study of a module in a South African university. The Triple-F approach emerged from this study as a theoretical frame detailing a three-tier approach to feedback. In the first tier, feedback is provided by students on their peers' work. In the second tier, feedback is provided by another student on the feedback provided by their peers, and in the third tier the lecturer provides feedback both on the feedback provided by other students and on the students' work itself. Black and Wiliam (1998) argue that improvements in classroom assessment practices will contribute to the improvement of learning. The triple feedback approach improved classroom assessment and gave students the opportunity of


providing the kind of feedback they want from the lecturer, having another student judge the depth and criticality of that feedback before the lecturer provides an alternative feedback, judgement or assessment of the feedback provided. The level of participation in the teaching and learning process improved drastically, and student performance also improved, since every student was keen on understanding the issues discussed so as to provide critical feedback or judge the feedback provided by their peers. The paper concludes that these three levels of feedback gave students the opportunity to understand and drive their learning from alternative perspectives, and this greatly impacted their performance.

Key References
Angelo, T. A., & Cross, K. P. (2012). Classroom assessment techniques. Jossey Bass Wiley.
Bennett, R. E. (2011). Formative assessment: A critical review. Assessment in Education: Principles, Policy & Practice, 18(1), 5-25.
Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education, 5(1), 7–74.
McCarthy, J. (2017). Enhancing feedback in higher education: Students’ attitudes towards online and in-class formative assessment feedback models. Active Learning in Higher Education, 18(2), 127-141.
Price, M., Handley, K., Millar, J., & O'Donovan, B. (2010). Feedback: all that effort, but what is the effect? Assessment & Evaluation in Higher Education, 35(3), 277-289.
Stiggins, R. J. (2002). Assessment crisis: The absence of assessment for learning. Phi Delta Kappan, 83(10), 758-765.

Parallel Session 9
Chair: Alexandra Mallinson
Time: 13:30 - 14:00
Date: 27th June 2019
Location: Room 7

113 - Academic Integrity through e-authentication and authorship verification for e-assessment: impact study
Dr. Alexandra Okada, Prof. Dr. Denise Whitelock
The Open University, Milton Keynes, United Kingdom

Conference Theme
Developing academic integrity and academic literacies through assessment

Abstract
This work provides the final findings of a large-scale study of e-authentication and authorship verification for e-assessment. Our aims were to identify stakeholders' views of the TeSLA platform and to provide a set of recommendations. The EU-funded TeSLA project, Adaptive Trust-based e-Assessment System for Learning (http://tesla-project.eu), offers a suite of instruments for future use, such as face recognition, voice recognition, keystroke dynamics, forensic analysis and plagiarism detection. This study draws primarily on pre- and post-intervention questionnaires completed by a total of 4,058 students, including 330 SEND students, and 54 educators from the seven TeSLA pilot universities in six European countries. In addition, seven pilot coordinators, seven technical professionals and seven institutional leaders from the partner universities provided extra data through three questionnaires. The overall experience with the TeSLA instruments was positive for more than 50% of the students from all partner universities, with more than 70% considering the key advantages of e-assessment with e-authentication to be: “to ensure that my examination results are


trusted” and “to prove that my essay is my own original work”. Two recommendations focused on clarifying academic malpractice in relation to plagiarism and cheating, to foster academic integrity, and on promoting discussion about data security and privacy, to increase students' willingness to share personal data. Various teaching staff also agreed that they were satisfied with the TeSLA experience. Pedagogical guidance was useful in enabling them to help students during the process of e-authentication and authorship verification. The course coordinators identified benefits of using e-authentication, such as new types of assessment and the opportunity to increase trust in e-assessment. Their recommendation was to provide access to the results of e-authentication and authorship verification. The expectations of institutional leaders and their technical teams were to obtain a user-friendly system, a usable product, information about how the tools work, and guidelines for interpreting results and detecting cheating. Their recommendations were: provide good communication with end-users, documentation, sufficient capacity, a cloud solution, and technical support. In summary, this large-scale cross-national study provided evidence of impact underpinned by responsible research and innovation. The positive effects of TeSLA were to promote new kinds of assessment with alternatives for students, including SEND students. Students consider that a degree with e-authenticated exams has value and is respected by external stakeholders such as employers. Teaching staff can produce a wider variety of authentic assessments. Policy makers will be able to promote academic integrity and quality assurance.

Key References
Okada, A., Noguera, I., Aleksieva, L., Rozeva, A., Kocdar, S., Brouns, F., Whitelock, D., & Guerrero-Roldán, A.-E. (2019). Pedagogical approaches for e-assessment with authentication and authorship verification in higher education. British Journal of Educational Technology (in press).
Edwards, C., Whitelock, D., Brouns, F., Rodríguez, M. E., Okada, A., Baneres, D., & Holmes, W. (2019). An embedded approach to plagiarism detection using the TeSLA e-authentication system. In TEA 2018 Technology Enhanced Assessment Conference, 10-11 Dec 2018, Amsterdam, the Netherlands.
Okada, A., Whitelock, D., Holmes, W., & Edwards, C. (2018). e-Authentication for online assessment: a mixed-method study. British Journal of Educational Technology (early access).
Edwards, C., Whitelock, D., Okada, A., & Holmes, W. (2018). Trust in online authentication tools for online assessment in both formal and informal contexts. In ICERI2018 Proceedings, 12-14 Nov 2018, Seville, Spain.
Okada, A., Whitelock, D., & Holmes, W. (2017). Students' views on trust-based e-assessment system for online and blended environments. In The Online, Open and Flexible Higher Education Conference, 25-27 Oct 2017, Open University, Milton Keynes.
Okada, A., Whitelock, D., Holmes, W., & Edwards, C. (2017). Student acceptance of online assessment with e-authentication in the UK. In The 2017 International Technology Enhanced Assessment Conference (TEA 2017), 5-6 Oct 2017, Barcelona, Spain.



Parallel Session 9
Chair: Justin Rami
Time: 13:30 - 14:00
Date: 27th June 2019
Location: Room 9

114 - PASSES at Durham: Perceptions of Assessment from Students and Staff in Earth Sciences at Durham
Matthew Funnell, Christopher Saville
Durham University, Durham, United Kingdom

Conference Theme
Leading change in assessment and feedback at programme and institutional level

Abstract
Feedback is a critical part of the learning process (Boud & Molloy, 2012). Evaluating previous work is necessary for students to use their past performance to inform and improve their achievement in future tasks (Nicol & Macfarlane-Dick, 2006). The act of producing feedback is also an effective way for an instructor to gauge how well students have understood the content presented to them. Despite feedback's importance in the learning process, the quality and effectiveness of the feedback provided as part of the teaching in Durham's Earth Sciences department is observed and shared far less than other aspects of teaching, such as classroom activity and student perceptions of modules. PASSES (Perceptions of Assessment from Students and Staff in Earth Sciences) aims to develop a comprehensive understanding of the feedback practices within the Durham University Department of Earth Sciences through student interviews (e.g. Orsmond et al., 2005) and questionnaires (e.g. Bohnacker-Bruce, 2013; Mulliner & Tucker, 2017). This information can then be used to ensure teaching staff know of the range of assessment and feedback activities undertaken by the department, and can provide a first step towards further improving these. Current results indicate that students find formal written feedback more effective than other forms, including verbal feedback, which is at odds both with departmental staff perceptions of the most effective form of feedback for students and with results from comparable studies (Bohnacker-Bruce, 2011; 2013; Mulliner & Tucker, 2017). Interview evidence suggests that this preference for written feedback stems from students' perceptions of their ability to integrate it more readily into their pre-identified learning mechanisms and to revisit it repeatedly (as in Orsmond et al., 2005) when preparing for future assessments. Even when provided with verbal feedback, students reportedly convert it into written form to enable further revision. This research has highlighted discrepancies between the perceptions of students and staff on feedback, most notably the apparent importance of written feedback in students' learning. Reducing these discrepancies across the taught programme is a way of achieving better alignment between student satisfaction and effective learning of subject material. This talk will describe the research conducted throughout this project and discuss the findings of PASSES. Further, we will detail how we have integrated this research with departmental feedback strategy through engagement with all stakeholders. We particularly welcome dialogue regarding comparable studies and insight on better aligning student and staff perceptions to help evolve and apply this project's findings.



Key References
Bohnacker-Bruce, S. (2011). What is effective feedback: The academic perspective. Capture, 3(1), 7-14.
Bohnacker-Bruce, S. (2013). Effective feedback: The student perspective. Capture, 4(1), 25-36.
Boud, D., & Molloy, E. (2012). Feedback in higher and professional education: understanding it and doing it well. London: Routledge.
Mulliner, E., & Tucker, M. (2017). Feedback on feedback practice: perceptions of students and academics. Assessment & Evaluation in Higher Education, 42(2), 266-288.
Nicol, D. J., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: a model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199-218.
Orsmond, P., Merry, S., & Reiling, K. (2005). Biology students' utilization of tutors' formative feedback: a qualitative interview study. Assessment & Evaluation in Higher Education, 30(4), 369-386. DOI: 10.1080/02602930500099177

Parallel Session 9
Chair: Jenny Gibbons
Time: 13:30 - 14:00
Date: 27th June 2019
Location: Room 10

115 - 10 Things No Student Wants to Hear from their Instructor
Kimberly Ondo
Purdue University Global, Chicago, USA

Conference Theme
Assessment for learning: feedback and the meaning and role of authentic assessment

Abstract
At the core of teaching is a passion to see learners succeed. However, instructors do not always see the impact that their comments have on student success (Hawk & Lyons, 2008). Instructors are often focused on their justification for communicating their message rather than on understanding their students' perception of receiving the message (Planar & Moya, 2016). As a result, many students are disappointed and frustrated with the feedback they receive from their instructors (Alfehaid, Qotineh, Alsuhebany, Alharbi, & Almodaimegh, 2018). Students who have a negative perception of instructor feedback display less motivation and determination than students who have a positive perception of instructor feedback (Hawk & Lyons, 2008; Kauffman, 2015). Conversely, positive perception yields an increase in classroom participation, motivation, satisfaction, and a willingness to implement recommended revisions into their work (Hawk & Lyons, 2008; Kauffman, 2015; Patchan, Schunn, & Correnti, 2016). Continuous improvement is a sought-after characteristic that instructors want for their students because it demonstrates progress in bridging the gap between the student's achieved competency and the professor's objective (Planar & Moya, 2016). This topic is important to discuss because student retention and graduation rates are commonly used as a measurement of student success (Millea, Wills, Elder, & Molina, 2018). Instructors can improve these rates by (1) identifying the impact that their comments have on student success, (2) differentiating between effective and ineffective communication, and (3) transforming existing feedback into positive opportunities that promote student success. Central to these improvements is the ability to alter student perception.



If you have ever wondered how instructor comments impact student success, what students think of comments made by instructors, or how to improve your communication skills with students, then pay attention. Online graduate students disclosed their deepest fears, annoyances, and challenges posed by comments received from their instructors. Dr. Kimberly Ondo shares her top 10 list of comments that no student wants to hear from their instructor. Comments can act as demotivators, skewing learner perception and thereby potentially impacting their individual success. To help improve student motivation, success, and even retention, Dr. Ondo offers practices that can benefit all faculty, especially those who are faced with struggling or at-risk students. Learn how to combat ineffective comments and replace them with practical and motivational messages that may improve student success.

Key References
Alfehaid, L. S., Qotineh, A., Alsuhebany, N., Alharbi, S., & Almodaimegh, H. (2018). The perceptions and attitudes of undergraduate healthcare sciences students of feedback: A qualitative study. Health Professions Education, 4, 186–197. https://doi.org/10.1016/j.hpe.2018.03.002
Hawk, T. F., & Lyons, P. (2008). Learner's perception of care and respect offered by instructors. Industrial & Commercial Training, 40(4), 197-205. doi:10.1108/00197850810876244
Kauffman, H. (2015). A review of predictive factors of student success in and satisfaction with online learning. Research in Learning Technology, 23. https://doi.org/10.3402/rlt.v23.26507
Millea, M., Wills, R., Elder, A., & Molina, D. (2018). What matters in college student success? Determinants of college retention and graduation rates. Education, 138(4), 309–322.
Patchan, M. M., Schunn, C. D., & Correnti, R. J. (2016). The nature of feedback: How peer feedback features affect students' implementation rate and quality of revisions. Journal of Educational Psychology, 108(8), 1098–1120. https://doi.org/10.1037/edu0000103
Planar, D., & Moya, S. (2016). The effectiveness of instructor personalized and formative feedback provided by instructor in an online setting: Some unresolved issues. Electronic Journal of E-Learning, 14(3), 196–203.

Parallel Session 9
Chair: Jess Evans
Time: 13:30 - 14:00
Date: 27th June 2019
Location: Room 11

116 - What underlies students’ relative difficulties in recalling future-oriented feedback?
Robert Nash1, Naomi Winstone2, Samantha Gregory1
1 Aston University, Birmingham, United Kingdom. 2 University of Surrey, Guildford, United Kingdom

Conference Theme
Addressing challenges of assessment in mass higher education

Abstract
In the burgeoning research literature on feedback, attention has increasingly shifted toward understanding the ways in which students engage with, and act upon, the feedback information they receive (Price, Handley, Millar, & O'Donovan, 2010; Winstone, Nash, Parker, & Rowntree, 2017). Whereas much of this research aims to explore students' and educators' subjective beliefs about this engagement (e.g., Dawson et al., 2019), there is a


need for research that more directly assesses the cognitive processes and behaviour involved in receiving feedback. Our own research in recent years has sought to address one aspect of this need by using an experimental method to investigate students' memory for the feedback they receive (Nash, Winstone, Gregory, & Papps, 2018). In a series of these experiments, participants read written feedback that ostensibly related either to essays they had written, or to essays another person had written. The feedback contained critical comments that, through small differences in wording, were cast either in the past tense (i.e., evaluative feedback, describing aspects of the work they had produced) or in the future tense (i.e., directive feedback, describing aspects they could do differently next time). After a short delay, participants were given a surprise memory test, in which they were asked to reproduce as much of the feedback as possible. Even though feedback is typically construed as a future-oriented process, in these experiments we consistently find that students are in fact better at recalling evaluative, past-oriented feedback than directive, future-oriented feedback. In this presentation I will describe a series of new, similar studies building upon our earlier work, with the overall goal of explaining why this 'evaluative recall bias' occurs. Our findings consistently point to the likelihood that students' understandings of the nature and purpose of feedback can shape the cognitive strategies they use when attempting to recall it. These memory biases, consequently, lead to the relative neglect of feedback information that is delivered in a future-oriented manner. After discussing these findings I will outline the results of a field study, which took our research question about memory for feedback outside of the lab and into real classrooms. Our findings have important implications for education practice. In particular, they point to basic cognitive processes that make feedback concerning future improvement difficult for students to process, and they underscore the pivotal role of feedback literacy in determining whether students benefit from receiving this feedback (Carless & Boud, 2018).

Key References
Carless, D., & Boud, D. (2018). The development of student feedback literacy: enabling uptake of feedback. Assessment & Evaluation in Higher Education, 43, 1315-1325.
Dawson, P., Henderson, M., Mahoney, P., Phillips, M., Ryan, T., Boud, D., & Molloy, E. (2019). What makes for effective feedback: staff and student perspectives. Assessment & Evaluation in Higher Education, 44, 25-36.
Nash, R. A., Winstone, N. E., Gregory, S. E., & Papps, E. (2018). A memory advantage for past-oriented over future-oriented performance feedback. Journal of Experimental Psychology: Learning, Memory, and Cognition, 44, 1864-1879.
Price, M., Handley, K., Millar, J., & O'Donovan, B. (2010). Feedback: all that effort, but what is the effect? Assessment & Evaluation in Higher Education, 35, 277-289.
Winstone, N. E., Nash, R. A., Parker, M., & Rowntree, J. (2017). Supporting learners' agentic engagement with feedback: A systematic review and a taxonomy of recipience processes. Educational Psychologist, 52, 17-37.



In memory of Liz McDowell (1954-2018)
Former member of the AHE Executive Committee

Liz McDowell was one of the first proponents of ‘Assessment for Learning’ in UK higher education and was in the vanguard of thinking about how to operationalise the approach into workable and authentic practices that enhance student learning. Liz was one of those involved in the ‘Enterprise in Higher Education’ initiative in the early nineties, which had a profound impact on the way universities regard positive assessment as fostering students’ capabilities, in particular through innovative assessment approaches including oral, self-, peer- and group assessment. The impact of her work is apparent in much of the global literature on assessment in the twentieth and early twenty-first centuries; for example, she was a key member of the team who sustained the Higher Education Academy project aiming to transform UK assessment practice. At her funeral, many commented on the significant contribution Liz made to the scholarship of assessment and feedback, including:

‘Liz made such a phenomenal impact on the assessment community and was such a force for change’ (Karen Clegg, University of York).
‘Through her work so many students will have benefited’ (Julie Hall, DVC, Southampton Solent University).
‘The assessment community, as well as me personally, will sorely miss Liz's important contributions and sage advice’ (Kay Sambell, Edinburgh Napier University).
‘Liz’s work on assessment has been influential to so many in academic development’ (Kate Exley, Nottingham University).
‘She was someone who was deeply committed to the student experience of assessment: lots of insights from her approach and research. Above all, though, she was a lovely person, genuine and sincere’ (Lin Norton, Liverpool John Moores University).

Liz’s work has impacted on hundreds of academics and thousands of students in numerous nations: her memory lives on through the legacy of her pioneering and thought-provoking work.

Sally Brown
Colleague to Liz since the mid-1980s and her friend to the end



Dates for Your Calendar

Transforming Assessment Webinar
Wednesday 17 July 2019
Making technology enhancement effective: what works?
Panel Chair - Prof Sally Jordan (Open University, UK)

This 1 hour session will feature selected speakers from the Assessment in Higher Education (AHE) conference, 26 & 27 June 2019, Manchester, UK. The session theme is ‘Making technology enhancement effective: what works?’ The webinar will take the form of a panel-style review of some of the key messages from the AHE 2019 conference. Each panel member will contribute to the discussion with a short overview of their presentation from the conference, followed by questions and discussion between the panel and the webinar participants related to the theme.

Panel Presenters:
1. Mira Vogel (King's College London) on "Students and assessors in conversation about authentic multimodal assessment"
2. Maria Rosaria Marsico (University of Exeter) on "Online tools to enhance students experience: assessment"

Sessions are hosted by Professor Geoffrey Crisp, DVCA Education, University of Canberra and Dr Mathew Hillier, University of New South Wales, Australia. Please note all sessions are recorded and made public after the event.

When: 17 July 2019, 07:00 AM through 08:00 AM Coordinated Universal Time (UTC / GMT). See your equivalent local time.

To register for this session please login (or create an account) online at http://transformingassessment.com/user/login then click the ‘register now’ button. This is a joint webinar organised with the Assessment in Higher Education conference secretariat.


International AHE Conference, 2 July 2020, Manchester, UK

Thursday 2 July 2020 in Manchester is our international Assessment in Higher Education one-day conference. This research and academic development event provides a forum for critical debate of research and innovation focused on assessment and feedback practice and policy. We have a provocative, forward-looking keynote from the world-leading Professor David Boud. A call for papers will be sent out across the AHE network in October 2019 to generate a choice of cutting-edge presentations by leading researchers and academic developers.

Keynote Speaker
Professor David Boud
Assessment for future needs: Emerging directions for assessment change

Over the past fifty years remarkable changes have occurred, not just in assessment practice, but in the ways in which we conceptualise assessment. Some of these shifts include: from a focus on simple performance in final examinations to a diversity of approaches in different modes at different times; and from assessment as comparing students to judgement of outcomes against standards. Most importantly, there has been a conceptual shift from the single purpose of certifying students to multiple purposes, including aiding learning and building the capacity of students to make their own judgements, and from judging students with respect to each other to judging them against standards and criteria. What is now commonplace in assessment was, if conceived of at all, once strange and radical. What will scholars in the future notice about assessment today? What will they regard as quaint and old-fashioned, and what will they see as having provided the foundations for more effective practice? While some things are unlikely to change (universities will still have certifying functions, there will be forms of external accountability, and assessment will still contribute, for good or bad, to student learning), there is far more scope for flexibility and change than we normally imagine. The presentation will consider current practices that, looking back, will be recognised as strange or counterproductive, and consider what will replace them. It will include some or all of the following:



• Certifying student performance that has been superseded by later performance in the same unit or course.
• Recording student performance by a grade/mark for each subject/course unit, rather than in terms of the learning outcomes that have been met.
• Over-emphasising a limited range of inauthentic assessment practices, and thus learning outcomes (e.g. tests, exams and essays).
• Believing that the form of assessment is more important than the effects it produces.
• Expending effort on feedback processes that correct or classify students’ work rather than provide them with the means and opportunities to improve it.
• Emphasising unilateral assessments in which students are solely judged by others, creating patterns of dependency and lack of confidence in their own judgements.
• Assessing all students identically when they have different aspirations.
• Certifying and portraying students in ways that do not recognise the distinctiveness of their achievements.

How rapidly can we move from comfortable and familiar assessment practices that are becoming increasingly indefensible? What is needed to do so?

Biography
David Boud is Alfred Deakin Professor and Director of the Centre for Research in Assessment and Digital Learning at Deakin University, Melbourne, and Emeritus Professor at the University of Technology Sydney. He is also Professor of Work and Learning at Middlesex University. Previously, he has held positions of Head of School, Associate Dean and Dean of the University Graduate School at UTS. He has published extensively on teaching, learning and assessment in higher and professional education. His current work focuses on the areas of assessment for learning in higher education, academic formation and workplace learning. He is one of the most highly cited scholars worldwide in the field of higher education. He has been a pioneer in developing learning-centred approaches to assessment across the disciplines, particularly in building assessment skills for long-term learning (Developing Evaluative Judgement in Higher Education, Routledge, 2018) and designing new approaches to feedback (Feedback in Higher and Professional Education, Routledge, 2013). Re-imagining University Assessment in a Digital World will appear from Springer later in 2019.


Further information
For further information go to: http://aheconference.com/
Conference manager queries: Linda Shore linda.shore@cumbria.ac.uk

The AHE conference is leading the development of assessment for learning in higher education


PRHE Journal

Practitioner Research in Higher Education publishes research and evaluation papers that contribute to the understanding of theory, policy and practice in teaching and supporting learning. The journal aims to disseminate evaluations and research of professional practice which give voice to all of the participants in higher education and which are based on ethical and collaborative approaches to practitioner enquiry. The online, open-access journal has recently published a standard issue comprising six papers. Four of the studies involve students in professional fields, three of the studies make innovative use of technology to support learning, and all of the papers are connected to aspects of assessment and feedback. Previous special issues on assessment in higher education based on AHE conference papers were published in 2014, 2016 and 2018 and may be viewed at: http://ojs.cumbria.ac.uk/index.php/prhe/issue/archive.

We would welcome papers for our next assessment special issue of the journal, comprising papers from the AHE Conference 2019. All those presenting their work at the conference via a research paper, practice exchange discussion or poster are invited to submit an article for this special edition. Please check and work to the PRHE author guidelines at https://ojs.cumbria.ac.uk/index.php/prhe/about/submissions and email your proposed article to linda.shore@cumbria.ac.uk. The deadline for submission is Friday 27 September 2019. If you would like further information or to discuss a proposed paper, please contact the journal guest editors Rita Headington ritaheadington@aol.com or Jess Evans jessica.evans@open.ac.uk.



Venue Floor Plan: 1st Floor



Notes



© University of Cumbria 2018 (UOC 1293)

http://aheconference.com

