Contents (page no.)

Delegate Information 2
Welcome 3
Conference Executive Committee 5
List of Reviewers 6
Programme Summary 8
Programme Session by Session 9
Day 1 – Keynote: Dr Maddalena Taras 23
Day 1 – Abstracts 24
Day 2 – Keynote: Professor Jo-Anne Baird 66
Day 2 – Abstracts 67
Parallel Sessions by Theme 129
Presenter Index 138
Conference Exhibitors & Main Sponsors 149
Delegate List 150
Floor Plan 156
Delegate Information

Networking
We hope the conference will provide you with an excellent opportunity to make connections and discover shared interests in higher education assessment with colleagues from across the world. Our conference dinner, on the 24th of June, is an informal social event to support networking; the dinner must have been pre-booked at the point of registration. We are also holding a free networking breakfast in the conference centre on the second day so that, whatever hotel you are staying in, you can get together to discuss the day's events and enjoy breakfast with like-minded colleagues.
Evaluation and commentary
We actively encourage you to make use of the conference hashtag #AssessmentHEconf on Twitter to share ideas, respond to sessions, ask questions and make connections. There will be an online evaluation after the conference, but please feel free to share any comments or suggestions with members of the Organising Group whilst you are here.
Interesting places to visit
Birmingham is a revitalised city with many interesting places to visit. Go to http://visitbirmingham.com/ for plenty of ideas.
Wi-Fi Access
For Wi-Fi connection at Maple House please use the following login details:
User name: etcvenues
Password: wifi4065
AHE Conference Stand
For assistance during the conference, please visit the AHE Conference stand, located opposite the Accelerate Suite.
Welcome

It is with great pleasure that I welcome you to Birmingham for the fifth Assessment in Higher Education Conference. The conference is a unique opportunity to bring together researchers, practitioners and policymakers working in the field of higher education assessment. Over the seven years since the AHE conference was established, we have built a focused, friendly and collaborative atmosphere, and we intend to continue that in the 2015 event. To this end, we have kept the conference relatively small and have returned to the same intimate conference venue.

We have developed a more challenging and inclusive range of themes this year, designed to critique and extend many of the taken-for-granted ‘theories’ of good assessment practice in third-level education. These are: Learning and contemporary higher education assessment; Student responses to assessment; Assessment literacies; Assessment research: theory, method and critique; Diversity and inclusion; Institutional change and assessment; and Overcoming specific assessment challenges. Whilst these themes allow for the continuation of existing debates, they also encourage new views and controversial standpoints, including the opportunity to explore how we research assessment and how change at the micro level can (or can’t) lead to change in institutional assessment policies and practices. We hope that the exciting range of papers, practice exchanges and posters will provide the chance to share innovations, evidence-based change, and reflections on practice, research and evaluative studies. We have a strong international representation at the conference, drawing on research and practice from 20 national higher education systems.

Conference Highlights

We have chosen our excellent keynote speakers for their background in assessment research and their capacity to challenge some existing theories and areas of ‘good practice’ in assessment.
Dr Maddalena Taras from the University of Sunderland will be speaking on the first day, and Professor Jo-Anne Baird, Director of the Oxford University Centre for Educational Assessment, will bring the conference to a conclusion on Thursday afternoon.

We will be continuing one of the highlights of the last AHE conference with our unique poster session. All poster owners will have the opportunity to make a three-minute ‘pitch’ to delegates on the substance and importance of their poster. This process will take place in different rooms depending on the theme of the posters and will enable each author to provide a brief overview of their topic so that delegates can decide which they are interested in following up. We have added a little extra frisson to the poster session by offering the prize of an iPad for the best poster as judged by the Conference Executive Committee. So get those pitches zinging!

We are hosting a drinks reception at 7pm prior to the Conference Dinner. We recognise that these social events can feel intimidating for first-timers or people who do not know others at the conference, so the Executive Committee will be hosting ‘first timer’ groups at the drinks reception. Let them know on the door that you are new to the conference or have come on your own and we’ll connect you to that group.

As the conference chair, I wish you an excellent, stimulating and friendly conference and a good time in Birmingham. Whatever you are looking for in a conference, I hope you find it. There is no doubt that the sector is in clear need of transformed assessment practice, and this conference community has an important role to play in building the critical mass of research and innovation needed to support that transformation.
Sue Bloxham
Conference Chair, on behalf of the Executive Committee
AHE Conference Executive Committee

The conference is managed by an organising group of researchers and practitioners in higher education assessment and is sponsored by the University of Cumbria.
List of Reviewers

Lenore Adie - lenore.adie@qut.edu.au
Cecilie Asting - cecilie.asting@bi.no
Judy Barker - judy.barker@uea.ac.uk
Patrick Baughan - p.baughan@city.ac.uk
David Bell - d.bell@qub.ac.uk
Alison Bettley - Alison.bettley@open.ac.uk
Michaela Borg - michaela.borg@ntu.ac.uk
Anke Buttner - a.c.buttner@bham.ac.uk
Gladson Chikwa - Chikwagladson@gmail.com
Wajee Chookittikul - wajee@hotmail.com
Lynda Cook - lynda.cook@open.ac.uk
Leanne de Main - l.demain@coventry.ac.uk
Calum Delaney - cdelaney@cardiffmet.ac.uk
John Dermo - j.dermo@bradford.ac.uk
Martin Dixon - martin.dixon@staffs.ac.uk
Frank Eperjesi - f.eperjesi@aston.ac.uk
Ana Fernandez Caparros - ana.fernandez-caparros@uv.es
Rachel Forsyth - r.m.forsyth@mmu.ac.uk
Jane Freeman - jane.freeman@eastonotley.ac.uk
Fiona Gilbert - f.gilbert@brookes.ac.uk
Inger Carin Grøndal - inger.c.grondal@bi.no
Sarah Hamilton - sarahhamilton@bpp.com
Bridget Hanna - b.hanna@napier.ac.uk
Janet Haresnape - Janet.Haresnape@open.ac.uk
Jennifer Harvey - jen.harvey@dit.ie
Jane Headley - jheadley@harper-adams.ac.uk
Stuart Hepplestone - s.j.hepplestone@shu.ac.uk
Tim Hunt - T.J.Hunt@open.ac.uk
Natasha Jankowski - njankow2@illinois.edu
Joanna Leach - joanna.leach@buckingham.ac.uk
Andy Lloyd - LloydA@Cardiff.ac.uk
Claudia Loijens-van Rhijn - c.b.h.loijens@tilburguniversity.edu
Cecilia Lowe - cecilia.lowe@york.ac.uk
Janis MacCallum - j.maccallum@napier.ac.uk
Garry Maguire - gmaguire@brookes.ac.uk
Wanjiru Mbure - wmbure@stonehill.edu
List of Reviewers (contd.)

Teresa McConlogue - t.mcconlogue@ucl.ac.uk
Mike McCormack - m.mccormack@ljmu.ac.uk
Steve McPeake - s.mcpeake@ulster.ac.uk
Fiona Meddings - F.S.Meddings@Bradford.ac.uk
Beverley Merricks - b.a.merricks@bham.ac.uk
Trish Murray - p.b.murray@sheffield.ac.uk
Steven Nijhuis - steven.nijhuis@hu.nl
Stephen Nutbrown - psxsn6@nottingham.ac.uk
Marion Palmer - Marion.palmer@iadt.ie
Julie Peirce-Jones - jpeirce-jones@uclan.ac.uk
Ann Read - ann.read@solent.ac.uk
Elizabeth Ruiz Esparza Barajas - elruiz@guaymas.uson.mx
Anna Steen-Utheim - anna.steen-utheim@bi.no
Paul Sutton - psutton@marjon.ac.uk
Nhan Tran - n.tran@ncl.ac.uk
Julie Vuolo - j.c.vuolo@herts.ac.uk
Sharon Waller - sharon.waller@anglia.ac.uk
Freda Wolfenden - freda.wolfenden@open.ac.uk
Programme Summary

Day 1: 24th June 2015
09:00 Registration
09:30 Pre-conference Master Classes
11:20 Parallel Session 1
12:00 Parallel Session 2
12:30 Lunch (Restaurant)
13:30 Welcome & Keynote: Dr. Maddalena Taras
14:45 Parallel Session 3
15:15 Refreshments (Restaurant)
15:30 Parallel Session 4
16:10 Parallel Session 5
16:50 Parallel Session 6
17:30 Close
19:15 Drinks Reception
20:00 Evening Meal

Day 2: 25th June 2015
08:30 Networking Breakfast (Restaurant)
09:00 Registration
09:15 Parallel Session 7
09:50 Parallel Session 8
10:35 Parallel Session 9
11:15 Refreshments (Restaurant)
11:30 Poster Session
12:30 Lunch (Restaurant)
13:30 Parallel Session 10
14:10 Keynote: Professor Jo-Anne Baird & Poster Award
15:15 Refreshments & Close (Restaurant)
Programme Session by Session

Day 1

9.00am Registration

09.30am Master Classes
Delegates are invited to attend a series of Master Classes prior to the start of the conference on Wednesday 24 June 2015. Experts in the field of assessment will be leading these sessions.

001
Rethinking feedback for greater impact on learning David Boud Deakin University, Australia Location: Proceed 1
002
Designing and carrying out effective assessment David Carless University of Hong Kong, Hong Kong Location: Proceed 2
003
‘Transforming the Experience of Students through Assessment’ (TESTA) Tansy Jessop University of Winchester, UK Location: Propel 1
004
Help students to help themselves: developing assessment literacy Margaret Price Oxford Brookes University, UK Location: Propel 2
Parallel Session 1 11.20am

005
Developing and Embedding Inclusive Assessment across Plymouth University Pauline Kneale, Jane Collings* Plymouth University, UK Location: Proceed 1 Chair: Steve Bennet

006
Written Assessment and Feedback Practices in Postgraduate Taught Courses: an international perspective Victor Guillen Solano1 1 Sheffield Hallam University, UK, 2The University of Sheffield, UK Location: Proceed 2 Chair: Leanne de Main

007
Impact on Student Learning: Does Assessment Really Make A Difference? Natasha Jankowski1,2 1 University of Illinois Urbana-Champaign, USA, 2National Institute for Learning Outcomes Assessment, USA Location: Propel 1 Chair: Martin Dixon
008
‘Another brick in the wall’? Teachers’ representations about assessment and teacher education processes Serafina Pastore*1, Monica Pentassuglia2 1 University of Bari, Italy, 2University of Verona, Italy Location: Propel 2 Chair: Jenny Fisher

009
Measuring the impact of high quality instant feedback on learning Stephen Nutbrown*1, Su Beesley2, Colin Higgins1 1 University of Nottingham, UK, 2Nottingham Trent University, UK Location: Accelerate 1 Chair: Jill Barber

010
The influence of students' epistemic beliefs on their satisfaction with assessment and feedback Berry O'Donovan Oxford Brookes University, UK Location: Accelerate 2 Chair: Ruth Larsen

011
I wish I could believe you: the frustrating unreliability of some assessment research Tim Hunt*1, Sally Jordan2 1 Information Technology, The Open University, UK, 2Department of Physical Sciences, The Open University, UK Location: Expand 1 Chair: Cecilia Lowe

012
Investigating the feedback gap(s) in pre-service language teacher education: What is the Emperor really wearing (and who will tell)? June Starkey OISE/UT, Canada Location: Expand 2 Chair: Janet Macaulay

013
Learner engagement with Interactive Computer Marked Assignments on beginners’ language modules Anna Proudfoot*, Anna Comas-Quinn, Ursula Stickler, Qian Kan, Tim Jilg The Open University, UK Location: Forward Chair: Janis MacCallum
Parallel Session 2 12 noon 014
Employer Led Problem Based Learning: Developing and Assessing Employability Skills for Success Ron Cambridge London Metropolitan University, UK Location: Proceed 1 Chair: Steve Bennet
015
Ipsative assessment for student motivation and longitudinal learning Gwyneth Hughes Institute of Education, UK Location: Proceed 2 Chair: Leanne de Main
016
Using exemplars to develop assessment literacy: what do students learn to notice during pre-assessment workshops? Kay Sambell*, Linda Graham Northumbria University, UK Location: Propel 1 Chair: Martin Dixon
017
Making use of assessment feedback: Students' perceptions of the utility of interventions for supporting their engagement with feedback Naomi Winstone*1, Michael Parker1, Robert Nash2 1 University of Surrey, UK, 2Aston University, UK Location: Propel 2 Chair: Jenny Fisher
018
Exploring students' perceptions about peer-evaluation: a case study Elizabeth Ruiz Esparza Barajas Universidad de Sonora, Mexico Location: Accelerate 1 Chair: Trish Murray
019
Culturally Responsive Assessment: Modifying Assessment Processes to Meet Diverse Student Needs Natasha Jankowski*1,2, Erick Montenegro1,2 1 University of Illinois Urbana-Champaign, USA, 2National Institute for Learning Outcomes Assessment, USA Location: Accelerate 2 Chair: Ruth Larsen
020
On-line Assessment and Personalised Feedback - Some Novel Approaches Jill Barber University of Manchester, UK Location: Expand 1 Chair: Cecilia Lowe
021
An alternative explanatory framework for what students want from feedback, what they actually use, and what tutors think they need Mark Carver University of Cumbria, UK Location: Expand 2 Chair: Janet Macaulay
022
Institutional approach to improving feedback and assessment practices using TESTA at the University of Greenwich Monika Pazio*, Duncan McKenna University of Greenwich, UK Location: Forward Chair: Janis MacCallum
12.30pm Lunch Location: Restaurant

13.30pm Welcome & Keynote: Dr. Maddalena Taras Location: Accelerate Suite

Parallel Session 3 14.45pm

023
Standardised Assessment to Increase Student Learning and Competency Ida Asner LiveText Consultant, USA Location: Proceed 1 Chair: Michaela Borg
024
Meeting the challenge of assessment when personal transformation is the outcome Annette Becker Utica College, USA Location: Proceed 2 Chair: Phil Newton
025
Peer Reflection within Sports Coaching Practical Assessments Martin Dixon*, Chris Lee, Craig Corrigan Staffordshire University, UK Location: Propel 1 Chair: Blazenka Divjak
026
Transforming the Experience of STAFF through Assessment Eddie Mighten*, Diane Burkinshaw Sheffield Hallam University, UK Location: Propel 2 Chair: Gwyneth Hughes
027
Marketing Downloads: Student response to a learning and assessment innovation at Kingston Business School Hilary Wason*, Nathalie Charlton, Debbie Anderson Kingston University, UK Location: Expand 1 Chair: Natasha Jankowski
028
Animate to communicate: using digital media for assessment Jenny Fisher*, Hayley Atkinson Manchester Metropolitan University, UK Location: Expand 2 Chair: Neil Lent
029
Grade Point Average: Outcomes from the UK pilot Higher Education Academy University of Cumbria, UK Location: Forward Chair: Mike McCormack
15.15pm Refreshments Location: Restaurant

Parallel Session 4 15.30pm

030
Portraying Assessment: The Fear of Never Being Good Enough Peter Day, Harvey Woolf* University of Wolverhampton, UK Location: Proceed 1 Chair: Matthew Williamson
031
Investigating student preferences for a novel method of assessment feedback: A comparison of screencast and written feedback through questionnaire and focus group methods David Wright*, Damian Keil Manchester Metropolitan University, UK Location: Proceed 2 Chair: Liesje Coertjens
032
Helping the horses to drink: lessons learned from an institution-wide programme designed to enhance assessment Andy Lloyd Cardiff University, UK Location: Propel 1 Chair: Irene Glendinning
033
Getting traction on assessment development: what can we learn from a professions' (Law; Medicine) perspective? Chris Trevitt Australian National University, Australia Location: Propel 2 Chair: Rachel Forsyth
034
How can an institution increase the assessment quality of its examiners? Remko van der Lei*, Brenda Aalders Hanze University of Applied Sciences, The Netherlands Location: Accelerate 1 Chair: Christie Harner
035
Charting the assessment landscape: preliminary evaluations of an assessment map Anke C. Buttner*, Carly Pymont University of Birmingham, UK Location: Accelerate 2 Chair: Sharon Waller
036
The constrained impact of a capstone dissertation assessment on the continuing workplace learning of master teachers Pete Boyd*, Hilary Constable University of Cumbria, UK Location: Expand 1 Chair: Kristen Sullivan
037
Oral forms of assessment and the nature of the spoken word: Insights from the world of acting and actor training Gordon Joughin*1, Eliot Shrimpton2 1 Higher Education Consultant, Australia, 2Guildhall School of Music and Drama, UK Location: Expand 2 Chair: Richard McManus
Parallel Session 5 16.10pm 038
Case-Based Assessments in Business Management: Think Local, Not Global Carl Evans University of St Mark & St John, UK Location: Expand 2 Chair: Richard McManus
039
A sensible future for moderation? Sue Bloxham*1, Lenore Adie2, Clair Hughes3 1 University of Cumbria, UK, 2Queensland University of Technology, Australia, 3 University of Queensland, Australia Location: Accelerate 1 Chair: Christie Harner
040
Using participatory photography as an assessment method: the challenges Gwenda Mynott Liverpool John Moores University, UK Location: Expand 1 Chair: Kristen Sullivan
041
Formative thresholded assessment: Reflections on the evaluation of a faculty wide change in assessment practice Sally Jordan The Open University, UK Location: Accelerate 2 Chair: Sharon Waller
042
Students' responses to formative and summative online feedback generated using a statement bank: Outcomes from two quantitative studies Philip Denton*, David McIlroy Liverpool John Moores University, UK Location: Propel 2 Chair: Rachel Forsyth
043
Dialogue+: Promoting first year undergraduate students' understanding of, and participation with assessment and feedback processes Rebecca Westrup University of East Anglia, UK Location: Propel 1 Chair: Irene Glendinning
044
Assessment for Employment: introducing ‘Engineering You're Hired’ Patricia Murray*, Andrea Bath, Russell Goodall, Rachel Horn University of Sheffield, UK Location: Proceed 2 Chair: Liesje Coertjens
Parallel Session 6 16.50pm 045
Assessing Student Learning: A Source of Ethical Concern for Higher Education Teachers Luc Desautels*1, Christiane Gohier2, France Jutras3, Philippe Chaubet2 1 Cégep régional de Lanaudière, Canada, 2Université du Québec à Montréal, Canada, 3 Université de Sherbrooke, Canada Location: Propel 1 Chair: Irene Glendinning
046
Applying assessment regulations equitably and transparently Marie Stowell*1, Harvey Woolf2 1 University of Worcester, UK, 2ex University of Wolverhampton, UK Location: Accelerate 1 Chair: Christie Harner
047
E-marking: institutional and practitioner perspectives Carmen Tomas University of Nottingham, UK Location: Accelerate 2 Chair: Sharon Waller
048
Chinese Tutor and Undergraduate Responses to an Assessment Change Jiming Zhou The University of Hong Kong, Hong Kong Location: Forward Chair: Ernesto Panadero
049
The journey to digital storytelling and artifact-based assessment in Psychology: lessons to be learned from the arts-based disciplines Diane Westwood University of Sunderland, UK Location: Expand 1 Chair: Kristen Sullivan
050
Experiences of co-creating marking criteria Nicky Meer*, Amanda Chapman University of Cumbria, UK Location: Proceed 1 Chair: Phil Newton
051
Preconceptions surrounding automated assessment - A study of staff and students Stephen Nutbrown*1, Su Beesley2, Colin Higgins1 1 University of Nottingham, UK, 2Nottingham Trent University, UK Location: Propel 2 Chair: Rachel Forsyth
17.30pm Close 19.15pm Drinks Reception 20.00pm Evening Meal
Day 2

8.30am Networking Breakfast Location: Restaurant

9.00am Registration

Parallel Session 7 9.15am

052
Placement for Access and a Fair Chance of Success in South African Higher Education Institutions Robert Prince University of Cape Town, South Africa Location: Accelerate 1 Chair: Anna Steen-Utheim
053
Students' responses to learning-oriented assessment David Carless University of Hong Kong, Hong Kong Location: Propel 2 Chair: Nicola Reimann
054
How mature are your institutional policies for academic integrity? Symposium: 054, 062, 074 Irene Glendinning Coventry University, UK Location: Proceed 2 Chair: Anke Buttner
055
Why is formative assessment so complicated? What is behind the push-me, pull-you relationship between theory and practice and how can we all move forward? Donna Hurford1 1 University of Southern Denmark, Denmark, University of Cumbria, UK Location: Proceed 1 Chair: Diane Burkinshaw
056
Changing the Assessment Imagination: designing a supra-programme assessment framework at Faculty level Jessica Evans*1, Simon Bromley2 1 The Open University, UK, 2Sheffield Hallam University, UK Location: Accelerate 2 Chair: Tim Hunt
057
‘Leave me alone, I’m trying to do my work’ - The discrepancies between staff and students’ perceptions of feedback and assessment practices Monika Pazio*, Duncan McKenna University of Greenwich, UK Location: Forward Chair: Rebecca Westrup
058
The impact of the assessment process and the international MA-TESOL course on the professional identity of Vietnamese student teachers David Leat1, Tran Thanh Nhan*1,2 1 Newcastle University, UK, 2Vietnam National University, Viet Nam Location: Expand 1 Chair: Andy Lloyd
059
Conceptualising Fellowship of the Higher Education Academy (HEA) as an assessment process Nicola Reimann*1, Ian Sadler2 1 University of Durham, UK, 2York St John University, UK Location: Propel 1 Chair: Carmen Tomas
Parallel Session 8 09.50am

060
From research to practice: The connections students make between feedback and future learning Stuart Hepplestone*, Helen J. Parkin Sheffield Hallam University, UK Location: Propel 2 Chair: Nicola Reimann

061
Live Peer Assessment: Its Effects and After Effects Steve Bennett*, Trevor Barker University of Hertfordshire, UK Location: Forward Chair: Rebecca Westrup

062
International postgraduate students and academic integrity: challenges and strategies to support Symposium: 054, 062, 074 Mary Davis Oxford Brookes University, UK Location: Proceed 2 Chair: Anke Buttner

063
Higher education teachers’ assessment practices: Formative espoused but not yet fully implemented Ernesto Panadero*1, Gavin Brown2 1 Universidad Autónoma de Madrid, Spain, 2The University of Auckland, New Zealand Location: Expand 2 Chair: Pete Boyd

064
Examine student theses - similarities and differences in relation to examiners' experience Mats Lundström*1, Lars Björklund2, Karin Stolpe2, Maria Åström3 1 Malmö University, Sweden, 2Linköping University, Sweden, 3Umeå University, Sweden Location: Proceed 1 Chair: Diane Burkinshaw

065
From practice oriented and academic traditions to academic professional qualifications - A historical view of Swedish teacher education Karin Stolpe*1, Mats Lundström2, Lars Björklund1, Maria Åström3 1 Linköping University, Sweden, 2Malmö University, Sweden, 3Umeå University, Sweden Location: Accelerate 1 Chair: Anna Steen-Utheim

066
Improving Communication of Assessment Task Requirements and Expectations Through Improving Assignment Brief Design Garry Maguire*, Fiona Gilbert Oxford Brookes University, UK Location: Propel 1 Chair: Carmen Tomas

067
Enhancing Engagement through Collaboration in Assessment Daniel Russell*, Barry Avery Kingston University, UK Location: Expand 1 Chair: Andy Lloyd

068
A moving target: assessing the process and progress of learning One hour session John Couperthwaite Pebble Learning Ltd, UK Location: Accelerate 2 Chair: Tim Hunt
Parallel Session 9 10.35am 069
The Abstract Labour of Learning and the Value of Assessment Paul Sutton University of St Mark & St John, UK Location: Proceed 1 Chair: Diane Burkinshaw
070
Phenomenographically exploring students' utilisation of feedback Edd Pitt University of Kent, UK Location: Forward Chair: Rebecca Westrup
071
Embedding key transferable skills for success during and after University through innovative assessment Joanne Hooker*, Jayne Whistance Southampton Solent University, UK Location: Expand 1 Chair: Andy Lloyd
072
Structuring peer assessment and its evaluation by learning analytics Blazenka Divjak University of Zagreb, Faculty of Organization and Informatics, Croatia Location: Expand 2 Chair: Pete Boyd
073
Student Perceptions of different Assessment Modes in Computer Programming Courses Suraj Ajit University of Northampton, UK Location: Propel 2 Chair: Nicola Reimann
074
Custom essay writing and other paid third parties in Higher Education; what can we do about it? Symposium 054, 062, 074 Phil Newton Swansea University, UK Location: Proceed 2 Chair: Anke Buttner
075
Student understandings and use of learning outcomes in higher education Tine Sophie Prøitz*1, Anton Havnes2 1 Buskerud and Vestfold University College, Norway, 2Oslo and Akershus University College of Applied Science, Norway Location: Accelerate 1 Chair: Anna Steen-Utheim
076
Incorporating digital technologies in the assessment of oral presentations at a distance Stefanie Sinclair The Open University, UK Location: Forward Chair: Ernesto Panadero
11.15am Refreshments
Poster Session 1: assessment challenges in disciplinary and professional contexts 11.30am Location: Proceed 1 Chair: Sally Jordan 077
The use of stakeholder-informed simulation in assessment: sharing experience from an undergraduate medical student disability awareness programme Adam Wilson*, Anand Gidwani, Christopher Meneilly, Vivienne Crawford, David Bell Queen's University Belfast, UK
078
Identifying potential English language teachers from a cohort of MA students in order to meet the requirements of an external validation authority Susan Sheehan University of Huddersfield, UK
079
How to assess our students well: innovative approaches for addressing the challenges of assessment and feedback Yue Zhao The University of Hong Kong, Hong Kong
080
How can admissions testing better select candidates for professional programmes? Belinda Brunner Pearson VUE, UK
Poster Session 2: Assessment Literacies 11.30am Location: Proceed 2 Chair: Rebecca Westrup

081
A quantitative analysis of student engagement with online feedback Claire Moscrop1,2 1 Edge Hill University, UK, 2Lancaster University, UK

082
The AsSET toolkit: developing assessment self-efficacy to improve performance Sue Palmer-Conn*, David McIlroy Liverpool John Moores University, UK

083
Assessment representations and practices in Italian higher education context: Hints from a case study Serafina Pastore*1, Monica Pentassuglia2 1 University of Bari, Italy, 2University of Verona, Italy

084
I don’t have time to attend a 2 hour training session: consequences and impact Neil Witt, Emma Purnell* Plymouth University, UK

085
The Power of the "One-Pager": a simple idea for effective, informal formative assessment Deborah Anderson*, Rebecca Lees Kingston University, UK

086
Developing assessment literacy for Postgraduates who Teach: compliance or quality enhancement? John Dermo University of Bradford, UK
Poster Session 3: Assessment Research: theory, method and critique 11.30am Location: Propel 1 Chair: Erica Morris 087
To measure the unmeasurable: using Repertory Grid Technique to elicit tacit criteria used by examiners Lars Björklund*1, Karin Stolpe1, Mats Lundström2, Maria Åström3 1 Linköping University, Sweden, 2Malmö University, Sweden, 3Umeå University, Sweden
Poster Session 4: Diversity and Inclusion 11.30am Location: Propel 1 Chair: Amanda Chapman 088
Preparing international students for the diversity of UK assessment within a UK-China articulation agreement Katie Szkornik*, Alix Cage, Ian Oliver, Zoe Robinson, Ian Stimpson, Keziah Stott, Sami Ullah, Richard Waller Keele University, UK
089
Gender differences in completion and credit on physical science modules Niusa Marigheto*, Victoria Pearson, Pam Budd, Jimena Gorfinkiel, Richard Jordan, Sally Jordan The Open University, UK
090
Learning diversity in higher education: Comparison of learning experiences among cross cultural student populations in a Hong Kong university Yue Zhao The University of Hong Kong, Hong Kong
Poster Session 5: Institutional change in assessment policy and practice 11.30am Location: Propel 2 Chair: Mark Huxham 091
Using an evidence based approach to transform academic approaches to assessment Courtney Simpson, Caroline Speed, Alexandra Dimitropoulos, Janet Macaulay* Monash University, Australia
092
Walking the Assessment Talk: Aligning what we believe, say, and do John Delany Christchurch Polytechnic Institute of Technology, New Zealand
093
Increasing assessment literacy through institutional change Rachel Forsyth Manchester Metropolitan University, UK
094
Marking on and off line - a university wide pilot Sue Gill, Christie Harner* Newcastle University, UK
095
The assessment challenge: an end-to-end solution Paolo Oprandi*, Carol Shergold, David Walker, Catherine Jones University of Sussex, UK
096
Leading Enhancements in Assessment and Feedback (LEAF Scotland) Dave Morrison*1, Hazel Marzetti2 1 University of Glasgow, UK, 2University of Edinburgh, UK
097
Standardising Assessment at the Institution to Increase Student Learning Stuart Blacklock LiveText, USA
Poster Session 6: Learning and contemporary higher education assessment 11.30am Location: Accelerate 2 Chair: Kay Sambell 098
Assessment timing: student preferences and its impact on performance Richard McManus Canterbury Christ Church University, UK
099
Peer and Public Pressure: Using Assessment to Raise Confidence and Ambitions amongst Undergraduate History and Sports Students Lee Pridmore, Ruth Larsen*, Ian Whitehead University of Derby, UK
100
An evaluation of the student and staff experience of the introduction of audio feedback for undergraduate assessment Nick Purkis*, Sandy Stockwell, Jane Jones, Pam Maunders, Kirsty Brinkman The University of Winchester, UK
101
The Impact of Commercial Involvement on the Development of Academic Processes and on the Quality of Outcomes: A Case Study Ufuk Cullen*, Zach Thompson Greenwich School of Management, UK
102
Assessment Strategy: Online Distance Education Elaine Walsh*, James Brunton Dublin City University, Ireland
103
‘Skills Passport' for Life Sciences at Edinburgh Napier University: Helping students to help themselves Janis MacCallum*, Samantha Campbell-Casey, Patricia Durkin, Anne MacNab Edinburgh Napier University, UK
104
The effect of the test re-do process on learner development in higher education foreign language courses Kristen Sullivan Shimonoseki City University, Japan
105
Assessment Feedback Practice In First Year Using Digital Technologies – Preliminary Findings from an Irish Multi-Institutional Project Lisa O'Regan*1, Mark Brown2, Moira Maguire3, Nuala Harding4, Elaine Walsh2, Gerry Gallagher3, Geraldine McDermott4 1Maynooth University, Ireland, 2Dublin City University, Ireland, 3 Dundalk Institute of Technology, Ireland, 4Athlone Institute of Technology, Ireland
106
Visualising the Narrative: Assessment through a programmatic lens Bryan Taylor*, Mark Russell King's College London, UK
107
Student and staff experiences of peer review and assessment in undergraduate UK HE settings Denise Carter, Julia Holdsworth* University of Hull, UK
108
Reflective activities and summative assessment in an open university access to higher education module Carolyn Richardson The Open University, UK
Poster Session 7: Student responses to assessment 11.30am Location: Expand 1 Chair: Linda Graham 109
EFL tutors’ and students’ perceptions of written assessment, feedback and criteria across six English departments at a Libyan university Imad Waragh University of Sunderland, UK
110
Measuring tertiary students' progress in English for specific purposes courses with self-assessment Dietmar Tatzl FH Joanneum University of Applied Sciences, Austria
111
Overcoming Assessment Challenges - Tipping the Balance Ruth Sutcliffe*, Rachel Sparks Linfield, Ros Geldart Leeds Beckett University, UK
112
Feeding forward from feedback with Business and Food first years Jane Headley*, Pam Whitehouse Harper Adams University, UK
113
Student perceptions of oral and written feedback Anna Steen-Utheim BI Norwegian Business School, Norway
114
Designing assessments to develop academic skills while promoting good academic practice and limiting students' use of purchased or repurposed materials Ann Rogerson University of Wollongong, Australia
115
Developing pedagogy: Using Pecha Kucha as formative assessment in two undergraduate modules Nicola Hirst Liverpool John Moores University, UK
116
Making the formative feedback effective: Feed-forward feedback: A study of students' perceptions of the video assignment guidance and its influence on their learning Harish Jyawali GSM London, UK
117
Developing collegial relationships: Students providing feedback on staff member's teaching and assessment practices Jennifer Scoles, Mark Huxham* Edinburgh Napier University, UK
118
Students Love Assessment: Using assessment to improve engagement Toby Carter*, Nancy Harrison, Julian Priddle Anglia Ruskin University, UK
119
How do students engage with personalised feedback from a summative clinical examination? Beverley Merricks
University of Birmingham, UK Location: Propel 1 Chair: Remko van der Lei
12.30pm Lunch Location: Restaurant
Parallel Session 10 13.30pm
120
Joining the pieces: using concept maps for integrated learning and assessment in an introductory Management course Heather Connolly, Dorothy Spiller* University of Waikato, New Zealand Location: Proceed 1 Chair: Ron Cambridge
121
Decision-making theory and assessment design: a conceptual and empirical exploration Gordon Joughin*1, David Boud2, Phillip Dawson3, Margaret Bearman3, Elizabeth Molloy3, Sue Bennett4 1 Higher Education Consultant, Australia, 2Deakin University, Australia, 3Monash University, Australia, 4University of Wollongong, Australia Location: Accelerate 1 Chair: Anton Havnes
122
Domains influencing student perceptions of feedback Margaret Price*, Berry O'Donovan, Birgit den Outer, Jane Hudson Oxford Brookes University, UK Location: Propel 2 Chair: June Starkey
123
Challenges and benefits of assessing reflection Stefanie Sinclair*, John Butcher, Anactoria The Open University, UK Location: Proceed 2 Chair: John Couperthwaite
124
Effective Extensions: managing the lived experience of online students Susanna Chamberlain*, David Baker, Danielle Zuvela Griffith University, Australia Location: Forward Chair: Naomi Winstone
125
Valid and reliable assessment of students' academic writing using Comparative judgement Liesje Coertjens*, Tine van Daal, Marije Lesterhuis, Vincent Donche, Sven De Maeyer University of Antwerp, Belgium Location: Expand 2 Chair: Sally Mitchell
14.10pm Keynote: Professor Jo-Anne Baird & Poster Award 15.15pm Refreshments & Close
Day 1: Keynote Sectarian divides and challenges in assessment Dr Maddalena Taras Dept of Education, University of Sunderland, UK Assessment is often perceived as creating tensions of power in the triumvirate with learning and teaching. Assessment is not a homogenous, single concept: in addition, social and political issues encroach on educational processes and principles. Add to this already volatile mix current discourses and concepts of learner and learning centredness, independent learners, self-regulated learning, student voice, student empowerment and we find an uneasy truce seems to have been agreed between assessment and learning and teaching. This keynote will challenge the ‘tensions’ around issues of assessment. It does this in three ways by examining:
cross-sector concepts of feedback; the relative roles of learners and tutors in feedback and assessment; ways in which assessment may have a place in the above discourses.
A key question is: how can learner and learning-centred, learning-oriented and empowering assessment deliver for students? Also, importantly: how can tutors discover assessment processes which give them the courage and the confidence to let go of their perceptions and fears of redundancy and disempowerment if they relinquish their traditional hold on assessment? The keynote will suggest that it is, perhaps, by examining the ethicality and ‘transparency’ of our own understandings and practices that we can better support our students.
Maddalena has an extensive record of research in higher education assessment with outcomes relevant to all levels of education. Her work has had an important impact on debate, staff development and academic practice in UK universities and internationally. It covers assessment practices and discourses, including linguistic and cultural influences on perceptions of assessment. Her original self-assessment model linking practice and theory in supporting student learning and inclusion in assessment has created a paradigm shift in thinking. An original theoretical framework for summative, formative and self-assessment represents a second paradigm shift. She has also extensively critiqued Assessment for Learning as increasingly promoted in schools and initial teacher training world-wide. Maddalena’s work questions the definitions of formative assessment and the underpinning theoretical assumptions, identifying the contradictions that ensue in practice.
Day 1: Abstracts 001: Transforming the Experience of Students through Assessment (TESTA) Tansy Jessop University of Winchester, Winchester, UK Transforming the Experience of Students through Assessment (TESTA) has helped programmes in more than 50 universities in the UK, Australia, South Africa and India to rethink assessment and feedback, using evidence and well-founded principles about how students learn best. Findings from TESTA show that the structure of degree programmes often inhibits student learning, and may prevent lecturers from identifying the source of common assessment and feedback challenges. This interactive workshop will examine the principles, tools and approaches used in conducting TESTA. Participants will explore authentic case studies and anonymous data as a way into the TESTA process. The workshop will explore the transformative nature of using TESTA, and ways in which programmes have improved assessment and feedback as a result of going through the process. For more information, see www.testa.ac.uk 002: Rethinking feedback for greater impact on learning David Boud Deakin University, Melbourne, Australia Many of our commonplace feedback practices are criticised in student surveys as not meeting their needs. There is also concern that unless students engage with useful information provided to them, then it is very unlikely to have any influence on them. What can be done then to increase the effects of feedback practices? The session will use current research to focus on rethinking the way we talk about feedback and focus on two key strategies: the design of course units to maximise the chances of students engaging in feedback and improving their work, and changing the nature of comments provided to students so they learn more effectively. 
003: Designing and carrying out effective assessment David Carless University of Hong Kong, Hong Kong The ‘class' takes as its starting point the position that well-designed assessment tasks are one of the most important factors in promoting student engagement. First, I will present some examples of well-designed assessment tasks from my recent research. Then I will invite the audience to contribute their ideas on assessment task design, and I will relate these ideas to key concepts in the literature. Finally, I will suggest a set of principles for effective assessment task design and implementation, and will invite critique from the audience.
004: Help students to help themselves: developing assessment literacy Margaret Price Oxford Brookes University, Oxford, UK For students to reach their potential, they need to be able to navigate the complex world of assessment through becoming assessment literate. Developing assessment literacy means more than becoming a strategic student; it provides a gateway to further learning. Put simply, a student who understands the nature and purposes of assessment and assessment standards can be a more effective learner in their chosen subject. Higher education often fails to recognise this and assumes that students need no help in understanding and working within our assessment processes. This class advocates the need to develop student assessment literacy and looks at practical approaches to doing this. 005: Developing and Embedding Inclusive Assessment across Plymouth University Pauline Kneale, Jane Collings Plymouth University, Plymouth, UK Equality legislation, and support for students with disabilities and learning difficulties, has given rise to a culture of modified assessments tailored for individuals in each module. In some disciplines this can mean a member of staff having to set as many as six alternative assessments for a module. This is both time-consuming and can leave some students feeling they are causing additional problems and less supported. The increasing numbers of students with diverse learning needs impact on modules which have relied on traditional timed examinations. Awareness of these issues led to a Plymouth University-wide project building on the work of Waterfield and West (2006) to explore, promote and embed an inclusive approach. Well-designed inclusive assessment enables all students to demonstrate they have achieved the learning outcomes without compromising academic or professional standards (Waterfield and West 2006).
Supporting assessment for learning through diagnostic, formative and summative assessment (Crisp 2012), followed by detailed feed-forward and feedback, can enhance an inclusive approach. An additional emphasis is on moving towards authentic learning tasks that promote student engagement and student learning from assessment. Participants in the workshop will examine and discuss a range of inclusive assessment materials which have been trialled with academics from all disciplines at Plymouth University. This project is driving a culture change in assessment, and developing deeper understandings of particular students' needs. Participants will gain an understanding of the role of and need for inclusive assessment, and of a range of inclusive assessment opportunities. Participants will work with examples of inclusive assessment methods, and discuss ways in which these can be employed in their own disciplines. The challenges of disciplines which typically rely on timed examinations, such as mathematics,
history, English, and business will be explored. Whilst exams are a useful part of assessment, Light, Cox and Calkins (2009) argue that a bias toward exams places too much emphasis on memory and factual knowledge, speed writing and thinking. Meeting the challenge of inclusivity has led to a university-wide curriculum review and revision, with a broader range of assessment styles adopted, so that wherever possible all students are working towards the same assessments. 006: Written Assessment and Feedback Practices in Postgraduate Taught Courses: an international perspective. Victor Guillen Solano1 1 Sheffield Hallam University, Sheffield, UK, 2The University of Sheffield, Sheffield, UK UK universities have been attracting an increasing number of international students in recent years (Foster, 2013), which poses a number of challenges for UK higher education institutions in terms of capacity, pedagogy, curriculum and, crucially, students’ socialisation into different academic practices. As pointed out by Goodfellow (2006, p.481), writing is ‘integral to students’ induction into academic cultures and discourse communities, and is the principal way they demonstrate the knowledge and skills they have acquired during their studies, and their fitness for accreditation’. For students, particularly those with a non-UK educational background, the challenge is to adapt to new ‘ways of thinking and practising’ (Hounsell & McCune, 2002), which tend to vary considerably across disciplines (Becher, 1989; Hyland, 2009; Lea and Street, 1998), and across institutions and tutors (Baynham, 2000). Research in the last 10 years has produced a considerable amount of literature to support the view that tutor feedback can help students understand academic expectations and has great potential to help enrich their learning experience (Bloxham & West, 2007; Carless, 2006; Hattie & Timperley, 2007; McCune & Hounsell, 2005; Poulos & Mahony, 2008; Prowse et al., 2007; Yorke, 2003).
This paper reports on a study looking into the role that tutor feedback may play in helping students understand and engage with writing practices in postgraduate taught courses. The research draws on the concept of communities of practice (Wenger, 1998), Academic Literacies (Lea & Stierer, 2000) and linguistic capital (Bourdieu, 1977) to argue that academic writing and feedback-giving are social practices influenced by cultural, disciplinary and institutional contexts; therefore, students cannot be expected to be familiar with academic expectations or specific writing practices in their programmes, regardless of their level of study. The production of texts, as observed by Swales (1998, p.118), is ‘comprehensively situated’ in institutional spaces, ‘in disciplinary traditions and conventions’ and ‘within the textual careers of their authors’– and of those who read them too. The research combined quantitative and qualitative methods to study academic expectations, the types of written assessment and the sort of feedback that students found in different academic departments, hoping that this can shed some light on the ways in which students come to develop their writing and how they can be better supported in this process.
007: Impact on Student Learning: Does Assessment Really Make A Difference? Natasha Jankowski1,2 1 University of Illinois Urbana-Champaign, Champaign, IL, USA, 2National Institute for Learning Outcomes Assessment, Champaign, IL, USA Institutions, programs, and faculty have been engaged in assessing student learning for many years, and yet questions are still posed about the value, importance, and relevance of undertaking assessment (Ewell, 2009; Kuh, Ikenberry, Jankowski, Cain, Ewell, Hutchings, & Kinzie, 2015). One area of constant concern within institutions in the United States is whether undertaking assessment of student learning in fact leads to improvements in student learning beyond a course-based approach (Fulcher, Good, Coleman, & Smith, 2014). In other words, does assessment (whether undertaken at the course, module, or program level) actually improve student learning at a program or institutional level? The National Institute for Learning Outcomes Assessment (NILOA) has undertaken an impact study to explore whether connections can be made between doing assessment and improvements in student learning. Results from this study will be presented along with implications for practice. 008: ‘Another brick in the wall’? Teachers’ representations about assessment and teacher education processes. Serafina Pastore1, Monica Pentassuglia2 1 University of Bari, Bari, Italy, 2University of Verona, Verona, Italy Globalisation, technology, reflection, professional practice, assessment, and accountability are, without doubt, the main trends in the teacher education field. The emphasis on teacher training has had strong social and political consequences: while educational research has highlighted how teachers learn and develop their practice, it is now time to understand how to design and implement effective and responsive training courses.
Educational research points out how teaching-learning quality is interwoven with teachers’ representations of teaching, learning, curriculum, and assessment. This aspect is crucial both at the practical level (teacher training courses and the role of Graduate Schools of Education) and at the policy level (innovations and changes in school systems have repercussions on teacher training). How do these institutional and social changes affect teaching practice? What is the influence of teachers’ representations and conceptions? Starting from a preliminary review of the main inquiries into teachers’ assessment representations and their effects on teaching practice, this paper reflects on the relationship between assessment conceptions and the teacher training pathways realised in the Italian higher education context. The study was conducted via a survey in two public universities in Italy.
The survey used the COA-III inventory with a random sample of trainee teachers. The COA-III inventory is a questionnaire created by G. Brown to investigate dimensions of teachers’ conceptions of assessment. 303 questionnaires were gathered. In addition to a descriptive analysis of variables (absolute and relative frequencies), a series of multivariate analysis of variance (MANOVA) studies was conducted to test for statistically significant differences in mean scores for the main factors across the teacher characteristics of: role; years of experience; years of teacher training; sex; age; school level. Trainee teachers indicated a general level of confusion about assessment. They do not perceive teacher training courses as effective and significant for their professional development; more specifically, they regard higher education teaching courses about assessment as irrelevant. Although the context of this paper is the Italian higher education system, we believe the paper is relevant to the international debate on teaching, learning, and assessment practices in the higher education context and in teacher education pathways. 009: Measuring the impact of high quality instant feedback on learning. Stephen Nutbrown1, Su Beesley2, Colin Higgins1 1 University of Nottingham, Nottingham, UK, 2Nottingham Trent University, Nottingham, UK Feedback on coursework submissions has been the focus of substantial amounts of research (Hattie & Timperley, 2007). The relevant literature highlights the importance of feedback in terms of what has been done well or badly, and emphasises the importance of "feedforwards", i.e. how to improve for the next submission or assignment (Beesley, Nutbrown & Higgins, 2014). This study involves the assessment of second year programming assignments within Computer Science related degrees, and analyses the impact of changing the type of feedback given to students.
The effect of focusing the feedback on how to improve in future work is measured by allowing re-submissions and analysing the change in performance. Performance in subsequent assignments was also measured, allowing a comparison to previous cohorts working on the same exercises. The students were also surveyed for additional insight into their thoughts regarding the feedback provided. An automated system (The Markers Apprentice - Beesley, Nutbrown & Higgins, 2014) was used to provide instant feedback on common mistakes. The system tests students' submissions against 127 convention rules and gives feedback on each violation. There are also functionality tests that check whether the solutions produce the correct output. The automatically produced feedback highlights each mistake, the reason for it and the exact point in the student's submission where it occurred. It also provides good and bad code examples relevant to the mistake found. Furthermore, a link is provided to learning resources about the mistake, offering an actionable item on how to improve (e.g. by reading the examples and the particular learning material). Results following re-submissions by students to the same exercise were measured, allowing the
observation of the improvements made based on the feedback in the current assignment. Subsequent exercises completed by this student cohort were then analysed and compared to the same exercises for an earlier cohort. It was found that the students who received this form of feedforwards (141 students) produced on average 35% fewer of these common mistakes in the next part of the coursework when compared to the earlier cohort (118 students). This study highlights not just the importance of feedforwards, but also the techniques used in assessment, reinforcing the importance of assessment and hence, importantly, of good feedback for learning. 010: The influence of students' epistemic beliefs on their satisfaction with assessment and feedback Berry O'Donovan Oxford Brookes University, Oxford, UK Students' beliefs about the nature of knowledge frame how they interpret their educational experience and relate in complex ways to their approaches to and perspectives on learning and assessment. This paper draws upon Baxter Magolda's (1992) Measure of Epistemological Reflection (MER) to identify the epistemic beliefs of business undergraduates in a two-year study at a UK post-92 institution, with a particular focus on how these beliefs influence their views on, and satisfaction with, assessment and feedback. Findings are explored within a context in which student satisfaction matters in an increasingly competitive and commercial higher education sector, in which, to attract and retain students, universities are ‘compelled to pursue market orientation strategies placing greater emphasis on meeting student expectations' (Arambewela and Hall 2011, 1). The vast majority of students (175 of 200) on entry to a business degree programme were identified as holding absolute or transitional (dualistic) ways of knowing: they arrived expecting a transmission style of teaching, assuming knowledge to be certain and uncontested and learning to be about accurate memorization.
Such assumptions influenced their beliefs about ‘good' assessment and feedback. These included that assessment should be undertaken by ‘expert' assessors who know the students whose assessments they mark, so that effort and attendance can be rewarded as well as the work itself. ‘Good' feedback was considered to be specific correction against a model answer. By contrast, the 25 students with more pluralistic beliefs valued dialogic feedback and involvement in marking processes. The same students were contacted at the end of their second year, and thematic analysis of 19 students' responses to a second MER registers their increasing instrumentality and growing unease with assessment and feedback. At this stage most students perceived knowledge to be contestable and contextual. However, they railed against the ambiguity and possible inconsistency that this presented in assessment processes. Students' approach to assessment was pragmatic and instrumental, often centering on how to cope with what they perceived as the subjectivity and inconsistency of assessment across modules and, indeed, assessors. Consequently, students declared an intense wariness of feeding forward any prior assessment feedback to a subsequent assessment and the need to approach each module's assessment criteria as discrete, along with demands for more advice and dialogue with tutors
at each assessment point. Implications in terms of assessment and feedback processes and student satisfaction are discussed. 011: I wish I could believe you: the frustrating unreliability of some assessment research Tim Hunt1, Sally Jordan2 1 Information Technology, The Open University, Milton Keynes, UK, 2Department of Physical Sciences, The Open University, Milton Keynes, UK We strive to understand which assessment practices have the best impact on learning (e.g. Gibbs & Simpson, 2004-5; Nicol, 2007). It can be difficult to ascertain, however, whether one intervention, for example the introduction of an online quiz to a course studied by diverse students, is responsible for the effect we see. Although there are well-established educational research methods (e.g. Cohen et al., 2011), we cannot all be methodological experts. It is frustrating to read a paper about an interesting intervention, only to find the evidence presented, or the methodology used, insufficient to justify the conclusions. When such poor practice is spotted it can lead to criticism of the intervention, even when this has been introduced for the best of motives and valid research methods would have shown it to be effective. This paper uses good and bad examples to highlight the difficulties inherent in educational research. We offer practical suggestions for how they might be overcome, which may go some way towards improving the robustness of results reported in future. Even well-cited papers can include mistakes. For example, Sly (1999) implies that the introduction of an optional quiz led to improved examination scores. However, a statistically significant correlation does not necessarily imply causation. The observed effect was likely caused simply by more motivated students both engaging with the quiz and doing better in the examination.
Listening to student opinion is important (Dermo, 2009), but care is needed when drawing conclusions because students' reports of their own behaviour and motivation do not always match their actions (Jordan, 2014). One might be tempted to conclude that only the purest methods will do, but that reveals other difficulties. For example, the "testing effect" (Roediger & Karpicke, 2006) has been rigorously demonstrated in laboratory settings, but is considerably more difficult to demonstrate in classroom practice (e.g. Wooldridge et al., 2014), where authentic assessment of higher order learning outcomes is desired and ethical guidelines can limit the use of control groups. Examples of good research methodology (e.g. Angus & Watson, 2009; Mogey et al., 2012; Ćukušić et al., 2014) address the issues head-on and learn from past mistakes. Statistically sound experimental results can be counter-intuitive (e.g. Simpson's paradox). The problems are not confined to educational research; Ioannidis (2005) gives broader examples. Reliability could be improved by simple measures such as repeating research in different institutions and reporting negative research findings.
012: Investigating the feedback gap(s) in pre-service language teacher education: What is the Emperor really wearing (and who will tell)? June Starkey OISE/UT, Toronto, Ontario, Canada Teacher educators are appropriately invested in understanding the attributes of the knowledge base required by future French as a Second Language (FSL) teachers. According to Andrews (2003), teachers need knowledge of the subject matter (knowledge about language) but also require another kind of language knowledge - language proficiency (knowledge of language) - in addition to knowing how to teach in ways that allow students to learn. As Borg (2004) explains in her articulation of the ‘apprenticeship of observation,’ the task of teaching these skills is complex because pre-service teachers come to in-service teaching having observed only some aspects of a teacher’s work, namely the parts of teaching that are visible. Lortie (1975) first introduced the term ‘apprenticeship of observation’ to describe how, having spent thousands of hours in a classroom as students, pre-service teachers are not aware of other crucial components of a teacher’s work, such as language proficiency development and assessment practice. With regard to the provision of feedback in higher education (HE) contexts, Boud and Molloy (2013) observe that HE institutions are critiqued more for the feedback they provide to students than for almost any other aspect of the courses they offer. Evans (2013) highlights the need for more research on the role of feedback in HE in light of the literature on this topic that has appeared in the last decade, and identifies a “feedback gap” (pp. 73, 94-97) between the feedback students receive and their capacity to use that feedback to improve their work.
Boud and Molloy (2013) concur, noting that professors in HE spend tremendous amounts of time giving feedback in the hope that it will be helpful to students, though it is not clear that students understand or act on the feedback they receive. Students, on the other hand, report that the feedback they receive is not specific enough to be useful, and that assignment requirements often lack clarity (Higgins, Hartley, & Skelton, 2002). Since initial teacher education programs play an integral part in mobilizing the new FSL curriculum in Ontario, Canada, a study is needed that contributes to the literature on assessment models in higher education (HE) which may make a difference to the development of French-proficient and assessment-literate pre-service teachers. This paper will discuss initial findings on the feedback practices of university professors/instructors who teach in French. 013: Learner engagement with Interactive Computer Marked Assignments on beginners’ language modules Anna Proudfoot, Anna Comas-Quinn, Ursula Stickler, Qian Kan, Tim Jilg The Open University, Milton Keynes, UK This paper reports on a project which evaluated students’ engagement with Interactive Computer Marked Assignments (iCMAs) on beginners' language
modules at The Open University (OU) and investigated a possible link with students’ overall performance. The Department of Languages implemented the use of iCMAs across all six beginners’ language modules in 2012. The four iCMAs – two testing listening skills and two testing reading skills – are formative assessments, so students are encouraged to submit them at the relevant points in the study calendar rather than leave them until the end of the module. The team investigated student engagement with the iCMAs on two modules (L194 beginners’ Spanish and L197 beginners’ Chinese) by looking at both quantitative and qualitative data. Quantitative data was obtained using the Moodle reporting tool, which automatically records the number of attempts, the time spent on each iCMA, the point at which the iCMA was submitted and the marks achieved. Quantitative and qualitative data were also obtained from a large-scale survey of students, which asked how iCMAs impacted on their learning, particularly in relation to motivation, engagement, retention and attainment. Follow-up interviews were conducted with a small number of students, providing further qualitative data. Some interesting contradictions arose between the objective data provided by the Moodle reporting tool and students’ perceptions of their engagement with iCMAs. The survey results show that those students who completed the modules successfully had positive perceptions of iCMAs and used them as a means of evaluating their own learning. The survey also highlighted the importance of constructive feedback, echoing the findings of researchers such as Gibbs (2010) and Hattie & Timperley (2007).
The Moodle-generated data, however, showed that a small group of students who either failed the module or got marks in the lowest band generally did not submit the iCMAs by the recommended date but tended to leave them until the last minute - evidence of a correlation between lack of engagement with iCMAs and poor overall performance, a correlation that has also been noted by Jordan (2014). This was a team project, supported by the Open University’s Scholarship fund. Gibbs, G. (2010). Does assessment in open learning support students? Open Learning, 25(2), 163–166. Hattie, J. & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77, 81–112. Jordan, S. (2014). Using e-assessment to learn about students and learning. International Journal of e-Assessment, 4(1).
014: Employer Led Problem Based Learning: Developing and Assessing Employability Skills for Success Ron Cambridge London Metropolitan University, London, UK This presentation examines an Employer Led Problem Based Learning (PBL) teaching and assessment practice which focuses on the development of academic and employability skills in a diverse cohort of non-traditional students at London Metropolitan University. LondonMet is a post-92 university which recognises that the majority of its students are non-traditional and is thus committed to ‘transforming lives, meeting needs, building careers’, with a particular focus on enhancing students’ ability to participate in the business workforce (LondonMet 2010-2013). However, whilst the UK government has encouraged employability agendas (BIS, 2012; Leitch Review, 2006) and whilst business corporates realise the benefits of employing a diverse workforce, recruiters argue that many non-traditional graduates do not meet the required employability standards, from the very first step of entering the business world, including their communication, CV or dress code (Rao et al., 2011). Similarly, non-traditional students themselves have also voiced a problem of ‘fitting in’, both whilst in university (Leathwood and O'Connell, 2003) and when entering the corporate world (Cambridge, 2012). Thus, the university, together with State Street Bank, the world’s second-largest international custodian bank, developed Employer Led Problem Based Learning assessments. This unique project instils concrete employability skills including presentation skills, successful teamwork, productive research aptitude, and confidence through support and motivation. The university and the bank are equally involved in teaching and assessing students to truly develop solid employability skills. Assessment takes place in the State Street Bank offices in Canary Wharf in the form of a professional presentation. 
Correspondingly, student feedback has also been highly positive, placing great value on this form of practice and assessment and describing it as a ‘life changing experience’. The presentation concludes by demonstrating the value of involving the business world in the teaching and assessment of students, without hindering academic rigour: a practice which benefits students’ lifelong learning and skills development and goes beyond traditional classroom assessment. Finally, practical and research-based recommendations highlight the benefits of connecting the academic and business worlds, the need for the real involvement of potential employers in the development of employability skills that go beyond the contextual level (Crayford et al., 2012), and the need to involve both employers and academic faculties in student affairs strategic planning (Whitney, 2010).
015: Ipsative assessment for student motivation and longitudinal learning Gwyneth Hughes Institute of Education, London, UK Assessment for learning is increasingly recognised in higher education, with interest growing in formative assessment, peer and self-assessment and authentic assessment (Molloy & Boud, 2013; Gibbs, 2006). However, summative assessment overshadows formative assessment and is often highly competitive when student performance is compared to external standards and criteria. Students are aware of ranking, and high and low performers are clearly visible. But for all the winners in a competitive system there are many losers. Competition can damage self-esteem (Dweck, 1999), and an obsession with reliability means that peer and self-assessment are undervalued and assessment for learning may be compromised (Hughes, 2014a). Ipsative assessment, in which a learner's work is compared to their own previous work rather than to external standards, offers a new alternative. There is evidence that ipsative assessment is motivational for all learners, not only high achievers, because all students can make progress and achieve a personal best at least some of the time (Hughes, 2014b). When the starting point is the individual, the potential for demonstrating progress is much more visible than when students are assessed on an assumed level playing field. However, although recording progress may happen informally, marks are rarely given for progress and ipsative feedback is not made explicit. This paper will explore the rationale for introducing ipsative assessment into higher education. It will present some research findings on introducing ipsative feedback to Masters programmes at the Institute of Education and explore some of the challenges (Hughes, 2014b). While some students were motivated by ipsative assessment, and this approach encouraged a longitudinal view of learning across a programme, there were some concerns. 
Ipsative feedback also requires a holistic view of curriculum planning so that progress in developing disciplinary skills and attributes can be monitored. It also requires that past performance feedback is accessible to students and assessors so that comparisons can be made. Digital technologies can be of help here to some extent. Finally, combining ipsative assessment with competitive assessment may be problematic, as the latter tends to override the longer-term view of learning. It may be advisable to keep ipsative developmental assessment separate from conventional summative assessment. This is what typically happens when students undertake projects, dissertations and doctorates, and supervisory practice could provide a model for a ‘dual system' of assessment. 016: Using exemplars to develop assessment literacy: what do students learn to notice during pre-assessment workshops? Kay Sambell, Linda Graham Northumbria University, Newcastle upon Tyne, UK Pre-assessment workshops, focused on the use of exemplars, are often advocated as an efficient way to develop students' assessment literacy when working with
large classes (see, for example, Price et al., 2012; Hendry et al., 2009, 2012, 2013). According to Sadler (2012), such approaches can help students learn to notice the things that matter to an expert assessor, thus empowering them to monitor their own work effectively. Rust et al. (2003, 2005) have been particularly influential in this area of work in university settings. The seminal work of this research group focused upon tracking the impact of pre-assessment workshops on student performance and learning outcomes. Our research seeks to build on this, illuminating students' views of important precursors for the kinds of learning processes which were stimulated. Three different modules were studied, each of which incorporated a 2–3 hour pre-assessment workshop utilising exemplars. Each case study had large class sizes (n = 84; n = 143; n = 176). A mixed methods approach to data collection was employed, in which students ranked the exemplars against marking criteria and generated feedback for each before engaging in detailed discussion with the module teaching team about their judgments. This enabled the research team to build a picture of the students' initial views of each exemplar as a stand-alone resource (Handley & Williams, 2011), providing an additional comparative lens through which the whole-class survey and follow-up interviews with a selected sample of students could be viewed. Broad themes to emerge from our analysis will be reported, with statistical data being supplemented by illustrative excerpts from the student feedback logs and interviews. Findings show evidence of a high proportion of students claiming to learn valuable lessons. Salient factors included the deployment of peer review (Nicol et al., 2013); the use of pre-emptive formative assessment (Carless, 2006); and the perceived need for discussion as a pre-requisite for longer-term learning (Hendry et al., 2011). 
However, the capacity to drill down and analyse the feedback statements each student generated for each exemplar also allows us to illuminate the complex nature of the challenges that students experienced when developing assessment literacy. Our approach enabled us to take a critical stance, importantly throwing light on a range of concerns that have emerged in recent studies. These include issues such as ‘conformative' assessment (Torrance, 2012); students wanting recipes (Bell et al., 2013); and copying rather than learning from exemplars (Handley & Williams, 2011). 017: Making use of assessment feedback: Students' perceptions of the utility of interventions for supporting their engagement with feedback. Naomi Winstone1, Michael Parker1, Robert Nash2 1 University of Surrey, Guildford, UK, 2Aston University, Birmingham, UK Recent approaches to assessment and feedback in higher education stress the importance of students' involvement in these processes. Rather than a one-way transmission of information from expert to novice, feedback is best represented as a communicative process involving dialogue between teacher and student (Nicol, 2010). By engaging students actively with their feedback, we open up opportunities for them to develop skills of self-monitoring (Nicol & Macfarlane-Dick, 2006), such that they become less reliant on external feedback and more reliant on self-regulated learning (Sadler, 1989). Researchers have called for a heavier focus on the receiver's end of the feedback process, through exploring how students interpret and implement feedback. These calls respond, in part, to
evidence that students engage weakly with feedback, sometimes even failing to collect it altogether (e.g. Price, Handley & Millar, 2011). Many practitioners have proposed interventions for supporting students' use of feedback, and enabling them to participate more actively in the process. We conducted a systematic review of the literature on such interventions. These included: peer feedback; self-assessment; presentation of feedback without a grade; assessment portfolios; action planning; feedback workshops; exemplar work; and feedback dialogue. Our review revealed that whilst all these interventions primarily aim to support students in making more use of feedback, they differ in terms of the psychological processes they target. We discuss the efficacy of each intervention and the challenges associated with their use; we then present a framework that clusters interventions according to the cognitive attributes they aim to foster in students. As part of a series of focus groups, we gathered students' perceptions of these interventions. Given brief descriptions of each intervention, we asked students to rank them in two different ways: 1) how useful they thought the intervention would be for helping them to make use of feedback; and 2) how likely they would be to use it themselves. Through these activity-oriented focus groups (Colucci, 2007), we elicited group-level dialogue that gave insights into students' perceptions, requiring them to identify and reconcile conflicting perspectives. We present evidence that 1) many of the interventions framed in the literature as being most effective in supporting engagement with feedback are evaluated poorly by students; 2) students do not always see congruence between an intervention's perceived usefulness, and their likelihood of using it; and 3) students' engagement with feedback interventions can be supported through careful presentation of the rationale behind them. 
018: Exploring students' perceptions about peer-evaluation: a case study Elizabeth Ruiz Esparza Barajas Universidad de Sonora, Hermosillo, Sonora, Mexico Modern methodologies and new trends in education emphasize student participation at all levels of their learning process (Swain, Brooks & Tocalli-Beller, 2002). In this process teachers should discuss assessment criteria with their students and prepare them to engage in peer and self-assessment (Assessment Reform Group, 2002). Engaging students in the scholarship of assessment has been challenging. Studies have evidenced resistance towards these assessment methods (Liu & Carless, 2006) and claims have been made in recent literature that little is known about what students think about these types of assessment (Hanrahan & Isaacs, 2010). The qualitative study reported here is part of a larger study being carried out in a Bachelor of Arts in English Language Teaching program in Mexico. The focus of this first part was to explore the perceptions about peer-evaluation in a group of 27 students from the third semester of the program. The instrument used in this study was a student journal in which students had to answer and reflect on five questions posed by their teacher. These questions were: a) What is peer-evaluation? b) What is your opinion of it and why? c) How do you feel
about giving peer-evaluation and why? d) How do you feel when receiving peer-evaluation and why? e) What is your history of peer-evaluation? The results were rich in insights. The students knew what peer-evaluation was. Their perceptions were mixed between positive and negative ones. However, when confronted with the question of how they felt when receiving feedback from their peers, the responses were more on the negative side. Among the reasons they stated were mistrust in the knowledge and commitment of their classmates to assess correctly. Past histories also influenced their way of thinking about receiving and giving peer-evaluation. The study was successful in unveiling the university students' perceptions about peer-evaluation and finding the reasons for their way of thinking. The study expects to make teacher educators and teachers in different fields aware of the mismatch between modern methods of assessment and students' perceptions, as well as to motivate teachers in higher education to find out their own students' perceptions. The study hopes to add to the literature on students' perceptions about peer-evaluation by providing a new context that has been largely unexplored, that of Mexico. 019: Culturally Responsive Assessment: Modifying Assessment Processes to Meet Diverse Student Needs Natasha Jankowski1,2, Erick Montenegro1,2 1 University of Illinois Urbana-Champaign, Champaign, IL, USA, 2National Institute for Learning Outcomes Assessment, Champaign, IL, USA While much has been written about the various assessments used by different types of institutions to better understand their students' learning (Banta & Palomba, 2014; Kuh, Jankowski, Ikenberry, & Kinzie, 2014; Maki, 2010; Suskie, 2014; Walvoord, 2014), little is known about how institutions modify or adapt their assessment practices based on the student populations served. 
As the diversity of students entering higher education institutions continues to grow, understanding ways to engage diverse perspectives in the assessment of very different types of learners becomes increasingly crucial. The National Institute for Learning Outcomes Assessment (NILOA) conducted a survey of institutional assessment practices in the United States and found that while there were differences in approaches to assessment for various institutional types, institutions that predominantly served minority student populations approached assessment differently (Montenegro & Jankowski, 2014). The minority-serving institutions described their assessment approaches as culturally responsive, and utilised more locally developed assessment measures. This paper presents findings from the survey and follow-up interviews with select minority-serving institutions as to their process for selecting assessments in relation to their student populations, and presents implications for assessment practice at institutions with either a diverse student population or those with a high number of underserved students.
020: On-line Assessment and Personalised Feedback - Some Novel Approaches Jill Barber University of Manchester, Manchester, UK The Manchester Pharmacy School first adopted summative on-line examinations in 2005, as a means of reducing the assessment time (for staff) associated with classes of over 200 students. Since then we have greatly increased the range of question types examinable on-line to include short essays and questions incorporating chemical structures. We have encountered and addressed numerous technical problems, and we are now reaping the benefits of time savings of up to 90% in the marking process. Students are not easily persuaded that they benefit when an instructor spends 6 hours, rather than 60, in marking an assessment. It is the next cohort that sees the fruit of the additional 54 hours spent on teaching. The greatest perceived benefit to students is that it is much easier to give excellent feedback when the assessment is on-line. Feedback is much studied and much analysed (Hattie and Timperley 2007; Mann 2013). Students like to receive feedback, and tend to be critical about lack of feedback in the National Student Survey. Perhaps surprisingly, however, feedback scores in the NSS correlate very poorly with overall satisfaction with a course (Langan et al, 2013). Denton and Rowe (2014) identified limitations of written feedback in enhancement of learning, suggesting that students are not always sure how to convert even good quality feedback into improved marks. We have used online assessments to deliver two novel forms of feedback. Assessments conducted within Blackboard may readily be displayed as a single Excel spreadsheet for the entire cohort. For many years we have practised full disclosure. Students are given the opportunity to withdraw (very few do) and then all identifiers are removed and the resulting marked spreadsheet made available to all students. 
Marking is therefore completely transparent, and students can see not a single model answer but a wide variety of excellent answers. In 12 assessments (2008–2014) there were no complaints about the marks awarded. Secondly, we have developed tools to provide confidential personalised feedback. These resemble the ‘statement banks’ used powerfully by, for example, Denton and Rowe (2014), but the statements used, though written by the instructor, are selected by a computer. The computer is capable of rapidly executing complex algorithms and of delivering quite sophisticated feedback appropriately. Student satisfaction is evident (an increase of over 20% in feedback scores on the NSS), and there is evidence of improved learning, especially when feedback is delivered following mid-semester tests.
021: An alternative explanatory framework for what students want from feedback, what they actually use, and what tutors think they need Mark Carver University of Cumbria, Lancaster, UK Feedback is the area students are least satisfied with (Bols & Wicklow, 2013) and, from an institutional perspective, this is highly influential on rankings and prestige (Locke et al., 2008). When asking what students want, responses are similar: feedback should be prompt, specific, understandable and regular (e.g. Bols & Wicklow, 2013; Doan, 2013). However, such responses highlight the "beguiling appeal" of straightforward models of feedback (Foster et al., 2013, p. 34), where the expert tells the novice what to do. This dependence on experts contrasts with approaches stressing the tutor's intent that students come to understand the quality of their work and determine for themselves what to improve (Sadler, 1989). However, with pressure to deliver on student satisfaction, there is temptation to ‘unquestioningly move to meet all student expectations’ (Price, 2013, p.16) rather than developing assessment-literate students to be effective co-producers, rather than just consumers, of learning. This paper discusses my survey of 614 initial-teacher-education students from three campuses of two universities in north-west England and in-depth interviews with fifteen of these students. Exploratory factor analysis of the survey suggested typologies of student responses to feedback based on their general goals and attitudes to learning. Analysis also highlighted a limited student understanding of feedback, despite these students being familiar with Assessment for Learning through their role as trainee teachers. Interviews used two methods to probe for more detailed explanations: Biographic Narrative interviewing (Wengraf, 2001) and Kelly's Repertory Grid (Fransella et al., 2004), with students giving much more developed responses using these structures. 
Analysing the two data sources together suggests that many students have instrumental approaches to feedback, with high dependence on the teacher. However, students also see themselves as already using feedback as well as it could be used. I argue that this is an under-appreciated factor in understanding students' dissatisfaction with feedback, and that research asking students what they want must first be underpinned by an understanding of how students understand and intend to use feedback. In this session, I am particularly keen to discuss with attendees whether my data support arguments for improving student assessment literacy or instead focussing on reducing students' instrumental approaches to learning. Given the significant time taken to conduct my study, I am also interested in discussing how surveys of students can utilise these findings in order to ask more effective questions, and whether there is a need for more in-depth interviewing.
022: Institutional approach to improving feedback and assessment practices using TESTA at the University of Greenwich Monika Pazio, Duncan McKenna University of Greenwich, London, UK Student satisfaction with feedback and assessment has historically been an area needing improvement, not just at the University of Greenwich but across the HE sector, as reflected in NSS scores. Implementing institutional change is difficult, as it quite often requires a change of culture and practice. While there are pockets of good practice across the sector at course level, as reported in case studies, there is a need, and a challenge, to extend that good practice on a larger institutional scale. The Transforming the Experience of Students through Assessment (TESTA) methodology (Gibbs and Simpson, 2004) has been widely adopted by other universities as a tool for auditing assessment design (Jessop & El Hakim, n.d.). It portrays a rich and detailed picture of feedback and assessment practices at programme level. Through the use of mixed methods, and data coming from both staff and students, it attempts to trigger change in holistic assessment design, encouraging greater consistency between modules and better progression. TESTA@Greenwich builds on the original methodology with some changes to the research design: a revised questionnaire, an additional focus on the quality of feedback, and the engagement of Student Change Agents in the research and dissemination process. The initial pilot period resulted in strong support from staff and programme leaders, who saw TESTA as a potential tool for improving teaching and learning at programme level; this view was also shared by management. Having recognised the value of the TESTA process for individual programmes after the initial pilot period, the University of Greenwich decided to use TESTA as an instrument for implementing much-needed cross-institutional change. 
This is done by embedding TESTA into the new programme review, ensuring that the report will result in appropriate changes to programme design as well as giving students an opportunity to participate in curriculum co-design activities. The paper will first outline the evaluation of the pilot in terms of the mechanisms, problems encountered and changes made to the original methodology. More importantly, it will focus on the institutional approach to TESTA. Since the large-scale implementation is still at an early stage, no evaluation data will be provided; the focus will instead be on outlining the approach and how it will be embedded into the programme review process. 023: Standardised Assessment to Increase Student Learning and Competency Ida Asner LiveText Consultant, Chicago, IL, USA According to JISC - the Joint Information Systems Committee - the term electronic management of assessment is increasingly being used to describe the way that
technology can be used to support the management of the whole assessment and feedback lifecycle, including the electronic submission of assignments, marking, feedback and the return of marks and feedback to students. The use of technology, coupled with the implementation of a best-practice assessment process, is an effective way to standardise assessment across an institution. Doing so will help to ensure parity for learners across different parts of the institution, as well as increase student achievement of core learning competencies. This session will focus on methods for best-practice assessment and describe the benefits of implementing technology to manage the process in a standard way. Best practice would suggest that the following methods support that goal: 1. Communicating learning goals, objectives, outcomes, and expectations is one practical way of enhancing achievement. Inform students of detailed expectations and specific learning objectives from the very beginning of a course or programme of study, repeat them at checkpoints and keep them at the forefront of students' minds throughout each academic term. Tell them the purpose is to develop certain competencies, such as critical thinking, strong communication, or problem-solving skills, and, more importantly, how these assessments will help them do that. This creates clear, focused goals for students to pursue. McMillan (2000), in his work in Practical Assessment, Research & Evaluation, states that in order for assessment to be considered fair, students also must know the format of the assessments. 2. Upon completion of draft or practice work, students can be involved in applying the scoring criteria to their own work as well as the work of their peers. Participating in self-assessment can help students track their performance. Understanding how to apply self-criticism and evaluate one's work can initiate students' understanding of the connections between evaluation and achievement. 3. 
Students should engage in peer assessment to shape their learning. They will see examples of both poorer and better work. Self and peer assessment to some extent allow students to take ownership and responsibility for directing their learning progress - creating a greater sense of control over their success. 4. Instructors further facilitate learning by providing students with constant, direct feedback on their learning progress. Directly communicate the results of quality, authentic assessments to facilitate continued student success. 024: Meeting the challenge of assessment when personal transformation is the outcome Annette Becker Utica College, Utica, NY, USA Regardless of the discipline, a college graduate today enters an intensely diverse, complex and rapidly changing world. In these postmodern times, little is static or predictable, requiring a degree of internal security to help navigate these challenging cultural conditions. In light of this, experts from the domains of psychology, college student development and adult education agree on the need for higher education to prepare students differently than has been done in the past (Kegan, 1994; Baxter Magolda, 2001, 2004; Mezirow, 1994). Much research has been conducted on the impact of college on student growth (Arum & Roska, 2011; Arum, 2013; Astin, 1999; Astin & Lising Antonio, 2012; Buissink-Smith, Mann, &
Shephard, 2011; Pascarella & Terenzini, 2005), reviewed initially in the pioneering work of Feldman & Newcomb (1969). The data indicate that cognitive outcomes, reflected in the measurement of knowledge and skills, are weaker indicators of sustainable learning than measures reflecting internal change in the individual student (Arum, 2013; Arum & Roska, 2011; Astin, 1999; Astin & Lising Antonio, 2012; Feldman & Newcomb, 1969; Hanson, 2012; Pascarella & Terenzini, 2005). Essentially, these sources found that when a student becomes a critical thinker, the change is more lasting than when the student simply describes and demonstrates the critical thinking skill. Furthermore, there is a substantial body of literature that describes college as an opportune time to encourage personal transformation toward self-authorship, the inner security that will assist students in meeting the challenges they will face upon graduation (Baxter Magolda, 2001, 2002, 2004; Kegan, 1994; Mezirow, 1991). Self-authorship is a multidimensional process that occurs in identity development as the individual becomes more secure with what they know and who they are among others (Kegan, 1982; Baxter Magolda, 2008). Education is one way to intentionally encourage personal transformation; however, assessment of such a broad and ambiguous concept presents a challenge. This paper describes a theoretical framework that can be used to offset the ambiguity met when assessing personal transformation in students in higher education. Like the stability of a three-legged stool, the theoretical principles found in constructive-development theory (Kegan, 1984), self-authorship (Baxter Magolda, 1999) and transformative learning (Mezirow, 1994) provided the foundation for a qualitative study of personal transformation and self-authorship in registered nurses (RNs) who recently completed an RN to BSN program (Becker, 2014). Themes of meaning, consciousness and context emerged from the narratives of change. 
025: Peer Reflection within Sports Coaching Practical Assessments Martin Dixon, Chris Lee, Craig Corrigan Staffordshire University, Stoke-on-Trent, UK In a practical subject such as sports coaching, the majority of learning should be situated in practice (Cushion et al. 2003), as this can close the ‘transfer distance' between learning and real-world application (Abraham et al. 2010). However, recent academic reviews show that formal coach education is rarely considered important or useful in the learning process (Piggott, 2012; Stoszkowski & Collins, 2012), meaning students are unlikely to learn or implement any new ideas (Lemyre et al. 2007). Furthermore, these programmes are not developing what Jones (2000) describes as necessary intellectual and practical attributes, such as independent and creative thinking skills, which are also core to student learning in higher education. The current study aims to investigate how reflective practice may enhance learning within an academic coach education programme, specifically through reflection on peer-led coaching sessions. Schön (1983) asserted that reflection is the process that mediates experience and knowledge, therefore making it key to experiential learning. Reflection on and with others can develop future practice, increase learning gained from experience and improve professional socialisation (Morris & Stew, 2007). Moreover, the importance of reflection in
practical settings to promote lifelong learning is evident when peers are key contributors to the learning process and a supportive environment is constructed, where both individual self-evaluation and collective critical reflection are nurtured (David et al. 2000). Therefore the current study aims to investigate how undergraduate student coaches reflect on and learn from the practice of their peers within practical assessments. A qualitative research design will be adopted to lead to an understanding of participants' experiences (Taylor & Bogdan, 1984). Four focus groups will be conducted so ideas can be generated and discussed between group members, allowing for ‘richer' information to be gathered than if participants were asked individually (Gratton & Jones, 2004). Focus groups will consist of four or five full-time undergraduate students who have studied and completed a peer-led practical assessment on the level four module ‘Coaching and Teaching in Sport'. Data will be subject to inductive content analysis as the researcher builds patterns, categories and themes emerging from the data (Creswell, 2007). Results will be discussed in light of the wider body of research, with implications for peer learning and practical assessments across disciplines in higher education. 026: Transforming the Experience of STAFF through Assessment Eddie Mighten, Diane Burkinshaw Sheffield Hallam University, Sheffield, South Yorkshire, UK Baumgartner (2001) suggests that transformational learning in the workplace can take many forms: for example, it can be the result of gradual processes or of ‘sudden powerful experiences’ (p.8). The purpose of this practice exchange is to share our experiences of working through the nationally recognised Transforming the Experience of Students through Assessment (TESTA) programme[1], and specifically how it was experienced as a transformational learning experience for staff, driving forward an Academy-wide cultural change to student learning. 
Enhancing student learning through assessment has been extensively researched throughout this century and has more recently become an important variable in measuring quality in Higher Education. Gibbs (2010) concludes that the pedagogical model adopted is an important factor in what institutions do with available resources to maximise the student experience. While the main ambition of the Academy was to improve student satisfaction with assessment as measured by the National Student Survey (NSS), the experience of TESTA has arguably proved to be more of a transformational process for staff through deepening their understanding of the power of assessment in student learning. Although research evidence and awareness of Assessment for Learning have been growing in the Higher Education sector for a number of years (Black and Wiliam 1998; Gardner et al., 2012), there was limited evidence of the impact of these changing perspectives on assessment in the work of the Academy prior to this intervention in 2014. Indeed, despite undergoing a whole programme revalidation, assessment practice received little attention beyond the mechanics of numbers and types of assessment, with limited discourse about the philosophy of embedding assessment as part of the learning and teaching process.
Using the TESTA methodology to generate baseline data for each course, the Academy undertook a process of diagnosis and subsequent interventions with its 7 UG and 4 PG courses in order to engage all staff with improving assessment practice. The combination of this ‘sudden experience’ (Baumgartner 2001, p.8), the transformational learning that ensued, and the power of collegial conversations amongst staff were significant factors in bringing about this cultural change. TESTA allowed us to rethink our practice and, as a result of this intervention, there is clear evidence of a cultural shift in the Academy. One way this is exemplified is by the informal collegial ‘corridor’ conversations about assessment practice which now frequently take place. This practice exchange will explore how this cultural shift was achieved. 027: Marketing Downloads: Student response to a learning and assessment innovation at Kingston Business School, Kingston University Hilary Wason, Nathalie Charlton, Debbie Anderson Kingston University, Kingston Upon Thames, UK ‘Marketing Downloads’ is a learning and assessment innovation introduced at Kingston Business School which challenges the traditional practice of assessment as a passive activity and puts the student at the centre of the process, thereby increasing engagement and motivation. It aims to develop students’ capability and identity as learners (Lyn and Maclachlan, 2007), focussing on the learning experience and on how knowledge is transformed, not just reproduced (Fry, Ketteridge and Marshall, 2009). The innovation encourages students to contextualise academic theory taught in the classroom with the practice of marketing outside it, and to think about assessment beyond merely receiving a mark. To this end, only a very small percentage mark is attached to the exercise.
In summary, Marketing Downloads involves students initiating their own research into a real-life marketing example of their choice and bringing it to the classroom to present and lead a classroom debate. Peer and tutor feedback are given immediately and the Marketing Download is then uploaded to the university’s virtual learning site. Many students appear to lack the confidence and trust to fully engage with their learning and assessment and do not always integrate with their network of students and tutors. To address this, Marketing Downloads draws on the active learning pedagogy of collaboration (Prince, 2004), with students working in groups researching and presenting their examples, integrating with their peers and developing a positive learning identity (Lyn and Maclachlan, 2007). Marketing Downloads has run for a full academic year and primary research has been undertaken to evaluate its impact as an assessment tool on student confidence and engagement. A series of focus groups and in-depth interviews have taken place to explore the perceptions, experiences and motivations of students and to highlight key learnings. Several key themes have emerged, the most prevalent being the importance students placed on the value of the discussion about the Marketing Download with peers in the classroom, and the opportunity to learn from other students’
participation: ‘I definitely learned from watching the other ones’. The theme of reciprocity (Lyn and Maclachlan, 2007) came out strongly in the research, with students identifying the importance of actively asking questions when watching a Marketing Download so that other students did the same in return: “your interaction in theirs meant that they could interact with you”. In summary, the assessment has resulted in engagement with, and evidence of learning from, the experience of the assessment process. 028: Animate to communicate: using digital media for assessment Jenny Fisher, Hayley Atkinson Manchester Metropolitan University, Manchester, UK In this paper, we report on our use of animation as a contemporary assessment tool in learning and teaching. The research project, to develop the use of animation in assessment, was funded by Manchester Metropolitan University's Centre for Excellence in Learning and Teaching. According to Lam and McNaught (2006), animations are an innovative means of assessment for undergraduate students. The 2014 NMC Horizon Report identified that education technology will have a significant impact on higher education globally over the next 5 years (NMC, 2014). Growing use of social media, integration of online, hybrid and collaborative learning, and a shift from students as consumers to creators are three of the key trends. The key aim of our project was to pilot the use of student-designed animated videos as an assessment tool for undergraduate students. In addition, we explored and tested online animation software, and evaluated students' engagement and staff experiences of using online animation software to create learning materials and assessment products. The focus of our paper will be a discussion of the outcomes of the research, and will include students' own experiences of the use of animation.
In conclusion, we offer thoughts on the new challenges of assessment and the contribution of digital media in a shifting higher education landscape. 029: Grade Point Average: Outcomes from the UK pilot Higher Education Academy, Heslington, York, UK This session will predominantly be of interest to delegates working in the UK higher education sector. It will be an opportunity to find out more about the recent recommendations regarding grade point average (GPA) across the UK which have been made by the project Advisory Group chaired by Bob Burgess. The group's report provides information and advice to the UK higher education sector on the potential adoption of a national grade point average (GPA) system to represent the cumulative and summative achievement of students. The GPA debate was prompted by the perceived limitations of the honours degree classification (HDC) system: in particular, insufficient differentiation of student performance, a lack of recognition outside the UK and limited transparency in how the HDC is calculated by different higher education providers. A GPA
system was considered worthy of further investigation because of its potential capacity to increase the granularity of awards, transparency in award calculations, international recognition and student engagement in their programmes. The Higher Education Academy (HEA) facilitated a pilot study in which 21 diverse providers performed a retrospective data modelling exercise on 2012-13 student achievement outcomes using a common GPA scale. A sample of these providers carried out further modelling using a number of amended scales in order to determine one that could be recommended to the sector. The proposed scale aligns with UK marking patterns and will be competitive in an international context. Pilot providers also reported on the acceptability and implementation of a GPA system taking into account their institutional context and mission. The session will include a brief outline of the project recommendations and an opportunity to debate the outcomes and consider the implications. The full report can be read at https://www.heacademy.ac.uk/node/11124 030: Portraying Assessment: The Fear of Never Being Good Enough Peter Day, Harvey Woolf University of Wolverhampton, Wolverhampton, UK Although asking students to create or respond to images has been quite widely used in pedagogic research in schools [1], there is only limited application of the approach in higher education. The Wolverhampton Portraying Assessment project is a contribution to adopting image-based research in higher education. Deriving from McKillop’s [2] and Brown and Wang’s [3] work on students’ creation of images of assessment, and Yorke et al’s work on art and design students’ NSS responses in particular [4], Portraying Assessment aims to: • Provide insights into students' reactions to their current and past experiences of assessment. • Help students to clarify their understanding of achievement in assessment, of the transparency and objectivity of marking, and the relevance of the NSS questions on assessment and feedback.
• Allow module tutors to modify their assessment regimes on the basis of more reliable and finely grained empirical evidence than is available from Module Evaluation Questionnaires. Unlike the McKillop and Brown-Wang studies, which asked students to draw their perceptions of assessment, Portraying Assessment asks students to use the very skills and techniques they are developing in their Photography course to generate the images through which they analyse their subjective experiences of assessment. Early findings echo other research in this area in describing the overriding fear and anxiety felt by students toward being assessed, whilst at the same time showing significant differences, in that assessment is also seen as a necessary, even empowering, process.
This paper reports on the findings of the project to date and the visual methodology used to analyse the data, discusses how the principles of the approach could be transferred to other subjects in and beyond art and design, and explores how students might be disabused of their fear.
[1] Cousin, Glynis. 2009. Researching Learning in Higher Education: An Introduction to Contemporary Methods and Approaches. Abingdon: Routledge, pp. 214-15.
[2] McKillop. "Drawing on Assessment: Using Visual Representations to Understand Students’ Experiences of Assessment in Art and Design." Art, Design & Communication in Higher Education, 2006, 5 (2).
[3] Brown and Wang. "Illustrating Assessment: How Hong Kong University Students Conceive of the Purposes of Assessment." Studies in Higher Education, 2013, 38 (7).
[4] Yorke et al. "Hit by a Perfect Storm? Art & Design in the National Student Survey." Studies in Higher Education, 2014, 39 (10).
031: Investigating student preferences for a novel method of assessment feedback: A comparison of screencast and written feedback through questionnaire and focus group methods David Wright, Damian Keil Manchester Metropolitan University, Crewe, Cheshire, UK Assessment feedback plays a crucial role in student learning (Weaver, 2006). In higher education, feedback is usually provided by the marker in the form of handwritten or typed comments regarding the student's work. In order to be effective, these comments must be detailed, understandable to students, and focused on areas for improvement (Gibbs and Simpson, 2004). Despite the importance of feedback, many students report being dissatisfied with the quality of the feedback they receive. Specifically, students have reported that written assessment feedback is usually vague, difficult to understand, and does not provide specific guidance on how they can improve their work in future (Chanock, 2000; Weaver, 2006).
This has contributed to consistently low student satisfaction scores for Assessment and Feedback in the National Student Survey between 2005 and 2013 (HEFCE, 2011; 2014). An alternative method for providing assessment feedback is through screen-casting technology. Screen-casting allows the marker to display the assessment submission on screen, highlight specific sections, and provide a verbal commentary on the work (Thompson and Lee, 2012). The aim of this study was to establish students' preferences for receiving assessment feedback by screencast, compared to more traditional written methods. Level 6 students enrolled on an undergraduate programme at Manchester Metropolitan University received feedback for the first assessment on one unit by screencast and for the second assessment on the unit by written comments. After receiving the feedback for the two assessments, 44 students completed a questionnaire to indicate how useful they had found the two methods in helping them understand the grade that they received and understand how they could improve their work in future. In an extension to previous research on this topic (e.g., Thompson and Lee, 2012; Vincelette and Bostic, 2013), two focus groups were then conducted with 13 of these students to explore their opinions on the two feedback methods in further
detail. Questionnaire findings indicated that 78% of students preferred the screencast method, compared to the written feedback. Analysis of the focus group data revealed several reasons for this preference. Specifically, students perceived the screencast feedback to be more detailed, more personal and easier to understand, and to provide clearer guidance on how they could improve both course-specific content and more general academic skills. The findings support the use of screencasting as an effective method of delivering assessment feedback, which, if used regularly, may contribute towards improved scores on the National Student Survey for Assessment and Feedback. 032: Helping the horses to drink: lessons learned from an institution-wide programme designed to enhance assessment Andy Lloyd Cardiff University, Wales, UK This paper outlines the findings from the evaluation of a four-year institution-wide project that aimed to improve consistency in the management of assessment and enhance academic feedback to students. The project took a holistic view of assessment, focussing on all of the different phases in the assessment lifecycle, recognising both the interrelated nature of the different stages and processes that support assessment (Bloxham and Boyd, 2007) and the difficulties that have often been encountered in managing change in this area (Deneen and Boud, 2014). The paper will detail the outputs produced by the project in each of the different areas in the assessment lifecycle and consider the outcomes and changes to practice that these helped to foster. While the project sought to recognise the unintended consequences (Stobart, 2003) that can arise from such changes, particularly when policies designed to place a greater focus on consistency are adopted (Price et al., 2011), the paper will identify the change mechanisms that had the greatest impact, while noting that these may not always promote or result in the desired outcomes.
The paper will also describe the deliberate, considered, and consultative approach to change taken by the project, which involved the development and maintenance of an effective partnership with students, staff, and other stakeholders (Conner and Patterson, 1982). It will also highlight the value of this approach, while identifying the areas within the assessment lifecycle that proved more difficult to change, and the benefits of working with particular stakeholder groups. The paper will thus seek to consider the impact of a number of the different strategies used to implement change on the different stages in the assessment lifecycle (Price et al., 2010). It will consider the ways in which staff and students now engage with assessment within the institution and will put forward a suggested ‘change recipe' designed to promote and facilitate developments in practice while maintaining an appropriate balance between flexibility and consistency. It will conclude by identifying the further plans developed by the institution to enhance assessment, these being informed by the lessons learned from the previous project.
033: Getting traction on assessment development: what can we learn from a professions' (Law; Medicine) perspective? Chris Trevitt Australian National University, Canberra, Australia This session seeks to identify practical steps to achieve educationally desirable assessment reform in higher education in an increasingly resource-limited and challenging socio-political environment. Globally, expectations of higher education continue to expand (OECD, 2008). Pressures to change are relentless. The main trends include: increasing access (enabled, for example, by the increasingly mature and ubiquitous reach of digital communications technologies), diversifying funding arrangements, and an enhanced focus on quality assurance and accountability. Systems need to be 'readily scalable (wide access), academically credible (high quality) and affordable (low cost)' according to Daniel et al. (2009). Assessment is a key fulcrum for leveraging such change. Daniel et al. cite the University of London's 'radical innovation' of 1858 – when it 'established its external studies program and delinked its examinations from study at any institution' – noting that one of the advocates was 'Dr. Robert Barnes … [who] extolled the virtues of examination, saying, “Knowledge alone must be tested. There is no substitute for it. The University and the public are not concerned to inquire “when or where” it was obtained...”' Nowadays, however, assessment has to be concerned with more than just knowledge. The increasing emphasis on generic attributes (eg Barrie, 2006; Jones, 2009) makes for more complex assessment demands: an examination testing 'knowledge alone' is no longer adequate. The continuing pressure for change, the mounting evidence for assessment as a focus for action, and the increasing complexity of what has to be assessed make for a challenging combination (eg Knight, 2002; Deneen and Boud, 2013). In seeking ways forward, one option is to review others' initiatives.
Our context is the discipline of Law, especially innovative approaches to online assessment, and we are looking to experiences in Medicine. Mindful of a growing interest in generic attributes, and of the imperative to move beyond just 'knowledge' as curriculum (eg Barnett et al., 2001), the perspective of the professions (eg Medicine) may be transferable more broadly. In this session we table a synopsis of key developments in assessment in Medicine (eg Val Wass, 2001; Howley, 2004; Norcini and McKinlay, 2007; Kogan et al., 2009), and briefly consider these in light of: i) the pressures (and opportunities) for change that we are experiencing in Law (eg Trevitt et al., 2014), and
ii) the analysis by Price et al. (2011). Participants are invited to compare and contrast their experiences, noting differences across disciplines, institutions, and/or systems of higher education. 034: How can an institution increase the assessment quality of its examiners? Remko van der Lei, Brenda Aalders Hanze University of Applied Sciences, Groningen, The Netherlands Many teachers/examiners in higher education are well educated in didactic methods, but have a weaker background in the area of assessment. Teachers are employed on the grounds of their didactic abilities, research and/or work experience. The quality of assessment is therefore not generally assured. In the Netherlands this can be seen in the light of a number of incidents (Ministry of Education, 2010, 2011; Netherlands Association of UAS, 2012). The Netherlands Association of Universities of Applied Sciences has therefore standardised the minimum qualifications for examiners' assessment knowledge and skills (National expert group on assessment in higher education, 2013). The standard is based upon two theoretical assessment models: the assessment cycle (Hanze University, 2004) and the assessment quality pyramid (Joosten-Ten Brinke, 2011; Sluijsmans, 2012). The standard contains three learning outcome criteria: 1. The examiner can analyse the consequences of the institution's assessment policy for his/her own assessment; 2. The examiner can place his/her own assessment in a broader context; 3. The examiner can analyse his/her own assessment with regard to the assessment cycle. Hanze University is a university of applied sciences with 1600 examiners. The university's examiners will be required to be certified according to the above standard. The certification process will entail: 1. Submission of a portfolio with current products of each step in the assessment cycle (with literature-based reviews for every step); 2.
An interview in which the above portfolio is judged against the learning outcome criteria of the standard. In this practice exchange session the participants will be introduced to the method by which Hanze University has chosen to improve assessment quality. The session will invite participants to look critically at the certification process. More importantly, the participants will be able to look at their own institutions through questions such as:
What is assessment quality? How is the quality of examiners' assessments measured? How can an institution influence the assessment quality of its examiners? How can assessment quality be assured in the long term?
035: Charting the assessment landscape: preliminary evaluations of an assessment map Anke C. Buttner, Carly Pymont University of Birmingham, Birmingham, West Midlands, UK Students face many assessments throughout their higher education, and there is a huge amount of information related to each individual assessment; yet students are expected to complete these assessments to succeed on their courses. Not only does this require students to develop their assessment literacy – their understanding of assessment-related principles and their familiarity with, and ability to make the most of, feedback (Price, Rust, O’Donovan, & Handley, 2012) – but they also need to know where to go for basic information about their assessments and how assessments on their programme are related (cf. curriculum mapping, Harden, 2001). While such ‘assessment mapping’ information may be relatively easy to process compared to the higher-level concepts required for assessment literacy, accessing it is not always easy. The Birmingham Assessment for Learning Initiative (BALI, 2012/13) audited assessments across many different University of Birmingham undergraduate programmes and examined students’ experiences of assessment using the Higher Education Academy’s TESTA (Transforming the Experience of Students through Assessment) method. One experience frequently reported by students was the sense that assessments do not relate neatly to each other and that feedback does not translate from one type of assessment to another, leaving students feeling as if they have to tackle each assessment in isolation. Similarly, the BALI researchers found that assessment information was often held by different academic and/or administrative staff in separate documents and multiple formats. Finding all relevant information was a time-consuming process.
What may be needed is an assessment map: a database tool which combines the information relating to each assessment in a single place, but which can be queried in a number of ways depending on who – students, academic or administrative staff – uses the database and for what specific purpose. This paper aims to present a description of an ideal assessment map. To explore what such a map would need, we interviewed staff and students about assessment mapping practices at the University of Birmingham, traced the journey of an individual assessment through all its stages, and reanalysed the TESTA focus-group data from the BALI project in the light of assessment mapping concerns. On the basis of our findings we compiled a prototype assessment database and created a pilot assessment map for students. We present initial evaluations of these prototypes as potential tools and discuss the pedagogic implications of assessment mapping as a low-level component of assessment literacy.
036: The constrained impact of a capstone dissertation assessment on the continuing workplace learning of master teachers Pete Boyd, Hilary Constable University of Cumbria, Cumbria, UK Within higher education in professional fields, there is a tension between the authenticity of the assessment task and academic standards and traditions (Darling-Hammond & Snyder, 2000). Focusing on assessment design in professional fields provides insight into boundary-crossing aspects of higher education, such as preparing graduates for employment in the knowledge-based economy, which are of relevance even in traditional subject disciplines (Bologna Process, 2009). This study focuses on experienced school teachers who, as part-time higher education students, have pursued advanced professional education and passed a significant capstone assessment, their Masters dissertation. The study investigates the teachers' perspectives on their dissertation and its impact on their practice in relation to their continuing workplace learning and roles as leaders of change. Despite an historical commitment to teacher and school development through enquiry, this remains a contested area (Stenhouse, 1975; McIntyre, 2005; Kemmis, 2006; Nutley, Jung & Walker, 2008). Previous research on teachers has identified their broadly sceptical views on the value of educational research (Gore & Gitlin, 2004; Joram, 2007). Using group interviews with master teachers from individual secondary schools in England, the study generated data by asking teachers to review the impact of their dissertation on their practice and to consider their approach to supporting a student teacher in resolving classroom dilemmas. The group interviews were audio-recorded, transcribed and subjected to qualitative discourse analysis.
The teachers claimed considerable impact and, in the presence of their colleagues, were able to describe some concrete changes in practice and improvements in student learning arising from their dissertation projects. However, the analysis supports previous research suggesting that the formality of the Masters dissertation, as practitioner research, does not easily or directly translate into the continuing informal workplace learning of the teachers (Turner & Simon, 2013; Frankham & Hiett, 2011). In our analysis the discourse of enquiry in the MA dissertation does not appear to persist sufficiently beyond the life of the programme. The dissertation itself appears to have impact, but the subsequent increased research capacity of the master teachers does not seem to be aligned with their continuing learning or leadership of change. This raises the question: should the assessment design be changed to be more authentic, meaning more aligned with practice in the field, or should practice in schools change to become more explicitly research-informed? This dilemma is echoed in varied ways across different fields and subject disciplines in higher education. 037: Oral forms of assessment and the nature of the spoken word: Insights from the world of acting and actor training Gordon Joughin1, Eliot Shrimpton2 1 Higher Education Consultant, Brisbane, Australia, 2Guildhall School of Music and Drama, London, UK Previous research by one of the presenters (Joughin, 2010, 2008, 2007, 2005) has highlighted the importance of students' conceptions of oral assessment and the influence of oral forms of assessment on student learning. This research has
pointed to the inherent nature of the spoken word which underlies students' experience of oral assessment and has noted parallels between students' descriptions of their experience of oral assessment and the ‘psychodynamics of orality' studied by writers such as Walter Ong in his classic work, ‘Orality and Literacy' (Ong, 2002). This paper reports on a project which has sought to establish a better understanding of the nature of the spoken word in higher education in general by investigating how it is understood and experienced in the specific field of acting, a field which gives strong and explicit attention to the nature of speech. This project constitutes an empirical investigation of the established craft knowledge of acting, locating that knowledge in the broader field of higher education research. The research was conducted primarily at the Guildhall School of Music and Drama, London, with additional data from the University of Windsor, Canada. While studies of oral assessment have addressed the question, ‘What does it mean to speak what one knows?', this study addresses the preliminary question, ‘What does it mean to speak?' Twelve teachers and five students were interviewed regarding their understanding of the spoken word. Interviews were transcribed, with the interview transcripts being analysed using a phenomenographic approach. This approach involves identifying those aspects of the spoken word which figure most prominently in teachers' and students' consciousness, along with variations in teachers' and students' overall conceptions of the spoken word.
Knowledge of teachers' and students' conceptions of the spoken word, including variations in those conceptions, provides important information for voice and acting teachers by highlighting the aspects of speech that figure prominently in students' awareness and which therefore need to be taken into account in helping students develop the conceptions of the spoken word most consistent with high levels of performance. This knowledge also illuminates the nature of orality, thereby contributing to a better understanding of the dynamics of oral assessment in other disciplines. 038: Case-Based Assessments in Business Management: Think Local, Not Global Carl Evans University of St Mark & St John, Plymouth, UK The use of case-based learning and assessment is commonplace in a number of disciplines, such as healthcare (see, for example, Kaddoura, 2011) and education (e.g. Kantar, 2013). Despite criticisms of the use of case studies in assessment, especially in examination settings (Packard and Austin, 2009), their use is prevalent in business management (Garvin, 2006) in order to develop critical thinking skills (Healy and McCutcheon, 2010) and engage with the complex business problems faced by managers (Lee et al, 2009; Weil et al, 2011). However, the case studies readily available for business assessments typically feature multinational, world-leading corporations such as Microsoft, Coca-Cola and McDonald's. Although these demonstrate best practice in management and embrace the globalisation agenda (Vos, 2013), their use can result in a number of problems. Notably, the large volume of material available on the web about these
organisations, and the number of assessments already prepared and circulating, can lead to plagiarism (Vernon, 2001). Moreover, corporations tend to be guarded about detailed information in the public domain, particularly concerning operations activities, which can therefore lead to only a surface coverage of the key learning points by students. In addition, employers feel that business courses are too focused on large corporations rather than preparing students for working in SMEs (21st Century Leaders Report, 2014:9). This paper will present the author's approach to developing his own case studies for use in business assessments, based on local small businesses. This typically incorporates a written paper, student visits and guest speakers from the respective company to reveal more intimate details of the organisation's operations, facilitate student questioning and support a more in-depth assessment analysis. In addition, a broader range of unique business scenarios can be presented (Mostert, 2007) and the use of a ‘live' case provides a closer fit to reality which enhances learning (Rebeiz, 2011; Theodoiou, 2012). While case-based learning has received some academic coverage, the use of case studies in assessment remains relatively unexplored. Consequently, the approach presented will inform and challenge HE colleagues, with the session providing an opportunity for delegates to evaluate the development and use of case-based assessments, particularly how they might apply these practices to their own professional area. The session therefore aligns with the conference theme ‘Learning and Contemporary Higher Education Assessment’. 039: A sensible future for moderation?
Sue Bloxham1, Lenore Adie2, Clair Hughes3 1 University of Cumbria, Lancaster, UK, 2Queensland University of Technology, Brisbane, Australia, 3University of Queensland, Brisbane, Australia Internationally, quality assurance and quality improvement within higher education are increasingly in the spotlight (Johnson 2014). Within this context, the sector is renewing its attention to those activities referred to as ‘moderation’ in its efforts to ensure that judgements of student achievement are based on appropriate standards. This paper is based on an international collaboration drawing together moderation research by the authors that focuses on: the range of activities employed in higher education, academics' discourses on the subject, and studies of moderation in practice. A forthcoming paper has examined commonly used moderation methods in relation to three purposes: achieving equity for students, justification/accountability for grades, and community building in relation to standards. The findings suggest that many commonly used activities only meet the aim of justification/accountability in terms of a specious public defence of standards and offer little in terms of concrete evidence of assuring or calibrating standards or providing equity to students. Whilst moderation procedures can give confidence that due process has been observed, these ‘inputs’ and ‘processes’ do not necessarily guarantee appropriate ‘output’ standards (Alderman 2009) or a satisfactory student assessment experience (Crook, Gross, and Dymott 2006). Furthermore, some common moderation methods may be distorting standards. On the positive side, moderation with a focus on community building has been shown to add to assessors' assessment literacy as well as knowledge of standards.
This paper takes that work further in exploring a potential future for moderation which goes beyond compliance activities and makes a valid contribution to equity, accountability and community building. It will outline the constraints on moderation, such as time pressures in the assessment cycle and staff workload. It will also consider the lessons to be learnt from studies of professional judgement, including the value of significant academic engagement and debate focussed on evidence aligned with standards descriptors. The paper will argue for ‘flipping’ the place of moderation in the assessment cycle in order to develop moderation which is informed by, and informs, the learning design, with potential benefits for staff and students. The proposed model suggests focusing our limited resources more sharply on moderation practices with an impact beyond mere policy compliance or routine completion of unproven activities. Full references will be supplied at the conference. 040: Using participatory photography as an assessment method: the challenges Gwenda Mynott Liverpool John Moores University, Liverpool, UK Higher education continues to be based mainly on the written word and reliant on textual sources. In contrast, the literature shows that the current generation of students are more likely to identify themselves as visual learners: intuitive visual communicators who are image rather than text oriented (Bleed, 2005). This paper discusses an innovative approach to assessment within a level 4 academic skills module. The module is core on the Business and Public Relations BA (Hons) programme at Liverpool John Moores University. One of the module's aims is the development of a culture of continuous personal reflection and development. Students have struggled with this component, often not seeing its relevance and not being able to relate it to personal experience. There has been an identified need to find new ways of engaging students with critical reflection.
The assessment was designed to use participatory photography to empower students and academics to explore and reflect on what learning looks like from the student viewpoint. The students are producers of their own visualisation of their learning experience, with the lecturer participating equally with photos of their own teaching and learning experiences. This use of a visual assessment method is supported by Schell et al. (2009), who state that participatory photography ‘is a successful tool for conducting research, teaching students to think critically, and introducing students to a new medium to create knowledge’ (p.340) and that engaging with visual methods is important to both students and teachers. However, a number of challenges with using a participatory photography approach have been identified. These include ethical issues, technical issues and power dynamics, both within student groups and in the student/lecturer relationship. This paper will highlight these discussion points.
041: Formative thresholded assessment: Reflections on the evaluation of a faculty-wide change in assessment practice Sally Jordan The Open University, Milton Keynes, UK A practice exchange at the 2013 Assessment in Higher Education Conference discussed the rationale behind a faculty-wide change in assessment practice and the evaluation of the change in practice, which was in progress in 2013. This research paper reports on the now-completed evaluation and discusses the conclusions, many with wide-ranging implications. Previous practice in the Faculty had been for all modules to be assessed by a combination of summative continuous assessment, with extensive feedback comments, and an end-of-module task (an examination or an extended assignment). In such a "double-duty" model (Boud, 2000), assessment's formative and summative functions can be difficult to balance (Giorka, 2008; Price et al., 2011), with the formative function sometimes "getting lost" (Brearley & Cullen, 2012). Staff and students had also been observed to have different understandings of the purpose of continuous assessment: staff saw its purpose as primarily formative, but students were primarily concerned with obtaining high marks. There was a wish to free students from anxiety over the minutiae of grading, placing greater focus on feedback and dialogue between students and their tutors. The revised practice still required students to meet a threshold of some sort in their continuous assessment, with two different models of thresholding compared in the study; in each case a student's final grade was determined by the end-of-module assessment alone. The change in practice was evaluated by a number of practitioner-led sub-projects, using both quantitative and qualitative research methodologies. The change to formative thresholded assessment sometimes led to a slight reduction in the number of submissions of continuous assessment tasks.
However, other factors, in particular a poorly timed assessment task, had a considerably larger effect, and the change in practice did not have a significant effect on overall completion and success rates. Many students were found to have a poor understanding of assessment strategies, in particular of our previous summative continuous assessment. This is in line with the frequently reported finding that students have a poor understanding of the nature and function of assessment (Carless, 2006; Orsmond & Merry, 2011), pointing towards a need for clarity and consistency across qualifications. In the future, each module will have two parts to its summative assessment, with a project/experimental report as well as an examination. Other assignments will remain formative but thresholded, with a clear purpose of preparing students for the examinable components.
042: Students' responses to formative and summative online feedback generated using a statement bank: Outcomes from two quantitative studies. Philip Denton, David McIlroy Liverpool John Moores University, Merseyside, UK A review of technology-enhanced marking tools found that they are viewed positively by assessors (Heinrich et al, 2009). Students have rated highly the feedback returned by tutors selecting comments from an electronic statement bank (Denton et al, 2008). Nicol and Milligan (2006) question the effectiveness of this approach, however, and a recent study reported limited student engagement when statement bank feedback was returned (Denton & Rowe, 2014). To further understand students' responses to tutor comments published online, two quantitative studies were undertaken. In an experiment, 162 level three Natural Science students were divided into six equivalent groups based on their % marks in a summative spreadsheet assessment. Five test groups were emailed feedback reports composed using a statement bank and including a request for a blank reply placed at different positions; the sixth group received no such invitation. Received replies indicate that students either read their feedback completely or, as was the case for at least one-quarter of test group students, not at all. This outcome reinforces the case for strategies to support students with their summative feedback (Taras, 2001). A t-test found that the mean marks of test group students who replied to their feedback (M=73.5%, N=56) and those who did not (M=56.7%, N=78) were significantly different (p<.0001). While seemingly providing evidence of learning through habitually reading feedback, a longitudinal study would be needed to confirm that students' responses do not vary according to the mark they attain.
In the second study, 52 level three Natural Science students submitted a summative laboratory report after the return of online statement bank feedback on a formative task with the same assessment criteria. The average summative % marks of students who received formative feedback (M=65.8%, N=41) and those who did not (M=55.9%, N=11) were significantly different (p=.008). Students apparently made effective use of the tutor's comments, and it has been noted that learning through feedback may be evidenced when students use it to produce improved work (Boud, 2000). Other measures of student engagement would need to be included, however, for this to be confirmed in this study. Outcomes of both studies suggest there is no inherent deficiency in statement bank feedback returned online and support the notion that providing opportunities for students to ‘close the gap’ encourages engagement with feedback. Where these opportunities or other support are absent, it is clear that a significant proportion of online feedback remains unread. 043: Dialogue+: Promoting first year undergraduate students' understanding of, and participation with, assessment and feedback processes Rebecca Westrup University of East Anglia, Norwich, UK The aim of this research was to support first year students' developing participation with, and understanding of, assessment and feedback processes at university through the promotion of dialogue with lecturers. Central to the
intervention is the use of an interactive coversheet (ICS) (Bloxham and Campbell, 2010), lecturer and peer mentor-assisted workshops, and peer mentoring. To succeed at university, students are required to enter new environments with unfamiliar ground rules whilst they negotiate relationships with tutors and come to understand the meaning of writing assignments and assessment processes. Often, learning about assessment and feedback processes is exacerbated by the tension that students cannot yet participate in the practice of academic writing to the standard required by undergraduate study (Lowe & Cook, 2003) and do not (fully) understand the rules and processes of this practice (Carless, 2006). Bloxham and Campbell (2010: 293) suggest that ICSs are a potential mechanism for encouraging students to take responsibility for their learning within this process, as they allow students to ‘prompt dialogue on the issues of importance to them’. However, they reported that although the ICS did encourage students to think about their work, many found the questioning aspect of the process problematic, as they did not have a clear enough understanding of the standards and requirements of the assessment (writing an essay) to engage in meaningful discussion. For the majority of the students this was exacerbated by feeling too embarrassed or intimidated to ask their tutors for help. This may be a result of a perceived power imbalance between students and their tutors (Boud, 1995). Drawing on the work of the WriteNow CETL (O'Neil, 2008), peer mentoring was considered a less embarrassing or intimidating opportunity for students to develop a meaningful understanding of the standards and requirements of the process that will then enable them to ask questions.
This paper will outline the Dialogue+ process, explore the experiences of students, lecturers and peer mentors, and consider whether the intervention developed first year students' inclusion within assessment practices and their understanding of assessment and feedback processes whilst also promoting dialogue between students and their lecturers. It is hoped that this research will further illuminate the complexities of students' relationships with academic writing and generate new understandings that will be useful in future discussions regarding academic writing, dialogue and assessment processes to ensure a sense of reciprocity between students and tutors within inclusive pedagogical practices. 044: Assessment for Employment: introducing "Engineering You're Hired" Patricia Murray, Andrea Bath, Russell Goodall, Rachel Horn University of Sheffield, Sheffield, UK ‘Engineering You're Hired’ (EYH) was first run in 2013. It is a one-week project, compulsory for all 1000 second year students from the nine departments in the Faculty of Engineering. Students work in inter-disciplinary, multi-cultural groups to address real-world engineering problems. The project week arose from various complementary concerns: too much summative, fragmented assessment; the need to enhance students' competitiveness in a global labour market; and the development of an engineering approach to tackling realistic problems in an industrial context. The learning outcomes for EYH include group working, problem solving and reflection.
A major constraint was that EYH is non-credit bearing. A pass is based on attendance; however, a fail prevents a student's progression. Students commonly recognise credit as indicating academic value, which therefore influences motivation and where they invest effort. Educational best practice has promoted this, and the transparency provided by publicising learning outcomes and assessment criteria enables students to make these decisions. However, is this always the best and only way, or does it lead to learning to the test? EYH was developed to stimulate learning using other methods of reward and motivation. During the week, groups are gathered into ‘hubs’ of six groups with an associated Graduate Teaching Assistant (GTA), staff member and industrial mentor. Involvement of industry is key to making the week distinctive and promoting credibility and value to the students. Most of the time, student groups work on solutions to their problem with industrial mentors on hand to give advice. Each day ends in a ‘boardroom’ akin to the TV show ‘The Apprentice’. Each group, led by its daily team leader, has 10 minutes to present to the board (GTA, staff and industrial mentor) and receive feedback on progress, in terms of both team working and progress towards their proposal. Projects are assessed through a written proposal (assessed by the GTA) and a verbal pitch (assessed by the board). Winning groups are awarded certificates and small prizes, and this is formally endorsed by inclusion in the students' Higher Education Achievement Report (HEAR). EYH is novel and contests conventional approaches to assessment. In developing the week, we have tried to inspire and challenge students, rather than to formally assess them. Our desire is to engage them in an activity which, although non-credit-bearing, has winners and rewards, and provides opportunities for every student to develop and evidence skills that will be of benefit in their future careers.
045: Assessing Student Learning: A Source of Ethical Concern for Higher Education Teachers Luc Desautels1, Christiane Gohier2, France Jutras3, Philippe Chaubet2 1 Cégep régional de Lanaudière, L'Assomption (Québec), Canada, 2Université du Québec à Montréal, Montréal (Québec), Canada, 3Université de Sherbrooke, Sherbrooke (Québec), Canada What should be done with final grades approaching, but not quite reaching, the threshold for success? Is it acceptable to adapt competence standards when dealing with a physically handicapped student or one struggling with mental illness? Is it possible not to be unduly influenced by the institution's academic success plan? How can objectivity be fostered when assessing traineeships, and especially trainees' professional behaviour? These are ethical problems that challenge teachers, whether they work at the primary or secondary levels (Green, Johnson, Kim & Pope, 2007; Pope, Green, Johnson & Mitchell, 2009; Boon, 2011) or in Higher Education (Gohier, Desautels, Joly, Jutras & Ntebutse, 2010). Partial results of our collaborative research focusing on the development of ethical reflection within college faculty (SHRC, 2010-2013) also confirm these findings; in this research, conducted with two groups of college
teachers in Montreal (2012, n = 13) and Quebec City (2013, n = 12), we observed that: - The problem of assessing students' learning is mentioned at the first meeting of each group; - One of the cases is singled out for the main discussion in one of the five subsequent meetings of each group; - When all the verbatim segments are ranked, the "Student Assessment" category (6.27%, n = 137/2186) occupies second place in number of occurrences, just behind the "Values" category (7.41%). The paper will share the portrait of ethical issues in assessment matters as found in the scientific literature and offer our own findings; ways to understand and deal with these ethical issues will then be proposed to participants: adopting common guidelines, developing professional judgment and fostering peer ethical deliberation. 046: Applying assessment regulations equitably and transparently Marie Stowell1, Harvey Woolf2 1 University of Worcester, Worcester, UK, 2ex University of Wolverhampton, Wolverhampton, UK While much has been written on good assessment practice, little attention has been paid to the ways in which re-assessment of students who fail their first assessments operates in practice. This paper, which derives from the Student and Assessment Classification Working Group's (SACWG) recent research on Level 4 progression regulations, explores the extent to which institutional assessment rules and regulations on re-assessment are equitable and transparent. The significant differences in progression requirements between institutions may often be justified on grounds of the specificity or uniqueness of an institution's culture, history or learning and teaching principles. However, it is not usually clear from a reading of universities' assessment regulations why students with the same profile of marks should be treated differently in different institutions.
The rationales for why some institutions allow students to move from Level 4 to Level 5 with a particular credit volume that includes compensated credits, while others do not, are rarely, if ever, stated. Similarly, those institutions that devolve assessment decisions to faculty or departmental assessment boards or module teams risk variable treatment of students within the institution. In the absence of clearly defined rules and regulations, decisions may be made according to the application of a board's folk memory of custom and practice. At module level, too, there is a variety of practices across the sector. Some modules require all assignments to be passed with the pass mark; others allow total compensation between assignments provided the overall result produces the pass mark. Other modules impose a threshold mark to be attained in each assignment even if the overall mark is at or in excess of the pass mark. In addition to these performance standards, some institutions specify additional pass criteria such as attendance at classes or submission of all assignments. The rules
for compensating or condoning failed modules are correspondingly complex - and, often, obscure. The paper discusses the findings from SACWG's re-assessment research and questions the fairness and transparency of these variegated practices in the context of current debates about academic standards, the expectations of the QAA's Quality Code, how greater transparency can be achieved, the role of the student as consumer, the increase in the number of submissions about academic status made to the Office of the Independent Adjudicator, and the central part assessment plays in learning and teaching. 047: E-marking: institutional and practitioner perspectives Carmen Tomas University of Nottingham, Nottingham, UK Across the sector, institutions are facing the challenges of a transition to fully electronic management of assessment (EMA). Clear benefits are recognised at institutional level, for administrative processes, and for the student experience of an end-to-end online approach to setting coursework, receiving submissions and returning marks and feedback. In such a system, e-marking, or on-screen marking, is an essential prerequisite in order to achieve the potential of EMA and its associated aspirations. However, enabling and encouraging the uptake of e-marking practices represents a key challenge. Previous studies have identified types of practitioners and addressed attitudes to e-marking (Ellis and Reynolds, 2013). Institutions need to consider strategies for transitioning to full e-assessment with parameters and target outcomes that take account of practitioners' experiences. This paper explores both perspectives. Most of the existing research in the area of practitioners' experience of e-marking addresses accuracy of marking and overall staff attitudes to technology uptake. It is argued that greater insight into the practitioner's experience of adapting their marking practice is required.
Twenty-four academic participants (early adopters and those new to e-marking) were interviewed about their e-marking experience. The analysis presented here provides additional insights into the experience of e-marking at different stages of the marking process. This perspective helps to differentiate stages in e-marking according to their relative impact. In the light of a greater understanding of the e-marking process, the paper will conclude by reviewing the evidence and discussing institutional strategies for implementing EMA that might reconcile apparently opposing interests within an institution. Many of the considerations throughout the paper could apply to both exam and coursework assessment. However, greater emphasis is placed on coursework assessments, where there is already substantial uptake of technology in the sector.
048: Chinese Tutor and Undergraduate Responses to an Assessment Change Jiming Zhou The University of Hong Kong, Hong Kong Assessment innovation has been on the educational agenda of many countries in recent years. In Chinese higher education, the move towards assessment for learning (AfL) is being positioned to enhance English language programmes. Previous studies indicate the absence of AfL practices in Chinese college English classrooms. However, the empirical question remains as to how tutors and undergraduates interpret and enact assessment change in their local context. Metaphorically speaking, this study aims to slice through the layers of the innovation onion, to reveal the agentive spaces in which tutors and undergraduates perceive and enact a top-down initiated assessment change. This study selects an information-rich case: a university which moved beyond rhetorical engagement and actively engaged itself in a change towards AfL. Four tutors and eight groups of students (n=45) were interviewed at two different times over one academic year. Forty lessons were observed and audio-recorded. Data were analysed following an inductive coding procedure adapted from the qualitative analysis protocols established by Miles and Huberman (1994). Findings indicate perceptual gaps between the four tutors and their students with regard to the three process dimensions of the AfL principles, i.e. where the learner is going, where the learner is, and how to get there. The tutors perceived higher-order skills as the targeted competences of classroom assessment activities, while most students aimed at a wide repertoire of vocabulary and grammar knowledge. The tutor-student disagreement on ‘where the learner is going’ led to a relatively low degree of student engagement in the assessment activities.
Peer assessment was valued by the tutors as an efficient AfL strategy to elicit evidence of ‘where the learner is’, but was implemented in the form of peer grading in the first semester. The summative use of peer assessment was perceived by students as "totally nonsense and unfair". In the second semester, students were encouraged to give peer feedback to inform their peers of ‘how to get there’, and they began to recognise the learning potential of peer assessment. This study unravels some tensions in the promotion of the AfL principles in Chinese higher education. It carries practical implications for implementing AfL in culturally appropriate ways in non-Western contexts. Theoretically, the findings suggest that tutor and student intentions and practices of peer assessment anchor at different points along a continuum. This paper therefore proposes a peer assessment continuum with dimensions of purpose, criteria and feedback.
049: The journey to digital storytelling and artifact-based assessment in Psychology: lessons to be learned from the arts-based disciplines. Diane Westwood University of Sunderland, Sunderland, UK Inquiry-Based Learning (IBL) is assumed to be comfortably research-based, having, as it does, a process and participant/producer orientation (Healy, 2005a). In practice, however, moving to and maintaining a process orientation is not easy when swimming against the tide of a discipline's product focus. In those circumstances, how can we encourage students to enjoy the research journey – or even to see that there is a journey to be taken? Some disciplines are also more attuned to the idea of ‘production’ than others – and disciplinary differences are particularly marked in the arena of assessment. In Psychology, for example, assessment often takes the written form and remains a private transaction between marker and student conducted under the cloak of a VLE. This lies in contrast to the situation in the arts-based disciplines, where assessed artifacts sit visibly at the centre of learning, to be shaped by ‘crits’ and ongoing formative feedback in a public, shared environment. Clearly this would be a highly unusual approach to assessment in Psychology. The highly unusual, however, carries the potential to destabilise and defamiliarise – factors that can free students from the passive consumer role and empower them as producers (Benjamin, 1934), as well as slowing down and highlighting the process (Margolin, 2005). This case study considers the journey toward the digital story artifact as an assessment form in Psychology, with lessons learned from the arts-based disciplines. It considers the powerfully transformative potential of digital stories as a new mode of expression, one that raises awareness among students of how the discipline is evolving and of the contentious nature of some of its developments, in particular those relating to the ‘performative turn’ (e.g.
Gergen & Gergen, 2012). This will help students to locate and justify their work in the context of the discipline. 050: Experiences of co-creating marking criteria Nicky Meer, Amanda Chapman University of Cumbria, Lancaster, UK This Practice Exchange is based upon the experience and results of a five-year longitudinal action research project that focuses on students acting as equal partners in their assessment process. This is done through co-creating the assessment and the marking criteria, and includes peer and self-assessment. This exchange will focus on the co-creation of marking criteria; three versions of the criteria are discussed: one written by the academic staff, one written by the students, and a final version agreed through negotiation and collaboration. This study shows that this type of tutor/student partnership working empowers students and gives them ownership of the criteria, which were then used in a peer and self-assessment exercise. Significant differences in values and language were discovered, with gaps between student understanding and lecturers' academic discourse. The presenters believe that, for students to fully understand marking criteria, they need to be active participants in the process; using discussion and collaboration in this way brings students into the academic community of practice, leading to better engagement and results.
Many assessment and learning concepts underpin this research, such as collaborative learning (Gibbs, 1999; Greenback, 2003), the assessment process being at the 'heart of the student experience' (Brown & Knight, 1994), and the importance of both explicit and tacit knowledge with regard to assessment criteria (Rust et al., 2003; O'Donovan et al., 2004; Rust et al., 2005; O'Donovan et al., 2006; Price et al., 2007). A social constructivist approach was used to enable students to embed themselves in the discourse of academic practice, where experts encouraged and shared knowledge with the novices, as in a community of practice (Wenger, 1998). The aim of this practice exchange is to introduce the work on co-creation undertaken by the presenters in order to create a forum where academics can discuss it, disseminate their own experiences - both positive and negative - and explore how this kind of partnership working could be used or adapted into their own practices. 051: Preconceptions surrounding automated assessment - A study of staff and students. Stephen Nutbrown1, Su Beesley2, Colin Higgins1 1 University of Nottingham, Nottingham, UK, 2Nottingham Trent University, Nottingham, UK This paper reports on the perceptions and expectations of staff and students regarding coursework feedback, with an emphasis on automated feedback. The work is informed by surveys, interviews and focus groups investigating staff and student attitudes to the use of assessment, focusing on automated assessment. Of particular interest is what should be provided in feedback, together with how it should be presented. Automated and semi-automated assessment techniques often focus on time efficiency, consistency, accountability and accuracy (Douce, 2006). With growing student cohorts (UCAS, 2014), these benefits are becoming increasingly valuable.
Substantial research into techniques for automating assessment makes it possible to fully automate the marking of particular types of assignment, for example CourseMarker (Higgins, Hegazy, Symeonidis & Tsintsifas, 2003) for programming assignments or diagrams, and Numbas (Numbas, 2014) for mathematics and statistics. Semi-automated tools that assist markers, such as Turnitin GradeMark (Turnitin, 2014) or Moodle Rubrics (Moodle, 2014), are also commonly utilised to reduce marking time and improve consistency. However, evidence presented in this paper highlights some preconceptions and concerns regarding these techniques from staff as well as students. In particular, concerns regarding fairness, the quality of feedback, and staff-student communication are discussed. Drawing from real-world data, we are able to consider whether these concerns represent real problems or can be addressed by altering perceptions via improved feedback presentation and content. The findings of a survey given to students from different disciplines (Computer Science and Business students), together with focus groups and interviews with students, allow us to highlight initial concerns with automated approaches. After the use of a fully automated system for the assessment of programming assignments, a survey was conducted and contrasted with a pre-assessment survey to identify any change in
concerns and attitudes. Staff perspectives were obtained and analysed via surveys and interviews, with the results also presented. From the data collected, as well as the experiences gained through the use of CourseMarker and the development of The Marker's Apprentice (Beesley, Nutbrown & Higgins, 2014), this paper provides suggestions for addressing the concerns of staff and students regarding automated assessment. Several common best practices to consider for presenting student feedback, particularly when utilising automated assessment techniques, are then presented.
Day 2: Keynote The tensions of transparency in assessment Professor Jo-Anne Baird Director of the Oxford University Centre for Educational Assessment Assessment has come to dominate teaching and learning in many educational settings. Students have become highly strategic learners who want to know how they will be assessed so that they can plan their learning accordingly. Over the past four decades, there has been a move towards greater transparency of assessments, with past papers, assessment criteria, marking schemes and examples of graded work being made available. Understanding what you need to do and opening the ‘secret garden’ of assessment are surely very important principles for empowering students to regulate their own learning and overcoming class-based forms of social capital related to assessment systems. Yet tensions arise in this model between education and coaching. Valuable learning is a product of memorisation and higher-order thinking skills, but if we make things too transparent and overly coach students, we produce dependent learners. In this address, I will critically analyse the tensions arising from transparent assessment practices, drawing upon a national evaluation of examinations in Ireland. Before coming to OUCEA, Jo-Anne was Professor of Education and Coordinator of the Centre for Assessment and Learning Studies at the University of Bristol. Jo-Anne previously held the position of Head of Research at the Assessment and Qualifications Alliance, where she managed the research programme and was responsible for the standard-setting systems for public examinations. She was also a Lecturer at the Institute of Education in London. Her first degree and doctorate were in psychology and she has an MBA. Jo-Anne is a Visiting Professor at Queen’s University Belfast and Executive Editor of the journal Assessment in Education: Principles, Policy & Practice, and has recently been working on the relationship between assessment theory and learning.
Day 2: Abstracts 052: Placement for Access and a Fair Chance of Success in South African Higher Education Institutions Robert Prince University of Cape Town, Cape Town, South Africa The majority of South African (SA) Higher Education (HE) students drop out before completing their degree studies and only 27% of those who complete their undergraduate programmes do so in minimum time (Scott et al., 2013). The challenges of low throughput rates and high dropout rates have meant that extended degree programmes, where degrees are formally done over a longer period of time, have become a feature of the HE landscape in South Africa. In 'A proposal for undergraduate curriculum reform in South Africa: The case for a flexible curriculum structure' (Scott et al., 2013) a case is made for how the extended curricula can be operationalised in a more flexible manner. One difficulty associated with the extended curricula as well as the flexible curriculum structure is determining which students will benefit from an extended programme and which will cope with doing their programme of studies in minimum time. In South Africa there are two assessments of school-leavers that are pertinent to this debate. The first is the national school-leaving examination, the National Senior Certificate (NSC), which is a statutory requirement for entry into higher education. The results of the NSC are norm-referenced and therefore often difficult to interpret for the purposes of admission and placement. The second assessment is the National Benchmark Tests (Griesel, 2006). The National Benchmark Tests (NBT) Project is criterion-referenced and tests students in three domains: Academic Literacy (AL), Quantitative Literacy (QL) and Mathematics (MAT). 
Two of the NBT project objectives are a) to provide a service to HE institutions requiring additional information to assist in the admission (selection and placement) of students in appropriate curricular routes and b) to assist with curriculum development, particularly in relation to foundation and augmented courses (or similar). This paper describes the two assessments and argues that using the results of the two assessments in a complementary way is the most productive approach for the purpose of placement of students into extended or flexible curriculum programmes. 053: Students' responses to learning-oriented assessment David Carless University of Hong Kong, Hong Kong The aim of this paper is to explore how students respond to key aspects of their assessment experience. A deeper understanding of the student response is a facilitating factor for the development of effective learning-oriented assessment. The existing knowledge base on students' responses to assessment suggests a number of issues. The processes surrounding assessment can often lead to a certain amount of stress, anxiety or extreme poles of euphoria and trauma (Cramp, 2012). Students express preferences for certain types of tasks over
others, e.g. examinations and group work are sometimes least favoured (Flint & Johnson, 2011). The literature also evidences various student dissatisfactions with feedback, especially but not only in relation to issues such as usefulness, timeliness and comprehensibility of feedback (Evans, 2013). The evidence base for the paper is 90 individual semi-structured interviews with 54 students from the disciplines of Architecture, Business, Geology, History and Law. Student participants were informants in a wider study that explored in depth the practices of five award-winning teachers from these disciplines. The main research question is: what are students' perceptions of the assessment processes which they experienced in the modules under investigation? Students were interviewed individually, some more than once, about their experiences of various aspects of assessment in these modules. Interviews were transcribed verbatim and analysed inductively to identify and explore emergent themes. The findings discuss what the students thought about good assessment task design, including their perceptions of the strengths and limitations of different kinds of task. Students valued assessments which involved some flexibility and choice, and favoured assignments which mirrored real-life uses of the discipline. Criteria in the form of rubrics were generally seen as vague, and students were not convinced that they fully represented what teachers used to evaluate student work. Students perceived feedback as useful when it was timely and skilfully embedded within the day-to-day running of the course, but feedback was found to be frustrating when it did not connect with students' needs; this was particularly apparent with those who were less academically successful (cf. Orsmond & Merry, 2013). 
Implications from the study include analysis of the nature of effective assessment task design; strategies for promoting student engagement with rubrics; and some thoughts on how feedback processes might be developed to support less academically successful students more effectively. 054: How mature are your institutional policies for academic integrity? Symposium see 054, 062, 074 Irene Glendinning Coventry University, Coventry, UK It is clear that robust but proportionate institutional policies for managing academic conduct and encouraging scholarly academic practice should be central to the assessment strategy of any Higher Education Institution. As part of continuing research into policies for managing academic integrity in different parts of Europe, the author has developed a toolset, the Academic Integrity Maturity Model (AIMM), for measuring the quality of institutional policies and systems. The prototype tool, based on metrics for comparing national data designed for the IPPHEAE (Impact of Policies for Plagiarism in Higher Education Across Europe) project, has been piloted using survey data collected from several different European institutions. The resulting analysis was found to be useful in guiding the development and refinement of institutional policies. Sample results, and the
AIMM metrics on which they were based, will be presented for discussion by participants. A different toolset with a similar purpose, the Academic Integrity Rating System (AIRS), has been developed by the International Center for Academic Integrity (ICAI) for assessing policies in educational institutions in the United States of America. The creators of AIMM and AIRS are working together to develop a unified set of metrics that can be applied across the globe, informed by other recent research. The aim of both tools is to indicate strengths and weaknesses of institutional policies and practices in order to guide further development. The success of the hybrid toolset depends on whether it meets the needs of institutions in different parts of the world, how accurately the metrics assess current policies, and the quality of guidance they provide for strategic direction. Participants will be invited to explore and critique the latest ideas for the combined toolset. Examples and ideas from participants' own experiences will help to contribute towards this development. 055: Why is formative assessment so complicated? What is behind the push-me, pull-you relationship between theory and practice and how can we all move forward? Donna Hurford University of Southern Denmark, Odense, Denmark, University of Cumbria, Lancaster, UK In this presentation, informed by a literature review, I offer evidence for the proposition that the various theoretical expectations of formative assessment are problematic for classroom practitioners, whether school or university teachers. Formative assessment is multi-dimensional (Brookhart, 2004). Its purported learner-centredness secures its significance for social constructivist and self-regulatory learning theories; in addition, critical theory stakes a claim. 
Whilst this multi-dimensionality may affirm formative assessment's theoretical status, it also confounds attempts to clarify the theorisation of formative assessment and to secure a universal understanding and application of the concept. Sadler (1989) identified the absence of theorisation of both formative assessment and feedback as a concern 25 years ago, and the concern remains (Taras, 2007; Bennett, 2011). Whilst variation is inevitable, it is argued that without agreed conceptualisations of formative assessment, comparative studies are too diverse to be meaningful and therefore contribute little to the evidence base for formative assessment (Bennett, 2011). The push-me, pull-you relationship between formative assessment's theory and practice is illustrated in both theoretical and practice-based studies. To take critical theorisation as an example, Crossouard and Pryor (2012) interpret teachers' formative assessment practices as evidence of policing rather than politicising education. And yet, in England at least, teachers and schools are not encouraged to politicise education; rather, they are expected to conform to national educational policy. Further studies indicate that interpretations of formative assessment are evident in both convergent and divergent teaching and assessment practices (Torrance and Pryor, 1998; 2001).
A review of assessment studies suggests that the unreliability of the comparative studies of formative assessment is not only due to its theoretical multidimensionality but also to the variation in teachers' applications of formative assessment. How formative assessment is taught is frequently criticised; teachers in schools and universities struggle to see how theorised formative assessment can be applied to classroom practice, leading to diverse interpretations. The variability in both conceptions and applications of formative assessment is evident in reviews of classroom practice (Black and Wiliam, 1998; Boyle and Charles, 2010; 2014). In response to such findings, practice-based studies have been found to successfully realign interpretations of formative assessment with its purported constructivist and self-regulatory roots (Torrance and Pryor, 2001). To conclude, I will invite discussion about such studies and suggest the need for a comprehensive review of theorisations identifying shared and alternative conceptualisations of formative assessment. Full references will be provided in the presentation. 056: Changing the Assessment Imagination: designing a supra-programme assessment framework at Faculty level. Jessica Evans1, Simon Bromley2 1 The Open University, Milton Keynes, UK, 2Sheffield Hallam University, Sheffield, UK As the HE sector seeks to support a more cohesive and holistic student experience (Harvey and Kosman, 2014), the enhancement of frameworks to support this can bring about sharp encounters with the practices and assumptions of individual courses and modules. Whilst there are recognised organisational processes that are more likely to lead to success in assessment innovation - for example, the curriculum mapping audit and enhancement process (O'Neill, 2009; Jessop, 2010) - identifying and defining the key principles that specify the overall objectives of an assessment framework can still be difficult (Sumsion and Goodfellow, 2007). 
In this paper, we describe a major Faculty change project that created a new set of policies and principles for assessment for the whole curriculum, spanning ten programmes in social sciences and psychology. The paper describes the need for, process and outcome of the project and reflects on the obstacles encountered, in particular the deep attachments staff had to a modular perspective on the student experience and to assessing subject knowledge at that level, rather than programme-wide outcomes. It also makes the case for the conceptual formulations - that is, the actual ways in which the policy is written - that an assessment policy needs in order to be workable within a modular structure where modules contribute to a range of different programmes as both optional and core/compulsory. To work, the policy had to be meaningful - that is, create a cohesive and coherent assessment experience - but not restrictive, insofar as it had to allow for each programme's distinctiveness. Our assessment project had the objective of assuring that modular learning outcomes contributed, coherently and developmentally across levels of study, towards programme-wide learning outcomes and graduate attributes. A curriculum mapping audit (O'Neill, 2009; Jessop, 2010) revealed a repetitive and limited
range of assessment methods with skills distributed arbitrarily over study levels (Price et al., ASKE, 2012; Sadler, 1989). We needed students to have an opportunity to internalise assessment criteria and develop independent learning skills and key practical and professional skills. A key part of the process was staff engagement, bringing the previously widely distributed activities of assessment setting across individual modules together into programme-level teams. The paper suggests that it is only when staff work more collaboratively, in their day-to-day assessment work, that students will experience a coherent assessment diet. 057: ‘Leave me alone I’m trying to do my work’ - The discrepancies between staff and students’ perceptions of feedback and assessment practices Monika Pazio, Duncan McKenna University of Greenwich, London, UK Feedback and assessment is often perceived as the aspect of the student academic experience that causes anxiety but also heavily influences students’ learning and satisfaction with their university experience. It is also the aspect that has received considerably low scores in the NSS and is hence considered to be in need of improvement across the HE sector (Gibbs and Simpson, 2004; National Student Survey, 2014). In order to implement change, one needs to have an understanding of what factors of feedback and assessment practice contribute to general dissatisfaction. Those factors have been widely identified from the students’ point of view (Krause et al., 2005; Carless, 2006; Sambell and McDowell, 1998). However, it is interesting and important to contrast students’ views with how staff perceive they cater to the needs of their students in relation to feedback and assessment, and to identify where and why gaps emerge. The paper therefore looks at and compares students’ and staff perceptions of what feedback and assessment look like on their programme, with the aim of identifying discrepancies between the views of those two groups. 
The identification of the reasons for those discrepancies can lead to a better understanding of the two perspectives and, as a result, to more appropriate solutions to the pressing issues. The data were obtained through the mixed-methods TESTA methodology (Gibbs and Simpson, 2004) and collected across 5 programmes from 4 faculties, therefore providing a snapshot of practice across the institution. Staff data were collected through the programme audit conducted with 5 programme leaders and the peripheral involvement of other teaching staff members. Student data were obtained through a questionnaire, with 33% of the total number of students enrolled on the 5 programmes responding, and 7 focus group interviews with a range of 2-6 students per group. The quantitative data were analysed using correlational analysis in SPSS and main themes were identified from the interview data using thematic analysis. Those themes reflected the practice from across the 5 programmes to provide a picture of possible cross-institutional misunderstandings. The areas that emerged as contradictory were the quantity and quality of feedback, variety of assessment, communication of goals and standards, as well as fairness.
058: The impact of the assessment process and the international MA-TESOL course on the professional identity of Vietnamese student teachers David Leat1, Tran Thanh Nhan1,2 1 Newcastle University, Newcastle upon Tyne, UK, 2Vietnam National University, Hanoi, Vietnam This multiple case study research is grounded in the school of social constructivism. The ecological activity system utilised as the operational conceptual framework in this research is adapted from the second and third generations of Engeström's (1987) activity system, in combination with Pryor and Crossouard's (2008) socio-cultural theorisation of formative assessment and Hodgson and Spours's (2013) high opportunity progression eco-systems. The assessment process is perceived as encompassing both summative assessment tasks and formative feedback, while professional identity is regarded as a case of multiplicity in unity and discontinuity in continuity (Akkerman and Meijer, 2011). This qualitative research follows narrative inquiry and employs active, semi-structured interviews as the primary data collection method and documentary analysis as the supplementary one. It calls for the voluntary participation of fourteen Vietnamese student teachers, both current students and alumni, in four major Anglophone countries: Australia, New Zealand, the United Kingdom, and the United States, and analyses four sets of course syllabi in those four countries. The thematic data analysis has yielded insights into the positive impact of the assessment process on four major aspects of professional identity: cognition, affection, behaviour, and socio-culture. Other factors of the international MA-TESOL courses - the subject, the mediating artefacts, the rules, the community, and the power relation - have also been elaborated as exerting a salient impact on professional identification. 
Endeavours have also been made to depict the long-term impact of the assessment process and the MA-TESOL course on the continued career paths of the student teachers on their return to Vietnam. The research findings yield qualitative evidence about the links between the assessment process and MA-TESOL courses and the professional identity development of Vietnamese student teachers. On a larger scale, the study may facilitate cross-institutional understanding of assessment policy and practice in various Anglophone countries and serve as a reference for improving language teacher training programmes so that they are more beneficial to learners. 059: Conceptualising Fellowship of the Higher Education Academy (HEA) as an assessment process Nicola Reimann1, Ian Sadler2 1 University of Durham, Durham, UK, 2York St John University, York, UK This paper critically examines HEA Fellowship and the UK Professional Standards Framework (UKPSF), using conceptual tools taken from the field of assessment. It suggests that gaining Fellowship exhibits the characteristics and challenges which have been highlighted in relation to assessment, and that problems experienced locally are due to the conflicting purposes, uses and interpretations of Fellowship. Despite the existence of the UKPSF since 2006, the fact that UK HEIs are now required to return Fellowship statistics to the Higher Education Statistics Agency (HESA) and that an increasing number of HEIs therefore expect their staff to obtain
Fellowship, there is surprisingly little scholarship about the Fellowship phenomenon. Searches using the British Education Index returned a very small number of peer-reviewed publications. Several, potentially conflicting, purposes of assessment (Gipps 2012) underpin HEA Fellowship. At the national and institutional level Fellowship is used for accountability purposes (HESA returns), whilst its explicit purpose is to support CPD, thus serving as assessment for learning (Wiliam 2011) as well as involving an element of certification. Although commonly referred to as 'CPD schemes', HEA accreditation of institutional schemes focuses on the rigour of decision making, i.e. the summative assessment process, which contains typical features such as criteria (the UKPSF) and internal and external moderation, whilst formative assessment is less scrutinised. Assessment formats vary considerably, including professional dialogue (Pilkington 2012), e-portfolios and reflective texts. This not only raises questions about principles such as validity, reliability, transparency, attribution etc. (Bloxham and Boyd 2007), but also about the differences in emphasis between summative and formative assessment which are implicit in the formats adopted. Anecdotal evidence suggests that individuals hold contrasting understandings of Fellowship, which determine the nature of their engagement, similar to what is known about conceptions of teaching (Trigwell and Prosser 1996) and assessment (Samuelowicz and Bain 2002). Institutional targets are likely to result in instrumental approaches to applications, akin to assessment as learning (Torrance 2007). Particular challenges arise in relation to standards. Bloxham stresses the tacit and socially constructed nature of standards, which are 'learnt informally through active participation in relevant communities and practices' (Bloxham 2012, p.188). 
However, it is unclear which community and whose practices underpin Fellowship decisions. Fellowship is owned and influenced by a multitude of stakeholders with conflicting interests and understandings. As it is establishing itself as a benchmark within the sector (Hibbert and Semler 2015), the standards are concurrently constructed and negotiated. 060: From research to practice: The connections students make between feedback and future learning Stuart Hepplestone, Helen J. Parkin Sheffield Hallam University, Sheffield, UK This paper will share the findings and present recommendations from a research study at [name] University which aimed to better understand the connections that students make between the feedback that they receive and future assignments, and explore whether technology can help students to make better connections. The project interviewed ten tutors and twenty students using a semi-structured approach. The findings of the study cover each stage of the assessment process from the perspective of both staff and students (including submission of work, giving and receiving feedback, storage and future use of feedback). The study found that students need to be able to access the resources that will enable them to complete assignments to the best of their ability. Students were more likely to refer back to feedback where they could establish connections between two assignments, and when it was accessible to them at the point of writing their next assignment. Students generally appreciated the ease and convenience of online submission and of feedback issued online. There was,
however, a preference for hard copy because of circumstance (i.e. it is easier to print an electronic copy than to convert hard copy to an electronic format). Students were less likely to engage with their feedback if there was a separation of grades and feedback, with grades being published first. Our staff work hard to provide high-quality feedback that is valuable to students. The methods that they use to create and issue feedback vary, and this is mostly due to personal preferences. Students must rationalise the different forms and mediums through which they receive feedback in order to understand and engage with it effectively, and this can be problematic. Our challenge is to achieve consistency from the student perspective whilst embracing the variety of practice from the staff perspective. The recommendations from the study are two developments: (1) an 'end-to-end online marking experience for staff' that facilitates ease and efficiency of marking online with techniques that both they and their students are familiar with; and (2) an 'assessment and feedback store' enabling students to retain all feedback from all modules in one place alongside an assessment calendar and advice on how to use feedback effectively. During this session, the methods used by the project will be presented, along with an overview of the findings, the recommendations and subsequent actions taken by the University to make changes to policy and practice. 061: Live Peer Assessment: Its Effects and After Effects Steve Bennett, Trevor Barker University of Hertfordshire, Hatfield, Hertfordshire, UK Over a 4-year period on a first-year course in e-media design, we have noted evidence of increased engagement and performance among students. This we believe has been driven by an intervention involving live peer assessment of previous cohorts' work using EVS clickers. 
This is in line with other research on the influence of "feed-forward": namely, exercises to involve students in the evaluation of other students' work in order to acquire familiarity with the marking criteria. In a series of focus groups we believe we have identified the factors in this as:
- a clearer sense of goal
- a stronger sense of mapping of the explicit criteria in an assessment with exemplars of performance.
However, it is one thing to know more clearly what one is attempting to do - it is another thing to actually do it - a process which requires time management, self-evaluation, problem reframing and help seeking. In this session we will re-examine the focus group discussions undertaken in these years to look more intently at the kinds of strategies students have employed to improve the quality of their work following a feed-forward intervention. We will also show examples of a greater self-knowledge arising from participation in these activities - and how students define themselves in the context of their peers. We will relate these concepts to Schön's concept of an "Appreciative System" in
the theories associated with reflective practice, as well as the more social understandings expressed in the work of Bourdieu and Wenger. 062: International postgraduate students and academic integrity: challenges and strategies to support Symposium see 054, 062, 074 Mary Davis Oxford Brookes University, Oxford, UK Differences of culture or education and language barriers are often cited as factors contributing to problems with academic integrity among international students (Jiang and Sharpling, 2011; Pennycook, 1996). These problems may result in a lower pass rate for international students: according to HESA (2009/10), the percentage of non-EU full-time Master's students who left without an award in 2009/10 was slightly higher than that of EU students. Furthermore, Behrens (2010) reported that 39% of complaints to the Office of the Independent Adjudicator came from postgraduates, 22% of these were from outside the EU, and most of these complaints related to plagiarism. It seems essential to respond to this challenge to assessment by gaining a greater understanding of international students' practices and needs related to academic integrity. This session will focus on a 2-year study of eight Asian and North African postgraduate students through an English for Academic Purposes Pre-Master's programme and their subsequent Master's degrees in business, public relations and technology at a UK university. It will draw on assignment and interview data which demonstrate the longitudinal development of their practices and the challenges they experience. It will also include interviews with their tutors to gain further insights. The students and tutors report different understandings of the university's plagiarism definition, which could cause problems in assessment. In addition, there appears to be a gap between the high expectations of tutors with regard to students' source use and the availability of support. 
The session aims to raise awareness among participants of the issues for international students and how they may be addressed to enable them to perform successfully. Interview comments from well-known researchers of academic integrity will be used as starting points to promote discussion of strategies to respond to international students' needs. References Behrens, R. (2010). 'Learning from student plagiarism complaints'. Paper presented at Institutional policies and procedures for managing student plagiarism. ASKe, Oxford Brookes University, 25 May. HESA (2009/10). HESA Student Record (2009/10). Available at http://www.hesa.ac.uk/requests Last accessed 28/6/14. Jiang, X. and Sharpling, G. (2011). 'The impact of assessment change on language learning strategies: The views of a small group of Chinese graduate students studying in the UK'. Asian EFL Journal, 13 (4), 33-68. Pennycook, A. (1996). 'Borrowing others' words: text, ownership, memory and plagiarism'. TESOL Quarterly, 30, 210-230.
063: Higher education teachers’ assessment practices: Formative espoused but not yet fully implemented Ernesto Panadero1, Gavin Brown2 1 Universidad Autónoma de Madrid, Madrid, Spain, 2The University of Auckland, Auckland, New Zealand Background & aim Despite advocacy for Formative Assessment / Assessment for Learning (FA/AfL) practices, there is relatively little information about the actual implementation of FA/AfL in higher education settings. Because teacher and instructor perceptions about FA/AfL are crucial for its implementation, our aim was to explore how higher education teachers self-reported these practices. This was done with a special focus on self-assessment and peer assessment, among other aspects of formative assessment (i.e., feedback and use of rubrics). Methodology A non-experimental, anonymous, self-report survey of 155 teachers in 6 large Spanish universities was conducted. The participants (males 36%, females 32%, sex not reported 33%) had an average of 17.31 years (SD = 10.68) of teaching experience. Findings Preliminary analysis showed that 83% reported having attended pedagogic training courses, but only 43% had received courses on assessment topics and only 35% had training in formative assessment. A relatively narrow range of assessment types was used: classroom participation 70%, essay-type exams 67%, test-type exams 59%, group presentations 53%, individual presentations 35%, portfolios 19%, and oral exams 6%. Slightly more of the assessments were individual (73%) compared to group work (69%). The teachers considered assessment to be very important on their courses (89% gave a rating of 5 or more out of 7), with a similar proportion (82%) giving the same score range to the statement that students’ main aim is just to pass the course. In contrast, teachers considered (86% gave ratings of 6 or 7) that the goal of assessment is to generate information about students’ progress. 
Most considered that grades should not be normally distributed but should be awarded in accordance with students’ achievement (76%) and that all students should pass if they achieved the course goals (83%). Just over half (56%) of the teachers reported using FA/AfL practices, which is around the same number who claimed to have implemented self-assessment (54%) and more than those using peer assessment (37%). The sharing of assessment criteria with students was frequent (63% indicated always) and almost all (89%) considered the assessment culture of their department to be positive. However, only 33% claimed to give feedback at least weekly, with 60% giving it only monthly or quarterly. The main conclusion is that Spanish higher education teachers endorse the ideas of FA/AfL but may need further training or more supportive environments to implement the full range of practices more effectively.
064: Examining student theses: similarities and differences in relation to examiners' experience Mats Lundström1, Lars Björklund2, Karin Stolpe2, Maria Åström3 1 Malmö University, Malmö, Sweden, 2Linköping University, Linköping, Sweden, 3 Umeå University, Umeå, Sweden Background As one important examination among others, students in higher education are expected to write different kinds of texts that are academically correct. It is common for students at the end of their undergraduate education to write some kind of thesis. In the Swedish education system, this thesis should fulfil two different goals: it is used both for demonstrating knowledge of subjects such as science or pedagogy and as a qualification for future studies, for instance PhD studies. However, what is regarded as academically correct and of good quality may differ both between subject traditions and between examiners. Some studies (e.g. Härnqvist, 1999) have demonstrated large differences in examiners' opinions on what constitutes good quality in theses. On the other hand, Bettany-Saltikov et al. (2009) found very good inter-rater reliability between examiners from different disciplines in a small-scale investigation using generic assessment criteria on master's theses. This double aim of the theses might reinforce, or at least generate, different emphases on criteria among examiners depending on experience, subject tradition and focus on purpose. This paper discusses which criteria examiners on teacher education programmes use when they examine students' theses. Method A web-based questionnaire was sent out to 120 examiners from three different universities in Sweden. The main part of the survey contains 45 criteria that examiners in earlier interviews stated as important in student theses (e.g. relevant research questions, depth of analysis, a clear 'red thread') and a Q methodology was used (Shemmings, 2006).
In the Q methodology, the informants were first asked to sort the criteria into three piles: less important, important, and most important. The informants then sorted the same criteria into further piles, finally ranking them from 1 to 9, where 9 is most important. This ranking formed a Q-grid. The informants were also asked some background questions about their experience as examiners, their major subject and the kind of pre-service teacher education they mainly work in. Analysis / Results The first analysis focused on similarities and differences between novice and experienced examiners. Earlier research (Kiley & Mullins, 2004) indicates that less experienced examiners pay more attention to institutional criteria; in the same study, experienced examiners tended to take a more holistic approach to examination. Results will be presented at the conference.
065: From practice-oriented and academic traditions to academic professional qualifications - A historical view of Swedish teacher education Karin Stolpe1, Mats Lundström2, Lars Björklund1, Maria Åström3 1 Linköping University, Norrköping, Sweden, 2Malmö University, Malmö, Sweden, 3 Umeå University, Umeå, Sweden This paper aims to discuss Swedish teacher education from a historical perspective with a focus on academic writing. Until 1977, teacher education was separated according to the age of the pupils the student teachers were aiming to teach. Elementary teachers were trained in so-called Training Colleges by teachers who were practitioners, and the teaching was practice oriented (Hartman, 1995). Teachers for secondary and upper secondary school, however, have since the 19th century been trained in higher education (Linné, 1996), where they first took their disciplinary degree and later their teacher's degree. In 1977, the Swedish Government decided that all teacher training should be higher education and thereby part of the universities. When the Training Colleges and teacher education for secondary and upper secondary school came together in the universities, two different traditions had to merge into one whole (Andersson, 2002). This was also the time when the concept of "academic professional qualification" was introduced, as a way to point out that the education should be both practice oriented and academic. Today, almost 40 years later, these two traditions are still visible in teacher education, not least in the requirement that teacher education be based on both "research and proven practice". In meeting the aim of this paper, we seek the historical roots of, on the one hand, academic traditions and, on the other, practice-oriented traditions. The empirical data consist of official reports, governmental decisions, regulations and laws.
We have chosen to focus on the student thesis, since it was introduced as mandatory in teacher education in 1993 (SFS 1993:100) as one step towards making teacher education more research based (Bergqvist, 2000). We have analysed the ways in which authorities discuss whether practice orientation and academic traditions could meet in the student thesis and, if they could, how this is manifested. Furthermore, we have interviewed teacher educators who examine student theses today, and analysed how they see these two traditions as criteria for their examination. Preliminary results show that the governmental and official reports do not say much about how practice orientation and academic traditions could or should meet in the student thesis. Even so, the thesis is an important part of the official evaluation of Swedish teacher education (HSV, 2010:22 R; HSV 2008:8 R). However, some teacher educators seem to value practice orientation more highly than others do. More results will be presented during the conference.
066: Improving Communication of Assessment Task Requirements and Expectations Through Improving Assignment Brief Design Garry Maguire, Fiona Gilbert Oxford Brookes University, Oxford, UK Staff-student communication and dialogue about assessment needs to be as effective as possible. This is especially challenging with increasingly varied and complex assessment types and an increasingly diverse student body. One specific area of practice associated with this challenge is the design of written instructions for assessment: the assignment brief. The literature highlights the problematic nature of students having to read the assessor's mind and unpack the task from the instructions in the brief (e.g. Race, 2007; Sadler, 2010; Sloan and Porter, 2010). Research indicates this can have unnecessarily detrimental effects on student attainment, particularly for specific student groups (Cousin and Cureton, 2012). There have been calls for greater clarity in assignment briefs (e.g. Carroll, 2005), but to date there is, apart from, for example, Williams and Reid (2011), a lack of research into the student experience of processing instructions and, apart from Gustavson-Pearce (2009), little experimental research into assignment brief design and communication. The project research involved an investigation into the nature of assignment text types across all disciplines, an institution-wide analytical survey of assignment briefs, semi-structured interviews with students focusing on their experience of processing assessment instructions, and a national survey of staff perceptions of producing assignment briefs. Findings informed the development of a recently disseminated continuing professional development resource designed to improve the effectiveness of staff–student communication about assessment task expectations and requirements.
It also supports staff in ensuring inclusive practice, in facilitating academic literacy skills development and ultimately in enhancing student performance. This resource and the Assignment Brief Communication guidelines will be explained and discussed. An account of a university-wide pilot scheme wherein the guidelines were applied to assignment briefs on a consultancy basis will also be given, and recommendations for effective implementation across the sector discussed. 067: Enhancing Engagement through Collaboration in Assessment Daniel Russell, Barry Avery Kingston University, Surrey, UK As academics we recognise the importance of student engagement. One way to achieve this is by actively involving students in class. In the authors' faculty there has been a gradual move away from the traditional large lecture / small tutorial approach to course delivery towards mixed-delivery sessions in a two-hour block
in flat and flexible teaching rooms. This approach enables a more interactive pedagogy and has encouraged academics not only to re-think how they deliver the course material, but also how they assess students. In this paper we describe a data analysis assessment on a first-year Business module in which the students produce a questionnaire, generate the data and decide on the topics for analysis. Most examples of collaboration in assessment focus on the grading of an assessment, where students are involved in the summative assessment process either as peer assessors or self-assessors. Our approach to collaboration fosters student engagement earlier in the assessment and also provides several opportunities for feedback from a variety of sources. In groups, the students developed questions relevant to students on the module, which were presented to their peers and the course tutor for selection. This resulted in a set of questions that was used to construct a questionnaire. There were two main strands of collaboration or collaborative learning: students collaborating within their group to develop potential questions, and collaboration between the academic and the students in the construction of the questionnaire. Focus group feedback indicates that the students engaged with the process. We will use student performance to measure the impact of this collaboration on learning. 068: A moving target: assessing the process and progress of learning One hour session John Couperthwaite Pebble Learning Ltd, Telford, UK Assessment practices in HE remain skewed towards the assessment of learning (Race, 2008), resulting in practices that tend not to equip students well for the processes of effective learning in a learning society (Boud, 2000).
The importance of assessment for learning, driven by good feedback processes, is extensively argued (Hounsell, 2008; Molloy and Boud, 2013; Nicol et al., 2013), as is the need to develop in learners the ability to make evaluative judgements about their own performance (Cowan, 2010). This session considers these assessment issues through the lens of professional degree routes, where reflecting upon learning is an authentic assessment choice within the curriculum, as well as a programme and professional requirement. Teachers and students are typically joined by external mentors, assessors and supervisors as part of an authentic assessment process, exacerbating the normal assessment challenges of validity, reliability and transparency. This transactional complexity is further compounded by an educational desire to observe, support and verify the developmental journey of the learner: ipsative assessment designed to motivate users by marking progress (Hughes, 2014). The session will begin by considering the role of 'external' assessors involved in the iterative, and ipsative, assessment of trainee nurses at La Trobe University. By example we will relate how practice requirements have directly led to the
development of tools which allow assessors to 'peer into the past', making earlier work visible for assessment; and will discuss the importance of robust tracking of assessor input over time. We will then consider the use of assessment metrics to assure the quality and reliability of complex workplace clinical assessments and workforce skills that rely on assessment, both internally and externally. The context for these assessments is the observation of clinical practice skills, recorded by the employee/lifelong learner on reflective templates alongside evidence of contemporary learning. These need both digital verification in the workplace and collation in portfolio form as evidence of skill and learning acquisition. Drawing upon work from Edith Cowan University, we will discuss how the process they have developed within an industry setting places the employee/lifelong learner firmly in charge of their assessment and learning journey. The final part of the session draws upon assessment practice from the University of Edinburgh, where learners are led to reflect deeply on their development over time and to plan for future progress. 069: The Abstract Labour of Learning and the Value of Assessment Paul Sutton University of St Mark & St John, Plymouth, UK In this paper I seek to sociologically re-imagine (Wright Mills, 1959) learning in Higher Education as a form of abstract academic labour, and assessment as the commodity such labour produces. Marx's (1946) analysis of the twofold character of labour, I argue, can enable us to see how learning, assessment, and student subjectivities may be structured within the neo-liberal university. Firstly, I will describe the pivotal relation Marx (1946) establishes between the use and exchange value of labour (Holloway, 2010; Winn, 2015). I will then argue that learning has become a form of abstract labour, assessment has been commodified, and the 'social character' of the learner has become alienated (Fromm, 2003).
This encloses assessment within the limiting market concept of value. Hence, assessments are completed to be exchanged for a grade and are valued as a means to the end of certification and thereafter a job. However, the neo-liberal fate of learning and assessment is not immutable. As Holloway (2010) argues, there are cracks in capitalist educational edifices in which we can work 'in-against-and-beyond' abstraction, commodification, alienation and market value. Interstitial possibilities exist for the abstract labour of learning to become 'concrete doing', assessment conscious life activity, and students self-directed producers. We need, however, a dialectical perspective to unveil these possibilities. A dialectical perspective frames the educational world as a complex web of interpenetrating opposites, each relationally contained in the other (Ollman, 1976). Thus, quantity can be transformed into quality: 'How much do I need to learn to get a 2:1?' may become: 'How can I creatively engage with my assessment task?' This transformation is energised by the contradictory and fetishised social relations in the knowledge production process: between learners motivated by getting a job
(Fromm, 2003) and teachers motivated by emancipatory academic values (Freire, 2005). Through dialectical struggle a re-valorisation of learning and assessment is possible, but this process is often momentary and incomplete, characterised by nullity and refusal as well as moments of engaged, self-directed student activity (Marx in Fromm, 2011). Complete transformation exists as the 'Not-Yet' (Bloch, 1986). Yes, the present historically specific reality of learning and assessment in the neo-liberal university is limited. But interstitial possibilities exist for education to become other than it is, to become more authentically human. The present is latent with alternative futures that critical hope and a pedagogy in and for itself can help us realise (Sutton, 2011; 2014). 070: Phenomenographically exploring students' utilisation of feedback Edd Pitt University of Kent, Canterbury, UK This paper proposes a conceptual cyclical assessment and feedback model which attempts to further understand the problematic nature of feedback within higher education. Whilst at university, students experience many instances of feedback on their work. Quite often such feedback is delivered by academic lecturers via a monologic transmission process, in the hope that the student will utilise it and improve in their next assessment. Frequently lecturers report that feedback does not always have the desired effect of improving a student's subsequent performance (Hounsell, 1987). It also appears that students' emotional responses, motivation, self-confidence and subsequent effort deployment in future assessments following feedback are unpredictable and warrant further consideration. In response to such problems, the present research explored students' experiences of assessment and feedback from a phenomenographic perspective.
Twenty undergraduate social sciences students studying at an English university were asked to pictorially represent their experiences of assessment and feedback and to participate in a one-to-one interview. The interview data were subjected to thematic analysis (Braun & Clarke, 2006), which revealed eight main themes (Lecturers, Emotions, Feedback Cognitions, Efficacy Cognitions, Draft Work, Motivation, Effort and Grades). The findings indicate a multifaceted interpretation of the student experience, and as such a six-stage conceptual cyclical assessment and feedback model is proposed. The conceptual model indicates that a student's achievement outcome, relative to their predetermined expectation level, regulates their emotional reaction and subsequent feedback-related cognitions. The phenomenographic outcome space (Åkerlind, 2002) revealed five categories of description (broken relationship, needy, low achiever, emotionally charged and high achiever). The structure of the variation indicates a hierarchically inclusive pattern, representing how varying forms of behaviour and emotional reaction interact to affect students' processing and subsequent utilisation of the feedback received. The results also suggest that grade outcome was a powerful construct which seemed to foster both adaptive and maladaptive emotions and behaviours. In conclusion, the study suggests that understanding students' individual needs through fostering lecturer-student relationships, alongside dialogic feedback, helps to improve students' propensity to utilise the feedback received.
071: Embedding key transferable skills for success during and after University through innovative assessment Joanne Hooker, Jayne Whistance Southampton Solent University, Southampton, Hampshire, UK Aim of session This presentation will share the experience and results of radically rethinking Institution Wide Language Programmes (IWLP) for academic English at Southampton Solent University (SSU). It will outline how the Languages Department has sought to realign EAP modules with SSU employability initiatives by reconsidering assessment practices to reflect transferable skills and the use of technology. Summary SSU offers IWLP at all levels, including EAP for international students. Building self-confidence and developing the seven core skills outlined by the Confederation of British Industry (CBI, 2011) has become a major focus of employability. Based on Future Fit: Preparing graduates for the world of work (2009) and the BIS report (2012), which encourages universities to deliver graduates who are “well-informed… well-rounded and employable”, a decision was taken to map academic language delivery to those key transferable skills. Traditionally, Applied and Academic English consolidated all aspects of academic writing for undergraduate studies. After a complete review, it retains the academic skills necessary to succeed at university but also covers transferable skills for life beyond academia. The unit redesign makes use of Moodle to create materials accessible 24/7, thereby enhancing the student learning experience and allowing students access to all materials for assessment preparation, including Lecture Capture technology, which gives students detailed presentations on the unit's assessments and has proved very popular. The unit is divided into two parts: applied, to reflect employability, and academic, to support university studies, and utilises e-tools such as LinkedIn, Lynda and Weebly.
The assessments were devised to incorporate not only academic skills, through a written test and an essay, but also a public webpage to showcase student CVs, portfolio work (the essay, a covering letter to complement the oral interview assessment, and a reflection on that interview) and employability skills. Lastly, a job interview based on a real graduate job opportunity completed the requirement to have an oral exam. Feedback after the first delivery has been very positive, with students acknowledging that the unit is challenging, yet citing the combination of covering and assessing academic and employability skills as innovative, engaging and practical. As a result, this approach is now being incorporated into other skills-based units across the Southampton Solent Business School. In sharing best practice we seek to develop our approach to innovative assessment further, as well as to learn from the experience of other institutions.
072: Structuring peer assessment and its evaluation by learning analytics Blazenka Divjak University of Zagreb, Faculty of Organization and Informatics, Varazdin, Croatia Today's society, characterised by rapid social and economic change and the evolution of ICT, is giving rise to new competence needs, such as self-regulated and peer learning, evaluation of peer work and metacognitive skills. Their development is enabled by deep learning (Entwistle, 2003) and by assessment clearly connected with learning outcomes (Biggs, 2003) that comprise key competences. Our research is based on the Embedded Assessment Paradigm (Redecker & Johannessen, 2013). Learning analytics were used in order to interpret data about students' learning, assess their academic progress, predict future performance and personalise the educational process. We conducted action research over a three-year period in the course Project Management, at the master's level in Entrepreneurship, in which 131 students were enrolled. Assessment and learning tasks were carefully prepared in the blended learning environment and clearly connected with the intended learning outcomes of the course and the study programme (Divjak, 2012). In the first two years all students' tasks were assessed only by teachers, based on well-defined assessment criteria and rubrics. In the final year we created an innovative form of mutual learning and peer assessment based on the same rubrics as in previous years but applied by the students themselves. There were two tasks in which peer learning was used: essay grading and project grading. The first task (essay grading), with smaller stakes, was also used to train students to assess according to criteria and rubrics in the Learning Management System (LMS) and to enhance student understanding of assessment standards and criteria. Each student assessed three essays randomly assigned to her/him by the LMS.
After that task, students' perspectives on the mutual learning and peer assessment were collected in free-form e-journals. Taking students' comments into account, the second task (project grading) was prepared and peer-assessed. Finally, using the learning analytics collected in the LMS over the three-year period, we answer the following research questions: How can peer assessment be prepared so that it is reliable and valid and at the same time enhances mutual learning? What are students' perceptions of peer assessment, assessment standards and criteria, and the mutual learning activity? Are we encouraging deeper learning through peer assessment? Assessment validity was evaluated against the intended learning outcomes. Comparison of assessment results across the three years gives us a basis for analysis of assessment reliability (Entwistle, 2009). Students' opinions were collected in open-form e-journals and a questionnaire at the end of the course.
073: Student Perceptions of Different Assessment Modes in Computer Programming Courses Suraj Ajit University of Northampton, Northampton, UK Assessment is a process of measuring the extent to which students have fulfilled the expected learning outcomes for a course/module. Universities across the world adopt many assessment strategies for computer programming, including written examinations (closed book, open book), lab examinations, multiple-choice/short-answer exams (paper or computer based), coursework (or assignments) and oral examinations. Again, the format, level and types of questions asked vary across universities. Assessment can take many forms, and it can be argued that the greater the diversity in the methods of assessment, the fairer assessment is to students (Race, 2007). The most effective form of assessment is one that appropriately examines the learning outcomes of the module. Assessment methods are also known to play an important role in how students learn (Brown, 2004). The traditional assessment approach, in which a single written examination counts towards a student's total score, no longer meets the new demands of programming language education (Wang, Li et al., 2012). Because computer programming is problem-solving oriented and very practical, the assessment of programming learning performance is challenging, and conventional assessment is not easily adapted to new developments in computer programming education. This paper reports on the experience of adopting several assessment methods across programming modules for computing courses at the University of Northampton. In order to evaluate the assessment strategies from the students' perspective, a survey questionnaire was developed and distributed among a selected sample of final-year computing students. The paper discusses the results obtained from this survey.
In particular, the paper compares the results obtained with those of other research performed in the same discipline as well as in other disciplines. The key research questions the paper attempts to address are as follows: 1. What is the most effective assessment method for programming modules from the students' point of view? 2. How do student perceptions of computing (programming) assessments compare with those of other disciplines? Is programming any different? 3. How do student performances relate to their perceptions of the assessment methods?
074: Custom essay writing and other paid third parties in Higher Education: what can we do about it? Symposium see 054, 062, 074 Phil Newton Swansea University, Swansea, UK It is very easy for students to pay someone else to complete any type of assessment, including bespoke essays, dissertations and exam stand-ins. The use of such services appears to be widespread and represents a threat to the validity of many commonly used assessment strategies, particularly those involving written assignments. This session will describe the current status of these services with reference to existing academic literature and the mainstream media. We will describe how they are accessed, what they can do and how much they cost. We will then discuss, as a group, how assessment design could be used to limit the influence of paid third parties in higher education. 075: Student understandings and use of learning outcomes in higher education Tine Sophie Prøitz1, Anton Havnes2 1 Buskerud and Vestfold University College, P.O. Box 235, 3603 Kongsberg, Norway, 2Oslo and Akershus University College of Applied Science, P.O. Box 4 St Olavs pl., 0130 Oslo, Norway Learning outcomes (LO) are now being implemented in higher education across Europe. Yet their impact on teaching and student learning is uncertain and a matter of debate and contrasting views. One of the great advantages of learning outcomes, Kennedy et al. (2007, p. 6) argue, is that LO are “clear statements of what the learner is expected to achieve and how he or she is expected to demonstrate that achievement.” While teaching, learning and assessment need to be grounded in some level of explicitly stated expected learning outcomes, there are some dilemmas in this approach that need to be addressed. Firstly, learning is one of those phenomena that cannot fully be prescribed (Eisner, 1979; 2005). Secondly, there are internal tensions in the formulation and use of LO.
Thirdly, students need to understand and see the meaning of LO in the context of their efforts to master the educational programme. The focus of this paper is: how do students understand learning outcomes as described at programme and module levels, in teaching and in assessment? The study draws on data from 7 in-depth interviews with individual students and groups of 2 to 7 students, conducted in 7 different study programmes across three fields of science (medicine/health studies, technology/science and humanities/social science) at four Norwegian HEIs (two university colleges and two universities). The students interviewed found the learning outcomes difficult to understand and too general to be useful for their learning. They also found it hard to see the
alignment between defined learning outcomes, teaching and assessment. Learning outcomes were seen as “a formal thing”, primarily giving direction to what they could expect, and as a “checkpoint” close to final assessment. Yet the findings indicate that the function of learning outcomes varies across programmes, depending on how they are integrated in the flow of students’ work. In some programmes teachers and students actively used learning outcomes. In one programme there was no specified syllabus, and the learning outcomes were the key guiding tools for directing students’ learning. The study contributes to the mapping of students’ understandings of learning outcomes in relation to the more structural aspects of modules and courses in higher education, as well as aspects relating to teaching and assessment. The focus on students’ understandings adds a novel perspective to the Norwegian and European research agenda in the field of learning outcomes. 076: Incorporating digital technologies in the assessment of oral presentations at a distance Stefanie Sinclair The Open University, Milton Keynes, UK One of the major challenges in the delivery and assessment of higher education in recent years has been the impact of digital technologies. It has been argued that ‘the Internet and constant advances in technology have radically changed higher education delivery’ (Blumenthal, 2013, p.19). However, ‘while technology has increasing influence throughout higher education, there is still much to be learned about its effective educational contribution’ (Kirkwood and Price, 2013, p.335). There is growing recognition that more ‘thought needs to go into the development of pedagogical approaches that enable and support the [effective] integration of these new technologies’ into teaching and learning practices in HE (Sinclair, 2011; Sinclair, 2013, p.38).
While it is clear that digital technology has created new opportunities to practise and assess a wider range of communication skills, particularly, but not exclusively, in distance learning settings, the benefits and challenges of the use of digital tools for assessment purposes in HE require further investigation and critical evaluation (Sinclair, 2013; Vonderwell and Boboc, 2013, p. 22). This paper is based on the findings of an HEA-funded project which considers the benefits, challenges and wider applicability of a form of assessment included in the Open University module A332 ‘Why is religion controversial?’. This form of assessment requires students to digitally record an oral presentation and submit it electronically as an audio file. My investigation centres on the following three research questions:
How can digital technology be effectively utilised in the development and assessment of students' oral communication skills?
How do students approach and experience the delivery and assessment of digitally recorded oral presentations at a distance?
What are the principles of good practice in the assessment of oral presentations at a distance?
077: The use of stakeholder-informed simulation in assessment: sharing experience from an undergraduate medical student disability awareness programme. Adam Wilson, Anand Gidwani, Christopher Meneilly, Vivienne Crawford, David Bell Queen's University Belfast, Northern Ireland, UK Among the outcomes in the GMC's Tomorrow's Doctors, medical graduates should be able to communicate effectively with physically-disabled patients. Lack of exposure and common misconceptions about the physically-disabled and their healthcare needs present potential barriers to appropriate communication and management within the doctor-patient relationship. Local research (Gidwani A, MMedEd Thesis 2012) revealed that, due to timetabling constraints and the scarcity of physically-disabled patients within the clinical environment during hospital attachments, only some medical students received opportunities during their undergraduate training to interact directly and specifically with such individuals; exposed students had significantly enhanced insight into the unique needs of this section of society and felt more confident in their own ability to manage such professional encounters appropriately in future. We undertook focus-group research with physically-disabled young adults to explore their experiences, both good and bad, of previous interactions with healthcare professionals. Their experiences were intended to better inform the development (Nestel D et al, Med Teach 2008; 30:534-6; Anderson ES et al, Med Teach 2011; 33:44-52) of authentic role-plays based on real examples of good and bad practice they had encountered. These role-plays were then filmed with the subjects' assistance, and marking schemes were devised to assess medical students' ability to communicate with and manage the treatment of physically-disabled patients presenting with common clinical conditions. 
The resource was evaluated by recruiting two cohorts of medical students: the first viewed the role-plays and related materials before themselves undertaking a role-play with a physically-disabled person; the second proceeded directly to the role-play. Those who had accessed the resource performed notably better in the subsequent role-play than those who had not, avoiding common mistakes and demonstrating superior communication skills and patient management. They valued opportunities to learn through self-directed study before active participation to reinforce and evaluate learning. The materials have subsequently been made available as a training resource for all students. This study demonstrates the value of recording role-plays for self-directed study, and also the use of simulation for formative and/or summative assessment. In agreement with similar studies (Moroz A et al, Med Teach 2010; 32:360-364), such strategies foster appropriate attitudes and behaviours towards the physically-disabled within the entire medical student body when access to such patients in the clinical setting is limited. Our approach is readily applicable to other disciplines and endorses the involvement of external stakeholders in informing the development and review of learning resources, and their participation in the assessment of students' acquisition of attitudes and behaviours.
078: Identifying potential English language teachers from a cohort of MA students in order to meet the requirements of an external validation authority Susan Sheehan University of Huddersfield, Huddersfield, UK This presentation will describe an innovative approach to test development. The University is developing a test of language awareness for potential teachers of English. The test is part of the admission process for a teacher training certificate. The certificate is an entry-level course for those who wish to teach English as a foreign language. The candidates have to demonstrate a level of language awareness, appropriate for someone who wishes to teach English, through a handwritten test and an oral interview. The test candidates are all international students for whom English is a second or foreign language. To ensure the test is at the appropriate level, the test developers will draw on the British Council – EAQUALS Core Inventory for General English and the Word Family Framework (WFF). These two publications sought to state explicitly what language characterises each Common European Framework of Reference (CEFR) level: the Core Inventory focuses on a grammatical progression through the levels of the CEFR, while the WFF classifies lexis into CEFR levels. The proposed presentation is an empirical investigation of the Core Inventory, which claims to “present a simple overview of the apparent consensus on what constitutes the most important for teaching and learning at each level” (North, Ortega & Sheehan, 2010, p.20). The first two sections of the test are based on the Core Inventory and the WFF. The presentation will demonstrate how such resources can be used by test developers. The written test has 5 sections. Section 1 will be a multiple-choice test of knowledge of grammar and lexis. Section 2 will test knowledge of lexis and, through this, knowledge of word class and parts of speech. 
Section 3 will ask candidates to write a commentary on a piece of learner writing. Section 4 will ask them to reflect on their experiences as a language learner. Section 5 will ask them to state why they are interested in becoming an English language teacher. The interview will be based on questions about their reasons for applying for the course and for becoming an English teacher. This presentation describes an innovative test development project. The methodology is of interest to those who need to develop new tests within a Higher Education context. The project is also of interest to those who are required to develop tests for international students.
079: How to assess our students well: innovative approaches for addressing the challenges of assessment and feedback Yue Zhao The University of Hong Kong, Hong Kong Education is entering a new world of assessment with advances in technology, psychometrics, and the learning sciences. Innovative assessment approaches have been developed in support of the development, analysis and scoring of a new generation of assessments. Technology provides solutions to many of the challenges faced by assessment programmes, for instance, from computer delivery and innovative items to automated scoring, adaptive testing and the provision of diagnostic feedback. Assessment and feedback have long been an essential issue and a major concern in higher education practice. For instance, the integration of assessment into course design, the provision of reliable and valid professional judgments and the balance between formative and summative assessment are among the list of challenges (The Higher Education Academy, 2012). The presentation will demonstrate practices in utilizing creative and innovative assessments to address some of the challenges of assessment and feedback, specifically: (a) using diagnostic assessment to provide timely and constructive feedback, to improve the balance between formative and summative assessments, and to support constructive alignment between learning and assessment, (b) using adaptive assessment to tailor assessment to student learning needs, to monitor student learning progress and to support the use of technology for facilitating student learning, and (c) using dynamic assessment to provide direct evidence for the assessment of learning outcomes (e.g. collaborative problem solving) and to support the assessment of experiential learning. 
The presentation aims to show examples demonstrating how innovative assessment approaches and modern technology can facilitate and enhance assessment practices, so that our students are assessed in a more efficient, effective and innovative way, with more reliable, valid and fair assessment data, and benefit from more diagnostic and adaptive feedback. In addition to current practices and challenges, future prospects will also be covered in the presentation. 080: How can admissions testing better select candidates for professional programmes? Belinda Brunner Pearson VUE, London, UK Objectives 1. Communicate common test constructs of well-known higher education admissions tests 2. Discuss influences on admissions test construct definition and design, and discuss research related to factors influencing success in academic study 3. Discuss how admissions tests can be used to identify relevant talent
4. Examine how admissions tests can be used to facilitate educational mobility and inform selection decisions when the prerequisite curriculum is not standardised Observations Generally speaking, constructs of admissions tests can be placed along a continuum from curriculum-related knowledge to more general reasoning abilities. For example, subject-specific achievement tests are more closely aligned to a prescribed curriculum, while reasoning tests are typically not associated with a specific curriculum. This session will draw on the test constructs of well-known international higher education admissions tests, such as the UK Clinical Aptitude Test (UKCAT), which is used for medicine and dentistry admissions. Conclusions The purpose of academic admissions testing is to identify potential students with the prerequisite skill set needed to succeed in the academic environment, but how can the test construct help achieve this goal? Determination of the appropriate test construct for tests used in admissions selection decisions should be influenced by a number of factors, including the preceding academic curricula, other criteria influencing the admissions decision, and the principal purpose for testing. Attendees of this session will learn the types of aptitudes and knowledge that are assessed by higher education admissions tests and will have the opportunity to gain insight into how careful and deliberate consideration of the desired test constructs can aid in identifying potential students with the greatest likelihood of success in medical school. 081: A quantitative analysis of student engagement with online feedback. Claire Moscrop1,2 1 Edge Hill University, Lancashire, UK, 2Lancaster University, Lancaster, UK The purpose of this small scale research study was to test the level of student interaction with their online written feedback in the Turnitin Grademark® tool. 
The Grademark tool indicates to tutors when students have opened their feedback for more than thirty seconds. A purely quantitative data mining approach was taken, with data being extracted from five modules across three levels of study, from first year undergraduates (level 4) to third years (level 6). The results were analysed using the Pearson Chi-squared analysis technique. The study showed that, while 63% of students studied (n=250) spend more than thirty seconds viewing their feedback, there is a great disparity across the levels of study. Only 34% of first years accessed their feedback for more than 30 seconds, compared to 64% of second years and 91% of third years. This marked difference raises a number of questions for further study:
· Do first years perceive feedback to be less valuable than third years, and if so, why?
· Are there any barriers to first year engagement with feedback?
· What changes between first year and third year increase the interaction with feedback so markedly?
The study also shows a statistically significant difference when looking at levels of student attainment (grades) and students' likelihood of reading feedback, with those in the highest grade band accessing the feedback 72% of the time, compared to 29% of those in the lowest grade band. This paper is envisaged as the first stage of an extended study into student engagement with online feedback, and the paper concludes with a number of avenues suggested for this further research. 082: The AsSET toolkit: developing assessment self-efficacy to improve performance Sue Palmer-Conn, David McIlroy Liverpool John Moores University, Liverpool, UK A recent study indicated that student dissatisfaction with assessment and feedback may be attributable to a piecemeal rather than integrated programme-wide approach (Jessop et al., 2013). This study will develop those findings by offering a self-report measure (Assessment Self-Efficacy) linked to a toolkit of learning activities for tutorials to facilitate integrated assessment mapping and planning. The aim is to develop assessment and feedback in partnership with students as co-producers and co-evaluators in the design and dissemination of resources readily adaptable to the broad HE sector, with the expected departmental benefits contributing to best practice across the sector. 083: Assessment representations and practices in the Italian higher education context: hints from a case study Serafina Pastore1, Monica Pentassuglia2 1 University of Bari, Bari, Italy, 2University of Verona, Verona, Italy Higher education systems are now being called on to reconsider the aims of assessment if students are to develop skills and competencies for their future personal and professional lives. Pursuing such aims involves the active participation of students in the assessment process. The dissemination of outcome-based education shows the need to review educational policies, the structure of higher education systems, and instructional design. 
Hence the need to define a different assessment model for the teaching-learning process. This raises a variety of questions: How much does assessment improve student learning? Do teachers provide useful, appropriate, and timely feedback? Do they allow students to recognise and understand elements that can lead to an improvement in their performance? Although assessment holds an essential position in the higher education field, and even though the recent literature reflects shifts in practice and in the contexts in which it operates, educational research still seems inchoate. Current studies, especially at an international level, are moving towards the revision of traditional
modalities of testing, the identification of alternative forms of assessment and, above all, the analysis of representations and perceptions that teachers and students have about assessment. The present research is oriented towards this last point: this case study, conducted at the School of Education at the University of Bari, Italy, analyses teachers’ and students’ representations of assessment. The study was qualitative in nature: semi-structured interviews were used with both teachers and students. Results are based on interviews with 15 teachers (selected from a population of 140 teaching staff) and 64 students (selected from a population of 255 participants). Atlas.ti software was used to analyse the collected data. Results indicate a high level of confusion about assessment among both teachers and students. This study implies that there is a critical need to re-examine assessment practices in the Italian higher education system with regard to aspects such as assessment literacy, alternative assessments, formative assessments and their contribution to students’ learning. Several suggestions are discussed for further improvements in the higher education context. 084: I don’t have time to attend a 2 hour training session: consequences and impact Neil Witt, Emma Purnell Plymouth University, Plymouth, Devon, UK Plymouth University recently introduced a new Digital Learning Environment (DLE), replacing all learning systems with Moodle integrated with a range of other learning technologies. The DLE project was used as a method of embedding learning technology across the institution, with eAssessment promoted as a key area for enhancing the student experience and addressing issues raised regarding student feedback. Discussion often centres on student assessment literacies, but staff assessment literacies are an area the Plymouth team found to be vital for investigation. 
Experience showed that assessment issues are often a closed discussion, sometimes involving only the module leader, sometimes the wider programme team (but not always), and even less often support teams such as Technology Enhanced Learning or Educational Development. These ‘closed’ assessment discussions prevent the sharing of practice, the introduction of new methods and the review of current practice. The DLE project allowed discussions with programme teams to introduce new technology options, and also opened some of the previously closed doors to allow discussion of wider assessment issues from design to delivery to evaluation. The work undertaken with programme teams from across all University faculties raised a number of issues, such as:
What are the core assessment literacy skills that staff need to have? Does or should technology make a difference to this skill set? A clear understanding is needed of all relevant processes: submission, marking and feedback. These are potentially different processes or stages when assessment is taken online. When technology solutions are promoted, some academics start to question existing practice. Some academic staff are resistant to change.
One of the most common comments was ‘I don't have time to do the training'. There was a misconception that training should be optional with a new tool or assessment method, or that staff could ‘figure it out' as they went. Experience showed that not enough consideration was given to the consequences if technology was not configured correctly or there were inadequate signposts to student support. Not attending a two-hour training session often resulted in hours of troubleshooting, as assessments could be misconfigured, with resultant damage to student confidence in eAssessment. Staff were similarly reluctant to try again, regardless of whether the fault originally lay with them. As a result of these initial experiences the Plymouth team developed a range of support processes and further advocacy measures to highlight the need to increase staff assessment literacies. 085: The Power of the "One-Pager": a simple idea for effective informal formative assessment Deborah Anderson, Rebecca Lees Kingston University, Kingston upon Thames, UK It is often the simplest ideas that have the strongest impact, and in terms of effective formative assessment, we argue that the power of the "one-pager" should not be underestimated. For some students, the writing of academic essays can be challenging, and yet for many subjects, a review of key ideas is an effective way of encouraging students to think critically about a topic. To reduce stress levels and to ensure students are on the right track, we have introduced a simple ‘one-pager’ exercise. The ‘one-pager’ comprises a 300-word discursive (not bullet point) summary of the student's key arguments containing at least two in-text citations. Students must also list five academic journal articles, properly referenced in Harvard style. ‘One-pagers’ are brought to class shortly after the assessment briefing and discussed in small groups. 
Each student then writes their key argument on a post-it note, which is collected and placed on a whiteboard, grouping similar approaches together. At this point in the assessment, some students will not yet have begun to think in detail and are planning to simply produce a descriptive ‘advantages and disadvantages’ summary. By using their (anonymous) post-its on the board, we are able to explain why this is not appropriate and hopefully inspire them to think
in more detail about a critical approach focussing much more clearly on a specific context. At the end of the session, ‘one-pagers’ are collected in, marked and returned one week later. We are able to comment on the proposed approach of the work, the ability to reference and the quality of the sources planned. Because there is only one page to look at, this is very quick to mark, yet can ensure the students are on the right track at a very early stage. It is worth just 10% of an individual portfolio, a very small proportion, but enough to ensure it is taken seriously. We argue that the ‘one-pager’ is a simple, practical tool which fulfils the conditions identified by Ruiz-Primo (2011) for an effective, informal formative assessment strategy. For example, the use of the one-pager ensures that ‘effective assessment conversations are dialogic and interactive’ and ‘effective assessment conversations are supportive tools of social participation and social cognition’ (Ruiz-Primo, 2011, p.18). Although developed for a marketing module, the approach is relevant for all disciplines where critical academic writing is required. 086: Developing assessment literacy for Postgraduates who Teach: compliance or quality enhancement? John Dermo University of Bradford, Bradford, West Yorkshire, UK The assessment process is highly resource-intensive for universities, and many taught programmes rely on their department’s postgraduate research students to assist with this assessment workload by employing them as graduate teaching assistants (GTAs). HE institutions need to ensure that these inexperienced academic staff are fully prepared for this high-stakes activity (QAA, 2014 Quality Code for HE), and their central Educational Development units consequently offer mandatory provision, generally consisting of one or two days’ induction, plus an additional taught module, depending on specific responsibilities and institutional practices and regulations. 
This individual research paper reports on a four-year qualitative study into assessment literacy among GTAs at the University of Bradford, based on a theoretical underpinning from Price, Rust, O’Donovan, Handley and Bryant (2012). It explores GTA engagement in the assessment process, with particular reference to awareness of key issues such as reliability and validity, and core principles of formative and summative assessment (as described, for example, in the tenets of the HEA document “A Marked Improvement”, 2012, and in the QAA’s ‘Understanding Assessment: a guide for early career staff’, 2011). The study investigates the practices of four cohorts of GTAs enrolled on a 20-credit masters level module, gathering qualitative data via focus group discussion tasks as well as textual analysis of submissions in student learning and teaching portfolios submitted for HEA associate fellowship. This paper investigates the following research questions:
What assessment activities are GTAs normally engaged in?
What areas and levels of assessment literacy might they be reasonably expected to reach?
How does current training and development provision prepare them for these?
The findings of the study concur with existing studies such as Sales (published in PRHE, 2013) that GTAs develop assessment literacy not by learning facts about assessment via one-off training sessions but by reflection and the gradual development of their understanding of their own and others’ assessment practices. The author concludes that it is not sufficient to launch GTAs into marking and creating assessment tasks with only the most basic introductory mandatory training: such a compliance approach can begin to raise awareness of national and local regulations, but is not adequate to fully engage inexperienced staff in genuine reflective practice on their assessment practices. Instead, it is argued that institutions ought to give serious thought to the genuine development of GTAs’ educational skills with regard to assessment literacy. 087: To measure the unmeasurable: using Repertory Grid Technique to elicit tacit criteria used by examiners. Lars Björklund1, Karin Stolpe1, Mats Lundström2, Maria Åström3 1 Linköping University, Linköping, Sweden, 2Malmö University, Malmö, Sweden, 3 Umeå University, Umeå, Sweden Not only in artistic judgement but in all our ordinary judgements of the qualities of things, we recognise and describe deviations from a norm very much more clearly than we can describe the norm itself (Schön, 1987). Bloxham, Boyd, & Orr (2011) found in a recent study that many assessors made holistic rather than analytical judgements when they were examining student theses. “Experts” often use holistic, pattern-recognition strategies to assess and to take action (Dreyfus & Dreyfus, 1986). This phenomenon has been explained using a dual system model (Björklund, 2007; Evans, 2008; Suto & Greatorex, 2008). This model distinguishes two qualitatively different but concurrently active systems: System 1 thought processes, which are intuitive and associative, and System 2 processes, which are analytic and rule-governed. 
System 1 gives us the ability to recognise patterns and familiarity in an area of our own expertise. We may not always know why, but intuitively we feel that something is good, bad, beautiful, sloppy, clear or original. If we want to elicit this System 1 kind of implicit knowledge we cannot use ordinary interview techniques. The information is not stored in verbal form, and the interviewee may not even know it is there, controlling his or her decisions and actions (Polanyi, 1966). This study shows how the Repertory Grid Technique (Kelly, 1955) can be used to help examiners verbalise the criteria which they use when they are assessing student theses. RGT has been used in education research to analyse how teachers grade and assess students in various subject areas such as Design & Technology, Sports, Physics and Art (Björklund, 2007; Lindström, 2001; Svennberg, Meckbach, & Redelius, 2014) and seems to be able to elicit even tacit, implicit knowledge. Using a mixed-methods approach, the authors of this study tried to analyse 25 individual assessors’ personal criteria and to correlate these criteria with their holistic assessment of a set of theses. The 45 resulting criteria from different assessors were later used as input to a Q-sort survey, addressing the question of
how different communities of practice may influence examiners. Three different methods were used in the study:
1. The holistic assessment of theses using eScape (Kimbell et al., 2009)
2. Finding individual criteria using the Repertory Grid (Björklund, 2008)
3. Ranking the resulting set of criteria using Q-methodology (Shemmings, 2006; Stephenson, 1953)
Results from the RGT interviews will be presented, complemented with a presentation of the System 1 model. 088: Preparing international students for the diversity of UK assessment within a UK-China articulation agreement. Katie Szkornik, Alix Cage, Ian Oliver, Zoe Robinson, Ian Stimpson, Keziah Stott, Sami Ullah, Richard Waller Keele University, Keele, UK The internationalisation of UK higher education (HE) has seen many institutions expanding their research and education activities across international borders (McBurnie and Ziguras, 2007). Within this context the development of articulation programmes (Hou and McDowell, 2014) is becoming increasingly popular, especially between UK and Chinese institutions (Zheng, 2013). As these initiatives have developed, there is an increasing need to consider how these students are adapting to a very different educational system in the UK (Andrade, 2006). This is especially true of students from China, because almost all assessment in China is via traditional examinations and students have limited, if any, experience of the broad variety of assessment used in UK HE. To enable students from China to successfully integrate into the final year of a UK-based degree programme, six subject-specific bridging modules were developed by UK-based academics, with two modules delivered by distance learning and four modules delivered face-to-face in China by UK-based staff (flying faculty; Smith, 2014). 
Their aims are threefold: (1) to ensure that the students become familiar with the broad range of UK teaching and assessment methods, (2) to address any gaps in subject content and (3) to provide significant additional exposure to ‘academic' English alongside the students' English language lessons. There is a careful ‘ramping up' of assessments throughout the six bridging modules to ensure that the students are prepared for the different types and standard of assessment in the UK. In 2013/14, twelve (out of thirteen) students from China successfully completed the final year of an undergraduate degree programme in Environment and Sustainability in the UK. Analyses of the students' summative performance suggest a stronger performance in assessments in semester two compared to semester one, which we attribute to improvements in the students' ‘academic' English language skills and an initial (cultural) reluctance to seek guidance and formative feedback from staff. Perhaps surprisingly, the students' summative performance was also stronger in coursework assessments, including dissertations, compared to exams. These results from our first cohort of Chinese students suggest that the bridging modules are successful in terms of preparing students for the much broader range of assessment found in UK HE. However, there are still
differences in attainment between the UK-based and Chinese student cohorts taking the same assessments, which require further intervention. 089: Gender differences in completion and credit on physical science modules Niusa Marigheto, Victoria Pearson, Pam Budd, Jimena Gorfinkiel, Richard Jordan, Sally Jordan The Open University, Milton Keynes, UK Examination of historic data since 2009 for a Level 2 (FHEQ Level 5) “gateway” physics module at the Open University, UK has identified that barriers to success may exist for women compared to men; there are statistically significant differences between men and women for both completion and credit. For the total cohort of women registered over 5 years (n=946), the completion and pass rates are both 10 percentage points less than the corresponding figures for men (n=2415). The numbers of women registered are slightly higher than sector averages; typically 28% of students on this module are women, compared with the sector average of 23% (Institute of Physics, 2012). Similar trends were seen on other Level 2 modules in the physical science curriculum compared to equivalent Level 2 “gateway” modules in other science disciplines. Interestingly, investigations of data for Level 3 physical science modules indicated that women perform slightly better than men. For the most recent presentation (2013/14) of the Level 2 physics gateway module, the performance of women has further and significantly diminished; there was a 19 percentage point difference in completion between men and women and a 22 percentage point difference in pass rate. A number of University-wide changes have been made recently that required adaptations to teaching mechanisms; however, no other Level 2 gateway module shows such a decline in women’s success, so local changes, e.g. to assessment, may be contributing factors. Gender differences in attitude to assessed tasks have been reported (e.g. Gipps & Murphy, 1994), whilst Bates et al. 
(2013) report a similar gender difference in outcomes on the Force Concept Inventory (Hestenes et al., 1992), a well-established inventory of understanding of physics concepts whose reliance on multiple-choice questions has been questioned (Rebello & Zollman, 2004). We will present the results of a study that compares men’s and women’s success in the physical sciences in relation to: submission rates and scores for individual continuous assessment tasks, performance in initial computer-marked assessment by question, and performance in separate parts of the examination (with different question types). We will also present comparisons with “feeder” modules, where data are available. Non-assessment factors such as online forum activity will also be considered. The work aims to improve the experience for women students on this and other modules and, through greater understanding of the factors affecting success, to improve learning for all.
090: Learning diversity in higher education: Comparison of learning experiences among cross cultural student populations in a Hong Kong university Yue Zhao The University of Hong Kong, Hong Kong, Hong Kong Diversity in the academic learning environment is crucial to student learning: it brings opportunities to enhance interactions within a cross-cultural student body and encourages students to develop their capabilities in intercultural understanding and global citizenship. Previous studies have well documented the diversity in learning style preferences and approaches to learning between Asian and Western student populations in Western classrooms, such as in Australia (Amburuth & Mccormick, 2001). However, little is known about the similarities and differences in learning experiences among diverse student populations in a learning environment with Eastern traditions. Hong Kong is a modern Asian city in which students grow up exposed to both traditional collectivistic Confucian culture and individualistic Western values, and Hong Kong has implemented a major reform of its educational system with the aim of providing multi-disciplinary courses and global perspectives in the curriculum. The Hong Kong higher education system was hence chosen as the context for the present study. The purpose of this study was to examine the extent to which students' perceptions of the learning environment vary among Hong Kong Chinese, mainland Chinese and non-Chinese students in a Hong Kong university. Data were based on a sample of 2079 first year students enrolled at a Hong Kong university who responded to a student learning experience questionnaire, which measures students' perceptions of the learning environment. Measurement invariance was established in support of the validity of the instrument.
In the comparison across the three diverse groups, statistically significant latent mean differences were found for some of the scales, such as feedback from teachers. The findings draw attention to dimensions of learning diversity that may be present in tertiary classrooms with internationalised curricula and Eastern traditions, and could have implications for fine-tuning the curriculum as well as for teaching these diverse groups. 091: Using an evidence-based approach to transform academic approaches to assessment Courtney Simpson, Caroline Speed, Alexandra Dimitropoulos, Janet Macaulay Monash University, Clayton, Australia Australian higher education has an increased focus on learning outcomes and standards, in particular on evaluating the performance of higher education providers in relation to the Teaching and Learning Standards and on developing broad disciplinary Threshold Learning Outcomes (TLOs) for undergraduate degrees (http://www.olt.gov.au/resource-learning-and-teaching-academic-standardsscience-2011) and for specific disciplines (e.g. http://www.cubenet.org.au/).
However, the development of the TLOs alone will not ensure that graduating students have met the required standards; hence, there is a critical need to ensure the quality and appropriateness of assessments. An additional hurdle is that, although most academics acknowledge that assessment of learning is a critical aspect of teaching and learning which drives and guides the approaches students take to their learning, academic decisions affecting assessment regimes are not always based on evidence, and there are often barriers and/or an unwillingness to change current practices (Brownell & Tanner, 2012). Although there is much data on, and discussion of, appropriate assessment modalities (e.g. Brown & Race, 2012; Boud & Associates, 2010; Race, Brown & Smith, 2005), many research-active academics are not engaged with the educational literature and find it distant from their own practices. To address these issues we have developed a tool to map the detailed assessment practices within our university. We have piloted the tool to map the assessment regime in a Biochemistry and Molecular Biology major at a large Australian university. Assessments have been mapped against a range of criteria including: assessment type, format, timing, assessors, provision of feedback, level of learning (Bloom's), and approaches taken to planning assessment. A major advantage of the mapping tool is its ability to integrate the data and provide insight into the systematic development of higher order learning and skills progression throughout a program of study. In addition, the mapping tool enables the alignment of current assessment practices with learning outcomes across subjects, majors and degree programs, Australian TLOs, and international disciplinary standards.
The mapping of current assessment practice provides the evidence to initiate and support discussions of assessment and to raise staff awareness of the importance and integral nature of assessment. Aspects of assessment identified as areas in need of improvement can then be addressed. Improved assessment practices, and the data they provide, can be used as evidence of teaching and learning and of student attainment of the appropriate standards. 092: Walking the Assessment Talk: Aligning what we believe, say, and do John Delany Christchurch Polytechnic Institute of Technology, Christchurch, New Zealand 'The values underlying approaches to assessment are so deeply embedded in academic practices developed over many years that it is often extremely difficult to change them without challenging fundamental and often competing assumptions about the nature of teaching and learning across the institution.' AUTC, Assessing Learning in Australian Universities, 'Renewing Policy and Practice: Frameworks for Institutional, Faculty and Department Action'. Downloaded from http://www.cshe.unimelb.edu.au/assessinglearning/02/index.html Effective assessment that leads to improved student success requires consistency between what we believe about assessment, how we express that in terms of principles, and how we apply those principles in practice: there needs to be consistency between policy, principles and practice.
Assessment practices - the strategies and activities we use to assess students' learning - are an integral part of learning and teaching. If those engaged in learning and teaching are to understand the impact of assessment practices on student learning and address the need for changes in those practices, they must align what they do in assessment with what they believe about learning and teaching. This poster reports on the first phase of an "Assessment Mapping Project" carried out at Christchurch Polytechnic Institute of Technology to identify the institutional assessment landscape and aspects of assessment practice that need to change in order to improve student success. Three questions were posed: 1. What does current theory and research say we should be doing? 2. What do we say we do? 3. What do we actually do? The poster addresses the first of these with reference to a range of assessment principles and best practice guidelines developed for higher education in New Zealand, Australia and the UK. A conceptual framework is suggested for unpacking statements about assessment in terms of core beliefs, principles and practices, and achieving consistency between these. A set of best practice guidelines is synthesised from the literature and it is expected that these guidelines will make it possible to identify changes in assessment practice at CPIT that will improve student success.
093: Increasing assessment literacy through institutional change Rachel Forsyth Manchester Metropolitan University, Manchester, UK A large post-92 university in the UK carried out a review of its assessment policy and practice. This concluded that existing procedures worked adequately from the point of view of compliance with the UK Quality Code (QAA, 2013). However, they did not provide a framework for the consistent provision of information about assignment tasks, submission, feedback and moderation across the university; there was a wide variety of practice. Evidence from external examiner reports, academic appeals, and staff and student comments indicated that this lack of consistency could lead to confusion about expectations in relation to the assessment process. This kind of confusion may be an indicator of a lack of assessment literacy. It has been suggested that improved assessment literacy can lead to improved assessment outcomes (Smith et al., 2011) and that it can improve the effectiveness of feedback (Price et al., 2010). The development of an institutional framework for the provision of information about assessment would enable more consistent discussion of all of these elements with both students and staff. The University's Code of Practice for Assessment was rewritten to improve clarity and to reduce confusion between the requirements of the institutional framework,
which needs to support effective processes and the maintenance of academic standards, and decision-making about academic issues, such as choices of assignment type and size and feedback strategy, which needs to be retained within programme teams. This presentation will summarise the changes and the outcomes of an evaluation of their impact.
Bloxham, S. (2012) 'You can see the quality in front of your eyes': grounding academic standards between rationality and interpretation, Quality in Higher Education, 18, pp.185-204. Price, M., Handley, K., Millar, J. & O'Donovan, B. (2010) Feedback: all that effort, but what is the effect? Assessment & Evaluation in Higher Education, 35, pp.277-289. Price, M., Rust, C., O'Donovan, B., Handley, K. & Bryant, R. (2012) Assessment literacy: the foundation for improving student learning, ASKe, Oxford Centre for Staff and Learning Development. QAA (2013) UK Quality Code for Higher Education, Chapter B6: Assessment of students and accreditation of prior learning [Online]. Gloucester: Quality Assurance Agency for Higher Education. Available: http://www.qaa.ac.uk/en/Publications/Documents/quality-codeB6 (Accessed: 04 January 2014). Smith, C. D., Worsfold, K., Davies, L., Fisher, R. & McPhail, R. (2011) Assessment literacy and student learning: the case for explicitly developing students' 'assessment literacy', Assessment & Evaluation in Higher Education, 38, pp.44-60. 094: Marking on and off line - a university-wide pilot Sue Gill, Christie Harner Newcastle University, Tyne and Wear, UK For the academic year 2014/15 we are running a pilot which invited academic colleagues from any part of the university to trial online marking using GradeMark on the iPad. Introducing change into a research-intensive university can be a challenge; too often it is perceived by colleagues as top-down and overly bureaucratic. This pilot is working with enthusiasts from all disciplines (science, humanities and medicine), and across a range of types of assessment, to develop and support key outcomes:
• Document (and disseminate) examples of what worked and what didn't work
• Collect and use staff and student responses to the software and to the notion of online assessment, and examine how these responses (from confidence with the software to the practicalities of releasing marks) shape usage, to inform future use of the technology
• Bring together a group to support each other and future users
• Develop shared resources, such as libraries of comments
Drawing on the work of Price and colleagues in the ASKe group, of David Nicol, and of others, a key part of the support for staff using the technology is centred on the ways in which they can increase the assessment literacy of their students (and perhaps themselves) and, in doing so, make the process of assessment and feedback richer, more dialogic and more learner-centred. This is work in progress and we look forward to sharing our experiences and learning from others in the sector. 095: The assessment challenge: an end-to-end solution Paolo Oprandi, Carol Shergold, David Walker, Catherine Jones University of Sussex, East Sussex, UK The University of Sussex is in the midst of a change process regarding assessment. We are moving the submission point of text-based assignments from department offices to our Moodle-based learning platform. The project's focus has been on improving the students' experience and their access to timely, quality feedback. In order to achieve this, we have made it easy for tutors to provide feedback by automating administrative tasks. We have put a high priority on the user interface and on ensuring a good user experience. Another key component has been establishing that the process runs on robust, scalable and high-performance infrastructure. The project has entailed integrating systems and streamlining administrative processes. It has required that we adhere to the University's complex assessment rules, avoid duplication of effort, put the feedback where students expect to find it, notify students when feedback has been released, and present the feedback in a format that is easy to consume. The technical challenge has been creating interfaces between our student record system, our learning platform, and Turnitin and GradeMark. This poster illustrates the function of each of these systems and how we have integrated the services. This represents the first phase of a 3-year managed implementation.
Further developments are required to support assessment requirements for later years and an expansion of assessment types. 096: Leading Enhancements in Assessment and Feedback (LEAF Scotland) Dave Morrison1, Hazel Marzetti2 1 University of Glasgow, Glasgow, UK, 2University of Edinburgh, Edinburgh, UK LEAF is a multi-university collaboration (Birmingham, Edinburgh, Glasgow, Nottingham) looking at efficiency and effectiveness of assessment and feedback at the full-programme level, and across multiple faculties/colleges (History, Biosciences, Economics, et al.). The goal has been to work directly with students and staff across a whole degree programme, initially using the TESTA methodology, to find an ideal balance of effective assessment for learning and efficient assessment for workloads [student responses to assessment]. For
example, the value of formative assessment has been well documented in recent years, but how to include it in modules without overloading staff and students is less obvious. What can seem difficult from a module perspective, though, can be less so from a full-programme view and with insights shared across several universities and subjects. This poster charts the unique experience and findings of the two Scottish universities on LEAF. For example, TESTA was designed around English institutions, and some modification was needed to suit the Scottish four-year/flexible-electives system. We also explored ways to extend the TESTA audit focus into the enhancement- and change-driven approach common to the Scottish sector [assessment research method/critique]. Initially the project held a strong focus on collaboration across the institutions, but only within each subject. In practice, though, some of the strongest insights and engagement came from shifting the focus to interdisciplinarity and institutional approaches to assessment and feedback, as a result of coordinating staff and student experiences across faculties and colleges [institutional level change]. Student responses across the subjects show a uniform desire to make more sense of module-based assessment. Taking a full-programme approach has foregrounded ways in which continuous, ipsative, and longitudinal approaches to assessment, and especially to feedback, could be developed into a form of PDP. The process of students learning to locate, chart, and respond to feedback patterns over time may provide a practical means of using feedback to teach transferable skills not easily imparted in a single module [learning by assessment].
097: Standardising Assessment at the Institution to Increase Student Learning Stuart Blacklock LiveText, La Grange, IL, USA According to JISC (the Joint Information Systems Committee), the term electronic management of assessment is increasingly being used to describe the way that technology can support the management of the whole assessment and feedback lifecycle, including the electronic submission of assignments, marking, feedback and the return of marks and feedback to students. The use of technology, coupled with the implementation of a best-practice assessment process, is an effective way to standardise assessment across an institution. Doing so will help to ensure parity for learners across different parts of the institution, as well as increase student achievement of core learning competencies. This session will focus on methods for best-practice assessment and describe the benefits of implementing technology to manage the process in a standard way. When institutional assessment is formalized, supported by, and managed with technology, several benefits occur:
• Ease of access to accurate, up-to-date assessment data is essential for effective decision making. The ability to continuously re-evaluate and analyze data on the teaching and learning processes allows administrators to make data-based changes that improve the long-term curriculum.
• Assessment in this fashion provides opportunities for the institutional community to engage in self-reflection on its learning goals, to determine the degree to which these goals align with student and marketplace needs, and to evaluate whether students' overall performances coincide with the institution's expectations.
• Usable feedback: we are interested in exploring ways feedback can be recorded and accessed in one place for students, allowing for reflection and engagement.
• Information is more transparent to students about the knowledge, skills, and other attributes they can expect to possess after successfully completing coursework and academic programmes.
• Efficient management of assessment data can mean improving student learning without increasing workload (i.e. having a timely overview of what has been learned and understood; being able to give feedback/feed-forward in timely and effective ways; it can also aid course review and validation processes).
In the end, technology does not drive assessment; rather, the technology responds to the assessment requirements. It can integrate campus systems and business processes at an institution. 098: Assessment timing: student preferences and its impact on performance Richard McManus Canterbury Christ Church University, Canterbury, UK The impact of assessment on student learning, performance and engagement is widely known and has been studied extensively: see for example Snyder (1971), Miller and Parlett (1974) and Dochy and McDowell (1997). This has led to a small but growing literature on students' preferences with respect to assessment design and how these relate to performance. A preference for multiple-choice questions has been observed (see for example Ben-Chaim and Zoller, 1997), especially in male students (Gellman and Berkowitz, 1993), in students who are more anxious about assessments (Birenbaum, 2007), and in surface learners (Birenbaum and Feldman, 1998). These preferences are believed to result from a perception that such questions are easier to prepare for (Zoller and Ben-Chaim, 1989) and more likely to receive a higher mark (Traub and MacRury, 1990). Females and students with deeper learning styles, on the other hand, tend to prefer open-ended questions and coursework (Furnham et al., 2008), and a preference for a mixed assessment strategy has also been observed (Iannone and Simpson, 2014). This study extends this literature in two distinct ways: first, we consider the timing rather than the type of assessment; and second, rather than surveying students on hypothetical choices, preferences are revealed using real-life decisions which affect how the students are assessed. Students on a first year undergraduate economics module were given the choice of when to sit their first assessment, to determine preferences over assessment timing and the impact of timing on performance.
A clear preference for having this option was shown (only 2% of students stated that they were indifferent), with those more comfortable with and engaged in the module electing to take the earlier sitting of the assessment. Those who took the early test performed better on average than those who took it later; however, after controlling for attendance, there is no statistical link. There was, however, evidence that later assessment caused lower attendance and
evidence of a legacy effect of this timing where the out-performance of the early cohort grew over later tests, which all students took at the same time. 099: Peer and Public Pressure: Using Assessment to Raise Confidence and Ambitions amongst Undergraduate History and Sports Students Lee Pridmore, Ruth Larsen, Ian Whitehead University of Derby, Derby, UK At the University of Derby the History and Sport programmes have pioneered strategies for building confidence, developing communication skills and instilling academic ambition through modules which take exciting approaches to challenging, supporting and inspiring the learner. Using Murphy's ideas of authentic assessment (Murphy, 2006) the modules both use ‘real world' learning approaches while also maintaining academic challenge. This paper will explore the two modules and demonstrate how they, through assessment, are innovative, inspire the learners, and have added impact for both the students and wider stakeholders. It will explore how two different disciplines have learned from each other's good practice to further enhance our students' learning, and will present ideas about how this approach can be used by other discipline areas. The module ‘Public History: Marketing and Presenting the Past' enables students to undertake a piece of collaborative research, culminating in the writing and delivery of a conference paper. They contribute to the publicity for the conference through a variety of formats, which may include posters, leaflets and websites. The assessment develops their academic skills as well as highlighting essential transferable skills, such as team work, thus enhancing employability. The sports module, Introduction to Sports Management, includes an innovative ‘Dragon's Den' based assessment, which promotes a competitive learning and applied environment. 
Both these modules use elements of peer learning and assessed group work, and the paper will consider the ways in which innovative approaches to group work can enhance students' transferable and academic skills. It will argue that there is a need to move away from the 'traditional individualistic conception of assessment' (Boud, Cohen & Sampson, 2001, p. 67) within some modules to challenge students. It will also consider effective ways of encouraging peer learning and of assessing it fairly. Using Boud's ideas on how a focus on practice can have a positive impact on assessment (Boud, 2009), this paper will consider how experiential learning and assessment can have a long-lasting impact on students' confidence and assessment, both inside and outside of the University. Boud, D. (2009) 'How can practice reshape assessment' in Joughin (ed.) Assessment, Learning and Judgement in Higher Education. Boud, D., Cohen, R. & Sampson, J. (2001) 'Peer learning and assessment' in Boud, D., Cohen, R. & Sampson, J. (eds) Peer Learning in Higher Education. Murphy, R. (2006) 'Evaluating new priorities for assessment in higher education' in Bryan, C. & Clegg, K. (eds) Innovative Assessment in Higher Education.
100: An evaluation of the student and staff experience of the introduction of audio feedback for undergraduate assessment Nick Purkis, Sandy Stockwell, Jane Jones, Pam Maunders, Kirsty Brinkman The University of Winchester, Winchester, UK This project aims to evaluate the student and staff experience of the introduction of audio feedback for undergraduate assessment. It will also consider the implications of delivering assessment feedback in this way for colleagues in higher education. The NUS/HSBC Student Experience Full Report (2010/2011) showed that the feedback students are given by their lecturers is not in their preferred format: 67% would have found individual audio/verbal feedback the most useful, yet only 24% receive feedback in this way. This cross-faculty project at the University of Winchester aims to deliver audio feedback following student submission of written assignments (Level 4), blogs (Level 5) and drafts of final year (Level 6) dissertations. Feedback has been recognised as the single most important influence on student learning and achievement (Gibbs and Simpson, 2004; Hattie, 1987; Black and Wiliam, 1998). Weaver (2006) identified the areas of written/typed feedback which students felt were unhelpful as falling into four categories: 'comments too general', 'lacking guidance', 'focusing on the negative' and 'comments unrelated to the assessment criteria'. Lizzio and Wilson (2008), who investigated students' perceptions of written feedback, found that students valued feedback which was developmental and gave them guidance to improve future assignments. Epistemologically, however, students may not understand the traditional written feedback given (Gibbs and Simpson, 2004), as the 'language' used may not support future improvement and may not be developmental (feed-forward) (Ferguson, 2011; Price et al., 2010). Dowden et al. (2011) identified that students value the emotional response when receiving feedback, which is often lost in writing.
It may be difficult to express emotion through written or typed feedback; delivering assignment feedback in audio form, however, can convey more nuance and emotional expression (Ice et al., 2007; Merry and Orsmond, 2008). Students will be given audio assessment feedback on their assignments. Twelve semi-structured interviews will be conducted to capture the student 'lived experience'. Interview data will be transcribed and analysed for themes using thematic analysis. Staff will keep a blog of their 'lived experiences' of their journey of audio feedback delivery, analysis and feedback. Pilot Project A pilot in early 2014 interviewed 3 students who had received audio assessment feedback. Results from this showed that the feedback was more personal, was developmental, was easy to use, and showed that the tutor had read the whole essay (Purkis, 2014).
101: The Impact of Commercial Involvement on the Development of Academic Processes and on the Quality of Outcomes: A Case Study Theme: Learning and contemporary higher education assessment Ufuk Cullen, Zach Thompson Greenwich School of Management, London, UK At the beginning of 2014, Greenwich School of Management (GSM) started a project involving senior students and a professional business organisation in order, first and foremost, to blend academic and commercial expertise and to improve the employability skills of GSM students. The project began by engaging with a company for which students could complete a project that counts towards their degree. KONE, one of the global leaders in the lift and escalator industry, was selected as the partner organisation. KONE was founded in 1910 and provides innovative and eco-efficient solutions for lifts, escalators and automatic building doors. The project committee selected the subject in order to improve business performance as well as to address organisational weaknesses, and the students aimed to generate ideas to improve or remove the selected weaknesses. All the necessary resources, such as know-how and technical support, were provided by GSM in order to complete the project successfully. The performance criteria for the students were identified for every phase of the project through a consensus established between GSM and the company. In order to monitor student performance and progress, the Career and Employability Team was deployed along with the subject lecturers throughout the project. The project has recently been completed and the data analysis process is still proceeding. The research data were gathered by means of questionnaires, participant observations and interviews throughout the project.
These data-gathering methods were used to measure: the degree to which students have improved their employability skills; the degree to which the targeted weakness of the company has been addressed; student perceptions of the skills they have improved through the project; and the company's perception of the contribution of the project to business performance. The project has generated significant outcomes for its three main stakeholders: GSM, KONE and the students. Following the completion of the ongoing data analysis, the impact of the project on its stakeholders will become clear and the final draft of this paper will be completed. 102: Assessment Strategy: Online Distance Education Elaine Walsh, James Brunton Dublin City University, Dublin, Ireland The Open Education Unit (OEU) was established in 1982 as the National Distance Education Centre (NDEC) and since then has evolved and adapted to become DCU's main provider of
online, 'off-campus' programmes. The Open Education Unit is now located within DCU's National Institute for Digital Learning (NIDL). The poster will detail a project, undertaken by the OEU, with the aim of improving the quality of assessment writing on an online Bachelor of Arts (Hons) in Humanities programme. The project began in 2012 with the development of a guide on designing and writing assessments for online distance education students. The next stage involved conducting a review of the programme learning outcomes and the current assessment types. At this time, the selection of assessment type was the remit of the assessment designer. The review highlighted that, while some variety in assessment type was evident, it often tended to rely on essay-style assignments and end-of-year examinations. An outcome of the review was the alignment of appropriate assessment types to programme learning outcomes. In order to ensure this alignment, the selection of assessment type was determined by the Humanities Programme Team rather than by the writer. This resulted in a structured and transparent programme-level plan (the assessment matrix) for the assessments across all modules within the programme. The introduction of a variety of assessment types resulted in the need for training of assessment writers; in 2014, the previous guide for designing and writing assessments was converted into an online course and a workshop was run for all assessment writers. This poster will describe the staffing model utilized by the OEU and will detail the challenges and strengths of contracting a range of off-campus, part-time academic subject experts as writers of online distance education assessments.
It will present an account of undertaking an audit of programme learning outcomes; the identification of appropriate assessment types for the achievement of these learning outcomes; how and why the assessment matrix was developed; the quality control measures in place to ensure academic standards; the development of an online resource for assessment writers and a related workshop; and some of the modifications to the structure of assessment documentation following the first iteration of this process. The final section of the poster will outline the future developments currently planned for 2015/2016. 103: 'Skills Passport' for Life Sciences at Edinburgh Napier University: Helping students to help themselves Janis MacCallum, Samantha Campbell-Casey, Patricia Durkin, Anne MacNab Edinburgh Napier University, Edinburgh, UK A survey of Life Sciences employers in Scotland revealed problematic skills gaps in graduate recruits (Life Sciences Scotland, 2010). To address this, Life Sciences staff at Edinburgh Napier, alongside Student and Academic Services, developed the Skills Passport to help students work on key employability skills. The project builds upon research by Parry et al. (2012) and Speake et al. (2007) outlining the effectiveness of reflection in enhancing students' practical skills in Bioscience. The Skills Passport aims to provide a mechanism to support students in developing, and reflecting upon, skills identified by employers as lacking. Whilst the Skills Passport was trialled in Year 1 of Life Sciences in 2013/14, the objective is an approach that spans all four undergraduate years, supporting and preparing students for the future workplace.
At the core of this approach is the Skills Evidence and Evaluation Record (SEER), which facilitates students in understanding the skills required by industry and identifying their personal skills gaps. A set of principles underpins the use of SEER in the Skills Passport and puts responsibility for participation in the process, and for completion of associated activities/documentation, firmly with the student. Guidance is provided within class settings and through Personal Development Tutor (PDT) meetings for students to reflect on their skills using STARL (Situation, Task, Action, Result and Learning) as a model. This reflection requires them to examine their experience and consider alternative approaches. It mirrors common competency-based recruitment practices, allowing students to build a body of evidence to give them a head start in gaining graduate employment. We have been carrying out a pilot evaluation of the effectiveness of this approach by asking students in all years of the current academic year (2014-15) to provide feedback on their use of the Skills Passport, to inform our future development of this tool as an approach to aid skills development in our graduates. This paper will present initial pilot data obtained and discuss our findings and future directions. References Life Sciences Scotland (2010). Scottish Life Sciences Employer Skills Survey 2010. Accessed 25.9.13 from: http://www.lifesciencesscotland.com/connections/news/newscontent/scottish-life-sciences-employer-skills-survey-2010.aspx Parry, D., Walsh, C., Larsen, C., Hogan, J. (2012). Reflective practice: a place in enhancing learning in the undergraduate bioscience teaching laboratory. Bioscience Education, 19:35-44. DOI: beej.2012.19000004. Speake, T., Fostier, M. and Henery, M. (2007) The use of reflective practice to support a final year team research project in biosciences.
Proceedings of the Science Learning and Teaching Conference 2007 ftp://www.bioscience.heacademy.ac.uk/events/sltc07/proceedings_full.pdf 104: The effect of the test re-do process on learner development in higher education foreign language courses Kristen Sullivan Shimonoseki City University, Shimonoseki, Yamaguchi, Japan This poster will introduce and discuss the use of test re-dos in an English as a Foreign Language course held at a public university located in regional Japan. The students taking this optional TOEFL iBT preparation course were non-English majors; however, the great majority had ambitions to study abroad as part of their degree. A test re-do is a test in which the learner is given the opportunity to literally "redo" the test after conducting reflection and receiving feedback. Test re-dos are theoretically based upon the disciplines of self-regulated learning and learning-oriented assessment, and their primary aim is to provide an opportunity for learners to reflect on their performance and to use both internal and external feedback to bridge the gap between their current performance and their desired performance. Through this process we aim to support and encourage learner development in terms of language proficiency development and the development of their ability to self-regulate their learning. The latter is considered crucial for
learners to continue their language learning outside of the classroom and after graduation, especially in consideration of the fact that foreign language learning is indeed a lifelong endeavour. Test re-dos are also one method by which teachers can help students who suffer from anxiety in testing situations, particularly those which require them to speak in the target language, which was the case in the course in question. Some teachers may be concerned about the time involved in conducting test re-dos, or with issues of validity. In order to encourage the wider use of test re-dos, it is thus important to show that they can contribute to improved outcomes. This presentation will investigate this through an examination of test re-do results for an Independent Speaking test task conducted as part of a university-level TOEFL iBT preparation course. Three aspects of learner performance will be considered: improvements in learners' test performance (comparing the original performance with performance on the test re-do), connections between learner reflections and their performance on the test re-do, and the content of learner reflections. The presenter will use this analysis to show in what ways test re-dos can contribute to learner development, as well as their potential limitations. Based on results and observations of another test re-do conducted earlier within the same course, the presenter will suggest more effective ways of setting up the activity and supporting learner reflection. 105: Assessment Feedback Practice In First Year Using Digital Technologies – Preliminary Findings from an Irish Multi-Institutional Project Lisa O'Regan1, Mark Brown2, Moira Maguire3, Nuala Harding4, Elaine Walsh2, Gerry Gallagher3, Geraldine McDermott4 1 Maynooth University, Maynooth, Co. Kildare, Ireland, 2Dublin City University, Glasnevin, Dublin 9, Ireland, 3Dundalk Institute of Technology, Dundalk, Co. Louth, Ireland, 4Athlone Institute of Technology, Athlone, Co.
Westmeath, Ireland This poster reports on a baseline review of assessment feedback practices at four Irish third-level institutions as part of the Supporting Transition: Enhancing (Assessment) Feedback in First Year Using Digital Technologies project. This two-year project (2015-17) is funded by the (Irish) National Forum for the Enhancement of Teaching and Learning and is a collaborative initiative between the Irish Higher Education Authority (HEA) cluster partners: Maynooth University, Dublin City University, Athlone Institute of Technology, and Dundalk Institute of Technology. The aim of this project is to identify and pilot approaches to enhance assessment feedback in first year undergraduate programmes using digital technologies. This study is particularly timely in the Irish context given our strategic focus on improving the transition to Higher Education and the first year experience (Hunt 2011). While evidence shows that regular and frequent formative feedback in first year is associated with student success (Tinto 2005, Nicol 2009), the reality, particularly in large cohorts, can be quite different. The Irish Survey of Student Engagement (ISSE) 2013 found that, nationally, 45.1% of first year
undergraduate students never, and 23.3% only sometimes, received timely written or oral feedback from teachers on academic performance. This poster reports on the first phase of the project, particularly the research undertaken in the first half of 2015 to identify current assessment feedback practice in first year undergraduate programmes, which consisted of: (i) an anonymous online survey of lecturers teaching on first year undergraduate programmes and (ii) focus group interviews with first year undergraduate representatives. Specifically, the current practice review addressed the following: ● How assessment feedback is provided in first year undergraduate programmes, ● The timing and frequency of assessment feedback, ● The types of assessment feedback provided to students, ● The digital technologies utilised in the provision of assessment feedback, ● Lecturer views on assessment feedback processes and practices, ● Student views on assessment feedback in first year. In this poster, we will provide an overview of the project and its rationale and describe the research approach taken for the current practice review. Data collection is ongoing; we will present preliminary findings and outline initial conclusions and next steps. 106: Visualising the Narrative: Assessment through a programmatic lens Bryan Taylor, Mark Russell King's College London, London, UK Assessment designs are typically created within a module. This is helpful since it supports the alignment of the assessment with the teaching and the intended learning outcomes. Focusing on the module, however, without considering the assessment design of other modules (and indeed the programme), can lead to some rather unhelpful unintended consequences. Such unintended consequences are not always evident in reading assessment narratives described within programme handbooks, and must instead be teased out by other means.
As part of an institution-wide assessment and feedback project at King’s College London, we have created an Assessment Visualisation Suite. The Suite, centred on the programme, offers varying but complementary graphical representations of the programme’s assessment design. In creating the Assessment Visualisation Suite, we represent some of the assessment aims and outcomes in a readily accessible format. Thus several dimensions of the assessment design can be presented in a manner suited to the depiction and resolution of particular programme issues, or in describing
programme design to varying audiences (e.g. students in a programme induction session). The Assessment Visualisation Suite includes various approaches. It: · Shows assessment activities and their interrelationships as temporal entities; objects on a timeline, which allow workload, distribution of effort and bunching to be explored · Depicts assessments as module components, each with a varying contribution to the overall module mark. We use the terminology of ‘no-stakes’, ‘low-’, ‘medium-’ and ‘high-stakes’ to acknowledge the design as seen through the lens of the student, where the ‘stakes’, in terms of effort and attainment in an assessment activity, may be more meaningful than use of the terms ‘formative’ and ‘summative’ assessment. · Shows the blend of assessment types across a programme, on a module-by-module basis, allowing programme teams to explore the balance between methods of assessment, and the weight they carry. · Demonstrates feedback links between assessment activities, and modules, weighted to credit score, throughout a programme, bringing focus to assumptions of these activities’ purpose as a stimulus for, and measurement of, learning. Our work reinforces the importance of creating visual and programmatic representations to provide programme teams with a clear overview of the programme assessment design, and aids exploration of its strengths and gaps. Responses from programme teams have been positive; they report that the Assessment Visualisation Suite has illuminated their work, helped with discussions and has aided engagement with students. 107: Student and staff experiences of peer review and assessment in undergraduate UK HE settings Denise Carter, Julia Holdsworth University of Hull, Hull, UK This poster presents preliminary analysis of data gathered from UG students and staff regarding their experience of, and attitudes towards, different forms of peer review and assessment.
The data presented arise from a UoH Innovations in Student Learning project that explores the use of different forms of peer assessment during 2014-2015. The poster focuses on two case studies: one in which student involvement is developmental and the other in which student peer assessment forms part of the summative mark. This project is being run in a context in which feedback consistently receives the lowest levels of student satisfaction, the latest figure being only 72% (NSS 2014, cited in HEFCE, 2014). Much attention has been devoted to addressing feedback issues in recent years but it remains a ‘troublesome issue’ (Nicol et al, 2014). Many of the strategies put in place to address student dissatisfaction have
focussed on the mechanics of providing feedback, such as timeliness, legibility, use of language or ‘amount’ of feedback. Research shows, however, that a model where academic experts transmit information to students about the strengths and weaknesses of their work severely constrains learning and understanding opportunities for students. Students often do not understand feedback or how to make use of it, focus on the summative mark awarded rather than formative comments, or even fail to collect the feedback at all (see for example, Giles et al, 2014; Ferguson, 2011; Gibbs and Simpson, 2004; Maclellan, 2001; Winter and Dye, 2004; Higgins et al., 2001). Peer assessment has been suggested as a way to address many of these concerns and provide a more constructive experience for students, one which provides timely feedback and promotes critical reflection and communication skills, empowering students as ‘self-regulated learners’ (Nicol et al., 2006: 199; Nicol et al., 2014; Hattie and Timperley, 2007). However, against this are set a number of concerns about student capacity to engage in peer assessment, the role of experts and expert knowledge, confidentiality, and others. The poster presents data on a number of key questions including: What do students think about peers assessing their work? Do students have confidence in their ability to assess the work of peers? What sorts of methods of peer assessment do students think work best? What are the main drivers for, and obstacles to, peer assessment and peer review? What benefits do students gain from engaging in peer review processes? 108: Reflective activities and summative assessment in an Open University Access to Higher Education module Carolyn Richardson The Open University, Walton Hall, UK Key words - Inclusion, Collaboration, Reflection, Reflective Assessment, Non-traditional Students, Access to Success, Study Experience, Summative Assessment “How can I see who I am until I see what I do?
How can I know what I value until I see where I walk?” Karl Weick The paper gives an insight into the Open University’s Centre for Inclusion and Collaborative Partnerships Reflective Assessment Tasks research project. The project forms part of ongoing work originating in the Study Experience Programme. This is one of several scholarship projects being funded across the Open University as part of the New Models of Assessment project. This research concerns the value of assessing reflective activities carried out by students as part of the summative assessment of specific Access to Success modules; the findings of the research will inform assessment policy across all Access to Success programmes.
The research consisted of interviewing a sample of tutors on the viability and value of incorporating summative assessment into Access to Success modules and developing a learning design to reflect the findings of that research. Access to Success modules are targeted at non-traditional students with few post-secondary education qualifications who wish to gain an Open University degree. The research was motivated by prior research highlighting the potential learning gains from reflective practice in the student learning experience, and aimed to critically evaluate the inclusion of reflective tasks in the formal assessment strategy of Y031, the ‘Arts and languages Access module’, and establish principles of good practice in the development and assessment of reflective skills. ‘To reflect is to consider in more detail, drawing on our thoughts to address something for which there is not an immediate solution’ (Moon, 2006). Barnett (1994): ‘Only in that moment of self-reflection can any real state of intellectual freedom be attained… Only through becoming a continuing ‘reflective practitioner’ can the student gain a measure of personal integrity.’ The paper ends with a consideration of ‘understanding’ as a core competence in human experiences of learning. References Moon, Jennifer (2013) Reflection in Learning and Professional Development: Theory and Practice. New York: Routledge. Dyment, Janet E. and O’Connell, Timothy (2011) ‘Assessing the quality of reflection in student journals: a review of research’, Teaching in Higher Education, 16(1), pp. 81-97. Garrison, D. Randy and Vaughan, Norman D. (2008) Blended Learning in HE. San Francisco: Jossey-Bass. Race, Phil (2010) ‘Reflecting evidence: Putting ‘w’ into reflection’, Higher Education Academy: Resources Centre. Available at: http://www.heacademy.ac.uk/resources/detail/subjects/escalate/ (accessed 14 January 2014).
109: EFL tutors’ and students’ perceptions of written assessment, feedback and criteria across six English departments at a Libyan university Imad Waragh University of Sunderland, Sunderland, UK In every aspect of life there is a need for a tool or technique that can be used to assess progress. In the context of EFL, there is a wide consensus among researchers that assessment methods can play a vital role in developing teaching and learning. Chappuis and Stiggins (2002) indicate that classroom assessment is seen as a healthy part of effective teaching and successful learning. Coombe and Barlow (2004) also state that methods of assessment are significant in gaining a dynamic picture of students' learning and linguistic improvements. Therefore, assessment has received attention in higher education with the acknowledgment that assessment drives learning as well as teaching (Hughes et al., 2011). This research explores the participants’ perceptions of the assessment methods, feedback
and assessment criteria that EFL tutors use in assessing their written work. The participants are 12 tutors and 207 fourth-year university students across the six English Language departments in a Libyan university. A questionnaire, which is part of a larger study that also collects qualitative data, collected the quantitative data from tutors and students. The analysis of tutors’ and students’ questionnaires reveals important and interesting results concerning their perceptions of assessment, written feedback and assessment criteria. For example, the results indicate that a large number of students are not positive about using some methods of assessment, such as peer and self-assessment, owing to factors that affect them when they use these methods. Additionally, interesting findings show that students prefer to receive written feedback from their tutors rather than from classmates. Regarding assessment criteria, students’ beliefs show that their tutors do not provide them with assessment criteria for every piece of written work. The findings of the tutors’ questionnaires reveal that tutors do not use a variety of assessment methods and also that they do not provide students with assessment criteria. The analysis also indicates that there are some factors that affect their choices of assessment methods, such as class size, motivation, tutors’ knowledge and experience, and students’ culture. To conclude, the findings may provide important suggestions which may be of value to university tutors when using assessment, written feedback and assessment criteria. The findings could also have a potential impact on tutors’ thinking which may lead them to change or shift their way of using assessment methods in written work.
110: Measuring tertiary students' progress in English for specific purposes courses with self-assessment Dietmar Tatzl FH Joanneum, University of Applied Sciences, Graz, Austria This contribution introduces a self-assessment task designed to measure learners' progress through participation in class and discusses its feasibility for the author's English for Specific Purposes (ESP) courses (Tatzl, 2012). This self-assessment technique should promote learner autonomy by means of self-reflection, empower students by involving them in the grading process and prepare them for the workplace by encouraging a self-directed monitoring of their performance. The research process was divided into three phases: the self-assessment task, the analysis of its results and its evaluation by students and the author. The results tentatively suggest that learners employ sincere judgement when assessing their progress through participation. Although students approve of self-assessment in principle, feedback also raises concerns about its appropriacy, meaningfulness, time consumption, necessity, accuracy and objectivity. Self-assessment, therefore, remains controversial among learners, but it deserves more attention by ESP and higher-education researchers and practitioners because it can contribute to the education of autonomous and self-reflective professionals for the global workplace. Although the results of this research are not generalisable to other settings owing to the difficulty of objectifying subjective assessment methods, they still allow ESP and higher-education practitioners to draw parallels to their own educational situations and may serve as an encouragement to pursue similar assessment modalities.
111: Overcoming Assessment Challenges - Tipping the Balance Ruth Sutcliffe, Rachel Sparks Linfield, Ros Geldart Leeds Beckett University, Leeds, UK It is well known to primary teachers that effective assessment of children requires a multi-faceted approach (Sparks Linfield 1994). Equally, written feedback on a piece of work is often not understood by the pupils themselves (Sparks Linfield 1995). As one proceeds through secondary and tertiary education, this situation changes little, with the best attempts to set ‘perfect' assessments still leading to misinterpretation by students. It is also true that students do not always recognise what is meant by the term ‘feedback' and have difficulty in interpreting and understanding the feedback that they receive, even with the most careful and targeted advice in advance (Sutcliffe et al 2014). In 2010 the National Union of Students released a ‘Charter for Assessment and Feedback' which outlined ten principles for effective assessment and feedback. Despite this charter, the National Student Survey (NSS) in 2014 still showed twenty-eight percent of students were not satisfied. ‘Assessment and feedback was again rated the lowest by students, with just seventy-two percent saying they were satisfied with this, the same level as last year.' (Grove 2014) This poster considers research carried out in 2014 when the Year 2 cohort of students on a Bachelor of Arts Primary Education course were asked to complete a questionnaire inviting views on the feedback on assessment that they found most helpful in clarifying things they did not understand. Analysis of completed questionnaires revealed that although students' experiences of feedback and assessment within their first year of study had broadly matched the principles outlined within the NUS Charter, twenty-five percent of students were still not satisfied.
Results from the cohort showed a desire for a range of types of feedback, including a wish for face-to-face discussion to enable them both to assess their understanding of feedback comments and to feed forward actions. In addition, a common theme emerged: a lack of perception by students of their own roles and responsibilities within the assessment/feedback cycle. Recommendations will be made for ways to overcome the challenge of providing assessment feedback that aims to give total satisfaction. 112: Feeding forward from feedback with Business and Food first years Jane Headley, Pam Whitehouse Harper Adams University, Shropshire, UK Purpose: To investigate the question: does a requirement for students to link the feedback from one module to another encourage them to reflect on and implement the feedback received from the earlier module? The poster displays the findings from a small study undertaken in 2014 involving 116 Business and Food first year students who were asked to explain their use of
assignment feedback received on one module in the preparation of an assignment for a second module. The key findings were: Students did appear to accept the feedback from the first module, commenting on changing their habits as a result. Seeking support to improve their work was a common theme, from reading university guides on referencing and report writing to requesting assistance from the Academic Guidance team. Some had ‘learnt the rules' of report writing and referencing, noting their transition from A-level work. The need to be critical was frequently mentioned. Students showed evidence of using technology to assist their work, from spelling and grammar checks to finding synonyms in Word and using table-of-contents creation tools. Some students were over-confident about their achievements, with comments such as "just need to..." implying simple solutions were sufficient. Other students recognised an improvement in the quality of their work through using feedback, and an improvement in the grade achieved. The improvement areas ranged from the procedural to the evaluative. Likewise, the depth of reflection varied from the superficial to the transformative. Specific quotes from students will be used to illustrate the range of responses. Potential audience: those involved in the transition/first year of programmes or with an interest in linking learning across modules. Contribution: building on existing literature (Hepplestone and Chikwa, 2014; Parkin et al., 2013; Poulos and Mahony, 2008), findings from a modularised curriculum working with both FdSc and BSc students across a range of programmes in a small-scale institution. As a consequence of these findings, the message is that facilitating the use of feedback to feed forward appeared to have a positive effect on students. They proactively engaged with their feedback and noted tangible results.
This is a good habit to develop, especially at a transition point, and it is planned to repeat this activity and seek to develop it further in the students' subsequent years of study. 113: Student perceptions of oral and written feedback Anna Steen-Utheim BI Norwegian Business School, Oslo, Norway Research topic/aim Research has demonstrated the potential positive effects feedback can have on learning (Black and Wiliam 1998, Hattie and Timperley 2007) because it can help students clarify misconceptions and faults (Sadler 1989). However, the power of feedback to influence learning depends on student use of feedback. Knowing that student dissatisfaction with feedback is considerable (Weaver 2006; Huxham 2007), it is of interest to investigate the contribution feedback makes to students' learning. This paper reports on a case study of a bachelor course in international business communication at a Norwegian university college. The aim is to explore
student perceptions of written and oral feedback, how they utilize the feedback, which type of feedback is preferable and why. Theoretical framework A sociocultural perspective on learning and assessment, in combination with a dialogical view on human action, communication and cognition, frames this study (Linell 1998; 2009, Vygotsky 1978). Assessment and learning are considered interdependent processes, and feedback as interaction and a complex form of communication. Methodology/data material The study is framed within a qualitative research approach using content analysis (Krippendorff 2004; Silverman 2006). Using web-based logs, 12 key informants’ perceptions on 7 questions about their written and oral feedback practices were collected within a time frame of six months. Findings The findings suggest that the emotional support the students get from the teacher is important for how the students perceive the feedback, regardless of the type of feedback. The students prefer the oral feedback, reporting certain traits about the feedback they find useful for improving their work. The students report that the written feedback is difficult to understand and therefore difficult to act on. 114: Designing assessments to develop academic skills while promoting good academic practice and limiting students' use of purchased or repurposed materials Ann Rogerson University of Wollongong, Wollongong, NSW, Australia With the increase of technology options to aid and abet students in circumventing the need to write their own assignments, it has become increasingly important that assessment tasks are designed to minimise the opportunity for students to present non-original work. Assessment design is critical to ensuring that students meet the learning outcomes of assessment tasks, subjects and courses while adhering to principles of ethical academic conduct enshrined in codes and policies.
Moving assessment tasks away from traditional essays and reports exposes students to elements of research processes and can stimulate new ways of thinking while minimising the opportunity to prosper from the use of essay mills or file-swapping sites. The pressures applied to academics in meeting increasing responsibilities with reduced resources can mean that assessment tasks may be tweaked or recycled from year to year, which unintentionally supports the providers of non-original materials. With a little creativity, assessment tasks can be redesigned not only to limit the potential for students to seek outside paid assistance to complete tasks but actually to engage students in their learning and prepare them for future study tasks and real-life responsibilities. This paper outlines the positive results of changes made to assessment design in some postgraduate subjects undertaken by international students. Students were introduced to understanding the construction and purpose of annotated
bibliographies, extracting key information from journal articles, and integrating a series of annotated bibliographies into a coherent passage via a series of group work tasks. Later tasks developed skills in reflection, linking theory to practical experiences in multi-national groups. Reflective tasks related to in-class discussions or experiences are difficult for others to replicate, and where a reflection is purchased, copied or borrowed, key elements are usually missing. All the tasks were designed to achieve the learning outcomes while minimising opportunities to submit plagiarised, borrowed or purchased materials, but in the end achieved much more than originally anticipated. Reflections from students included their experiences in finally understanding how to read, extract, and use information from journals, recognising the relationship between methodological processes and results, and how they had applied these learning experiences to other subjects, achieving improved results. Spending time up front in redesigning assessment tasks can ultimately save time in the long run, as the likelihood of plagiarised, borrowed or repurposed materials is reduced, while benefiting students not only in their current subject but in their overall learning experience. 115: Developing pedagogy: Using Pecha Kucha as formative assessment in two undergraduate modules Nicola Hirst Liverpool John Moores University, Liverpool, UK This presentation builds on Action Research conducted with my students and is based on the pedagogical belief that opportunities for formative assessment of group work are a supportive mechanism in HE. My previous research meandered down a well-trodden path where ‘the student' is a recognisable object of scrutiny within the literature around feedback in higher education.
The antithesis of the good student appears to be the lazy or poorly regulated student waiting for feedback to be done to them, waiting for a biological and environmental ‘readiness' to enable them to comprehend or accept feedback (Dowden, Pittaway, Yost & McCarthy 2011), or the tacitly threatening ‘consumer' wanting to ‘have' a degree with little concept of the pedagogical implications of their demands (Molesworth, Nixon & Scullion 2009). This extended research sets out to explore students' ideas surrounding 'Pecha Kucha', with particular emphasis on the value or purpose of the methodology as formative assessment in two undergraduate modules. Dialogue with students focused on the Pecha Kucha at the end of the module delivery, and emerging themes were used to inform module design. To build on this, and to develop systematically the learner's capacity for self-regulation, the paper argues for the use of Pecha Kucha to create more structured opportunities for self-monitoring and the judging of progression towards goals. The judging, however, is not to ensure the governability of ‘the student' by ‘the tutor' but to work at assessment dialogues that move beyond the perception of parental nagging or ‘drumming' home.
116: Making the formative feedback effective: Feed-forward feedback: Study of student's perception on the video assignment guidance and its influence on their learning Harish Jyawali GSM London, London, UK Since students' learning is largely driven by assessment (Boud 2014; Biggs 2003; Gibbs 2006), it is reasonable to offer feed-forward feedback, facilitating students by clarifying goals, criteria and expected standards rather than only being reactive. Also, not all institutions have a student body that largely demonstrates an advanced capacity for organising their own learning; rather, students tend to rely on the tutor, specifically for clarification of and guidance on their assessment. The challenge is to enhance their confidence and build their motivation by offering explicit information and guidance on the task. There is therefore no doubt about the value that feed-forward feedback can offer to students' learning; the challenge is how to make it effective, considering large class sizes and limited resources. Similarly, ensuring consistency in communicating the assessment task, particularly when the same module is taught by many tutors across different locations, can be challenging. In order to support students' learning through assessment and address the above-mentioned challenges, feed-forward feedback was introduced in the form of recorded video guidance on the assignment task. The module leader recorded the video assignment guidance and made it available on the VLE to all students enrolled in the module. The video assignment guidance was 15-20 minutes long. Categorically, the video assignment guidance has two objectives: cognitive (instructional) and psychological (motivational). Research was conducted among the Level 5 students who studied the Human Resource Management module at GSM London in 2013 over two semesters. Data were collected using an online questionnaire, disseminated via Blackboard.
More than 100 students (109) responded to the questionnaire. Both closed and open questions were used to collect information. The research demonstrated that offering video guidance on the assignment task has a positive influence on students' learning. Essentially, it enhances students' self-confidence to do well through motivational and instructional constructs. The research also demonstrated that it offers students the opportunity to self-assess their progress by revisiting the video a number of times. The findings also show how video guidance can be an effective tool for offering feed-forward feedback to students in large classes regardless of location. 117: Developing collegial relationships: Students providing feedback on staff member's teaching and assessment practices. Jennifer Scoles, Mark Huxham Edinburgh Napier University, Edinburgh, UK A new project conducted at Edinburgh Napier University reconsiders traditional peer review processes by exploring alternative feedback relationships. Students as Colleagues in the Review of Teaching Practices invited students to professionally
review the teaching practices of a staff member from a different discipline. Through collaborative training workshops, students and staff members entered into a collegial relationship based on a non-hierarchical, shared platform of cooperation and understanding. This poster showcases this initiative, but pays particular attention to one of the teaching practices under review, which is concerned with assessment and feedback. The paired student was asked to review marked coursework assignments (anonymised) and constructively comment on the feedback provided by the staff member. This poster will highlight some of the instances that students identified as helpful, or unhelpful, feedback practices. In doing so, authentic student voices are recognised as part of institutional processes designed to encourage a shared understanding between staff and students' perceptions of feedback. 118: Students Love Assessment: Using assessment to improve engagement Toby Carter, Nancy Harrison, Julian Priddle Anglia Ruskin University, Cambridge, UK For a Level 6 module requiring the development of the mathematical skills used in population ecology, student engagement and achievement have historically been disappointing. Previous strategies to improve engagement on this module have included a field trip and increasing the use of tutorial sessions. These had mixed success in that we were able to increase satisfaction scores, but improvements in other metrics for engagement, such as attendance and achievement, were less marked. We therefore embarked on a change in assessment pattern and support by increasing the use of in-class tests and providing preparation sessions. Every week had some form of assessment-related activity. The results were dramatic: more than a 10% increase in overall student satisfaction and a 10% increase in the average mark over the previous year.
The most important change was a 25% increase in average weekly attendance over the previous year and the highest attendance record of any Level 6 module in the Faculty, where attendance was not mandated. Perhaps more importantly, when answering the question 'What did you like best about this module?', the student responses shifted from a majority mentioning the field trip to 82% citing the assessment as their favourite part of the module. 119: How do students engage with personalised feedback from a summative clinical examination? Beverley Merricks University of Birmingham, Birmingham, UK This poster is relevant to delegates from other disciplines because it will: • Provide an introduction to the self-regulated learning model and how it relates to engagement with feedback. • Suggest methods of enhancing the use of feedback by students. • Show examples of feedback, including a method of providing individualised
feedback from clinical examinations that may be of use to other health care professions. Abstract Birmingham Medical School provides students with five types of feedback after the end-of-4th-year summative clinical examination. However, it was not known how our students engaged with it in order to improve their knowledge and skills as undergraduates and to inculcate the practice of effectively engaging with feedback that will also be expected of them in their professional life (General Medical Council, 2009). Various studies have shown that students do not necessarily use feedback (e.g. Orsmond, Merry, & Reiling, 2005 and Sinclair & Cleland, 2007). I undertook an exploratory study in Autumn 2014 to begin to understand how learners at different levels of performance engaged with the feedback and which types of feedback they preferred. Currently we do not provide a dialogical framework to enhance learners' ability to make sense of and act on the feedback (Nicol, 2006 and Archer, 2010), and I was also interested to explore whether senior students who are embedded in a strongly bounded community of practice (Lave and Wenger, 1991) felt this was needed. A semi-structured interview schedule covering the components of a model of self-regulated learning (Nicol & Macfarlane-Dick, 2006 and Sandars & Cleary, 2011) was used to capture the views of students from three performance levels (top = 4/15, borderline pass = 4/16 and fail = 3/14, so 11 students interviewed out of a potential 45 from a cohort of 380) who volunteered to share their experiences. Individual interviews were transcribed and analysed using a constant comparison (Gibbs, 2007) thematic coding method to identify, interpret and synthesise patterns in the data via NVivo software. This was a pilot study to ascertain whether the proposed self-regulated learning model was appropriate and whether barriers to engaging with feedback reported in other studies (e.g. Sargeant et al., 2008) were applicable to this context.
The poster will reveal the results of my pilot study, discuss the limitations of the study and outline my plans to improve student engagement with feedback. 120: Joining the pieces: using concept maps for integrated learning and assessment in an introductory Management course. Heather Connolly, Dorothy Spiller University of Waikato, Hamilton, New Zealand This paper discusses and evaluates an Action Research initiative to introduce and embed concept maps into the teaching, learning and assessment of a large introductory Management course at the University of Waikato, New Zealand. The paper convener introduced concept mapping into this course as one strategy to manage the multiple challenges associated with teaching an introductory paper with a large student cohort. Concept mapping offered a tool to address the problems of disconnection and fragmentation that are often associated with courses like this, as well as opportunities to integrate learning and assessment. Specifically, the goals for introducing concept maps were as follows:
To help students to move away from fragmented learning and build links and connections between multiple ideas and topics
To help students to learn to use a tool for identifying and articulating relationships and connections between ideas for the long term
To provide a pathway for student learning of a wide range of material at an introductory level
To provide an assessment tool that enables both teachers and students to get feedback on learning progress
To integrate assessment and course learning
The strategies for teaching concept mapping and the assessment will be described and evaluated in relation to these goals, and refinements for improvements in next year's course will be outlined. Evaluation draws on students' feedback and on the observations of both lecturer and tutors on the quality and usefulness of the students' concept maps. Samples of students' concept maps are analysed to evaluate the extent to which they demonstrate the capacity to identify connections and relationships across topics, ideas and theories. Evaluation indicates the potential of the concept mapping strategy, acknowledges the challenges of moving learners away from a linear, processed-package model of learning and argues for the need to embed the initiative in multiple aspects of the course. 121: Decision-making theory and assessment design: a conceptual and empirical exploration Gordon Joughin1, David Boud2, Phillip Dawson3, Margaret Bearman3, Elizabeth Molloy3, Sue Bennett4 1 Higher Education Consultant, Brisbane, Australia, 2Deakin University, Melbourne, Australia, 3Monash University, Melbourne, Australia, 4University of Wollongong, Wollongong, Australia Academics in higher education typically have considerable discretion in deciding how their students' learning should be assessed.
While recent years have seen an extraordinary growth in research, theorising, writing and documenting of best practice in assessment (e.g. Kreber, Anderson, Entwistle & McArthur, 2014; Boud & Molloy, 2013; Merry, Price, Carless & Taras, 2013), there remains a significant disjunction between what have come to be regarded as best practices in assessment and actual assessment ‘at the coalface'. This paper considers this disjunction through the lens of decision-making theory and research. Can such theory and research illuminate and help to improve decision-making in relation to the design of assessment tasks? This paper addresses this question by outlining aspects of decision-making highlighted in the decision-making literature which seem likely to be pertinent to the process of decision-making in assessment (e.g. Kahneman, 2011; Etzioni, 2001; Simon, 1982). These aspects provide the framework for analysing 31 interviews with academics across four universities and diverse disciplines which sought to illuminate the nature of assessment decision-making ‘on the ground'. The first section of this paper outlines the conceptual and organisational context of
assessment planning, noting the abundance of literature and the complex demands placed on assessment in the contemporary higher education environment. The second section summarises decision-making theory and research that would seem to have direct implications for assessment decision-making in a complex environment. The third section considers the findings of interviews with the 31 Australian academics who were interviewed about aspects of their assessment planning, noting how their planning processes align with or are challenged by the theories and research discussed previously. The final section of the paper considers the implications of decision-making theory and research for affirming, challenging and improving assessment planning in higher education. 122: Domains influencing student perceptions of feedback Margaret Price, Berry O'Donovan, Birgit den Outer, Jane Hudson Oxford Brookes University, Oxford, UK Whether feedback is effective (in that it leads to more learning) or seen as ‘good' (by students and in NSS scores) is a complex matter that is shaped by its assessment context, the markers, and the students. This presentation highlights the main domains that are influential in determining whether feedback is seen as good. Awareness of these domains, and of the kinds of patterns of influence that might be present on a particular course, can help to diagnose problems and support decision-making about improving feedback in different contexts. The presentation reports on HEA-sponsored research investigating what leads learners to consider feedback to be ‘good' or ‘bad'. Student research assistants (SRAs, n=8) carried out the recruitment, data collection, coding and initial analysis for this research in two different disciplines at two contrasting institutions. The SRAs instructed the 32 student participants to select two pieces of feedback: one which they considered to be ‘good' (useful) and the other which they considered to be ‘bad' (not useful).
They then conducted semi-structured interviews with students around the two pieces of written feedback. Initial findings from this phase informed the development and trialling of feedback initiatives within modules at both institutions. Findings suggest that common generalisations about the salient issues in feedback are misleading for many contexts. Such generalisations can constrain more nuanced solutions or even impose practices that cause problems. For example, many generic policies focus on the technical aspects of feedback - what markers write down on students' assignments, and when they hand it back. The research reported here finds that such technical aspects form only a small part of the complex phenomenon of good feedback. Changing technical aspects alone is unlikely to change whether feedback is effective (in that it leads to more learning) or whether it is seen as ‘good' (by students and in NSS scores). Instead, solutions may involve developing students as well as developing teachers, changing employment practices as well as changing quality assurance policy, and redesigning assessment patterns within course units as well as across degree programmes.
123: Challenges and benefits of assessing reflection Stefanie Sinclair, John Butcher, Anactoria Clarke The Open University, Milton Keynes, UK This paper is based on the findings of a funded scholarship project at The Open University, involving the critical evaluation of the inclusion of reflective tasks in the formal assessment strategy of the Open University's Arts and Languages Access module. This module is a 30-week part-time cross-disciplinary course that is delivered through blended distance learning and aimed at students from a Widening Participation background (in Open University terms, economically disadvantaged adult learners with low prior qualifications taking first, tentative steps in higher education). The assessment strategy on the module has been informed by the principles of inclusive assessment. Drawing on key literature around reflection in HE (Cowan, 2013; Dyment & O'Connell, 2011; Moon, 2001, 2006; Race, 2010), inclusive assessment in HE (Butcher et al, 2010; Gravestock, 2006; Hockings, 2010; McDowell, 2008) and formative feedback in HE (Evans, 2013; Nicol, 2007, 2009; Nicol & Macfarlane-Dick, 2006; Shute, 2008), the research explored student and tutor perceptions of a range of non-assessed and assessed reflective tasks. 337 students on the first presentation of this module were invited to take part in an online survey, which aimed to establish how students new to HE experienced assessed and non-assessed reflective tasks, how they understood the assessed learning outcomes, and how they responded to tutor feedback on their reflections (126 responses were received). Following the analysis of student data, samples of assessment comments from 24 tutors were closely examined and interviews with a representative volunteer sample of tutors (N=8) were conducted to establish tutors' views on the challenges of assessing and providing effective feedback on reflective tasks.
Our investigation centres on the following key questions: How does the assessment of reflective tasks affect students' perception of and engagement with reflective tasks? How do tutors view the transition to a more formal assessment of reflective tasks? How can the assessment of reflective tasks support Widening Participation and enable effective inclusion? 124: Effective Extensions: managing the lived experience of online students Susanna Chamberlain, David Baker, Danielle Zuvela Griffith University, Brisbane, Australia This paper presents the results of a research project examining two years of data from 10,000 students who have undertaken an online course for Open Universities Australia, through Griffith University in Brisbane, Australia. The students come from across Australia and many from overseas, and have a very wide range of life circumstances and learning styles.
In the early iterations of the unit, we established an online process for requesting and granting extensions of time for assessment items. This consisted of a quiz with the opportunity to upload supporting documentation. The results of the quizzes over two years were analysed and showed a number of key results, demonstrating that the extension process allowed students to complete and submit assignments, with two thirds of the requests being followed by submissions. One of the more intriguing findings is that many of the submissions after extension tended to be of very high quality. While the literature on assessment does not concentrate on the value of extra time, there is some material available about time management. The research also uncovered the complexity of the lived experience of students who undertake online learning at university level. This paper offers our insight into the kinds of complex situations experienced by online learners, and what supports completion of units. We were able to distinguish several major issues for students, ranging across medical, housing, family and work circumstances, among others, and to see the overlap where students were dealing with more than one issue at a time. Another intriguing finding is that the chance of submission after extension fell dramatically when the combination of circumstances included moving home, family or relationship problems and a change in working conditions. This is a large-scale study of online learners and their circumstances, which does seem to be a rarity in the literature. The material has been analysed using both statistical analysis and a qualitative reflection-on-practice process which one of the authors is developing as a research methodology. I have not included any of the statistics here, hoping instead to give an overview of the project and some findings.
125: Valid and reliable assessment of students' academic writing using Comparative Judgement Liesje Coertjens, Tine van Daal, Marije Lesterhuis, Vincent Donche, Sven De Maeyer University of Antwerp, Antwerp, Belgium Assessing competences in a valid and reliable way is a major challenge for today's higher education. A crucial aspect of performance assessment is relating some kind of appreciation (e.g., a numerical score) to a performance. Commonly, assessors use a scoring rubric (a list of criteria that are scored separately) to assist the rating of representations. The literature shows that this practice often results in poor credibility (Bramley, 2007; Pollitt, 2012): the scores do not adequately reflect the assessee's level of competence. Thus, assessing competences in a credible fashion remains an important challenge. Recently, Comparative Judgement (CJ) emerged as a promising alternative. CJ is a holistic assessment approach based on Thurstone's work on comparative judgement (Thurstone, 1927). Thurstone claims that people are more reliable in comparing two elements in terms of which one is better than in assigning scores. Putting CJ into practice by applying multiple pairwise comparisons to the assessment of students' performances generates an interval scale that ranks students' work according to its quality (Bramley, 2007). Educational research based on the CJ method demonstrates the capacity of the assessment method to measure a wide range of complex competences such as writing, mathematics and design & technology (e.g., Heldsinger & Humphry, 2010; Jones & Alcock, 2012; Kimbell et al., 2009). Within these studies, authentic pieces of students' work are compared. In general, reliable rank-orders were detected. However, though it is claimed that holistic judgement, by means of CJ, leads to more valid assessments (Pollitt & Crisp, 2004; Pollitt, 2012), empirical evidence on this remains scarce. The research questions are therefore: 1. Does grading using CJ prove to be reliable? 2. Does grading using CJ prove to be valid? This paper presents a study on the reliability and validity of CJ to assess the competence ‘academic writing' in a pre-master's course at a Flemish university (Belgium). Eleven judges judged 41 tasks by making 236 comparisons. This resulted in a reliable ranking (G = 2.33; α = .84). The judges provided a rationale for choosing a certain paper, allowing us to investigate the degree to which judges judged the papers on relevant aspects regarding the assessed competence. An analysis of these judgements in light of validity will be presented on the poster.
Parallel Sessions by Theme Assessment research: theory, method and critique Examine student theses - similarities and differences in relation to examiners' experience Mats Lundström*, Lars Björklund, Karin Stolpe, Maria Åström Day 2: Parallel Session 8 - 09.50am
I wish I could believe you: the frustrating unreliability of some assessment research Tim Hunt*, Sally Jordan Day 1: Parallel Session 1 - 11.20am
Ipsative assessment for student motivation and longitudinal learning Gwyneth Hughes* Day 1: Parallel Session 2 - 12 noon
To measure the unmeasurable: using Repertory Grid Technique to elicit tacit criteria used by examiners. Lars Björklund*, Karin Stolpe, Mats Lundström, Maria Åström Day 2: Poster Session 3: Assessment Research: theory, method and critique - 11.30am
The Abstract Labour of Learning and the Value of Assessment Paul Sutton* Day 2: Parallel Session 9 - 10.35am
Why is formative assessment so complicated? What is behind the push-me, pull-you relationship between theory and practice and how can we all move forward? Donna Hurford* Day 2: Parallel Session 7 - 9.15am
Assessment literacies A quantitative analysis of student engagement with online feedback. Claire Moscrop* Day 2: Poster Session 2: Assessment Literacies - 11.30am
Assessment representations and practices in Italian higher education context: Hints from a case study Serafina Pastore*, Monica Pentassuglia Day 2: Poster Session 2: Assessment Literacies - 11.30am
Charting the assessment landscape: preliminary evaluations of an assessment map Anke C. Buttner*, Carly Pymont Day 1: Parallel Session 4 - 15.30pm
Developing assessment literacy for Postgraduates who Teach: compliance or quality enhancement? John Dermo* Day 2: Poster Session 2: Assessment Literacies - 11.30am
Dialogue+: Promoting first year undergraduate students' understanding of, and participation with assessment and feedback processes Rebecca Westrup* Day 1: Parallel Session 5 - 16.10pm
I don’t have time to attend a 2 hour training session: consequences and impact Neil Witt, Emma Purnell* Day 2: Poster Session 2: Assessment Literacies - 11.30am
Investigating the feedback gap(s) in pre-service language teacher education: What is the Emperor really wearing (and who will tell)? June Starkey* Day 1: Parallel Session 1 - 11.20am
The AsSET toolkit: developing assessment self-efficacy to improve performance Sue Palmer-Conn*, David McIlroy Day 2: Poster Session 2: Assessment Literacies - 11.30am
The Power of the "One-Pager": a simple idea for effective, informal formative assessment Deborah Anderson*, Rebecca Lees Day 2: Poster Session 2: Assessment Literacies - 11.30am
Using exemplars to develop assessment literacy: what do students learn to notice during pre-assessment workshops? Kay Sambell*, Linda Graham Day 1: Parallel Session 2 - 12 noon
Assessment challenges in disciplinary and professional contexts ‘Another brick in the wall’? Teachers’ representations about assessment and teacher education processes. Serafina Pastore*, Monica Pentassuglia Day 1: Parallel Session 1 - 11.20am
Assessing Student Learning: A Source of Ethical Concern for Higher Education Teachers Luc Desautels*, Christiane Gohier, France Jutras, Philippe Chaubet Day 1: Parallel Session 6 - 16.50pm
Assessment for Employment: introducing "Engineering You're Hired" Patricia Murray*, Andrea Bath, Russell Goodall, Rachel Horn Day 1: Parallel Session 5 - 16.10pm
Challenges and benefits of assessing reflection Stefanie Sinclair*, John Butcher, Anactoria Clarke Day 2: Parallel Session 10 - 13.30pm
Conceptualising Fellowship of the Higher Education Academy (HEA) as an assessment process Nicola Reimann*, Ian Sadler Day 2: Parallel Session 7 - 9.15am
Custom essay writing and other paid third parties in Higher Education; what can we do about it? Symposium see also Mary Davis and Irene Glendinning
Phil Newton*
Day 2: Parallel Session 9 - 10.35am
Getting traction on assessment development: what can we learn from a professions' (Law; Medicine) perspective? Chris Trevitt* Day 1: Parallel Session 4 - 15.30pm
How to assess our students well: innovative approaches for addressing the challenges of assessment and feedback Yue Zhao* Day 2: Poster Session 1: assessment challenges in disciplinary and professional contexts - 11.30am
How can admissions testing better select candidates for professional programmes? Belinda Brunner* Day 2: Poster Session 1: assessment challenges in disciplinary and professional contexts - 11.30am
Identifying potential English language teachers from a cohort of MA students in order to meet the requirements of an external validation authority Susan Sheehan* Day 2: Poster Session 1: assessment challenges in disciplinary and professional contexts - 11.30am
Incorporating digital technologies in the assessment of oral presentations at a distance Stefanie Sinclair* Day 2: Parallel Session 9 - 10.35am
International postgraduate students and academic integrity: challenges and strategies to support Symposium see also Irene Glendinning and Phil Newton
Mary Davis*
Day 2: Parallel Session 8 - 09.50am
Meeting the challenge of assessment when personal transformation is the outcome Annette Becker* Day 1: Parallel Session 3 - 14.45pm
The use of stakeholder-informed simulation in assessment: sharing experience from an undergraduate medical student disability awareness programme. Adam Wilson*, Anand Gidwani, Christopher Meneilly, Vivienne Crawford, David Bell Day 2: Poster Session 1: assessment challenges in disciplinary and professional contexts - 11.30am
Diversity and inclusivity Culturally Responsive Assessment: Modifying Assessment Processes to Meet Diverse Student Needs Natasha Jankowski*, Erick Montenegro Day 1: Parallel Session 2 - 12 noon
Effective Extensions: managing the lived experience of online students Susanna Chamberlain*, David Baker, Danielle Zuvela Day 2: Parallel Session 10 - 13.30pm
Gender differences in completion and credit on physical science modules Niusa Marigheto*, Victoria Pearson, Pam Budd, Jimena Gorfinkiel, Richard Jordan, Sally Jordan Day 2: Poster Session 4: Diversity and Inclusion - 11.30am
Learning diversity in higher education: Comparison of learning experiences among cross cultural student populations in a Hong Kong university Yue Zhao* Day 2: Poster Session 4: Diversity and Inclusion - 11.30am
Preparing international students for the diversity of UK assessment within a UK-China articulation agreement Katie Szkornik*, Alix Cage, Ian Oliver, Zoe Robinson, Ian Stimpson, Keziah Stott, Sami Ullah, Richard Waller Day 2: Poster Session 4: Diversity and Inclusion - 11.30am
Written Assessment and Feedback Practices in Postgraduate Taught Courses: an international perspective Victor Guillen Solano* Day 1: Parallel Session 1 - 11.20am
Institutional change in assessment policy and practice A sensible future for moderation? Sue Bloxham*, Lenore Adie, Clair Hughes Day 1: Parallel Session 5 - 16.10pm
Applying assessment regulations equitably and transparently Marie Stowell*, Harvey Woolf Day 1: Parallel Session 6 - 16.50pm
Changing the Assessment Imagination: designing a supra-programme assessment framework at Faculty level. Jessica Evans*, Simon Bromley Day 2: Parallel Session 7 - 9.15am
Decision-making theory and assessment design: a conceptual and empirical exploration Gordon Joughin*, David Boud, Phillip Dawson, Margaret Bearman, Elizabeth Molloy, Sue Bennett Day 2: Parallel Session 10 - 13.30pm
Developing and Embedding Inclusive Assessment across Plymouth University Pauline Kneale, Jane Collings* Day 1: Parallel Session 1 - 11.20am
E-marking: institutional and practitioner perspectives Carmen Tomas* Day 1: Parallel Session 6 - 16.50pm
From practice oriented and academic traditions to academic professional qualifications – A historical view of Swedish teacher education Karin Stolpe*, Mats Lundström, Lars Björklund, Maria Åström Day 2 : Parallel Session 8 - 09.50am
Formative thresholded assessment: Reflections on the evaluation of a faculty-wide change in assessment practice Sally Jordan* Day 1: Parallel Session 5 - 16.10pm
Grade Point Average: Outcomes from the UK pilot Higher Education Academy Day 1: Parallel Session 3 - 14.45pm
Helping the horses to drink: lessons learned from an institution-wide programme designed to enhance assessment Andy Lloyd* Day 1: Parallel Session 4 - 15.30pm
How can an institution increase the assessment quality of its examiners? Remko van der Lei*, Brenda Aalders Day 1: Parallel Session 4 - 15.30pm
How mature are your institutional policies for academic integrity? Symposium see also Mary Davis and Phil Newton
Irene Glendinning*
Day 2: Parallel Session 7 - 9.15am
Increasing assessment literacy through institutional change Rachel Forsyth*
Day 2: Poster Session 5: Institutional change in assessment policy and practice - 11.30am
Institutional approach to improving feedback and assessment practices using TESTA at the University of Greenwich Monika Pazio*, Duncan McKenna Day 1: Parallel Session 2 - 12 noon
Leading Enhancements in Assessment and Feedback (LEAF Scotland) Dave Morrison*, Hazel Marzetti
Day 2: Poster Session 5: Institutional change in assessment policy and practice - 11.30am
Marking on and off line - a university wide pilot Sue Gill, Christie Harner* Day 2: Poster Session 5: Institutional change in assessment policy and practice - 11.30am
Standardising Assessment at the Institution to Increase Student Learning Stuart Blacklock* Day 2: Poster Session 5: Institutional change in assessment policy and practice - 11.30am
Student understandings and use of learning outcomes in higher education Tine Sophie Prøitz*, Anton Havnes Day 2: Parallel Session 9 - 10.35am
The assessment challenge: an end-to-end solution Paolo Oprandi*, Carol Shergold, David Walker, Catherine Jones
Day 2: Poster Session 5: Institutional change in assessment policy and practice - 11.30am
Transforming the Experience of Staff through Assessment Eddie Mighten*, Diane Burkinshaw Day 1: Parallel Session 3 - 14.45pm
Using an evidence based approach to transform academic approaches to assessment. Courtney Simpson, Caroline Speed, Alexandra Dimitropoulos, Janet Macaulay* Day 2: Poster Session 5: Institutional change in assessment policy and practice - 11.30am
Walking the Assessment Talk: Aligning what we believe, say, and do John Delany*
Day 2: Poster Session 5: Institutional change in assessment policy and practice - 11.30am
Learning and contemporary higher education assessment
An evaluation of the student and staff experience of the introduction of audio feedback for undergraduate assessment Nick Purkis*, Sandy Stockwell, Jane Jones, Pam Maunders, Kirsty Brinkman Day 2: Poster Session 6: Learning and contemporary higher education assessment - 11.30am
Animate to communicate: using digital media for assessment Jenny Fisher*, Hayley Atkinson Day 1: Parallel Session 3 - 14.45pm
Assessment Feedback Practice In First Year Using Digital Technologies – Preliminary Findings from an Irish Multi-Institutional Project Lisa O'Regan*, Mark Brown, Moira Maguire, Nuala Harding, Elaine Walsh, Gerry Gallagher, Geraldine McDermott Day 2: Poster Session 6: Learning and contemporary higher education assessment - 11.30am
Assessment timing: student preferences and its impact on performance Richard McManus*
Day 2: Poster Session 6: Learning and contemporary higher education assessment - 11.30am
Case-Based Assessments in Business Management: Think Local, Not Global Carl Evans* Day 1: Parallel Session 5 - 16.10pm
Embedding key transferable skills for success during and after University through innovative assessment Joanne Hooker*, Jayne Whistance Day 2: Parallel Session 9 - 10.35am
Employer Led Problem Based Learning: Developing and Assessing Employability Skills for Success Ron Cambridge* Day 1: Parallel Session 2 - 12 noon
Enhancing Engagement through Collaboration in Assessment Daniel Russell*, Barry Avery Day 2: Parallel Session 8 - 09.50am
Higher education teachers’ assessment practices: Formative espoused but not yet fully implemented Ernesto Panadero*, Gavin Brown Day 2: Parallel Session 8 - 09.50am
Impact on Student Learning: Does Assessment Really Make A Difference? Natasha Jankowski* Day 1: Parallel Session 1 - 11.20am
Joining the pieces: using concept maps for integrated learning and assessment in an introductory Management course. Heather Connolly, Dorothy Spiller* Day 2: Parallel Session 10 - 13.30pm
Measuring the impact of high quality instant feedback on learning. Stephen Nutbrown*, Su Beesley, Colin Higgins Day 1: Parallel Session 1 - 11.20am
On-line Assessment and Personalised Feedback - Some Novel Approaches Jill Barber* Day 1: Parallel Session 2 - 12 noon
Oral forms of assessment and the nature of the spoken word: Insights from the world of acting and actor training Gordon Joughin*, Eliot Shrimpton Day 1: Parallel Session 4 - 15.30pm
Peer and Public Pressure: Using Assessment to Raise Confidence and Ambitions amongst undergraduate History and Sports Students Lee Pridmore, Ruth Larsen*, Ian Whitehead Day 2: Poster Session 6: Learning and contemporary higher education assessment - 11.30am
Peer Reflection within Sports Coaching Practical Assessments Martin Dixon*, Chris Lee, Craig Corrigan Day 1: Parallel Session 3 - 14.45pm
Placement for Access and a Fair Chance of Success in South African Higher Education Institutions Robert Prince* Day 2: Parallel Session 7 - 9.15am
Student and staff experiences of peer review and assessment in undergraduate UK HE settings Denise Carter, Julia Holdsworth* Day 2: Poster Session 6: Learning and contemporary higher education assessment - 11.30am
Reflective activities and summative assessment in an open university access to higher education module Carolyn Richardson* Day 2: Poster Session 6: Learning and contemporary higher education assessment - 11.30am
Standardised Assessment to Increase Student Learning and Competency Ida Asner* Day 1: Parallel Session 3 - 14.45pm
‘Skills Passport’ for Life Sciences at Edinburgh Napier University: Helping students to help themselves Janis MacCallum*, Samantha Campbell-Casey, Patricia Durkin, Anne MacNab Day 2: Poster Session 6: Learning and contemporary higher education assessment - 11.30am
Structuring peer assessment and its evaluation by learning analytics Blazenka Divjak* Day 2: Parallel Session 9 - 10.35am
The constrained impact of a capstone dissertation assessment on the continuing workplace learning of master teachers Pete Boyd*, Hilary Constable Day 1: Parallel Session 4 - 15.30pm
The effect of the test re-do process on learner development in higher education foreign language courses Kristen Sullivan* Day 2: Poster Session 6: Learning and contemporary higher education assessment - 11.30am
The Impact of Commercial Involvement on the Development of Academic Processes and on the Quality of Outcomes: A Case Study Ufuk Cullen*, Zach Thompson Day 2: Poster Session 6: Learning and contemporary higher education assessment - 11.30am
The impact of the assessment process and the international MA-TESOL course on the professional identity of Vietnamese student teachers David Leat, Tran Thanh Nhan* Day 2: Parallel Session 7 - 9.15am
The journey to digital storytelling and artifact-based assessment in Psychology: lessons to be learned from the arts-based disciplines. Diane Westwood* Day 1: Parallel Session 6 - 16.50pm
Assessment Strategy: Online Distance Education Elaine Walsh*, James Brunton
Day 2: Poster Session 6: Learning and contemporary higher education assessment - 11.30am
Using participatory photography as an assessment method: the challenges Gwenda Mynott* Day 1: Parallel Session 5 - 16.10pm
Valid and reliable assessment of students' academic writing using Comparative judgement Liesje Coertjens*, Tine van Daal, Marije Lesterhuis, Vincent Donche, Sven De Maeyer Day 2: Parallel Session 10 - 13.30pm
Visualising the Narrative: Assessment through a programmatic lens Bryan Taylor*, Mark Russell
Day 2: Poster Session 6: Learning and contemporary higher education assessment - 11.30am
Student responses to assessment
A moving target: assessing the process and progress of learning John Couperthwaite* Day 2: Parallel Session 8 - 09.50am One hour session
An alternative explanatory framework for what students want from feedback, what they actually use, and what tutors think they need Mark Carver* Day 1: Parallel Session 2 - 12 noon
Chinese Tutor and Undergraduate Responses to an Assessment Change Jiming Zhou* Day 1: Parallel Session 6 - 16.50pm
Designing assessments to develop academic skills while promoting good academic practice and limiting students' use of purchased or repurposed materials Ann Rogerson* Day 2: Poster Session 7: Student responses to assessment - 11.30am
Developing collegial relationships: Students providing feedback on staff member's teaching and assessment practices. Jennifer Scoles, Mark Huxham* Day 2: Poster Session 7: Student responses to assessment - 11.30am
Developing pedagogy: Using Pecha Kucha as formative assessment in two undergraduate modules Nicola Hirst* Day 2: Poster Session 7: Student responses to assessment - 11.30am
Domains influencing student perceptions of feedback Margaret Price*, Berry O'Donovan, Birgit den Outer, Jane Hudson Day 2: Parallel Session 10 - 13.30pm
EFL tutors and students perceptions of written assessment, feedback and criteria across six English departments at a Libya University Imad Waragh* Day 2: Poster Session 7: Student responses to assessment - 11.30am
Experiences of co-creating marking criteria Nicky Meer*, Amanda Chapman Day 1: Parallel Session 6 - 16.50pm
Exploring students' perceptions about peer-evaluation: a case study Elizabeth Ruiz Esparza Barajas* Day 1: Parallel Session 2 - 12 noon
Feeding forward from feedback with Business and Food first years Jane Headley*, Pam Whitehouse Day 2: Poster Session 7: Student responses to assessment - 11.30am
From research to practice: The connections students make between feedback and future learning Stuart Hepplestone*, Helen J. Parkin Day 2: Parallel Session 8 - 09.50am
How do students engage with personalised feedback from a summative clinical examination? Beverley Merricks* Day 2: Parallel Session 10 - 13.30pm
Improving Communication of Assessment Task Requirements and Expectations Through Improving Assignment Brief Design. Garry Maguire*, Fiona Gilbert Day 2: Parallel Session 8 - 09.50am
Investigating student preferences for a novel method of assessment feedback: A comparison of screencast and written feedback through questionnaire and focus group methods David Wright*, Damian Keil Day 1: Parallel Session 4 - 15.30pm
Learner engagement with Interactive Computer Marked Assignments on beginners’ language modules Anna Proudfoot*, Anna Comas-Quinn, Ursula Stickler, Qian Kan, Tim Jilg Day 1: Parallel Session 1 - 11.20am
‘Leave me alone, I’m trying to do my work’ - The discrepancies between staff and students’ perceptions of feedback and assessment practices Monika Pazio*, Duncan McKenna Day 2: Parallel Session 7 - 9.15am
Live Peer Assessment: Its Effects and After Effects Steve Bennett*, Trevor Barker Day 2: Parallel Session 8 - 09.50am
Making the formative feedback effective: Feed-forward feedback: Study of student’s perception on the video assignment guidance and its influence on their learning Harish Jyawali* Day 2: Poster Session 7: Student responses to assessment - 11.30am
Making use of assessment feedback: Students' perceptions of the utility of interventions for supporting their engagement with feedback. Naomi Winstone*, Michael Parker, Robert Nash Day 1: Parallel Session 2 - 12 noon
Marketing Downloads: Student response to a learning and assessment innovation at Kingston Business School Hilary Wason*, Nathalie Charlton, Debbie Anderson Day 1: Parallel Session 3 - 14.45pm
Measuring tertiary students' progress in English for specific purposes courses with self-assessment Dietmar Tatzl* Day 2: Poster Session 7: Student responses to assessment - 11.30am
Overcoming Assessment Challenges - Tipping the Balance Ruth Sutcliffe*, Rachel Sparks Linfield, Ros Geldart Day 2: Poster Session 7: Student responses to assessment - 11.30am
Phenomenographically exploring students' utilisation of feedback Edd Pitt* Day 2: Parallel Session 9 - 10.35am
Preconceptions surrounding automated assessment - A study of staff and students. Stephen Nutbrown*, Su Beesley, Colin Higgins Day 1: Parallel Session 6 - 16.50pm
Portraying Assessment: The Fear of Never Being Good Enough Peter Day, Harvey Woolf* Day 1: Parallel Session 4 - 15.30pm
Student Perceptions of different Assessment Modes in Computer Programming Courses Suraj Ajit* Day 2: Parallel Session 9 - 10.35am
Student perceptions of oral and written feedback Anna Steen-Utheim*
Day 2: Poster Session 7: Student responses to assessment - 11.30am
Students Love Assessment: Using assessment to improve engagement Toby Carter*, Nancy Harrison, Julian Priddle Day 2: Poster Session 7: Student responses to assessment - 11.30am
Students' responses to formative and summative online feedback generated using a statement bank: Outcomes from two quantitative studies Philip Denton*, David McIlroy Day 1: Parallel Session 5 - 16.10pm
Students' responses to learning-oriented assessment David Carless* Day 2: Parallel Session 7 - 9.15am
The influence of students' epistemic beliefs on their satisfaction with assessment and feedback Berry O'Donovan* Day 1: Parallel Session 1 - 11.20am
Presenter Index
Ajit, Suraj University of Northampton, UK Student Perceptions of different Assessment Modes in Computer Programming Courses Day 2: Parallel Session 9 - 10.35am
Anderson, Deborah Kingston University, UK The Power of the "One-Pager": a simple idea for effective, informal formative assessment Day 2: Poster Session 2: Assessment Literacies - 11.30am
Asner, Ida LiveText Consultant, USA Standardised Assessment to Increase Student Learning and Competency Day 1: Parallel Session 3 - 14.45pm
Barber, Jill University of Manchester, UK On-line Assessment and Personalised Feedback - Some Novel Approaches Day 1: Parallel Session 2 - 12 noon
Becker, Annette Utica College, USA Meeting the challenge of assessment when personal transformation is the outcome Day 1: Parallel Session 3 - 14.45pm
Bennett, Steve University of Hertfordshire, UK Live Peer Assessment: Its Effects and After Effects Day 2: Parallel Session 8 - 09.50am
Björklund, Lars Linköping University, Sweden To measure the unmeasurable: using Repertory Grid Technique to elicit tacit criteria used by examiners. Day 2: Poster Session 3: Assessment Research: theory, method and critique - 11.30am
Blacklock, Stuart LiveText, USA Standardising Assessment at the Institution to Increase Student Learning Day 2: Poster Session 5: Institutional change in assessment policy and practice - 11.30am
Bloxham, Sue University of Cumbria, UK A sensible future for moderation? Day 1: Parallel Session 5 - 16.10pm
Boud, David Deakin University, Australia Rethinking feedback for greater impact on learning Day 1: Master Classes - 09.30am
Boyd, Pete University of Cumbria, UK The constrained impact of a capstone dissertation assessment on the continuing workplace learning of master teachers Day 1: Parallel Session 4 - 15.30pm
Brunner, Belinda Pearson VUE, UK How can admissions testing better select candidates for professional programmes? Day 2: Poster Session 1: assessment challenges in disciplinary and professional contexts - 11.30am
Buttner, Anke C. University of Birmingham, UK Charting the assessment landscape: preliminary evaluations of an assessment map Day 1: Parallel Session 4 - 15.30pm
Cambridge, Ron London Metropolitan University, UK Employer Led Problem Based Learning: Developing and Assessing Employability Skills for Success Day 1.: Parallel Session 2 - 12 noon
Carless, David University of Hong Kong, Hong Kong Designing and carrying out effective assessment Day 1: Master Classes - 09.30am
Students' responses to learning-oriented assessment Day 2: Parallel Session 7 - 9.15am
Carter, Toby Anglia Ruskin University, UK Students Love Assessment: Using assessment to improve engagement Day 2: Poster Session 7: Student responses to assessment - 11.30am
Carver, Mark University of Cumbria, UK An alternative explanatory framework for what students want from feedback, what they actually use, and what tutors think they need Day 1: Parallel Session 2 - 12 noon
Chamberlain, Susanna Griffith University, Australia Effective Extensions: managing the lived experience of online students Day 2: Parallel Session 10 - 13.30pm
Coertjens, Liesje University of Antwerp, Belgium Valid and reliable assessment of students' academic writing using Comparative judgement Day 2: Parallel Session 10 - 13.30pm
Collings, Jane Plymouth University, UK Developing and Embedding Inclusive Assessment across Plymouth University Day 1: Parallel Session 1 - 11.20am
Couperthwaite, John Pebble Learning Ltd, UK A moving target: assessing the process and progress of learning Day 2: Parallel Session 8 - 09.50am One hour session
Cullen, Ufuk Greenwich School of Management, UK The Impact of Commercial Involvement on the Development of Academic Processes and on the Quality of Outcomes: A Case Study Day 2: Poster Session 6: Learning and contemporary higher education assessment - 11.30am
Davis, Mary Oxford Brookes University, UK International postgraduate students and academic integrity: challenges and strategies to support Day 2: Parallel Session 8 - 09.50am Symposium see also Irene Glendinning and Phil Newton
Delany, John Christchurch Polytechnic Institute of Technology, New Zealand Walking the Assessment Talk: Aligning what we believe, say, and do
Day 2: Poster Session 5: Institutional change in assessment policy and practice - 11.30am
Denton, Philip Liverpool John Moores University, UK Students' responses to formative and summative online feedback generated using a statement bank: Outcomes from two quantitative studies. Day 1: Parallel Session 5 - 16.10pm
Dermo, John University of Bradford, UK Developing assessment literacy for Postgraduates who Teach: compliance or quality enhancement? Day 2: Poster Session 2: Assessment Literacies - 11.30am
Desautels, Luc Cégep régional de Lanaudière, Canada Assessing Student Learning: A Source of Ethical Concern for Higher Education Teachers Day 1: Parallel Session 6 - 16.50pm
Divjak, Blazenka University of Zagreb, Faculty of Organization and Informatics, Croatia Structuring peer assessment and its evaluation by learning analytics Day 2: Parallel Session 9 - 10.35am
Dixon, Martin Staffordshire University, UK Peer Reflection within Sports Coaching Practical Assessments Day 1: Parallel Session 3 - 14.45pm
Evans, Carl University of St Mark & St John, UK Case-Based Assessments in Business Management: Think Local, Not Global Day 1: Parallel Session 5 - 16.10pm
Evans, Jessica The Open University, UK Changing the Assessment Imagination: designing a supra-programme assessment framework at Faculty level. Day 2: Parallel Session 7 - 9.15am
Fisher, Jenny Manchester Metropolitan University, UK Animate to communicate: using digital media for assessment Day 1: Parallel Session 3 - 14.45pm
Forsyth, Rachel Manchester Metropolitan University, UK Increasing assessment literacy through institutional change
Day 2: Poster Session 5: Institutional change in assessment policy and practice - 11.30am
Glendinning, Irene Coventry University, UK How mature are your institutional policies for academic integrity? Day 2: Parallel Session 7 - 9.15am Symposium see also Mary Davis and Phil Newton
Guillen Solano, Victor Sheffield Hallam University, UK Written Assessment and Feedback Practices in Postgraduate Taught Courses: an international perspective. Day 1: Parallel Session 1 - 11.20am
Harner, Christie Newcastle University, UK Marking on and off line - a university wide pilot
Day 2: Poster Session 5: Institutional change in assessment policy and practice - 11.30am
Higher Education Academy, Heslington, York, UK Grade Point Average: Outcomes from the UK pilot Day 1: Parallel Session 3 - 14.45pm
Headley, Jane Harper Adams University, UK Feeding forward from feedback with Business and Food first years Day 2: Poster Session 7: Student responses to assessment - 11.30am
Hepplestone, Stuart Sheffield Hallam University, UK From research to practice: The connections students make between feedback and future learning Day 2: Parallel Session 8 - 09.50am
Hirst, Nicola Liverpool John Moores University, UK Developing pedagogy: Using Pecha Kucha as formative assessment in two undergraduate modules Day 2: Poster Session 7: Student responses to assessment - 11.30am
Holdsworth, Julia University of Hull, UK Student and staff experiences of peer review and assessment in undergraduate UK HE settings Day 2: Poster Session 6: Learning and contemporary higher education assessment - 11.30am
Hooker, Joanne Southampton Solent University, UK Embedding key transferable skills for success during and after University through innovative assessment Day 2: Parallel Session 9 - 10.35am
Hughes, Gwyneth Institute of Education, UK Ipsative assessment for student motivation and longitudinal learning Day 1: Parallel Session 2 - 12 noon
Hunt, Tim Information Technology, The Open University, UK I wish I could believe you: the frustrating unreliability of some assessment research Day 1: Parallel Session 1 - 11.20am
Hurford, Donna University of Southern Denmark, Denmark, University of Cumbria, Lancaster, UK Why is formative assessment so complicated? What is behind the push-me, pull-you relationship between theory and practice and how can we all move forward? Day 2: Parallel Session 7 - 9.15am
Huxham, Mark Edinburgh Napier University, UK Developing collegial relationships: Students providing feedback on staff member's teaching and assessment practices. Day 2: Poster Session 7: Student responses to assessment - 11.30am
Jankowski, Natasha University of Illinois Urbana-Champaign, USA Culturally Responsive Assessment: Modifying Assessment Processes to Meet Diverse Student Needs Day 1: Parallel Session 2 - 12 noon
Impact on Student Learning: Does Assessment Really Make A Difference? Day 1: Parallel Session 1 - 11.20am
Jessop, Tansy University of Winchester, UK Transforming the Experience of Students through Assessment (TESTA) Day 1: Master Classes - 09.30am
Jordan, Sally The Open University, UK Formative thresholded assessment: Reflections on the evaluation of a faculty-wide change in assessment practice Day 1: Parallel Session 5 - 16.10pm
Joughin, Gordon Higher Education Consultant, Australia Decision-making theory and assessment design: a conceptual and empirical exploration Day 2: Parallel Session 10 - 13.30pm
Oral forms of assessment and the nature of the spoken word: Insights from the world of acting and actor training Day 1: Parallel Session 4 - 15.30pm
Jyawali, Harish GSM London, UK Making the formative feedback effective: Feed-forward feedback: Study of student’s perception on the video assignment guidance and its influence on their learning Day 2: Poster Session 7: Student responses to assessment - 11.30am
Larsen, Ruth University of Derby, UK Peer and Public Pressure: Using Assessment to Raise Confidence and Ambitions amongst Undergraduate History and Sports Students Day 2: Poster Session 6: Learning and contemporary higher education assessment - 11.30am
Lloyd, Andy Cardiff University, UK Helping the horses to drink: lessons learned from an institution-wide programme designed to enhance assessment Day 1: Parallel Session 4 - 15.30pm
Lundström, Mats Malmö University, Sweden Examine student theses - similarities and differences in relation to examiners' experience Day 2: Parallel Session 8 - 09.50am
Macaulay, Janet Monash University, Australia Using an evidence based approach to transform academic approaches to assessment. Day 2: Poster Session 5: Institutional change in assessment policy and practice - 11.30am
MacCallum, Janis Edinburgh Napier University, UK ‘Skills Passport' for Life Sciences at Edinburgh Napier University: Helping students to help themselves Day 2: Poster Session 6: Learning and contemporary higher education assessment - 11.30am
Maguire, Garry Oxford Brookes University, UK Improving Communication of Assessment Task Requirements and Expectations Through Improving Assignment Brief Design. Day 2: Parallel Session 8 - 09.50am
Marigheto, Niusa The Open University, UK Gender differences in completion and credit on physical science modules Day 2: Poster Session 4: Diversity and Inclusion - 11.30am
McManus, Richard Canterbury Christ Church University, UK Assessment timing: student preferences and its impact on performance
Day 2: Poster Session 6: Learning and contemporary higher education assessment - 11.30am
Meer, Nicky University of Cumbria, UK Experiences of co-creating marking criteria Day 1: Parallel Session 6 - 16.50pm
Merricks, Beverley University of Birmingham, UK How do students engage with personalised feedback from a summative clinical examination? Day 2: Parallel Session 10 - 13.30pm
Mighten, Eddie Sheffield Hallam University, UK Transforming the Experience of Staff through Assessment Day 1: Parallel Session 3 - 14.45pm
Morrison, Dave University of Glasgow, UK Leading Enhancements in Assessment and Feedback (LEAF Scotland) Day 2: Poster Session 5: Institutional change in assessment policy and practice - 11.30am
Moscrop, Claire Edge Hill University, UK A quantitative analysis of student engagement with online feedback Day 2: Poster Session 2: Assessment Literacies - 11.30am
Murray, Patricia University of Sheffield, UK Assessment for Employment: introducing "Engineering You're Hired" Day 1: Parallel Session 5 - 16.10pm
Mynott, Gwenda Liverpool John Moores University, UK Using participatory photography as an assessment method: the challenges Day 1: Parallel Session 5 - 16.10pm
Newton, Phil Swansea University, UK Custom essay writing and other paid third parties in Higher Education; what can we do about it? Day 2: Parallel Session 9 - 10.35am Symposium see also Mary Davis and Irene Glendinning
Nutbrown, Stephen University of Nottingham, UK Measuring the impact of high quality instant feedback on learning. Day 1: Parallel Session 1 - 11.20am
Preconceptions surrounding automated assessment - A study of staff and students. Day 1: Parallel Session 6 - 16.50pm
O'Donovan, Berry Oxford Brookes University, UK The influence of students' epistemic beliefs on their satisfaction with assessment and feedback Day 1: Parallel Session 1 - 11.20am
Oprandi, Paolo University of Sussex, UK The assessment challenge: an end-to-end solution Day 2: Poster Session 5: Institutional change in assessment policy and practice - 11.30am
O'Regan, Lisa Maynooth University, Ireland Assessment Feedback Practice In First Year Using Digital Technologies – Preliminary Findings from an Irish Multi-Institutional Project Day 2: Poster Session 6: Learning and contemporary higher education assessment - 11.30am
Palmer-Conn, Sue Liverpool John Moores University, UK The AsSET toolkit: developing assessment self-efficacy to improve performance Day 2: Poster Session 2: Assessment Literacies - 11.30am
Panadero, Ernesto Universidad Autónoma de Madrid, Spain Higher education teachers’ assessment practices: Formative espoused but not yet fully implemented Day 2: Parallel Session 8 - 09.50am
Pastore, Serafina University of Bari, Italy ‘Another brick in the wall’? Teachers’ representations about assessment and teacher education processes. Day 1: Parallel Session 1 - 11.20am
Assessment representations and practices in Italian higher education context: Hints from a case study Day 2: Poster Session 2: Assessment Literacies - 11.30am
Pazio, Monika University of Greenwich, UK Institutional approach to improving feedback and assessment practices using TESTA at the University of Greenwich Day 1: Parallel Session 2 - 12 noon
‘Leave me alone, I’m trying to do my work’ - The discrepancies between staff and students’ perceptions of feedback and assessment practices Day 2: Parallel Session 7 - 9.15am
Pitt, Edd University of Kent, UK Phenomenographically exploring students' utilisation of feedback Day 2: Parallel Session 9 - 10.35am
Price, Margaret Oxford Brookes University, UK Domains influencing student perceptions of feedback Day 2: Parallel Session 10 - 13.30pm
Help students to help themselves: developing assessment literacy Day 1: Master Classes - 09.30am
Prince, Robert University of Cape Town, South Africa Placement for Access and a Fair Chance of Success in South African Higher Education Institutions Day 2: Parallel Session 7 - 9.15am
Prøitz, Tine Sophie Buskerud and Vestfold University College, Norway Student understandings and use of learning outcomes in higher education Day 2: Parallel Session 9 - 10.35am
Proudfoot, Anna The Open University, UK Learner engagement with Interactive Computer Marked Assignments on beginners’ language modules Day 1: Parallel Session 1 - 11.20am
Purkis, Nick The University of Winchester, UK An evaluation of the student and staff experience of the introduction of audio feedback for undergraduate assessment Day 2: Poster Session 6: Learning and contemporary higher education assessment - 11.30am
Purnell, Emma Plymouth University, UK I don’t have time to attend a 2 hour training session: consequences and impact Day 2: Poster Session 2: Assessment Literacies - 11.30am
Reimann, Nicola University of Durham, UK Conceptualising Fellowship of the Higher Education Academy (HEA) as an assessment process Day 2: Parallel Session 7 - 9.15am
Richardson, Carolyn The Open University, UK Reflective activities and summative assessment in an open university access to higher education module Day 2: Poster Session 6: Learning and contemporary higher education assessment - 11.30am
Rogerson, Ann University of Wollongong, Australia Designing assessments to develop academic skills while promoting good academic practice and limiting students' use of purchased or repurposed materials Day 2: Poster Session 7: Student responses to assessment - 11.30am
Ruiz Esparza Barajas, Elizabeth Universidad de Sonora, Mexico Exploring students' perceptions about peer-evaluation: a case study Day 1: Parallel Session 2 - 12 noon
Russell, Daniel Kingston University, UK Enhancing Engagement through Collaboration in Assessment Day 2: Parallel Session 8 - 09.50am
Sambell, Kay Northumbria University, UK Using exemplars to develop assessment literacy: what do students learn to notice during pre-assessment workshops? Day 1: Parallel Session 2 - 12 noon
Sheehan, Susan University of Huddersfield, UK Identifying potential English language teachers from a cohort of MA students in order to meet the requirements of an external validation authority Day 2: Poster Session 1: assessment challenges in disciplinary and professional contexts - 11.30am
Sinclair, Stefanie The Open University, UK Challenges and benefits of assessing reflection Day 2: Parallel Session 10 - 13.30pm
Incorporating digital technologies in the assessment of oral presentations at a distance Day 1: Parallel Session 5 - 16.10pm
Spiller, Dorothy University of Waikato, New Zealand Joining the pieces: using concept maps for integrated learning and assessment in an introductory Management course. Day 2: Parallel Session 10 - 13.30pm
Starkey, June OISE/UT, Canada Investigating the feedback gap(s) in pre-service language teacher education: What is the Emperor really wearing (and who will tell)? Day 1: Parallel Session 1 - 11.20am
Steen-Utheim, Anna BI Norwegian Business School, Norway Student perceptions of oral and written feedback
Day 2: Poster Session 7: Student responses to assessment - 11.30am
Stolpe, Karin Linköping University, Sweden From practice oriented and academic traditions to academic professional qualifications – A historical view of Swedish teacher education Day 2: Parallel Session 8 - 09.50am
Stowell, Marie University of Worcester, UK Applying assessment regulations equitably and transparently Day 1: Parallel Session 6 - 16.50pm
Sullivan, Kristen Shimonoseki City University, Japan The effect of the test re-do process on learner development in higher education foreign language courses Day 2: Poster Session 6: Learning and contemporary higher education assessment - 11.30am
Sutcliffe, Ruth Leeds Beckett University, UK Overcoming Assessment Challenges - Tipping the Balance Day 2: Poster Session 7: Student responses to assessment - 11.30am
Sutton, Paul University of St Mark & St John, UK The Abstract Labour of Learning and the Value of Assessment Day 2: Parallel Session 9 - 10.35am
Szkornik, Katie Keele University, UK Preparing international students for the diversity of UK assessment within a UK-China articulation agreement. Day 2: Poster Session 4: Diversity and Inclusion - 11.30am
Tatzl, Dietmar FH Joanneum University of Applied Sciences, Austria Measuring tertiary students' progress in English for specific purposes courses with self-assessment Day 2: Poster Session 7: Student responses to assessment - 11.30am
Taylor, Bryan King's College London, UK Visualising the Narrative: Assessment through a programmatic lens
Day 2: Poster Session 6: Learning and contemporary higher education assessment - 11.30am
Thanh Nhan, Tran Newcastle University, UK The impact of the assessment process and the international MA-TESOL course on the professional identity of Vietnamese student teachers Day 2: Parallel Session 7 - 9.15am
Tomas, Carmen University of Nottingham, UK E-marking: institutional and practitioner perspectives Day 1: Parallel Session 6 - 16.50pm
Trevitt, Chris Australian National University, Australia Getting traction on assessment development: what can we learn from a professions' (Law; Medicine) perspective? Day 1: Parallel Session 4 - 15.30pm
van der Lei, Remko Hanze University of Applied Sciences, The Netherlands How can an institution increase the assessment quality of its examiners? Day 1: Parallel Session 4 - 15.30pm
Walsh, Elaine Dublin City University, Ireland Assessment Strategy: Online Distance Education Day 2: Poster Session 6: Learning and contemporary higher education assessment - 11.30am
Waragh, Imad University of Sunderland, UK EFL tutors and students perceptions of written assessment, feedback and criteria across six English departments at a Libya University Day 2: Poster Session 7: Student responses to assessment - 11.30am
Wason, Hilary Kingston University, UK Marketing Downloads : Student response to a learning and assessment innovation at Kingston Business School Day 1: Parallel Session 3 - 14.45pm
Westrup, Rebecca University of East Anglia, UK Dialogue+: Promoting first year undergraduate students' understanding of, and participation with assessment and feedback processes Day 1: Parallel Session 5 - 16.10pm
Westwood, Diane University of Sunderland, UK The journey to digital storytelling and artifact-based assessment in Psychology: lessons to be learned from the arts-based disciplines. Day 1: Parallel Session 6 - 16.50pm
Wilson, Adam Queen's University Belfast, UK The use of stakeholder-informed simulation in assessment: sharing experience from an undergraduate medical student disability awareness programme. Day 2: Poster Session 1: Assessment challenges in disciplinary and professional contexts - 11.30am
147
Winstone, Naomi University of Surrey, UK Making use of assessment feedback: Students' perceptions of the utility of interventions for supporting their engagement with feedback. Day 1: Parallel Session 2 - 12 noon
Woolf, Harvey University of Wolverhampton, UK Portraying Assessment: The Fear of Never Being Good Enough Day 1: Parallel Session 4 - 15.30pm
Wright, David Manchester Metropolitan University, UK Investigating student preferences for a novel method of assessment feedback: A comparison of screencast and written feedback through questionnaire and focus group methods Day 1: Parallel Session 4 - 15.30pm
Zhao, Yue The University of Hong Kong, Hong Kong How to assess our students well: innovative approaches for addressing the challenges of assessment and feedback Day 2: Poster Session 1: Assessment challenges in disciplinary and professional contexts - 11.30am
Learning diversity in higher education: Comparison of learning experiences among cross cultural student populations in a Hong Kong university Day 2: Poster Session 4: Diversity and Inclusion - 11.30am
Zhou, Jiming The University of Hong Kong, Hong Kong Chinese Tutor and Undergraduate Responses to an Assessment Change Day 1: Parallel Session 6 - 16.50pm
148
Conference Exhibitors & Main Sponsors
www.cengage.co.uk
www.cumbria.ac.uk
www.livetext.com
www.pebblelearning.co.uk
www.speedwellsoftware.com
frontinus.org.uk (an imprint of Frontinus Ltd)
149
Delegate List
First name
Last name
Email
Institution
Brenda
Aalders
b.j.aalders@pl.hanze.nl
Hanzehogeschool Groningen
Gamar
Abdul Aziz
gamar@sp.edu.sg
Singapore Polytechnic
Suraj
Ajit
suraj.ajit@northampton.ac.uk
University of Northampton
Reza
Akbarinouri
aber.reza@gmail.com
Islamic Azad University
Alaa
Al Sebae
alaa.alsebae@coventry.ac.uk
Coventry University College
Al-Zadma
Al-Fourganee
al-zadma.alfourganee@research.sunderland.ac.uk
University of Sunderland
Yahaya
Alhassan
yahaya.alhassan@sunderland.ac.uk
University of Sunderland
Carrie
Allen
callen7@illinois.edu
University of Illinois at Urbana-Champaign
Deborah
Anderson
d.anderson@kingston.ac.uk
Kingston University
Mandy
Asghar
e.sunley@yorksj.ac.uk
York St John University
Ida
Asner
kelly.grigus@livetext.com
LiveText
Maria
Åström
maria.astrom@edusci.umu.se
Umeå University
Svante
Axelsson
svante.axelsson@uadm.uu.se
Uppsala university
Jo-Anne
Baird
jo-anne.baird@education.ox.ac.uk
University of Oxford
Jill
Barber
jill.barber@manchester.ac.uk
University of Manchester
Adam
Barrigan
adam.barrigan@pearson.com
Pearson VUE
Caroline
Baruah
baruahc@westminster.ac.uk
University of Westminster
Hardeep
Basra
j.wilkins@lboro.ac.uk
Loughborough University
Annette
Becker
abecker@utica.edu
Utica College
Steve
Bennett
s.j.bennett@herts.ac.uk
University of Hertfordshire
Lars
Björklund
lars.bjorklund@liu.se
Linköping University
Stuart
Blacklock
stuart.blacklock@livetext.com
LiveText
Sue
Bloxham
s.bloxham@cumbria.ac.uk
University of Cumbria
Lizann
Bonnar
lizann.bonnar@strath.ac.uk
University of Strathclyde
Michaela
Borg
michaela.borg@ntu.ac.uk
Nottingham Trent University
David
Boud
david.boud@uts.edu.au
Deakin University/UTS
Pete
Boyd
pete.boyd@cumbria.ac.uk
University of Cumbria
Simon
Bromley
s.bromley@shu.ac.uk
Sheffield Hallam University
Belinda
Brunner
helen.henley@pearson.com
Pearson VUE
Tina
Bryant
t.bryant@herts.ac.uk
University of Hertfordshire
Diane
Burkinshaw
d.j.burkinshaw@shu.ac.uk
Sheffield Hallam University
Anke
Buttner
a.c.buttner@bham.ac.uk
University of Birmingham
Ron
Cambridge
r.cambridge@londonmet.ac.uk
London Metropolitan University
David
Carless
dcarless@hku.hk
The University of Hong Kong
Toby
Carter
toby.carter@anglia.ac.uk
Anglia Ruskin University
Mark
Carver
s0405526@uni.cumbria.ac.uk
University of Cumbria
Abby
Cathcart
abby.cathcart@qut.edu.au
Queensland University of Technology
150
Susanna
Chamberlain
s.chamberlain@griffith.edu.au
Griffith University
Amanda
Chapman
amanda.chapman@cumbria.ac.uk
University of Cumbria
Nathalie
Charlton
n.charlton@kingston.ac.uk
Kingston University
Esther
Chesterman
robinson.n@cie.org.uk
Cambridge International Examinations
Liesje
Coertjens
liesje.coertjens@uantwerpen.be
University of Antwerp
Judy
Cohen
j.cohen@kent.ac.uk
University of Kent
Jane
Collings
jane.collings@plymouth.ac.uk
Plymouth University
Lynda
Cook
lynda.cook@open.ac.uk
The Open University
Lisa
Coulson
l.j.coulson@bham.ac.uk
University of Birmingham
John
Couperthwaite
john@pebblepad.co.uk
Pebble Learning
Anne
Crook
a.c.crook@reading.ac.uk
University of Reading
Ufuk
Cullen
ufukalpsahin@yahoo.ca
Plymouth University / Greenwich School of Management
Mary
Davis
marydavis@brookes.ac.uk
Oxford Brookes University
Peter
Day
peter.day14@btopenworld.com
University of Wolverhampton
Leanne
de Main
aa2796@coventry.ac.uk
Coventry University
John
Delany
john.delany@cpit.ac.nz
Christchurch Polytechnic Institute of Technology
Philip
Denton
p.denton@ljmu.ac.uk
Liverpool John Moores University
John
Dermo
j.dermo@bradford.ac.uk
University of Bradford
Luc
Desautels
luc.desautels@collanaud.qc.ca
Cégep régional de Lanaudière à L'Assomption
Luke
Desforges
shu@claritytm.co.uk
Sheffield Hallam University
Blazenka
Divjak
bdivjak@foi.hr
University of Zagreb
Martin
Dixon
martin.dixon@staffs.ac.uk
Staffordshire University
Suleiman
Erateb
Suleiman.Erateb@cuc.coventry.ac.uk
Coventry University College
Caroline
Essex
c.essex@ucl.ac.uk
UCL
Jessica
Evans
jessica.evans@open.ac.uk
The Open University
Carl
Evans
cevans@marjon.ac.uk
University of St Mark & St John
Anis
Fatima
aneezfatima@gmail.com
Coventry University
Jenny
Fisher
j.fisher@mmu.ac.uk
Manchester Metropolitan University
Karen
Ford
k.ford@sheffield.ac.uk
University of Sheffield
Rachel
Forsyth
r.m.forsyth@mmu.ac.uk
Manchester Metropolitan University
Ros
Geldart
R.Geldart@leedsbeckett.ac.uk
Leeds Beckett University
Fiona
Gilbert
f.gilbert@brookes.ac.uk
Oxford Brookes University
Irene
Glendinning
csx128@coventry.ac.uk
Coventry University
Margaret
Gough
ab8477@coventry.ac.uk
Coventry University College
Linda
Graham
l.graham@unn.ac.uk
Northumbria University
Katie
Grant
katiemarygrant@gmail.com
Royal Literary Fund
Jonathan
Green
j.r.green@bham.ac.uk
University of Birmingham
Victor
Guillen
victor.guillensolano@student.shu.ac.uk
Sheffield Hallam University
151
Sarah
Hamilton
sarahhamilton@bpp.com
BPP University
Cecilia
Hannigan-Davies
cse@cardiffmet.ac.uk
Cardiff Metropolitan University
Nuala
Harding
nharding@ait.ie
Athlone Institute of Technology
Janet
Haresnape
Janet.Haresnape@open.ac.uk
Open University
Christie
Harner
christie.harner@ncl.ac.uk
Newcastle University
Anton
Havnes
anton.havnes@hioa.no
Oslo and Akershus University College of Applied Sciences
Jane
Headley
jheadley@harper-adams.ac.uk
Harper Adams University
Stuart
Hepplestone
s.j.hepplestone@shu.ac.uk
Sheffield Hallam University
Joanne
Hooker
joanne.hooker@solent.ac.uk
Southampton Solent University
Gwyneth
Hughes
g.hughes@ioe.ac.uk
Institute of Education, London
Tim
Hunt
T.J.Hunt@open.ac.uk
The Open University
Donna
Hurford
dhu@sdu.dk
Centre for Teaching and Learning
Mark
Huxham
M.Huxham@napier.ac.uk
Edinburgh Napier University
Natasha
Jankowski
njankow2@illinois.edu
University of Illinois Urbana-Champaign
Tansy
Jessop
Tansy.Jessop@winchester.ac.uk
University of Winchester
Kamilah Jane
Jones
Jane.Jones@winchester.ac.uk
University of Winchester
Christine
Jordan
C.J.Jordan@hud.ac.uk
University of Huddersfield
Sally
Jordan
sally.jordan@open.ac.uk
The Open University
Gordon
Joughin
g.joughin@bigpond.com
Higher Education Consultant
Harish
Jyawali
harish.jyawali@gsm.org.uk
GSM London
Tala
Kasim
ab7992@cuc.coventry.ac.uk
Coventry University College
Linda
Keesing-Styles
lkeesingstyles@unitec.ac.nz
Unitec Institute of Technology
Sarah
King
s.king.2@bham.ac.uk
University of Birmingham
John
Knight
john.knight@bucks.ac.uk
Bucks New University
Ruth
Larsen
r.larsen@derby.ac.uk
University of Derby
Kim
Leggett
kim.leggett@cengage.com
Cengage Learning EMEA Ltd
Neil
Lent
n.lent@ed.ac.uk
University of Edinburgh
Rachel
Lindfield
r.linfield@leedsbeckett.ac.uk
Leeds Beckett University
Kate
Litherland
k.litherland@chester.ac.uk
University of Chester
Andy
Lloyd
lloyda@cardiff.ac.uk
Cardiff University
Phil
Long
p.long@uea.ac.uk
University of East Anglia
Cecilia
Lowe
cecilia.lowe@york.ac.uk
University of York
Mats
Lundström
mats.lundstrom@mah.se
Malmö University
Thi Hong Gam
Luong
gamluong2012@yahoo.com
Southern Cross University
Jon
Lyons
jlyons@aubg.bg
American University in Bulgaria
Janet
Macaulay
janet.macaulay@monash.edu
Monash University
Janis
MacCallum
j.maccallum@napier.ac.uk
Edinburgh Napier University
Garry
Maguire
gmaguire@brookes.ac.uk
Oxford Brookes University
Niusa
Marigheto
n.a.marigheto@open.ac.uk
The Open University
Duncan
Marson
D.W.Marson@derby.ac.uk
University of Derby
Hazel
Marzetti
hazel.marzetti@ed.ac.uk
University of Edinburgh
152
Teresa
McConlogue
t.mcconlogue@ucl.ac.uk
UCL
Mike
McCormack
m.mccormack@ljmu.ac.uk
Liverpool John Moores University
Richard
McManus
richard.mcmanus@canterbury.ac.uk
Canterbury Christ Church University
Nicky
Meer
n.meer@cumbria.ac.uk
University of Cumbria
Beverley
Merricks
b.a.merricks@bham.ac.uk
University of Birmingham
Eddie
Mighten
e.mighten@shu.ac.uk
Sheffield Hallam
Melania
Milecka Forrest
ab3842@coventry.ac.uk
Coventry University College
Sally
Mitchell
s.mitchell@qmul.ac.uk
Queen Mary University of London
Erica
Morris
dr.erica.morris@gmail.com
University of East Anglia
Nathalie
Morris
nathalie.morris@pearson.com
Pearson
Dave
Morrison
david.talbot@glasgow.ac.uk
University of Glasgow
Claire
Moscrop
claire.moscrop@edgehill.ac.uk
Edge Hill University
John
Mullen
john.mullen@sunderland.ac.uk
University of Sunderland
Trish
Murray
p.b.murray@sheffield.ac.uk
University of Sheffield
Gwenda
Mynott
g.j.mynott@ljmu.ac.uk
Liverpool John Moores University
Phil
Newton
p.newton@swansea.ac.uk
Swansea University
Elizabeth
Noonan
elizabeth.noonan@teachingandlearning.ie
National Forum for the Enhancement of Teaching & Learning in Higher Education
Stephen
Nutbrown
psxsn6@nottingham.ac.uk
University of Nottingham
Berry
O'Donovan
bodonovan@brookes.ac.uk
Oxford Brookes University
Paolo
Oprandi
paolo@sussex.ac.uk
University of Sussex
Lisa
O'Regan
lisa.oregan@nuim.ie
Maynooth University
Jennie
Osborn
jennie.osborn@heacademy.ac.uk
The Higher Education Academy
Sue
Palmer-Conn
s.e.palmer-conn@ljmu.ac.uk
LJMU
Ernesto
Panadero
ernesto.panadero@uam.es
Universidad Autónoma de Madrid
Michael
Parker
m.parker@surrey.ac.uk
University of Surrey
Serafina
Pastore
serafina.pastore@uniba.it
University of Bari
Monika
Pazio
m.pazio@gre.ac.uk
University of Greenwich
Monica
Pentassuglia
monica.pentassuglia@gmail.com
University of Verona
Edd
Pitt
e.pitt@kent.ac.uk
University of Kent
Margaret
Price
meprice@brookes.ac.uk
Oxford Brookes University
Julian
Priddle
julian.priddle@anglia.ac.uk
Anglia Ruskin University
Lee
Pridmore
r.thompson@derby.ac.uk
University of Derby
Robert
Prince
robert.prince@uct.ac.za
University of Cape Town
Anna
Proudfoot
anna.proudfoot@open.ac.uk
The Open University
Nick
Purkis
nick.purkis@winchester.ac.uk
The University of Winchester
Emma
Purnell
emma.purnell@plymouth.ac.uk
Plymouth University
Nicola
Reimann
Nicola.reimann@durham.ac.uk
University of Durham
Carolyn
Richardson
c.richardson4@bradford.ac.uk
University of Bradford
Karen
Robins
k.robins@herts.ac.uk
University of Hertfordshire
Francesca
Robinson
francesca.robinson@rhul.ac.uk
RHUL
Ann
Rogerson
annr@uow.edu.au
University of Wollongong
Elizabeth
Ruiz Esparza Barajas
elruiz@guaymas.uson.mx
Universidad de Sonora
153
Diane
Rushton
d.rushton@shu.ac.uk
Sheffield Hallam University
Daniel
Russell
djrussell@kingston.ac.uk
Kingston University
Ian
Sadler
i.sadler@yorksj.ac.uk
York St John University
Kay
Sambell
kay.sambell@northumbria.ac.uk
Northumbria University
Nicola
Savvides
nicola.savvides@kcl.ac.uk
King's College London
Lee Huang
Seng
leehuang@sp.edu.sg
Singapore Polytechnic
Susan
Sheehan
a.gamble@hud.ac.uk
University of Huddersfield
Eliot
Shrimpton
eliot.shrimpton@gsmd.ac.uk
Guildhall School of Music & Drama
Stefanie
Sinclair
stefanie.sinclair@open.ac.uk
The Open University
Dorothy
Spiller
dorothy@waikato.ac.nz
University of Waikato
June
Starkey
june.starkey@utoronto.ca
Ontario Institute for Studies in Education of the University of Toronto
Robert
Stead
r.stead@herts.ac.uk
Hertfordshire Business School
Anna
Steen-Utheim
anna.steen-utheim@bi.no
BI Norwegian Business School
Sandy
Stockwell
sandy.stockwell@winchester.ac.uk
University of Winchester
Karin
Stolpe
karin.stolpe@liu.se
Linköping University
Marie
Stowell
m.stowell@worc.ac.uk
University of Worcester
Kristen
Sullivan
kristenmareesullivan@gmail.com
Shimonoseki City University
Ruth
Sutcliffe
r.e.sutcliffe@leedsbeckett.ac.uk
Leeds Beckett University
Paul
Sutton
rbarfoot@marjon.ac.uk
University of St Mark & St John
Katie
Szkornik
k.szkornik@keele.ac.uk
Keele University
Maddalena
Taras
maddalena.taras@sunderland.ac.uk
University of Sunderland
Dietmar
Tatzl
dietmar.tatzl@fh-joanneum.at
FH JOANNEUM University of Applied Sciences
Bryan
Taylor
bryan.taylor@kcl.ac.uk
King's College London
Zach
Thompson
zach.thompson@gsm.org.uk
GSM London
Olu
Tokode
ac0221@coventry.ac.uk
Coventry University College
Carmen
Tomas
carmen.tomas@nottingham.ac.uk
University of Nottingham
Laure
Tordjmann
ltordjmann@speedwellsoftware.com
Speedwell Software
Nhan
Tran
n.tran@ncl.ac.uk
Newcastle University
Chris
Trevitt
nranjit@brookes.ac.uk
Australian National University
Gillian
Ulph
jackie.horricks@manchester.ac.uk
University of Manchester
Remko
van der Lei
r.r.van.der.lei@pl.hanze.nl
Hanze University of Applied Sciences
Mira
Vogel
m.vogel@ucl.ac.uk
UCL
Sharon
Waller
ania.norman@anglia.ac.uk
Anglia Ruskin University
Elaine
Walsh
elaine.walsh@dcu.ie
Dublin City University
Imad
Waragh
imad.waragh@research.sunderland.ac.uk
University of Sunderland
Hilary
Wason
h.wason@kingston.ac.uk
Kingston University
Rebecca
Westrup
r.westrup@uea.ac.uk
University of East Anglia
154
Diane
Westwood
diane.westwood@sunderland.ac.uk
University of Sunderland
Jayne
Whistance
jayne.whistance@solent.ac.uk
Southampton Solent University
Ian
Whitehead
i.whitehead@derby.ac.uk
University of Derby
Emma
Whitt
emma.whitt@nottingham.ac.uk
The University of Nottingham
Kimberly
Wilder
a.akcan@napier.ac.uk
Edinburgh Napier University
Peter
Wildschut
p.j.wildschut@hr.nl
Hogeschool Rotterdam
Matthew
Williamson
m.williamson@qmul.ac.uk
Queen Mary University of London
Adam
Wilson
awilson70@qub.ac.uk
Queen's University Belfast
Naomi
Winstone
n.winstone@surrey.ac.uk
University of Surrey
Harvey
Woolf
harvey.w@blueyonder.co.uk
University of Worcester
Alan
Wright
awright@uwindsor.ca
University of Windsor
David
Wright
d.j.wright@mmu.ac.uk
Manchester Metropolitan University
Jiming
Zhou
jmzhou@hku.hk
The University of Hong Kong
155
etc.venues: Floor Plans
etc.venues: Floor Plan, Second Floor
156
157
Notes
158
159