Capture
Volume 5, Spring 2017
Conversations about pedagogy and teaching underpinned by research enquiry
Contents

Preface
Prof. Elizabeth Stuart - First Deputy Vice Chancellor

Welcome from the Editors
Dr Alison James & Dr Stuart Sims - Editors

Our assessment and feedback conversation - an update
Dr Alison James - Acting Director of Academic Quality Development

Assessment that enhances the student learning experience: measuring the effectiveness of the TESTA approach at Winchester
Juliet Williams - Senior Researcher in Learning & Teaching

Using a Partnership Approach through Student Fellows Projects as an Intervention to Assessment and Feedback
Cassie Shaw - Research Officer, Student Engagement & Dr Stuart Sims - Head of Student Engagement
Delia Scalco - Student Fellow (Criminology & Psychology)

Alternative explanations of high and low module average marks
Prof. Graham Gibbs

Interview about Assessment and Technology Enhanced Learning
Matt Elphick - TEL Research Officer

Engaging with Feedback for Learning

What are the links between assessment and creativity? With specific reference to practical assessment in the Drama programme.
Samuel Chivers - Vice President, Education, Winchester Student Union

Programme Focused Assessment
Paul Jennings - Head of Accounting & Investment; Julia Osgerby - Senior Lecturer in Accounting; Alison Bonathan - Senior Lecturer in Accounting

Myth busting folklore around assessment
Nicolette Connon - Quality Officer (Regulations & Policies)

Advert for Capture Volume Six - 'The Lecture'
Preface

We are delighted to relaunch the University of Winchester's in-house journal on learning, teaching and pedagogic research in its new online format. This particular issue of Capture is dedicated to the institutional conversation that has been taking place this year on aspects of assessment and feedback. Assessment and feedback are key elements of pedagogy and we need to be reflecting constantly on their effectiveness. Some of the key themes of our conversation are encapsulated in the contributions included here and provide a platform to consolidate and extend the good practices already in use. This issue will also add to the resources that are freely available to, and created by, University staff to support our aspirations for excellent assessment as well as learning and teaching.

Prof. Elizabeth Stuart, First Deputy Vice Chancellor

Welcome from the editors

Welcome to our fifth issue of Capture, focussing on assessment and feedback – the challenges, the myths and the possibilities for innovative practice. Our contents this Spring are broad ranging and feature different voices: academic, manager, researcher, administrator and student. All of these support our joint endeavours to offer students a fair, purposeful, appropriate and developmental assessment experience.

As you can see from the Contents there is a review of the impact of TESTA (referred to in conversation recently by a well-respected colleague from another university as "the only game in town"); a collation of food for thought from the various assessment events held this year as part of our institutional conversation; a student view of feedback; an update on programmatic assessment; some lively myth-busting; a summary of Student Fellows' projects on assessment; technological perspectives and an excellent contribution from Professor Graham Gibbs on the dilemmas of achieving the right balance when implementing standards. These contributions touch on just a few of the overarching topics that demand attention, and many more can be found within the articles provided. If you want to explore any of these more deeply please contact the authors directly, or to find out how to develop your own practice please feel free to contact the Learning & Teaching Development team here at Winchester. You will also notice at the end of the issue there is an invitation to contribute to our next edition, due out in Autumn 2017, which will address another major conversation we have been conducting at the University this year. This focuses on the lecture – what is it? What's it for? Where does it go wrong? How can we do it well? And when doing it well, how can we ensure it remains a valid and motivating part of the student's experience? If you would like to offer a contribution to Issue 6 on any aspect of this debate please get in touch with us. Finally, we hope you will find this issue a worthwhile read. If you do, we'd love to hear from you – if you don't, please tell us what we missed, or what we can think about doing differently in our forthcoming editions. We also welcome ideas and suggestions about topics and themes for the future.
Best wishes,
The Editors
Dr Alison James, Acting Director of Academic Quality & Development
Dr Stuart Sims, Head of Student Engagement, Academic Quality & Development
capture@winchester.ac.uk
Our assessment and feedback conversation - an update
Dr Alison James, Acting Director of Academic Quality Development
Alison.James@winchester.ac.uk

This article shares a number of impressions which have arisen from the institutional conversation on assessment and feedback we are holding at Winchester this year. So far this has taken the form of locally held discussions and institutional events, Learning Lunches, the collation and creation of resources, conducting of polls and production of this issue of Capture. It will culminate in the hosting of our Annual Learning and Teaching Day on June 8th, when we will debate philosophies and practices and celebrate successes with innovative and creative forms of assessment. Assessment and feedback are two things that we as academics are deeply invested in. They form part of the repertoire of teaching, learning and research activities which
contribute to the shape of our professional and specialist identities. They are not only important to us, but to students because "Assessment defines what students regard as important, how they spend their time and how they come to see themselves as students and graduates" (Brown and Knight, 1994, p.12). Not only that, but Sambell (2016) paraphrases Mann (2001), who sees assessment as something that can provoke alienation, not engagement. Mann refers to the feelings of 'compliance, powerlessness and subservience' that students can experience with regard to assessment. Some of our conversation this year has been about how we can reduce any danger of these negative associations. As we know – not least thanks to the NSS – assessment and feedback are the most challenging aspects of academic practice to get right. We have co-created the TESTA process, which is already a sturdy means of transforming thinking about assessment, used in more than 50 UK universities. The following eight impressions are ones that may already have been expressed during TESTA workshops, or through other channels. They
may look rather familiar, which reminds us that our consideration of assessment cannot remain static and requires regular review. The first impression is:
We are sometimes confused about what formative assessment is and how to increase opportunities to use it in conjunction with summative assessment. Journalism and many of the 'production' courses (media, film etc.) already provide plenty of built-in formative work. These include weekly production tasks with an ongoing oral feedback process throughout the year which informs their summative assignment tasks. (Students were very positive about this process in their TESTA focus group.) TESTA already offers programmes the means of removing excessive summative or formal assessments and creating a more nuanced and developmental model of assessment within a programme. However, our conversation reveals that competing interpretations exist as to what we mean by it. For TESTA purposes formative assessment is defined as that which is not graded but is required and solicits feedback; however, it seems we have not spelled out our University stance sufficiently in our regulations. As a result our common understanding is somewhat fuzzy. Often the boundaries between formative (developmental and ungraded activity) and summative (graded and counting towards progression or qualification) are indistinct. This suggests that greater clarity in terms of definition and application will be useful. (It also raises the question as to what other terms might seem obvious in meaning to us locally, but which would benefit from wider definition.) The production of an assessment glossary within a supporting handbook of resources on common topics of discussion and interest is one way of responding to this which has been adopted in other universities. Our own Assessment Handbook will be produced by July 2017; it is not about increasing regulation, but rather about offering resources which can help address thorny questions of language, or wicked problems. These might include examples of assessing group work, designing effective marking systems or innovative/effective use of formative assessment. Formative assessment also ties in very closely with impression 2:
We seem, overall, to depend more on an assessment of learning model, rather than an assessment for learning (AfL) approach, or a mixed economy. A key feature of AfL is a greater emphasis on low stakes, formative assessment, in which students have maximum chance to build their capabilities with little risk of failure or adverse consequence. This is instead of reliance on high stakes summative assessment as the measurement of success, on which their academic futures rely entirely. Furthermore, AfL is associated with moves to ensure that students see assessment as something in which they are engaged, rather than something that is "done to" them. The six features of AfL which were celebrated in the work of the eponymously named Centre for Excellence in Teaching and Learning (CETL) at the University of Northumbria included an emphasis on 'authenticity and complexity in the content and methods of assessment rather than reproduction of knowledge and reductive measurement' (assessment of learning) and the sparing use of summative assessment. This differed from using assessment as the main spur for learning. They further stressed the development of abilities to direct one's own learning, rich feedback opportunities and multiple opportunities for students to engage in 'tasks that develop and demonstrate their learning'. Finally, Sambell and others (2013) stress that AfL is essential to how high standards of learning can be achieved through assessment. At Winchester blogging is being used effectively on a number of programmes to facilitate an AfL approach, even if some of these cases are for summative assessment. On the MA Learning & Teaching blogging offers a formative means for students to explore their thinking on key texts and practices. As the CETLs were at their height in the middle to end of the Noughties (2006-11) many of the principles they carved out may now seem very familiar. However they all continue to be valid points of critical evaluation of practice. Some challenges just don't go away. One of these relates to impression 3:
Students don’t always make proactive use of the feedback they receive. As part of our exploration of feedback practices Dr Naomi Winstone from the University of Surrey gave a Learning Lunch
talk in January 2017 on her DEFT toolkit, commissioned by the HEA. (The fact that over 40 people attended suggested a keen interest in finding alternative ways of encouraging students to investigate their feedback.) Delia Scalco describes it in greater detail in her Student Fellow report in this issue, based on her investigation of staff/student perceptions of feedback at Winchester. This shows that the challenge with feedback perceived by students is not necessarily the quantity and quality of feedback (although this can come into play); it is knowing how to recognise it and how best to act on it. We may feel this is entirely self-evident; however, Winstone's research and student responses suggest otherwise. Consider the following extract from feedback to a student (not at Winchester): "the full body of work shows an intelligent approach to the project brief…your technical specs are particularly strong so I would consider continuing with this development into your final year. The project would have benefited from more risk taking in the sampling to explore a more conceptual and even more unique approach"
This is intended to be positive and encouraging and yet contains potential traps in every sentence if a student does not have a space in which to follow it up. What do 'intelligent', 'strong', 'conceptual' and 'unique' mean here? Would they know exactly what kind of steer they had been given or how to develop further? This is not simply a matter of needing step-by-step prescriptive instructions but rather indicates the extent to which language may look innocuous but be obscure.
We are not always brave about trying out innovative assessment but play safe with traditional or familiar assessment types. In a constructively aligned curriculum (Biggs, 1999) programme design begins with the end in mind, to steal Stephen Covey’s famous mantra, i.e. with the desired outcomes overall and for each module. The author of the latter then works back from the assessment type and learning objectives to identify appropriate content and learning activities. However, quite often the process has been conducted in reverse, starting rather with content and level and then wondering about
assessment as a final step. Reasons of expediency may also come into the frame, rather than purely pedagogic intent – such as how to assess large numbers of submissions with limited time and resource. We need to remind ourselves, when designing assessment, of its intended purpose and why we want to assess in that chosen way. For some of our disciplines the essay is still the preferred assessment format; while there are plenty of arguments as to why the essay is still valid we may also ruminate as to why it persists as a staple. Does tradition hold sway? Are we concerned that it remains the most academically credible or best method of finding out what a student knows? In what spaces do we challenge our choice of assessment? To what extent can the essay be deemed to be an 'authentic' mode of assessment (relating to real life, diverse and work-relevant, as seen in impression 2)? These kinds of ponderings lead us to reflect on how we can be more innovative in our assessment choices. For Erica Morris, writing in her presentation on innovative assessment here, this may mean authentically assessing, or using technology, or changing the nature of student engagement and participation in
some way. Stephen Brookfield and I (2014) have also emphasised that something can be innovative in one discipline which may be entirely 'normal' in another. Non-textual, 3D or multimedia formats may be the norm in the arts, but unusual in other fields. A radical example of this can be the dancing of a PhD, which Stephen and I also refer to. In other contexts it might be unusual but gentler in form, such as Linda Watts' (2017, pp. 98-117) integration of 'contemplative commentary' into assessment of the caring professions, rooted in mindfulness practices. A third example might be Nina Johnson's research into the use of paper finger labyrinths to reduce stress and anxiety at assessment time (in Sellers and Moss, 2016). Whether we use traditional or innovative approaches to assessment it is often the packaging of these that is problematic for students, as seen in impression 5:
Our assessment processes, despite our best efforts, are not always clear and transparent. George Bernard Shaw is often quoted as saying "The single biggest problem in communication is the illusion that it has taken
place". This is often lamented by lecturers who have instructed their students or their colleagues an infinite number of times as to where to hand work in or how to do X. And yet the information does not stick. A similar challenge presents with some assessment briefs, which we as tutors are convinced are limpid and self-explanatory, but by which the students are baffled. (The extent of their confusion is not always immediately apparent, however, and sometimes becomes clear too late.) Over ten years ago Francis and Le Marquand (2006) wrote of their four-year action research project to demystify the language of assessment in an FE college. In 2008 they ran a series of Language of Assessment workshops with undergraduates at the London College of Fashion with the same goal. They drew on visual metaphors, and engaged students in creative and discursive activities designed to clarify information and instruction. They wrote: 'By observing students, and also through the later discussion, it is very clear that there is considerable variation in student and staff understanding of a whole sentence or even a single word. Students' feedback has been very similar each year – key words that recur are "confused, don't understand, unfair".' These feelings of confusion and unfairness can persist in students today. Teaching postgraduates as well as undergraduates makes clear that all students can experience anxiety, muddleheadedness or panic about assessment requirements. Through inviting students to mark up and voice instances of foggy language, opaque guidance, and contradictory or ambiguous advice, they became more confident and clearer as to what was needed. Visual embodiments of principles, such as the integration of research, practice and academic convention through the interweaving of wool strands and use of colour and material, enabled them to grasp the module requirements and develop their own capabilities. LEGO® workshops conducted at the University of the Arts London (Barton and James, 2017) also revealed in 3D form the diverse ways students can get stuck on the road to assessment. A creative and innovative approach to assessment might be as much about adopting appropriate practices to 'unstick' students and improve confidence in addressing what is required as it is about the format of the task itself. Whatever means you choose, it is about placing the power in the hands of the student to dig into the meanings and requirements of assessment with lecturers as their guides, not just to receive monologues about the contents of the module handbook.
Our design and use of assessment criteria and other kinds of standard/performance statements can befuddle, rather than bolster, understanding. Students often feel that in addition to accomplishing the assessment task they are also required to unpick a whole tangle of measurements against which the piece of work is considered. Our intention in providing various criteria is to be as specific as possible in providing guidance. However, our use of multiple guidance frameworks in one assessment can result in an unholy alliance between learning outcomes, assessment criteria, grade descriptors, module content, support material and so on. As Gibbs identified in no. 23 of his 53 Powerful Ideas, there is such a thing as too much information. This is never more true than when trying to correlate multiple standards and requirements in a single assignment, which for students is akin to playing 'rub your tummy, pat your head' with complex subject matter thrown in. In December 2016 a straw poll of approaches to marking at the University, which drew 35 responses, revealed multiple practices with regard to the use of marking criteria, as shown in the table below.
While a variety of assessment modes can be energising and 'different' does not equate to 'wrong', this raises interesting questions as to how students know what criteria are at play in each of these situations. Are they given opportunities to unpack, explore and question the operational underpinnings of assessment? Are they given the space to explain what they think they are being asked to do, so that programme teams can ascertain the extent to which their intentions and student understanding coincide? Or are the students naturally so adept at dealing with the assessment machinery that none of this is a problem and they all do well? In addition to this table, explorations of programme documentation and conversations with colleagues reveal that sometimes we use diverse terms – marking criteria, assessment criteria, grading descriptors – to mean slightly different things. A student who has not been fully briefed in their subtle variances may take them to be one and the same. Moreover, where all of these things are being used alongside learning outcomes some interesting dissonances can be spotted in the ways one may slightly grate on another, or the different measures not fully mesh. The grading descriptors for levels 3-6 in the assessment regulations in the 40-49% bandwidth refer to work that is 'adequate', while a programme may include the word 'poor' in its own variation of marking criteria in that band. The slippage between those words can be misleading. The exploration of their use and their interrelationships by team members together can reveal interesting but often hidden preconceptions or assumptions that may also indicate diverse interpretations. The language of such criteria can be misleading in its simplicity – what do 'sound', 'solid', 'satisfactory' and 'strong' all mean to individual markers? How is the term 'research' being interpreted? How is originality defined in a first year essay, as opposed to a doctoral piece? Are some criteria more important than others, even if a numerical percentage or value is not being explicitly attached? Are there any non-negotiables? Do all markers know what they are? (Several years of conducting workshops in other institutions to achieve some common understanding of language and level, as described in the next section, reveal how often they don't.) How far are students 'in' on
discussions and clarifications of any aspects of these? Sometimes it is not just students who are not ‘in’ on a discussion; as assessing teams we can often have particular and personal interpretations of what is required or agreed when marking an assignment. This leads us to consider Impression 7:
We as teams and individuals don't always explore our approaches to marking together, but when we do we find the process rewarding and revealing. As indicated in the reference to workshops earlier, it is common sector practice to adopt approaches to benchmarking prior to assessment. A number of these have also run at Winchester this year. These are particularly useful in teams with numerous different markers, some of whom may be hourly paid or have a more distant relationship with the programme than full time staff. At the University of the Arts London, where I conducted these over several years, they helped establish common understandings of what might constitute each different grade and were a mechanism for airing the many different investments that markers bring to both their assessment of work and the defence of their judgements. Two cases where these workshops became biannual practices (if not more frequent) were in Cultural and Historical Studies and in Business and Management. A team of up to 30 markers on a module would each grade three pieces of work and write up their feedback – typically the sample would include a good, an average and an ugly one. They would then come together in a meeting and each piece (anonymised) would be taken in turn. Tutors would write their marks on post-its and cluster them on a wall, creating an instant, visual depiction of what could sometimes be a wide range of grades. The challenge then was to discuss openly (and often – ahem – energetically) the thinking behind the decision making. This revealed diverse interpretations of marking criteria, assumptions of level, individual understandings of what terms like 'research' stood for in the context of that module, expectations of student capacity, and marker preference (from strict and penalising to generous and seeking to reward). Such variety
does not call into question the academic and specialist capabilities of the individuals marking, but does reveal the gaps that can exist between how one person approaches marking compared to another. Sometimes marker interpretations of criteria and learning outcomes differ, and at other times discussion reveals something else at work which is being brought into play: "It's their first assignment so I didn't want to be harsh"; "It's their research proposal, and I wanted to be encouraging"; "they are third years and they should know by now"; "X made me really irritated". After a suitably lively and robust discussion of each student's grades and feedback a final grade and commentary is agreed for each. With this experience fresh in their minds tutors then mark their own batch of assignments and, once complete, moderation and sampling take place. Feedback from participants is that the activity offers an invaluable space for an academic conversation about assessment which also enables markers to appraise and monitor their own marking practices going forward. Just as we can give consideration to how we mark as teams and individuals, so perhaps it is wise to stand back from the assessment
landscape and think about the enormous variation in our marking models. This taps into a broader debate conducted recently on the National Teaching Fellow discussion list as to the complications of, and assumptions associated with, these. This explored tensions between using the whole and a specified range of marks, the merits of letters versus numbers, and the strangeness of dividing 100% into 40% for fail, a bunch of bandwidths for classifications in the middle and then 70% onwards for excellent work. In addition, many universities are adopting the Grade Point Average (GPA) alongside the existing qualification/grading structure; Winchester will be following this same path from 17/18, with briefings being held in March. It is fair to say, however, at this stage that what this looks like and how it will be understood and used by students is all uncharted territory. Much more conversation on this subject is going to be needed. The straw poll referred to earlier also indicated a wide variety of practices in marking; these included the use of categorical marking (where specified numbers are identified as available within the marking range, but not all), using the whole range of
marks, providing feedback but no marks, and many others. An interesting feature was that 31 respondents said they used the full range of marks and 8 also said they used categorical marking; given that the poll drew 35 responses, this means that at least four respondents do both. This prompts the question as to whether both practices are adopted on the same module (which seems contradictory) or on different modules within the same course. It would further suggest the need to ensure that students understand when one model of marking is in play as opposed to another.
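To make the distinction between these two marking models concrete, here is a minimal illustrative sketch in Python. The permitted step marks below are hypothetical and not drawn from any Winchester scheme: categorical marking restricts the marks that can be awarded to a fixed set of values, while full-range marking allows any whole percentage point.

    # Illustrative sketch only: the step marks below are hypothetical examples,
    # not taken from any University of Winchester marking scheme.
    PERMITTED_STEP_MARKS = [0, 35, 42, 45, 48, 52, 55, 58, 62, 65, 68, 75, 85, 100]

    def categorical_mark(raw_judgement: float) -> int:
        """Categorical (step) marking: award the nearest permitted step mark."""
        return min(PERMITTED_STEP_MARKS, key=lambda step: abs(step - raw_judgement))

    def full_range_mark(raw_judgement: float) -> int:
        """Full-range marking: any whole percentage from 0 to 100 may be awarded."""
        return max(0, min(100, round(raw_judgement)))

    for raw in (57.0, 63.0, 71.0):
        print(raw, "->", full_range_mark(raw), "(full range) /",
              categorical_mark(raw), "(categorical)")

The sketch simply snaps a marker's raw judgement to the nearest permitted step; real step-marking schemes may instead ask markers to choose directly from the permitted set.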
We need to do more to reassure students about fairness and parity in marking.
Matters of parity extend far beyond the programme and into sector territory in terms of the comparison of degree outcomes in similar subject areas. Graham Gibbs' paper, which we include with his permission in this issue, discusses the challenges of trying to achieve parity between departments or universities. Closer to home, staff and students continue to be exercised as to how to ensure that assessment is fairly constructed and conducted. Being an institution driven by its values means that personalised marking has come to illustrate the commitment that individuals matter and the pledge to support student development. However, concerns about ensuring that no student is discriminated against for whatever reason have led to requests for anonymous marking to be reconsidered. This has been discussed at Senate Academic Development Committee and Student Academic Council, and papers on the topic and the extensive literature which accompanies it present a multitude of evidence and argument for and against.
The debate as to whether or not anonymous marking is feasible or desirable will not be continued here as it is already articulated in the SADC paper AD182 March 2017. What it does raise in terms of assessment is a number of issues. One relates to the umbrella term of inclusion: whether or not assignments offer scope for students to draw on their own cultural capital as part of the learning experience, or about providing accommodated assessment in the context of disability or specific learning difficulty. Sambell (2016, 2) also advocates 'higher levels of choice and flexibility in terms of
negotiating what is assessed and when', although such an aspiration is often brought low by logistical constraints (or the perception thereof). This may be particularly important for student parents and carers who find that assessment practices (and curriculum organisation) do not take into consideration their needs. In other examples fairness and non-discrimination may be about the extent to which we make our assessment practices explicit to students, that we honour our own standards of unbiased marking and recognise the human frailties that can creep into assessing work (tiredness, hunger, stress, workload, expectations, lack of time and the 'horns and haloes' effects of the order of marking – the mediocre piece that seems so much better after marking four lamentable ones, and so on). It amounts once more to the need to involve students in dialogue not just about the 'whats' but the 'hows' and 'whys' of assessment. We assume sometimes that because they know where to find the regulations they will inform themselves of our moderation processes – or perhaps we only explain these when we have to, i.e. when a student is already upset with their mark or by potential or perceived unfair treatment. This suggests that two things need to take place: that we invest much more energy in ensuring students understand how marking and moderation affect and protect them, and that renewed energy is directed at exposing malpractice or dangers of bias, prejudice or power abuse lurking in any assessment practices. To this end a Fairness in Assessment and Feedback Task and Finish Group is being established which will convene in April 2017. While each of these impressions has related to a specific aspect of assessment they all have one thing in common. They are all about how we can bring greater clarification, simplification and differentiation into our assessment practices. Through consideration of each of these aspects, scrutiny of our practice and reference to literature and pedagogic research on assessment we can synthesise certain key principles for effective assessment. These are that:
• assessment should be a matter of deepening understanding, not just a matter of learning how to pass
• assessment is a matter of academic judgement, not simply computation, and over-intricate marking schemes can simply serve to distance staff and students from grasping a true picture of performance on a module
• students must be provided with full and accurate information on all aspects of their assessment
• diverse modes of assessment offer students more than one way of demonstrating knowledge and capability
• principles of inclusion are considered when designing assessment tasks in order to avoid discrimination against learners with protected characteristics
Through this institutional conversation a few things have become apparent, some of which we have already sensed for a while. One of these is that within our assessment culture, as within many other aspects of our academic practice, Winchester is a university where a thousand flowers bloom. We have grown organically and developed or adopted a range of practices in local and disciplinary flowerbeds but have not always had the time or inclination to examine whether the flowers can thrive when grown together, or whether
one planting jars against another. This year's conversation and its emerging impressions and topics for further enquiry show many more conversations and actions are needed at programme, Faculty and University level. Some aspects of this continued discussion need to relate to operational issues and processes (instructions, hand-ins, deadlines) while others (much more interesting) explore the question asked by Boud and Associates (2010) as to how we induct students in assessment cultures and practices. This is not simply a matter of taming the essay format, getting study skills training or teaching students how to play the assessment game. Rather, it is about developing an appreciation of the conventions, content and culture of scholarship in context. Most particularly with regard to this article, it is about the norms, values, conventions and cultural practices of learning and assessing at this university, in this subject. We have only just begun the debate.
References Biggs, J. (1999) Teaching for Quality Learning in HE. SRHE.
Boud and Associates (2010) Assessment 2020: Seven Propositions for Assessment Reform in Higher Education. Sydney: Australian Learning and Teaching Council. Available at http://www.uts.edu.au/sites/default/files/Assessment-2020_propositions_final.pdf
Brown, S. and Knight, P. (1994) Assessing Learners in Higher Education. London: Kogan Page.
Francis, P. and Le Marquand, S. (2006) Finding the I in assessment: reflections on a journey from assessment to evaluation. Unpublished.
James, A. and Brookfield, S. (2014) Engaging Imagination: Helping Students Become Creative and Reflective Thinkers. San Francisco: Jossey-Bass.
Johnson, N. (2016) Finger labyrinth research: tracing a path to resilience, concentration and creativity. Ch. 18, pp. 128-137 in Sellers, J. and Moss, B. (eds) Learning with the Labyrinth: Creating Reflective Space in Higher Education. Palgrave.
Sambell, K., McDowell, L. and Montgomery, C. (2013) Assessment for Learning in Higher Education. Abingdon, Oxon: Routledge.
Sambell, K. (2016) Assessment and feedback in higher education: considerable room for improvement? Student Engagement in Higher Education Journal, Vol. 1, Issue 1, September 2016.
Interview about Assessment and Technology Enhanced Learning

Who are you?
I'm Matt Elphick and I am the University's Technology Enhanced Learning Officer. I work as part of the 'TEL' or Technology Enhanced Learning team, which sits inside Learning and Teaching Development, which itself is part of Academic Quality and Development. In my area, I do a lot of work with mobile devices, particularly iPads, investigating how these devices can be embedded into the delivery of teaching and what impact this has on student creativity, their ability to collaborate with their peers and their digital capability skills.
Could you tell us a bit about TEL at Winchester? Technology is one of the key developmental themes of the university and we very much
believe that technology can and should be used to further students' education. However, technology should never be used just for technology's sake. It shouldn't be a gimmick or used because it's new and shiny. Technology should only be used if there is a real pedagogic benefit to it, if it enhances an aspect of the student learning experience or it solves a problem that exists. We believe in that very much here at Winchester, which is why projects such as the iPilot are looking at enhancing existing practice and Canvas is being used to improve upon the previous system. These are changes very much rooted in pedagogy and with enhancement in mind, and haven't come about simply because the technology exists which makes them possible.
So what projects are running currently out of LTD in this area? As I mentioned, the iPilot is the main project that I am currently involved in. As part of this project the first year students on seven undergraduate programmes have been given an iPad, which is their device to keep, which they use in their lectures and seminars. The staff on these programmes have also been provided with iPads and have been tasked
with embedding the use of the devices into the way that they deliver their teaching. Another project the TEL team has been working on over the last year or so is the full rollout and implementation of Canvas, the University’s new VLE or Virtual Learning Environment. One of the great things about this project, and this goes to show just how important the student voice is, is that it was actually students that suggested we investigate this platform, after a small number of them saw a demonstration of it at an external event. So we’ve brought in Canvas, after an extensive 2 year pilot across about a third of the university, because it’s just a really good platform. We had some issues with the Learning Network where students sometimes found it wasn’t overly reliable and we occasionally had periods where it would be down for maintenance. Canvas is a cloud-based system which means that updates can be rolled out without it needing to be taken offline. Instructure, the company that make and manage Canvas, guarantee an uptime of 99.9% and so it’s always going to be there when our staff and students need it. Canvas also has support that is available 24 hours a day, seven days a
week, 365 days a year, and this can be accessed either through a free-phone number or by using the live-chat facility built into Canvas itself. This means that if students or staff have any issues, are getting an error message, or don't understand how to do something on the system, they can get in touch with someone that can help them, regardless of the day or time. Canvas is a really nice system because it's easy to use, it's very intuitive and it's straightforward to make pages look visually pleasing. You can embed images and videos so pages look enticing, so you're not just faced with a wall of text, or a long list of links to Word documents or PowerPoint presentations.
How can you get started using technology in the classroom to support learning and teaching? I think it’s really important to start by considering why you want to use technology. Is there a particular issue that you’re trying to get around, or are you looking to enhance a certain aspect of your classes? Once you know what you’re trying to achieve you can then go about identifying the right bits of technology to help you. You also need to assess how digitally capable you are. If you’re
someone who is incredibly competent with technology and you use it a lot anyway, you can go in at a much higher level than someone who is not so competent. Depending on how tech-savvy you are, I'd normally recommend that you pick one or two programmes or apps and really get to grips with them, before branching out into other pieces of technology. There is no point trying to over-stretch yourself. If you try and do something that is too advanced for your particular level all that's going to happen is that things will start to go wrong and you're not going to feel confident in using it. Confidence is a big thing in using technology, as you have to be confident enough to stand in front of a class and use it effectively or know how to handle the situation if something goes wrong. That said, there are some things you can do straight away to use technology in the classroom. An Ofcom report in 2015 said that over 90% of 16-24 year olds have an internet-enabled smartphone, so we're looking at the majority of our students having devices which are more powerful than the computer that put man on the moon, and we can use these to our advantage. For example, there are programmes like Socrative and Kahoot!, which are online quiz apps that can be used to test student understanding and are very simple to set up. They introduce a bit of competition because you can set up leaderboards and students can see how well they're doing. The students answer the questions on their smartphones and I've seen this work really well in the university with students getting quite competitive about it and really engaging with the material. It's a fabulous opportunity for staff to then go back through material and say 'ok, most of you didn't get this right so let's go back over that' or 'everyone knew the answer to this particular question, so it's clear that everyone is understanding what this is about and we don't have to spend too much more time on it'. It's a really nice and fun way of engaging students and assessing their understanding of the topics at the same time.
As this is a special edition of Capture about assessment and feedback, can you suggest any ways that people can enhance their assessment practices using technology?
When talking about assessments and technology one of the obvious places to start is with electronic submission and marking. Digital technology, such as platforms like Canvas, means that our students can submit their assignments from anywhere in the world and that staff can mark them without needing to collect physical scripts. We have a fair few students who commute to the university and it saves them having to travel in just to submit their assignments, particularly if they don't have a class on that same day. For staff, it means you don't have lots of papers to carry around, you can do it all from your PC and once you've marked the assessments it's very easy to return the marks and feedback. You can also do things such as record audio feedback for students or take it further by using screen-casting, which is almost like videoing the screen of your computer, so when people watch the video they can see what you are looking at and clicking on. This allows you to almost have an asynchronous tutorial with a student. So you're going through the assignment and you can highlight bits of text and you can talk to the student about why that's a really good passage or why they haven't used that quote in quite the right way. The students then get to hear the tone
of voice, so it’s much more personal and less passive than just a block of text. If they can hear your voice then they know the way in which you’re saying things, so there’s very little chance of misinterpretation. A number of tutors from different programmes have been piloting this and the students seem to enjoy receiving their feedback in this format.
Are there any apps or software that you would particularly recommend for people to use?
All staff and students have access to Office 365, so everyone has access to the latest version of Microsoft Office which, importantly, comes with OneDrive. This is a really important tool because it means your files are all accessible to you no matter where you are, provided you have internet access. So for example, I have access to OneDrive on my phone, so if I'm in a meeting and I need to look up something quickly in a document I've written then I know it's always there and I can get to it easily. This is also a really effective way of collaborating with your peers or colleagues too because you can set up a single document which a number of people can contribute to. These are tools we already all
have access to which more people can and should make use of. With regards to apps for smartphones, the best ones are those that are cross-platform because not all students have the same phone and operating system. If you want to embed technology into your classroom, for example, by using the digital interactive quizzes that I was mentioning earlier, you need to do it in a way that doesn’t exclude a particular group of students that have one phone instead of another. Kahoot! for example, is web-based and accessible through a browser, so if the students have any device with internet access they can access it and participate.
Are there any myths about TEL you’d like to bust? There’s a preconception that the majority of our students are digital natives and therefore know how to use technology straight away. While it’s true that many of them will have grown up using mobile devices and computers, this doesn’t necessarily mean that they know how to use them in a safe and professional manner. I’m used to seeing emails from students that are a single line in
length, without so much as a ‘Hi Matt’ and often not even including their own name. When they’ve finished their degrees and are working, this sort of poor email etiquette is going to give off a bad impression. We need to equip our students with the skills to survive and thrive after university and a big part of this is ensuring that they have the digital capabilities to get on in an ever more digital world. For more information contact Matt.Elphick@winchester.ac.uk
Assessment that enhances the student learning experience: measuring the effectiveness of the TESTA approach at Winchester
Juliet Williams, Senior Researcher in Learning & Teaching
Juliet.Williams@winchester.ac.uk
Introduction The University of Winchester has led the 'Transforming the Experience of Students through Assessment' (TESTA) approach, originally a National Teaching Fellowship Project, since 2009. TESTA considers assessment patterns holistically at a programme level, aiming to improve the student learning experience by better understanding patterns of assessment and developing assessment sequences that foster deeper learning through evidence-based and student-informed research (Jessop & El Hakim, 2013). Assessment is understood to be the central driver of students’ learning (Gillet
and Hammond 2009; Harland et al 2014; Jessop and Maleckar 2014; Price 2011) and yet assessment and feedback satisfaction scores (as measured by the National Student Survey and others) are consistently poor across the sector (Price, 2011). Early TESTA research found that taking a programmatic approach to assessment and feedback design created better sequenced, more logical and better connected assessment and feedback that countered cultures of students ‘dashing off’ modules and a reluctance to engage with feedback (Jessop et al, 2013). In this paper I will examine TESTA’s effectiveness at tackling common issues around assessment and feedback design, drawing upon literature in learning and teaching development as well as data from nine programmes at the University of Winchester who have engaged with the TESTA approach through its embedding in the re-validation process.
Assessment’s place in fostering authentic, deep and transformative learning Literature that examines learning in Higher Education and student engagement in the curriculum often proposes that university study must provide a learning environment
and experience which is transformative and 'authentic'; a learning that shifts away from passive, grade-driven and 'surface' learning environments frequently associated with compulsory and Further Education, to a more active, empowered and deeper learning that requires participation, autonomy, criticality and higher-order thinking (Barnett, 2009; Bovill, 2011; Krause & Coates, 2008; Oblinger, 2007). Oblinger argues that when learning is authentic it puts the focus 'back on the learner in an effort to improve the way students absorb, retain, and transfer knowledge' (Oblinger, 2007: 4), further suggesting that authentic learning is that which engenders agency, helps students connect the dots across multiple strands of learning, is relevant to the world beyond university study, and connects learning to the 'bigger picture'. This notion of 'authentic' learning, then, is closely linked with eliciting student engagement in the curriculum, suggesting that if students are able to 'connect the dots', situate their learning within the 'bigger picture' and have more agency in their study they will be more engaged in and empowered by the learning process (Bovill et al, 2011).
These notions of authenticity have also been applied to concepts of 'good' and 'effective' assessment design. Ashford-Rowe's conceptualisation of authentic assessment places importance on the notion of assessment as a tool for rather than a measure of learning (Ashford-Rowe et al, 2013). The eight critical elements of authentic assessment outlined by Ashford-Rowe, including the facilitation of the safe transfer of knowledge, the requirement of metacognition, that assessment tasks are appropriately designed, and that assessment allows for formative feedback opportunities, all reflect the elements of authentic learning explored above, and again place importance on creating a learning environment that stimulates engagement, deeper learning and autonomy. Furthermore, assessment and feedback are essential foundations in the 'scaffolding' of students' learning experience; supporting students in their academic development and ability to engage with tasks or achieve goals otherwise out of reach (Wass, Harland and Mercer, 2011: 319). Given that assessment and feedback are understood to be the central component of a student's learning experience, that assessment is what drives students' attention, effort and energy,
and inevitably shapes their acquisition of knowledge, their working patterns, their academic and professional development, and their academic ‘success’ (attainment), it is important that we design assessment and feedback in ways that facilitate authentic and transformative learning.
Why assessment and feedback often fail to provide a transformative learning experience for our students: what TESTA findings tell us. Following the introduction and standardisation of modularised degrees in the UK in the 1990s there have been significant shifts in approaches to assessment design across the HE sector, predominantly through the isolation of module content (and allocation of credits), the need to assess learning across multiple and specialised modules, and in turn the significant increase in the number of assessment tasks a student will encounter over the course of their degree. Through TESTA research it is evident that the impact of this compartmentalised learning experience can be detrimental when it comes to students’ ability to engage with and negotiate assessment tasks. High volumes
of summative assessment disconnected by specialised module content, with deadlines habitually clustered at the mid and end points of the semester, inevitably lead to assessment tasks that compete for students' attention and effort. Students who have taken part in TESTA focus groups at the University of Winchester have made the following observations regarding their engagement with assessment tasks:
When I start getting stressed and there's a lot of work to do I don't tend to do the readings.
We can be quite strategic with which lectures we attend… it sometimes feels like I could get away without showing up but still do the assignments, as long as I put the work in in my own time.
I don't know where to start because there’s loads – I can't organise myself so I'm like 'I can't do any of it'.
We have all these assignments and it’s just pushing away the dissertation… a lot of people would rather have less assignments just so we can focus on the dissertation because that makes up a considerable mark on our final grade.
I'll be so focused on an essay for a module that most of the time I won't be focusing on the content that’s being told in a lecture.
We've had loads of deadlines crammed into a 3 week space; we've kind of gone 'ok well we can't really prioritise any more'.
If someone said what did you learn on your degree, I'd basically sum it up as saying I learnt what I needed to learn for assignments; I learnt what I learnt because I needed to complete an assignment, rather than I learnt because I was really interested in the whole thing.
This sample of student feedback on assessment design illustrates the many challenges students face when it comes to high volumes of poorly designed, badly sequenced and often disconnected assessment tasks. It further highlights some of the strategies students use to overcome this challenge: skipping lectures, not engaging with wider reading and research, prioritising some assessment tasks over others, and procrastinating, all of which hinder students’ learning, attainment and development. A study conducted by Harland et al in 2014 also identified these key issues with modularised assessment design. The study found that students were being assessed constantly, leaving them little time to complete formative tasks and wider reading and research. They often missed teaching sessions in order to cope with their assessment workload, were stressed by high volumes of disconnected assessment tasks, and felt they were not achieving the grades they were capable of because they were unable to dedicate equal time and energy to each assignment (Harland et al, 2014: 3). Furthermore, and perhaps most significantly, the study found that the less time and energy
students were able to commit to module assignments, the more assignments were set by lecturers in an attempt to elicit better student engagement in the module. Harland et al dubbed this flawed cycle of assessment design 'the assessment arms race' (Harland et al, 2014), with each lecturer increasing assessment tasks on their module to force students to pay attention. Both Harland et al's study and the TESTA findings indicate that lots of high-stakes, bunched assessment tasks competing with each other for students' attention and effort can lead to strategic, marks-driven, 'surface' learning, and as such do not foster the transformative and authentic learning experience we hope to create for our students. More importantly, what these findings suggest is that a whole programme approach is needed to combat issues, such as the 'arms race', that prevent assessment from scaffolding students' learning, increasing engagement with the curriculum, and creating an environment that engenders authentic and autonomous learning. Universities must therefore invest in adopting approaches to assessment and feedback design that enable students to accomplish
what we expect of them. As Krause and Coates identify, ‘institutions are responsible for creating environments that make learning possible, and that afford opportunities to learn’ (Krause and Coates, 2008: 494), and this must include our design of assessment and the quality and quantity of feedback we give.
What impact has the TESTA approach had in our re-thinking and re-shaping of assessment and feedback? Measuring the impact and effectiveness of the TESTA approach in combatting the common issues surrounding assessment and feedback design is twofold. Firstly, we can measure TESTA’s impact quantitatively, comparing ‘before’ and ‘after’ data which measures the number of assessment tasks a student may encounter during their degree, as well as breaking this down to measure the number of summative and formative tasks and the number of different varieties of assessment a student will encounter. Secondly, we can measure qualitatively the impact TESTA has had on changing the thinking around assessment design using a whole programme approach as a part of the programme’s re-validation.
Measuring impact using quantitative data Overall, TESTA has had a positive impact on the re-design of assessment and feedback on programmes undergoing re-validation. Figures 1-4 below show changes made by programme teams, comparing ‘before’ and ‘after’ data on the total number of assessment tasks a student may encounter during their undergraduate study. This is further broken down into the number of summative and formative assessment tasks and the different varieties of assessment students may encounter. These figures are based on TESTA audit data distilled as part of the re-validation process (the ‘before’ data) and data from the re-validated, ‘new’ programme module descriptors (the ‘after’ data). It is important to note that the numbers of assessment tasks detailed below are based on the average student’s assessment experience which is subject to change depending on module choices.
Figure 1. The total number of assessment tasks a student may encounter.

Programme      a   b   c   d   e   f   g   h   i
Before TESTA  46  85  28  48  50  49  75  51  46
After TESTA   46  30  28  36  34  40  43  27  41
Figure 2. The total number of summative assessment tasks a student may encounter.

Programme      a   b   c   d   e   f   g   h   i
Before TESTA  43  31  24  36  50  44  48  44  44
After TESTA   32  23  27  34  34  40  42  23  40
Figure 3. The total number of formative assessment tasks a student may encounter.

Programme      a    b    c    d    e    f    g    h    i
Before TESTA    3   54    4   12    0    5   27    7    2
After TESTA    14    7    1    2    0    0    1    4    1
*Note: for the purposes of TESTA formative assessment tasks are defined as those that are ungraded (that do not ‘count’ towards the final module grade) but are required to pass the module and elicit opportunities for feedback and mastery.
Figure 4. The different varieties of assessment a student may encounter.

Programme      a    b    c    d    e    f    g    h    i
Before TESTA   14   16   13   13   19    9   13   15   16
After TESTA    10   19   14   11   16    6   12    7   20
The data above show that, on average, programmes that engaged with the TESTA approach as part of their re-validation decreased the total number of assessments a student will encounter by 34.4% (n=17). Furthermore, on average programmes reduced the number of summative assessment tasks by 17.3% (n=7.6), the number of formative assessment tasks by 45.3% (n=9.3) and the number of varieties of assessment by 12.4% (n=4.4). The reduction in the number of assessment tasks, the rebalancing of formative and summative assessment and the re-thinking of assessment varieties all address issues identified by TESTA and by Harland et al concerning the over-assessing of students. In turn, these changes address the issue of assessment tasks competing for students' attention, effort and energy, easing students' workload as well as the assessment pressure points in the semester. For some programmes these changes were more substantial than for others, and the contextualisation of this data is important. Changes to the number and varieties of assessment tasks were made based on TESTA audit data, student feedback and the programme team's consideration, and as such
the extent of change differs significantly from programme to programme depending on each programme's needs. For some programmes it was necessary to dramatically reduce the number of summative and/or formative assessment tasks; for others it was necessary to increase the number of assessment tasks; and for some, improvements came from re-thinking the sequencing and patterns of assessment rather than from a reduction or increase in the number of assessment tasks their students would encounter. It is here that we must look beyond the limitations of quantitative data and examine how the TESTA approach shifted programme teams' thinking about assessment and feedback design.
Measuring impact using qualitative data

As part of TESTA's embedding in the re-validation process, programme teams are asked to comment on any changes made to assessment and feedback design as a result of their engagement with the TESTA approach. Below is a sample of comments given by programme teams with regard to the thinking behind the changes they made to assessment design on their programmes:
• 'TESTA allowed for the rebalancing of formative and summative assessment and was addressed through the moves toward programmatic assessment, the shift in emphasis to single, high-impact assessments and the embedding of formative work in lecture patterns to this end.'
• 'Following TESTA the sequence of the assessments has been considered so that feedback from one assignment informs the next assessment to facilitate student learning and development.'
• 'As a team, we met to discuss the [TESTA results] and have worked to implement formative tasks… and reduced the sheer number of individual assessments.'
• 'The TESTA process and its outcomes have provided a useful reference point for all the discussions and considerations that are now part of the preparations for the re-validation of the programme. There has been a heightened awareness about the various approaches to learning and teaching in relation to students' progression from their entry at Level 4 to their graduation subsequent to Level 6.'
• 'The team has sought to provide a more evenly balanced approach to assessment patterns across the programme in response to the issues of feedback and student perceptions of goals and standards that the TESTA audit outcome highlighted.'
• 'A reduction in the number of summative assessments and an increase in formative assessment is reflected in a number of module outlines in the revised programme. Summative assessments will be supported by one or more formative assessments to enable students to practise different approaches to assessment and receive feedback on their progress.'

The comments from programme teams above contextualise the quantitative data well, illustrating how programme teams engaged with TESTA findings to reflect, re-think and re-shape assessment and feedback design strategically and in a way that aims to enhance the student learning experience. The re-thinking of assessment patterns to create planned cycles of learning for students across whole programmes, and the provision of more opportunities for formative feedback, demonstrate a shift in thinking about assessment and feedback design and illustrate the effectiveness of a whole-programme approach. Furthermore, for some programme teams TESTA confirmed areas of strength, providing strong grounds for the continuation of good practice based directly on student experience and feedback.
Conclusion

Through building opportunities for enhancement into an existing re-validation process, the TESTA approach has successfully engaged whole programme teams in a dialogue regarding effective assessment and feedback design. Using audit data and student feedback, it has succeeded in addressing the 'arms race' of over-assessment and has shifted programme approaches to assessment design, particularly with regard to the sequencing of assessment, assessment and feedback cycles and the inclusion of formative feedback opportunities. The impact of these programme-level changes to assessment and feedback design is undoubtedly a step towards overcoming issues of poorly designed assessment that prevent authentic and transformative learning. Moving forward, and in light of the new Teaching Excellence Framework, it is essential that we continue to engage with our students when it comes to exploring effective assessment and feedback design, not just with regard to measures of satisfaction but (arguably more importantly) to authentic and transformative learning experiences too.
References

Ajjawi and Boud, 'Researching feedback dialogue: an interactional analysis approach', Assessment & Evaluation in Higher Education, Vol. 42, No. 2 (2015), pp. 252-265.
Ashford-Rowe et al, 'Establishing the critical elements that determine authentic assessment', Assessment & Evaluation in Higher Education (2013).
Barnett, 'Knowing and becoming in the higher education curriculum', Studies in Higher Education, Vol. 34, No. 4 (2009), pp. 429-440.
Bovill et al, 'Engaging and Empowering First-Year Students Through Curriculum Design: Perspectives From the Literature', Teaching in Higher Education, Vol. 16, No. 2 (2011), pp. 197-209.
Gibbs and Dunbar-Goddet, 'Characterising programme-level assessment environments that support learning', Assessment & Evaluation in Higher Education, Vol. 34, No. 4 (2009), pp. 481-489.
Gillett and Hammond, 'Mapping the maze of assessment: an investigation into practice', Active Learning in Higher Education, Vol. 10, No. 2 (2009).
Harland et al, 'An Assessment Arms Race and its Fallout: High-Stakes Grading and the Case for Slow Scholarship', Assessment & Evaluation in Higher Education (2014), pp. 1-14.
Jessop and El Hakim, 'Transforming assessment through the TESTA project', Capture, Vol. 4 (Spring 2013), pp. 37-46.
Jessop, El Hakim and Gibbs, 'The whole is greater than the sum of its parts: a large-scale study of students' learning in response to different programme assessment patterns', Assessment & Evaluation in Higher Education, Vol. 39, No. 1 (2014), pp. 73-88.
Jessop and Maleckar, 'The influence of disciplinary assessment patterns on student learning: a comparative study', Studies in Higher Education, Vol. 41, No. 4 (2014).
Jessop and Tomas, 'The implications of programme assessment patterns for student learning', Assessment & Evaluation in Higher Education (August 2016), pp. 1-9.
Krause and Coates, 'Student engagement in first-year university', Assessment & Evaluation in Higher Education, Vol. 33, No. 5 (2008), pp. 493-505.
Oblinger (ed.), 'Authentic Learning for the 21st Century: An Overview', Educause Learning Initiative: Advancing Learning Through IT Innovation (May 2007).
Price et al, 'If I was going there I wouldn't start from here: a critical commentary on current assessment practice', Assessment & Evaluation in Higher Education, Vol. 36, No. 4 (2011), pp. 479-492.
Sambell, 'Rethinking feedback in higher education: an assessment for learning perspective', ESCalate, HEA Subject Centre for Education (2011).
Wass, Harland and Mercer, 'Scaffolding Critical Thinking in the Zone of Proximal Development', Higher Education Research & Development, Vol. 30, No. 3 (2011), pp. 317-328.
Using a Partnership Approach through Student Fellows Projects as an Intervention to Assessment and Feedback

Cassie Shaw, Research Officer (Student Engagement), Cassie.Shaw@winchester.ac.uk
Dr Stuart Sims, Head of Student Engagement, Stuart.Sims@winchester.ac.uk
The results of the National Student Survey on assessment and feedback have inspired much discussion about how to address this aspect of the student experience. Students across the sector are frequently dissatisfied with their experiences of assessment and feedback, and academics are continually striving to address that dissatisfaction. This article presents some of the Student Fellows Scheme projects that have looked into assessment and feedback, in order to explore a student engagement approach to tackling these issues. This is done through working in partnership with students, and the article highlights some of the key areas students are addressing to improve their experience of assessment and feedback.
Staff-student partnership

The term student engagement has been conceptualised as a means to better understand retention and to decrease the effects of alienation and student boredom (Finn & Zimmer, 2012). One visible depiction of student engagement is The Student Participation Map, which demonstrates how definitions of student engagement can include varying levels of opportunities and initiatives, as it seeks to display a holistic perspective on the multitude of engagement pathways available to students (Shaw & Lowe, 2017). Since the introduction of Chapter B5 of the Quality Code from the Quality Assurance Agency, student engagement has been a key aspect of quality assurance within Higher Education Institutions (HEIs). Chapter B5 explains the partnership approach to student engagement, whereby students work with staff towards shared goals and values. This is through a relationship based on mutual respect from both parties, as they may offer
differing perceptions and experiences but are working together towards a common, agreed purpose (QAA, 2013). The policy for partnership in student engagement in Chapter B5 is also emphasised in the Manifesto for Partnership from the National Union of Students, which calls on HEIs to employ partnership for the co-creation of solutions to problems identified by a collective and, importantly, to co-deliver those solutions (NUS, 2012). The varying degrees to which partnership can form in student engagement initiatives are pictured in Lowe, Shaw, Sims, King and Paddison's (2017) partnership seesaws. The notion of partnership in student engagement was also explored by the Higher Education Academy, which suggests that partnership is underpinned by eight values: authenticity, inclusivity, reciprocity, empowerment, trust, challenge, community and responsibility (Healey, Flint & Harrington, 2014). These values seek to embed the culture of partnership within learning and teaching practices in higher education, through the policy and ethos of the QAA and NUS respectively. The Student Fellows Scheme is an example of this partnership relationship within an HEI in action. It fosters an environment for students and staff to create meaningful, demonstrable enhancements to students' educational experience, giving them the opportunity to co-create solutions with their staff partner in order to enhance the student experience.
Student Fellows Scheme

The Student Fellows Scheme partners 60 students with members of staff across the university to work on projects that develop education and enhance the student experience. The initiative is driven by the ethos of partnership, both through its individual projects and through its co-direction between the University of Winchester (UoW) and Winchester Student Union (WSU). Both UoW and WSU are committed to supporting the student-staff partnerships and their projects through a range of training and workshops, designed to increase the accessibility of the scheme by enabling students to gain valuable skills in research methods and data collection. The projects can address varying levels of the university: some are modular or programmatic, some address the wider faculty, and others the whole institution. This gives the Scheme the flexibility to accommodate differing project needs, as long as the projects lead to a positive change in the student experience at Winchester. The projects run from September to May and the students are paid a £600 bursary, split into four instalments, each subject to the completion of a progress review stage. The final progress review stage is a presentation at the Student Fellows Scheme Conference in May,
where staff and students are invited to present together for 20 minutes about their project and the change they have made as a result. All members of the university community are invited to attend the conference, which provides the opportunity to disseminate the projects widely, and to inspire and share good practice from across the institution.
Engagement and assessment

Providing students with a sense of autonomy over their student experience often motivates them to engage more deeply with their educational and institutional experience (McCombs, 1996). The notion of student autonomy is further emphasised in Bae and Kokka's (2016) paper, whose fifth recommendation to HEIs is to provide opportunities for students to make choices that coincide with their own interests and needs. Assessment is a key area of the student experience, and one where the Student Fellows Scheme can support students' agency and ability to shape outcomes. Students thrive when empowered to work in partnership with academic staff on aspects of their assessment experience. Studies have shown that working with staff to develop assessment marking criteria can feel more democratic to students, with their opinions valued (Hernández, 2007), and that it enables them to engage fully with the assessment because they can refer directly to the criteria they helped to construct (Meer & Chapman, 2015).
Student Fellows working on Assessment and Feedback

A review of Student Fellows project outputs since the scheme began indicates that assessment and feedback has not been a significant focus for the staff or students participating in the scheme. Of a total of 125 completed SFS projects between 2013 and 2016, only 11 have a clear focus in this area. Whether this indicates a lack of interest in the topic, a sense that there is little capacity for student-led change relating to assessment, or other factors, is hard to say. The following section gives a brief overview of these projects and their outcomes, including a range of student-generated changes and ideas for improving assessment and feedback.
Feedback Focus Groups in Psychology
Bianca Hyett & Merce Prat-Sala
Student-led focus groups that broke down various positive and negative aspects of the course and suggested solutions.
Key findings, outcomes and recommendations:
• Students struggle with handwritten feedback
• Clarity and potential interactivity of digital feedback was praised
• 100% exams were seen as too pressurised
• Students would prefer more contextual assessment questions ("Like give us a case about a child and tell us what their problems are and ask us, how would you deal with that?")
English Assessment and Feedback Working Group
Lauren Guyver
Mixed-methods research into student preparedness for assessments.
Key findings, outcomes and recommendations:
• Most students felt inconsistency in the way information was given about assessments was confusing
• Receiving information further in advance of assessments and dedicating one lecture per semester to explaining assessments was seen as a solution
Online methods of informal steady feedback in History
Kat Duffy and Ellie Woodacre
Questionnaire feeding back on the use of an online 'gobbet' forum in History.
Key findings, outcomes and recommendations:
• There was low take-up of this new online format of assessment because of timing and the limitations of the Learning Network
• Canvas was seen as a more accessible alternative with a better layout and support for student involvement
Improving feedback perceptions amongst psychology students through the use of the personal tutor scheme
Kiia Huttunen and Kirsty Ross
Student questionnaires after meetings with personal tutors to gain one-to-one or group feedback on assessments.
Key findings, outcomes and recommendations:
• Having a chance to verbally go through the feedback improves student engagement and their perceptions of feedback
• Structuring feedback around a rubric was seen as unhelpful and not personalised
• Students found general comments about what was good in their work the most useful form of feedback, followed by their grade
Explore what 'feedback' means for students, and develop a better understanding of students' needs in relation to feedback mechanisms/styles
Patricia Munhumumwe & Dave Raper
Survey and interviews about assessment and feedback in Health, Community and Social Studies.
Key findings, outcomes and recommendations:
• Many students felt the information they received did not prepare them completely for assessments
• Respondents were split over a preference for written or oral feedback
• Addressing learning outcomes in sessions helps to structure assessments
• Individual differences in the style of feedback caused difficulties for some students
• Formative presentations and activities related to assignments were seen as useful in a process of assessment dialogue
• As long as it highlighted key areas for improvement, the length or style of feedback was unimportant
• Peer support can have a strong impact on confidence and attainment
Transferable Skills and the Use of Assessments in BA English Studies
Elle Codling and Camille Shepherd
Surveys and focus groups with students about the programme's assessment structure.
Key findings, outcomes and recommendations:
• Students felt that having a range of assessments allowed them to develop transferable skills effectively
• Exams are useful for working under pressure, but for most respondents few other skills were perceived as being gained from them
• Exams do not allow students to produce a fully researched response, which was seen as limiting and 'non-academic'
• Presentations were praised for giving students more choice of topic and style
• Essays help develop research skills
• Portfolios are perceived well but rarely used
• Feedback can be vague, but participants also claimed that they tend to focus on and worry about the negative in audio feedback
• Desire for a single form of feedback for all assessments
An investigation into assignment and module feedback in the sports studies department
Vicki Wright, Jasmine Wyatt and Stewart Cotterill
Cross-departmental focus groups.
Key findings, outcomes and recommendations:
• Students do not like receiving non-specific feedback on their work
• Class feedback discussions make most students feel uncomfortable
• All lecturers should use the same method to ensure consistency

'I just want to bring up a point about the online Learning Activities'
Savannah King and Mick Jardine
Staff-student action to investigate and redevelop online learning activities, a recurring issue for student reps.
Key findings, outcomes and recommendations:
• Students are happiest with feedback when it is prompt and highlights where they went wrong
• While feedback from them was seen positively, online learning activities were not a popular choice for summative assessments
Student perceptions of practical assessment techniques used within the BA Sociology programme
Emma Sherman and Paul Jackson
Student focus groups asked about experiences of and preferences for assessment.
Key findings, outcomes and recommendations:
• Essays were a popular choice of assessment because students felt comfortable and familiar with them and saw them as 'academic'
• The pressure of presentations was seen as a positive by 'confident' students
• Presentations, reports and debates were seen as equipping students with the most relevant skills
• Having a say in assessment formats through Student Representation was seen as very important because student feedback was acted upon
Health, Community and Social Studies: the student experience of audio assessment feedback
Kirsty Brinkman and Nick Purkis
Focus group data with students about perceptions of audio feedback.
Key findings, outcomes and recommendations:
• The majority of students prefer audio feedback to written feedback
• Audio feedback was unanimously seen as more personal and more detailed; it made students feel more valued and is a better way of interacting with their lecturer
• Students found it comforting to know that their lecturers had taken the time to read their assignments thoroughly
Conclusion

Perhaps unsurprisingly, given that it is one of the main ways in which students can be involved in the wider process of assessment, feedback dominates these projects. Common themes indicate satisfaction with specific and personalised feedback and, unsurprisingly, that feedback is best received promptly. SFS projects which focus primarily on assessments themselves are less common, perhaps indicating that this is perceived as a more inflexible aspect of the student learning experience. Clarity of communication around processes seems to be key, and there is some evidence in favour of a broad diet of assessment to test a range of skills.
Bibliography

Bae, S. & Kokka, K. (2016). Student Engagement in Assessments: What Students and Teachers Find Engaging. Stanford, CA: Stanford Center for Opportunity Policy in Education and Stanford Center for Assessment, Learning, and Equity.
Finn, J. D. & Zimmer, K. S. (2012). 'Student Engagement: What is it? Why does it Matter?' In Christenson, S. L., Reschly, A. L. & Wylie, C. (eds), Handbook of Research on Student Engagement. Berlin: Springer Science & Business Media.
Healey, M., Flint, A. & Harrington, K. (2014). Engagement through partnership: students as partners in learning and teaching in higher education. York: Higher Education Academy.
Hernández, R. (2007). 'Students' Engagement in the Development of Criteria to Assess Written Tasks'. Transforming Assessment, published by REAP. Available at: http://ewds.strath.ac.uk/REAP/reap07/Portals/2/CSL/t2%20%20great%20designs%20for%20assessment/students%20deciding%20assessment%20criteria/Students_engagement_in_development_of_assessment_criteria.pdf [Accessed 3 March 2017].
McCombs, B. L. (1996). Understanding the keys to motivation to learn. Available at: http://www.mcrel.org/pdfconversion/noteworthy/learners_learning_schooling/barbaram.asp [Accessed 27 March 2006].
National Union of Students [NUS] (2012). A manifesto for partnership. Available at: http://www.nusconnect.org.uk/campaigns/highereducation/partnership/a-manifesto-forpartnerships/ [Accessed 20 January 2017].
QAA (2012). UK Quality Code for Higher Education. Chapter B5: Student Engagement. Gloucester: Quality Assurance Agency.
Shaw, C. & Lowe, T. (2017). The Student Participation Map: A tool to map student participations, engagements, opportunities and extra-curricular activities across a Higher Education Institution.
Engaging with Feedback for Learning

Delia Scalco, Student Fellow (Criminology & Psychology), d.scalco.16@unimail.winchester.ac.uk

Feedback is an essential tool that students can use to improve their work, positively influencing the quality and overall outcome of their studies. As a Student Fellow working on a project about assessment and feedback, I conducted research on student engagement with feedback at the University of Winchester. The research focussed on students' perceptions of feedback, what forms of feedback they considered most helpful, how they use it and in what other ways they would like their feedback delivered. Seventy-one undergraduate and postgraduate students and five members of academic staff across different courses at the University of Winchester participated in the research. Of
the seventy-one students, fifty-six took part in semi-structured interviews while the remaining fifteen responded to an online survey. The questions were designed to generate participants' perspectives on feedback – their knowledge, opinions and experiences – with an emphasis on the preferred methods that would enable them to participate actively in the feedback process. The five members of academic staff answered questions regarding what they do to ensure that the feedback they provide is part of a two-way process. The findings reveal a marked difference between what students and academic staff consider to be feedback. Academic staff are clearly aware of the diverse methods of feedback available, whereas students typically only recognise feedback that is written and given on a one-to-one basis. The students prefer more thoughtful comments that are personalised and focused on the quality of their work rather than comments about, for example, the structure of an assignment or referencing errors. Students responded positively to the quantity of feedback given, with 62% saying that the amount is 'just right' and 38% saying that they
'don't receive enough'. However, they were more critical about the efficacy of feedback when asked to identify the positive and negative aspects of the feedback that they have received in the past. Students identified problems such as overly critical comments and, on courses where feedback is handwritten, difficulty reading and understanding these observations, with some suggesting the removal of this format. The following quotes demonstrate typical responses: 'I do not find it helpful when they put negative information on your work and sometimes I do not want to see that. It does not motivate me to carry on.' 'It is hard when they write on your work because sometimes you cannot see it, you cannot understand the handwriting, they should do it in a word document. I think that handwritten feedback should not be used anymore.' One issue that students identified as affecting their engagement with feedback is timeliness. There is often a large gap between the end of the assignment and the receipt of feedback, by which point other tasks have passed and new ones started. This time difference creates a
delay in the student's ability to act on the feedback. However, students are understanding and accept that 'sometimes it takes a while for the feedback to be delivered, but we know we are not the only class that they have'. When asked whether they struggle to understand their feedback, most students stated that they do not, but some commented that it can occasionally be unclear: 'Sometimes comments can be a bit vague, on one of my assignments I had three question marks with no explanations'. In considering what they do with their feedback, the most common answer was that students just read the comments. However, some students actively participate in the feedback process while others do not engage with feedback at all. The academic staff provided valuable insights into their methods of creating a two-way feedback process, such as having an open-door policy, personalised feedback and feed-forward tutorials. The purpose of conducting this research was to find out students' views of feedback, but more importantly their preferred method of
delivery of feedback. The general theme that emerges from the responses is that students would prefer more timely and formative feedback, such as mock exams, regular exercises and in-class quizzes. They also expressed a wish for more one-to-one time with their lecturers. However, one of the most significant findings to emerge from the interviews was that students do not know how to act on their feedback and therefore cannot actively engage with it. With this key information in mind, different support methods are being considered at the University of Winchester. One of the approaches under consideration is an adaptation of the Developing Engagement with Feedback Toolkit (DEFT, 2016), which was created by Dr Naomi Winstone from the University of Surrey in collaboration with Dr Robert Nash from Aston University. The DEFT contains a feedback guide for students, workshop materials and resources for the development of a feedback portfolio. During the interviews, students were very confident about making better use of Canvas to host their feedback portfolios. In other words, all their feedback would be available in a single place of reference for easy review at any time
with students being able to access learning tools to help them put their feedback into practice. The University of Surrey uses the DEFT and different universities across the UK use an adaptation of the feedback guide. In summary, feedback is a two-way process, and the role that the student plays in this is of high importance. Although students are satisfied with the quantity of feedback received, they are more critical of the delivery of feedback, and this results in a lower overall engagement than would otherwise be possible. However, students are very positive about the opportunity to interact more efficiently with the academic staff and consider that any additional support would be beneficial.
References

Winstone, N. & Nash, R. A. (2016). The Developing Engagement with Feedback Toolkit (DEFT). Heslington: Higher Education Academy. Retrieved from https://www.heacademy.ac.uk/resource/developing-engagement-feedback-toolkit-deft.
Alternative explanations of high and low module average marks

Prof. Graham Gibbs, Independent Consultant
The problem

Many institutions experience the phenomenon of individual modules differing widely in their average marks and pass rates. This may not matter much when most students take much the same collection of modules to obtain their degree within a subject. But when most students collect a wide variety of different modules across different subjects, their degree classification, and even their likelihood of graduating at all, can be determined as much by which modules they took as by how much they have learnt. It then matters why a module has high or low average marks, in order to decide what steps
to take to establish a fair assessment regime, in which standards are roughly equal and can be defended rationally.
Possible diagnoses

High or low average student marks on modules (or for whole subject areas) can have a wide range of causes. Here these causes are categorised in three ways:
• teachers setting inappropriately high or low standards
• teachers implementing the standards weakly even where standards have been set appropriately
• students achieving high or low educational standards, through educational effectiveness or ineffectiveness of the teachers or the students themselves, even where standards have been set appropriately and implemented effectively.
There is a range of possible forms of each of these causes and they can occur in almost any combination. Causes can also change from year to year and can vary between modules within the same subject area, so that a high average for a subject area can be the result of a range of causes in
different modules, each of which contributes to the average in different ways. Low or high averages may also be perfectly justified – for example, students may be putting in insufficient effort or teaching may be especially effective.
1. Setting standards at an inappropriately high (or low) level

1.1 Having too much (or too little) content in the curriculum.
• There have been examples of a module being split into two so as to spread content over a longer time period in order to enable students to cope, after persistent low averages in the preceding single module, only to produce high averages in the subsequent two modules. Programme re-structuring often produces disjunctions in terms of appropriate 'volume' of this kind – a study by Gibbs and Haigh demonstrated such a phenomenon for one subject at Brookes over a decade in the 1980s and 90s, with each restructuring following course review producing low averages that gradually increased over the subsequent four years.
• Newly formed modules on new topics or with radical new pedagogies often misjudge the volume of material students can realistically tackle.
• New subject areas are sometimes thin in terms of the volume and complexity of theory, evidence and literature available to be studied, while well established subject areas have the opposite problem of having to be ruthlessly selective if they are not to overburden students. Oxford, with its very traditional curricula, persistently overburdens students for this reason.
• Professional bodies may define curricula in a way that overstretches the kinds of students being taught. There used to be an Institute of Banking course at Brookes with a very low pass rate, imposed by the Institute by releasing day-release students for too little time for the huge syllabus, as a deliberate mechanism to depress the career and salary aspirations of bank staff. In effect the university lost control of standards through not being able to define the curriculum appropriately for the students.
1.2 Having too much (or too little) for students to do in terms of tasks and assignments. There used to be wide discrepancies between modules in terms of the number of hours students needed to put in to get the same marks – and this was more to do with the design and number of assignments than the perceived size of the curriculum. A study at Brookes by Roger Lindsay found a variation of 900% between the lightest and heaviest perceived workloads on modules, with an average of 300% variation for each student between the different modules they had taken, even within the same field of study.

1.3 Overestimating (or less commonly underestimating) the ability of students, given their background knowledge, to meet the standard set. This can happen:
• when the academic has experience, as a student or as a teacher, in an institution with students of greater (or lesser) ability or previous educational experience than those on the current course, and the standards have been set in relation to that past experience rather than the current context. It is assumed here that all standards are somewhat relativistic rather than absolute and should be adjusted somewhat to the context. It is a matter of judgement how far this adjustment should stretch. For example, probably 95% of Wolverhampton's Maths students would fail the first year of Maths at Oxford and it is appropriate that Wolverhampton does not apply Oxford's standards. This problem is most prevalent in subject areas where content is clearly graded in difficulty and the module design issue is how far up the gradient to set the tasks. Oxford regularly gets this wrong for tutorial problem sheets in science that hardly any students can tackle, but is more careful about getting it right for exam questions, after teachers pool their experience of the ability of their tutorial groups to tackle the kinds of problems that have been set. Essay-type subjects, in contrast, have tasks that can be tackled at some level of competence by almost anyone.
• when changes in enrolment result in the student population changing from what the module was originally designed for (e.g. students from another field taking the module without having taken a recommended prior module or not having an appropriate A-level background). Such a change in population may take several years to resolve as it may involve compromising on 'single subject' standards, which academics are (rightly) reluctant to do.
• when the epistemology or discourse of the module is unfamiliar to the students (e.g. a discursive text-based module when students are used to scientific, numerical or algorithmic processes). This is a focus of the recent study at Brookes by Anton Havnes. The problem can be ameliorated by planning the way students gradually get experience of new ways of thinking, and receive feedback on their performance, before the first time they are marked. Some subjects manage this well over time, across modules, while other subjects have little programme-level planning of this kind across modules.
• when assignments involve new kinds of task demand. For example, the first time students give an oral presentation they are unlikely to be very good at it. If the first time they do this the presentation counts for 50% of the marks for the module, this will depress average marks. A research study carried out in 2006 found Brookes students to be bewildered by the variety and idiosyncrasy of assignment demands (compared with students in the same subjects at two very different universities where students gain more experience of each type of a more limited range of assignments) and also relatively unclear about goals and standards associated with these demands, despite the higher level of explicit specification of standards at Brookes.
• when a pre-requisite module has failed (for whatever reason) to bring students up to the level expected by the designer of the module.
• when subject matter is inherently more difficult. When an academic argues that 'all students everywhere do poorly in this particular subject', this may well be true. The issue then is whether standards should be adjusted in the light of this phenomenon so that students can have a reasonable expectation that, with a similar level of application as on other modules, they should have a reasonable chance of similar performance.
1.4 Drafting criteria and standards in such a way that getting marks is too easy (or too difficult) independently of how students perform or how the marking is undertaken. This can happen where:
• 'novel' assignments are used which make unusual demands for a student 'performance' and where the standards students can reasonably achieve are not yet well understood (compared with, for example, essays)
• learning outcomes are unusual or not yet well calibrated. For example, there is evidence that generic and transferable skills, such as 'presentation' and 'group skills', tend not to produce many very low marks compared with tests of knowledge that can produce a mark of 0%. Transferable skills often have a 'baseline' around the pass mark. A study by Gibbs and Webster of the marking of dissertations in Sociology at Brookes found that when the markers' overall mark was closely correlated with the markers' ratings of 'general' characteristics (such as organisation) the overall mark was significantly higher than when it correlated closely with 'sociology' characteristics (such as use of sociological theory). A greater emphasis on generic outcomes in any subject tends to inflate marks. At the current stage of higher education in the UK, where we are still learning how to assess generic outcomes, outcomes such as 'group skills' are easier for students to acquire and display in a way that gains marks than is 'content knowledge'. Subject areas with traditional academic values and a distinctive disciplinary discourse, the sophisticated use of which defines standards within the discipline, tend to emphasise such generic outcomes to a lesser extent.
2. Implementing standards weakly

This is an issue that is often amenable to procedural improvement through codes of practice, to staff development and to student development. The most common problems involve:

2.1 Lack of clarity about standards so that students set their sights too low (or high) or just miss the target altogether. There is a good deal of literature describing students' confusion and misconceptions about criteria and standards, and such problems should be expected unless special steps are taken. Specifying criteria in ever more detail has been shown by studies by Price, Rust and O'Donovan within Brookes not to help much, but students seeing and discussing exemplars of work marked to be at different standards, and self-assessing in relation to standards so they can calibrate their
judgements to those of their teachers, works comparatively well. Efforts at the University of Strathclyde along these lines have improved grades on modules that previously had low averages, across a range of subjects.

2.2 Lack of clarity about standards so that markers mark too leniently (or toughly) or simply mark to different and unrelated standards. While Brookes specifies criteria in course documentation, standards are much harder to specify in a way that is understandable to 'outsiders'. High levels of unreliability in marking are the norm rather than the exception for open-ended and discursive assignments. The study of dissertation marking in Sociology at Brookes, mentioned above, found near-random marking despite criteria being specified. Sociology have since developed criteria and standards, and the way they are used, to a considerable extent, and have published the outcomes, so this problem is amenable to resolution.

2.3 Bringing to bear marking standards from a different context rather than adjusting to the more appropriate local standards. This is common where markers are employed who are not full-time members
of the local academic community. The Open University goes to considerable lengths to orient its part time tutors to appropriate standards, through marking exercises with sample assignments, discussions with full time members of the course team, and detailed specifications in relation to individual assignments. Even after all this effort it still then monitors the tutors’ average marks for every assignment and every exam question, to ensure they do not differ statistically from the average, and full time members of the course team then sample the tutors’ marks and feedback and make individual staff development interventions where necessary. Any tutor, no matter how experienced, can be put on a high level of monitoring if their average marks diverge from course averages. The QAA allows the Open University not to double mark because they align markers’ standards so thoroughly. This problem is also common where markers assess the students in a different context – for example in the workplace, where it is not possible for standards to be the same as quite different things are being learnt and assessed. Equivalence of standards, or at least similar grade
distributions, may also be difficult to achieve if those marking in the workplace do not also mark in the academic context.
2.4 Lack of expertise or experience (or both) of markers. Teachers new to marking commonly produce low marks – recognising weaknesses in students' work before they learn to recognise strengths, and applying inappropriately tough standards as part of the assertion of their own academic standing. New markers need to be mentored and monitored through their early marking experiences, and inducted into local values and standards through a kind of 'cultural alignment' process. Such alignment cannot be achieved by written specification alone.

2.5 Lack of 'emotional distance' from the student. At Oxford the tutorial relationship and feedback on weekly assignments is quite separate from the examination system and tutors never examine their own students – it is considered impossible for tutors to be objective about their own students. Similarly, those at Brookes who mentor or supervise social workers or nurses in the
workplace have a personal investment in their students and their success, but at Brookes these supervisors are allowed to mark their own students and the result is very positively skewed grade distributions. Supervisors of PhD theses are normally excluded from the examination process for the same reason. It is very common for such personally engaged tutors or supervisors, if they are allowed to mark, to produce very high average marks and no failures, as currently happens in some courses at Brookes. Such supervisors may need to be brought into the academic community and its sense of standards, or excluded from anything other than judgements about whether formal requirements have been met by the student.

2.6 Not picking up problems with standards and dealing with them as part of the examination process (for example external examiners considering discrepancies between two internal markers, or sampling markers' work and checking the standards they use or grade distributions they produce) before marks are agreed and recorded. The Open University has a mark moderation process for each course, and arithmetically manipulates marks from assignments, and
changes how marks are combined, if there is evidence of unusually and inappropriately high or low marks for particular assignments or even individual exam questions, before marks are agreed and students informed of their course grade. Students understand that their assignment grades will not produce a course mark automatically.
3. Achieving low (or high) standards of learning outcome
This issue is not about the standards as set by the course or the standards used to produce marks, but about the educational effectiveness of the module or programme and the standard of student work or performance on the module. There are an enormous number of variables which affect the standards of educational achievement of students and the following list is not comprehensive. It is clearly quite wrong for students who have been taught in a dull way, put in little effort as a consequence, learnt little as a consequence and gained poor marks as a consequence, to have their marks increased just because the module displays an unusually low average. It seems equally wrong for a well taught module
that has engaged the interest and effort of students, who then produce wonderful performances, to have their marks reduced just because the module average is unusually high. It is very difficult to make judgements about what to do about the consequences of the following variables without reference to some independent judgement of the relative standard of student work in relation to standards at equivalent institutions. While the proportion of students gaining firsts at Oxford is comparatively high, external examiners routinely make a point of supporting this high proportion because they judge the quality of work produced to be exceptional. External examiners' judgements are crucial to a diagnosis of whether a module or subject is or is not producing high (or low) standards of student work.

3.1 Student variables. Especially good (or poor) students, or skewed distributions of students, may produce high or low average marks. For example, there is usually a substantial difference in performance between students studying subject X modules who are majoring in X and those only taking the few X modules that they are required to as part of another subject – the students majoring in subject X do
comparatively well and the students from a different field do comparatively poorly, and you end up with a bimodal distribution of marks. The balance of the types of students determines whether the average is high or low. Sometimes fields have designed modules for 'visiting' students that deliberately cover less material at a lower level so as to produce 'normal' average marks. Here a normal average mark hides lower academic standards.

3.2 Especially engaged (or disengaged) students. This may be due to how students came to take the module – for example, compulsory modules tend to be associated with lower motivation, lower ratings of teaching and lower performance, in all higher education contexts. Or it may be due to the effectiveness of the module in engaging the students. There is now convincing empirical evidence (from the USA) about those features of courses that lead to higher engagement and hence to higher marks. Interestingly, these features have a much more positive impact on lower-ability and less motivated students. High-ability and self-motivated students seem immune from interventions and innovation, and engage themselves.
3.3 Excellent (or poor) teaching. A whole range of measurable features of teaching have been found to consistently affect student performance to some extent, and some factors, such as teachers' 'organisation', have a quite marked impact on average marks.

3.4 Class size. Brookes was the first institution, in the late 1980s, to identify a significant negative impact of class size on student performance, in some but not all subject areas. It also proved possible to identify large-enrolment modules that did not suffer from lower average marks – and they almost all used teaching and assessment methods that resulted in students experiencing many of the benefits of smaller classes. Subsequently it appears that standards have been adjusted on large-enrolment modules to produce 'normal' average marks, but without any of the improved pedagogy to explain their achievement. I would be much more worried about average or high marks on large-enrolment modules than about low marks. Low marks will usually be justified by the almost inevitably poorer student performance
produced on most large-enrolment modules.

3.5 Well (or poorly) co-ordinated goals, assessment, feedback, teaching and marking, leading to high (or low) levels of 'constructive alignment', so that regardless of whether the components are adequate separately, they do (or do not) pull effectively in the same direction. 'Constructive alignment' has been identified as a major issue beyond 'good teaching' that affects student learning outcomes. The most common problem is a lack of alignment between what the students understand the module to be asking them to do, and what they therefore spend their time on, and what the assessment system actually rewards in terms of marks, so that students, while they work hard, like the course, respect the teaching and feel they have learnt a lot, still 'miss the target'.

3.6 Lack of appropriate knowledge background of students resulting from bad teaching or low standards on a preceding module. In modular structures the preceding modules may not even have been studied. The level of students' background knowledge predicts student
performance better than any other variable. The Open University by and large allows students to take whichever courses they like, but it can predict with chilling certainty which students, having not passed certain previous courses, will drop out, fail or gain poor grades. Degree programmes with more vertically integrated structures and more rules about pre-requisites, so that students' knowledge background is more predictable, tend to produce more advanced outcomes and have lower failure and drop-out rates.

3.7 Students' lack of familiarity with the type of assessed task so that they cannot demonstrate their learning effectively. Oxford students almost always write essays, usually at least one a week, so they get very good indeed at writing essays, and it is essays that they write in exams, at which they then excel. In contrast, Brookes students bump into all kinds of curious assessment demands they may not be familiar with and often have only one or two goes at each type of assignment, and each may contribute 50% or even 100% of the marks for a module. It is much harder for Brookes students to become very good at the types of assignments they tackle – except in
subject areas that have coherent policies to limit the variety of assessment and progressively develop students' competence at each type through multiple cycles of practice and feedback.

3.8 A 'hidden curriculum' that allows students to be highly selective in what they study, perhaps because exam questions are highly predictable or because they are the same as essay questions, so that with little effort they can display high levels of performance under examination conditions. The problem here, and it is usually hidden from external examiners, is that the exam mark is then a very poor predictor of student knowledge across the whole curriculum. The recent study of assessment at Brookes compared with other institutions, mentioned above, found clear evidence of this 'hidden curriculum' effect operating and enabling students to study only a small proportion of the module they were taking and still gain good marks. The opposite problem – of examination demands being so unpredictable that even industrious and cautious students find that they have not studied the right material – is less common.
Reviewing individual modules

It is unlikely that any quantitative approach to identifying causes of high and low average marks will be very productive, because such methods identify patterns across large numbers of modules (such as the effects of students' A-level scores), while causes in any individual module are likely to be complex, highly idiosyncratic and highly contextual (such as whether, this year, the tutors hired to do the marking understood what they were supposed to be doing). Causes are often only visible to those inside the local culture who understand the 'teaching and learning system' the module is part of. The above list might be useful as a checklist. Module leaders of modules with high or low average marks might be asked to think about which of these possible causes they think are the most likely to be influential in their case. Some causes (such as poor teaching or inappropriate marking standards) may not be identified by such self-diagnosis, but these causes would not be open to identification by quantitative methods either. The credibility and usefulness of this kind of self-diagnosis of causes, and of what might be done to resolve problems where they actually exist, depends
on the active engagement of the professional judgement of those involved. The checklist might assist in that engagement. External or technical diagnoses seem less likely to produce that engagement, however statistically sophisticated. Whether action should be taken when averages are high or low is always a matter for academic judgement by those closest to understanding the complexity of the context in which the variation occurs. Bureaucratic rules on their own can never safeguard standards.

Graham Gibbs is an Honorary Professor at the University of Winchester, working in support of the TESTA project: Transforming the Experience of Students Through Assessment. This paper was originally drafted to inform a debate at Oxford Brookes University and so refers to data and studies within Oxford Brookes. Reproduced with the author's permission from www.testa.ac.uk.
What are the links between assessment and creativity? With specific reference to practical assessment in the Drama programme.

Samuel Chivers, Vice President, Education, Winchester Student Union, Samuel.Chivers@winchester.ac.uk

Having spent the last three years studying for an undergraduate degree in Drama here at the University of Winchester, I know that the idea of creativity is clearly in the minds of both staff and students. Arts subjects tend to have an air of subjectivity, and this shows itself clearly when students are being assessed through practical performance. This piece will explore this concept and allied issues by looking at six opinions from a range of individuals: two graduates, two current undergraduate students and two lecturers in the field of performing arts. My reasoning
behind this is to give the discussion a broad range of perspectives to consider. My aims in this piece are not to demand that all criteria and assessments be stripped from the syllabus; on the contrary, as one participant argued, ‘assessment criteria can also help to focus the creative process, giving students an overarching theme or goal to work towards.’ My aim is, however, to open up discussion as to what assessment criteria do to students, and how we might find ways of working around the issue of students obsessing over marking criteria when, some may argue, what is important is the work they are putting in. For further clarification, the QAA guidelines state: ‘Assessment within… drama and performance enables students to demonstrate their level of attainment and the full range of abilities and skills. A diverse range of assessment types are used in…, drama and performance as reflects the discipline and curriculum design ensures that students are appropriately prepared’ (QAA, 2015). This is a tried and tested system and works to give students a fair grade across their assessments. Simply put, it works when one looks at a
degree in terms of quantitative data and grade point averages. However, when doing a degree in Drama, many are not here just for the degree. Many hope that the skills they learn will be transferable and assist them in securing a job within the Performing Arts, or even somewhere completely different such as teaching or lecturing. The idea that a student’s creativity can be nurtured and improved is attractive to many students, but a quantitative view of degrees arguably gets in the way. I would therefore like to invite you to read the six responses I have received. Each respondent was asked to answer the same question and each has their own view on it; some take a broader, more philosophical look and others have a clear and defined opinion.
Ollie Smith – Drama Graduate
With regards to creativity in assessment, there is always a limitation as to how creative a person can be. There is always room to be creative; however, the limitation is the criteria one must cover and use in said assessment. Although we may be free to choose our topic of focus to be assessed on, we are still limited as to how creative our answer and question could be. From a practical point of view this is increased. Although we may again be free to choose the way in which we execute our assessment, we are still limited as to how our response can be, as certain criteria need to be fulfilled. One could argue that, because of the freedom of our chosen topics, we need to remain focused in specific areas so as not to branch out and become too broad for both written and practical work; however, I feel that restricting this does not allow for the creative freedom in which we can experiment and focus our own understanding.
Katie Muncer – Drama Graduate
While it may be argued that practical assessments enforce certain limits on students’ creativity, assessment criteria can also help to focus the creative process, giving students an overarching theme or goal to work towards. The process of creating practical work for the purpose of an assessment allows students to explore the complex topics and themes of their modules through more proactive means. In using creative devices to create practical work for assessments, students are able to engage in a
live dialogue with the theories and texts presented in their modules. Assessment criteria then provide a focal point for this dialogue, encouraging students to explore their own views on a specific assessment topic, which is further informed by the teaching they experience throughout the module.

As both Ollie and Katie are graduates, they each have their own ideas about how their time studying this particular course could have been, and how it actually was. It is worth noting that their opinions do not necessarily disagree; rather, Ollie’s stance takes a more holistic look at the pedagogic practice, whereas Katie believes that assessment has the ability to focus students.
Becky Jones – 3rd Year Drama Undergraduate
Honestly, I don’t believe that assessment and creativity exist in absolute harmony. The assessment criteria are insightful in terms of understanding what lecturers need to see from you in order to please academic standards. Apart from that, it’s a guideline and I think if criteria are followed too
rigorously, you end up sacrificing your own artistic merit. From experience, I rarely look at assessment criteria (yes I know, bad student) and often forget about it. I work collaboratively with peers, throwing new ideas into the equation and trying them out practically; this particular form of devising work allows for experimentation, which is paramount to learning. If you don’t take a risk, how will you ever improve? Although I understand how fixated some students are on attaining a certain grade, it really doesn’t reflect the gumption or creativity of the work. Obviously, from an achievement point of view, it is informative to understand where you are at grade-wise. However, when the time approaches to start applying for postgraduate courses or jobs, I think prospective institutions and employers should be more concerned by your ideas than a digit on a piece of paper. We are not defined by regulation, we are defined by our skills, which can’t be measured by that 2:1 you barely scraped.
Martin Jakeman – 3rd Year Drama Undergraduate
Creativity as an assessment is, to me, an oxymoron; I ask myself “what do I prioritise, the creativity or the marking criteria?” In the
early stages of creating (usually through workshops), multiple ideas are being tried out and inevitably either scrapped or altered, but creativity can often be limited if ideas are capped at “what the examiner is looking for”. For many practical exams I have looked at the marking criteria twice: once when finding key points to explore, and then at the end of the project when the marks are attached. Creativity to me is a long string of thoughts that link from idea to idea and that, although it might not stick strictly to the marking criteria, has been inspired by it. It can be difficult when weighing up how important creativeness and meeting the brief are to an assessment, and it differs between each group in our cohort. To me personally, I have always found the criteria of an assessment restricting, so view it more as an inspiration rather than viewing it as scripture.

As both are current students, I believe that Martin and Becky’s points hold an awful lot of insight into how students currently feel about the topic of assessment criteria and practical work. Their views, though similar, take a different look at the question: whilst Becky looks at the experience of her peers and how they may obsess over the marking criteria, Martin takes a personal view to back his argument. However, all students, current and past, share one thing in common: an extensive knowledge of, and respect for, assessment criteria. I do not believe that any of them outright oppose having criteria as part of their assessments; Martin even says that he views them more as ‘an inspiration rather than viewing it as scripture’. And this is what part of this piece is beginning to open up: is the emphasis on marking criteria put on by the students themselves, or does the fault lie with lecturers who follow these criteria and perhaps apply too much stress?
Marianne Sharp – Senior Lecturer in Drama
My feelings about the relationship between practical assessment and creativity, particularly in relation to how assessment works on the Drama programme at Winchester, are mixed. Students often seem to be concerned about two related things: the first is whether or not marks given for practical work are 'subjective', and the second is around grading processes and assessment criteria. To deal with the first concern, whilst I recognise that there is an element of subjectivity in whether or not tutors 'like' a
piece of performance work (in terms of personal taste), I don't think for the most part that their marking practices are subjective. We mark to criteria so, for example, we are all entirely capable of personally disliking a piece of work, but still awarding it a First Class grade because it meets the criteria and we can recognise the quality of the work, even if it is not to our personal taste. Concerning grading processes, I don't like having to award marks for practical work. The reason for this is that I am interested in the work, rather than in the frameworks and structures of Higher Education, per se. I'm interested in students learning and developing creatively and with rigour, and numerical marks and lengthy assessment criteria can sometimes distract from processes of learning. Anyone can make a piece of work that is more or less developed... So a piece of work that gets a 2:2 can have first class potential, but just not have developed to that extent yet. Labelling the work with a number can sometimes keep students obsessed about marks, rather than exploring the excitement of creating work for the joy and significance of creating work. Numerical marks don't, in my opinion, serve the student and their creativity
(or their future potential) - they serve the University system. They help us to calculate a degree number, they relate to national degree standards and they inform other universities - to a certain extent - about the level of a student's work for entry into a postgraduate programme. On the final year Group Project module on the Drama programme, we try to encourage students to make work they are passionate about, and to invest themselves fully in the making and rehearsing process. This is done within the community of the cohort: we obligate students to show their work to each other over the course of the year, as it is developing, with the intent that seeing the development of each other's practice encourages a high level of investment on the part of the different student groups in the quality of the work they produce. We are proud that in recent years, the External Examiner for Drama has commented that the quality of final year student practical work is often higher than the national average at this level, and whilst this is largely due to the hard work of the students involved, we hope that we have managed to foster an environment in which students are encouraged to think beyond the assessment criteria in their
making processes, and rather focus on their passion.
Stevie Simkin – Lecturer in Drama
I am aware this may be a minority opinion, but one of my chief concerns about the relationship between assessment and creativity is the growing preoccupation with assessment criteria I have witnessed in over 20 years working at Winchester University. I think this obsession with criteria has the potential to stifle student creativity, particularly in practical work. Criteria seem to be a growing preoccupation for students, understandably anxious about their grades and wishing to maximise their potential to score the highest marks possible. In turn, and often in response to pressure from quality agency demands both internal and external, lecturers have dutifully produced lengthier and more complex criteria, which, increasingly, tend to be riddled with slightly paranoid caveats and disclaimers. If we step back for a moment from assessment and consider: what is it about the best performances we remember that sears those moments into our memories, whether inside the academy or on the professional
stage? Those live events that take our breath away, bring us to tears, leave us elated or devastated? I am proposing nothing new when I suggest that, as soon as we are forced to put these feelings into words, those moments are immediately diminished, even as we try and convey to others how they have moved us. But, essentially, this is what we are required to do when we draw up criteria for performance work. I have lost count of the number of times I have had animated discussions with colleagues about an exciting student performance, only to be brought crashing back to earth by the question, ‘But have they met the criteria?’ Not that we shouldn’t try to find the words. We have a duty to the students to offer them guidance, to help them fulfil their potential. And, after all, we ask them to do it all the time, whether reflecting on work they have seen, or critiquing their own practice. But the danger, it seems to me, is that students become bogged down in their efforts to interpret lecturers’ attempts to put into words those extraordinary qualities that define the best creative performance work. Diligent attempts to ‘meet the criteria’ can mean students lose sight of what has inspired them;
they lose the vital spark of originality by trying to emulate what previous ‘high scoring’ students have achieved. Free creative flight is foiled as the students have yet another debate about ‘what we need to do to get a First’. The solution? There are no easy answers, but we can at least talk to the students about these issues. Speak openly and honestly about the challenges of judging creative work by criteria that seem set in stone (at least for this academic year). And encourage them to read and absorb those criteria, and then put them aside and be true to their own visions and creative instincts. Because only then will they be free to make new and exciting performances that will inspire both them and those fortunate enough to see (and, some of us, grade) their work.

As both of these lecturers have been working in their field for many years, they have unique insight into how the Drama course has changed and improved. With that being said, I believe their views are similar to those of the current students, Martin and Becky. They are all very much interested in the work that is being created right now, even going so far, in Marianne’s case, as to say that she does not like awarding marks for practical work. I feel that all
of these opinions are part of an even bigger conversation that should be happening. The arts and Drama are such an integral part of society, and to stifle artists’ ability to craft work at the point where a student’s life is beginning could have damning effects.
Conclusion
To conclude, I’d like to take a look at a quote from Robert Sternberg’s article The Assessment of Creativity, in which he states that ‘educational practices that seem to promote learning may inadvertently suppress creativity’ (Sternberg, 2012). I believe that he is on to something here. The drive to provide an adequate learning environment and educational practice cannot be denied, and I firmly believe that lecturers within the programme are put under huge amounts of pressure to comply with these, sometimes stringent, criteria. It is not their intent to mark down students who put in huge amounts of effort and put on incredible pieces of work, but that does not alter the view, shared by many within this piece, that creativity can be, and is, stifled when too much emphasis is placed upon assessment criteria. I believe that criteria are needed, if only for structure, as they can give clarity to many students who find it
difficult to pull ideas out of thin air. However, in many instances that I have heard about in conversation, these criteria can be some of the most difficult constraints on a student’s work, and many can find themselves forcing ideas and work into a particular practical assessment without giving it due consideration, which all comes down to the point that Stevie Simkin makes about “meeting the criteria”.
Bibliography
Sternberg, R. J. (2012). The Assessment of Creativity: An Investment-Based Approach. Creativity Research Journal, 24 (1), p. 4.
QAA. (2015). Subject Benchmark Statement: Dance, Drama and Performance. Available: http://www.qaa.ac.uk/publications/information-and-guidance/publication?PubID=2964#.WP9NbIegsdW [Last accessed 25th April 2017]
Programme Focused Assessment
Paul Jennings, Head of Department of Accounting and Investment, Paul.Jennings@winchester.ac.uk
Julia Osgerby, Senior Lecturer in Accounting, Julia.Osgerby@winchester.ac.uk
Alison Bonathan, Senior Lecturer in Accounting, Alison.Bonathan@winchester.ac.uk
Introduction
Programme-focussed assessment (PFA) is assessment of student learning specifically designed to address key programme learning outcomes.1 PFA shifts the balance away from component- and module-based assessment, which is where most assessment currently takes place. It is integrative in nature and directly assesses the extent to which students are meeting programme learning outcomes, reinforcing the link between the aims of the programme and the activities of learning, teaching and assessment. It can thus help to counter the short-term, superficial learning that can result from frequent module-based assessment. It also helps to address the problem of over-assessment, which is undesirable for pedagogic reasons as well as being an inefficient use of scarce resources. Duplication of assessment can be reduced and assessment is more obviously relevant to the student’s future studies – assessment for learning as well as assessment of learning. Integrative assessment that focuses on programme learning outcomes is consistent with greater intellectual attainment and improved employability, since it encourages the development of skills relevant to future intellectual and professional development.

1. This initiative builds upon the research carried out in the PASS project. Details are available at http://www.pass.brad.ac.uk
Context
In 2015, the Undergraduate Accounting programmes introduced a programme-level assessment as a new Level 4, 20-credit assessment module. There is no new technical material in the module; it is an overall integrated assessment of the Programmes’ Learning Outcomes. This was initially introduced as a pilot to support the university’s exploration of PFA; following a successful implementation, the change was confirmed at a revalidation event in 2016. A diagrammatic representation of the programmes is shown overleaf.
Structure of the assessment
The assessment is a case-based, closed-book written exam lasting 3 hours. No materials are allowed in the exam; non-programmable calculators are allowed. The company used in the case is usually a well-known publicly listed company. The information given to students contains background information about the company, usually consisting of extracts from financial statements and additional information about activities and performance from the annual report, company publications or public media. Usually students will be given around 5-10 pages of information. As far as practicable all the information is real.2 All or most of this information is distributed to students via the Learning Network approximately 24 hours before the exam. Students therefore have the opportunity to read the information in advance. The main reason for distributing the information in advance is to reduce the amount of reading students are expected to do under exam conditions. Students are not expected to do any analysis of this information or to prepare any specific work in advance of the exam.3

The assessment is integrative in nature, relying on knowledge and skills addressed in all the taught modules in the first year of the programme. The specific technical knowledge areas included will vary from year to year. While the assessment is set at Level 4, the style of assessment and feedback available to students will be valuable for assessing progression, since the focus on skills emphasises the importance of academic and professional skills at higher levels of the programme. Requirements consist of six single structured requirements. Students are expected to spend roughly equal amounts of time on each of the requirements.

2. Depending on the nature of and areas covered by the case study, parts of the information may be simplified or fictitious (for example, students may be instructed to assume the value of some variables to facilitate calculations or analysis if this information is not publicly available).

3. Students will be given a clean copy of this information in the exam; they will not be permitted to bring their own copy with them and they will not be permitted to bring any notes or other material with them.

Assessment criteria
The assessment is marked by assessing the student’s ability in each of several identified areas of competence (AOC), which are in turn derived from programme learning outcomes and module content. The criteria used in assessing students include:
• Discipline-specific technical knowledge and understanding (weighted 50%)
• Non-discipline-specific transferable skills (weighted 50%)

The discipline-specific technical knowledge and understanding marks (50% in total) will be awarded for:
• Technical accounting knowledge from three distinct areas of the syllabus4, which will vary from exam to exam (10% each, total 30%)
• Business understanding (10%)
• Ethical awareness (10%)

The non-discipline-specific transferable skills marks (50% in total) will be awarded for:
• Use of knowledge and data (20%)
• Analysis (20%)
• Evaluation (10%)

The weightings reflect the importance of skills in the assessment and the level at which the assessment is set. The weightings could be amended in the future or if the assessment were set at a different level.

4. Financial Reporting (AN1902), Management Information (AN1903), Financial Management (AN1917 from 2016-2017), Business Ethics (AN1915), Business Law (AN1915), Business Economics (AN1911 / AN1918 from 2016-2017), Business Management (AN1908 / AN1918 from 2016-2017)
The constructive alignment of this assessment with assessments at levels 5 and 6 of the programme helps the University to manage progression and gives students formative feedback on their abilities, which can help them to inform their approach to study at higher levels. The marks awarded in each AOC are weighted to calculate a total mark. (It is emphasised that the total mark awarded is thus a weighted average of the marks awarded to each area of competence, rather than the total of the marks awarded to each answer.) Finally, a factor is applied which converts the
student’s total mark into a final mark. The factor reflects naturally occurring variations in the overall level of difficulty of each assessment and allows some longitudinal normalisation of marks where there are identifiable differences in performance between cohorts. Typically, the factor assumes that the average performance in the top decile of students represents the best realistically achievable mark and that the percentage of students who pass the module will be comparable across all modules at that level. A diagrammatic representation of the assessment criteria is shown below.
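To make the arithmetic concrete, the following is a minimal sketch in Python of the weighted-average and scaling steps described above. The AOC labels, the example marks and the particular scaling rule shown (mapping the top-decile average to roughly full marks) are assumptions for illustration only; the programme’s actual calculation is defined by its own marking scheme and regulations.

```python
# Minimal sketch (not the programme's actual implementation) of a weighted
# AOC total followed by a cohort-level scaling factor.

# Weights taken from the assessment criteria section above; they sum to 100%.
AOC_WEIGHTS = {
    "technical_area_1": 0.10,
    "technical_area_2": 0.10,
    "technical_area_3": 0.10,
    "business_understanding": 0.10,
    "ethical_awareness": 0.10,
    "use_of_knowledge_and_data": 0.20,
    "analysis": 0.20,
    "evaluation": 0.10,
}

def total_mark(aoc_marks):
    """Weighted average of the marks awarded to each area of competence."""
    return sum(AOC_WEIGHTS[aoc] * mark for aoc, mark in aoc_marks.items())

def final_mark(total, factor):
    """Apply the cohort-level scaling factor, capping the result at 100."""
    return min(100.0, total * factor)

# Hypothetical student: AOC marks out of 100, invented for illustration.
student = {
    "technical_area_1": 62, "technical_area_2": 55, "technical_area_3": 70,
    "business_understanding": 60, "ethical_awareness": 68,
    "use_of_knowledge_and_data": 58, "analysis": 64, "evaluation": 60,
}

# One possible reading of the scaling rule: if the top decile of the cohort
# averages 85 before scaling, treat 85 as the best realistically achievable
# mark and scale so that it maps to full marks.
factor = 100.0 / 85.0
print(round(total_mark(student), 1), round(final_mark(total_mark(student), factor), 1))
```

The key point the sketch illustrates is that the final mark is a weighted average of competence areas rescaled at cohort level, not a simple sum of question marks.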
Myth busting folklore around assessment
Nicolette Connon, Quality Officer (Regulations & Policies)
Nicolette.Connon@winchester.ac.uk
We’ve all heard these myths at some time or another. Someone says: ‘You can’t reference a source more than once in the same assessment’; another person claims ‘Marks will be deducted if you go over the word count.’ But are they right? You want the truth? Alright then, the answers are ‘no’ and ‘depends’ respectively. The ‘no’ seems obvious if you have read around your subject, because most texts make repeated references to a single source at some point - that’s why the word ‘ibid’ was invented. So why does the myth persist? And even though ‘depends’ suggests confusion about word counts, it is still a useful answer because it tells you that there is more to know before a definitive answer can be given - and why should the myth develop that says ‘marks will be
deducted’, when it could just as easily have evolved to say ‘marks won’t be deducted’? I spend most of my time writing, tweaking and publishing rules and regulations, and reminding staff and students to read them and ensure they understand them. But do they? And even if they do, how long do they remember them? And what happens when substantial changes are made and something is no longer true? And what about staff who move here from elsewhere and have never considered that the rules they were familiar with before might be different here in Winchester? Now you can begin to see how the myths develop. But myth busting is a dangerous game. I’m trying to set the story straight, but will it backfire? We’re all human and as such we read quickly, skimming text, looking for nuggets of information. If I write down the myths alongside the truth, which will be taken on board and remembered as fact – the myth or the truth? So take heed and read with due care and attention: I’m aiming to bust myths, not perpetuate them. Let’s start with the myth above for which the answer was ‘depends’, namely ‘marks will be deducted if you go over the word count’. There is no University-wide policy for an automatic penalty for overshooting or
indeed undershooting the word count for an assessment. Most written assessments have a ‘word count’ – this is an indicative number designed to give students an idea of what the markers expect them to produce in order to answer the question effectively. However, and this is where the ‘depends’ comes in, penalties will be imposed if the programme has set a ‘word limit’ for a particular assessment. If a word limit has been set, this will be detailed in the Module/Programme Handbook and must be accompanied by details of the penalties that will be imposed if students overshoot or undershoot that limit. The threshold can be anything from a single word to a percentage over or under the word limit - it’s up to the programme. The Assessment Regulations detail how programmes should do this. If they haven’t documented it properly, then students are entitled to appeal against a penalty. Another common misunderstanding among students is: ‘Self-plagiarising means I mustn’t ever quote myself or revisit topics I’ve written about previously’ – untrue. Self-plagiarising is reusing or paraphrasing your own work without referencing the work where you originally expressed the ideas. Students should consult their programme’s referencing guide - this will explain how to reference unpublished work, including previous
assessment submissions – and learn to self-reference with confidence. After all, this is a valuable skill and, who knows, maybe one day you will be referencing your own published work. And while we’re talking about referencing: ‘If you reference the marker’s work, you will be seen favourably.’ Seeking favouritism is frowned upon in academia and in the UK generally. So, no, markers will not give students extra marks for quoting their ideas or work. Of course, if the marker’s ideas or work are relevant to a student’s argument and they reference them properly, then they can include them. But they should always aim to seek out and reference the most relevant sources to support their ideas by using the reading lists provided. Now the student has learned to reference and is making progress, but the work is getting harder, the reading lists are long, and then some not-so-bright spark panics their peers by claiming: ‘You need at least 5 bibliography items at L4, 10 at L5 and 15 at L6’. Rubbish. Academic work is not a ‘do it by numbers’ game, unless perhaps you are studying accountancy. The reading list for the module will indicate the range of texts students might consult when preparing for an assessment, but a bibliography should be based on sources that
students have actually consulted. Regardless of whether they read an entire shelf of books or just a handful, it is the quality and relevance of the texts that they choose to include in their bibliography, and how they reference them in their assessments, that will affect the mark they receive. So the student has completed their essay, submitted it and then, shock horror, realises that they forgot to include their meticulously referenced bibliography. ‘Too bad,’ says an administrator, ‘Once submitted, you can’t ask for your assessment back’. Wrong! A student can retract an assessment in order to change it, but only if the deadline hasn’t yet passed. So for anyone caught in this position, make sure you have a corrected version ready at the point you ask for the hard copy back or retract the electronic version. Because if you retract the work and the deadline passes before you submit the corrected version, your first attempt will be marked as a non-submission, and whatever you submit after the deadline will be deemed a resubmission and the mark will be capped at the minimum pass mark. Sadly, things didn’t go to plan and a student is required to repeat a failed module. ‘If you are repeating a failed module’, says a tutor, ‘you can only have one attempt at each assessment because the work will be capped’
– not so. Students who repeat a module have exactly the same number of attempts at assessments as any other student taking that module. That said, both the first attempt and the second, if needed, will be capped at the minimum pass mark. Fortunately, the student passed their failed module and made it to the end of their Honours degree. They finally give some thought to how the University is going to add up their marks over 20+ modules and come up with an honours classification. ‘Don’t worry’, says a mate, ‘you can drop your two worst modules from the calculation’. This is half right. 30 credits will be dropped from L5/6 and none of L4 will be counted, but students don’t get to choose which modules will be ‘dropped’. If they take an Extended Independent Study (EIS), that will always count for 20% of the final mark. Then the best 60 credits at L6 will count for 40%. (Students who don’t take an EIS will have the best 90 credits at L6 count for 60%.) Finally, whether they took an EIS or not, the best remaining 120 credits at L5/6 will count for the last 40%. So there you have it: some common myths, busted.
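For readers who like to see the arithmetic, here is a minimal sketch in Python of the weighting rules just described. It assumes every module is worth 20 credits, treats the EIS mark separately, and ignores the many edge cases the Assessment Regulations actually handle; the module marks are invented for illustration.

```python
# Minimal sketch (hypothetical marks, 20-credit modules) of the honours
# classification weighting described above. The real rules live in the
# Assessment Regulations and cover cases this sketch ignores.

def credit_weighted_average(modules):
    """Credit-weighted mean of a list of (credits, mark) pairs."""
    total_credits = sum(credits for credits, _ in modules)
    return sum(credits * mark for credits, mark in modules) / total_credits

def best_credits(modules, credits_needed):
    """Greedily keep the best-marked modules until the credit quota is met.
    Returns (selected, remaining)."""
    ranked = sorted(modules, key=lambda cm: cm[1], reverse=True)
    selected, remaining, taken = [], [], 0
    for credits, mark in ranked:
        if taken < credits_needed:
            selected.append((credits, mark))
            taken += credits
        else:
            remaining.append((credits, mark))
    return selected, remaining

def classify(l5_l6_modules, eis_mark=None):
    """EIS 20% + best 60 L6 credits 40% + best remaining 120 L5/6 credits 40%;
    without an EIS: best 90 L6 credits 60% + best remaining 120 credits 40%."""
    l6 = [(c, m) for level, c, m in l5_l6_modules if level == 6]
    l5 = [(c, m) for level, c, m in l5_l6_modules if level == 5]
    if eis_mark is not None:
        best_l6, rest_l6 = best_credits(l6, 60)
        best_rest, _ = best_credits(rest_l6 + l5, 120)
        return (0.2 * eis_mark
                + 0.4 * credit_weighted_average(best_l6)
                + 0.4 * credit_weighted_average(best_rest))
    best_l6, rest_l6 = best_credits(l6, 90)
    best_rest, _ = best_credits(rest_l6 + l5, 120)
    return 0.6 * credit_weighted_average(best_l6) + 0.4 * credit_weighted_average(best_rest)

# Hypothetical profile: six 20-credit modules at L5, four at L6 plus a 40-credit EIS.
modules = ([(5, 20, m) for m in (58, 62, 60, 55, 64, 59)]
           + [(6, 20, m) for m in (66, 61, 70, 58)])
print(round(classify(modules, eis_mark=67), 1))
```

The sketch illustrates the key point above: the weakest credits fall out of the calculation automatically, and students never choose which ones are dropped.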
Call for papers- Capture Vol. Six Special Edition- ‘The Lecture’
It is our intention to publish Capture twice a year with our next issue due for publication in Autumn 2017. This will be on the topic of the lecture:
• What is one?
• Why and how do we do it?
• What happens when it goes wrong?
• How can we ensure it motivates and engages?
If you would like to contribute an article, case study, literature or book review, link to particularly helpful resources, visual or other kind of offering then do please get in touch with the editors. We also welcome suggestions from readers as to themes and content for future issues. Please send all suggestions, questions or submissions to Capture@winchester.ac.uk