Are traditional assessment methods appropriate in contemporary higher education?


April 2014

Sarah Hamilton BPP Learning & Teaching


Introduction

There have been various reports and papers over the last few years citing a need for assessment reform in Higher Education:

“The National Student Survey, despite its limitations, has made more visible what researchers in the field have known for many years: assessment in our universities is far from perfect. From student satisfaction surveys to Select Committee reports there is firm evidence that assessment is not successfully meeting the needs of students, employers, politicians or the public in general.” (HEA, 2012: 7)

The HEA argues that assessments are not keeping up with the changing nature of Higher Education or the wider range of skills and knowledge now expected of students.

Authentic assessment is a concept that has become increasingly prominent in the literature (Murphy 2006) and often refers to assessments that are more complex and challenging than traditional tests. Such assessments examine performance on worthy or valued activities, as opposed to assessment that largely tests the ability to recall knowledge (Wiggins 1990). Sambell et al (2013) identify two different concepts of authentic assessment. The first refers to activities that have relevance to, or simulate, work-related activities, requiring students to use the skills they would use in the workplace. Wiggins (1990) states that “authentic tasks involve ‘ill-structured’ challenges and roles that help students rehearse for the complex ambiguities of the ‘game’ of adult and professional life.” The second approach to authenticity is about ensuring that assessment design is not driven primarily by time and resources but instead focuses on whether the educational learning outcomes are genuinely being assessed (Murphy 2006). This, however, assumes that the learning outcomes themselves are appropriate for the changing nature of Higher Education identified by the HEA (2012).

Traditional assessments often fail to address the concept of authenticity and the wider development of skills, as well as knowledge, now required within the Higher Education sector. Assessment within Higher Education therefore needs to be reformed. More types of assessment are available today than traditional exams, yet research indicates that exams are still widely used, particularly at postgraduate level (Brown 2012). Brown identified that “most assessment in current use relies principally on very traditional methods – particularly unseen time constrained exams, essays, and above all, dissertations and other lengthy written assessments” (Brown 2012:1).

A BPP Business School working paper

The focus of this paper is to consider whether examinations are genuinely appropriate methods of assessment in the 21st-century context of Higher Education, and to review the validity and reliability of two alternative, more authentic approaches in use at BPP University.

The Examination

Whilst examinations were used in medieval times (Cox, 1967), and by the Chinese from as early as the Han period (206 BC – 23 AD), university examinations in the United Kingdom originated at Oxford and Cambridge. “The Victorians felt that examinations were necessary to make undergraduates work” (Cox, 1967:294), a sentiment some might agree with today. In the 19th century, however, exams were not just the domain of Oxford and Cambridge but provided a fair selection method for employment in public office. The civil service examination created a system in which men were judged without bias on the basis of their performance rather than their background (Mathews, 1985). Those in favour of exams might argue they are the only truly ‘fair’ method of assessment, genuinely assessing individual merit.

Race et al dispute the fairness of exams, stating that “students who may have mastered the subject material to a greater degree may not get due credit for their learning if their exam technique repeatedly lets them down” (2005: 28). Flint and Johnson (2011) found that students were unhappy that exams do not enable them to demonstrate their full level of competence or ability.

However, there has in recent years been an increase in essay bank companies that sell pre-written or bespoke essays to students. The challenge of identifying whether coursework is genuinely the work of the student is an ongoing one, and fears of an increase in essay purchasing have led some degree programmes to revert to exams. Authentic assessment could be used to eliminate the market for these essay bank companies, which are only able to exist because coursework and written assessments lack authenticity and are often formulaic in their requirements. It could be argued that these formulaic requirements, and a desire for fairness, override the validity of the assessment itself. It might be fair, but what do the results actually tell us?

What do exams actually assess? Race (2006) states that unseen written exams have low validity, identifying that “badly set exams encourage surface learning” (Race 2006:28). This is supported by Sambell et al (2013), who also argue that exams force students to adopt surface rather than deep approaches to learning. Students adopt shallow learning strategies that enable them to discard the knowledge in time for the next exam on a different subject.



One consequence of a surface approach that involves memorising rather than embedding knowledge is that, very soon after the exam, the knowledge is often lost or discarded to make room for the next module. This is in part a criticism of modular degrees, where modules are often delivered in isolation, creating artificial boundaries between concepts and areas of study. As a result, exams can cause students to think too narrowly and restrict their ability to make connections across subject areas. Students fail to focus on the wider field of study, which would enable them to draw on learning from previous modules and in turn strengthen their performance in the current one. Advocates of authentic assessment such as Wiggins would argue that “traditional tests tend to reveal only whether the student can recognise, recall or ‘plug in’ what was learned out of context” (1990:1).

Yet exams are not all bad; there are genuine arguments for the use of exam-based assessments, and the concept of what constitutes an examination has broadened. Exams were traditionally a series of questions in a timed written test, often a three-hour unseen paper with three or more essay questions. Examinations now take several forms (Brown 1999), such as:

• Open-book exams
• Take-away papers
• Case studies
• Objective structured clinical examinations
• Simulations
• In-tray exercises
• MCQ or short answer tests

Agazzi (1967) summarises some of the key criticisms aimed at exams and, despite the time that has elapsed since his writing, many of the criticisms below still remain applicable today:

• Examinations are essentially a matter of chance and good luck and their results depend almost entirely on the character and mood of the examiners
• The marking of written papers is affected by the legibility or otherwise of the candidate’s handwriting
• The final result is influenced by the examiner’s own cultural and ideological opinions
• The candidate’s own state of health, whether he approaches the examination calmly or in a state of nervous tension, even his social background, have a decisive influence on his answers and on the results
• Experiment has shown that there can be a marked and disconcerting discrepancy in the marks awarded by different examiners correcting the same papers
• Examiners’ own reactions can override the objective evidence offered by the candidate’s answers

Since the time of Agazzi’s writing there has been a massification of higher education, which has inevitably led to larger cohort sizes and a need for assessment methods that suit large numbers of students. In the UK there has also been an increase in international students, many of whom are more familiar with traditional examinations than with more innovative authentic assessments. Exams may therefore seem an obvious choice for assessing a large group of diverse students in a reasonable time frame. Here the priority is time and resource rather than authenticity. However, recent innovations in authentic assessment have identified that non-traditional assessments can take less time to mark, particularly when using group-based activities (Willmott 2014). In addition, more alternative forms of innovative and authentic assessment might provide more opportunities for inclusion where there are elements of choice involved.


The argument against exams is their inability to represent or reflect the complexity of situations and skills. The research cited above implies it is impossible to test anything beyond knowledge and comprehension in an exam, which would mean exams should be reserved for testing Bloom et al’s (1956) lower-level skills only. However, exams and the technology used to deliver them have advanced significantly, and online MCQs such as those delivered through the QuestionMark e-assessment tool offer a sophisticated range of questions that some would argue can be adapted to test the full range of cognitive skills from knowledge through to synthesis (Bull and McKenna 2004). Whilst there are those who would still argue that MCQs cannot, in reality, sufficiently test these higher-level skills (Fellenz 2004), viewing authentic assessment as merely assessing higher-level skills still misses the second concept of authenticity, which is about reflecting real-world tasks. Here again a tool such as QuestionMark could be used for medical students to diagnose a condition or for business students to determine the biggest risk to an organisation.



However, whilst open-book exams and case studies can take students beyond the task of recalling knowledge, the authenticity is still limited. In reality, when we are problem solving we are unlikely to do it in one sitting: we are unlikely to read what little information we have and make a decision then and there. We would work out what information we lacked and go away and find it first; in real life the problem itself is often not so clearly articulated for us either. We would interact with our environment to investigate further, try things out, ask questions and explore different avenues before making recommendations. Yet even though the concept of the examination has moved on from the standard three-hour essay-based paper, the artificial construct within which the student operates can often only test what they think they would do, not what they would do.


The Objective Structured Clinical Examination (OSCE) used in healthcare provides opportunities for problem solving, decision making and some interaction with the environment. The OSCE is made up of a series of tests, similar to circuit training in P.E. classes. Each of these tests, usually referred to as stations, involves a different activity: for example, counselling a patient on a procedure, reading an x-ray or writing a chart note. The student moves around the circuit completing each test as they go (Yudkowsky 2009). This type of exam provides more authenticity than traditional formats.

Miller (1990) states there are four areas of activity that assessment needs to test: knows, knows how, shows how, does. Whilst he was writing primarily about clinical assessment, this is equally transferable to many disciplines, particularly those related to specific professional practice. Examinations are therefore important for testing the ‘knows’ and ‘knows how’ elements of assessment but cannot test actual performance, what the student actually does; this requires a different type of assessment. Brown and Glasner argue that “exams can be a useful element of a mixed diet of assessment, so we do not want to throw the baby away with the bathwater” (1999: 9). The argument here, therefore, is not that exams are no longer appropriate but that they need to be used for the right purpose: their use is authentic when they are a valid form of assessment for the intended learning outcomes, not a default chosen on grounds of time, resources, plagiarism or historical legacy.

What the students think

Flint and Johnson (2011) argue that students find exams unfair and that exams are one of the most stressful and problematic forms of assessment for students. In their research, one of the criteria that students assign to fair assessment is that it enables them to evidence and demonstrate their capability; exams, it is felt, do not enable them to do this. Students in their study believed that continuous assessment was a better predictor of, and way of demonstrating, their abilities.


Some of the key comments from students in the Flint and Johnson study are summarised below:

• Exams only test memory
• There is not enough guidance on what will be in the exam
• It is unreasonable to expect a student to evidence their learning from one module or academic term in a two-hour paper based primarily on what you can remember rather than what you can do with that knowledge
• Exams are based on luck, depending on what you could remember, what you had revised and what the questions were
• There is a general lack of feedback from exams, so students learn nothing from them
• Exams cause more stress and anxiety than other forms of assessment

In their research there was one student who preferred exams, whilst others understood why tutors preferred them (students being less able to plagiarise). The majority of students in the study felt exams were unfair and lacked the validity to test capability.

As has been shown, the research in the field is overwhelmingly critical of exams and strongly argues for more authentic assessments. Yet if the research largely favours moving away from exams, why is the Higher Education sector still so dependent upon them? The Assimilate project (Brown 2012) identified some of the challenges and constraints that lecturers face in trying to move away from exam-based assessment. These included restrictions imposed by existing module learning outcomes that are not easy to change due to lengthy quality assurance processes, and conservative attitudes among validating panels and professional bodies. They also found that lecturers’ own experience of assessment, and what they therefore perceive as appropriate (because it is what they encountered as students), has an impact. The university’s approach to innovation in assessment was also significant: institutions with a conservative, risk-averse culture were likely to stick with traditional methods. Finally, the research identified the need for training and faculty development so that staff understand what tools are available to them and how these can be used to design more authentic assessment.



Alternative authentic assessments

Assuming the above challenges can be overcome, two alternative forms of assessment that provide more opportunity for authenticity are portfolio and group based assessments.

Portfolios

According to Race et al, “it seems probable that, in due course, degree classifications will no longer be regarded as sufficient evidence of students’ knowledge, skills and competences, and that profiles will be used increasingly to augment the indicators of students achievements, with portfolios to provide in-depth evidence” (2005:71). Eight years after Race wrote this there has been increased interest in the use of e-portfolios, but overall portfolio based assessment has not caught on to the degree perhaps envisaged.

Stock and Trevitt (2012) argue that the use of portfolios is becoming increasingly common. Portfolios are frequently used in teaching and healthcare qualifications; they are not a new form of assessment, just less commonly used than other traditional methods. Elton confirms that they have been used for a “long time in architecture and in art and design” (Elton 2011). The evidence therefore suggests that they are more commonly used within qualifications of a more vocational orientation. In fields such as art and design, the portfolio is part of professional practice: designers would expect to keep a portfolio in the real world, and portfolio based assessments therefore provide authenticity in mirroring real-world practice.

A portfolio, whilst comprising several different documents, still constitutes one overall piece of work. The danger with portfolio based assessment is that each document becomes an assessment in its own right, which leads to over-assessing the course and the student.

Elton (2011) argues that portfolios are better suited to higher-level skills, largely due to their reflective nature: they are used to evidence and demonstrate the development of skills over time, in particular criticality, problem solving and creativity. Elton (2011) also argues that portfolios are a more inclusive form of assessment, as students write about themselves and their own experience in their own context. Inclusivity is an important element of assessment design with an increasingly diverse student population. Portfolios therefore also provide a more authentic form of assessment: they can test higher-level learning outcomes around activities such as evaluation and synthesis, but also lower-level outcomes such as knowledge and comprehension, albeit more indirectly.


Portfolios are in essence an individual piece of work; they do not necessarily have to conform to a set style or format, unlike essays and business reports, which tend to have certain universal conventions. Stock and Trevitt (2012) allow their students to determine their own format and encourage them to be innovative with it. This does of course make portfolios difficult to mark and can cause difficulty for tutors or examiners unfamiliar with portfolio based assessment. However, it does reduce the possibilities for plagiarism and for purchasing completed assignments: the requirement for portfolios to evidence personal reflection and creativity makes them less formulaic and less bound by standard formats and conventions, and arguably the uniqueness of each individual student’s learning journey makes it difficult to copy someone else’s work or to pay somebody else to write your portfolio for you.

There is, though, some challenge around the perceived fairness and reliability of marking such individual pieces of work. Knight (2002) suggests that the types of higher-level and professional practice skills that portfolios can assess should be formatively assessed only, as they are too subjective to be assessed fairly. He argues that summative assessment should be kept for the domain of knowledge, which he believes can be fairly and reliably assessed.

Stocks et al (2010) make the point that whilst portfolios are designed to be an honest reflection of the learner’s development, their very use as summative assessment can affect the degree of honesty students are prepared to share. Buckridge (cited in Stocks et al 2010) refers to students focusing on demonstrating success against competences rather than providing genuine reflection on the things that did not go so well; she suggests that portfolios become subject to game playing in the same way that any assessment can.

Baume (2001) suggests that portfolio assessment can be reliable and is therefore suitable for summative assessment. He also notes that many assessments used in higher education could be deemed ‘discouragingly unreliable’ (Baume 2001:12), but that key principles need to be adhered to in order to ensure the reliability of any assessment. The primary principle is that assessments are based on the learning outcomes. He does note, however, that any course with an excessive number of learning outcomes will find it difficult to have a reliable assessment: it is a tall order to expect one summative assessment to meet extensive multiple learning outcomes. There is also a wide field of literature on the unreliability of marking in traditional assessments. Studies have shown huge discrepancies between markers, which suggests that traditional assessments are no more reliable or fair than other forms. The discrepancies do not just exist between markers; external examiners often vary greatly in their allocation of marks too (QAA 2013).



Whether an assessment is fair or not is open to interpretation. It could be argued that portfolios are fairer because they are inclusive and provide an element of choice from the student’s perspective. The challenge for fairness arises in how to reward effort equally across such potentially varied portfolios. This can to some extent be addressed through strict guidelines and rules around size and word count, though ultimately this starts to limit choice, reducing some of the inclusiveness of the assessment. The concept of fairness is, and always will be, a subjective one and open to debate (Baume 2001).

The PGCPE at BPP University uses portfolio based assessment on all four modules and has deliberately tried to address some of these issues of fairness. It does this by stipulating what evidence should be submitted, while allowing students to submit alternative evidence where appropriate and in consultation with the module leader. In a short assessment evaluation survey, some students on the course felt that stipulating the evidence made the portfolio too ‘contrived’ and would have preferred more personal choice over what evidence they could submit. The survey also revealed that at the beginning of the course the majority of students had no previous experience of submitting portfolios, yet all felt that the portfolio was an appropriate form of assessment for their qualification. Whilst they thought it appropriate, they did not all like it, though they all accepted it. The students who enjoyed it least were those more familiar with traditional examinations, who had become used to last-minute revision and cramming for exams. Spreading the effort throughout the module was a new and disconcerting experience for some students, conditioned as they were to last-minute assessment preparation. Guard et al (2003) identified that for some students the move to portfolio based assessment requires a significant shift in behaviour and attitude, so it should not be undertaken lightly. As we have already seen, a key criticism of exam-based assessment is that it encourages only surface learning and sudden pockets of manic study; principles of good assessment (Nicol & Macfarlane-Dick 2006, Gibbs 2006) refer to assessment that spreads the effort out over the course of the programme, encouraging deeper learning.

Whilst students overall agreed that portfolios were a good form of assessment in this context, it was clear that time for discussion around the portfolios and the application of the intended learning outcomes is essential when students are unfamiliar with this type of assessment. Price et al (2012) would claim that such open dialogue is important for all assessments in order to establish students’ assessment literacy, enabling them to better understand the requirements of the test. It is also important to determine the extent to which boundaries need to be set for the assessment, and whether those boundaries might be too restrictive, potentially limiting some of the added value of portfolios in allowing student choice.

The importance of dialogue around portfolio assessment relates as much to dialogue within the teaching team as to dialogue between teachers and students. Standardisation meetings when marking portfolios are particularly significant. Those teaching on the PGCPE found that discussion of the approaches students had taken was essential for creating a shared understanding of the standards required, and that constant dialogue around the interpretation and application of the learning outcomes was equally essential. Portfolios are therefore not without their challenges: they require significant changes in mindset from lecturers and changes in attitude and behaviour from students. Overall, however, they can be considered a more authentic, and perhaps more appropriate, type of assessment than exams, particularly for postgraduates in professional fields of practice.

Group work and team based assessments

Another potentially authentic approach to assessment is the use of group work, though the authenticity depends on what is being assessed: the way the group works together as a team, or the actual output at the end of the group work. Group based assessment is not a new concept, and its educational value has long been espoused by writers who believe that many problems in today’s world are too complex to be resolved by individuals on their own and need the input of a group. Burdett (2003) points out that group work has the added advantage of offering not only educational value but also time and resource savings in the increasingly demanding world of Higher Education. Almond, however, claims that the real-world argument for using group based assessment is false and that “GSA should only be used on non-contributing modules, because degree classifications are awarded to individuals” (2009:147). Almond makes an interesting point: in the workplace, whilst team awards exist, your main reward, your salary, is an individual reflection of your performance.

According to Flint and Johnson (2011), group work is, along with exams, one of the most stressful forms of assessment for students. Whilst the ability to work well in a team or group is widely recognised by most employability programmes as a key skill for graduates to possess, the concept of group work, and ultimately group based assessment, sparks considerable debate. In a recent set of undergraduate focus groups at BPP Business School, students expressed particular frustration about the use of group work in class. The higher-achieving students felt that they got nothing out of participating in group work where groups were of mixed ability, making comments such as being “fed up with carrying those that aren’t any good” and “Let me choose my group, mixed ability groups are used as a way of raising tutor pass rates, not why I am here”. Research by Almond (2009) supports the views expressed in these comments: in mixed ability groups, students who normally score highly when assessed individually have their marks brought down by other students in their group, whereas students who achieve lower marks when assessed individually find their marks increase. Knight (cited in Plastow et al, 2010:401) also found that “group marks were higher than individual assignment marks and the number of fails was lower in group assessment than their individual assessments”. Plastow et al (2010) go on to say that the increase in marks during group work disguises the lack of skills and knowledge of lower achievers, enabling them to pass and move on to the next module despite not having reached an appropriate level. Plastow et al (2010) therefore felt group assessment was not appropriate for first-year undergraduate study but considered it suitable at final degree level.

Assessment designers therefore need to be clear about what group work is assessing, asking themselves which learning outcomes the assessment seeks to address. Learning outcomes that relate to working in teams or groups may well require a group or team based assessment, but clarity is also needed on what marks are being allocated for. One of the biggest areas of debate regarding group based assessment is how it will be marked: will there be one group mark, or should individual performance be rewarded too? Many seek to use a combination of both, often introducing personal assignments related to the group project. The research indicates that group work provides opportunities for students to develop their learning and their higher-level skills (Burdett 2003, Plastow et al 2010), but this is not reflected in all students’ results, particularly those of the normally high achievers. This suggests a lack of constructive alignment between the learning outcomes, learning activities and the assessment, which in turn suggests it is not authentic.

Willmott (2014) makes an interesting distinction between group based and team based assessment, and perhaps the focus for authentic assessment should be on the team rather than the group. He identifies group based assessment as an activity that an individual could in reality achieve alone, for example writing a business report. A team based assessment, by contrast, is one that in reality requires the combined talents and strengths of individuals to complete the task successfully. He gives the example of an assessment in which students created video presentations on a bioethics theme: this he described as a team based assessment because the skills required were wide and varied, from the creative and artistic through to the technical and practical, and the academic and theoretical. Choosing teams based on members’ individual strengths establishes their roles (Belbin 2010) and creates a sense of purpose. This in turn should reduce the social loafing or free riding that can happen in group work where some students do not pull their weight. If every student brings something specific to the assessment, then one student failing to perform means the whole team fails, whereas group based assessment makes it possible for students to pass without doing anything.

The appropriateness and authenticity of group and team based assessments depend on what is being assessed, and this brings us back to the concept of authenticity as constructive alignment. Within group work assignments there is a danger of weak students passing unnoticed onto the next stage of the course without the full set of skills, so group based assessments carry risks when used too early in undergraduate degrees. Team based assessments are as focused on what students do as individuals and contribute to the team as they are on the end product. Group work can be used as a way of reducing the number of assignments for marking, but if constructively aligned to the learning outcomes it could prove an appropriate form of authentic assessment, depending on the nature of the students and their stage of study.

Conclusion

This paper has shown that traditional examinations have been widely criticised for lacking validity and authenticity. They are primarily considered appropriate for testing knowledge and comprehension but do not authentically assess higher-level skills such as synthesis and evaluation. Despite this lack of authenticity, examinations are still widely used in the sector; programme teams find it challenging to move away from examination-based practices, constrained by institutional culture, lengthy regulatory frameworks and a lack of training.

Portfolio based assessment has been identified as more relevant for vocationally orientated qualifications and those directly related to the professions. Portfolios are used by many professionals as part of their ongoing practice, and have also been shown to reduce opportunities for plagiarism and to provide more opportunity for inclusion in assessment. Group based assessment is the subject of much debate: used traditionally, and too early in a degree, it is likely to lack authenticity, but if the concept of team based assessment is adopted and appropriate learning outcomes for team based activities are aligned, there is a stronger argument for group based assessment as an authentic activity.



References

Agazzi, A (1967) The Educational Aspects of Examinations. Strasbourg: Council for Cultural Co-operation of the Council of Europe.
Almond, R (2009) Group assessment: comparing group and individual undergraduate module marks. Assessment & Evaluation in Higher Education, 34:2, 141-148.
Baume, D (2001) A Briefing on Assessment of Portfolios. Assessment Series No. 6. LTSN.
Belbin, R M (2010) Team Roles at Work. Oxon: Taylor & Francis Ltd.
Biggs, J (2003) Teaching for Quality Learning at University: what the student does. 3rd edn. Maidenhead: Society for Research into Higher Education and Open University Press.
Bloom, B S, Krathwohl, D R & Masia, B B (1956) Taxonomy of Educational Objectives: Book 2 Affective Domain. New York: Longman.
Brown, S & Glasner, A (1999) Assessment Matters in Higher Education: choosing and using diverse approaches. Buckingham: Society for Research into Higher Education & Open University Press.
Brown, S & Knight, P (1994) Assessing Learners in Higher Education. London: Kogan Page.
Brown, S and the Assimilate project team (2012) Assimilate. Available online at http://bit.ly/ Accessed on 20.2.14.
Bull, J & McKenna, C (2004) Blueprint for Computer-Assisted Assessment. London: RoutledgeFalmer.
Burdett, J (2003) Making groups work: university students’ perceptions. International Education Journal, Vol 4, No 3, 177-191.
Burgess, T (1979) New ways to learn (Cantor Lecture). Journal of the Royal Society of Arts, Vol 127, No 5271, 7-17.
Cox, R (1967) Examinations and higher education: a survey of the literature. Higher Education Quarterly, Vol 21, Issue 3, 292-340.
Elton, L (2011) Principles for a Fair and Honest Approach to Assessing and Representing Students’ Learning and Achievement. Online available at http://78.158.56.101/archive/palatine/files/928.pdf
Fellenz, M R (2004) Using assessment to support higher level learning: the multiple choice item development assignment. Assessment & Evaluation in Higher Education, 29:6, 703-719.
Flint, N R & Johnson, B (2011) Towards Fairer University Assessment: recognizing the concerns of students. Oxon: Routledge.
Gibbs, G (2006) How assessment frames student learning. In Bryan, C & Clegg, K (Eds) Innovative Assessment in Higher Education. Oxon: Routledge.
Guard et al (2003) Portfolio Assessments. Available online at http://www.heacademy.ac.uk/resources/detail/resource_database/casestudies/cs_084 Accessed on 20.2.14.
HEA (2012) A Marked Improvement: Transforming assessment in Higher Education. Online available at: http://www.heacademy.ac.uk/assets/documents/assessment/A_Marked_Improvement.pdf Accessed on 24/1/14.
Klenowski, V (2002) Developing Portfolios for Learning and Assessment: processes and principles. London: RoutledgeFalmer.
Knight, P T (2002) Summative assessment in higher education: practices in disarray. Studies in Higher Education, 27(3), 275-286.
Lijten, A (Ed) (1990) Issues in Public Examinations. Netherlands: Lemma.
Mathews, J (1985) Examinations: A Commentary. London: George Allen and Unwin.
Miller, A H, Imrie, B W & Cox, K (1998) Student Assessment in Higher Education: a handbook for assessing performance. London: Kogan Page.
Miller, G E (1990) The assessment of clinical skills/competence/performance. Academic Medicine, Vol 65, Issue 9, 563-567.
Murphy, R (2006) Evaluating new priorities for assessment in higher education. In Bryan, C & Clegg, K (Eds) Innovative Assessment in Higher Education. London: Routledge.
Nicol, D J & Macfarlane-Dick, D (2006) Formative assessment and self-regulated learning: a model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199-218.
Plastow, N, Spiliotopoulou, G & Prior, S (2010) Group assessment at first year and final degree level: a comparative evaluation. Innovations in Education and Teaching International, 47:4, 393-403.
Price, M, Rust, C, O’Donovan, B, Handley, K & Bryant, R (2012) Assessment Literacy: the foundation for improving student learning. Oxford: Oxford Centre for Staff and Learning Development.
Quality Assurance Agency (2013) External Examiners’ Understanding and Use of Academic Standards. Available online at http://www.qaa.ac.uk/Publications/InformationAndGuidance/Pages/external-examiners-report.aspx Accessed on 20.2.14.
Race, P (1995) What has assessment done for us – and to us? In Knight, P (Ed) Assessment for Learning in Higher Education. Abingdon: RoutledgeFalmer, 61-74.
Race, P & Brown, S (1998) The Lecturer’s Toolkit: a practical guide to teaching, learning and assessment. London: Kogan Page.
Race, P (2006) The Lecturer’s Toolkit. 3rd edn. London: Routledge.
Race, P, Brown, S & Smith, B (2005) 500 Tips on Assessment. Oxon: RoutledgeFalmer.
Sambell, K, McDowell, L & Montgomery, C (2013) Assessment for Learning in Higher Education. London: Routledge.
Stefani, L, Mason, R & Pegler, C (2007) The Educational Potential of e-Portfolios: supporting personal development and reflective learning. Oxon: Routledge.
Stock, C & Trevitt, C (2010) Signifying authenticity: how valid is a portfolio approach to assessment? Online available at http://icep.ie/wp-content/uploads/2010/01/Stocks_et_al.pdf Accessed on 24/1/14.
Trevitt, C & Stocks, C (2012) Signifying authenticity in academic practice: a framework for better understanding and harnessing portfolio assessment. Assessment and Evaluation in Higher Education, Vol 37, No 2, 245-257.
Wiggins, G (1990) The Case for Authentic Assessment. Available online at: http://assessment.uconn.edu/docs/resources/ARTICLES_and_REPORTS/Grant_Wiggins_Case_for_Authentic_Assessment.pdf Accessed on 20.2.14.
Willmott, C (2014) Multimedia in Bioethics Education: examples of authentic assessment. Available online at http://www.slideshare.net/cjrw2/multimediain-bioethics-education-authentic-assessment Accessed on 20.2.14.
Yudkowsky, R (2009) Performance tests. In Downing, S & Yudkowsky, R (Eds) Assessment in Health Professions Education. London: Routledge.
