AKORANGA
THE PERIODICAL ABOUT LEARNING AND TEACHING FROM THE HIGHER EDUCATION DEVELOPMENT CENTRE
Values in Higher Education
ISSUE 7: August 2011
Assessment in Higher Education
Cover: “Poetic Assault” poster, by student Laura Cowie as part of the MART330 course assessment.
Welcome from the editorial team...

We, the Akoranga editorial team, thought that Assessment would be a great theme for this edition, perhaps particularly because it is one of the most contested aspects of higher education. We might have taken a different path to avoid the pitfalls that, with the benefit of hindsight, were inevitable. Assessment is contested because of its importance, complexity and diversity. This is a challenging edition, for the editorial team and for readers, as above all there is no easy consensus on the key question: how do we know what our students are learning?

So, we have selected here articles that address some of the concerns and interests that university teachers have, and that illustrate the range of innovation and research that we have discovered. We suggest that you start with the student voice (well, actually five student voices). How you react may influence how you interpret the other articles, as there are some interesting messages here for us all, and probably some motivation to engage with terms such as 'assessment for learning' and 'strategic approaches to learning'.

Two articles focus on assessment from a broad, or even holistic, perspective. Brigid Casey, on behalf of the School of Business, addresses elements of learning that we all think important, in that they define much about our graduates, but that are sometimes difficult to include explicitly in our assignments and examinations. Brigid's article describes the School's on-going engagement with this enduring and perplexing task. Clinton Golding suspects that many of us are interested in how our students think and dares us to consider assessing this attribute, for we do hope that our graduates think particularly well. Do they?

Two articles look in depth at the difficulties of assessing learning in particular circumstances. Keryn Pratt describes the discussions that she and her students have had about what should and should not be assessed in online learning situations, and offers her conclusions. Martyn Williamson and Tony Egan, in assessing patient care by medical students, force us to realise that sometimes we are interested in aspects of learning that may be just too difficult to assess in real-life situations, and tell us of their, and their colleagues', exploration of simulations within which to assess students.

The remaining three articles explore assessment as an integrated part of learning and teaching. Tony Zaharic encourages us to take a strategic look at assessment as he guides us through a highly integrated teaching, learning and assessment planning process. An article on PeerWise, based largely on Phil Bishop's experience with this tool, identifies learning experiences that are not assessment but have strong links with assessment. Russell Butson concludes this theme with a prediction about the future role of ePortfolios for learning, and for assessment.

Kerry Shephard, Swee Kin Loke, Ayelet Cohen and Candi Young

In this issue
3 "I don't care what your mark is, I just want you to learn something" Tony Zaharic, Biochemistry
5 PeerWise: an online multi-functional learning tool Kerry Shephard, HEDC
6 The challenges of ePortfolios: have they a place in higher education? Russell Butson, HEDC
7 Q: How do we know our graduates can do the things we say they can? A: Assurance of Learning Brigid Casey, Leader of Teaching and Learning, School of Business
8 They're participating, but are they learning? Assessing online discussions Keryn Pratt, College of Education
10 Assessment at Otago: five students' perspectives Swee Kin Loke, HEDC
12 Aligning "assessment for learning" with "assessment of learning" in health professional education: the SECO clinic Martyn Williamson and Tony Egan, Dunedin School of Medicine
14 Assessment of thinking Clinton Golding, HEDC
15 Spotlight on Teaching and Learning Colloquium (29th-30th of August): tentative programme
We welcome submissions to Akoranga from staff and students at Otago. Please contact us if you would like to contribute. We also welcome your views, feedback, or letters on any of the items featured. Drop us an e-mail at: hedc.akoranga@otago.ac.nz
Akoranga is produced by the Higher Education Development Centre (HEDC) at the University of Otago for all University staff. Printing: Southern Colour Print. This periodical is printed on recycled paper. Copyright: We welcome reprinting if permission is sought. We would like to acknowledge and thank all contributors to this newsletter.
Photo by: Andrew West
“I don’t care what your mark is, I just want you to learn something” Tony Zaharic, Biochemistry
The titular quote is one I blurted out in sheer frustration at a student who was as stubborn about not wanting to explore a topic as I was about not simply answering his question. Though initially accidental, I now use the statement as an opening gambit when students are being particularly blatant in expressing a desire to "simply be told what we need for the exam". After waiting for the student(s) concerned to pick up their jaw, I follow with a wry smile and a further comment that "if you take care of the learning, the marks will take care of themselves". Of course assessment is a powerful motivating tool for students, and this notion is one possible derivation of George E. Miller's classic adage that "assessment drives learning". But in this context, is the motivating influence of assessment a positive or a negative effector on learning? On one level, Miller's quote for me expresses everything that is bad about assessment. Assessment is a stick. When paired with shallow assessment tools, it is an ugly stick. Learn or you will fail. Now it's easy enough to understand a student's desire not to fail, but having this as your driving force for learning is not very satisfying. Perhaps even more disturbing, an "assessment drives learning" philosophy is the very reason students express the desire to "simply be told what we need for the exam" (especially in competitive courses like Health Sciences First Year). In this context, assessment narrows learning. Isn't this the antithesis of a University education? Indeed, an unintended consequence of the modern drive towards learning objectives/outcomes, especially when paired with statements to students along the lines of "the exam is based on the learning objectives" (which I confess is something I say constantly), is that it encourages a punctate approach to learning course material and incentivises students to ignore content areas not covered by a learning objective. Of course, the get-out clause here is to write learning objectives that cover all of the content, but the subliminal message of "you need to learn these things BECAUSE they are going to be assessed" remains, rather than "you should learn this because it is inherently interesting/important".
On a more positive note, the flip-side of Miller's statement means you can use assessment as a carrot. You are still tapping into the student's desire to succeed, but you are harnessing it to achieve desired outcomes: a sort of cod-liver-oil approach - you don't know if it's doing you good, and it tastes like it's doing you some bad, but you drink it anyway. We have found that using a hybrid formative-summative assessment approach for aspects of our paper that historically were given the once-over-lightly treatment by students, or were simply ignored because they were too difficult, has been a very positive way of enhancing student engagement, learning and skill acquisition.

"Fortune leaves always some door open ... to come at a remedy" - Don Quixote

BIOC 192 (Foundations of Biochemistry) is a paper in the Health Sciences First Year (HSFY) programme. Since its inception, BIOC 192 laboratories had been assessed by in-lab, pen-and-paper exit tests. In reality, these only assessed short-term recall of the main elements of the laboratory, and probably represented more of a "stick" to ensure students diligently completed the prescribed exercises (cognisant of the fact that there was the potential for the material to appear in a rapidly approaching assessment) than a constructive adjunct to learning. In support of the notion that the exit tests were actually a hindrance to learning, we had received feedback through class representatives that many students felt the exit test signified the end of their consideration of the laboratory material. We were becoming increasingly concerned that the analytical and problem-solving features of laboratory exercises (arguably some of the more important aspects) were not being explored and revised by students.

In the mid-2000s, ever-increasing class sizes required HSFY papers (for practical reasons) to reduce the number of laboratory sessions students attend (in some cases from twelve down to six laboratories). Again, we were concerned about the impact of these changes on both the early technical development of the students and their opportunity to apply the scientific method. Thus, we instigated a change in the assessment method for the laboratories that both created more time for doing science in laboratory sessions and provided a way to encourage more revision and in-depth analysis of the laboratory content.
That change was to remove the paper-based exit tests at the end of laboratories (which could consume up to 30 minutes of time that could otherwise be spent doing and analysing experiments) and instead use the Blackboard learning management system for online tests. This both created more time for hands-on work (the tests were done outside laboratory time) and allowed us to ask more in-depth questions than could reasonably have been asked in an exit-test scenario. By no means were we first movers in migrating from in-lab paper tests to online assessment, but we did incorporate some novel (in the context of HSFY) design features that we feel have been of great assistance in building capability in our students.

The tests were designed to be formative as well as summative. They are open book, students can repeat the test as often as they wish within a three-week time-frame, and because most students get at least one question wrong on their first attempt (see below), students must repeat the whole test to gain full marks. As the tests draw on questions from a pool, students inevitably have to answer at least some new questions on each test attempt.

In particular, we have been able to promote the skills required for interpreting data and doing calculations. We had been aware for some time that calculations presented a challenge both to students in BIOC 192 and to our continuing 200-level students. Though we had made practice calculations available, the carrot-and-stick approach of assessment provided an opportunity to ensure students were focusing on calculations as part of their curriculum. Prior to the introduction of an online test that included calculations, we had very few enquiries from students with regard to calculations (or lab material in general). Now we have many students (often in groups, suggesting they have been working on the problem together) coming to see us having tried the calculations, got them wrong, and seeking help with how to approach the problem. For us this has been a very positive way of engaging the students in the process required to interpret data and solve any given calculation.

Furthermore, each test includes one particularly challenging calculation (which is not in MCQ format). The difficulty is not related to the maths required, but rather arises because finding the right numbers to use (and which ones to ignore), and knowing how to use them, requires the student to integrate a number of aspects both from the laboratory in question and (usually) prior laboratories and/or principles from lecture material. This question quickly gains a reputation as "the question" for any given test (and is usually the source of the "at least one question wrong" mentioned above), and does cause some consternation amongst the class (especially in the first test). However, we have found it very valuable for promoting a more in-depth analysis of laboratory material.

"As we progress through the semester, the tenor of the questions changes from exasperation ('I can't do maths, I have no idea how to do this question') to inquiry ('do I need to use this/these number(s) because.....'). It's no longer about the maths, it's a puzzle."

These difficult questions have also provided a more general benefit in building student confidence in approaching interpretive calculations. More often than not, the last comment a student will make after we help them work through the calculation is "is that all?". The maths inevitably is trivial; it's knowing what to do with the data before you that is the challenge. And I think this change in mindset is important. As we progress through the semester, the tenor of the questions changes from exasperation ("I can't do maths, I have no idea how to do this question") to inquiry ("do I need to use this/these number(s) because....."). It's no longer about the maths, it's a puzzle.
Of course, it would be completely unfair to use this type of question in an exit-test scenario and, perhaps more importantly, unreasonable. If you are asking students to perform higher cognitive functions, you need to give them the time to do that. Herein lies the real value of using online tests in a combined formative and summative way. If we limited the tests to one or two attempts, most students, in the absence of the carrot of getting full marks, would never see the process of learning how to do these calculations through to the end (most students take two or more attempts to get full marks, usually because of "the question"). Typically, more than 95% of the class get full marks for any given assessment, and because (as discussed above) each individual test is compiled from a pool of randomly chosen questions, revision, reinforcement and embedding are promoted.

As a counterpoint to the benefits we perceive with our approach, there is always the very real issue associated with unsupervised online tests of "who is answering the questions". Is the test mark of any given student a reflection of their own work and understanding, or that of a classmate or indeed a friend or tutor? In fact we know that this is an issue in the HSFY programme. For example, some online assessments are limited to either a single or a small number of attempts. In these instances, students who are NOT doing HSFY (each paper in the programme has both HSFY and non-HSFY students) sometimes act as "sacrificial lambs" for their classmates. The non-HSFY student does the test first with their HSFY colleagues looking over their shoulders. The HSFY student is hoping to gain some advantage from knowledge gleaned by observing test questions and MCQ options before they sit their own test (remember that HSFY students are competing for entry into medicine, dentistry, etc., so the motivation for this behaviour is high). Although we only have anecdotal evidence, we feel that having unlimited attempts on our assessments largely ameliorates this issue.

However, providing unlimited attempts, when combined with the type of questions mostly used in the online assessments (MCQ, matching, etc.), does raise the spectre of students being able to get their marks by simply "pointing and clicking" for a sufficient length of time. Feedback from class representatives suggests that this in fact isn't widespread, with most students who are inclined to take this approach quickly working out that, because no two tests are ever the same (question pools), it's easier and faster just to learn the material!

Finally, since the introduction of the online tests, we have made one question in our terms test of exactly the same form as one of the difficult calculations (with the numbers changed), with the aim of rewarding those who made the effort to understand the question and also of gauging the level of understanding. We have been very pleased that the correct response rate for this question has consistently been in the 65-70% range. Whilst we have no control question to compare it to, we feel (given the high rate of incorrect first-attempt responses to these calculations in the online test) that an equivalent question given to the students in the terms test, without the formative element in the online tests, would be correctly answered at a rate barely higher than the random 25% for a four-option multiple choice question.

Despite the initial stress associated with the more difficult aspects of our online tests, we have received positive feedback from class representatives about our approach, and staff associated with our second-year papers perceive that students coming through BIOC 192 since we adopted these changes are better equipped to analyse and interpret the results of their experiments in 200-level laboratories.
PeerWise: an online multi-functional learning tool Kerry Shephard, HEDC
This article is based on conversations with Phil Bishop (Zoology, and winner of an Otago Teaching Excellence Award and a National Tertiary Teaching Excellence Award in 2010).

Have you ever looked for a learning tool that would really engage your students? Perhaps something that would help them reflect on what they are learning and test out whether they really understand the complex concepts that you have been teaching them? (Or perhaps reveal that they are misunderstanding these concepts?) What about doing all this, but also doing it collaboratively with their peers? If you were to design such a tool, you would probably make it easy for students to use, and give it some of the same characteristics as the social networking tools that they use in their personal lives. You would also want it to be easy for you to use, and maybe build into it some elements that would help you to formatively assess the progress of your students. Well, Paul Denny from the University of Auckland took your design brief and created PeerWise just for you (look at it here: http://peerwise.cs.auckland.ac.nz/). Paul is clearly a fan of the multiple-choice format, as this is used to focus learners' attention and to motivate their engagement. The key point about PeerWise is not that learners do the assessment, but that learners create the assessment. Creating successful assessments is probably one of the best learning approaches that educators have come across yet, and some university teachers have found it to be an amazing learning tool for their students. From an educational development angle, the important considerations are probably integrating the tool into the learning programme, so students don't perceive it to be an optional extra, and motivating students to have a go. If they find it useful, they probably will use it.
"Working their way through it, evaluating it and discussing it appears to make the concept clearer in the minds of students who take this approach."

Otago's own Frog Man, Phil Bishop from Zoology, has worked with colleagues to integrate PeerWise into one of Otago's largest papers (CELS191 - Cell and Molecular Biology). More than 1250 students (out of a class of 1850) voluntarily answered and evaluated more than 800 questions that had been created by their peers. During a 12-hour period leading up to the mid-semester exam, answers were being submitted at an average rate of one every 1.2 seconds, with a peak between 8.00 and 9.00 pm of 84 a minute! Over the semester the students on this paper broke the PeerWise World Record and submitted 163,635 responses, clearly highlighting the students' perception of the value of this learning tool. Two students were awarded prizes (during the last lecture), the first for the highest quality rating for their questions and the second for coming top of the leader board.

Student feedback suggests that students who are struggling to understand a concept look for a question on it that someone else has designed. Working their way through it, evaluating it and discussing it appears to make the concept clearer in the minds of students who take this approach. Phil adds that, initially, to get the students motivated to explore PeerWise, they were challenged to 'beat' the Frog Man. Competition appears to be an important motivator here. Phil also let the students know that one of the best student-generated questions would appear in the final exam. "Once the students had become familiar with PeerWise (it is very easy and intuitive to use), then it took off and they found it an extremely useful learning tool."
Photo by: Giulia Forsythe
The challenges of ePortfolios: have they a place in higher education? Russell Butson, HEDC
Some higher education institutions claim that the adoption of an ePortfolio platform for student learning has been transformational. From an educational perspective, there are a number of commonly agreed advantages for students in using an ePortfolio. These include encouraging reflective thinking and personal development planning, gathering evidence of learning outcomes and skills development, and providing support for life-long learning. The National Learning Infrastructure Initiative (2003) has developed what is generally agreed to be the conventional definition of an electronic portfolio (ePortfolio):

A digital collection of authentic and diverse evidence, drawn from a larger archive representing what a person, a community or organisation has learned over time and on which the person, community or organisation has reflected, and designed for presentation to one or more audiences for a particular rhetorical purpose.

There are, however, differences of opinion on the main purpose of ePortfolios from an assessment perspective. For some, a learning ePortfolio offers an ideal way to observe development through the various components of a project that students periodically upload, and which may include self-reflections and peer reviews. In this sense the ePortfolio is a learning record or transcript that teaching staff can include as part of an assessment programme. For others, the ePortfolio is not about observing a process, but about producing a product. In these instances the portfolio is used to create an evidence-based 'snapshot' of a student's attributes and abilities at a particular point, extending the credentialing of a student beyond the general curriculum vitae. There are also some that incorporate a mix of both approaches.

Since their inclusion within educational institutions there has been general agreement that ePortfolios are beneficial to learning. The successful use of an ePortfolio system is dependent upon students discovering the relevance of the curriculum, their response to the curriculum, and their understanding of the importance of being able to document and present evidence of their proficiency. Knowing what can be captured, and why, can be very problematic both for the user and for the designers of the ePortfolio. Unlike research or teaching portfolios that use standard forms, learning ePortfolios are contingent on curricula that can vary significantly across disciplines and years. Learning portfolios need to be configured in a way that allows tailoring to the specific structures of various courses and papers. If not, there is a danger that the ePortfolio will be perceived as a succession of assignments, rather than recognised as a complex and holistic underpinning to a student's career path. If this is true, then simply rolling out software will not suffice. Instead, the complexities involved in developing an ePortfolio environment would require considerable investment in planning and implementation if students are to value such a service.

We know that for students to find value, the ePortfolio needs to be designed in such a way as to situate students at the centre of their learning experiences, and allow them to manage and control their own records and information in order to make sense of, and map out, their academic and professional goals, experiences and outcomes. A study by Ayala (2006) found that student input into the design phase of ePortfolio implementation is uncommon, and fewer than 5% of the published reports on ePortfolios at that time reported consulting with students regarding their concerns and needs. He went on to say that "when articles did mention students, [ePortfolios] were done unto them and not by them". ePortfolios were established for students to comply with, by requiring the uploading of specific materials as part of the course. Involving students and staff in the process may well provide a fresh perspective and produce a more useful product in terms of ePortfolio development.

This article highlights some of the elements that need to be considered prior to implementing an ePortfolio programme. While interest in ePortfolios in New Zealand is still marginal, many institutions in Australia have decided to invest in ePortfolios, which will undoubtedly influence practices here. It is only a matter of time before New Zealand universities embrace ePortfolios as part of the student provision. The challenge then centres not on 'should we' or 'shouldn't we' adopt ePortfolios, but on what sort of ePortfolio environment we want to implement.
Ayala, J. (2006). Electronic portfolios for whom? EDUCAUSE Quarterly, No. 1, 12-13.
National Learning Infrastructure Initiative (2003).
Q: How do we know our graduates can do the things we say they can? A: Assurance of Learning Brigid Casey, Leader of Teaching and Learning, School of Business
At the School of Business we are implementing an assurance of learning (AoL) process that aims to improve student learning and to provide evidence that students are achieving the learning goals communicated in our graduate profiles. AoL is a high priority for the School and ultimately everyone's responsibility. School-wide, the process is coordinated by a Leader of Teaching and Learning, who works closely with the Undergraduate and Postgraduate Advisory Groups. Champions in every department provide support for AoL and represent their discipline in on-going development. Faculty and support staff all have roles in putting the process into practice or in facilitating it.

Assurance of learning begins with the University of Otago's mission statement, strategy and graduate profile. The principles in these statements are reflected in the School of Business's strategy and the graduate profiles for each of our degree programmes. From the graduate profiles, key learning goals are identified. The next step involves faculty identifying courses across the programme where these attributes are taught and assessed. This curriculum mapping exercise is valuable for checking that graduate attributes are adequately covered in course and programme activities. For each learning goal, samples of assessments from across the programme are collected and student performance is evaluated using rubrics developed by faculty. It is important, throughout the process, to maintain a programme focus; this is not an evaluation of individual teacher effectiveness. Analysing the assessment data, and reflecting on practice, leads to lively faculty discussions resulting in action to improve student learning at multiple levels, including programme and course curricula, student support and extra-curricular activities, and the learning environment.

In 2010, for example, we focused on written communication and critical thinking in the BCom programme. The criteria for evaluating written communication included the basics (grammar, spelling, and style appropriate to the situation) and document structure and flow. Results indicated that the majority of students were competent, but basic writing skills were identified as the weakest area for some students. Faculty generated a large number of ideas for improving the teaching and learning of written communication, from reinforcing our expectations of the students to the introduction of processes to teach course-embedded academic study skills across our core 100-level papers. Some of these changes have been accepted and will be piloted in 2012.

"Closing the loop" on critical thinking has not been so straightforward. While critical thinking is implicit in what we encourage as university educators, it can be difficult to explain exactly how we teach this attribute. Rather than asking 'what can we do to improve critical thinking?', we needed to step back and ask 'where and how do students learn critical thinking from day one at university?' to get the conversation flowing. At first year, for example, tutorial questions and case studies introduce the concept, and critical thinking is modelled in lectures. As students progress through their degree they are exposed to more examples and practice, and by their final year they have many opportunities to demonstrate their ability in assessments. As faculty enthusiastically debated the finer points of critical versus logical thinking versus problem-solving, they shared best practice and generated innovative ideas for teaching. Again, some of these will be piloted in 2012. The next step will be a workshop for faculty on assessing for critical thinking, applied to our business discipline contexts. We hope that these changes will progressively improve the development of these attributes by our students, and we are confident that our AoL sampling processes will help us to determine whether we are being successful.

The challenge of building understanding and gaining support for the AoL process is not unique to our School. However, it is not difficult to engage faculty in discussions of teaching and learning, and there is plenty of enthusiasm for improvement. The key to developing a robust and systematic assurance of learning process is the contribution and support of an extensive network of faculty, support staff, students, and key people from across the wider University.
They’re participating, but are they learning? Assessing online discussions Keryn Pratt, College of Education
Background

The College of Education has offered distance papers since 1994, and online papers since 1997, largely at the postgraduate level. Based on work regarding effective online teaching by those involved in distance education at the College of Education, as well as the international distance-learning literature, our courses are designed to be student-centred and to promote the development of a community of learning that involves cognitive, teaching and social presence (see Garrison & Anderson, 2003). We also seek to promote interaction between the learners, the content, other learners, and the instructor (Anderson, 2004). The approach is in line with a constructivist viewpoint, which emphasises the roles of social interaction and authentic contexts in learning (see Woo & Reeves, 2007). In line with these principles, our distance courses are designed around online discussions, which play a central role in students' learning. Typically, these asynchronous discussions run for two weeks, during which students are asked to complete some readings and then respond, based on their own experiences and the readings, to questions posed by the instructor. A discussion then ensues, with the instructor and students responding to one another's comments.

This approach to learning, however, raises an important question: how do you know if the students are learning? The first step to answering this question is similar to that involved with any form of assessment: you need to identify what the aim of the activity is. A common learning aim of online discussions is that of knowledge building or construction (e.g. Chai & Tan, 2009; Engstrom, Santo, & Yost, 2008; Skinner, 2007). This has been operationalised in a variety of ways, with the two most common being Garrison, Anderson and Archer's (2000) cognitive presence, which forms part of their Community of Inquiry model, and Gunawardena, Lowe and Anderson's (1997) Interaction Analysis Model.

Garrison et al.'s (2000) Community of Inquiry model represents learning in an online environment. It consists of three overlapping components:

Cognitive presence: "the extent to which the participants in any particular configuration of a community of inquiry are able to construct meaning through sustained communication" (Garrison et al., 2000, p. 89)

Social presence: "The ability of participants . . . to project their personal characteristics into the community, thereby presenting themselves . . . as 'real people'" (Garrison et al., 2000, p. 89)

Teaching presence: "The design, facilitation and direction of cognitive and social processes for the purpose of realizing personally meaningful and educationally worthwhile learning outcomes" (Garrison & Anderson, 2003, p. 29)

Under this model, the degree to which knowledge is constructed is recognised by the degree of cognitive presence. Garrison and Anderson (2003) argue that this occurs in a four-step process. Initially, a problem or question is identified, and constitutes the 'triggering event'. This then leads to the second stage of 'exploration', where information is searched for and exchanged. Next, this information is 'integrated', with ideas being connected to one another and answers created. Finally, this information is used to provide a 'resolution' to the initial problem by applying the new ideas and determining their level of success.

In contrast to the four-stage cognitive presence approach to knowledge construction, Gunawardena et al.'s (1997) Interaction Analysis Model identifies five phases of knowledge construction in online discussions, although it acknowledges that not all phases will always occur for all groups:

Phase 1: Sharing/comparison of information
Phase 2: The discovery and exploration of dissonance or inconsistency among ideas, concepts, or statements
Phase 3: Negotiation of meaning/co-construction of knowledge
Phase 4: Testing and modification of proposed synthesis or construction
Phase 5: Agreement statement(s)/applications of newly-constructed meaning (p. 414)
Parallels can be noted between the models, with both involving sharing of ideas, identifying connections between the information, and the creation of new knowledge. Both models have also been successfully used to identify examples of knowledge construction within online courses. Is one of these models, then, an appropriate tool to use to assess learning in online discussions?
"In my courses, I want the class to form a sense of community through their online discussions, so contributing to the discussions and doing so in a timely fashion is important."

In one of my online courses, which looked at knowledge construction as part of the wider focus on Online Learning Communities, my students and I discussed these models and informally applied them to our own discussions. We noted that, based on these models, learning was not occurring, as knowledge was not, in general, being constructed. In most cases discussions halted either at Garrison and Anderson's (2003) exploration or integration stages, or in Gunawardena et al.'s (1997) Phases 2 or 3. Despite this, however, the students felt very strongly that they were learning, but they were not always articulating this learning in the online discussion. They felt this learning was, however, being applied in their assignments and in their own practice, and that deeming that learning was not occurring based on the online discussions was doing them a disservice.

This brings us back to the central issue: what is the aim of the activity? If it is to show learning, as evidenced through knowledge construction, then applying one of these models would appear to be an appropriate assessment tool. However, if the aim of the online discussions is to work in conjunction with the other activities that comprise the course, such as the readings and the assignments, then this may not be appropriate. If the online discussions are to act more in the role of lectures and tutorials, which provide the opportunity for learning to occur rather than acting as a measure of this, then the degree to which students are learning may be better assessed in terms of well-designed assignments, rather than through their ability to show knowledge construction in online discussions.

This raises the question of whether online discussions should be assessed at all, if they are not to be used to measure learning. After all, if you are not measuring learning, then are you measuring participation, and should participation be measured? I have distance learning colleagues who would argue no. However, I would argue there is still a role for assessing online discussions, for several reasons. The first is a very pragmatic reason: if no marks are awarded for participation in online discussions, students tend not to participate (Rovai, 2003). However, simply rewarding students for the number of comments they contribute can result in posts that contain very little thought and that contribute little to the discussion (Garrison & Anderson, 2003). It would seem that the necessary compromise is to provide some marks for online discussions, but for the assessment criteria to be related to the purpose of the discussions.

In my courses, I want the class to form a sense of community through their online discussions, so contributing to the discussions and doing so in a timely fashion is important. In general, this means that I ask my students to contribute "a minimum of two questions or comments each week that the discussion is running". Variations may include requiring one question and one response to a classmate's comment. However, I do not want their posts to be meaningless or surface responses, but rather to show evidence that students have been engaging with the readings and topic under consideration, and to at least have the potential to contribute towards knowledge being constructed. To encourage this, my students are also told that "The questions or comments need to be designed to facilitate the class discussion and should reflect an in-depth understanding of the topic under discussion and/or the required readings". These requirements are spelled out in grading criteria that explain how I will be determining whether or not their comments meet these requirements.

This is just one way in which this can be done. Another example would be to use instructions similar to those given by Ragan (2007), which inform students of when they need to contribute their comment, the maximum length, and that the quality rather than quantity will determine the grade. The quality of the comment is to be determined based on the student's understanding, synthesis, and use of previous postings. While there are many examples of grading criteria and rubrics that can be used for assessing online discussions, the important point to remember is to first identify the aim of the online discussions, and then to determine the most effective means of assessing, or evaluating, whether or not that aim has been achieved.
Anderson, T. (2004). Toward a theory of online learning. In T. Anderson & F. Elloumi (Eds.), Theory and practice of online learning (pp. 33-60). Athabasca, Canada: Athabasca University. Retrieved December 8, 2006, from http://cde.athabascau.ca/online_book/ch2.html
Chai, C.-S., & Tan, S.-C. (2009). Professional development of teachers for computer-supported collaborative learning: A knowledge-building approach. Teachers College Record, 111(5), 1296-1327.
Engstrom, M. E., Santo, S. A., & Yost, R. M. (2008). Knowledge building in an online cohort. The Quarterly Review of Distance Education, 9(2), 151-167.
Garrison, D. R., & Anderson, T. (2003). E-learning in the 21st century: A framework for research and practice. New York: Routledge Falmer.
Garrison, D. R., Anderson, T., & Archer, W. (2000). Critical inquiry in a text-based environment: Computer conferencing in higher education. The Internet and Higher Education, 2(2-3), 87-105.
Gunawardena, C. N., Lowe, C. A., & Anderson, T. (1997). Analysis of a global online debate and the development of an interaction analysis model for examining social construction of knowledge in computer conferencing. Journal of Educational Computing Research, 17(4), 397-431.
Ragan, L. (2007, August 21). Best practices in online teaching - during teaching - assess messages in online discussions. Retrieved from the Connexions Web site: http://cnx.org/content/m15035/1.1/
Rovai, A. P. (2003). Strategies for grading online discussions: Effects on discussions and classroom community in Internet-based university courses. Journal of Computing in Higher Education, 15(1), 89-107.
Skinner, E. (2007). Building knowledge and community through online discussion. Journal of Geography in Higher Education, 31(3), 381-391.
Woo, Y., & Reeves, T. C. (2007). Meaningful interaction in web-based learning: A social constructivist interpretation. Internet and Higher Education, 10, 15-25.
Photo by: Sean MacEntee
Assessment at Otago: five students’ perspectives Swee Kin Loke, HEDC
Akoranga spoke to five students who are also leaders of Peer Assisted Study Sessions*. They range from 2nd to 4th year students and all were keen to preserve their anonymity, so we use pseudonyms below. We talked about the relevance, transparency, and significance of assessment practices at Otago.

Relevance

When questioned on the relevance of current assessment practices at Otago, the five students frequently used their potential workplaces as a benchmark.

Akoranga: What about the short answers? Do they test the right things?

Karen: For the Law paper, for the opinions, I think that they test you on the right stuff. Cos that is kind of what you would be doing in a workplace scenario. Well, I'm assuming that that's what we have to do.

Akoranga: So this is like a case study?

Karen: Well, we learn all the material, like all the cases, and they give you a random scenario. You have to apply all the cases to what they've given you. So it's kind of like a real life scenario.

Indeed, the desire to learn workplace-related skills is so strong that the students felt that placements should be compulsory in their final year.

Karen: My flatmate, she's a Physio, and they started doing stuff last year, like having placements. But by the end of my Law degree, I won't get a placement anywhere, and that's pretty bad.

Akoranga: So what do you think are the differences between placements and, let's say, writing an essay as an assessment mode?

Karen: I think it's good. Cos you're not going to be hand-fed stuff at work. Like in your workplace, you're going to actually have to go out and find things, not so much being spoon-fed.

Sophia: Like for Law, by the time we graduate, we've got to do five research assignments. I think it's really important cos we're going to have to do that.

Akoranga: Would you have preferred if we had placements as a form of assessment?

Karen: Yeah. Definitely.

Fiona: So that's where I think it's lacking. I think that's what all departments should really do in the last year.

Sophia: Yeah, like go into a firm or –

Fiona: Yeah, even if you were just to go for a couple of weeks, even just to see what it's about.

Karen: Well, I think definitely you should have a little bit of practical experience. But then, it's like that across all universities in New Zealand, so we're all probably going to be in the same boat.

One student mentioned having to carry out a research project as a form of assessment. We took the opportunity to seek the students' views on the relevance of assessment of research for them.

Akoranga: What do you think of your being assessed on how well you do research?

Fiona: I think research skills are important. But not necessarily doing research work like – fully-intensive.

Akoranga: Should all students learn to become researchers?

Justin: I think that should be a preference. That's why you get to choose to do Honours, for example. So I don't think all students would be interested in doing research.

Akoranga: So all students have to learn some research skills?

Justin: At least to read it probably. Understand research literature.

Transparency

We also asked the students how fair current assessment practices were. They unanimously agreed that transparent criteria were crucial in designing assessment.
Fiona: There was one person I was flatting with who did this paper before me. He had a different lecturer, and they did essays and stuff. Then when I did it the next year, it was a new lecturer, and the assignments that I had to do were like sculpt a stadium. And it was very subjective on how it was assessed. I mean, the only criterion we got was that it had to be of a certain dimension, but that was it. My flatmate could pass the paper real well because he was told exactly what he had to do, whereas I was kind of in the balance.

Paul: It also helps to know what level of detail you need to know. Quite often, if you've got one exam at the end of the semester which counts for a lot, you don't know how to gauge how much you should know and how in-depth you should go.

Fiona: So I think that's the biggest thing, really. It's just making clear exactly what they're assessing, exactly what they want, so that you know what you have to go for.

Significance

To conclude our conversation, we sought to find out the importance of assessment in the students' academic life.

Akoranga: How true is the statement "assessment drives learning"?

Sophia: It's the ultimate goal, isn't it? If you don't pass the papers, then –

Fiona: It doesn't mean anything.

Paul: It'd be great to think that we're so interested in learning that we just do it off our own bat. (laughter) I mean, we do to some extent, but if there's not that motivating factor of an exam or a test, you're not going to put the same focus.

Akoranga: How many percent would that be?

Sophia: I think there'd be like, you know, 75% of people? There's like 25% who like learning, probably going to be here for years and do Master's and PhDs. But the rest of us really aren't going to do that, eh? (laughter)

Fiona: Yeah, I think you're right. Most people are just here to get their degrees so they can get somewhere where they'd be applying this. They're not really - too interested.

Akoranga: So even if you had an interesting paper, like that anatomy paper that you were talking about, would you go the extra mile, knowing that this part is not assessed, but you would like to learn it?

Justin: I mean, I don't know, I'm probably the 25% that you guys were talking about. I'm very into learning. I don't really focus on exams or marks: I just see that as a bonus. I do extra readings and all that, if there's something I don't understand. And that just makes things a little easier as well when it comes to exams. I'm not running around just trying to cram things into my head. Because I've understood it, I just do a little bit of revision and I'm OK.

Karen: That is how everyone should be like. (laughter)

Fiona: The only extra stuff that I'd do is if it's something that I personally find interesting. Like if there's a medical condition that someone that I know has got.

Akoranga: So for the majority of students, is it fair to say that you do just enough?

Fiona: Yep.

Akoranga: To get by.

Fiona: Yep.

Justin: For me, it's too expensive to think "I am just here to pass." Especially that I'm an international student. So I'm paying a lot for each paper. So it'd just be crazy for me to go "Oh I'm just gonna pass." So that sort of motivates me as well.

Paul: Depends on your personal goals. What you want.

And we learned once again how much emphasis the students placed on gaining employment as an outcome of their degrees.

Akoranga: So what if there was no assessment at all?

Sophia: Why would you pay money? (laughter)

Fiona: Yeah, why would you come?

Justin: Just stay at home and study.

Sophia: To learn, yeah, but you don't have anything to show for it.

Fiona: You wouldn't want to pay this much money for like - You'd want something to come out of it. Cos otherwise, in a way, what's the point in doing a degree? You can learn from it, but you can't say to people, "Look, I have learned. So hire me." (laughs)
*Peer Assisted Study Sessions (or PASS) are organised by the Student Learning Centre (http://hedc.otago.ac.nz/hedc/sld/PASS-Peer-Assisted-Study-Sessions.html).
Aligning “assessment for learning” with “assessment of learning” in health professional education: the SECO clinic Martyn Williamson and Tony Egan, Dunedin School of Medicine
In 2002, faced with the prospect of preparing medical students at Dunedin School of Medicine (DSM) for interacting with patients in rural primary care settings, Trevor Walker and Martyn Williamson went back to the drawing board. Their response to this challenge was to create the SECO clinic. Walker and Williamson had both spent many years in successful rural general practices. The students they had to prepare for placement were fifth-year undergraduates who already had hospital-based experiences in surgery, medicine and psychiatry, and some limited exposure to urban general practice. However, the challenge of rural placements for undergraduate medical students was new to staff and students. Students would be distributed around the lower South Island in small rural centres, distant from the human and physical resources available at DSM. They would have to integrate into the busy practices of rural GPs, other local health professionals and the community into which they were placed. Learning opportunities abounded. In small communities, communication and integration among health professionals tended to be both effective and efficient. However, being distant from a base hospital affected practice. For example, serious problems required rapid assessment and effective interim management, while clinical investigations could be less accessible. The learning opportunities were there, but how could they be optimised? What should students learn about (and from) consultations with
patients? How could this learning be facilitated and evaluated?

The first step was to develop an educational activity that realistically simulated the demands and environment of clinical general practice. A requirement of this was the need for outcome-based performance measures providing feedback for students. The next step was to define these outcome-based standards for the consultation; these were expressed in terms of safety and effectiveness. In addition, the clinic had to be financially and resource efficient. The SECO clinic was the solution. The acronym expands to "Safe and Effective Clinical Outcomes". It represents the end point of a successful consultation: the patient receives safe and effective management of their concerns. The SECO clinic was devised as a means whereby students could learn how to achieve this goal by participating in simulated consultations without compromising the well-being of a patient.

The clinics are held in a clinical skills laboratory, which includes a waiting room, a reception desk, and ten cubicles equipped with a computer and basic diagnostic equipment. The waiting room has a display of patient information pamphlets. The computers provide access to useful websites (e.g. Dynamed). Students can bring textbooks with them. They can use portable electronic devices and any other useful resources. They can use a phone to speak to a member of faculty - an experienced doctor who can give advice to the student based on the student's representation of the patient (rather than anything the staff member might know of that case). He or she takes the role of a helpful colleague who has the desire to see that SECO is achieved.

To populate the clinic, we developed 70 cases, allowing multiple attendances per student. Each case is based on one or more real patients seen by an experienced GP. The safe and effective clinical outcomes are carefully determined for that particular scenario and patient according to agreed guidelines. Each scenario contains a detailed description of the patient, their background, their personality, and their medical problems. The patient presentation is explicitly laid out, including key responses to student questions and responses or actions likely to arise during the consultation. It is important for the success of the clinic that the patients seem "real" both to the actors playing the role and to the students. In order to achieve a high degree of authenticity we developed guidelines to support experienced clinicians in scenario construction and asked them to base the scenarios on specific clinical situations. Safe outcomes in our scenarios are those which result in no increased risk of harm in the short or long term to either the patient or the doctor. Effective outcomes (the best possible for the patient in that particular scenario) are evidence-based, patient-centred, context-sensitive and resource-efficient. Our emphasis is on the
students attaining both for their patients. How long the student takes, or how they go about it, is less important than getting the required result. This allows for student individuality and patient specificity.

The SECO approach is a significant extension of more traditional uses of simulation. The differences include the use of actors (not mannequins) as simulated patients; an emphasis on clinical outcome; no direct observation of the consultation; flexibility in the use of time; access to external resources, including senior colleagues; and specific outcome-related feedback. The feedback is provided in the debrief (of approximately one hour), when students can compare the predetermined outcomes defined for each scenario with the clinical notes they made and the case-specific questionnaire completed by the simulated patient while still in role. Participants can be assessed by faculty or they can self-assess.

"The SECO clinic was devised as a means whereby students could learn how to achieve this goal by participating in simulated consultations without compromising the well-being of a patient."

While the SECO clinic uses assessment (of outcomes) to facilitate learning, it is also used to assess learning. Four clinics comprise formative assessment opportunities for students and a fifth clinic is used for summative assessment. The latter differs from most summative assessment in undergraduate medical education in that it requires a complete performance from each student in each consultation. Most assessments focus on specific knowledge or specific skills without requiring students to integrate these in a consultation that takes into account the patient's context, goals and values, and requires the student to put all that information together with signs, symptoms and other diagnostic information. It also requires that the student produces a management plan that is acceptable to the patient and likely to achieve the pre-defined outcomes of safety and effectiveness. For assessment purposes in the SECO clinic, performance can be thought of as the ability to achieve SECO (the priority), and proficiency, which is a function of the time and resources used to do so.

The closest form of summative assessment to this is the 'long case' (involving a single real patient). This was used widely for decades before being passed over in favour of other forms of assessment, which avoided its problems of case-specificity and low reliability but lost its authenticity and completeness. To a certain extent, in contrast to the long case, the SECO clinic mitigates the problem of case specificity by allowing the student access to resources, including telephone advice from a senior colleague (in other words, case specificity may impact more on proficiency than on outcomes), and the problem of low reliability by using the same cases over time to create a benchmark of scores unique to each case to allow comparisons.

It is taking the consultation from beginning to end that is most valued by the students. Getting detailed feedback on performance, based on patient outcomes (including comment from the patient), maximises the potential for learning. Analysis of the operation of the clinic itself, and of the data generated over time by increasing numbers of users, opens up many possibilities for interesting research. Examples include exploring students' decision-making capabilities, the factors which influence whether or not SECO is achieved, the use of the clinic as a professional development tool for picking out areas of clinical strength and weakness, the effect of the outcome focus on students' subsequent practising habits, and their awareness of the importance of outcomes for the patient and how they can influence these. It is possible that important insights into practitioner behaviour that affects outcomes for patients may be gained. Crucial to this research is the creation of a secure and reliable database. This is required to facilitate the maintenance of cases, to hold records of performance, and to facilitate detailed analysis of the data by patient, clinician and groups of clinicians. Ideally it would be web-based to allow for collaborative research and educational development initiatives with other groups and institutions. Currently this is the major hurdle we face for future development.

The clinic is highly valued by the students, as consistent feedback over the last eight years shows. In 2010 we asked fifth-year students to write a statement on what they would take away from their SECO clinic experience that would enhance their future practice as safe and effective clinicians. Here are a few representative quotes:
“I don’t remember anything that I read in books but I remember things that I see on people and what treatment those people were given. The SECO clinics are just as good for me as seeing real patients, in some ways better because I am forced to make a decision and then can read about it afterwards... All runs should have them.”

“Being able to work through a problem from the beginning to the end makes learning a lot more ‘realistic’ and I found that I could remember things a lot easier after solving the problem myself. And to get feedback straight away on what was done well or not is great.”

“It was the first chance to play the full role of a doctor and made me think about taking a good history, what relevant exams to do and how to give explanation to a patient and prescribe medications. A safe environment and resources available so could fulfil the whole role of a doctor. It exposed me to some common problems and I learnt a lot about issues that could arise and what needs to be covered to ensure patient safety… I didn’t ring a senior colleague during clinics but looking back, should have made more use of it; still feel like I need more knowledge on this.”

“The biggest thing I will take away from SECO is the safety part. More than anything else in med so far, SECO has helped me realise how to be safe the best.”
The development of the SECO clinic was supported by a CALT Teaching Development Grant 2002.

A video of the clinic in operation is available at: http://dnmeds.otago.ac.nz/departments/gp/images/SECO.mov
Further information is available at: http://dnmeds.otago.ac.nz/departments/gp/teaching/seco.html

The following people are currently involved in the SECO project: Dr Martyn Williamson, Mr Tony Egan, Dr Jim Ross, Dr Emma Storr, Dr Kristin Kenrick, Ms Frances Dawson. Enquiries to Mr Tony Egan (tony.egan@otago.ac.nz) or Dr Martyn Williamson (martyn.williamson@otago.ac.nz).

Stiggins, R. J. Assessment Crisis: The Absence of Assessment FOR Learning. The Phi Delta Kappan. 2002; 83(10): 758-65.
Photo by: Michael Kellett
Assessment of Thinking
Clinton Golding, HEDC
One important educational outcome is that our students develop their thinking. Graduates of Otago University are meant to be critical thinkers. Similarly, we want our students to learn to think like physicists, doctors, teachers, sociologists, economists, and so on. Does this imply we should assess students on their thinking? And if we assess their thinking, does this mean that they could do well in their exams and assignments, and show a mastery of the content of their subjects, yet fail because of a lack of thinking?

Perhaps this is an unfair question. In many cases we set assignments that allow us to assess both thinking and content: for example, a task where our students have to apply what they know to solve a problem, or where they have to critically evaluate a position, theory or text. But not all the tasks we set our students require critical or disciplinary thinking. Many assessment tasks can be completed successfully with simply a good grasp of the content that was taught. How can we tell whether our students are just giving us the answers we were looking for – including using words like ‘critically evaluate’ in the appropriate places – or whether they are engaging in critical or disciplinary thinking? A separate assessment of thinking might be useful for dealing with this sort of issue.

What we want to avoid is situations where our students have a wealth of information about our subject, but are unable to evaluate or apply this information. For example, they know what photosynthesis is, but they do not seem able to predict what will happen in a complex situation involving plants and variable lighting conditions. They know that they are meant to check blood pressure, but they cannot explain why this is important, or judge when to do it. They are Trivial Pursuit smart, but the deeper thinking is missing.

So assessing thinking might sometimes be a good idea, but how might we assess it? Given that thinking is seemingly invisible and silent, how can we tell whether our students are engaging in useful thinking for the situation, context or task, and how well they are thinking? Although it might be argued that we cannot assess thinking, we do frequently judge some colleagues and students to be ‘good thinkers’. How do we make these judgements? Basically, we judge them to be good thinkers because they do things that poor thinkers do not do – they ask questions, they give reasons, they consider alternatives, and so on. By making similar judgements we can assess thinking on the basis of whether our students do the things that critical thinkers do, the things that historical thinkers do, the things that scientific thinkers do, and so forth. Put the other way around, if we can identify exactly what a scientific thinker, or a historical thinker, does, we can assess to what extent our students think in the same ways.

Yet identifying what an expert thinker does is a problem in its own right. We have become so proficient at the thinking involved in our subject areas that we are no longer consciously aware of what we do. We might say, “Well, I identify and then solve a problem.” Yes, but how can we make explicit exactly what this involves so we can
ask our students to do the same thing, and construct clear criteria to assess whether they have done so? “I give a critical evaluation of a text based on a theoretical framework.” OK, but our students have no idea what this means or how they should do it, and it is too abstract for us to assess.
So assessment of thinking starts with reverse engineering, or defamiliarising, what we do expertly but tacitly: how exactly do you think through the problems, questions and issues in your subject area? What you say, do, write and ask can be isolated into criteria for assessing thinking – do our students say, do, write and ask the same sorts of things or not? To what extent do they do this independently? How frequently, and in how much depth?