Akoranga issue 8 (November 2012)


AKORANGA The Periodical about learning and teaching

ISSUE 08 2012: Evaluation



03  Editorial (Kerry Shephard, HEDC)

04  Senate’s recommendations on course and teaching evaluation (Sarah Stein, HEDC)

05  How should we evaluate the impact of e-interventions? (Swee-Kin Loke, HEDC)

08  Using student data to improve teaching (Clinton Golding, HEDC)

11  Student evaluations: closing the loop (Lynley Deaker, Sarah Stein, Jo Kennedy, HEDC)

12  1000 words to change an institution? (Kerry Shephard, HEDC)

15  Interpreting the results from a student feedback survey (David Fletcher, Department of Mathematics and Statistics)

16  Evaluations: perspectives from ‘end-users’ (Angela McLean, HEDC)

18  No, we don’t run popularity polls! (Jo Kennedy, HEDC)

21  QAU learning and teaching (Romain Mirosa, Quality Advancement Unit)

22  Beyond centralised student evaluation (Swee-Kin Loke, HEDC)

23  Akoranga Evaluation

This document is printed on an environmentally responsible paper produced using FSC® Certified 100% Post Consumer recycled, Process Chlorine Free (PCF) pulp.



Akoranga Editorial Kerry Shephard

Oh dear, this edition of Akoranga is very late. The editorial team decided some time ago that its theme would be evaluation of teaching. Late in 2010, Senate proposed some substantial changes to evaluation processes at the University and asked HEDC to put these into effect. HEDC asked for help and a working group was established, with representatives from each of the divisions and from the Department of Mathematics and Statistics, to recommend how Senate’s proposals might operate. Senate was concerned about our possible overdependence on the individual teaching feedback survey and the lack of use of the course feedback survey. Senate was also keen to ensure that these feedback instruments are used appropriately and constructively for developmental purposes by teachers and by departments, and to allow evidence-based decisions on the quality of our teaching.

I chaired the working group, and somewhat optimistically thought it would all be over by Christmas (that’s last Christmas). It has been a much longer task for all involved. We thought that we would find some great synergy between this edition of Akoranga and the recommendations of Senate and the deliberations of the working group. Indeed we have, but Akoranga has had to work to the same timescales as the working group, hence the delay. The editorial team hopes that the wait has been worthwhile.

Within this edition we have provided a synopsis of Senate’s recommendations on the use of student feedback and some insights into the task of the working group. We have given colleagues within HEDC’s Evaluations Unit (Jo Kennedy) and the Quality Advancement Unit (Romain Mirosa) free rein to describe their contributions to the evaluation of teaching. Sarah Stein, Lynley Deaker and Jo Kennedy have described some new research in this area, emphasising a primary rationale for evaluating teaching – to get better at it. Clinton Golding tells us how he uses student feedback to help to develop his teaching, David Fletcher provides a statistician’s perspective on interpreting quantitative student feedback, and Swee-Kin Loke addresses the particular problems of developing teaching that involves the use of computers. Kerry Shephard asks if we could use our extensive evaluation processes to address some broader questions, such as how well our students achieve the graduate attributes that we plan for them. Angela McLean has added some student perspectives to keep us on our toes, and, to practise what we preach, we end this edition with a request for readers to help us to evaluate the impact that Akoranga may be having on learning and teaching at the University of Otago.

Editor: Kerry Shephard

Sub Editor: Candi Young & Swee-Kin Loke

Design: Gala Hesson



Senate’s recommendations on Course and Teaching Evaluation Sarah Stein

In 2010, Senate discussed the approaches that the University uses to evaluate its teaching and its courses and agreed on a set of recommendations that required HEDC to investigate, recommend and initiate changes. The Senate’s recommendations were:

Recommendation 1: That Course Evaluation Questionnaires normally be used by all departments or programmes on a three-year cycle. Processes will be established to monitor this requirement, which should be part of regular evaluation policy.

Recommendation 2: That a set of indicators be used as guidelines to support and develop a strong teaching culture within a department.

Recommendation 3: That when Course Evaluation Questionnaire reports are submitted as evidence for confirmation, promotion or appraisal processes, they may be accompanied by a context form that includes information on the staff member’s contribution to the course and issues outside the staff member’s control.

Recommendation 4: That HEDC investigates the desirability and feasibility of having one evaluation system that is fully customisable and is available in both hard copy and online formats.

Recommendation 5: That HEDC continues to explore evaluation policy and practice at Otago and monitors the impact of any changes.


In mid-2011, HEDC established a Working Group, with membership drawn from across the University, to deliberate on Senate’s recommendations and to prepare a series of plans to put the recommended changes into effect. The Group’s advice will underpin and inform HEDC’s response to the Senate recommendations. That response will be led by HEDC’s Evaluation Research and Development group - a core group in HEDC that focusses on running an efficient student evaluation system for the University, as well as providing professional development support, raising awareness about evaluation and engaging in research in, through and about evaluation.

I have not been a member of the Working Group, but am aware of the myriad issues that can arise when the topic of student evaluations is raised. I do know that the Working Group has been discussing many of these issues in their consideration of the tasks they have been set. As coordinator of HEDC’s Evaluation Research and Development Group, I offer here some comments on issues concerning student evaluation.

I have heard many people say that student evaluation systems are flawed because the questionnaires only elicit information about student experience, and do not capture all aspects of a whole course or teaching. Student evaluation questionnaires, such as the ones used in our (and other institutions’) centralised student evaluation system, do not stand alone, so they do not pretend to be able to gather everything one needs to know about teaching or courses. They ask students to provide their responses about their experiences of teaching and the courses with which they have been associated.


The student view, gathered through the student evaluation questionnaires, is just one type of data gathered from one (important) group of stakeholders. Knowing how students perceive our courses and our teaching allows us to understand more about the impact we are having, which we need to know if we are serious about developing as educators. To gain a fuller, rounder understanding of our teaching and courses, we need to gather data from other sources to interpret in conjunction with the data we gather from our students. Evaluation data from the student evaluation questionnaires should be seen as the start of the conversation, not the end.

Monitoring (gathering data) and development (interpreting and acting on the data) are part of the same process; and one without the other is an indication of a flawed system. Unfortunately, because so much emphasis is placed upon the monitoring aspect of student evaluation data in this University for the confirmation, promotion and annual performance review processes, there is a tendency for student evaluation to be seen as a summative exercise that is essentially punitive, private and individual. We have some wonderful guidelines about student evaluation and the use of the data we gather at this University. The recent research project I have been undertaking has shown me how far we are in advance of many tertiary institutions in this regard. It is a shame that we do not capitalise upon the guidance they provide us.

There are concerns voiced by many that the Individual Teacher questionnaire isolates the teacher from the whole curriculum in an artificial way, and, because the questionnaire scores are used in confirmation and promotion processes, issues of confidentiality and secrecy surround the results. On the other hand, there are standard questions in our Individual Teacher questionnaire, and this means that comparisons about teaching effectiveness can be made over time. Departments, Divisions and the Institution can use and share the aggregated data from the standard questions in the questionnaires to monitor, report and respond to the status and changes in teaching.

The current Course Evaluation questionnaire does not have any standard questions, so supports very well the need for teachers to focus on specific aspects of their courses. Because results are shared with HoDs and Course Coordinators, there are some bases for the results of the University’s Course Evaluation questionnaires to be part of a collaborative and collegial process of ongoing and active review and development. However, without deliberate decisions about including relevant questions each time an evaluation is run, these evaluations provide no opportunity for programmes, Departments, Divisions or the Institution to know, over time, how well courses are performing.

Individual teachers, Departments, Divisions and the whole institution should want to gather and see the data about courses and teaching. Each group has a right and a responsibility to be gathering such data to demonstrate and understand the status and process of the development of teaching and courses over time. Without this data, engagement in development and improvement cannot happen.

I hope that one of the Working Group’s outcomes will be recommendations that could form core elements to be included in a University policy about evaluation (perhaps as it relates to Recommendations 2 and 5, above). Such a policy would define and describe student evaluation processes and practices and their role in enhancing teaching and learning. I hope, in the light of research as well as experience of the issues at Otago, that such a policy would separate the auditing/quality assurance process from the developmental process, while simultaneously explaining and describing how both monitoring quality and active engagement in development are parts of the same improvement process. I also hope that student evaluation is seen as a key aspect of engagement as a community in the ongoing pursuit of better teaching, better courses and enriched student learning. I hope that within that policy the notion of “closing the loop” at individual teacher, programme, Department, Division and whole institution levels will feature very strongly. The policy should provide the basis for systems, processes and practices that are explicit, aligned, transparent, flexible, useful and meaningful.

A big emphasis of our ongoing activity in HEDC should be placed on the professional development of groups and individuals across the University (including students) to inform and guide understanding about evaluation, including what it means to engage meaningfully, actively and collaboratively with student evaluation.



How should we evaluate the impact of e-interventions? Swee-Kin Loke

Most educators are interested in improving student learning, hence it is no surprise that educational technologists over the years have been preoccupied with questions of the type ‘does viewing instructional films improve learning?’ or ‘does using the iPad in class lead to better learning outcomes?’ For example, as long ago as 1972, Ackers and Oosthoek reported how they attempted to “transfer knowledge to [their] students by means of tape recorders” (p. 137). They first divided their students into two groups, and then gave the ‘tape group’ tapes containing information in economics that students could listen to at their own pace. The ‘lecture group’ followed ordinary economics lectures. At the end of the experiment, the authors evaluated the use of tape recorders by comparing the two groups’ mean scores in an economics examination. Similar comparative studies were conducted in more recent times: for example, Knox (1997) evaluated the impact of video-conferenced versus live lectures in actuarial science.

Evaluations of this type are often called media comparison studies because they compare how much or how well students have learned from a lesson presented via a new medium (e.g. tape recorder) versus existing instruction (e.g. lectures). Media comparison studies used to be prevalent in the field of educational technology (Reiser, 2001), but have fallen into disrepute over the last two decades for the following reasons: firstly, media comparison studies conflate media (e.g. tape recorder) and instructional method (e.g. self-paced learning) (Clark, 1994). For example, in the “economics by tape” study described above, any difference in student scores could be attributed to the activity of self-paced learning rather than to the use of tape recorders, reflecting a conceptual flaw in media comparison studies (Warnick & Burbules, 2007). Secondly, research over 40 years has found that technology’s impact on student achievement (versus no technology) tends to range from low to moderate (Bernard et al., 2004; Tamim et al., 2011). Given these findings, Professor Emeritus Tom Reeves (2011) jokingly advised Otago staff, in the event that more PhD students asked to compare online learning with traditional instruction, to “chase them out of your office” [57:10] because we already know that technology’s impact is likely to be modest.

So where does that leave educators who would like to evaluate the impact of technology on student learning? Clark (1994) suggests that we should reframe the evaluation around the instructional method and not the technology. Because our primary goal is in improving the way we teach (not in inserting a piece of technology into our teaching practice), our evaluation should relate directly to the way we teach. In fact, I believe that we should conceive and evaluate e-interventions like any other pedagogical innovation (e.g. self-paced learning, problem-based learning or PBL). As such, the question at the start of this paragraph needs reframing: educators using technology should be less concerned with evaluating the impact of our technology use and more concerned with evaluating the impact of learning activities mediated by technology. This is a subtle but important difference in how we approach the improvement of teaching and learning, and some suggest that too many educators have foregrounded technology over pedagogy over the years (Reeves, McKenney, & Herrington, 2011).



When we conceive e-interventions as pedagogical innovations, we gain some useful insights. In their meta-synthesis “When is PBL more effective?”, Strobel and van Barneveld (2009) reported how PBL in medical education led to better outcomes in clinical knowledge and skills, whereas traditional instruction resulted in better outcomes in basic science knowledge. Perhaps the important impact of e-interventions lies not in how much students learn but in how they learn (Kozma, 1994). For example, colleagues from the Otago School of Pharmacy and Higher Education Development Centre (HEDC) sought to identify qualitative differences in student learning processes between paper- and computer simulation-based workshops (Loke, Tordoff, Winikoff, McDonald, Vlugter, & Duffull, 2011). We evaluated the project by teaching, observing and audio-recording two paper- and two simulation-based workshops. Our data showed that students in the simulation-based workshops learned that they needed to frame the clinical problem themselves, rather than rely on having all the necessary information laid out on paper. If Pharmacy students are expected to frame clinical problems themselves, and if they can develop that problem-solving skill in a simulation-based workshop, then we have sound pedagogical reasons to continue running the simulation-based workshops for our students’ benefit.

References

Ackers, G. W., & Oosthoek, J. K. (1972). The evaluation of an audio-tape mediated course. British Journal of Educational Technology, 3(2), 136–146.

Bernard, R. M., Abrami, P. C., Lou, Y., Borokhovski, E., Wade, A., Wozney, L., & Wallet, P. A. (2004). How does distance education compare with classroom instruction? A meta-analysis of the empirical literature. Review of Educational Research, 74(3), 379–439.

Clark, R. E. (1994). Media will never influence learning. Educational Technology Research and Development, 42(2), 21–29.

Knox, D. M. (1997). A review of the use of video-conferencing for actuarial education – a three-year case study. Distance Education, 18(2), 225–235.

Kozma, R. B. (1994). Will media influence learning? Reframing the debate. Educational Technology Research and Development, 42(2), 7–19.

Loke, S. K., Tordoff, J., Winikoff, M., McDonald, J., Vlugter, P., & Duffull, S. (2011). SimPharm: How pharmacy students made meaning of a clinical case differently in paper- and simulation-based workshops. British Journal of Educational Technology, 42(5), 865–874.

Reeves, T. C. (2011). Authentic tasks in online and blended courses: how to evaluate whether the design is effective for learning. University of Otago. Retrieved from http://unitube.otago.ac.nz/view?m=9WOR29G9iGC

Reeves, T. C., McKenney, S., & Herrington, J. (2011). Publishing and perishing: The critical importance of educational design research. Australasian Journal of Educational Technology, 27(1), 55–65.

Reiser, R. A. (2001). A history of instructional design and technology: Part I: A history of instructional media. Educational Technology Research and Development, 49(1), 53–64.

Tamim, R. M., Bernard, R. M., Borokhovski, E., Abrami, P. C., & Schmid, R. F. (2011). What forty years of research says about the impact of technology on learning: A second-order meta-analysis and validation study. Review of Educational Research, 81(1), 4–28.

Warnick, B. R., & Burbules, N. C. (2007). Media comparison studies: problems and possibilities. Teachers College Record, 109(11), 2483–2510.



Using student data to improve teaching Clinton Golding



In 2006 the average student evaluation result for my subjects was 4.6 out of 5 for overall quality of learning (I was teaching at the University of Melbourne where 5 was best). On the basis of student comments, formal and informal, as well as their assessment results, I hypothesised that students were unclear about the requirements of their final assessment – they just did not get what I was asking them to do. I then devised and implemented a plan for helping my students to better understand the assessment task. In one teaching session I explained the assessment criteria, we identified what would be written in an assignment that met these criteria, and the students assessed a draft of their own work using these criteria. At the end of the semester I found that my students had produced much higher quality work in their assessment (corroborated by a moderator), and I received a higher evaluation score of 4.8. Each year I consulted my student evaluations and refined my explanation of what I wanted my students to do, until I left Melbourne in 2010 with an average evaluation score of 5 out of 5.

We are encouraged to evaluate our courses, subjects and teaching so we can confirm our jobs, get promoted, and assure the quality of teaching and learning in the university, yet we have little guidance about how we can use the evaluation results to improve our teaching. Here is how I do it, using student survey data.

I start by thinking about how I will approach the evaluation data I already have. My attitude towards this data, particularly student surveys, allows me to use it to improve my teaching.

1. View the data as formative information
I see student survey data as a tool to help me improve, rather than as a mark or grade for me, my teaching or my course. If I do not make this shift in perception, if I view the data solely as a final judgment – a pass or fail on my teaching – then I cannot use it for development or improvement. I renounce “What’s wrong with my teaching?” and embrace “How can I develop my teaching?” It is also important that I treat the data as something that can be improved, no matter how good or bad my evaluation results. If I wanted to judge the overall quality of my teaching, perhaps for promotion purposes, then I would acknowledge mitigating factors. For example, if I was teaching a large, compulsory course, I might argue that my student survey results do not directly reflect the quality of my teaching or my course. But these mitigating factors are a distraction when I want to develop my teaching, and they can easily become an excuse for why I cannot improve. So when my aim is to develop my teaching, I simply ask myself: “Given this data, what can I do to improve the learning for my students?”

2. An approximation about student learning
I treat the student survey data as a window on student learning, an indication of the extent to which students benefitted from the course I taught. So, if students said that they found a lecture boring and irrelevant, I would not take this to be a judgment about myself, my teaching, or even my students. But I would take it as an indication that there is a barrier to their learning, and I ask: “How can I change my teaching and my course so that they no longer find it boring?”

3. Approach the evaluation data as an inquiry
My evaluation results provide me with a source for hypotheses about what might be improvable, and an approximate means to test these hypotheses.



I hypothesise why I received my particular student survey results, and therefore what might be improved. I use triangulated data from various sources – formal student evaluations, my own classroom observations, assessment results, informal feedback from students, peer observations – as well as my understanding of teaching and learning. From my hypothesis, I infer various changes in teaching that are likely to improve my evaluation scores, which I then implement. Then I gather more evaluation data to see if my teaching has improved. I generally use the student evaluations of the overall quality of learning as a proxy measure – more specific evaluation data can help to identify the areas that could be developed, but I tend to judge improvement by the overall student survey score. If that score increases after my change in teaching, and nothing else is lowered, then this suggests that my teaching has improved. If I continue to receive higher evaluation results year after year, then this gives me reason to think my hypothesis has been confirmed and my teaching or my course has improved (though I have to be careful to distinguish merely making students happier, and genuinely making the course better and improving their learning).

“Clinton deliberately made time for students to clarify and discuss the rubrics intended for assessment purposes. He even illustrated how the assessment criteria were applied when he gave us feedback on our mini assignments, and arranged for peer review of the assignment drafts. This was most helpful when we eventually embarked on our final, large assignment because we were by then very confident and clear of what he would be looking for in the assessment…. We were thus encouraged and motivated to work towards excellence in our major assignment! We wanted to do well, not just for ourselves but also for him!” - Student comment, 2009



4. Read between the lines
When I use student survey evaluation data to improve my teaching I am conscious that what students say is the issue may not be the issue. When they say a session is boring, then something is a problem for them, but that problem may not be a boring lecture. Perhaps the real problem is that when I gave my students feedback about an assessment task, I had not explained how this would enable them to understand the course and do better in the final exam, so they thought I was wasting their time, which they expressed as “That was boring!” If so, this suggests that I might improve my course by clarifying the importance of the feedback.

5. Take a broad view of teaching
I also approach the evaluative data with an expanded view of teaching. Teaching is everything I do to support student learning, including writing course documents and assessment tasks, selecting readings and text books, offering online support, office hours and email contacts, scheduling lectures and arranging the desks in tutorials. This broad view of teaching better enables me to hypothesise about what I might do to improve my teaching and my courses.


Student Evaluations: Closing the Loop Lynley Deaker, Sarah Stein, Jo Kennedy

Centralised systems of student evaluation have become normative practice in higher education institutions. Research has indicated that how teachers perceive student evaluations within their context, and the role they personally play within the evaluation process, determines the nature and degree to which they engage in evaluation.

A recent research project led by Dr Sarah Stein (HEDC) investigated the perceptions tertiary teachers in the New Zealand education context hold about student evaluations, the factors affecting their views and how they engage with evaluations. The project was funded by an Ako Aotearoa National Project Fund (2009) grant. The research drew on a combination of quantitative and qualitative data: 1,065 staff from three institutions (Otago and Waikato Universities and Otago Polytechnic) participated in the questionnaire, and 60 volunteers were interviewed. Almost 75% of respondents from across the three institutions claimed they regarded gathering student evaluation data as personally worthwhile, the meanings they attributed to ‘worth’ highlighting both contributing and limiting factors.

Factors contributing to teachers’ sense of worth of student evaluation data included:
• Informs course/teacher development
• Helps identify student learning needs/experiences
• Informs and provides evidence for use in quality/summative/performance-based processes
• Forms part of a range of evaluation practices

Factors limiting teachers’ sense of worth of student evaluation data are:
• Use for quality/summative/performance-based processes and doubts about its dual purpose
• Teachers judged on factors outside their control
• The nature of evaluation instruments
• Other evaluation methods better/preferred
• Current student evaluation system limitations
• Timeliness of receiving the results
• Difficulties in interpretation and how to use the data effectively
• Questionable quality of student responses

Other limiting factors included preoccupation with research, whether the institution valued teaching and concern for privacy around evaluation feedback.

The project highlighted the need to see the process of evaluation as one that is complementary to monitoring, demonstrating and assuring quality. This implies a need to educate about the rationale for using student evaluations and to focus on engaging meaningfully with evaluation processes.

The recommendations highlight the need for a ‘closing the evaluation loop’ framework. Such a framework would be underpinned by a view of evaluation that addresses both developmental and auditing needs. It would focus on the role and responsibilities of groups and individuals to provide, interpret and act upon evidence gathered through student evaluation; to plan for on-going personal, professional and course development; and to involve the whole of the institution in a collaborative way to enhance the quality of the learning and teaching environment. This will ensure that there is a match between the conceptualisations of evaluation and how they are expressed in institutional policy, or enacted by institutions and individual teachers, which will underpin a move to developing teaching and evaluation as a collaborative and organic endeavour that is complementary to a commitment to good practice.

A system that maps plainly and precisely how accountability and development work alongside each other, including how student evaluation plays a part, will assist in reducing the tensions noted by some teaching staff in the study, and reinforce the value for those who already actively engage with, and welcome, the feedback.

“Students have very valid opinions and need to know that what they think matters and is taken seriously, just as I seek out input for course content from them so that they find the sessions relevant, useful and meaningful, they should also see teaching and course evaluations as part of this natural process.” - A University of Otago participant

The full report can be viewed at: http://tiny.cc/c9icmw The Summary Guide can be viewed at: http://tiny.cc/69icmw



1000 words to change an institution? Kerry Shephard



Evaluation of teaching has a history, and, as with all histories, it has had its critical points. The critical point for me occurred in the late 80s, when my higher education institution back in the UK started to prepare for academic audit. I know that we asked our students what they thought of their courses before then, particularly through staff-student committees and course representatives, but it all became so much more formalised when we were obliged to gather together a paper trail closing the loop between design, delivery, student perceptions and redesign, and account for any discrepancies. And I stress that these were ‘course evaluations’ not ‘individual teaching evaluations’. Actually, I did not encounter a student perception form that mentioned my name until the 21st century, at which point it became personal. From then on, if students were not happy it was clearly my fault.

Of course, Australia and New Zealand have sought the views of students about their teaching and teachers for many years, and I understand that these instruments are widely used in the USA. Why shouldn’t we ask students what they think about the teaching that they’ve experienced? Surely teachers are at the centre of their learning experience, so need to be at the centre of our questions about their experience? But are teachers really at the centre of student learning experiences?


By now you will have gathered that I’m not the strongest supporter of the individual teaching evaluation questionnaire. And I’ve used more than 250 words already. But you get the point. In my version of higher education, teachers are not at the centre of the student learning experience – learners are.

For some, evaluation is a science, for others an art, and for many a discipline in its own right. There are many ways to do it, and the demands of accountability should not necessarily dictate how we do it, perhaps especially in academia where we might anticipate a high density of clever people able to do it effectively. Perhaps Otago’s system is particularly problematic for me. Most of our student feedback surveys are of the individual teacher variety, and as these are, at least initially, confidential to the teacher concerned, they have limited roles to play in departmental- and programme-based development. Add potential problems with small samples. Over the past decade, approximately 25% of individual teacher feedback surveys at our university have included responses from 10 or fewer students. Small groups provide low statistical reliability and the ever-present danger of students feeling other than anonymous. Some other higher education institutions, in Australasia and the USA, place restrictions on processing survey data with very small student groups and insist that other evaluative approaches, such as peer review, enter the analysis in such cases. Perhaps, because we do not have these restrictions, our own enthusiasm for peer review and other evaluative inputs is itself limited. Our commitment to including, in an unrestricted manner, student perceptions of their teachers within our academic promotion processes may be commendable, but is not without consequences to our collective perception of what counts as evidence of good teaching. I am suggesting that we may have allowed student perceptions of their teachers to over-dominate our evaluative processes.

Is there another way? Does this escalator have a stop button? Of course, but it is up to us to use it! Senate has now recommended that all papers have ‘course evaluations’ (Paper Feedback Surveys) at least on a three year cycle. These routinely enter departmental discussions on what is working, and what is not, and will be (currently are for some) valuable in helping us to understand students’ learning experiences. Paper Feedback Surveys address student perceptions of their learning experiences rather than of what their teacher did. For many teachers a combination of student perception, peer review and critical self-review will provide rich descriptions of their teaching, suitable for departmental development processes and as evidence of good teaching. Such rich descriptions may be more challenging to interpret than the magic numbers (% 1s and 2s responses to the overall effectiveness question) ever were, but should provide those with this interpretative role with greater personal satisfaction for their efforts.

But it is still too personal for me. If higher education really is serious about evaluating what it does in the world of teaching and learning, surely it needs to get to grips with bigger questions than whether or not Kerry Shephard, or any other individual, is individually pulling his or her weight. Surely the enterprise of higher education is more than the sum of its cogs? I’d like to think so, and that we could apply our extensive evaluative efforts towards some bigger-picture questions.

The heart of the matter for me is that whereas assessment is used to determine the achievements of individual students, evaluation is good for populations or cohorts of students whose anonymity is assured, and where we either cannot, or do not wish to, apportion blame for lack of achievement. If our students are not collectively achieving what we hope they should, it is not my fault, your fault, or the fault of the student body; it is our problem to solve. It is also our responsibility to ask the questions, or to evaluate the extent to which we achieve what we say we shall.

Each department or programme uses assessment to record individual student achievements with respect to agreed objectives related to their field of study. But institutionally we hope that our graduates will be good communicators, critical thinkers, have cultural understanding and a global perspective, be IT literate, lifelong learners, self-motivated team players, have a sense of personal responsibility in the workplace and community, and, as a bonus since 2011, have some degree of environmental literacy. But do our graduates achieve these things? How would we know? Assessment of individuals may not be appropriate, as descriptors such as willingness, appreciation and sense of personal responsibility will always be challenging to assess in an exam. If students’ degrees counted on it, surely most could write an essay to demonstrate their sense of personal responsibility within the community! But such qualities are open to evaluation using a range of validated instruments.

We accept that anonymity encourages our students to give impartial views about our teachers’ characteristics, so it is not a great leap to use similar approaches to explore their characteristics (always triangulated, of course, with a broad range of indicators, such as, in this case, student portfolios). We could evaluate these things if we wanted to and use our existing evaluation infrastructure to help us. Indeed, if we spent a fraction of the time, and effort, that we spend asking students what they think of their teachers to address these important attributes we might discover something about ourselves. Collectively, we might be doing a good job. But then again …



Interpreting the results from a student feedback survey David Fletcher

If you want to interpret the results from a student feedback survey, there are two statistical concepts that can be helpful to bear in mind: non-response bias and precision.

Non-response bias
Non-response bias can compromise results if not all the class respond. Those who do not respond might have different opinions from those who do, in which case the results may not be representative of the views of the whole class. For example, it is possible that a student who does not like the teaching may feel more (or less) motivated to provide feedback than a student who enjoys the teaching.

Precision
Imagine that you have the results from a single survey with a 100% response rate, so that there is no possibility of non-response bias. There will still be some uncertainty associated with the results, as illustrated by the following example. Suppose 80% of the students respond to the question “Overall, how effective have you found X in teaching this course?” with the answer “Very effective”. If you were to repeat the survey next year, you might find that 75% of the students responded in this way. And the year after that it might be 82%. This variation in the results for the same paper and teacher is to be expected, even if you teach it in exactly the same way each year. With survey results from several years, you can clearly assess your teaching in this paper better than with results from just a single year, both overall and in terms of any improvement over time.

Results from only one survey can be put into context by predicting what the overall response would be over many hypothetical repetitions of the paper. To do this, we view the results from a single survey as a sample from a population of students that might take the paper, in the same way as an opinion poll involves a sample of the population in New Zealand. This leads to the concept of the precision of the results couched in terms of what statisticians call a “95% confidence interval”. In the example above, 80% of students responded with the answer “Very effective”. If this comes from a survey with 500 respondents, a 95% confidence interval covers the range 76% to 83%, meaning that the percentage of students in this hypothetical population that would respond with this answer is very likely to be in this range. If the survey involved only 10 students, the 95% confidence interval covers a much wider range, from 44% to 93%. The precision of the first survey is much greater, with more certainty about the results. This is analogous to the “margin of error” often quoted when opinion polls are reported in the media.

What does this mean in practice?
You can clearly reduce non-response bias by encouraging more students to respond. Increasing precision is harder if you always teach small classes. It is possible to combine data from several years in order to reduce the range of the confidence interval. Even without this extra calculation, it is clear that if you have 80% of students responding with the answer “Very effective” in each of three years, the results are more precise than those from a single year. Whether you use the results for personal development, or others use the results for some form of judgment about your teaching, precision and the potential for non-response bias need to be borne in mind. In addition, I am sure that my colleagues in HEDC will recommend that you “triangulate” such results using other indicators of teaching quality!
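For readers who want to try the precision calculation themselves, here is a minimal sketch (not part of David Fletcher’s article) that computes approximate 95% confidence intervals for a proportion using the Wilson score method. Different interval methods give slightly different endpoints, so the small-class figures below come out close to, but not exactly at, the 44% to 93% quoted above; the large-class figures match the 76% to 83% range.

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% Wilson score confidence interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half_width = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half_width, centre + half_width

# 80% answering "Very effective" in a survey of 500 students versus 10 students
for n in (500, 10):
    lo, hi = wilson_ci(round(0.8 * n), n)
    print(f"n = {n:>3}: roughly {lo:.0%} to {hi:.0%}")

# Pooling several years of a small class narrows the interval, as the article notes:
# e.g. 24 "Very effective" responses out of 30 across three years of a 10-student class.
lo, hi = wilson_ci(24, 30)
print(f"three pooled years (24/30): roughly {lo:.0%} to {hi:.0%}")
```

The same idea underlies the “margin of error” in opinion polls: the interval shrinks roughly in proportion to the square root of the number of respondents.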



Evaluations: perspectives from ‘end-users’ Angela McLean

Research reported in this issue of Akoranga looks at teachers’ perceptions of evaluations. In response to this I conducted an informal street-poll survey with six Otago University students, seeking their perspectives on the teacher and course evaluations they are asked to complete. Students ranged from first to fourth year of study, with an equal gender representation. The questions, and the students’ responses, feature below.

How seriously do you take the process of filling in the questions? I don’t do it 100% seriously, most of the time I just tick all “good”. It depends how the lecturer is, whether I can understand them, whether I fall asleep or I think the lecturer is really bland. I don’t write comments, partly because I can’t be bothered, or I feel really bad if I write something bad. I always do them seriously, because people in the future will benefit from the suggestions. Also, I know the lecturers in our department value them a lot. Seriously – this is our chance to speak.

Reasonably seriously – I mean, they put time and effort to make them, so you may as well spend a couple of minutes filling them in. I fill it in truthfully.


So, what is the purpose of those evaluation questionnaires that you fill in? To evaluate the lecturers, to see if we’ve enjoyed the course, if we’ve learned anything. To give our feedback on how we think they teach.

So that lecturers can alter their courses, get feedback on how students think they are doing, so they can change their teaching style. This is a chance for students to evaluate their teachers, gives students a chance to speak. So they can figure out how effective they are as a teacher, and keep their job. To see how everybody finds the lecturers, for the lecturers to see how they’re doing, a personal evaluation I guess.


What’s your opinion on how many surveys you are asked to do each semester? It’s just right.

It’s ok.

One per lecturer is ok, but we probably could do more about the whole course. Just right, not too much.

It’s ok.

I don’t think it’s too bad.

How often do you hear back about the results?

Never.

I’ve never heard back.

Actually never.

What value do you see in filling in these questionnaires? It is important, but a lot of students don’t take it seriously. It’s important, but what happens to the results afterwards? You’d like to think that next time the course runs it would be better. I use them to write comments, because sometimes it’s easier doing that than approaching the lecturer. It’s quite important – it’s communication between the lecturer and the students, it’s good for the lecturer to see students’ perceptions. It makes the teachers more effective, creates a better learning environment, to improve this uni for everyone else. I think it’s important – how else would they know – it’s the main way you give your opinions.

On the whole, the students in this small-scale street-poll seem to regard evaluations as a way of providing feedback to teachers, take the process relatively seriously, and think the number of surveys they are asked to complete per semester is appropriate. In addition, it seems these students view the evaluation process as important, not only for themselves but also for the benefit of other students. However, the resounding response of “never” hearing back about results suggests a potentially underemphasised aspect of the evaluation process – that of closing the feedback loop. Taking a more substantial look at students’ perspectives on evaluations may reveal further insights into how the process can be strengthened, for both students and staff.



No, we don’t run popularity polls! Jo Kennedy

HEDC Evaluation Research and Development exists to support academic staff professional development. The team consists of Jo Kennedy, Allen Goodchild and Julie Samson, with Dr Sarah Stein as the academic coordinator of the section. We are mostly known for administering the University of Otago centralised student evaluation system as a free service for staff. These questionnaires give students a voice to feed back their experience of papers/modules/courses and teaching, and they enable staff to utilise student perceptions when developing their papers and teaching. “Evaluation without development is punitive, but development without evaluation is guesswork” (Theall, 2010, p. 45)

What do we do? In the course of each year we create, log, scan and report on 2500 evaluations. Even though the proportion of online surveys has increased from 2% in 2009 to 9% this year, this still involves handling and counting over 100,000 forms. Due to system enhancements, our average turnaround time for generating reports after receiving completed questionnaires has reduced from 6.3 weeks in 2009 to the current 2.8 weeks. It takes us longer, however, to return reports over the last 5-6 weeks of lectures in each semester, when the three of us are dealing with over 600 evaluation requests. During these peak times, when our frazzle-levels are correspondingly high, patience and chocolate are appreciated! It’s always worth asking if you need something done urgently; we say “yes” much more often than “no”. We have our regulars in this category (you know who you are…).



Student Evaluation Design
We work with you to design a suitable evaluation. Your job is to tell us what type of evaluation you would like, which questions to include and the questionnaire medium (paper or online). Our job then is to set up the questionnaire, and, once it has been completed by students, process the results. In addition, we provide advice about questionnaire design, interpretation of results, other evaluation methods, and how the evaluations can be used within institutional processes (e.g. your Otago Teaching Profile, promotion and confirmation). There are three types of evaluation that we offer, each with a slightly different focus:

Course Questionnaire
Primary purpose: To assist development and planning of a course/paper/module.
Respondents: Students.
Structure of questionnaire: No compulsory questions; choose from catalogue or customised questions. Options of 5-rating Likert scale or comments questions.
Can be used in performance appraisal processes? Yes.

Individual Teacher Questionnaire
Primary purpose: To assist individual teacher professional development in relation to student learning.
Respondents: Students.
Structure of questionnaire: 5 compulsory questions and 5 additional from a catalogue. One fixed comment question.
Can be used in performance appraisal processes? Yes.

Coordinator/Team Leader Questionnaire
Primary purpose: To assist individual teacher professional development in relation to coordination/team leadership activities.
Respondents: Teaching team/Tutors/Demonstrators.
Structure of questionnaire: Choice of 5 to 10 questions from a catalogue. Options of 5-rating Likert scale or comments questions.
Can be used in performance appraisal processes? Yes.

We encourage staff to not only be self-reflective and gather evaluation data that helps them improve their teaching/courses, but also to realise the importance of having that data in a form that can be used as evidence for all the types of quality advancement processes that exist at Otago University and externally. Examples of processes that this evaluation data can be useful for include performance appraisal (promotion, confirmation), job applications, departmental review and external accreditation. Improvement and development is the first step, and this allows judgments to be made about quality.

Research, Projects and Policy
Good evaluation practice starts at home so we support a range of HEDC evaluation activities. We conduct evaluation-related research. Just completed is an Ako Aotearoa funded project on Teachers’ Perceptions of Student Evaluation. The summary and full reports are available at: http://akoaotearoa.ac.nz/student-evaluations.



Over the last year we have been working on a major system improvement project, stage one being the Otago InForm online ordering system. This is an on-going project, and we welcome staff feedback about its usability and features. The initial development phase of this project utilised the feedback we received from our Evaluation Service survey conducted in 2010. We are involved with policy development and are part of an Evaluation Working Group currently considering possible changes to the student evaluation process.

Myth-Busting!
As the title suggests, a number of myths still persist about student evaluations. This is despite substantial research on validity and reliability of student ratings. If any of the phrases below sound familiar, you might like to read a paper by Benton and Cashin (2012) in which they summarise the research and literature that shows these are myths.

“Students cannot make consistent judgments.” ✗
“Student ratings are just popularity contests.” ✗
“Student ratings are unreliable and invalid.” ✗
“The time of day the course is offered affects ratings.” ✗
“Students will not appreciate good teaching until they are out of college a few years.” ✗
“Students just want easy courses.” ✗
“Student feedback cannot be used to help improve instruction.” ✗
“Emphasis on student ratings has led to grade inflation.” ✗

(Benton and Cashin, 2012, p. 2)

Despite the lingering myths, many staff value collecting student data on their courses and teaching. As one University of Otago academic staff member commented in the Ako Aotearoa research project mentioned above, evaluation “keeps you on your toes - review by others is a great way to identify how others see you... and how they see your strengths and weaknesses”, while another comment provides a constructive thought on which to conclude: “Student opinion is essential to understanding what the students feel and what they like and don’t like or understand - they also offer ways to improve.” References:

Benton, S. L., & Cashin, W. E. (2012). Student ratings of teaching: A summary of research and literature. IDEA Paper No. 50. Manhattan, KS: Kansas State University, Center for Faculty Evaluation and Development.

Theall, M. (2010). New resistance to student evaluations of teaching. The Journal of Faculty Development, 24(3), 44–46.



QAU learning and teaching Romain Mirosa

Since 1995, the University of Otago has conducted an annual Graduate Opinion Survey (GOS) and Student Opinion Survey (SOS) to obtain feedback on students’ overall academic experience. These surveys are managed by the Quality Advancement Unit (QAU), which also coordinates the University’s response to the external academic audit conducted by the New Zealand Universities Academic Audit Unit, administers the University’s internal academic and administrative reviews process, and supports initiatives that facilitate good practice in quality assurance and improvement across the University.

The GOS surveys graduates who completed their course of study two years previously, while the SOS surveys current students. Each degree/major combination is surveyed approximately once every four years and the GOS and SOS timetables are aligned. The core instrument of both surveys is the ‘Course Experience Questionnaire’ (CEQ), which is widely used in Australia. The CEQ is directed at students and graduates who have/had a course work component as part of their study, and asks questions grouped into a number of themed scales in order to measure student assessment of the following:
• Good Teaching
• Clear Goals and Standards
• Learning Community (SOS only)
• Intellectual Motivation (GOS only)
• Appropriate Assessment
• Generic Competencies
• Overall Satisfaction

The statements in the CEQ are based on comments that students and graduates often make about their experiences of university teaching and study, and that research has shown to be indicative of better learning. The emphasis of this questionnaire is on students’ perceptions of their entire course of study. The results are the “averages” of students’ experiences.

In 2013, the graduate attributes section in the GOS will be modified to align with the new Otago Graduate Profile adopted by the University in April 2011. Like HEDC’s evaluation questionnaires, the SOS and GOS provide data to support improvement in teaching and learning outcomes. The results from the surveys provide a unique possibility to gain an overview of the “teaching and learning” outcomes (through the particular lens of students’ and graduates’ perceptions) at institutional, divisional and departmental level. They are also used as a source of information for review panels, to demonstrate to government, through the Annual Report, that the University has achieved the goals it has set itself, and to external agencies, such as the 2011 Academic Audit Panel, that the University has processes in place to seek feedback and use data to facilitate improvement in its core activities and operations.

While the GOS and SOS surveys continue to be core, the University has taken part in three iterations (in 2009, 2010 and 2012) of a relatively new student experience questionnaire called the Australasian Survey of Student Engagement (AUSSE). This questionnaire focuses on measuring the level of engagement of students and their learning outcomes. While response rates have been very low, the AUSSE surveys have yielded some interesting results, and, as it is a standardized questionnaire used by many institutions in Australasia, it provides the opportunity for benchmarking.

In 2011 and 2012, the University added a new postgraduate section to its graduate survey and student survey respectively, based on the Postgraduate Taught Experience Survey and the Postgraduate Research Experience Survey (developed by the UK-based Higher Education Academy). The aim of this modification is to enable benchmarking with comparable UK institutions, and to provide more in-depth information about the postgraduate experience of Otago students.

Recently the Australian Government has commissioned a new survey, focusing on the student experience, called the University Experience Survey (UES) that will be rolled out in Australia in 2013. The UES appears to be heavily influenced by the engagement literature, while still incorporating more “traditional” student experience questions. The QAU will monitor these developments and evaluate their possible relevance and value for the University of Otago.



Beyond centralised student evaluation Swee-Kin Loke

Whether you had planned for formal student evaluation or not, your students might already be evaluating your course or teaching via other means (sometimes publicly!). Akoranga scanned a popular teacher rating website and also student votes for the OUSA Teaching Awards for the most complimentary student views on Otago teachers.

[Collage of complimentary student comments about Otago teachers, drawn from a popular teacher rating website and OUSA Teaching Award nominations; the rotated quotes in the original graphic are not reproducible here.]


Akoranga Evaluation

You have in your hands the eighth issue of Akoranga, a collaborative periodical from HEDC and colleagues around the University with an interest in higher education teaching. To remain relevant to readers, the editorial team felt it was timely to seek your views more formally. We promise that this won’t take more than a walk to the internal mail postbox. Alternatively you can fill it in online here: http://tiny.cc/kampmw (The survey will be live until Christmas)

Do you teach at the University of Otago?   YES / NO

Q1. Approximately what proportion of this eighth issue did you actually read?   0%   25%   50%   75%   100%

Q2. We have listed below the 3 main goals of Akoranga. How effective is Akoranga in
(a) Promoting good teaching within the university   Not at all … Very much
(b) Generating debate around issues in higher education teaching   Not at all … Very much
(c) Communicating what HEDC values in terms of teaching and learning   Not at all … Very much

Q3. How can we improve Akoranga?

To:
Jo Kennedy
Higher Education Development Centre
65/75 Union Place West
Dunedin 9016


