Writing and Writing Instruction in Different Academic Contexts

THEORIZING COMMUNITY RUBRICS: LIMITS, RESEARCH, AND CASE STUDIES

Chris Anson¹, Joe Moxley², Djuddah Leijen³, Damian Finnegan⁴, Anna Wärnsby⁵, Asko Kauppinen⁶

¹NCSU, North Carolina, USA; ²USF, Florida, USA; ³University of Tartu, Tartu, Estonia; ⁴⁻⁶University of Malmö, Malmö, Sweden

In “Big Rubrics and Weird Genres: The Futility of Using Generic Assessment Tools Across Diverse Instructional Contexts,” Anson, Dannels, Flash, and Housley Gaffney (2012) argue that “generic, all-purpose criteria for evaluating writing and oral communication fail to reflect the linguistic, rhetorical, relational, and contextual characteristics of specific kinds of writing or speaking that we find in higher education.” In contrast, Moxley (2013) has argued that use of a community rubric across genres, course sections, and courses may enable instructors in writing programs to grade students’ work in equivalent ways; may provide a baseline measure of a particular group’s reasoning and writing abilities; may enable WPAs to make evidence-based curriculum changes in response to real-time assessment results and then compare other cohorts’ baseline performances; and may provide evidence regarding the development and transfer of writing and critical thinking competencies. This roundtable explores the efficacy of using rubrics for assessing writing, particularly at the program level. More specifically, participants will explore the contexts in which common rubrics may and may not work, and how teachers negotiate use of a rubric across courses, genres, and disciplines, based on their experiences deploying rubrics in various contexts: a program in English for academic purposes, a first-year composition program, a program in communication across the curriculum, and a university-wide course for graduate students on scholarly publishing.

Speaker 1 (Chris Anson) will orient the panel by asking an overarching question: can generic rubrics effectively serve the purpose of assessing writing across contexts? To answer this question, he will consider the application of criteria across contexts based on how bounded the contexts are, how closely aligned their learning goals are, and how synchronously their disparate instructors align their courses within the curriculum. Community rubrics may work most effectively within highly bounded, goal-based, synchronous curricula, while more particularized rubrics may be required across contexts that do not share these features, as documented in his and his colleagues’ work studying writing and speaking genres in different disciplines. The presentation will result in a theoretical framework for considering the development and use of common community rubrics.

Speaker 2 (Joe Moxley) will report on use of a community rubric (1) to assess 52,001 intermediate and final essays from 7,722 students spanning 2 courses, 7 terms, 3 years, and over 482 sections, and (2) to compare 107 instructors’ assessments of 16,312 papers with 5,857 students’ reviews of these essays (a total of 30,377 peer reviews) in the first-year composition program at the University of South Florida. Speaker 2 will also report on a model of development, one that accounts for the joint effects of student competency, learning rate, and instructor bias on student development, which provides strong evidence of the predictive validity of the ratings. Although this method provides evidence that student development transfers, the results raise questions regarding the ways a quality-based community rubric and traditional grading practices may restrain student development.
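As a worked illustration only (a sketch under assumed notation, not necessarily the model specified in the study), a development model of this kind can be written so that each rating decomposes into a student term, a growth term, and a rater term:

    r_{ijt} = \theta_i + \lambda_i t + \beta_j + \varepsilon_{ijt}

where r_{ijt} is the rating instructor j assigns to student i’s essay at time t, \theta_i is the student’s baseline competency, \lambda_i is the student’s learning rate, \beta_j is the instructor’s bias (severity or leniency), and \varepsilon_{ijt} is residual error. Estimating these terms jointly is what allows instructor severity to be separated from genuine student growth when judging predictive validity.
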
Regarding the analysis of 46,689 reviews (16,312 instructor reviews and 30,377 student reviews), Speaker 2 will identify the crucial variables that impinge on quality peer review processes, especially as they relate to inter-rater reliability with instructors: (1) rater bias (the average rating assigned compared with how others rate the same or similar works), (2) rater discrimination (the variance of the scores assigned), (3) the quality of the rater’s own writing, as measured by average instructor ratings, and (4) the experience of the rater (the more peer reviews completed, the better).
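A minimal sketch in Python, assuming ratings are available as (rater_id, paper_id, score) records (an assumed layout, not the program’s actual data model), of how the first two of these variables might be computed:

    # Sketch only: per-rater bias and discrimination from (rater, paper, score) records.
    from collections import defaultdict
    from statistics import mean, pvariance

    def rater_metrics(ratings):
        """Return {rater_id: {"bias": ..., "discrimination": ...}}."""
        scores_by_paper = defaultdict(list)   # every score each paper received
        scores_by_rater = defaultdict(list)   # (paper, score) pairs per rater
        for rater, paper, score in ratings:
            scores_by_paper[paper].append(score)
            scores_by_rater[rater].append((paper, score))

        metrics = {}
        for rater, scored in scores_by_rater.items():
            # Bias: how far this rater's scores sit, on average, from the mean
            # score that all raters gave the same papers.
            bias = mean(s - mean(scores_by_paper[p]) for p, s in scored)
            # Discrimination: the variance of the scores this rater assigned.
            discrimination = pvariance([s for _, s in scored])
            metrics[rater] = {"bias": bias, "discrimination": discrimination}
        return metrics

On this reading, a strongly positive bias marks a lenient rater, a strongly negative bias a severe one, and a near-zero discrimination a rater who assigns nearly identical scores regardless of the work being reviewed.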


Speaker 3 (Djuddah Leijen) will report on a study investigating the effectiveness of peer feedback in a web-based peer review environment among novice second-language chemistry writers. All 43 students participating in the study used peer feedback prompts and rubric-based ratings to help them provide effective peer feedback comments addressing both higher- and lower-order concerns. Effectiveness is measured through the uptake of a comment in the student’s next draft. Overall, the study explores whether the prompts and ratings sufficiently support L2 writers in commenting on higher-order concerns and whether those comments are therefore also effective, that is, whether an uptake can be measured in the next draft. In addition, comparisons are drawn with comments for which the rubric specifically asks students to comment on or rate lower-order concerns. Because the participants are L2 writers, the assumption is that lower-order concern comments are likely to be more abundant and more effective, whereas comments focusing on higher-order concerns, guided by the prompts and rubric, are less effective and cause greater ambiguity, resulting in fewer uptakes. Overall, the results inform a discussion of how rubrics within web-based peer review systems can and should be used and adapted to accommodate L2 writers.

Speakers 4, 5, and 6 (Damian Finnegan, Asko Kauppinen, Anna Wärnsby) will discuss a shift of focus from rubrics to commenting as the main facilitator of assessment. It has been put forward that community rubrics work best within bounded, goal-based, synchronous curricula and that more particularised rubrics are required in more particularised contexts. However, we argue that this focus on rubrics for purposes of assessing writing across contexts may be misplaced. The real question is how rubrics are actually implemented in the assessment of writing; that is, how do we consistently detect in writing the features that the rubric is designed to address? Although there may be a high degree of agreement about the rubric categories, the realisation of these categories in commenting and assessment may remain inconsistent. We propose a shift of focus from rubrics in general to what we call targeted comments, which complement rubrics. Targeted comments are not only aligned with the different rubric categories but also customised for designated assignments. Thus the feedback can be calibrated more precisely to reflect specific course learning outcomes and different assignment aims. Across writing courses, the rubric may remain the same, while the targeted comments (targeted community comments or targeted particularised comments) concretise particular learning outcomes and goals through a common rubric. In this way, rubrics matter less as a tool for assessment in their own right and more as an instrument to categorise and focus the commenting through which assessment is realised.
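As an illustration only (the speakers describe no implementation; the assignment names and comment texts below are hypothetical), such a targeted-comment bank could be organised in Python as a common rubric whose categories are concretised per assignment:

    # Illustrative sketch, not the speakers' system: a shared rubric whose
    # categories map to assignment-specific targeted comments.
    COMMUNITY_RUBRIC = ("focus", "evidence", "organisation", "style")

    TARGETED_COMMENTS = {
        # hypothetical assignment names and comment texts
        "lab-report": {
            "evidence": "Tie each claim in the discussion to a figure or table in the results.",
            "organisation": "Follow the IMRaD structure introduced in the course readings.",
        },
        "position-paper": {
            "evidence": "Support the thesis with at least two of the assigned sources.",
            "organisation": "State the position up front and signpost counterarguments.",
        },
    }

    def targeted_comment(assignment, category):
        """Return the assignment-specific comment for a shared rubric category."""
        if category not in COMMUNITY_RUBRIC:
            raise ValueError(f"{category!r} is not a rubric category")
        return TARGETED_COMMENTS.get(assignment, {}).get(category)

The rubric tuple stays the same across courses; only the comment texts change, which is the sense in which the rubric categorises and focuses the commenting rather than doing the assessing itself.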

References

Anson, CM, Dannels, DP, Flash, P & Housley Gaffney, AL 2012, ‘Big rubrics and weird genres: The futility of using generic assessment tools across diverse instructional contexts’, Journal of Writing Assessment, vol. 5, no. 1. Retrieved from http://www.journalofwritingassessment.org/article.php?article=57

Moxley, JM 2013, ‘Big Data, Learning Analytics, and Social Assessment Methods’, Journal of Writing Assessment, vol. 6, no. 1.

