
PROJECT ONE: ONLINE READING COMPREHENSION ANALYSIS

Elizabeth Oliver Marsh

With over two billion users worldwide, the Internet has saturated nearly every aspect of life. We use this powerful resource in our personal lives as a means of communication and expression, and we use it in the workplace “to share information, communicate, and solve problems” (Leu et al., 2008). To be successful in this increasingly digital world, one must possess “the skills, strategies, and dispositions necessary to successfully use and adapt to the rapidly changing information and communication technologies [that] influence all areas of our personal and professional lives” (Leu et al., 2007). As educators, it is our primary mission to prepare our students not only to function in this digital, networked world of information and communication, but also to navigate that world efficiently and to contribute to it constructively and authentically. Because “[r]eading a Web text makes greater demands on critical reading skills than reading printed texts, owing to the large proportion of nontextual elements, the possibilities for interactivity, and the demands the nonlinear character of the Web make on the associative ability of the student,” the integration of ‘new’ literacies is fundamental to a relevant education. In addition to enhanced critical reading skills, “[t]o be able to use the abundance of information properly, students must be capable of continually making decisions on their own needs for information” (Kuiper & Volman, 2008). According to Leu et al. (2007), these new literacies include the abilities to identify important questions, locate information, analyze the usefulness of that information, synthesize information to answer those questions, and then communicate the answers to others.

As a History educator, I believe these skills are essential to being an informed student and to becoming an informed citizen. Developing a strong research question establishes a pathway and direction for one’s search, which is incredibly useful in navigating the wealth of information (and misinformation) available. In such a content-rich environment, it is “of paramount importance that you know precisely what you are looking for” (Kuiper & Volman, 2008). According to Henry’s (2006) findings, “the ability to search and locate information can be described as a gatekeeper skill in online reading,” wherein “those who possess the online reading comprehension skills necessary to locate information can continue to read and solve their problem; those who do not possess these skills cannot” (Leu et al., 2008). However, “[s]tudents must not only have the skills to search for information, but also the skills to process and use the information they find” (Kuiper & Volman, 2008). Central to processing this information is the ability to evaluate sources for usefulness, validity, and credibility. While evaluating historical documents and texts for bias and context already occurs in my History course, “it is perhaps more important online, where anyone can publish anything” (Leu et al., 2008). This evaluative process is central to determining which information provides students with the evidence and details necessary to support a historical claim.

As the developers of the ORCA (Online Reading Comprehension Assessment) explain, “[i]n order to develop these skills, we require assessment instruments for online reading comprehension that are both psychometrically sound and practical” (Leu et al., as cited in ORCA Project Overview, 2008). Curriculum designs feature assessment as a crucial element to inform instruction and to identify the areas and students needing the most support; therefore, assessments in new literacies are needed to inform such instruction. The results of such online assessments “have provided us information to systematically refine our evolving understanding of the online reading comprehension skills demonstrated by proficient and less skilled adolescent online readers” (Leu et al., 2008). The data gathered allow educators to identify “patterns of ineffective online reading processes across several populations of adolescent readers [which has] helped inform our decisions about which skills, strategies, and dispositions we might focus on” in instruction (Leu et al., 2008). Perhaps the data will best serve to convince educators, administrators, and policy-makers that the skills necessary for meaning-making with online texts differ from the literacy skills we currently value in education. For example, the research conducted by Leu et al. (2008) found “a low correlation between” online and offline reading scores, providing evidence “that online reading is not isomorphic with offline reading”; moreover, “[d]ata from a third study … revealed additional evidence that online reading comprehension ability is not isomorphic with offline reading comprehension ability.” It is only with the support of educational leaders that new literacies will be integrated into our state and national standards, implemented in our classrooms, and backed by the professional development necessary to prepare teachers to support students in online learning. The first step in this implementation is getting all of the stakeholders to agree that fundamental change is necessary, and it is often data such as those gathered from ORCA and similar projects that drive such paradigm shifts in education. Having the opportunity to evaluate several of my students with such a tool, and to see such data, was incredibly enlightening and informative for my own practice.

Generally, Student 1 is a strong offline reader who consistently performs well compared with her peers. She frequents Facebook, Instagram, Tumblr, and Pinterest; surfs the web for a variety of videos; and has constructed several multimedia products in my course. Because Student 1 is a strong reader and a deliberate writer, she performed well on the ORCA. Scoring 14/16 overall, she scored particularly high in locating information (4/4) and synthesizing information (4/4). Because these two skills are taught explicitly in our school system, the high results were not surprising. In fact, the synthesis tasks began to seem repetitive to her, as she felt like she “just [kept] typing the same thing again and again.” While somewhat proficient in her communication, Student 1 struggled in her evaluative performance. Just as Kuiper and Volman (2008) found that “[s]tudents hardly assessed the reliability and authority of the information,” so it was with Student 1. While she scored well in the area of evaluation by addressing the questions presented to her in the chat, her performance and the accompanying field notes indicate that evaluating sources was an afterthought to accomplishing the task and occurred only because she was prompted to do so. Although she may have internalized the process of evaluating, Student 1’s recorded test shows no attempt to identify the authors of any of the sites before extracting the useful information needed to address the question presented.

Unlike Student 1, Student 2 is assuredly less technologically savvy. When given opportunities to display her learning digitally, she will often opt for the traditional written task instead, frequently completing it by hand. She is a strong reader and writer and generally performs well on tasks that are linear and concrete. As for her performance, she scored 11/16 possible points, which I would consider an impressive showing for Student 2. Like Student 1, Student 2 failed to properly address her email and to reference where she found the information she shared with Mrs. Marin (Communication: 2/4). She scored 3/4 possible points in each of the areas of location, synthesis, and evaluation (a fair performance); however, Student 2 grappled most with the search process. Because the task of googling an article featured in a Boston newspaper was timed, Student 2 was not able to accomplish it within the timeframe provided. In fact, developing a combination of keywords to reach the site was her biggest challenge. Confirming studies indicating that “searching for precise, concrete information makes high demands on the search strategies of the Web user,” Student 2 was unable to locate the specific article she was seeking (Kuiper & Volman, 2008). Unfortunately, as Henry (2006) rightly points out, “[i]f one does not have the ability to locate information in an effective and strategic manner, then all other reading activities are impeded, as the user cannot get beyond this point.” Student 2 also neglected to assess the reliability or credibility of the sources she used in her search, stating, “I figured, ‘Why would you give them to me if they were bad?’”


While searching for and locating information are considered “gatekeeper” skills and are crucial to developing the subsequent literacies, I believe my instructional interventions will be best leveraged in the area of evaluative skills. Because both students’ performances indicate the need for instructional intervention in the evaluation of online texts, integrating the evaluative process within the online environment is a clear necessity for my curriculum. As a History teacher, I already teach the evaluation of primary and secondary historical documents, modern texts, and documentary films for reliability, credibility, and bias. While I teach this using a variety of texts, I had not previously thought about extending this evaluation process to online texts. In history, we learn to evaluate the texts we encounter with a series of questions: Who wrote the document? Who was the intended audience? What was the story line? Why was the document written, and what was its purpose? What basic assumptions did the author make? Can I believe this document? What can I learn about this society? What does this document mean to me? This series of questions varies only slightly from the evaluative ‘checklist’ offered by Leu and his colleagues, with which students can quickly and effectively determine the usefulness and validity of the information offered on a site (Leu et al., 2007):

1) Evaluating understanding: Does it make sense to me?
2) Evaluating relevancy: Does it meet my needs?
3) Evaluating accuracy: Can I verify it with another reliable source?
4) Evaluating reliability: Can I trust it?
5) Evaluating bias: How does the author shape it?

While it would not be feasible to work a checklist like the TICA II into every search and research challenge in my classroom, it is certainly feasible to work Leu’s quick list into my instructional practices. In fact, as many other teachers in my school use the web as a source of content and as a search tool for student use, this list provides the specificity necessary to glean quality results from a search while remaining broadly applicable to all courses and content areas. Leveraging this tool across the curriculum would ensure that students are consistently expected to evaluate online content in a meaningful way every time they access online information. In my observations of Students 1 and 2, I noticed that once prompted to evaluate the source used, both did so, and in doing so they exhibited the basic skills necessary to evaluate for credibility. If possessing the skills is not the issue, this observation leads me to wonder whether it is enough simply to prompt students to ask critical questions as they encounter new online texts. Perhaps increased accountability across several courses, placing value on the critical evaluation of sites, would help students internalize the evaluation of sources until it becomes second nature.


Unfortunately, as many early studies have shown, “[t]he sophistication, complexity and specificity of information obtained through electronic resources frequently exceed the comprehension levels of the students as well as their needs” (Kuiper & Volman, 2008). Given this known limitation, sending our students into the world ill-equipped to navigate and construct meaning within the world of the Internet would be to fail in our mission to develop in them the skills to be constructive, positively contributing citizens of an increasingly digital world. For these skills to take center stage in our classrooms, educational stakeholders must be convinced that traditional and nontraditional texts genuinely differ and that the skills necessary to grapple with such different texts must be addressed in our education system. To convince them, we must arm ourselves with inarguable evidence gathered from a variety of valid and reliable assessments. Building this body of evidence requires assessments that can withstand tests of validity and reliability, which in turn requires strengthening and refining the instruments used to measure online reading proficiencies.

While every assessment measure is inherently limited, the ORCA does provide a window into the skills required for effectively locating, evaluating, synthesizing, and communicating information. However, it does not measure all facets of digital literacy. Because students must “ask themselves continually what it is they want to know, what is the purpose of knowing it, and what sort of information can contribute to that purpose” (Kuiper & Volman, 2008), it is notable that the assessment provided no method for capturing this ‘continuous questioning’. While my field notes do record some of those questions, I was hesitant to prompt the questioning for fear that doing so would create an awareness in the subject that might skew the results collected. Additionally, because online navigation often offers students a “choose your own adventure” research path, one of ORCA’s shortcomings is that its search tasks were tightly constructed and quite limited. Students could not navigate their research tasks without restriction to locate information freely, which limits the validity of the results gathered from the assessment measure. Given that my students performed relatively well, I also question how such an assessment can show growth over time. Performance scores focused on accomplishing the given task, not on judging the quality or craftsmanship of the performance. The assessment rubric served as a checklist of requirements (e.g., referencing websites and integrating information from more than one site) but did not gauge the quality and skill with which students performed these tasks. If accomplishment is all that is measured, how can we measure growth and development within those skill sets? Although I prefer to think that timed tasks are not the best way to ensure quality performance, measuring growth through time on task, rather than setting time limits, may be one way to capture development. In the case of Student 2, I believe she would have found the specific site she was seeking, given more time; her skills growth could then be monitored through time measurements. The ORCA is also not a feasible assessment to administer regularly or within the classroom setting, as recording the data necessary for evaluation is laborious and meaningfully assessing student performance is incredibly time-consuming. Because tests such as the ORCA are, by nature, limited, it is essential to supplement them with a variety of other valid and reliable measures of online comprehension.

Studies have shown some success with the Formative Assessment of Students’ Emerging Knowledge of Internet Strategies (FASEKIT). Administered several times over several weeks, this 15-minute assessment “can help to quickly determine the declarative, procedural, and conditional knowledge (Paris, Wasik & Turner, 1991)” of students, using short-answer questions that are much easier to administer and much less time-consuming to assess than the ORCA (Leu et al., 2008). Another possibility is to leverage a forced-response assessment to measure online reading. While not an ideal tool for gaining insight into a student’s comprehension abilities, one such instrument, Henry’s Digital Divide Measurement Scale for students, “which included 14 forced-response items that measured reading to locate and reading to critically evaluate online information[,] … proved to be both statistically valid and reliable among scores of 1,768 middle school students, and thus provided an objective alternative to a rubric scoring system for estimating skills in online location and critical evaluation” (Leu et al., 2008). Such a test would provide a quick look at the areas and students needing focused interventions, and it could be administered by the average classroom teacher. As data-driven instruction continues to gain ground as ‘best practice’ in education, using such data to flexibly group students and to target skill deficiencies could happen in the average classroom with such a convenient tool. As Leu et al. (2008) point out in their research on assessing online literacies, “[m]uch more work must be conducted in this area to more fully understand optimal assessment strategies for the variety of needs that we have.”


REFERENCES

Henry, L. A. (2006). SEARCHing for an answer: The critical role of new literacies while reading on the Internet. The Reading Teacher, 59, 614-627.

Kuiper, E., & Volman, M. (2008). The Web as a source of information for students in K-12 education. In J. Coiro, M. Knobel, C. Lankshear, & D. Leu (Eds.), Handbook of research on new literacies. Mahwah, NJ: Erlbaum.

Leu, D. J., Coiro, J., Castek, J., Hartman, D., Henry, L. A., & Reinking, D. (2008). Research on instruction and assessment in the new literacies of online reading comprehension. In C. C. Block, S. Parris, & P. Afflerbach (Eds.), Comprehension instruction: Research-based best practices. New York: Guilford Press.

Leu, D. J., Zawilinski, L., Castek, J., Banerjee, M., Housand, B., Liu, Y., & O’Neil, M. (2007). What is new about the new literacies of online reading comprehension? In A. Berger, L. Rush, & J. Eakle (Eds.), Secondary school reading and writing: What research reveals for classroom practices. Chicago, IL: National Council of Teachers of English/National Conference of Research on Language and Literacy.

ORCA Project Overview. (n.d.). University of Connecticut, Neag School of Education. Retrieved March 16, 2013, from http://www.orca.uconn.edu/orca-project/project-overview/

