Investigation of Oral Proficiency & Course Level in Undergraduate French SLA

Sophia Minnillo
FRENCH
Abstract

This study compared human and computerized measures of oral proficiency in a second language and explored the distribution of oral proficiency levels across university course levels. Previous investigations that have used the ACTFL Oral Proficiency Interview (OPI), a human-rater assessment, to measure oral proficiency in second languages have found positive but varied relations between oral proficiency and level of instruction. Given the potential for variability in human scoring of the OPI, this study tested computerized assessments of oral proficiency, namely Computerized Language Analysis (CLAN), with the ultimate intention of rendering proficiency assessment more objective. We assessed a corpus of elicited speech samples from 16 undergraduate students learning French as a second language in university courses. We measured learners’ oral proficiency in terms of complexity, accuracy, and fluency (CAF) through CLAN and native speaker evaluations. Computerized and human assessments of oral proficiency differed for the majority of measures, indicating significant differences in how computers and humans evaluate oral proficiency. Neither the human nor the computerized measures revealed strong differences in oral proficiency between course levels.
Introduction

Measures of oral proficiency serve as one subset of the measures that second language educators use to quantify learners’ proficiency in a second language. Due to the increased interest in oral proficiency during the communicative language teaching movement (Brown, 2013), researchers have begun to investigate the consistency with which educators meet oral proficiency objectives for specific courses and course levels. Studies from a variety of universities have found significant inconsistency in the overall proficiency levels of students from the same course levels (Goertler et al., 2016; Swender, 2003) as well as in the oral proficiency levels of students from the same course levels (Tschirner et al., 1998). These studies demonstrate a need for further investigation into disconnects between standards for oral proficiency, as prescribed for each course level, and the actual oral proficiencies attained by and required of students. These incongruences warrant great concern from students, educators, and employers. Students may suffer from reaching a lower level of oral proficiency than they anticipated due to being placed in a course with students of vastly different oral abilities. Students in a teaching certification program may not qualify for certification based on inadequate oral training in a university program (Goertler et al., 2016). Educators may face challenges including a need for greater differentiation in the classroom and the possibility of being deemed ineffective instructors. Employers may make hiring decisions based upon the candidate’s completion of certain courses yet find that the candidate
8 | Emory Undergraduate Research Journal
does not have the requisite oral proficiency to succeed in the position (Brown, 2013). Standardization of oral proficiency objectives and outcomes for specific courses might alleviate these concerns.

Many of the studies investigating the relation between oral proficiency and course level have measured oral proficiency using the American Council on the Teaching of Foreign Languages (ACTFL) Oral Proficiency Interview (OPI) (Thompson, 1996; Tschirner, 1992; Magnan, 1986). OPI scores are determined by two or more professional evaluators, who rate speakers’ performance according to ACTFL guidelines. The possibility of human subjectivity in OPI ratings has fueled criticism of the evaluation from SLA researchers. Lennon (1990, p. 412) noted that basing scores on individuals’ ratings can result in great score variability. Tschirner and Heilenman (1998, p. 151) reported that the scores that ACTFL professionals assigned to test-takers differed in 40% of cases, finding that “sixty percent of perfect agreement may still fall somewhat short of the level of reliable judgment upon which important educational decisions such as certification or the satisfaction of a language requirement should be based.” In response to these criticisms, the current study investigates measures of oral proficiency that may eliminate the variability and subjectivity of human ratings.

In addition to the OPI, measures of complexity, accuracy, and fluency (CAF) are often used to assess proficiency in a second language (L2) in SLA studies (Ahmadian, 2012; Ellis, 2009; Skehan, 1989). Researchers measure CAF by performing linguistic analysis using tools including CLAN and Praat software (Baker-Smemoe