Journal for Effective Schools
Research, Practice, Policies
Volume 11, Number 1

In This Issue:

The Latest Research on Effective Teachers: Implications for Principals
Leslie S. Kaplan
Efficacy, Consequences and Teacher Commitment in the Era of No Child Left Behind
Herbert Ware, Jehanzeb Cheema, and Anastasia Kitsantas
School District Budget Development: A Shift to Link Purse to Performance
Scott Burchbuckler
Our Mission The Journal for Effective Schools provides educators and administrators involved or interested in the Effective Schools process with the opportunity to share their research, practice, policies, and expertise with others.
Book Review: How to Create and Use Rubrics for Formative Assessment and Grading
Susan M. Brookhart
Alexandria, VA: Association for Supervision & Curriculum Development, 2013
$27.95, 158 pages
Reviewed by Leslie S. Kaplan, School Administrator (retired), Newport News Public Schools, Newport News, VA, and William Owings, Professor of Educational Leadership, Old Dominion University, Norfolk, VA

Published by the Journal for Effective Schools at Old Dominion University, College of Education, Educational Foundations and Leadership

ISSN 1542-104X; ISSN 2166-2908 (online)
Journal for Effective Schools, Spring 2013

EDITORIAL BOARD

EXECUTIVE EDITOR
William A. Owings, Old Dominion University

EXECUTIVE CO-EDITOR
Leslie S. Kaplan

PRODUCTION EDITOR
Holly A. Chacon, Colley Avenue Copies and Graphics, Inc.

EDITORIAL BOARD
Harriet Bessette, Kennesaw State University
Linda Bol, Old Dominion University
Linda Darling-Hammond, Stanford University
Michael Fullan, OISE, University of Toronto
Robert Gable, Old Dominion University
Joseph Murphy, Vanderbilt University
Edward Pajak, Johns Hopkins University
Anthony Rolle, University of South Florida
Send all submissions and inquiries to the Editor:
William Owings
Journal for Effective Schools
Old Dominion University
Norfolk, Virginia 23529
wowings@odu.edu
Phone: 757.683.4954
Fax: 757.683.4413
The Journal for Effective Schools Volume 11, No. 1 2013
AIM and SCOPE

The Journal for Effective Schools publishes original contributions in the following areas:
Research and Practice – Empirical studies focusing on the results of applied educational research specifically related to the Effective Schools Process.
Educational Practices – Descriptions of the use of the Effective Schools Process in classrooms, schools, and school districts, including instructional effectiveness, evaluation, leadership, and policy and governance.
Preparation of Educational Personnel – Research and practice related to the initial and advanced preparation of teachers, administrators, and other school personnel, including staff development practices based on the Effective Schools Process.
Other – Scholarly reviews of research, book reviews, and other topics of interest to educators seeking information on the Effective Schools Process.
CORRELATES OF EFFECTIVE SCHOOLS
A clearly stated and focused mission on learning for all – The group (faculty, administration, parents) shares an understanding of and a commitment to the instructional goals, priorities, assessment, procedures, and personal and group accountability. Their focus is always, unequivocally, on the student.
A safe and orderly environment for learning – The school provides a purposeful, equitable, businesslike atmosphere that encourages, supports, allows mistakes, and is free of fear. School is a place that does no harm to developing psyches and spirits.
Uncompromising commitment to high expectations for all – Those who are leaders empower others to become leaders who believe and demonstrate that all students can attain mastery of essential skills. This commitment is shared by professionals who hold high expectations of themselves.
Instructional leadership – Although initially coming from the principal, teacher, or administrator, the goal is to include all participants as instructional leaders as their knowledge expands as a result of staff development. New insights excite and inspire. In the accountable learning community, everyone is a student and all can be leaders.
Opportunity to learn is paramount – Time is allocated for specific and free-choice tasks. Students take part in making decisions about goals and tasks.
Frequent monitoring of progress – Effective schools evaluate the skills and achievements of all students and teachers. No intimidation is implied. Rather, monitoring often is individualized, with improvements in learning as the goal.
Enhanced communication – Includes home, school, and community coming together as partners in learning for all.
*Adapted from Phi Delta Kappa International
From the Editors

Welcome to the Volume 11, Number 1 issue of the Journal for Effective Schools. Over the next six months, we will be making changes in the Board's composition to better promote the journal's aims, ideals, and publication quality. We invite your input, both positive and negative, to help us in this revisioning process. Over the last month, the following colleagues have agreed to work with us on the Editorial Board:
Linda Darling-Hammond (Charles E. Ducommun Professor of Education at the Stanford University School of Education)
Michael Fullan (Professor Emeritus at OISE/University of Toronto)
Joseph Murphy (Professor of Education and Endowed Mayborn Chair in the Department of Leadership, Policy and Organizations and Associate Dean, Peabody College of Vanderbilt University)
Edward Pajak (Associate Dean for Research and Doctoral Programs at Johns Hopkins University)
Anthony Rolle (Chair of the Department of Educational Leadership and Policy Studies at the University of South Florida)
This issue presents three articles and a book review. The lead article, "The Latest Research on Effective Teachers: Implications for Principals," provides readers with a thorough update on the literature related to identifying, hiring, and keeping effective teachers. Over the last ten years, we have learned much about this topic. Kaplan's article synthesizes this research and makes practical applications for principals. She begins by referencing Hanushek's 2011 study, which indicates that replacing the lowest performing 5 to 8% of teachers with those of only average effectiveness could take the United States near the top of international math and science standings and create economic value of $100 trillion. Written from the effective schools perspective, the article contains an overview of the latest research on traditional and alternative teacher preparation and certification as they relate to student achievement, and on principals' teacher evaluations and student achievement – each with practical implications for principals. The article continues by addressing preparation factors that affect teaching effectiveness and student achievement. As any experienced administrator knows, not all teacher preparation programs are created equal. Given that, the author reviews state-level studies linking teacher preparation programs, teacher effectiveness, and student achievement, as well as the characteristics and practices that make teachers effective. Asking a teaching applicant a question like, "Describe for me the main focus of your teacher preparation program and give examples of how these affected what you know about teaching," is one concrete tip offered to
administrators who want to hire a teacher with characteristics linked to increased student achievement. Kaplan concludes that principals who build sustainable learning cultures and know the research – rather than those who depend on the "old and reliable" proxies for teacher quality – are more likely to find and keep teachers whose students make at least one year's worth of academic gain in one school year. The implications bring to mind John Lennon's song, Imagine.

The second article, by Ware, Cheema, and Kitsantas, entitled "Efficacy, Consequences, and Teacher Commitment in the Era of No Child Left Behind," is an empirical article that discusses the impact of a high-stakes achievement environment on teacher and principal efficacy and the implications for effective schools. Principal efficacy – the belief in one's ability to create change – and teacher efficacy – the degree to which one feels personally capable of effectively teaching all students to successful outcomes – are important aspects of a positive school culture. Examining these factors using data from the Schools and Staffing Survey, the authors find that teacher commitment was higher where student performance goals were met. Moreover, teacher efficacy and the principal's role in establishing student performance standards had a positive influence on teacher commitment. Interestingly, the study showed that offering school attendance choice was associated with increased teacher commitment, while offering supplemental educational services to students was associated with reduced teacher commitment. You will find an interesting research design (hierarchical linear modeling) and a good read with implications for effective schools in this article.

The third article, by Scott Burckbuchler, addresses a topic relatively new to JES – performance-based budgeting. Performance-based budgeting, also known as results-based budgeting, ties funding decisions to specific performance outcomes. "School District Budget Development: A Shift to Link Purse to Performance" is a mixed methods study that examines the state of budgeting practices in Region 2 of Virginia's eight regions. Region 2 contains 15 of the Commonwealth's 134 school divisions (districts are called "divisions" in Virginia) and serves more than 280,000 students. The study examined whether a performance-based budgeting orientation has increased since the start of NCLB and whether this budgeting practice is associated with student achievement. The findings indicate that since NCLB, school districts appear to have made significant changes in the budget decision-making process, moving toward performance-based practices. Moreover, a linear relationship appears to exist between the increased use of performance-based budgeting and increased
student achievement. It is a good study and a good read, and it should foster some thought about budgeting practices and future research.

Finally, we review Susan Brookhart's new book, How to Create and Use Rubrics for Formative Assessment and Grading. Published by ASCD (2013), the book is a "must read" for undergraduate and graduate students in education programs and for teachers and administrators who need a better understanding of the concept, development, and use of rubrics to increase teaching effectiveness, gain formative assessment information, and enhance student learning. Well-designed and implemented rubrics, such as those Brookhart describes, may prove a useful tool in helping learners – and their teachers – master the Common Core reading and communication standards. After reading the review, you may wish to purchase the book for your own learning or to promote professional development in your school.

Happy reading!

William A. Owings and Leslie S. Kaplan, Editors
Table of Contents
The Journal for Effective Schools
Articles

The Latest Research on Effective Teachers: Implications for Principals .......................................... 1
Leslie S. Kaplan

Efficacy, Consequences and Teacher Commitment in the Era of No Child Left Behind ............ 35
Herbert Ware, Jehanzeb Cheema, and Anastasia Kitsantas

School District Budget Development: A Shift to Link Purse to Performance ............................... 66
Scott Burchbuckler
Book Reviews

How to Create and Use Rubrics for Formative Assessment and Grading ............................................................................................... 83
Susan M. Brookhart
Alexandria, VA: Association for Supervision & Curriculum Development, 2013
$27.95, 158 pages
Reviewed by Leslie S. Kaplan, School Administrator (retired), Newport News Public Schools, Newport News, VA, and William Owings, Professor of Educational Leadership, Old Dominion University, Norfolk, VA
The Latest Research on Effective Teachers: Implications for Principals

Leslie S. Kaplan, Ed.D.
Newport News Public Schools Administrator (retired)
Adjunct Research Professor, Old Dominion University

Abstract

With school and educator accountability increasingly dependent on measured student achievement, principals want to identify, hire, and keep effective teachers. The current professional literature contains well-designed and conducted studies that address the importance of teacher certification; assess the relative effectiveness of traditionally and alternatively prepared teachers in generating student achievement; relate how well principals evaluate teacher effectiveness; and isolate the characteristics and classroom behaviors that differentiate more from less effective teachers. This paper describes these findings in terms that principals can use to make more informed hiring decisions.

Key Words: teacher effectiveness, student achievement, alternate teacher preparation, traditional teacher preparation, classroom observations

Type of Article: Literature Review with Practitioner Implications

Introduction

Teacher effectiveness is seen as the key to increased student learning, a reduced achievement gap among diverse student groups, and, eventually, a better prepared workforce. Scholars agree that although family background continues to predict most of the variation in student achievement, teacher effectiveness is probably the most important school-based factor affecting it (Sawchuk, 2011b). Public education policy, such as 2009's $4.35 billion Race to the Top (RttT) grant program, is making student achievement outcomes a "significant factor" in determining teacher and principal effectiveness (Crowe, 2011; U.S. Department of Education, 2009). Therefore, principals are eager to identify, hire, and retain teachers who can help every child learn to high standards.

The U.S. Department of Education (2009) defines effective teachers as those who can generate acceptable student achievement outcomes, that is, at least one grade level of student growth in an academic year. But effective teachers produce more than high student test scores; they have long-term personal and economic impacts. Economist Eric Hanushek (2011a) estimates that a teacher who performs one standard deviation above the mean effectiveness in a class of 20 students can annually produce marginal gains of over $400,000 in the present value of students' future earnings; the bigger the class size, the proportionally larger the earnings. He also approximates that replacing the lowest 5% to 8% of teachers with colleagues of average effectiveness could propel the U.S. to near the top of international math and science rankings and yield a present value of $100 trillion.
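To make the scale of Hanushek's estimate concrete, here is a minimal back-of-the-envelope reading of his figures; the roughly $20,000 per-student value is simply what the reported $400,000 class total implies for a class of 20, not a number reported separately in this article:

\[
\text{annual value of a 1-SD-above-average teacher} \approx 20 \ \text{students} \times \$20{,}000 \ \text{per student} \approx \$400{,}000
\]

Read this way, the per-class figure scales linearly with class size, which is why larger classes imply proportionally larger earnings gains.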
At the same time, Hanushek (2011b) concludes what principals already know: teacher effectiveness varies greatly from classroom to classroom. And so do student outcomes. Teachers who work in a given school with students of similar demographics can be responsible for increases in math and reading levels that span from one-half year to one-and-a-half years of learning every academic year (Hanushek, 2011b). Clearly, identifying, selecting, and supporting effective teachers has both short- and long-term implications for the children, the community, and the nation.

Understanding the current teacher effectiveness research – which preparation programs generate the most effective teachers, how traditional "teacher quality" factors such as certification and advanced education affect student outcomes, how accurately principals identify effective teachers from classroom observations, which teacher characteristics and classroom behaviors are linked to increased student achievement, and how classroom observations using standards-based rubrics can help principals identify and improve teacher efficacy and student achievement – can help school leaders create more effective schools.

Comparing Traditional and Alternate Teacher Preparation

The relative professional status of traditionally and alternatively prepared teachers is changing. Early research on this topic indicated that alternatively prepared teachers were less effective than traditionally prepared teachers in producing student achievement (Rivkin, Hanushek, & Kain, 2005; Rockoff, 2004). The latest research is presenting a different picture. In fact, in 2012, the National School Boards Association urged the 112th Congress and the Obama Administration to expand the pool of new and effective teacher candidates by supporting programs that offer alternative routes to certification (National School Boards Association, 2012).

Traditional U.S. teacher education programs – 1,434 state-approved colleges of education – prepare elementary and secondary teachers (Alderman, Carey, Dillon, Miller, & Silva, 2011). These programs can vary widely in rigor of selectivity, design, duration, program content, and clinical, field-based practice – even within institutions (Greenberg & Walsh, 2008; Walsh, Glaser, & Wilcox, 2006).
Alternative teacher preparation programs, also widespread and highly varied, are supplying a growing portion of today's teachers. The National Center for Education Information (NCEI) defines alternative paths to teacher preparation as "state-defined routes through which an individual who already has at least a bachelor's degree can obtain certification to teach without necessarily having to go back to college and complete a college, campus-based teacher education program" (National Center for Education Information, 2010, p. 1). In 2010, 48 states and the District of Columbia reported that they had at least some type of alternate route to teacher certification, making 136 state-defined alternate routes to teacher certification available. Nationally, one-third of first-time public school teachers hired annually now enter the profession through an alternative teacher preparation program (Committee on Education and the Workforce, 2012). Since the mid-1980s, approximately 500,000 teachers have entered the profession through alternative routes (NCEI, 2010).

Alternate teacher preparation routes intend to address varying purposes: reduce teacher shortages, attract individuals with degrees in high-needs areas such as science and math, attract mid-career changers, or challenge the status quo. Alternative programs may be housed within higher education settings, school districts, or other locations, and they differ from one another in curriculum content, comprehensiveness, duration, and intensity. They have divergent entry and program requirements and completion steps, and they enroll candidates of differing ages and prior experiences. And their graduates become teachers of record with differing degrees of competence. All these factors pose difficulties for investigators wanting to make comparisons or draw conclusions (Feistritzer & Haar, 2010) – as well as for principals trying to identify and hire effective teachers.

Although alternative teacher preparation programs intend to provide innovative and flexible routes into the teaching profession, the distinctions between traditional and alternative preparation routes are not always clear. For instance, alternative programs located within schools of education are often repackaged traditional preparation programs with adjusted timelines or courses offered at night, online, or on weekends (National Governors Association, 2009). Since traditional teacher preparation programs are extremely diverse in terms of candidate selectivity, number of required courses, duration and timing of coursework and fieldwork, and training intensity (National Research Council, 2010), overlap in practices within and between the two approaches is common (Johnson, Birkeland, & Peske, 2005; Perry, 2011). In fact, researchers are concluding that more variation exists within the "traditional" and "alternative" categories than between them (Grissom & Vandas, 2010; National Research Council, 2010; Sass, 2011). As a result, researchers and education policymakers question whether states' alternative routes to licensure reflect a genuine alternative to the traditional teacher preparation programs (Walsh & Jacobs, 2007).
Any pathway is likely to involve tradeoffs – in rigor of candidate recruitment and selection, depth and amount of curricula related to teaching and learning, program length, and duration and quality of field experiences that tie theory to practice and provide timely and relevant feedback to the novice teacher – with more selective routes and those requiring greater effort and time to complete yielding fewer but more highly effective teachers (National Research Council, 2010).

Within the past decade, research has described the features of alternatively prepared and certified teachers and compared their effectiveness – in terms of value-added outcomes for students and retention in their schools – with that of traditionally prepared and certified teachers as well as the unlicensed teachers they replaced (Boyd, Grossman, Lankford, Loeb, & Wyckoff, 2006; Boyd, Grossman, Lankford, Loeb, & Wyckoff, 2009; Boyd, Lankford, Loeb, Rockoff, & Wyckoff, 2007; Constantine, Player, Silva, Hallgren, Grider, & Deke, 2009; Feistritzer & Haar, 2008; Grossman & Loeb, 2008; Decker, Deke, Johnson, Mayer, Mullens, & Schochet, 2005; Kane, Rockoff, & Staiger, 2006; Nunnery, Kaplan, Owings, & Pribesh, 2009; Xu, Hannaway, & Taylor, 2011). Research has also compared characteristics of alternatively and traditionally prepared teachers. For example, 22% of teachers coming through alternate routes are men, compared with 16% of teachers entering the profession through traditional programs (Feistritzer, 2011).

Both traditional and alternative teacher preparation routes have their critics. Traditional teacher preparation skeptics note that although these programs can produce teachers, they are less successful at ensuring that those teachers effectively meet schools' and students' needs (Wilson, Floden, & Ferrini-Mundy, 2001; National Council on Teacher Quality, 2013). Despite the requirement that all states must identify substandard teacher preparation programs, over half of all states have never identified a single program, and those named face few consequences (Alderman, et al., 2011). In addition, state approval and voluntary accreditation, the two quality control measures available for program accountability, have been unable to resolve this problem. Research has found no difference in the student achievement outcomes of teachers educated at accredited programs versus those educated at non-accredited programs, and half of all institutions are not accredited (Levine, 2006). To date, accreditation evaluates the process of preparing teachers; it does not directly evaluate graduates' instructional skills in relation to their students' actual achievement (Crowe, 2010). In turn, alternative teacher preparation detractors argue that most programs offer training that is inadequate to prepare new entrants for the challenges of teaching in urban schools, and that their graduates are less effective teachers (National Commission on Teaching and America's Future, 1996; National Council for Accreditation of Teacher Education, 2010b).
Accordingly, federal and state officials and policy makers are advocating teacher education reform that moves from counting inputs (such as the percent of teacher preparation students who pass state certification exams, number of graduates, and placement rates) to measuring outcomes such as student achievement (Alderman, et al., 2011). Also, the Obama Administration is supporting initiatives to improve teacher preparation – both traditional and alternative – by connecting the effectiveness of certified teachers both to their teacher preparation programs (TPPs) and to their students' measured academic achievement. The best programs will be scaled up, and the lowest performing will be supported to show substantially improved performance or be closed (Alderman, et al., 2011). To assist this reform, 31 states now require that teacher evaluations be partially based on student achievement growth on standardized tests (Rich, 2013), and in 2012, eight states had policies that included the use of student achievement data to hold teacher preparation programs accountable for their graduates' effectiveness (National Council on Teacher Quality, 2013). Perhaps more importantly, educational accountability is coming to rely more on teachers' actual classroom performance and student achievement outcomes – factors within a principal's influence – rather than on external credentials (such as professional preparation or certification routes) to determine teacher effectiveness (Crowe, 2011).

Research Update: Teacher Certification/Preparation, Teacher Effectiveness, and Student Achievement

Just as principals want to identify and hire the most effective teachers, education researchers have long been interested in measuring a teacher's contribution to student achievement (for example, Armour, 1976; Gordon, Kane, & Staiger, 2006; Hanushek, 1971; Mendro, Jordan, Gomez, Anderson, & Bembry, 1998; Murnane & Phillips, 1981; Rivkin, Hanushek, & Kain, 2005; Rockoff, 2004; Sanders & Horn, 1998). While empirical approaches have differed, each seeks to isolate an estimate of a teacher's contribution to student achievement separate from that of the student, class, school, and other contributors. Their recent findings can provide guidance to principals about what to look and ask for when seeking to hire effective teachers.
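As a rough orientation to what these value-added approaches try to isolate – a generic sketch only, with illustrative notation of our own, not the specification used in any particular study cited here – the teacher "effect" is typically an estimated term in a model along the lines of:

\[
A_{it} = \beta A_{i,t-1} + \gamma' X_{it} + \tau_{j(i,t)} + \varepsilon_{it}
\]

where A_it is student i's test score in year t, A_i,t-1 is the prior-year score, X_it collects student, classroom, and school controls, τ_j(i,t) is the estimated contribution of teacher j who taught student i in year t, and ε_it is unexplained variation.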
Research on teacher certification. Since 2000, investigators have attempted to determine the relative effectiveness of different teacher preparation and certification routes in producing teachers capable of generating student achievement. Their results are bringing needed clarity.

First, a teacher's certification or licensure – the state's document affirming the holder is qualified to teach certain subjects at identified grade levels in the public schools – improves that teacher's effectiveness in generating student learning as compared with colleagues who lack certification/licensure. One study found that the positive effects of teachers' certification on students' mathematics achievement exceeded those of content majors in the field, suggesting that what licensed teachers learn in the pedagogical portion of their training adds to what they gain from a strong subject matter background (Goldhaber & Brewer, 2000). Similarly, another study found that students who had a certified teacher for most of their early school experience scored higher in reading than students who did not have a certified teacher (Easton-Brooks & Davis, 2009). Certification or licensure test scores seem to matter more for math than for other subjects, consistently appearing linked to improved student achievement in that subject at both the elementary level and at the high school level for algebra and geometry, but findings are mixed for other subjects (Clotfelter, Ladd, & Vigdor, 2007a, 2007b, 2007c).

A large-scale North Carolina study of high school students' learning gains found that teacher credentials – particularly licensure and certification – affect student achievement in systematic ways (Clotfelter, Ladd, & Vigdor, 2007a, 2007b, 2007c, 2010). Teachers who completed state-approved preparation programs and held a license in the specific field taught were more effective in generating student learning than colleagues prepared in alternative programs, at least during the first year of teaching. A different K-12 investigation, in the Los Angeles Unified School District, identified a weak relationship between teachers' credentials (such as experience, education, and licensure exam scores) and student achievement in math and reading (RAND, 2011). Having a license or certification to teach appears to be a necessary, but not sufficient, condition to ensure teaching effectiveness. Researchers also concluded that all the gains in student achievement related to teacher experience occur within the first five years of teaching. And they noted what many principals suspect: the uneven distribution of teacher credentials by race and socioeconomic status of high school students contributes to the achievement gap in high school (Clotfelter, Ladd, & Vigdor, 2010).

What this means for principals: The research connecting teacher certification and student achievement finds that teacher qualifications – degrees, experience, certifications, and teacher test performance – are meaningful but show only a modest relationship to student achievement (Beteille & Loeb, 2009). Certification itself is important only to the extent that it is associated with differences in teachers' instructional practices that reflect teachers' pedagogical and content knowledge and their ability to draw on that knowledge in moment-to-moment classroom interactions (Darling-Hammond, 2000). When looking for an effective teacher, at a minimum, the candidate should have successfully "completed" a teacher preparation program (traditional and alternative programs may define "completed" differently) and hold certification in the subject to be taught. These criteria should be considered necessary – but not sufficient – conditions for teaching effectiveness.
Research on traditional and alternative teacher preparation programs and student achievement. Positive relationships exist between teacher preparation and teacher effectiveness. An influential review of 57 rigorous studies of teacher preparation identified positive empirical relationships between teacher qualifications and student achievement across studies using different units of analysis and different measures of preparation, as well as in studies controlling for students' socioeconomic status and prior academic performance (Wilson, Floden, & Ferrini-Mundy, 2001). Further, the review found that alternate preparation routes attracted a diverse pool of candidates, with a mixed record for attracting the "best and brightest," and their graduates' performance evaluations showed mixed results. Nonetheless, the study concluded that teachers who come through high-quality alternative and traditional teacher preparation routes show some similarities (Wilson, Floden, & Ferrini-Mundy, 2001).

Many alternatively prepared teachers themselves acknowledge that they may not have been well prepared to produce student achievement. A survey by the National Comprehensive Center for Teacher Quality compared responses of randomly sampled first-year teachers from three alternative programs – Teach for America (TFA), New Teacher Project (NTP), and Troops for Teachers (T3) – with those of first-year traditionally prepared teachers also teaching in high-needs schools. Only 46% of the alternate route teachers said they were prepared for their first year of teaching, compared with 80% of the traditionally prepared teachers (Immerwahr, Doble, Johnson, Rochkind, & Ott, 2007).

Notably, however, newer, well-designed investigations have determined that teacher preparation can make a measurable difference in student achievement – especially in the first year in the classroom – and certain TPP characteristics appear to positively shape teaching effectiveness. But with a few years of classroom experience, the differences in teacher effectiveness across preparation programs appear to fade (Boyd et al., 2006, 2007, 2008, 2009). One longitudinal study examined individual-level data for three different teacher training programs for New York City teachers – Teach for America (TFA), New York Teaching Fellows (NYTF), and traditional 4-year college preparation programs – and the effect of teachers' qualifications on student achievement. Findings show that graduates of collegiate preparation programs were significantly more effective than teachers lacking certification and performed better than NYTF and TFA teachers during their first year in the classroom (Boyd, Grossman, Lankford, Loeb, & Wyckoff, 2006; Darling-Hammond, Holtzman, Gatlin, & Heilig, 2005; Kane, Rockoff, & Staiger, 2006). Moreover, in this same study, certain preparation program and teacher characteristics (e.g., curricula that focused more on the work in the classroom,
provided opportunities for teachers to study what they will be doing, timing and oversight of student teaching, certification status, teaching experience, graduation from a competitive college, and math SAT scores) predict program and teacher effectiveness in elementary and middle school mathematics and English language arts during the first year of teaching, while those with stronger content knowledge from an alternative teacher preparation pathway are able to make use of that knowledge by their second or third year (Boyd, Grossman, Lankford, Loeb, Michelli, & Wyckoff, 2006; Boyd, Grossman, Lankford, Loeb, & Wyckoff, 2008; Boyd, Lankford, Loeb, Rockoff, & Wyckoff, 2007). In their study, researchers estimated that a one standard deviation increase in a preparation program's focus on practice was similar to roughly one additional year of teaching experience in terms of teacher effectiveness, a very notable difference (Boyd, Grossman, Lankford, Loeb, & Wyckoff, 2009).

Similarly, Harvard's Strategic Data Project analyzed the teaching effectiveness of math teachers in Los Angeles schools, using student test score growth measures in grades three through eight (2004-2005 through 2010-2011), and determined that teachers who became certified through Teach for America or the district's career ladder program for paraprofessionals were slightly better, on average, than other math teachers, giving students an increase of about two months of learning in a school year. The difference between top- and bottom-performing elementary math teachers was nearly 8 months of learning (Sawchuk, 2012). These studies suggest that important variations in effectiveness exist among teachers graduating from different preparation programs – some of which may be large. At the same time, these investigators and others have identified more disparity in teacher effectiveness within preparation routes than between them (Boyd, Grossman, Lankford, Loeb, & Wyckoff, 2006; Gordon, Kane, & Staiger, 2006; Kane, Rockoff, & Staiger, 2006).

What this means for principals: The research on teacher preparation pathways and student achievement finds that although traditional teacher preparation can make a measurable difference in student achievement – especially in the teacher's first year in the classroom – with a few years of classroom experience, the differences in teacher effectiveness between traditional and alternative preparation programs may fade. Fairly quickly, graduates of high-quality alternative teacher preparation programs may become as effective in generating student achievement as teachers from high-quality traditional preparation programs. Reviewing applicants' transcripts with them and discussing their coursework and field work – with particular attention to the time spent on learning and doing actual teaching, especially in real-world settings with students similar to those in this school – may be a worthwhile activity that provides useful information about the teachers' potential classroom effectiveness.
Principals' evaluations of teacher effectiveness and student achievement. Until recently, principals' evaluations of teacher effectiveness were not important tools for school management, school improvement, or school reform. State laws and district policies about teacher evaluation vary in their requirements for teachers and for their performance appraisers (National Association of Secondary School Principals, 2011). And, although administrators are responsible for assessing teachers' effectiveness, these evaluations have too often been a perfunctory and inconsequential process (Weisberg, Sexton, Mulhern, & Keeling, 2009). In fact, Weisberg and colleagues (2009) coined the term "widget effect" to describe a school district's "culture of indifference" to the wide variations in teacher quality, classroom to classroom, and the infrequency of dismissing ineffective tenured teachers from employment. In their study of 12 school districts in four states, investigators found that over 99% of tenured teachers in districts using a "satisfactory" or "unsatisfactory" rating system earned a positive rating. Among districts with more than two rating options, 94% of the teachers still earned one of the top two ratings, and less than 1% were rated "unsatisfactory" – even in schools where high percentages of students were failing to meet basic academic standards each year (Weisberg, et al., 2009).

The powerful effect that a rater's overall judgment of the person being rated has on specific ratings has long been recognized (Wells, 1907). It even has a name: the "halo effect" (Rugg, 1922). The halo effect means that the teacher who appears to be the most effective receives the highest ratings. Teacher performance rating scales, therefore, have high face validity. Yet early empirical studies connecting teacher evaluation results and students' achievement scores find a low correlation (Hill, 1921). Medley and Coker (1987) identified eleven studies from 1921 to 1946 which reached the same conclusion: the correlations between principals' average ratings of teacher performance and direct measures of teachers' effectiveness were near zero – only slightly more accurate than chance. Since the halo effect virtually decides the teacher's ratings, the ratings' actual validity depends almost entirely on the rater's accuracy in judging the teacher's instructional performance – making suspect both the validity of teacher rating scales and principals' judgment (Medley & Coker, 1987).

Critiques of these early studies speculate that the small correlations found between principal evaluations and student achievement might be due to small, nonrepresentative samples, failure to account properly for measurement error, and reliance on objective measures of teacher performance that were probably biased (Jacob & Lefgren, 2008a; Medley & Coker, 1987; Peterson, 1987, 2000). In fact, Medley and Coker's (1987) own study examining the relationship between principals' ratings of teachers' effectiveness and their students' achievement in reading and math came to similar conclusions: principals could not
accurately judge teachers' effectiveness in generating student test performance. Similarly, a qualitative literature review concluded that principals are not accurate evaluators of teacher performance, and both teachers and administrators have little confidence in the results of performance evaluations (Peterson, 2000). In attempts to explain this weakness, one investigation of teacher evaluation practices found that relatively few school districts had highly developed teacher evaluation systems; even fewer put the results into action (Wise, Darling-Hammond, McLaughlin, & Bernstein, 1985). Research suggests that many principals have a difficult time evaluating teachers. Reasons include lack of knowledge of the subject being taught; not wanting to upset working relationships by judging teachers strictly; viewing teacher evaluation as a cumbersome, time-consuming chore; and lack of sufficient training and guidance about how to conduct an effective evaluation (Halverson, Kelley, & Kimball, 2004; Nelson & Sassi, 2000; Peterson, 2000; Stein & D'Amico, 2000; Weisberg, et al., 2009; Wise, et al., 1985). A 2008 Regional Education Laboratory (REL) Midwest study on teacher evaluation policies found that fewer than one out of 10 district policies required training for personnel conducting the evaluations (Mathers, Oliva, & Laine, 2008).

Consequently, until lately, principals have not seen formal teacher evaluation as a means to build capacity and improve their schools. A study of principals as human capital managers seems to confirm this (Milanowski, Kimball, & Heneman, 2010). Researchers inventoried principals in two large school districts – one on the East Coast and one in the Midwest – in schools with consistently upward or flat/highly variable achievement trends. They found no substantial difference in teacher evaluation practices between the principals in achieving schools and those in non-achieving or inconsistently achieving schools. With a few exceptions, school leaders did not appear to use the formal teacher evaluation process as an ongoing performance management tool to identify, measure, or develop key teaching competencies needed in the school (Milanowski, Kimball, & Heneman, 2010).

This situation is rapidly changing, however. Currently, teacher evaluation is receiving considerable policy, federal, and state attention as a means to identify and develop effective teachers who can increase student achievement. Recent studies have identified an empirical relationship between a teacher's measured effect on student achievement and overall subjective administrator ratings (Jacob & Lefgren, 2008a, 2008b; Rockoff & Speroni, 2010; Rockoff, Staiger, Kane, & Taylor, 2009). Accordingly, empirical evidence now supports the conclusion that subjective evaluations and objective performance measures in U.S. public schools can provide valid and reliable assessments of teacher effectiveness, and principals' evaluations of teachers do predict effectiveness (Gallagher, 2004; Kane, Wooten, Taylor, & Tyler, 2011; Kimball, White, Milanowski, & Borman, 2004; Milanowski, 2004; Milanowski,
Kimball, & Odden, 2005). Complex, longitudinal state data systems coming online make it possible to connect classroom teachers to their students' academic progress over a school year, and student-growth components in these data systems allow administrators and other evaluators to assess whether or not teachers are helping students achieve a year's academic progress in a school year (Zinth, 2010). Now, researchers are consistently finding strong correlations between teacher effect estimates and evaluations made by school principals and other professional educators (Harris & Sass, 2009; Jacob & Lefgren, 2008a; Murnane, 1975; Rockoff & Speroni, 2010; Rockoff, Staiger, Kane, & Taylor, 2010, 2011).

Several studies have examined the relationship between principals' subjective teacher ratings – based on formal standards and extensive classroom observations – and the achievement levels of teachers' students (Gallagher, 2004; Kimball, White, Milanowski, & Borman, 2004; Milanowski, 2004; Milanowski, Kimball, & Odden, 2005). All these studies find a positive and significant relationship, despite differences in the way they measure teacher value added and in the degree to which the observations are used for high-stakes personnel decisions. One study examined the relationship between teacher evaluations and student achievement among second and third graders in the New Haven, CT, public schools, controlling for prior student test scores and demographics. The investigator found that principals' evaluations of teachers were significant predictors of student achievement, but the size of the relationship was modest (Murnane, 1975). Another study compared principal assessments with measures of teacher effectiveness based on gains in student achievement; researchers found that principals using subjective teacher evaluations based on classroom observation protocols ("rubrics") can generally identify teachers who produce the largest and smallest standardized achievement gains but have far less ability to distinguish among the 60% of teachers in the middle of this distribution (Jacob & Lefgren, 2008a; National Governors Association, 2011). Researchers also found that a teacher's previous value-added score is a better predictor of current student outcomes than are current principal ratings. The principals in this study did not have to tell the teachers how they were rated, however, and the ratings had no consequences; this may have engendered more accurate, less lenient teacher ratings than might have been observed in an actual evaluation situation (Jacob & Lefgren, 2008a).

Adding to the growing consensus, a Florida school district study found positive correlations between teacher value-added estimates and principals' subjective ratings (Harris & Sass, 2009). Investigators concluded that principals' evaluations are better predictors of a teacher's future performance than traditional factors such as experience or advanced academic degrees. Even when principals had only one year of value-added data, their evaluations of teachers were actually more accurate and predicted future teacher productivity better than value-added
scores alone. Likewise, researchers in New York City measured how principals' subjective and objective evaluations of new teachers predict their future impacts on student achievement (Rockoff & Speroni, 2010). They found that, examined separately, both subjective and objective evaluations bear significant relationships with the achievement of the teachers' future students. Each form of evaluation contains information distinct from the other, helping construct a more complete and accurate understanding. Investigators also found evidence of variation in the leniency with which certain evaluators applied the standards (Rockoff & Speroni, 2010). Finally, one study examined the results of a randomized pilot program in which school principals were given estimates of individual teachers' performance in raising their students' test scores in math and reading (Rockoff, Staiger, Kane, & Taylor, 2010, 2011). Investigators found high correlations between objective teacher performance estimates based on student data and principals' prior beliefs; the more detailed the objective or subjective data, the stronger the relationship. These results suggest that objective and specific performance data provide useful information to principals in constructing employee evaluations and using these evaluations to improve teacher effectiveness.

These studies, however, use either summary scores or subjective teacher ratings on general attributes and do not identify the specific instructional practices that teachers use to advance student learning. Later investigations would affirm that with training and practice, principals can identify the precise instructional behaviors related to increased student achievement – and feedback from these observations actually can improve teaching effectiveness.

What this means for principals: Studies find positive, meaningful correlations between principals' detailed ratings of teachers' classroom performance and teachers' ability to generate student achievement. Principals can learn how to assess teacher effectiveness; the more particulars they have, the more accurate their predictions. And the more principals can identify effective instructional practices, the more specific they can become during teacher applicant interviews and the better they can assess the candidates' sample lessons.

Teacher preparation and teacher retention. If principals and their districts are to invest time and money into identifying, hiring, and developing effective teachers, they want that investment to pay off not only in increased student achievement but also in teachers who will stay around to reinforce and expand the school's learning culture. Research concludes that alternatively certified teachers are more likely to leave their initial schools and districts than traditionally prepared teachers. Two longitudinal studies in New York City found that by the fourth year, just over 50%
of the alternatively prepared New York Teaching Fellows and 80% of Teach for America teachers – but only 37% of college-prepared teachers – had left teaching in the New York City schools (Boyd, Grossman, Lankford, Loeb, & Wyckoff, 2006; Kane, Rockoff, & Staiger, 2006). Similarly, a study in Houston found that an average of 80% of TFA teachers left their jobs by the third year (Darling-Hammond, Holtzman, Gatlin, & Heilig, 2005). Meanwhile, in the Chicago Public Schools, which hire about 100 TFA teachers each year, fewer than half remained in teaching for a third year (Glass, 2008). Earlier national data show that 49% of uncertified entrants left teaching after five years, compared to only 14% of those who entered teaching fully prepared (Henke, Chen, Geis, & Knepper, 2000; Ingersoll, 2002).

But preparation pathway, by itself, does not appear to be the sole factor identifying teachers who exit the profession. In fact, the more effective teachers – regardless of preparation route – tended to remain in teaching while the less effective tended to leave. In the New York City study above, researchers used a detailed teacher database (2000-01 to 2007-08) to look at the long-term retention patterns of alternatively prepared and certified teachers (NYTF and TFA) as compared with traditionally prepared and certified teachers, all with more than three years of experience (Boyd, Dunlop, Lankford, Loeb, Mahler, O'Brien, & Wyckoff, 2011). Although the alternatively prepared teachers were much more likely to teach students who were poor, black or Latino, had been suspended from school, and had lower math and English language arts achievement test scores, the teachers who were more effective in generating student learning and measured achievement were more likely to stay or transfer – regardless of the preparation route – while the least effective teachers were more likely to exit, regardless of pathway.

What this means for principals: While one cannot look at an individual teacher applicant from any preparation route and generalize that this person will not remain in the school or profession for long, it is useful to ask about and, if possible, observe the applicant's teaching practices – because effectiveness in the classroom is a better indicator of their likely commitment to remain in the school and in the profession.

Preparation Factors that Affect Teaching Effectiveness and Student Achievement

Research confirms what experienced principals already know: not all teacher preparation programs do an equally good job in readying effective teachers. In 2010, U.S. Secretary of Education Arne Duncan stepped on a few toes when he recounted the troubled history of schools of education and emphasized the need for reform if they were to produce more effective teachers. Calling education "the civil rights issue of our generation," he scolded preparation programs that lacked a focus on increasing student learning and achievement. "To claim, 'I taught it – but the
student didn't learn it,'" Duncan related, "…is like a hospital administrator affirming, 'The operation was a success – but the patient died'" (Duncan, 2010). Duncan recommended that teacher preparation programs use data, including student achievement data, to foster an ethic of continuous improvement for teacher educators, teachers, and students. Currently, researchers are accepting his invitation to do just that.

State-level studies linking teacher preparation, teacher effectiveness, and student achievement. Assessing the efficacy of teacher preparation programs by means of K-12 students' test scores is a complex and challenging endeavor. Few states have the extensive data needed to link TPP graduates with their training programs and their students' achievement (Gansle, Noell, & Burns, 2012). Meanwhile, through RttT and Title II, federal and state governments are infusing substantial funds into states to investigate how to produce more effective teachers as measured by their students' achievement. As a result, pioneering studies have occurred in New York (Boyd, Lankford, Loeb, Rockoff, & Wyckoff, 2007), Florida (Sass, 2008), Louisiana (Gansle, Knox, & Shafer, 2010; Gansle, Noell, & Burns, 2012; Gansle, Noell, Knox, & Schaffer, 2010), Kentucky (Kukla-Acevedo, Streams, & Toma, 2009), Texas (Mellor, Lummus-Robinson, Brinson, & Dougherty, 2010), North Carolina (Henry, Thompson, Fortner, Zulli, & Kershaw, 2010), Missouri (Koedel, Parsons, Podgursky, & Ehlert, 2012), and Washington (Goldhaber & Liddle, 2012). Their findings indicate that teacher preparation programs' effects on student test score gains can be estimated – and teacher preparation programs can be evaluated – in part by using their credentialed teachers' own students' test scores. But these program ratings cannot be used to make high-stakes decisions about individuals.

In these investigations, certain alternate preparation programs appear to produce teachers who are significantly more effective than teachers from traditional preparation programs (Gansle, Knox, & Schafer, 2010; Gansle, Noell, Knox, & Schafer, 2010; Goldhaber & Liddle, 2011, 2012; Goldhaber, Liddle, & Theobald, 2012) – and to have characteristics that influence their graduates to earn higher value-added scores than veteran teachers (Tennessee Higher Education Commission, 2012). One study found that high productivity within traditionally or alternatively prepared cohorts depended on the subject taught and assessed as well as on the teachers' characteristics (Sass, 2011). Another study found small differences between teachers from different preparation programs but high variability of effectiveness within programs (Koedel, Parsons, Podgursky, & Ehlert, 2012), while still other work found more variation within pathways than between them (Sass, 2011).

Several conclusions are relevant to principals. Researchers speculate that the advantage of certain alternatively prepared teachers may reflect a different population of potential teachers for whom teaching was not a first full-time or professional position and who received more intense practical training that prepares
them for the classroom than programs that prepare new teachers as undergraduates (Gansle, et al., 2010). These second-career teachers are, by definition, individuals who may have more life experience and greater maturity than recent (and younger) college graduates. Researchers also suggest that where teachers are credentialed explains only a small portion of the overall variation in the effectiveness of in-service teachers, and they point to the consensus that the best assessments of teacher effectiveness are based on actual classroom performance (Goldhaber & Liddle, 2011, 2012; Goldhaber, Liddle, & Theobald, 2012). In addition, researchers surmised that prior research has overstated differences in teacher performance across preparation programs for several reasons, mostly because some sampling variability in the data has been incorrectly attributed to the preparation programs (Koedel, et al., 2012).

Additionally, researchers advise their audiences to judge these findings within a wider perspective. Some remind readers that classroom and student factors – apart from teacher effectiveness – influence student achievement. These include differences between student demographic subgroups (such as gender differences, students identified as receiving free and reduced-price lunch, those with reported learning disabilities, those enrolled in limited English proficient (LEP) or special education classes, and those in gifted/highly capable programs); class size; teachers' years of experience (up to 5 years); and teachers' graduate degrees (Goldhaber & Liddle, 2011). Investigators also recommend that consumers of complex research that relies on large data sets and sophisticated modeling proceed with caution and not make high-stakes decisions when drawing conclusions from studies until they can confirm the correct methodology was used (Mihaly, McCaffrey, Sass, & Lockwood, 2011). Even then, conclusions drawn from value-added models can only be used for making high-stakes decisions about individuals when they are part of a more comprehensive set of assessment data (Baker, Barton, Darling-Hammond, Haertel, Ladd, Linn, Ravitch, Rothstein, Shavelson, & Shepard, 2010; Rothstein, 2010).

What this means for principals: At present, few states have the data or technology to connect teacher preparation programs with student achievement to determine teacher and program effectiveness. And even when studies find that certain TPPs – traditional and alternative – appear to be measurably more successful in generating effective teachers, depending on the subject and grade level, the measurement and data issues are too fraught to use for making decisions about hiring individual teachers. More likely, principals will generalize from personal experience about which local or regional teacher preparation programs consistently produce more effective teachers – but even these conclusions may not be correct for the individual applicant.

Specific teacher preparation program factors and teacher effectiveness. Many studies affirm the relationship between teacher preparation, teaching effectiveness, and student achievement (Boyd, et al., 2006; Darling-Hammond, et al., 2005; Kane,
Specific teacher preparation program factors and teacher effectiveness. Many studies affirm the relationship between teacher preparation, teaching effectiveness, and student achievement (Boyd et al., 2006; Darling-Hammond et al., 2005; Kane, Rockoff, & Staiger, 2006), but only recently have studies begun identifying the specific program factors that most influence teachers’ abilities to generate student learning. Regardless of preparation route, studies are finding that the best teacher preparation programs design their offerings around the goal of teaching teachers how to teach their particular content (Constantine, Player, Silva, Hallgren, Grider, & Deke, 2009; Grissom & Vandas, 2010; Winters, 2011). Likewise, after looking at how teacher education programs practiced accountability, the National Research Council (NRC) (2010) concluded that the evidence points to effective teachers having strong content knowledge (a body of conceptual and factual knowledge) and strong pedagogical knowledge (how learners acquire learning in a given subject and how to teach it).

For principals, this means that traditional and alternative pathways to teaching can be equally successful at producing effective teachers, so long as they use approaches geared toward linking preparation to actual teaching practice. Consequently, knowing what types and durations of pre-service teaching experiences the applicant had – especially with students such as those in the principal’s school – through a transcript review and a discussion of instructional practices with the candidate, can be helpful for predicting the applicant’s effectiveness in generating student achievement at that school. Asking the candidate, “Describe for me the main focus of your teacher preparation program and give examples of how it affected what you know about teaching,” might be a useful entry into this discussion.

Characteristics and Practices That Make Teachers Effective

Although evidence has shown that teachers’ instructional practices have differential effects on student learning, knowledge gaps have existed about exactly which teacher characteristics and teaching behaviors led to increased student learning and achievement (Medley & Coker, 1987; Seidel & Shavelson, 2007). This situation is changing, giving principals more precise clues on what to look and listen for in teaching applicants.

Research on effective teachers’ characteristics and student achievement. Over the past decade, investigators have been identifying certain teacher cognitive and personality factors (Rockoff, Jacob, Kane, & Staiger, 2008) and classroom-based measures of teaching effectiveness (Kane, Taylor, Tyler, & Wooten, 2010, 2011) that are related substantially to student achievement growth. One study in New York City found that although individual teacher characteristics had no predictive value regarding their students’ achievement, when combined into cognitive factors (such as intellectual ability, teaching-specific content
knowledge, scores on a commercially available teacher selection instrument) and non-cognitive factors (personality traits such as extraversion or introversion and feelings of self-efficacy), they showed modest, statistically significant relationships with student and teacher outcomes, especially with student test scores (Rockoff, Jacob, Kane, & Staiger, 2008). Similarly, a summary of teacher effectiveness studies finds that, in general, effective teachers bring to teaching a similar set of personal traits, skills, understandings, and dispositions to act in certain ways (Darling-Hammond, 2010b). These include strong general intelligence and verbal ability that help them organize and explain ideas, observe analytically, and think diagnostically; solid content knowledge in the areas they teach; expertise in how to teach others to develop higher-order thinking skills in that content; an understanding of students’ differences in learning and development and how to assess and support their academic growth; flexible skills for responding to students’ needs in a given situation; a readiness to support every student’s learning; the desire to continue their own professional development; and the willingness to work with colleagues and parents to help individual students and the school (Darling-Hammond, 2010b).

Likewise, in a unique study of fifth grade reading and math teachers that combined teachers’ value-added scores, classroom observations, and teacher surveys, Stronge, Ward, and Grant (2011) found that top-quartile teachers’ students achieved higher academic growth, and that these teachers had fewer classroom disruptions, better classroom management skills, and better relationships with their students than did bottom-quartile teachers. Investigators speculated that effective teachers who can generate strong student achievement results have some particular set of attitudes, approaches, strategies, or connections with students that manifest themselves in nonacademic ways (such as positive relationships, encouragement of responsibility, classroom management, and organization) and that lead to higher achievement (Stronge, Ward, & Grant, 2011). The fact that students were not randomly assigned to classrooms and that teachers volunteered for the observations and surveys limited the study’s generalizability, however.

For principals, asking candidates to describe how they would assess and support their most challenging (high and low ability) students’ academic growth; inquiring about experiences in which the applicant observed analytically and thought diagnostically about an individual having difficulty learning a new task or content; or querying about a time when they taught another person to develop higher-order thinking skills in a particular content area – and then listening to how well they organize and explain their answers – can provide relevant data about their potential teaching effectiveness. Likewise, inviting a teacher leader in the applicant’s content area to participate in the interview may help sample the applicant’s depth of content
knowledge and ability to use it flexibly and relevantly with students. Such an interview will generate much data about the applicant’s potential effectiveness.

Research on effective teachers’ behaviors and student achievement. Although principals cannot control where teacher candidates receive their preparation for licensure, nor can they influence teachers’ personal traits and dispositions, knowing which specific teaching behaviors make a measurable difference in increasing student achievement enables principals to better identify effective candidates for their schools. Recent studies link intentionally observed teaching practices to student achievement gains in real-world classrooms (Kane, Taylor, Tyler, & Wooten, 2011). Findings from Cincinnati (Kane & Staiger, 2012; Kane, Taylor, Tyler, & Wooten, 2010, 2011; Kane, Wooten, Taylor, & Tyler, 2011) and New York City public schools (Grossman, Loeb, Cohen, Hammerness, Wyckoff, Boyd, & Lankford, 2010) confirm that teachers who tend to generate higher student achievement growth are actually teaching differently than teachers associated with lower student achievement growth.

In Cincinnati (2003-2004 to 2008-2009 and ongoing), externally trained evaluators used an elaborate set of standards that described the behavioral practices, skills, and characteristics effective teachers demonstrate in the domains of “creating an environment for student learning” and “teaching for student learning,” and they connected these to their students’ measured achievement. Investigators found that teachers with higher classroom observation rubric scores had students who learned more. The difference in student learning gains on state math tests between teachers in the top and bottom 24% of observation scores amounted to approximately 2.7 months of schooling (Kane & Staiger, 2012) – the equivalent of about 7 percentile points in reading and about 6 percentile points in math (Kane, Wooten, Taylor, & Tyler, 2011). Midcareer teachers even improved their effectiveness in the years after they were evaluated (Sawchuk, 2011a).

Similarly, a New York City pilot study using structured observation protocols (along with teacher logs and student work) compared moderately performing (second quartile) and high-performing (fourth quartile) middle school English language arts teachers on value-added performance in 12 matched pairs. Despite the small sample, investigators found consistent evidence that high value-added teachers use different instructional practices than low value-added teachers on all 16 observed elements of instruction (Grossman, Loeb, Cohen, Hammerness, Wyckoff, Boyd, & Lankford, 2010). In a comparable Chicago study, a two-year pilot effort found that classroom observation ratings are valid and reliable measures of teaching practice and are related to value-added measures for math and reading test scores (Sartain, Stoelinga, Brown, Luppescu, Matsko, Miller, Durwood, Jiang, & Glazer, 2011). In classrooms of
highly rated teachers, students showed the most growth, while in classrooms of teachers with low observation ratings, students showed the least growth. Interestingly, principals were able to rate teaching practice reliably at the low and middle ends of the scale but were less able or willing to differentiate effective instruction in the scale’s upper ranges, tending to give the highest ratings to “good” teachers (commenting to investigators that they do this to maintain their relationships with teachers) (Sartain et al., 2011). Likewise, a Louisiana study using virtually the same observation rubrics as in Cincinnati and Chicago to assess prospective alternatively prepared teachers for initial certification (2004-2005 through 2008-2009) found a modest correlation between teacher evaluation scores and student achievement growth in math and reading. These correlations were lower than those found in Kane’s (2012) study, possibly due to low inter-rater reliability (Darling-Hammond, 2010a).

Taking a different approach, investigators conducted a study with secondary school teachers using a web-mediated coaching method with clear behavioral anchors (based on the Classroom Assessment Scoring System–Secondary (CLASS-S) protocol) to determine the effects of instructional coaching on students’ motivation and academic achievement. The interventions focused, in part, on boosting the teachers’ use of varied instructional approaches and on involving students in higher-order thinking and in applying new learning to problem solving. Researchers found that the intervention produced substantial gains in measured student achievement in the year after its completion, equal to advancing the average student from the 50th to the 59th percentile in achievement test scores. Gains appeared to be in response to changes in the teacher-student interaction qualities that the interventions addressed (Allen, Pianta, Gregory, Mikami, & Lun, 2011).
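As a rough calibration of this result (our own back-of-the-envelope note, not a computation reported by the study’s authors): if test scores are approximately normally distributed, moving the average student from the 50th to the 59th percentile corresponds to an effect size of roughly

\Phi^{-1}(0.59) \approx 0.23 \ \text{standard deviations},

where \Phi denotes the standard normal cumulative distribution function – a meaningful shift for a single year of instructional coaching.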
Not surprisingly, students can perceive clear differences between more and less effective teachers. The Bill and Melinda Gates Foundation’s Measures of Effective Teaching (MET) study – spanning six districts and 3,000 teachers, with surveys from 44,500 students – discovered that pupils could identify the most effective teachers in a school. Researchers also found that they could roughly predict how much students would learn from a given teacher when educators were rated using a formula that put equal weight on student feedback, standardized test scores, and principal and peer observations employing a standards-based rubric (Kane, 2012; Kane & Staiger, 2012; Sawchuk, 2013; Simon, 2013). While judging teachers mainly by student achievement on state tests proved very unreliable, and depending primarily on principals’ observations of classroom practice did not help predict which teachers were able to increase student achievement in reading and math, combining the three measures into an appropriately weighted index produced a balanced and accurate profile of teacher performance. Critics of this study note, however, that the MET’s lack of random assignment of students to classes, the voluntary nature of the teachers’ involvement, and measurement error limit the findings to comparisons of teachers within a school; they are not generalizable beyond it (Rothstein & Mathis, 2013).
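The “appropriately weighted index” described above can be pictured, in simplified form, as a composite of standardized components. The formula below is only a sketch of the general idea, with labels of our own choosing rather than the MET project’s exact specification:

C_j = \tfrac{1}{3}\, z_j^{\text{tests}} + \tfrac{1}{3}\, z_j^{\text{obs}} + \tfrac{1}{3}\, z_j^{\text{survey}},

where z_j^{\text{tests}}, z_j^{\text{obs}}, and z_j^{\text{survey}} are teacher j’s standardized value-added, observation-rubric, and student-survey scores. The point of the findings reported here is not the particular weights but that the combination predicted teacher performance more dependably than any single component used alone.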
Research on teaching behaviors and school environment. The instructional environment in which teachers work also influences their effectiveness in increasing student achievement. One large-scale study in elementary schools, using a multilevel constellation of teacher-related effects (e.g., classroom effectiveness, collective teaching quality, school academic organization) that could be changed to increase educational efficacy, found that teachers’ effectiveness was a stable and continuing part of the school organization and that teaching processes were positively associated with achievement levels (Heck, 2009). As others have, the investigator observed that within schools, some students were assigned to more effective teachers than others; over time, these assignment decisions resulted in differential achievement outcomes – to students’ academic advantage or disadvantage (Heck, 2009). Likewise, a different study surveyed a major national group of preK-12 teachers and found that school working conditions – in this case, a culture that supports teacher collaboration – appear to be an important factor in teacher effectiveness and improved student outcomes (Berry, Daughtrey, & Weiner, 2009).

Another school environment study determined that teachers who switch schools are more effective after a move than before. This North Carolina study of grades three through five (1995-2006) examined the extent to which teacher effectiveness, as measured by the ability to improve student test scores, changed depending on the schooling environment, and it quantified the importance of the match between a teacher and a school in determining student achievement (Jackson, 2010). A match effect is anything that makes a teacher more or less productive at one school as compared to another (and that is not due to a school characteristic affecting all teachers equally). Using a longitudinal dataset, the investigator found that teachers who switch schools are more effective after a move than before – suggesting match effects. In contrast, teachers are less likely to leave their current school when match quality is high. The researcher’s conclusion: a sizeable part of teacher effectiveness may be a function of the teacher-school environment match and not portable across schools (Jackson, 2010).
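The match effect described here can be written, in stylized form, as the interaction left over after separating a teacher’s general effectiveness from a school’s general quality. The notation below is ours, offered only to fix ideas, not Jackson’s (2010) estimating equation:

A_{ijs} = \theta_j + \phi_s + \mu_{js} + \varepsilon_{ijs},

where A_{ijs} is achievement for student i taught by teacher j at school s, \theta_j is the teacher’s portable effectiveness, \phi_s is a school effect common to all of its teachers, and \mu_{js} is the teacher-school match component – the piece that, on this account, is sizeable and does not travel with the teacher.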
Despite their usefulness when well designed and conducted, classroom observations have their limitations. If they are the only data that school districts use to evaluate teachers, they may discourage innovation and pressure teachers to adopt a single model of effective practice (Kane, 2012). Even when using standards-based rubrics to identify specific behaviors, observers must be trained to interpret behavior the same way in order to keep inter-rater reliability high and reduce subjective judgments. Also, teachers’ performance may change depending on the content taught and the student audience. Accordingly, multiple trained raters must be available to observe and score different lessons and average their ratings for a more accurate measure of the teacher’s practice. In addition, the labor-intensive nature of providing frequent, detailed classroom observations is costly in terms of principals’ time or peer observers’ salaries (Kane, 2012). Finally, even excellent observations can be only one of several valid and reliable means of evaluating teachers.

What this means for principals: Principals who take the opportunity to increase their own – and their teachers’ – capacity to observe and assess teaching in their classrooms using detailed, standards-based performance rubrics are better able to identify and support teaching effectiveness. Likewise, since principals are their schools’ culture leaders, they have much influence in creating and sustaining the learning and working environments in which teachers can become more effective.

Conclusions

In the last decade, much has been learned about the varied factors that make teachers effective. As a result, principals now have more data to consider when making informed decisions about identifying and hiring effective teachers. Research finds that teacher qualifications – degrees, experience, certifications, and teacher test performance – are meaningful, but they show only a modest relationship to student achievement. Certification is important to the extent that it is associated with teachers’ instructional practices, content knowledge, and their ability to draw on that knowledge in moment-to-moment classroom interactions. Holding a current teaching license or certification in the content to be taught is a necessary – but not sufficient – condition for effective teaching.

Similarly, knowing that a candidate completed a traditional or alternative preparation program, taken by itself, will not help principals differentiate a potentially effective from an ineffective teacher. The distinctions between traditional and alternative preparation routes are not always clear, and more differences exist within teacher preparation pathways than between them. Research finds that the best teacher preparation programs – traditional and alternative – design their courses and experiences around the goal of teaching teachers how to teach. Depending on the specific program considered, alternative certification programs can be just as effective – if not more effective – than traditional programs in producing teachers who can generate student learning. Hence, teacher candidates who come through high-quality traditional or alternative preparation routes show certain similarities. In the end, effectiveness depends on the particular program and its curriculum as well as on the individual teacher’s characteristics and instructional practices – in addition to the principal’s own school and student factors.
Nevertheless, well-designed investigations have determined that teacher preparation can make a measurable difference in student achievement – especially in the first year in the classroom. But with a few years of experience, the differences in teacher effectiveness between certain traditional and alternative preparation programs fade. Also, regardless of preparation pathway, the more effective teachers tended to remain in teaching while the least effective teachers were more likely to exit. Therefore, principals wanting to build sustainable learning cultures with a cadre of effective teachers are more likely to find teachers who will keep students making at least one year’s worth of learning gains in a school year – and who will remain in the profession – by looking for teaching candidates with the characteristics, behavioral qualities, and experiences associated with student learning, rather than by depending mainly on traditional “teacher quality” credentials such as degrees, education, and licensure. Gearing the interview and demonstration teaching around these experiences may prove fruitful.

Finally, improving teacher evaluation systems and practices has become a national policy priority. Principals benefit their students and their teachers when they develop their own capacity to conduct valid and reliable classroom observations linked to standards-based performance ratings. Studies affirm that such principals’ ratings can actually improve teachers’ performance and efficacy. Additionally, principals need to recognize that school, classroom, and student factors – apart from or in combination with teacher effectiveness – influence student achievement. And these are variables over which principals have considerable influence.

References

Alderman, C., Carey, K., Dillon, E., Miller, B., & Silva, E. (2011). A measured approach to improving teacher preparation. Education Sector Policy Brief. Washington, DC: Education Sector. Retrieved from http://www.educationsector.org/publications/measuredapproach-improving-teacher-preparation Allen, J.P., Pianta, R.D., Gregory, A., Mikami, A.Y., & Lun, J. (2011, August). An interaction-based approach to enhancing secondary school student achievement. Science, 333(6045), 1034-1037. Retrieved from http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3387786/ Armour, D. T. (1976). Analysis of the school preferred reading program in selected Los Angeles minority schools. (R-2007-LAUSD). Santa Monica, CA: RAND. Baker, E.L., Barton, P.E., Darling-Hammond, L., Haertel, E., Ladd, H.F., Linn, R.L., Ravitch, D., Rothstein, R., Shavelson, R.J., & Shepard, L.A. (2010, August 29).
"Problems with the Use of student test scores to evaluate teachers," Washington, D.C.: Economic Policy Institute. Retrieved from http://www.epi.org/page/pdf/bp278.pdf?nocdn=1reports/2010/1117_eval uating_teachers.aspx Berry, B., Daughtrey, A., & Weiner, A. (2009, December) Collaboration: Closing the effective teacher gap. Carrboro, NC: Center for Teaching Quality. Retrieved from http://teachersnetwork.org/effectiveteachers/images/CTQPolicyBriefOn_C OLLABORATION__021810.pdf BÊteille T., & Loeb S. (2009). Teacher quality and teacher labor markets. In G. Sykes, B. Schneider, & D.N. Plank (Eds.), Handbook of education policy research. (pp. 596-612). New York: Routledge. Boyd, D., Dunlop, E., Lankford, H., Loeb. S., Mahler, P., O’Brien, R.H., & Wyckoff, J. (2011). Alternative certification in the long run: Student achievement, teacher retention, and the distribution of teacher quality in New York City. Palo Alto, CA: Center for Education Policy and Analysis, Stanford University. Retrieved from http://cepa.stanford.edu/content/alternative-certification-long-runstudent- achievement-teacher-retention-and-distribution Boyd, D., Grossman, P., Lankford, H., Loeb, S., & Wyckoff, J. (2006). How changes in entry requirements alter the teacher workforce and affect student achievement. Education Finance and Policy, 1(2), 176-216. Boyd, D., Grossman, P. Lankford, H., Loeb, S., & Wyckoff, J. (2008). Teacher preparation and student achievement. (CALDER Working Paper 20). Washington, DC: Urban Institute and National Center for Analysis of Longitudinal Data in Education Research. Retrieved from http://www.urban.org/UploadedPDF/1001255_teacher_preparation.pdf Boyd, D., Grossman, P., Lankford, H., Loeb, S., & Wyckoff, J. (2009). Teacher preparation and student achievement. Educational Evaluation and Policy Analysis, 31(4), 416-440. Boyd, D.J., P. Grossman, H. Lankford, S. Loeb, N.M. Michelli and J. Wyckoff. (2006). Complex by design: Investigating pathways into teaching in New York City schools. Journal of Teacher Education 57, 155-166. Boyd, D., Lankford, H., Loeb, S., Rockoff, J., & Wyckoff, J. (2007). The narrowing gap in New York City teacher qualifications and its implications for student achievement in high-poverty schools. (CALDER Working Paper 10). Washington, DC: National Center for Analysis of Longitudinal Data in Education Research. Retrieved from http://www.caldercenter.org/PDF/1001103_Narrowing_Gap.pdf Clotfelter, C.T., Ladd, H.F., & Vigdor, J.L (2007a). How and why do teacher credentials matter for student achievement? (Working Paper 12828). Cambridge, MA: National Bureau of Economic Research. Retrieved from http://www.nber.org/papers/w12828.pdf?new_window=1
Clotfelter, C.T., Ladd, H.F., & Vigdor, J.L. (2007b). Teacher credentials and student achievement in high schools: A cross-subject analysis with student fixed effects. (CALDER Working Paper 11). Washington, DC: The Urban Institute. Retrieved from http://www.caldercenter.org/PDF/1001104_Teacher_Credentials_HighScho ol.pdf Clotfelter, C. T., Ladd, H.F., & Vigdor, J.L. (2007c). Teacher credentials and student achievement: Longitudinal analysis with student fixed effects. Economics of Education Review, 26(6), 673-682. Clotfelter, C., Ladd, H., & Vigdor. J. (2010). Teacher credentials and student achievement in high school: A cross-subject analysis with student fixed effects. The Journal of Human Resources 45(3): 655-681. Committee on Education and the Workforce. (2012). Education reform: Discussing the value of alternative teacher certification programs. (Serial Nu. 112-66). Hearing before the Subcommittee on Early Childhood, Elementary, and Secondary Education, U.S. House of Representatives. Washington, DC: Author. Retrieved from http://www.gpo.gov/fdsys/pkg/CHRG112hhrg75109/html/CHRG-112hhrg75109.htm Constantine, J., Player, D., Silva, T., Hallgren, K., Grider, M., & Deke, J. (2009). An evaluation of teachers trained through different routes to certification. (NCEE 20094043). Final Report. Washington, DC: National Center for Education and Regional Assistance Institute of Education Sciences, U.S. Department of Education. Retrieved from http://www.mathematicampr.com/publications/PDFs/education/teacherstrained09.pdf Crowe, E. (2010). Measuring what matters: A stronger accountability model for teacher education. Washington, DC: Center for American Progress. Retrieved from http://www.americanprogress.org/issues/2010/07/pdf/teacher_accountabi lity.pdf Crowe, E. (2011). Race to the Top and teacher preparation. Analyzing state strategies for ensuring and fostering program innovation. Washington, DC: Center for American Progress. Retrieved from http://www.americanprogress.org/wpcontent/uploads/issues/2011/03/pdf/teacher_preparation.pdf Darling-Hammond, L. (2000). Teacher quality and student achievement: A review of state policy Evidence. Education Policy Analysis Archives, 8(1), Retrieved from http://epaa.asu.edu/epaa/v8n1/ Darling-Hammond, L. (2010a). Evaluating teacher effectiveness. How teacher performance assessments can measure and improve teaching. Washington, DC: Center for American Progress. Retrieved from http://www.americanprogress.org/wpcontent/uploads/issues/2010/10/pdf/teacher_effectiveness.pdf Darling-Hammond, L. (2010b). Recognizing and developing effective teaching: What policy makers should know and do. Policy Brief. Washington, DC: National Education Association. Retrieved from 24
http://www.nea.org/assets/docs/HE/Effective_Teaching_-_Linda_DarlingHammond.pdf Darling-Hammond, L., Holtzman, D., Gatlin, S.J., & Heilig, J.V. (2005). Does teacher preparation matter? Evidence about teacher certification, Teach for America, and teacher effectiveness. Education Policy Analysis Archives, 13(42). Retrieved from http://epaa.asu.edu/epa/v13n42. Decker, P.T., Deke, J.G., Johnson, A.W., Mayer, D.P., Mullens, J., & Schochet, P.Z. (2005). The evaluation of teacher preparation models: Design report. Princeton, NJ: Mathematica Policy Research. Retrieved from http://mathematicampr.com/publications/pdfs/techprepdesign.pdf Duncan, A. (2010). Teacher preparation: Reforming the uncertain profession, Education Digest, 75(5), 13-22. Easton-Brooks, D. & Davis, A. (2009). Teacher qualification and the achievement gap in early primary grades. Education Policy Analysis Archives, 17(15). Retrieved from http://epaa.asu.edu/epaa/v17n15/ Feistritzer, C.E. (2011). Profile of teachers in the U.S. 2011. Washington, DC: National Center for Education Information. Retrieved from http://www.ncei.com/Profile_Teachers_US_2011.pdf Feistritzer, C. E. & Haar, C. K. (2008). Alternate routes to teaching. Upper Saddle River, NJ: Pearson Merrill Prentice Hall. Feistritzer, E. & Haar. C. (2010). Research on alternate routes. Education research. Washington, DC: National Center for Alternative Certification. Retrieved from www.teachnow.org/RESEARCH%20ABOUT%20ALTERNATE%20ROUTES. pdf Gallagher, H. A. (2004). Vaughan Elementary’s innovative teacher evaluation system: Are teacher evaluation scores related to growth in student achievement? Peabody Journal of Education 79(4): 79–107. Gansle, G. H., Knox, M. R., & Schafer, M. J. (2010). Value added assessment of teacher preparation in Louisiana: 2005-2006 to 2008-2009 (Technical Report). Louisiana State University. Retrieved from http://regents.louisiana.gov/assets/docs/TeacherPreparation/2010VATech nical082610.pdf Gansle, K.A., Noell, G.H., & Burns, J.M. (2012). Do student achievement outcomes differ across teacher preparation programs? An analysis of teacher education in Louisiana. Journal of Teacher Education, 63(5), 304-317. Gansle K., Noell, G., Knox R., & Schafer M. (2010). Value added assessment of teacher preparation programs in Louisiana: 2005-06 to 2008-09. Year 5 – 2010. Retrieved from http://regents.louisiana.gov/assets/docs/TeacherPreparation/2010VATech nical082610. pdf Glass, G. (2008). Alternative certification of teachers. East Lansing, MI: Great Lakes Center for Education Research & Practice. Retrieved from 25
http://www.greatlakescenter.org/docs/Policy_Briefs/Glass_AlternativeCert .pdf Goldhaber, D.D. & Brewer, D.J. (2000). Does teacher certification matter? High school certification status and student achievement. Educational Evaluation and Policy Analysis, 22(1), 129-145. Goldhaber, D. & Liddle, S. (2011). The gateway to the profession: Assessing teacher preparation programs based on student achievement. Bothell, WA: University of Washington, Center for Education Data and Research. Retrieved from http://www.cedr.us/papers/working/CEDR%20WP%2020112%20Teacher%20Training%20%289-26%29.pdf Goldhaber, D. & Liddle, S. (2012). The gateway to the profession: Assessing teacher preparation programs. (Working Paper 65). Washington, DC: Center for Analysis of Longitudinal Data in Education Research. Retrieved from http://www.caldercenter.org/upload/Goldhaber-et-al.pdf Goldhaber, D., Liddle, S. & Theobald, R. (2012). The gateway to the profession: Assessing teacher preparation programs based on student achievement. (CEDR Working Paper). Seattle, WA: University of Washington. Retrieved from http://www.cedr.us/papers/working/CEDR%20WP%204.2012_Teacher%20 Training_5-17-2012.pdf Gordon, R., Kane, T.J., & Staiger, D.O. (2006). Identifying effective teachers using performance on the job. Washington, DC: The Brookings Institution. The Hamilton Project. Retrieved from www.brookings.edu/views/papers/200604Hamilton_1.pdf Greenberg, J. & Walsh, K. (2008). No common denominator: The preparation of elementary teachers in mathematics by America’s education schools. Washington, DC: National Council on Teacher Quality. Retrieved from http://www.nctq.org/p/publications/docs/nctq_ttmath_fullreport_2009060 3062928.pdf Grissom, J.A. & Vandas, S. (2010). Teacher preparation and student achievement. Reviewing the evidence. (Report 06-2010). Columbia, MO: Missouri P 20 Education Policy Research Center. Truman Policy Research. Retrieved from http://truman.missouri.edu/P20/documents/TeacherPrepStudentAchievem ent.pdf Grossman, P., & Loeb, S. (2008). Alternative routes to teaching: Mapping the new landscape of teacher education. Cambridge, MA: Harvard Education Press. Grossman, P., Loeb, S., Cohen, J., Hammerness, K., Wyckoff, J., Boyd, D., & Lankford, H. (2010). Measure for measure: The relationships between measures of instructional practice in middle school English language arts and teachers’ valueadded scores. (CALDER Working Paper No. 45). Washington, DC: National Center for Analysis of Longitudinal Education Data. The Urban Institute. Retrieved from http://www.urban.org/uploadedpdf/1001425-measure-formeasure.pdf 26
Halverson, R., Kelley, C., & Kimball, S. (2004). Implementing teacher evaluation systems: How principals make sense of complex artifacts to shape local instructional practice. In W. Hoy & C. Miskel (Eds.), Educational Administration, Policy, and Reform: Research and Measurement. (pp.153 – 188) Greenwich, CT: Information Age. Hanushek, E. A. (1971). Teacher characteristics and gains in student achievement; Estimation using micro data. American Economic Review, 61(2), 280-288. Hanushek, Eric A. (2011a). The economic value of higher teacher quality. Economics of Education Review 3 (3), 466-479. Retrieved from http://hanushek.stanford.edu/sites/default/files/publications/Hanushek% 202011%20EER%2030%283%29.pdf Hanushek, E.A. (2011b, Summer). Valuing teachers. Education Next, 11(3). Retrieved from http://educationnext.org/valuing-teachers/ Harris, D.N. & Sass, T.R. (2009). What makes for a good teacher and who can tell? (Working Paper 30). Washington, DC: The Urban Institute, National Center for Analysis of Longitudinal Data in Education Research. Retrieved from http://www.urban.org/uploadedpdf/1001431-what-makes-for-a-goodteacher.pdf Heck, R.H. (2009). Teacher effectiveness and student achievement: Investigating a multilevel cross-classified model. Journal of Educational Administration, 47(2), 227-249. Henke, R.R., Chen, X., Geis, S., & Knepper, P. (2000). Progress through the teacher pipeline: 1992-93 college graduates and elementary/secondary school teaching as of 1997. (NCES 2000-152). Washington, DC: National Center for Education Statistics. Retrieved from http://0nces.ed.gov.opac.acc.msmc.edu/pubs2000/2000152.pdf Henry, G.T., Thompson, C.L., Fortner, C.K., Zulli, R.A., & Kershaw, D.C. (2010). The impact of teacher preparation on student learning in North Carolina schools. Chapel Hill, NC: Carolina Institute for Public Policy, The University of North Carolina at Chapel Hill. Retrieved from http://publicpolicy.unc.edu/research/Teacher_Prep_Program_Impact_Final _Report_nc.pdf Higher Education Commission. (2012). 2012 Report card on the effectiveness of teacher training programs. Nashville, TN: Author. State Board of Education. Retrieved from http://www.state.tn.us/thec/Divisions/fttt/12report_card/PDF%202012%2 0Reports/2012%20Report%20Card%20on%20the%20Effectiveness%20of%20T eacher%20Training%20Programs.pdf Hill, C.W. (1921). The efficiency ratings of teachers. Elementary School Journal, 21, 438443. Ingersoll, R. M. (2002). The teacher shortage: A case of wrong diagnosis and wrong prescription. NASSP Bulletin, 86(631), 16–30.
Immerwahr, J., Doble, J., Johnson, J., Rochkind, J., & Ott, A. (2007). Lessons learned: New teachers talk about their jobs, challenges and long-range plans. Washington, DC: National Comprehensive Center for Teacher Quality and Public Agenda. Retrieved from: http://www.publicagenda.org/files/pdf/lessons_learned_1.pdf Jackson, C. K. (2010). Match quality, worker productivity, and worker mobility: Direct evidence from teachers. (NBER Working Paper 15990). Cambridge, MA: National Bureau of Economic Research. Retrieved from http://econweb.tamu.edu/common/files/workshops/PERC%20Applied%2 0Microeconomics/2011_11_16_Kirabo_Jackson.pdf Jacob, B. A. & Lefgren, L. (2008a). Can principals identify effective teachers? Evidence on subjective performance evaluation in education. Journal of Labor Economics, 26 (1), 101–136. Jacob, B. A. & Lefgren. L.J. (2008b). Principals as agents: Subjective performance measurement in education. Journal of Labor Economics 26(1), 101-136. Johnson, S.M., Birkeland, S.E., & Peske, H.G. (2005). A difficult balance: Incentives quality control in alternative certification programs. Cambridge, MA: Harvard Graduate School of Education. Project on the Next Generation of Teachers. Retrieved from http://www.nctq.org/nctq/research/1135274951204.pdf Kane, T.J. (2012, Fall) Capturing the dimensions of effective teaching. Education Next, 12 (4). Retrieved from http://educationnext.org/capturing-the-dimensionsof-effective-teaching/ Kane, T.J., Rockoff, J.E., & Staiger, D.O. (2006). What does certification tell us about teacher effectiveness? Evidence from New York City. Cambridge, MA: Harvard Graduate School of Education. Retrieved from http://www0.gsb.columbia.edu/faculty/jrockoff/certification-final.pdf Kane, T.J. & Staiger, D.O. (2012). Gathering feedback for teaching. Combining highquality observations with student surveys and achievement. MET Project Policy and Practice Brief. Seattle, WA: Bill & Melinda Gates Foundation. Retrieved from http://metproject.org/downloads/MET_Gathering_Feedback_Practioner_Br ief.pdf Kane, T.J., Taylor, E.S., Tyler, J.H., & Wooten, A. L. (2010). Identifying effective classroom practices using student achievement data. (Working Paper 15803). Cambridge, MA: National Bureau of Economic Research. Retrieved from http://www.danielsongroup.org/ckeditor/ckfinder/userfiles/files/Identify ingEffectiveClassroomPractices.pdf Kane, T. J., Taylor, E. S., Tyler, J. H., & Wooten, A. L. (2011). Identifying effective classroom practices using student achievement data. Journal of Human Resources, 46(3), 587-613. Kane, T.J., Wooten, A, L., Taylor, E.S., & Tyler, J.H. (2011, Summer). Evaluating teacher effectiveness. Can classroom observations identify practices that raise achievement? Education Next, 11(3). Retrieved from 28
http://educationnext.org/evaluating-teacher-effectiveness/ Kimball, S. M., White, B., Milanowski, A.T., & Borman, G. (2004). Examining the relationship between teacher evaluation and student assessment results in Washoe County. Peabody Journal of Education 79 (4): 54–78. Koedel, C., Parsons, E., Podgursky, M., & Ehlert, M. (2012). Teacher preparation programs and teacher quality: Are there real differences across programs? Columbia, MO: University of Missouri. Retrieved from http://economics.missouri.edu/workingpapers/2012/WP1204_koedel_et_al. pdf Kukla-Acevedo, S., Streams, M., & Toma, E.F. (2009). Evaluation of teacher preparation programs: A reality show in Kentucky. (IFIR Working Paper). Lexington, KY: Institute for Federalism and Intergovernmental Relations. Retrieved from http://www.ifigr.org/publication/ifir_working_papers/IFIR-WP-200909.pdf Levine, A. (2006). Educating school teachers. Washington, DC: The Education Schools Project. Retrieved from http://www.edschools.org/pdf/Educating_Teachers_Report.pdf Mathers, C., Oliva, M., & Laine, S. W. M. (2008). Improving instruction through effective teacher evaluation: Options for states and districts. TQ Research and Policy Brief. Washington, DC: National Comprehensive Center for Teacher Quality. Retrieved from http://www.tqsource.org/publications/February2008Brief.pdf Medley, D.M. & Coker, H. (1987). The accuracy of principals' judgments of teacher performance. The Journal of Educational Research, 80(4), 242-247. Mellor, L., Lummus-Robinson, M., Brinson, V., & Dougherty, C. (2010). Linking teacher preparation programs to student achievement in Texas. In Institute for Public School Initiatives & T. U. of Texas System (Eds.), Preparing Texas Teachers: A Study of the University of Texas System Teacher Preparation Programs (pp. 5–42). Austin, TX. Mendro, R., Jordan, H., Gomez, E., Anderson, M., & Bembry, K. (1998, April). Longitudinal teacher effects on student achievement and their relation to school and project evaluation. Paper presented at the Annual Meeting of the American Educational Research Association, San Diego, CA. Mihaly, K., McCaffrey, D., Sass, T., & Lockwood, J.R. (2011). Where you come from or where you go? Distinguishing between school quality and the effectiveness of teacher preparation program graduates. (RAND Working Paper 2012-3-2). Santa Monica, CA: RAND. Retrieved from http://aysps.gsu.edu/sites/default/files/documents/12-32_SassMihalyMcCaffreyLockwood-Where_You_Come_From.pdf Milanowski, A. T. (2004). The relationship between teacher performance evaluation scores and student assessment: Evidence from Cincinnati. Peabody Journal of Education 79(4): 33–53.
Milanowski, A., Kimball, S. M., & Heneman, H. G. (2010). Principal as human capital manager: Evidence from two large districts. Madison, WI: Consortium for Policy Research in Education, University of Wisconsin. Retrieved from http://cpre.wceruw.org/publications/School%20HCM%20paper.pdf Milanowski, A. T., Kimball, S. M., & Odden, A. (2005). Teacher accountability measures and links to learning. In L. Stiefel, A. E. Schwartz, R. Rubenstein, & J. Zabel (Eds.), Measuring school performance and efficiency: Implications for practice and research 2005 Yearbook of the American Education Finance Association. (pp. 137–159). Larchmont, NY: Eye on Education. Murnane, R. (1975). The impact of school resources on the learning of inner city children. Cambridge, MA: Ballinger. Murnane, R. J. & Phillips, B. R. (1981). What do effective teachers of inner-city children have in common? Social Science Research, 10(1), 83-100. National Association of Secondary School Principals. (2011, February). Teacher supervision and evaluation. NASSP Board Position Statements. Reston, VA: Author. Retrieved from http://www.nassp.org/Content.aspx?topic=Teacher_Supervision_and_Eval uation National Center for Education Information. (2010). Introduction and overview of alternative routes to certification. Washington, DC: Author. Retrieved from: http://www.teach-now.org/overview.cfm National Commission on Teaching and America’s Future. (1996). What matters most: Teaching for America’s future. Woodbridge, VA: Author. Retrieved from http://nctaf.org/wp-content/uploads/2012/01/WhatMattersMost.pdf National Council for Accreditation of Teacher Education. (2010a). Professional standards for the accreditation of teacher preparation institutions. Washington, DC: Author. Retrieved from http://www.ncate.org/LinkClick.aspx?fileticket=nX43fwKc4Ak%3D&tabid= 669 National Council for Accreditation of Teacher Education. (2010b). What makes a teacher effective? A summary of key findings on teacher preparation. Washington, DC: Author. Retrieved from http://www.ncate.org/LinkClick.aspx?fileticket=JFRrmWqa1jU%3d&tabid= 361 National Council on Teacher Quality. (2013). 2012 State teacher policy yearbook. Improving teacher preparation. National summary. Washington, DC: Author. Retrieved from http://www.nctq.org/stpy11/reports/stpy12_national_report.pdf National Governors Association. (2009). Building a high-quality education workforce: A governor’s guide to human capital development. Washington, DC. Retrieved from http://www.nga.org/files/live/sites/NGA/files/pdf/0905BUILDINGEDUWO RKFORCE.PDF
National Governors Association. (2011). Preparing principals to evaluate teachers. Issue Brief. Washington, DC: National Governors Association Center for Best Practices. Retrieved from http://www.nga.org/files/live/sites/NGA/files/pdf/1110PRINCIPALEVA LUATION. PDF National Research Council. (2010). Preparing teachers. Building evidence for sound policy. Washington, DC: The National Academies Press. Retrieved from http://www.nap.edu/openbook.php?record_id=12882&page=R1 National School Boards Association. (2012). Teacher and principal effectiveness. Issue Brief. Retrieved from http://www.nsba.org/Advocacy/KeyIssues/TeacherQuality/Teachers-Brief.pdf Nelson, B. S., & Sassi, A. (2000). Shifting approaches to supervision: The case of mathematics supervision. Educational Administration Quarterly, 36(4), 553–584. Nunnery, J.A., Kaplan, L.S., Owings, W.A., & Pribesh, S. (2009) The effects of Troops to Teachers on student achievement: A meta-analytic approach. NASSP Bulletin, 93(4), 249-272. Perry, A. (2011). Teacher preparation programs: A critical vehicle to drive student achievement. re:vision, 1. Chapel Hill, NC: The Hunt Institute. Retrieved from http://www.hunt-institute.org/elements/media/files/reVISION-Number-1November-2011.pdf Peterson, K. D. (1987). Teacher evaluation with multiple and variable lines of evidence. American Educational Research Journal 24(2), 311–317. Peterson, K. D. (2000). Teacher evaluation: A comprehensive guide to new directions and practice (2nd ed.) Thousand Oaks, CA: Corwin Press. RAND. (2011). What teacher characteristics affect student achievement? Findings from Los Angeles Public Schools. Research Brief. Santa Monica, CA: Author. Retrieved from http://www.rand.org/content/dam/rand/pubs/research_briefs/2010/RA ND_RB9526. pdf Rich, M. (2013, February 9). Holding states and schools accountable. The New York Times, News Analysis. Retrieved from http://www.nytimes.com/2013/02/10/education/debate-over-federal-rolein-public-school-policy.html?_r=0 Rivkin, S.G., Hanushek, E.A., & Kain, J.F. (2005). Teachers, schools and academic achievement. Econometric, 73(2), 417-458. Rockoff, J. E. (2004). The impact of individual teachers on student achievement: Evidence from panel data. American Economic Review, 94(2), 247-252. Rockoff, J. E., Jacob, B.A., Kane, T.J., & Staiger, D.O. (2008). Can you recognize an effective teacher when you recruit one? New York: National Bureau of Economic Research (Working Paper No. 14485). Retrieved from http://www.dartmouth.edu/~dstaiger/Papers/w14485.pdf Rockoff, J. E. & Speroni, C. (2010). Subjective and objective evaluations of teacher effectiveness. American Economic Review 100 (2), 261–66. 31
Rockoff, J. E., Staiger, D.O. Kane, T.J., & Taylor, E.S. (2009). Providing information on teacher performance to school principals: Evidence from a randomized intervention in New York City. (Center for Education Policy Research, Harvard University Working Paper). Retrieved from http://growththroughlearningillinois.org/Portals/0/Documents/Informatio nAndEmployeeEvaluation.pdf Rockoff, J. E., Staiger, D.O., Kane, T.J., & Taylor, E.S. (2010). Information and employee evaluation: Evidence from a randomized intervention in public schools. (National Bureau of Economic Research Working Paper 16240). Cambridge, MA: National Bureau of Economic Research. Retrieved from http://www.nber.org/papers/w16240 Rockoff, J.E., Staiger, D.O., Kane, T.J., & Taylor, E.S., (2011). Information and employee evaluation: Evidence from a randomized intervention in public schools. Retrieved from http://www.dartmouth.edu/~dstaiger/Papers/2012/Information%20and% 20Evaluatio n%20RSKT%20aer%20accepted.pdf Rothstein, J. (2010). Teacher quality in educational production: Tracking, decay, and student achievement. Quarterly Journal of Economics, 125(1), 175-214. Retrieved from http://qje.oxfordjournals.org/content/125/1/175.full.pdf Rothstein, J. & Mathis, W.J. (2013). Review of two culminating reports from the MET project. Boulder, C: National Education Policy Center. Retrieved from http://nepc.colorado.edu/files/ttr-final-met-rothstein.pdf Rugg, H.O. (1922). Is the rating of human character practicable? Journal of Educational Psychology, 12, 30-42. Sanders, W. & Horn, S.P. (1998). Research findings from the Tennessee value-added assessment system (TVASS) database: implications for educational evaluation and research. Journal of Personnel Evaluation in Education, 12(3), 247-256. Sartain, L., Stoelinga, S.R., & Brown, E.R., Luppescu, S., Matsko, K.K., Miller, F.K., Durwood, C.E., Jiang, J.Y., & Glazer, D. (2011). Rethinking teacher evaluation. Lessons learned from classroom observations, principal-teacher conferences, and district implementation. Research Report. Chicago, IL: Consortium on Chicago School Research at the University of Chicago Urban Education Institute. Retrieved from http://ccsr.uchicago.edu/sites/default/files/publications/Teacher%20Eval %20Report%20FINAL.pdf Sass, T. R. (2008). Teacher preparation pathways, Institutions and programs in Florida. Paper prepared for the Committee on Teacher Preparation Programs. Washington, D.C.: Division of Behavioral and Social Sciences and Education, National Research Council. Sass, T. R. (2011). Certification requirements and teacher quality: A comparison of alternative routes to teaching (Tech. Rep. No. Working Paper 64). Washington, DC: National Center for Analysis of Longitudinal Data in Education
Research. Retrieved from http://www.abcte.org/files/alt.cert.study.2011.pdf Sawchuk, S. (2011a, April 26). Studies link classroom observations to student achievement. Education Week. Retrieved from http://blogs.edweek.org/edweek/teacherbeat/2011/04/studies_link_classroom_observa.html Sawchuk, S. (2011b). What studies say about teacher effectiveness. Education Writers Association Research Brief. Retrieved from http://www.ewa.org/site/DocServer/TeacherEffectiveness.final.pdf?docID=2001 Sawchuk, S. (2012, December 5). Analysis finds wide variation in effectiveness of L.A. teachers. Education Week, 32(13), 5. Sawchuk, S. (2013, January 16). Multiple gauges best for teachers. Education Week, 32(17), 1, 16. Seidel, T. & Shavelson, R.J. (2007). Teaching effectiveness research in the past decade: The role of theory and research design in disentangling meta-analysis results. Review of Educational Research, 77(4), 454-99. Simon, S. (2013, January 9). Research finds way to grade teachers. Study: Test scores just part of formula. Reuters. Newport News, VA: Daily Press, p. 14. Stein, M. K. & D’Amico, L. (2000, April). How subjects matter in school leadership. American Educational Research Association. New Orleans, LA. Retrieved from http://www.lrdc.pitt.edu/hplc/Publications/MKS&LMD-MultSubjAERA2000.pdf Stronge, J.H., Ward, T.J., & Grant, L.W. (2011). What makes good teachers good? A cross-case analysis of the connection between teacher effectiveness and student achievement. Journal of Teacher Education, 62(4), 339-355. Tennessee Higher Education Commission. (2012). 2012 Report card on the effectiveness of teacher training programs. Nashville, TN: Author. Retrieved from http://www.state.tn.us/thec/Divisions/fttt/12report_card/PDF%202012%20Reports/2012%20Report%20Card%20on%20the%20Effectiveness%20of%20Teacher%20Training%20Programs.pdf U.S. Department of Education. (2009). Race to the Top: Program executive summary (Technical Report). Washington, DC: Author. Retrieved from http://www2.ed.gov/programs/racetothetop/executive-summary.pdf Walsh, K., Glaser, D., & Wilcox, D.D. (2006). What education schools aren’t teaching about reading and what elementary teachers aren’t learning. Washington, DC: National Council on Teacher Quality. Retrieved from http://www.nctq.org/nctq/images/nctq_reading_study_app.pdf Walsh, K., & Jacobs, S. (2007). Alternative certification isn’t alternative. Washington, DC: Thomas B. Fordham Institute and National Council on Teacher Quality. Retrieved from http://news.heartland.org/sites/all/modules/custom/heartland_migration/files/pdfs/22264.pdf
Weisberg, D., Sexton, S., Mulhern, J., & Keeling, D. (2009). The widget effect: Our national failure to acknowledge and act on teacher effectiveness. 2nd edition. New York, NY: The New Teacher Project. Retrieved from: http://widgeteffect.org/downloads/TheWidgetEffect.pdf Wells, F.L. (1907). A statistical study of literary merit. Archives of Psychology, 7. Wilson, S.M., Floden, R.E., & Ferrini-Mundy, J. (2001). Teacher preparation research: Current knowledge, gaps, and recommendations. A Research Report prepared for the U.S. Department of Education. Seattle, WA: Center for the Study of Teaching and Policy, University of Washington: Retrieved from http://www.stcloudstate.edu/tpi/initiative/documents/preparation/Teach er%20Prepa ration%20ResearchCurrent%20Knowledge,%20Gaps,%20and%20Recommendations.pdf Winters, M. (2011). Measuring teacher effectiveness: Credentials unrelated to student achievement. Issue Brief, 10. New York, NY: Manhattan Institute for Policy Research. Retrieved from http://www.manhattaninstitute.org/html/ib_10.htm Wise, A.E., Darling-Hammond, L., McLaughlin, M.W, and Bernstein, H.T. (1985, September). Teacher evaluation: A study of effective practices. Special issue: The master teacher. The Elementary School Journal, 6 (1), 60-121. Xu, Z., Hannaway, J. & Taylor, C. (2011). Making a difference? The effects of Teach for America in high school. Journal of Policy Analysis and Management 30(3), 447-469. Zinth, J. D. (2010). Teacher evaluation: New approaches for a new decade. ECS Issue Brief. Denver, CO: Education Commission of the States. Retrieved from http://www.ecs.org/clearinghouse/86/21/8621.pdf
About the author: Leslie S. Kaplan is a retired school administrator with middle, high school, and central office leadership experience. Please address all correspondence to lskaplan@cox.net.
Efficacy, Consequences and Teacher Commitment in the Era of No Child Left Behind

Herbert Ware, Ph.D., Professor of Educational Leadership Emeritus, George Mason University
Jehanzeb Cheema, Ph.D., Adjunct Associate Professor, The Chicago School of Professional Psychology in Washington, D.C.
Anastasia Kitsantas, Ph.D., Professor and Academic Program Coordinator of Educational Psychology, George Mason University

Abstract

This study explored how NCLB-generated consequences relative to goal achievement interact with principal and teacher efficacy beliefs to impact teacher commitment. The 2003-2004 Schools and Staffing Survey (SASS), with 35,910 public school teachers nested within the schools of 7,900 public school principals, was used in a hierarchical linear modeling (HLM) analysis that sought to determine these relationships. The findings showed that teacher commitment was higher in settings that met performance goals. Teacher efficacy and the principal’s role in establishing curriculum and student performance standards positively influenced teacher commitment. With regard to consequences, providing schoolwide resources and offering school attendance choices were associated with increased teacher commitment, while offering supplemental educational services to students was associated with reduced teacher commitment. Educational implications are discussed. The findings help to clarify relationships among leadership practices, maintaining a goal focus and high expectations for student achievement, and monitoring progress toward those goals – all sustaining characteristics of effective schools.

Keywords: teacher efficacy, principal efficacy, teacher commitment, effective schools correlates, NCLB consequences, hierarchical linear modeling (HLM)

Type of article: Empirical paper

Efficacy, Consequences and Teacher Commitment in the Era of No Child Left Behind

In the United States, the No Child Left Behind Act (NCLB) is a federal mandate passed in 2001 that requires all states to create achievement standards for all students, with the ultimate goal of closing the achievement gap by 2014 and increasing accountability through Adequate Yearly Progress (AYP) testing (Dept of Ed., 2004). The NCLB legislation supports a standards-based model of assessing schools based on students’ academic growth and achievement. That is, if certain school districts do not meet AYP, states can submit individual growth plans that document the states’ planned activities and initiatives to meet achievement goals by 2014. Schools that fail to meet AYP benchmarks face consequences ranging from being labeled as “in need of improvement” to a complete school restructuring. Consequences also could include a requirement to write or modify a school improvement plan, placement on an evaluation cycle, replacement of the principal, or a requirement to implement some form of supplemental education services for students (Dept of Ed., 2004). Some of these consequences hold the potential for eroding teacher and principal efficacy beliefs as well as teacher commitment (McCullers & Bozeman, 2010). Therefore, the purpose of the present study, using the 2003-2004 Schools and Staffing Survey (2003-2004 SASS), was to answer the question, “How do NCLB-generated consequences interact with principal and teacher efficacy beliefs to impact teacher commitment?”
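For readers less familiar with HLM, a minimal two-level sketch of the kind of model this design implies (our illustration, with hypothetical variable labels; the study’s actual specification includes additional predictors and controls) treats commitment for teacher i in school j as a level-1 outcome:

Level 1: \text{COMMIT}_{ij} = \beta_{0j} + \beta_{1j}\,\text{TEACHEFF}_{ij} + r_{ij}
Level 2: \beta_{0j} = \gamma_{00} + \gamma_{01}\,\text{PRINEFF}_{j} + \gamma_{02}\,\text{METGOALS}_{j} + u_{0j}

Here TEACHEFF is a teacher-level efficacy measure, PRINEFF is the principal’s efficacy, and METGOALS indicates whether the school met its performance goals. The level-2 coefficients capture how school conditions shift average commitment, and the random terms r_{ij} and u_{0j} carry the within- and between-school variation that motivates a multilevel model in the first place.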
Edmonds (1979) prompted attention to factors affecting student achievement after examining research comparing effective schools with ineffective schools. These factors, or correlates, subsequently associated with effective schools included (a) strong administrative leadership, (b) an expectation that children would not be permitted to fall below a minimum level of basic skills achievement, (c) a focus on achievement as the central mission of the school, (d) an orderly school atmosphere, and (e) frequent monitoring of student progress toward the achievement goals. As expressed today, the effective schools correlates related to this study include a clearly stated and focused mission on learning for all; uncompromising commitment to high expectations for all; instructional leadership; and the monitoring of student progress (Journal for Effective Schools, 2011). The relationships among these factors remain important in responding to NCLB – or to any high-stakes accountability – requirements.

Prior to NCLB’s passage, researchers investigated how principal support and teacher efficacy may influence teacher commitment and teacher turnover using SASS data (Ingersoll, 2001; Singh & Billingsley, 1998; Ware & Kitsantas, 2007, 2011). For example, Singh and Billingsley (1998) found that both principal and peer support had strong positive effects on teacher commitment. Further, Ingersoll (2001) found that after controlling for characteristics of both teachers and schools, “schools that provide more administrative support to teachers, turnover rates are distinctly lower” (p. 518) and "schools with higher levels of faculty decision-making influence and autonomy have lower levels of turnover" (p. 519). Ware and Kitsantas (2007), limiting their data to public schools that were expected to meet goals in some form, found that teacher commitment was positively influenced by teacher efficacy beliefs in (a) enlisting the support of their administrator, (b) acting collectively in influencing decision making, and (c) managing their own classroom. In a
subsequent multi-level analysis, Ware and Kitsantas (2011) found that principal efficacy beliefs relative to the principal’s influence on curriculum, standards, policy, and spending affected teacher commitment through the impact of those variables on the teacher efficacy beliefs. In their analysis, they did not find that meeting performance goals affected teacher commitment. However, “goals” at that time did not necessarily refer to student performance goals (Ware & Kitsantas, 2011). Other researchers have reported that teacher commitment relates to student learning (Firestone & Pennell, 1993), an important aspect of maintaining effective schools. The findings of these studies suggest that there is a complex, multi-level, relationship among teacher commitment, efficacy, turnover, and student achievement. These complex relationships need to be understood more clearly in order to address the demands of NCLB – or any high-stakes accountability. In the following sections, we will review the literature related to these constructs. Teacher Commitment Studies have examined teacher commitment from the standpoint of three constructs: organizational commitment; commitment to student learning; and professional commitment. Organizational commitment refers to the extent to which an individual views himself or herself as involved in a particular organization (Mowday, Steers & Porter, 1979). They characterized organizational commitment with three factors: “(1) a strong belief in and acceptance of the organization’s goals and values; (2) a willingness to exert considerable effort on behalf of the organization; and (3) a strong desire to maintain membership in the organization” (Mowday et al, 1979, p. 226). Kushman (1992) has indicated a linkage between teacher commitment to student learning and student achievement. In Kushman’s context, teachers’ commitment to student learning is characterized by three components: the teachers’ sense of efficacy, their expectations that students can learn, and their willingness to exert the effort required to facilitate that learning. Occupational commitment, or professional commitment, differs from organizational commitment (Coladarci, 1992; Kushman, 1992). While organizational commitment reflects a commitment to a specific workplace and an intention to remain there (Kushman, 1992), occupational commitment refers to commitment to the profession or organization (Coladarci, 1992). Coladarci (1992) addresses commitment to teaching as commitment to a profession and operationally defines it in terms of the teacher’s willingness to choose the profession again. In prior research using the SASS, this distinction between organizational commitment and professional commitment has manifested itself in two ways. Ingersoll (2001) has examined teacher turnover in the context of place—teacher commitment to remain at a site, an element of organizational commitment. Other researchers have operationalized “teacher commitment” in terms of intent to remain in the profession (Riehl & Sipple, 1996; Singh & Billingsley, 1998; Ware & Kitsantas, 2007; 2011). 37
Both professional commitment and organizational commitment have been found to impact Organizational Citizenship Behavior (OCB) (Bogler & Somech, 2004; Somech & Bogler, 2002). Paine and Organ (2000) have characterized OCB as discretionary behavior that promotes an organization’s effectiveness and that is not explicitly performed for a formal reward system. Somech and Bogler (2002) have noted that OCB goes beyond the role of expectations and, in the school context, can be directed to the student, the team (as colleagues) and the organization (as the school). They found that teachers who were highly committed to their organization (as the school) or to their profession reported themselves to be more engaged in OCB. These linkages among the forms of teacher commitment and OCB and commitments to student learning demonstrate the importance of teacher commitment. It should be pointed out, however, that one can have professional commitment while at the same time lacking organizational commitment— commitment to a particular site of employment. This can yield turnover at a site without creating withdrawal from the profession. If teacher commitment, generally, holds teachers in the profession or to a specific place in the profession and if teacher commitment affects student achievement (Firestone and Pennell, 1993; Kushman, 1992), then it becomes important to determine factors that affect teacher commitment. Issues associated with principal and teacher efficacy beliefs would be among those factors (Singh & Billingsley, 1998; Ware & Kitsantas, 2007, 2011). In sum, organizational commitment, professional commitment, and organizational citizenship behavior play important roles in schools, including maintaining a focus on the school’s mission and sustaining commitment to high expectations for student achievement. Efficacy in the School Context Teacher Efficacy Beliefs Teacher efficacy refers to the degree to which teachers feel personally capable of effectively teaching all students, even unmotivated students, to attain successful outcomes (Tschannen-Moran & Woolfolk-Hoy 2001). Marks and Louis (1997), in addressing teacher empowerment in different domains, have shown how teacher empowerment affects pedagogical quality and student academic performance indirectly through school organization for instruction. Ware and Kitsantas (2007; 2011) posited two forms of teacher efficacy: teacher efficacy to enlist administrative support and teacher efficacy for classroom management. The former refers to teacher’s sense of their ability to engage the principal in such matters as rule enforcement, discussion of instructional practices and goal setting. The latter refers to the teachers’ sense of control relative to selecting content, teaching techniques, 38
determining amount of homework, and evaluating and disciplining students. Teacher self-efficacy influences the kind of learning environment teachers provide students. For example, Bandura (1993) argued that teachers with a high sense of instructional efficacy provide students with a more challenging and supportive academic environment than do those with a low sense of instructional efficacy. Overall, research shows that teacher efficacy is a significant contributor to teacher commitment (Bogler & Somech, 2004; Coladarci, 1992; Ebmeier, 2003; Ware & Kitsantas, 2007). Collective efficacy beliefs Bandura (1993) viewed teacher collective efficacy as the result of the contributions of organizational interdependencies within a school. For Goddard, Hoy, and Woolfolk Hoy (2004), “for schools, perceived collective efficacy refers to the judgment of teachers in a school that the faculty as a whole can organize and execute the courses of action required to have a positive effect on students” (p. 4). For Ware and Kitsantas (2007; 2011), collective efficacy referred to teachers’ belief in their ability to influence decision making relative to establishing the curriculum, hiring and evaluating teachers, setting discipline policy and determining how the school budget would be spent. Greater teacher influence on decisions affecting their work increases faculty confidence in their ability to educate children (Goddard, Hoy, & Woolfolk Hoy, 2004). Goddard, using hierarchical linear modeling (HLM), found that teacher collective efficacy was positively related to between school differences in student achievement in reading and mathematics (Goddard, 2001). Collective efficacy beliefs also play a significant role in achieving group goals (Goddard & Skrla, 2006). However, in examining the relationships among faculty and school demographics and teacher collective efficacy, they found surprisingly less of the variance in collective efficacy explained by the demographic measures than they had expected. This led them to note, “Thus, it is important for researchers to continue the study of efficacy beliefs in search of their unique contributions to organizational performance” (Goddard & Skrla, 2006, p. 229). Collective efficacy strengthens teacher commitment, a potentially important aspect of school performance (Ware & Kitsantas, 2007). Principal efficacy beliefs In addressing principals’ efficacy, Tschannen-Moran and Gareis (2004) describe it as one’s belief in their ability to create change, which they view as an important characteristic of an effective school leader. Principals with higher perceptions of self-efficacy have a tendency to see within themselves the power to execute their roles, whereas principals with low self-efficacy tend to have a sense of inability to control their environment. Consequently, they are less likely to identify and execute appropriate strategies for addressing issues (Tschannen-Moran & 39
Gareis, 2004). Using questions from the 1999-2000 SASS, Ware and Kitsantas operationalized principal efficacy with four factor-analytically derived scales: the principal's personal influence beliefs relative to (a) curriculum and standards; (b) policy and spending; (c) professional development determination; and (d) the principal's engagement in the operation of the school (Ware & Kitsantas, 2011). Using HLM, Ware and Kitsantas (2011) found mixed influences for principal efficacy variables on teacher commitment through the influence of those variables on teacher efficacy beliefs. A stronger principal belief regarding his or her influence on policy and spending issues was associated with lower teacher commitment. On the other hand, if teachers felt they could enlist their administrator's support, they were more likely to feel committed. In closing, teacher commitment and teacher efficacy, collective efficacy, and principal efficacy beliefs have an important bearing on student achievement.

Issues Associated with NCLB Requirements

In 1965, President Lyndon B. Johnson signed into law the Elementary and Secondary Education Act (ESEA) as a means to address the high and growing poverty rate, the first government-enacted initiative to fund public schools in the United States (Dole, 2004). Title I was one of the legislation's main cornerstones. Title I provisions allow the government to distribute funds to school districts with high poverty rates to help develop intervention programs to increase student achievement and reduce the dropout rate. Since then, Congress has reauthorized the ESEA every five years, and it has undergone several changes. For example, in the 1980s, President Ronald Reagan signed the Education Consolidation and Improvement Act (ECIA) to limit the federal government's powers to regulate Title I funds (Stringfield, 1991). The next major change, signed into law by President George W. Bush in 2002, emphasized high-stakes accountability, measurable yearly progress, and a reduced achievement gap: the No Child Left Behind Act (NCLB). The NCLB Act was an important step towards ensuring that schools were held accountable for every student's academic achievement. According to NCLB, states could face sanctions if each student subgroup did not achieve state-defined "proficiency" and if the achievement gap were not closed by 2014. The law's strong focus on standardized testing led to some unintended consequences, such as a narrowed curriculum and a neglect of non-tested academic subjects (NSBA, 2012). These problems have led to the Obama administration's recently developed Blueprint for Reform (U.S. Department of Education, 2010). Although NCLB was due for reauthorization in 2007, Congress has yet to reauthorize it, a fact attributed to philosophical differences among its members (Ayers & Brown, 2011). As a result, President Obama has granted 10 states a waiver from NCLB requirements. This waiver allows specific states to adopt new reform initiatives in
place of NCLB. To eliminate the 2014 “proficiency for all” mandate and the achievement gap closure timeline, and avoid sanctions, states would need to maintain a high-stakes assessment framework and develop new annual measurable objective goals based on their own metrics (Muskal, 2012; NSBA, 2012). Additionally, states would also be required to develop college or career readiness standards for all students graduating from high school. While these developments have changed the breadth of the NCLB’s implementation in its original form, fundamental elements of effective schools criteria continue: a clearly stated and focused mission which sets achievement goals for all students, a commitment to high expectations for all, instructional leadership, and frequent monitoring of student achievement. For states that have not sought nor been granted this waiver, the consequences associated with a failure to meet AYP – and its school achievement goals – remain; the distribution of additional resources to the school or a requirement that it write or modify a school improvement plan (U.S. Department of Education, n.d.). Even without NCLB and AYP, however, the highstakes nature of school and student accountability persists as a school reform focus. Given the reciprocal relationship between student achievement and efficacy and the relationship of efficacy to teacher commitment, this prompts the question, “Do the consequences of a failure to meet NCLB AYP impact teacher commitment?” Research Questions Multiple factors have been found to impact teacher commitment: teachers’ efficacy beliefs, principals’ efficacy beliefs and their engagement in the school’s operation. A reciprocal relationship between teacher commitment and student achievement has been demonstrated, and teacher commitment has been shown to impact organizational commitment. These relationships were revealed in research without considering NCLB’s high stakes, test-based accountability. In our analysis, we sought to determine, for the early years of NCLB implementation: 1. To what extent does meeting, or failing to meet NCLB expectations impact teacher commitment? 2. Do teacher efficacy beliefs relative to enlisting administrative direction, influencing decision making, and enacting classroom management positively influence teacher commitment? 3. Do principal efficacy beliefs impact teacher commitment independent of their interaction with teacher efficacy beliefs? 4. Do principal efficacy beliefs impact teacher commitment through their influence on teacher efficacy beliefs? 5. When coupled with principal efficacy beliefs, how do the multiple forms of rewards and punishment affect teacher commitment through the interaction of rewards or punishments and principal efficacy beliefs with teacher efficacy beliefs? 41
6. Is the impact of teacher efficacy beliefs on teacher commitment influenced by the reward-punishment structure?

Method

Participants

The data were collected through the Public School Teacher questionnaire (TQ) and Public School Principal questionnaire (PQ) of SASS 2003-04 (U.S. Department of Education, 2007). The restricted-use version of the dataset was utilized. The surveys sampled 43,240 teachers and 8,140 principals. These numbers were reduced to 35,910 teachers and 7,900 principals after merging the teacher and principal data files. Since the primary variable of interest in this study was commitment to teaching, teachers who chose "undecided" as their response to the question, "How long do you plan to remain in teaching?" were excluded from the analysis. This exclusion was justified on the grounds that such "undecided" responses indicate neither the presence nor the absence of commitment to teaching.

Data Analysis

Following Ware and Kitsantas (2007), commitment to teaching and three teacher efficacy scales were extracted through factor analysis from the teacher data file. Three principal efficacy scales based on Ware and Kitsantas (2011) were extracted through a similar procedure from the principal data file. The principal components method with varimax rotation was used for factor extraction in both instances. Several different extraction and rotation methods were tried, but these variations did not have any large effect on the coefficients used to construct factor scores. Only factors with eigenvalues larger than 1.0 were retained. Factor analysis was based on a random selection of 3,590 teachers and 790 principals, representing 10% of the corresponding samples. Teacher and principal efficacy scales obtained from factor analysis were then used in a series of hierarchical linear models (HLM) with teachers (level 1 units) nested within principals (level 2 units) in order to predict commitment to teaching. For the HLM analysis, all cases were used (35,910 teachers and 7,900 principals). Appropriate sampling weights were used for all analyses.

Measures

The three teacher efficacy factors extracted were (1) Teacher efficacy to enlist administrative direction; (2) Collective efficacy – Teachers' influence on decision making; and (3) Teacher efficacy for classroom management. With one exception, the items chosen for construction of the efficacy scales and teacher commitment were the same as those used by Ware and Kitsantas (2007). The exception was item Q59i in the SASS 1999-2000 TQ, which was absent from the SASS 2003-04 TQ. The affected scale was teacher efficacy to enlist administrative direction. However, this omission had virtually no effect on the reliability of the underlying scale. In order to
evaluate whether it was appropriate to use the factor analysis, every run of the factor analysis procedure was preceded by a test of sphericity. Bartlett’s test of sphericity was significant in all instances which led to rejection of the null hypothesis that any of the correlation matrices was an identity matrix (p < .001). The three teacher efficacy factors together accounted for 55.08% of the variation in their component items. Items comprising the three efficacy scales are presented in Table 1 along with descriptive statistics and factor loadings. Based on the cut-offs recommended for orthogonal rotation by Comrey and Lee (1992), 10 out of 16 factor loadings met the criterion for “excellent,” (loading at .71 or higher); 4 met the criterion for “very good” (loading at .64); and one met the criterion for “good” (loading at .55). The commitment to teaching scale accounted for 48.33% of the variation in underlying items with all of the loadings being either “very good” or “excellent” based on Comrey and Lee (1992) criteria. Items comprising the commitment to teaching are presented in Table 2 along with descriptive statistics and factor loadings. The correlations among commitment to teaching and its teacher-context efficacy predictors are shown in Table 3. Teacher efficacy to enlist administrative direction (T1). This scale included 5 items that measured a teacher’s perception of support provided in their work by the school principal. All items comprising this scale were measured on a 1 = “strongly agree” to 4 = “strongly disagree” Likert scale. Item responses were inverted and scaled in such a way that higher scores on this variable were indicative of higher perception of principal support. The Cronbach’s alpha for this scale was .86 . Collective efficacy – Teachers’ influence on decision making (T2). This scale included 6 items that measured a teacher’s perception of his or her own influence on administrative decision-making at school. Teachers responded on a 1 = “no influence” to 4 = “a great deal of influence” Likert scale. Higher scores on this variable were indicative of higher perception of involvement in decision-making. The Cronbach’s alpha for this scale was .78 . Teacher efficacy for classroom management (T3). This scale included 5 items that measured a teacher’s perception of control in the classroom. All items comprising this scale had the following four response categories: 1 = no control, 2 = minor control, 3 = moderate control, 4 = a great deal of control. Higher scores on this variable were indicative of higher perception of classroom control. The Cronbach’s alpha for this scale was .73 . Commitment to teaching (Y). This scale included 4 items that measured a teacher’s perception of their own commitment to teaching. The wording of all items except one (item 349) required inversion of response categories in order for all items to have an identical scale. Two of the items (item 349 and item 350) for this scale had the following four response categories: 1 = strongly agree, 2 = somewhat agree, 3 = 43
somewhat disagree, 4 = strongly disagree. Item 382 had the following five response categories: 1 = certainly would become a teacher, 2 = probably would become a teacher, 3 = chances about even for and against, 4 = probably would not become a teacher, 5 = certainly would not become a teacher. For the fourth item (item 383), the response categories were: 1 = as long as I am able, 2 = until I am eligible for retirement, 3 = will probably continue unless something better comes along, 4 = definitely plan to leave teaching as soon as I can. Higher scores on this variable were indicative of higher commitment to teaching. The Cronbach’s alpha for this scale was .65 . Based on the way the three teacher efficacy scales and the dependent variable were operationalized, we expected commitment to teaching to be positively associated with each of the three teacher efficacy scales. Three principal efficacy variables were derived from the PQ (one through factor analysis and two from individual items) for use as principal-context predictors of commitment to teaching. The items comprising these variables are presented in Table 4 along with descriptive statistics. The correlations among principal-context efficacy variables are presented in Table 5. Curriculum and standards influence (P1). This scale is based on 4 items that measured a principal’s perception of their own influence on establishing curriculum and for setting performance standards for students. All items comprising this scale had the following four response categories: 1 = no influence, 2 = minor influence, 3 = moderate influence, 4 = major influence. Factor analysis on these four items identified one factor with eigenvalue larger than 1.0 and which accounted for 69.22% of the variation in underlying items with all of the loadings being “excellent” based on Comrey and Lee (1992) criteria for orthogonal rotation . Higher scores on this scale were indicative of higher perception of influence on curriculum and standards. The Cronbach’s alpha for this scale was .85 . Policy influence (P2). This variable was based on item 98 in PQ and measured a principal’s perception of their own influence in setting discipline policy at school. The four response categories were: 1 = no influence, 2 = minor influence, 3 = moderate influence, 4 = major influence. Higher scores on this variable were indicative of higher perception of policy influence. Spending influence (P3). This variable was based on item 105 in PQ and measured a principal’s perception of their own influence in deciding how the school budget is spent. The four response categories were: 1 = no influence, 2 = minor influence, 3 = moderate influence, 4 = major influence. Higher scores on this variable were indicative of higher perception of spending influence.
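The scale-construction steps just described (Bartlett's test of sphericity, principal-components extraction with varimax rotation, and Cronbach's alpha) can be illustrated with a short sketch. This is not the authors' code: it assumes the pandas and factor_analyzer libraries, a hypothetical file of item responses, and hypothetical column names keyed to the SASS item numbers, and it ignores the sampling weights used in the actual analysis.

```python
# Illustrative sketch only: varimax-rotated principal-component extraction and a
# reliability check for a block of survey items. File and column names are
# hypothetical placeholders; sampling weights are ignored.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity

items = pd.read_csv("teacher_items.csv")  # hypothetical file of item responses

# Bartlett's test: the item correlation matrix should not be an identity matrix
chi2, p = calculate_bartlett_sphericity(items)
print(f"Bartlett chi-square = {chi2:.1f}, p = {p:.4f}")

# Principal-components extraction with varimax rotation, retaining three factors
fa = FactorAnalyzer(n_factors=3, rotation="varimax", method="principal")
fa.fit(items)
print(pd.DataFrame(fa.loadings_, index=items.columns))  # factor loadings

def cronbach_alpha(scale_items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = scale_items.shape[1]
    item_vars = scale_items.var(axis=0, ddof=1).sum()
    total_var = scale_items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# e.g., alpha for a hypothetical five-item "enlist administrative direction" scale
print(cronbach_alpha(items[["q330", "q331", "q337", "q340", "q342"]]))
```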
In addition to the three teacher and three principal efficacy measures, a number of dummy variables based on rewards and punishments awarded to schools and their principals for meeting or failing to meet performance standards were investigated as predictors of commitment to teaching. The SASS 2003-04 PQ asked principals questions about three different types of rewards (if their school met performance standards) and eight different types of punishments (if the school did not meet performance standards). These questions and their descriptive statistics are presented in Table 6. Each of these eleven questions were answered as either “yes” or “no” and was coded as a dummy variable with a value of 1 for yes and a value of 0 for no. The three reward questions are denoted by Rs (s = 1, 2, 3) and the eight punishment questions are denoted by Xt (t = 1, 2,…, 8). Data analytic approach After validating all the scales, the three teacher efficacy measures and the three principal efficacy measures were used to predict commitment to teaching. All six predictors were standardized to have a mean of 0 and a standard deviation of 1. The principals were divided into four groups based on whether or not the state or school district had established school performance standards, whether or not the school was evaluated on such standards during the last school year (2002-03), and whether or not the school met those performance standards. The groups were formed in such a way that they were mutually exclusive and a principal could belong to one and only one of these groups. Group 1 (n = 550) included only those principals whose district or state had not established any school performance standards. This group was not evaluated on such standards. Group 2 (n = 600) included principals whose district or state had established school performance standards but the school performance was not evaluated during the 2002-03 school year. Group 3 (n = 3,360) included principals whose district or state had established school performance standards, the performance was evaluated in 2002-03 and the school partially met or failed to meet the standards. Group 4 (n = 3,390) included principals whose district or state had established school performance standards, the performance was evaluated in 2002-03 and the school met the standards. The school principal group membership decision process is summarized in Figure 1. Each of the first three groups was represented by a dummy variable that took a value of 1 if the principal belonged to that group and 0 otherwise. Group 4 served as the reference category. Once the school principal groups were formed, a series of HLM models were estimated. Following Raudenbush and Bryk (2002), the first of these is the unconditional or base model (model 1). This model allowed us to separate the proportion of variation in commitment to schooling that was due to differences between teachers (i.e. within principals) from that attributable to differences between principals. 45
Y_{ij} = \beta_{0j} + r_{ij}
\beta_{0j} = \gamma_{00} + u_{0j}     (Model 1)

In model 1, for a teacher i nested within principal j, Y_{ij} is the commitment to teaching, \beta_{0j} is the mean teacher commitment for principal j, \gamma_{00} is the grand mean commitment to teaching across all 35,910 teachers, and r_{ij} and u_{0j} are the error variance components for the level 1 and level 2 equations, respectively. In order to determine whether there was any difference in mean commitment to teaching between teachers who belonged to schools that met all performance standards (group 4) and those that belonged to the remaining three groups, the three group dummy variables (G1, G2, and G3) were added to model 1. The resulting HLM equations are given as model 2.

Y_{ij} = \beta_{0j} + r_{ij}
\beta_{0j} = \gamma_{00} + \sum_{l=1}^{3} \gamma_{0l} G_{lj} + u_{0j}     (Model 2)

Next, model 3 was specified by adding the three teacher-context predictors (T1, T2, and T3) to model 1. Estimation of model 3 allowed us to determine the proportion of within-principal variation in commitment to teaching that can be explained by the three teacher efficacy measures.

Y_{ij} = \beta_{0j} + \sum_{k=1}^{3} \beta_{kj} T_{kij} + r_{ij}
\beta_{0j} = \gamma_{00} + u_{0j}
\beta_{kj} = \gamma_{k0}     (Model 3)

In order to estimate the proportion of between-principal variation in commitment to teaching that can be explained by principal efficacy measures, model 4 was fitted, which included the three principal-context predictors (P1, P2, and P3) in addition to the three teacher-context predictors. However, for this model the level 1 partial slope coefficients were not allowed to vary across principals.
Y_{ij} = \beta_{0j} + \sum_{k=1}^{3} \beta_{kj} T_{kij} + r_{ij}
\beta_{0j} = \gamma_{00} + \sum_{m=1}^{3} \gamma_{0m} P_{mj} + \sum_{l=1}^{3} \gamma_{0l} G_{lj} + u_{0j}
\beta_{kj} = \gamma_{k0}     (Model 4)

Finally, the full model (model 5) was estimated, which allowed the level 1 partial slope coefficients to be functions of the level 2 predictors.

Y_{ij} = \beta_{0j} + \sum_{k=1}^{3} \beta_{kj} T_{kij} + r_{ij}
\beta_{0j} = \gamma_{00} + \sum_{m=1}^{3} \gamma_{0m} P_{mj} + \sum_{l=1}^{3} \gamma_{0l} G_{lj} + u_{0j}
\beta_{kj} = \gamma_{k0} + \sum_{m=1}^{3} \gamma_{km} P_{mj} + \sum_{l=1}^{3} \gamma_{kl} G_{lj} + u_{kj}     (Model 5)

In order to investigate how the various rewards and punishments awarded to schools and their principals, based on whether they did or did not meet performance standards, affected commitment to teaching, two additional HLM models were estimated. The purpose of model 6 was to look at the effects of the various types of rewards (R_s) awarded for meeting school performance standards on commitment to teaching.

Y_{ij} = \beta_{0j} + \sum_{k=1}^{3} \beta_{kj} T_{kij} + r_{ij}
\beta_{0j} = \gamma_{00} + \sum_{m=1}^{3} \gamma_{0m} P_{mj} + \sum_{s=1}^{3} \gamma_{0s} R_{sj} + u_{0j}
\beta_{kj} = \gamma_{k0} + \sum_{m=1}^{3} \gamma_{km} P_{mj} + \sum_{s=1}^{3} \gamma_{ks} R_{sj} + u_{kj}     (Model 6)
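As an illustrative aside before turning to model 7, a specification of this general form (level 1 teacher predictors, level 2 principal predictors, and cross-level interactions) might be sketched in Python as follows. The authors used dedicated HLM software; this is only an analogous mixed-model sketch, with hypothetical file and variable names and no sampling weights.

```python
# Hypothetical sketch of a model 4-6 style specification: teacher-level scales,
# principal-level predictors and group dummies, and sample cross-level interactions,
# with a random intercept and random slopes across principals.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("sass_merged.csv")  # hypothetical teacher-level file with principal IDs

formula = ("commit ~ T1 + T2 + T3"
           " + P1 + P2 + P3 + G1 + G2 + G3"   # principal-level (level 2) terms
           " + T1:P1 + T2:P1 + T3:P1")        # sample cross-level interaction terms
model = smf.mixedlm(formula, data=df, groups=df["principal_id"],
                    re_formula="~ T1 + T2 + T3")  # random intercept and slopes
print(model.fit().summary())
```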
Model 7 was fitted in order to estimate the effects of the various types of punishments (X_t) imposed for failing to meet school performance standards on commitment to teaching.
Y_{ij} = \beta_{0j} + \sum_{k=1}^{3} \beta_{kj} T_{kij} + r_{ij}
\beta_{0j} = \gamma_{00} + \sum_{m=1}^{3} \gamma_{0m} P_{mj} + \sum_{t=1}^{8} \gamma_{0t} X_{tj} + u_{0j}
\beta_{kj} = \gamma_{k0} + \sum_{m=1}^{3} \gamma_{km} P_{mj} + \sum_{t=1}^{8} \gamma_{kt} X_{tj} + u_{kj}     (Model 7)

Results

In order to predict commitment to teaching from teacher and principal efficacy variables under a variety of conditions, seven HLM models were estimated. The coefficient estimates for these models are presented in Tables 7, 8, and 9, while the corresponding variance components are presented in Table 10. Variance component estimates produced by model 1 showed that of the total variation in commitment to teaching, 0.8475/(0.8475 + 0.0945) = .90, or 90%, was due to differences between teachers nested within principals (i.e., due to differences between teachers). The remaining variation in commitment, 0.0945/(0.8475 + 0.0945) = .10, or 10%, can be attributed to differences between principals. In order to see whether the various principal groups differed in terms of mean commitment to teaching, model 2 was estimated. In terms of mean commitment to teaching, results showed that group 4, which was comprised of principals whose schools successfully met performance standards, was significantly different from group 3, which was comprised of principals whose schools failed to meet performance standards. No such difference was observed between groups 1 and 4, or between groups 2 and 4. The estimate of within-principal variation in commitment to teaching due to teacher efficacy measures obtained from model 3 was (0.8475 - 0.7123)/0.8475 = .16, or 16%. This translated into 0.90 x 0.16 = 0.14, or 14%, of the total variation in commitment to teaching explained by the three teacher efficacy measures. All three teacher efficacy measures were found to be significant predictors of commitment to teaching (p < .001). The largest effect was observed for teacher efficacy to enlist administrative support. A one standard deviation increase in this measure raised commitment to teaching by 0.29 standard deviations. Similar effects for collective efficacy and classroom management efficacy were 0.12 and 0.10 standard deviations, respectively.
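For readers who want to see how such a variance decomposition might be reproduced, the sketch below fits a two-level random-intercept model and computes the intraclass correlation from its variance components. It is illustrative only: the variable names are hypothetical, statsmodels is used in place of the HLM software, and the SASS sampling weights are ignored.

```python
# Minimal sketch (not the authors' code): a two-level random-intercept model of
# teacher commitment with teachers nested within principals, and the intraclass
# correlation computed from its variance components. Column names (commit, T1-T3,
# principal_id) are hypothetical stand-ins for the SASS-derived variables.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("sass_teacher_level.csv")  # hypothetical analysis file

# Unconditional model (model 1 analogue): random intercept per principal
m1 = smf.mixedlm("commit ~ 1", data=df, groups=df["principal_id"]).fit()
between = m1.cov_re.iloc[0, 0]      # between-principal variance
within = m1.scale                   # within-principal (residual) variance
icc = between / (between + within)  # share of variance lying between principals
print(f"ICC = {icc:.2f}")           # the paper reports roughly .10

# Model 3 analogue: add the three standardized teacher efficacy predictors
m3 = smf.mixedlm("commit ~ T1 + T2 + T3", data=df, groups=df["principal_id"]).fit()
print(m3.summary())
```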
The estimate of between-principal variation in commitment to teaching due to principal efficacy measures obtained from model 4 was (0.0945 - 0.0756)/0.0945 = .20, or 20%. This translated into 0.10 x 0.20 = 0.02, or 2%, of the total variation in commitment to teaching explained by the three principal efficacy measures. Thus, the three teacher efficacy measures and the three principal efficacy measures together were able to explain 16% of the total variation in commitment to teaching. However, model 4 showed that the inclusion of teacher and principal efficacy predictors in the model resulted in the disappearance of the difference in commitment to teaching between groups 3 and 4. The inclusion of principal efficacy measures did not have any impact on the estimates for the teacher efficacy measures. The coefficient estimates and significance pattern for the teacher efficacy measures were identical in models 3 and 4. Of the three principal efficacy measures, only curriculum and standards influence was found to be a significant predictor of commitment to teaching (p < .01). However, the magnitude of the effect was small. A one standard deviation increase in curriculum and standards influence was found to raise commitment to teaching by a mere 0.02 standard deviations. Model 5 allowed the level 1 partial slope coefficients to be specified as functions of the level 2 predictors. However, this did not result in any change in the significance pattern observed in the model 4 results. All new parameters estimated for model 5 were found not to be statistically significant, while the magnitudes of the significant coefficients remained unchanged. In order to assess the effect of different types of rewards for meeting school performance standards on commitment to teaching, model 6 was estimated. Since, as in model 5, a very large number of coefficients were not significant, the model was re-estimated with only the significant predictors included. Results showed a significant effect for only one type of reward, cash bonus or additional resources for teachers. There was a significant difference in mean commitment to teaching between the group of schools that was awarded such a performance bonus and the group that was not (p < .001), with commitment to teaching being higher for the latter group. The effect of classroom management efficacy on commitment to teaching was also found to be significantly different between these two groups (p < .01). In order to assess the effect of different types of punishments for not meeting school performance standards on commitment to teaching, model 7 was estimated. Results showed that the group that was penalized by a reduction in resources had a higher mean commitment to teaching as compared to the group that did not receive such punishment (p < .05). It was also found that the requirement to provide additional supplemental educational services negatively moderated the effect of teacher efficacy to enlist administrative support on commitment to teaching. In other words, the stipulation to provide extra, supportive services to low-achieving
schools tended to reduce teacher commitment, the requirement appearing to reduce teachers' belief that they could obtain their principal's assistance to help improve student achievement. The effect of classroom management efficacy on commitment to teaching was found to be significantly different between the group that was penalized by a reduction in resources and the group that was not.

Discussion

In the era of NCLB—even with the explicit modifications in President Obama's waiver to ten states—student achievement remains the outcome focus. In the context of the effective schools correlates, this emphasis on setting and meeting high achievement goals for all students, on frequent monitoring of student progress, and on instructional leadership is important independent of NCLB. The reciprocal relationship between teacher commitment and student achievement (Firestone & Pennell, 1993; Kushman, 1992) provides some evidence that teacher commitment may be important in facilitating a school's persistent attention to student achievement and monitoring student progress. The aim of the present study was to examine the relationship of NCLB-generated consequences to principal and teacher efficacy beliefs and teacher commitment. Our first research question concerned the impact of failing to meet established performance standards on teacher commitment. Our Model 2 showed that teacher commitment was higher in settings that met performance standards. While we do not know if higher commitment led to meeting standards or if meeting standards led to higher commitment, the association between student achievement and teacher commitment is consistent with Kushman (1992), and Firestone and Pennell (1993). This finding is inconsistent with Ware and Kitsantas (2011) where, for SASS before NCLB, meeting performance goals had neither a direct nor a cross-level interaction impact (with principal efficacy measures) on teacher commitment. In the 1999-2000 SASS, the meaning of "performance goals" was not specified and could have addressed goals as varied as student performance or increasing the number of volunteers at the school (Ware & Kitsantas, 2011). In addition to performance goals in the earlier study lacking specificity or concreteness, their outcomes were not directly tied publicly to teacher or principal accountability, factors that may also have affected efficacy beliefs. To us, this finding of a relationship of meeting student performance standards to teacher commitment adds credence to the importance of specifying NCLB goals in terms of student performance and public outcomes for principals and teachers. Next, we were interested in whether three forms of teacher efficacy beliefs – to enlist administrative direction, to influence decision making, and for their own classroom management – positively impacted teacher commitment (our Model 3). All three forms of teacher efficacy beliefs were associated with increased teacher
commitment, findings consistent with Ware and Kitsantas (2007) using SASS data prior to NCLB. The importance of enlisting administrator support to teacher commitment is consistent with Singh and Billingsley (1998). The link between influencing decision making and commitment is consistent with Ingersoll’s (2001) finding that turnover rates are lower when administrative support to teachers is high. Ware and Kitsantas (2007; 2011) viewed “influence on decision making,” constructed as in this analysis, as a form of collective efficacy. The relationship of this variable to teacher commitment affirms its importance to organizational performance addressed by other researchers (Goddard, 2001; Goddard, Hoy & Woolfolk Hoy, 2004; Goddard, LoGerfo & Hoy, 2004; Goddard & Skrla, 2006). Our third question concerned the direct impact of the principal efficacy variables on teacher commitment, independent of the interaction of principal and teacher efficacy measures (Model 4). The results showed that only the principal’s belief in their own influence in establishing curriculum and setting performance standards for students had a direct positive impact on teacher commitment. This differs from the findings of Ware and Kitsantas (2011) with the pre-NCLB 1999-2000 SASS. In that analysis, the principals’ beliefs in their own influence on curriculum and student standards was not significant. This suggests that in the NCLB era, it is important for the principal to assume a significant role in this regard, and clarifies the meaning of strong instructional leadership in the effective schools context. In this model, all three of the teacher efficacy beliefs were associated with increases in teacher commitment. How does NCLB’s reward-penalty structure influence teacher commitment when considered in the context of principal and teacher efficacy beliefs (our Model 6)? Cash bonuses and additional resources for teachers (as opposed to school wide use) actually reduced teacher commitment directly. Indirectly, “cash bonuses or additional resources for teachers” increased teacher commitment slightly (but significantly) through its influence on teacher classroom management efficacy beliefs. Generally, extrinsic rewards of this type do not appear to strengthen teacher commitment. In this model, the three forms of teacher efficacy contributed significantly to increased teacher commitment, demonstrating the importance of teacher efficacy beliefs in building teacher commitment and suggesting that these forms of teacher efficacy may be essential in building persistent attention to the school’s mission. Other work has shown that bonuses paid directly to teachers do not affect student achievement and often have unintended and undesirable consequences (Sawchuk, 2010). Finally, our last question asked whether the impact of teacher efficacy beliefs on teacher commitment was influenced by the consequences structure of NCLB (our Model 7). Two consequences impacted teacher commitment. A reduction in resources was associated with a direct increase in teacher commitment as was the 51
requirement that a school choice program be offered. Reduction in resources also brought an increase in teacher commitment through its influence on teacher classroom management efficacy. The requirement that supplemental educational resources be provided, as a predictor of teacher efficacy to enlist administrator direction, reduced teacher commitment. In practice, this suggests that the availability of extra supplies that teachers might request from a principal are not necessarily required to sustain teacher commitment. It is likely that teachers will make do with what they have and they do not necessarily believe they have to be able to ask their principal for ‘everything.’ Consistent with Model 6, the three forms of teacher efficacy significantly increased teacher commitment. Two findings are consistent among our models. First, the three aspects of teacher efficacy were associated with higher levels of teacher commitment. Second, the principals’ belief in their own influence on establishing curriculum and setting performance standards for students also was associated with increases in teacher commitment. Both address forms of autonomy—autonomy for the teachers and autonomy for the principal. For teachers, Firestone and Pennell (1993) have pointed out that, historically, teacher autonomy has been associated with teacher commitment. They also reflect the effective school correlates of strong instructional leadership, commitment to high expectations for all students, and frequent monitoring of student progress – in action. Implications for Practice and Further Research The findings of the present study revealed that increases in teacher commitment are associated with certain teacher efficacy beliefs. An important practical implication of one of these findings is that teacher commitment can be enhanced when teachers feel they can seek direction from their principal—when they can approach their principal on, say, a sensitive issue and get informed guidance—not necessarily things, but direction—from the principal. A further implication is that if teachers believe, as a group, that they can influence decisions at their school, they are likely to bring increased commitment to their work. Among the ways in which this might be manifested in a school would be in those occasions when a principal seeks and uses teacher input on a decision likely to affect multiple members of a staff. From the principal’s standpoint, giving teachers increased opportunities to take on instructional leadership roles in the school is likely to increase their commitment. Additionally, when teachers believe they have control over practice in their classroom, their commitment increases. The variables in this study did not facilitate further interpretation of “practice in their classroom.” We do not know whether this means using certain instructional approaches or something as simple as seating arrangements. What the finding does suggest is that commitment may be enhanced by asking the teacher WHY they are using a specific practice, listening to their responses, and reviewing the relevant data on how it 52
affects student achievement before suggesting that practices or procedures be changed. For principals, it is important that they exert influence at their school on establishing the curriculum and setting standards for student performance. Coupled with maintaining teacher efficacy in the three areas noted, this action has the potential for sustaining teacher commitment even in the face of failing to meet performance standards. Principals who are strong in curriculum knowledge and well-informed about how to help teachers modify their instruction and supports to students so they reach high performance standards will serve the school well in terms of building staff commitment. But the principal also should be willing to respond to teachers in the three areas where teachers value their efficacy. Instructional leadership in effective schools involves both principals and teachers. While a failure to meet student performance standards was associated with lower teacher commitment, this can be ameliorated with principal leadership and the presence of the three forms of teacher efficacy, suggesting that clear and high student performance goals associated with NCLB play an important role in sustaining teacher commitment. Since evidence indicates that focus on student achievement plays an important role in building teacher commitment, educational leaders can view the emphasis on students’ academic growth as a teacher motivator. With regard to consequences for failing to meet performance standards, reducing resources and requiring a school choice option can bring increased teacher commitment. On the other hand, providing cash bonuses and additional resources directly to teachers and requiring supplemental educational resources likely will reduce teacher commitment. Leaders should be sensitive to these distinctions in fashioning consequences for failing to meet student performance goals. For example, when a school fails to meet performance standards, giving resources directly to the teacher or requiring provision of supplemental educational services may actually reduce teacher commitment while providing additional resources that support school wide activities may enhance it. As a simple example, teacher commitment may be strengthened by providing books to an elementary school on a school-wide basis for content area reading rather than giving funds to a specific teacher. Or, providing funding for evenings when parents might be instructed in how to use content area books with their children could also strengthen teacher commitment. Providing resources to the whole school reinforces the notion that fulfilling a school’s mission is everyone’s obligation. Offering students and families the choice of attending other schools in the district may actually increase teacher commitment. This is not likely to be an activity engaged in by the teacher at the school level, but rather an alternative available at the district level. What this analysis cannot determine is whether this variable’s impact on teacher commitment is attributable to the narrowing of the focus of a school’s mission by students moving to schools 53
deemed more appropriate by their families or to the teachers’ perceptions that they have the option of teaching in a setting they deem more appropriate for them. A major limitation of this study is its examination of data taken after only the first year of NCLB. Part of its value, however, lies with the contrast it draws with similar analyses conducted with the 1999-2000 SASS taken prior to implementation of NCLB. In Ware and Kitsantas (2007), meeting or failing to meet vague, undefined performance goals had no impact on teacher commitment. The principals’ efficacy beliefs relative to curriculum standards and setting student performance goals in the absence of concrete, public accountability measures had no significant impact on teacher commitment (Ware & Kitsantas, 2011). This changed with the introduction of specific and public student performance expectations. It will be important to examine these relationships in subsequent SASS and to determine the impact of consequences for failing to meet AYP or any high student performance goals on teacher commitment and its relationship to the forms of teacher and principal efficacy examined here. Additionally, all levels of schooling – elementary, middle, and high school – have been addressed in this analysis. Analyses for the separate schooling levels could produce different results for each level. Still, this analysis does show that teacher commitment, in the era of NCLB, is enhanced by applying important aspects of an effective school: strong instructional leadership in setting performance and curriculum standards by the principal, a collective belief by teachers that they can influence what happens in their school, teachers’ beliefs that they can enlist their principal’s support in achieving those standards, and frequent monitoring of student progress so all can see whether their decisions are affecting student outcomes in positive ways. Acknowledgements We would like to recognize the reviewers and editor for their invaluable comments during the review process. Our work has benefitted substantially from their comments. References Ayers, J., & Brown, C. (2011). A way forward: A progressive vision for reauthorizing the elementary and secondary education act. Retrieved from: http://www.americanprogress.org/wpcontent/uploads/issues/2011/06/pdf/a_way_forward.pdf Bandura, A. (1993). Perceived self-efficacy in cognitive development and functioning. Educational Psychologist, 28(2), 117. doi: 10.1207/s15326985ep2802_3 Bogler, R., & Somech, A. (2004). Influence of teacher empowerment on teachers’
organizational commitment, professional commitment and organizational citizenship behavior in schools. Teaching and Teacher Education, 20, 277-289. doi: 10.1016/j.tate.2004.02.003
Coladarci, T. (1992). Teachers' sense of efficacy and commitment to teaching. Journal of Experimental Education, 60, 323-337. Retrieved from http://www.jstor.org/stable/20152340
Comrey, A. L., & Lee, H. B. (1992). A first course in factor analysis (2nd ed.). Hillsdale, NJ: Erlbaum.
Dole, J. A. (2004). The changing role of the reading specialist in school reform. The Reading Teacher, 57, 462-471.
Ebmeier, H. (2003). How supervision influences teacher efficacy and commitment: An investigation of a path model. Journal of Curriculum and Supervision, 18(2), 110-141. Retrieved from http://web.ebscohost.com.mutex.gmu.edu
Edmonds, R. (1979). Effective schools for the urban poor. Educational Leadership, 37, 15-27. Retrieved from http://web.ebscohost.com.mutex.gmu.edu/ehost/pdfviewer/pdfviewer?vid=3&hid=105&sid=ed53e1f3-3afe-41ac-b387-e43d3ef0f460%40sessionmgr115
Firestone, W. A., & Pennell, J. R. (1993). Teacher commitment, working conditions, and differential incentive policies. Review of Educational Research, 63(4), 489-525. doi: 10.2307/1170498
Goddard, R. D. (2001). Collective efficacy: A neglected construct in the study of schools and student achievement. Journal of Educational Psychology, 93(3), 467-476. doi: 10.1037/0022-0663.93.3.467
Goddard, R. D., Hoy, W. K., & Woolfolk Hoy, A. (2004). Collective efficacy beliefs: Theoretical developments, empirical evidence, and future directions. Educational Researcher, 33(3), 3-13. doi: 10.3102/0013189X033003003
Goddard, R. D., LoGerfo, L., & Hoy, W. K. (2004). High school accountability: The role of perceived collective efficacy. Educational Policy, 18(3), 403-425. doi: 10.1177/0895904804265066
Goddard, R. D., & Skrla, L. (2006). The influence of school social composition on teachers' collective efficacy beliefs. Educational Administration Quarterly, 42(2), 216-235. doi: 10.1177/0013161X05285984
Ingersoll, R. M. (2001). Teacher turnover and teacher shortages: An organizational analysis. American Educational Research Journal, 38(3), 499-534. doi: 10.3102/00028312038003499
Journal for Effective Schools (2011). Correlates of effective schools. Author. Retrieved from http://www.effectiveschoolsjournal.org/
Kushman, J. W. (1992). The organizational dynamics of teacher workplace commitment: A study of urban elementary and middle schools. Educational Administration Quarterly, 28(1), 5-42. doi: 10.1177/0013161X92028001002
McCullers, J. F., & Bozeman, W. (2010). Principal self-efficacy: The effects of No Child Left Behind and Florida school grades. NASSP Bulletin, 94(1), 53-74.
Marks, H. M., & Louis, K. S. (1997). Does teacher empowerment affect the classroom? The implications of teacher empowerment for instructional practice and student academic performance. Educational Evaluation and Policy Analysis, 19(3), 245-275. Retrieved from http://www.jstor.org/stable/1164465
Mowday, R. T., Steers, R. M., & Porter, L. W. (1979). The measurement of organizational commitment. Journal of Vocational Behavior, 14, 224-247. Retrieved from http://www.sciencedirect.com.mutex.gmu.edu/
Muskal, M. (2012, February 9). No Child Left Behind: Obama administration grants 10 waivers. Los Angeles Times. Retrieved from http://latimesblogs.latimes.com/nationnow/2012/02/obamaadministration-waiver-no-child-left-behind.html
National School Boards Association (NSBA). (2012). Reauthorization of the Elementary and Secondary Education Act (ESEA). Retrieved from http://www.nsba.org/Advocacy/Key-Issues/NCLB/NSBA-Issue-BriefReauthorization-of-the-Elementary-and-Secondary-Education-Act-ESEA.pdf
Paine, J. B., & Organ, D. W. (2000). The cultural matrix of organizational citizenship behavior: Some preliminary conceptual and empirical observations. Human Resource Management Review, 10(1), 45-59. Retrieved from http://www.sciencedirect.com.mutex.gmu.edu
Raudenbush, S. W., & Bryk, A. S. (2002). Hierarchical linear models: Applications and data analysis methods (2nd ed.). Newbury Park, CA: Sage.
Riehl, C., & Sipple, J. W. (1996). Making the most of time and talent: Secondary school organizational climates, teaching task environments, and teacher commitment. American Educational Research Journal, 33(4), 873-901. doi: 10.2307/1163419
Sawchuk, S. (2010, September 29). Study casts cold water on bonus pay. Education Week, 30(5). Retrieved from http://ehis.ebscohost.com.mutex.gmu.edu/ehost/detail?sid=1927169e-6169487d-99c3df731da8c9a3%40sessionmgr12&vid=26&hid=8&bdata=JnNpdGU9ZWhvc3QtbGl2ZQ%3d%3d#db=a9h&AN=54467320
Singh, K., & Billingsley, B. S. (1998). Professional support and its effects on teacher commitment. The Journal of Educational Research, 91(4), 229-239. doi: 10.1080/00220679809597548
Somech, A., & Bogler, R. (2002). Antecedents and consequences of teacher organizational and professional commitment. Educational Administration Quarterly, 38(4), 555-577. doi: 10.1177/001316102237672
Stringfield, S. (1991). Introduction to the special issue on Chapter I policy and evaluation. Educational Evaluation and Policy Analysis, 13, 325-237.
Tschannen-Moran, M., & Gareis, C. (2004). Principals' sense of efficacy: Assessing a promising construct. Journal of Educational Administration, 42(5), 573-585. doi: 10.1108/09578230410554070
Tschannen-Moran, M., & Woolfolk-Hoy, A. (2001). Teacher efficacy: Capturing an elusive construct. Teaching and Teacher Education, 17, 783-805. doi: 10.1016/S0742-051X(01)00036-1
U.S. Department of Education. (n.d.). Stronger accountability: Questions and answers on No Child Left Behind. Downloaded March 30, 2010 from http://www2.ed.gov/nclb/accountability/schools/accountability.html#5
U.S. Department of Education. (2004, February 10). NCLB Executive Summary. Retrieved from http://ed.gov/nclb/overview/intro/execsumm.html
U.S. Department of Education. (2010). A Blueprint for Reform: The reauthorization of the Elementary and Secondary Education Act. Retrieved from http://www2.ed.gov/policy/elsec/leg/blueprint/blueprint.pdf
Ware, H. W., & Kitsantas, A. (2007). Teacher and collective efficacy beliefs as predictors of professional commitment. The Journal of Educational Research, 100(5), 303-310.
Ware, H. W., & Kitsantas, A. (2011). Predicting teacher commitment using principal and teacher efficacy variables: An HLM approach. The Journal of Educational Research, 104(3), 183-193.
Table 1. Means, Standard Deviations, and Loadings for the Teacher Efficacy Scales

Teacher efficacy to enlist administrative direction(a): M = 3.31, SD = .65
330. The principal lets the staff know what is expected of them. M = 3.46, SD = .73, factor loading = .81
331. The administration's behavior toward the staff is supportive and encouraging. M = 3.34, SD = .84, factor loading = .79
337. My principal enforces school rules for student conduct and backs me up when I need it. M = 3.39, SD = .80, factor loading = .78
340. The principal knows what kind of school he or she wants and has communicated it to the staff. M = 3.37, SD = .80, factor loading = .83
342. In this school, staff members are recognized for a job well done. M = 2.98, SD = .87, factor loading = .71

Collective efficacy – Teachers' influence on decision making(b): M = 2.18, SD = .61
312. Establishing curriculum. M = 2.82, SD = .94, factor loading = .54
313. Determining the content of in-service professional development programs. M = 2.47, SD = .90, factor loading = .62
314. Evaluating teachers. M = 1.70, SD = .81, factor loading = .71
315. Hiring full-time teachers. M = 1.84, SD = .90, factor loading = .72
316. Setting discipline policy. M = 2.39, SD = .93, factor loading = .67
317. Deciding how the school budget will be spent. M = 1.83, SD = .85, factor loading = .68

Teacher efficacy for classroom management(c): M = 3.57, SD = .46
319. Selecting content, topics, and skills. M = 3.13, SD = .94, factor loading = .62
320. Selecting teaching techniques. M = 3.17, SD = .57, factor loading = .78
321. Evaluating and grading students. M = 3.74, SD = .53, factor loading = .78
322. Disciplining students. M = 3.51, SD = .67, factor loading = .60
323. Determining the amount of homework. M = 3.75, SD = .55, factor loading = .73

Note. n = 35,910. Numbers beside items refer to the variable names in the Schools and Staffing Survey 2003-04 teacher dataset. (a) Response choices for these items were reversed for consistency with the scale and ranged from 1 (strongly disagree) to 4 (strongly agree). (b) Response choices for these items ranged from 1 (no influence) to 4 (a great deal of influence). (c) Response choices for these items ranged from 1 (no control) to 4 (a great deal of control).
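The factor loadings in Table 1 describe how strongly each item relates to its scale's common factor. As a rough, hypothetical illustration of how such loadings are estimated (the item responses below are invented, and the article does not state its extraction method, so this output will not match the table):

import numpy as np
from sklearn.decomposition import FactorAnalysis

# Invented 1-4 Likert-style responses driven by a single shared factor.
rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 1))                # shared "efficacy" factor
noise = rng.normal(scale=0.6, size=(500, 5))      # item-specific error
item_responses = np.clip(np.round(2.5 + latent + noise), 1, 4)

fa = FactorAnalysis(n_components=1).fit(item_responses)
# Scale the raw loadings by each item's standard deviation so they are roughly
# comparable to the standardized loadings reported in Table 1.
standardized_loadings = fa.components_[0] / item_responses.std(axis=0)
print(np.round(standardized_loadings, 2))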
Table 2. Means, Standard Deviations, and Loadings for Items on Commitment to Teaching Scale

Commitment to teaching: M = 3.54, SD = .63
349. I sometimes feel it is a waste of time to try to do my best as a teacher.(a) M = 1.57, SD = .87, factor loading = .64
350. I am generally satisfied with being a teacher at this school.(b) M = 1.49, SD = .71, factor loading = .67
382. If you could go back to your college days and start over again, would you become a teacher?(c) M = 2.10, SD = 1.18, factor loading = .75
383. How long do you plan to remain in teaching?(d) M = 1.69, SD = .78, factor loading = .71

Note. n = 35,910. Numbers beside items refer to the variable names in the Schools and Staffing Survey 2003-04 teacher dataset. (a) Response choices for this item ranged from 1 (strongly agree) to 4 (strongly disagree). (b) Response choices for this item were reversed for consistency with the scale and ranged from 1 (strongly disagree) to 4 (strongly agree). (c) Response choices for this item were reversed for consistency with the scale and ranged from 1 (certainly would not become a teacher) to 5 (certainly would become a teacher). (d) Response choices for this item were reversed for consistency with the scale and ranged from 1 (definitely plan to leave teaching as soon as I can) to 4 (as long as I am able).

Table 3. Correlations among Commitment to Teaching and Teacher Efficacy Scales

1. Commitment to Teaching: --
2. Teacher Efficacy to Enlist Administrative Direction: .36* (with 1)
3. Collective Efficacy—Teachers' Influence on Decision Making: .27* (with 1), .38* (with 2)
4. Teacher Efficacy for Classroom Management: .19* (with 1), .20* (with 2), .27* (with 3)

Note. n = 35,910. *Correlation is significant at .01, two-tailed.
Table 4. Means, Standard Deviations, and Loadings for the Principal Efficacy Scales

Curriculum and standards influence: M = 3.39, SD = .62
062. Setting performance standards for students (principal). M = 3.37, SD = .76, factor loading = .83
063. Setting performance standards for students (teachers). M = 3.36, SD = .79, factor loading = .85
069. Establishing curriculum (principal). M = 3.40, SD = .72, factor loading = .84
070. Establishing curriculum (teachers). M = 3.44, SD = .73, factor loading = .81

Policy influence
098. Setting discipline policy (principal). M = 3.85, SD = .41

Spending influence
105. Influence on spending (principal). M = 3.56, SD = .67

Note. n = 7,900. Numbers beside items refer to the variable names in the Schools and Staffing Survey 2003-04 principal dataset. Response choices for these items were reversed for consistency with the scale and ranged from 1 (no influence) to 4 (major influence).
Table 5. Correlations among Principal Efficacy Scales

1. Curriculum and standards influence: --
2. Policy influence: .26* (with 1)
3. Spending influence: .18* (with 1), .20* (with 2)

Note. n = 7,900. *Correlation is significant at .01, two-tailed.
Table 6. Means and Standard Deviations for Reward and Punishment Items(a)

Items related to rewards
167. Receive cash bonuses or additional resources that support schoolwide activities? (R1) M = .11, SD = .32
168. Receive cash bonuses or additional resources to distribute to teachers? (R2) M = .11, SD = .31
169. Receive non-monetary forms of recognition? (R3) M = .30, SD = .46

Items related to punishments
170. Required to write or modify a school or program improvement plan? (X1) M = .50, SD = .50
171. Put on an evaluation cycle with required improvement by specific dates? (X2) M = .31, SD = .46
172. Provided with additional resources to support instructional improvement? (X3) M = .33, SD = .47
173. Penalized by a reduction in resources? (X4) M = .04, SD = .19
174. Required to replace the principal with a new principal, an administrative director, or a manager? (X5) M = .02, SD = .13
175. Subject to reconstitution or takeover regulations? (X6) M = .04, SD = .19
176. Required to provide supplemental educational services (e.g., extra classes or tutoring by an outside provider) to students at no cost to themselves or their families? (X7) M = .17, SD = .37
177. Required to provide a school "choice" program in which students can attend other schools within the district, schools in other districts, or private schools at no tuition cost to themselves or their families? (X8) M = .14, SD = .35

Note. n = 3,390 for items related to rewards; n = 3,360 for items related to punishments. Numbers beside items refer to the variable names in the Schools and Staffing Survey 2003-04 principal dataset. (a) Since all of the items in this table are dummy coded, the mean is simply the proportion of respondents who answered "yes" to the question. Response choices for these items were 1 (yes) and 0 (no).
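Because every item in Table 6 is dummy coded, each mean is just the proportion of principals answering "yes," and the standard deviation is determined (up to rounding) by that proportion. A quick check using values from the table; the formula is the usual one for a 0/1 variable, and small discrepancies reflect rounding of the reported proportions:

import math

# SD of a 0/1 variable with proportion p of "yes" responses is sqrt(p * (1 - p)).
for label, p in [("R3 non-monetary recognition", 0.30),
                 ("X1 improvement plan required", 0.50),
                 ("X7 supplemental services required", 0.17)]:
    print(f"{label}: mean = {p:.2f}, implied SD = {math.sqrt(p * (1 - p)):.2f}")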
Table 7. HLM Coefficient Estimates for Models 1-5 Predicting Commitment to Teaching from Measures of Teacher Efficacy, Measures of Principal Efficacy, and Group Membership

[In Models 1-5, each Level 1 coefficient (the intercept and the slopes for administrative direction efficacy, T1; collective efficacy, T2; and classroom management efficacy, T3) is modeled at Level 2 from a Level 2 intercept, curriculum and standards influence (P1), policy influence (P2), spending influence (P3), and the Group 1-3 dummy variables (G1-G3) defined in Figure 1.]

* p < .05; ** p < .01; *** p < .001
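For readers less familiar with hierarchical linear modeling, the rows of Tables 7 through 9 follow a standard two-level specification of the kind described by Raudenbush and Bryk (2002), with teachers nested within schools (principals). The notation below is reconstructed from the row labels and Table 10 rather than quoted from the article, so treat it as an interpretive sketch:

Level 1 (teacher i in school j):
  Y_{ij} = \beta_{0j} + \beta_{1j} T1_{ij} + \beta_{2j} T2_{ij} + \beta_{3j} T3_{ij} + r_{ij}

Level 2 (school j), intercept:
  \beta_{0j} = \gamma_{00} + \gamma_{01} P1_j + \gamma_{02} P2_j + \gamma_{03} P3_j + \sum_k \gamma_{0k} W_{kj} + u_{0j}

Level 2, each slope (p = 1, 2, 3):
  \beta_{pj} = \gamma_{p0} + \gamma_{p1} P1_j + \gamma_{p2} P2_j + \gamma_{p3} P3_j + \sum_k \gamma_{pk} W_{kj}

Here Y is commitment to teaching; T1-T3 are the teacher efficacy measures; P1-P3 are the principal efficacy measures; and W_{kj} stands for the group dummies (Table 7), the reward variables (Table 8), or the punishment variables (Table 9). Only the intercept carries a school-level random effect (u_{0j}) in this sketch, which is consistent with Table 10 reporting a single principal-level variance component alongside the teacher-level residual.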
Table 8. HLM Coefficient Estimates for Model 6 Predicting Commitment to Teaching from Measures of Teacher Efficacy, Measures of Principal Efficacy, and Reward Variables

[In Model 6, each Level 1 coefficient (the intercept and the slopes for administrative direction efficacy, T1; collective efficacy, T2; and classroom management efficacy, T3) is modeled at Level 2 from a Level 2 intercept, curriculum and standards influence (P1), policy influence (P2), spending influence (P3), and the reward variables from Table 6: cash bonus or additional resources for schoolwide activities (R1), cash bonus or additional resources to distribute to teachers (R2), and non-monetary forms of recognition (R3).]

* p < .05; ** p < .01; *** p < .001
Table 9. HLM Coefficient Estimates for Model 7 Predicting Commitment to Teaching from Measures of Teacher Efficacy, Measures of Principal Efficacy, and Punishment Variables

[In Model 7, each Level 1 coefficient (the intercept and the slopes for administrative direction efficacy, T1; collective efficacy, T2; and classroom management efficacy, T3) is modeled at Level 2 from a Level 2 intercept, curriculum and standards influence (P1), policy influence (P2), spending influence (P3), and the punishment variables described in Table 6 (X1-X8).]

* p < .05; ** p < .01; *** p < .001
Table 10. Estimates of Variance Components for HLM Models

Model 1: principal variance (U0) = .0945***, teacher variance (R) = .8475
Model 2: principal variance (U0) = .0935***, teacher variance (R) = .8478, proportion of variance explained = .001
Model 3: principal variance (U0) = .0761***, teacher variance (R) = .7123, proportion of variance explained = .144
Model 4: principal variance (U0) = .0756***, teacher variance (R) = .7122, proportion of variance explained = .164
Model 5: principal variance (U0) = .0751***, teacher variance (R) = .7121, proportion of variance explained = .164
Model 6: principal variance (U0) = .0661***, teacher variance (R) = .6916, proportion of variance explained = .170
Model 7: principal variance (U0) = .0799***, teacher variance (R) = .7454, proportion of variance explained = .124

* p < .05; ** p < .01; *** p < .001
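The "proportion of variance explained" entries in Table 10 are consistent with the usual variance-explained statistic for multilevel models (Raudenbush & Bryk, 2002), computed against the Model 1 baseline. The illustrative check below reproduces the reported values for Models 2, 4, 5, and 7; Models 3 and 6 do not reproduce exactly from these rounded components (and Models 6-7 are estimated on principal subgroups), so the authors' exact baselines may differ. This is not the authors' own calculation:

# Proportion of total variance explained relative to the Model 1 components in Table 10.
baseline = 0.0945 + 0.8475   # principal + teacher variance, Model 1

components = {2: (0.0935, 0.8478), 4: (0.0756, 0.7122),
              5: (0.0751, 0.7121), 7: (0.0799, 0.7454)}
for model, (principal_var, teacher_var) in components.items():
    explained = (baseline - (principal_var + teacher_var)) / baseline
    print(f"Model {model}: {explained:.3f}")   # 0.001, 0.164, 0.164, 0.124 as reported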
Criteria for school principal group membership

Has either your district or state established school performance standards?
  No  -> Group 1 member (n = 550)
  Yes -> Was this school evaluated on district or state performance standards in the 2002-03 school year?
    No  -> Group 2 member (n = 600)
    Yes -> Did the school meet all performance standards?
      No  -> Group 3 member (n = 3,360)
      Yes -> Group 4 member (n = 3,390)

Figure 1. The school principal group membership decision process.
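The grouping in Figure 1 is a three-question decision rule. A small sketch of that logic follows; the function and argument names are invented for illustration, but the rule itself mirrors the figure:

def classify_principal(standards_established: bool,
                       evaluated_in_2002_03: bool,
                       met_all_standards: bool) -> int:
    """Assign a principal to Groups 1-4 following the Figure 1 decision process."""
    if not standards_established:    # no district or state performance standards
        return 1
    if not evaluated_in_2002_03:     # standards exist, but the school was not evaluated
        return 2
    if not met_all_standards:        # evaluated, but did not meet all standards
        return 3
    return 4                         # evaluated and met all performance standards

print(classify_principal(True, True, False))   # -> Group 3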
About the author: Herb Ware is Professor of Educational Leadership Emeritus at George Mason University. Please address correspondence to hwware@aol.com.
School District Budget Development: A Shift to Link Purse to Performance

by Scott Burchbuckler, Ph.D.
Superintendent, Essex County Public Schools
Tappahannock, Virginia

Abstract

Where education dollars are spent matters because it defines what programs and services are offered. These decisions are articulated in a budget. Budgeting decisions take on added significance in this era of increased high-stakes public accountability for all students' achievement. Most notably, this movement is associated with the 2002 reauthorization of the Elementary and Secondary Education Act, the No Child Left Behind Act (NCLB), used in this research as a proxy for high-stakes testing. Public accountability will certainly continue even after any revisions to NCLB take place. In this era, it will be necessary to ensure stronger links between funding (inputs) and proven instructional programs that result in increased student performance (outputs). Indeed, without changing the way funds are allocated, school divisions can expect similar results, as resources dictate what can and cannot be done.

Budgeting merges resources to achieve desired results. The budget and the process of creating it are arguably among school districts' most important tools in ensuring that their students achieve at high academic levels. A budget is more than a financial picture; it embodies the school district's organizational and educational plan. It represents the tangible articulation of how school districts aim to fulfill their mission and achieve their Effective Schools correlate goals by delineating proposals for specific programs, staffing, and activities. A well-planned budget signals to the public that the schools are worthy of significant investment.

The purpose of this study is to examine school district budgeting processes in light of NCLB. More specifically, this study examines the current state of budget practices at the school district level and explores whether these processes have become more performance-based since the inception of NCLB. In addition, the study assesses how performance-based budgeting correlates with differences in student achievement. The study addressed the following research questions:

1. What budget decision-making criteria do the school districts use? Which criteria are the most important?
2. What budgeting methods do the school districts use? Are current school district budgeting practices moving away from traditional methods (which tend to make incremental changes to existing budget allocations) and towards performance-based budgeting systems?

3. Does a correlation exist between the school district's use of performance-based budgeting and the level of student achievement?

Key Words: performance-based budgeting, No Child Left Behind, student achievement

Type of Article: quantitative and qualitative study with implications for practitioners

Does money matter? Revenues for California's K-12 schools are down 10% compared with 2007-2008 (Edwards, 2011). A report sponsored by the Pennsylvania State Board of Education noted that the state underfunds its schools by $4.6 billion a year (Dean, 2007). However, considerable debate exists over whether the amount of funds actually makes a difference in overall student achievement. Various studies have addressed whether funding makes a difference in student learning outcomes (Jefferson, 2005; Archibald, 2006; Ilon & Normore, 2006; Okpala, 2002; Odden, Goetz, & Picus, 2007; Willis, Durante, & Gazzerro, 2007). In her study of a Nevada school district's categorical expenditures, Archibald (2006) found a significantly positive relationship between per-pupil spending and reading achievement (but not in mathematics). In a meta-analysis of the research on the topic, Hedges, Laine, and Greenwald (1994) determined that numerous studies reported a positive correlation between increased resources and higher student achievement. In contrast, a number of studies report little or no significant impact of the level of resources on student achievement (e.g., Hanushek, 1989; Okpala, 2002). In referencing the amount of education spending, Odden and Picus (2007) suggested: "[T]oday, the nation's investment in K-12 education is almost enough to adequately fund an educational program that can double student performance…" (p. 40).

Despite the lack of universal agreement as to whether money, by itself, makes a difference in student achievement, most researchers agree that the purposes toward which schools spend their money impact student learning (e.g., Jefferson, 2005; Odden, Borman, & Fermanich, 2004; Odden et al., 2007; Willis et al., 2007; Baker et al., 2010). As suggested in a Standard and Poor's report on Pennsylvania school finances, how much a school spends is less important than how it is spent (Gehring, 2002). Therefore, exploring how resources are allocated is critically important, because those allocations shape educational programs. As such, it is imperative that school
districts spend funds so as to get the highest educational return on investment, especially in light of high public accountability for student achievement.

Budget Decision Criteria

Without the reasonable expectation of receiving significant additional resources to fund instructional improvements aimed at increasing student achievement, it is important for school districts to critically evaluate the criteria and methods they have traditionally used in making budget allocation decisions. Prior to No Child Left Behind, Smotas (1996) conducted a study to determine the major decision-making criteria of school business officials. Study participants were asked to indicate the relative importance of 15 separate criteria in making budget decisions. Their top five selected criteria were: collective bargaining contract provisions, state and federal laws and regulations, number of students affected, governing board fiscal policies, and nonstudent expenditures. None of the top criteria focused on improving instruction or on other Effective Schools correlates related to student learning and achievement.

Budget Approaches (Methods) to Allocating Funds

The selection of a budgeting method is one of the most important choices school districts make regarding budgeting (Kehoe, 1986). Traditionally, public sector budgeting has used line-item, incremental budgeting. In regard to school district budgeting, Owings and Kaplan (2006) stated that "adding on to the previous year's funding level is the most common budgeting method" (p. 308). Mundt, Olsen, and Steinberg (1982) described line-item budgeting as an approach "in which line items, or objects of expenditure (personnel, supplies, contractual services, and capital outlays) are the focus of analysis, authorization and control" (p. 36). As Hartman (1988) pointed out, the focus of this type of review is what is purchased rather than the purpose for which the public expenditure is made. Line-item budgets provide details about spending but do not link these expenditures to results or show how they support the district's goals (Wagner & Sniderman, 1984).

Part of a line-item budget approach is to make incremental percentage increases to existing line-item amounts to form the next year's budget. By definition, incremental budgeting results in limited changes from year to year, as allocations within the budget's "base" are not necessarily reviewed. Rather, the review focuses on the changes (usually relatively minor) in monetary amounts, rather than district priorities, from the prior year's budget. As a result, allocating resources (including instructionally related decisions pertaining to class size or professional development) does not deliberately target essential areas related to student learning but rather reflects prior budget priorities (North Central Regional
Education Lab et al., 2000). Wildavsky (2001) remarked, "The line item budget is a product of history, not of logic" (p. 139).

As high-stakes public accountability calls for holding school districts, schools, and teachers accountable for specific metrics of student learning, a budgeting approach focused solely on what is purchased (the inputs) is at odds with the legislation's intent. Likewise, as NCLB calls for specific performance results, it seems counterintuitive that school divisions would continue to employ an incremental approach which never reviews district priorities as a whole (Wildavsky, 2001; Davis, Dempster, & Wildavsky, 1966).

A growing body of research is devoted to budget methods that explicitly attempt to tie funding decisions (inputs) to specific performance outcomes (outputs). These methods go by a number of names: Performance-Based Budgeting, Results-Based Budgeting (Friedman & Finance Project, 1996), Outcomes-Based Budgeting, and Performance-Driven Budgeting (Siegel & ERIC Development Team, 2003). Burke and State University of New York (1997) indicated that performance-based budgeting "represents a dramatic shift in traditional budget practice" (p. 1). For this research study, the indicators of performance-based budgeting include:

- Strategic plans and related goals and priorities are formalized and utilized within the school district
- The budget process is open and transparent and includes stakeholder involvement
- The budget process includes consideration of alternative service delivery methods
- Performance goals are established and resources are linked to those goals
- Budget decisions are data informed, including developing and reporting performance indicators (that are in line with the district's strategic goals)
- The process encourages active "program" evaluation (and links these evaluations to budget discussions)
- The budget process results in a reallocation (reprogramming) of funds (shifting resources to more effective activities)
- The district actively seeks to link resources (inputs) to specific results (outputs and/or outcomes)

Research Design

In order to address the research questions, the case study's design is both descriptive (identifying current school district budget practices) and correlational (analyzing the relationship between performance-based budgeting and student achievement). Mixed (quantitative and qualitative) methods were used to address the research questions. Quantitative methods include budget practice surveys and
presentation of descriptive and bivariate statistical analyses. Qualitative methods include the use of open-ended survey questions and in-depth interviews.

The target of this case study is school district business officials from Hampton Roads, Virginia (Hampton Roads communities as defined by the official tourism site of Hampton Roads, Virginia, http://www.visithamptonroads.com/). The selection of school district finance officers is appropriate because these officials are the most familiar with the budget development process, adding credibility to the findings. Of the 15 respondents (representing a 100% participation rate), 6 were male and 9 female. Participants' ages ranged from 31 to 65 years. Of the participants who provided their ages, one was in their 30s, 3 in their 40s, 8 in their 50s, and 2 in their 60s. Hampton Roads was purposely selected given its familiarity to the researcher. The school systems range from as few as 1,000 students to as large as 75,000 students and are a mix of urban and rural locales. These school systems all have significant numbers of students on free and reduced-price lunches, the highest over 60%, the lowest less than 14%, with an average of approximately 40%.

Data collection was mixed. In this sequential mixed methods approach, data were collected from multiple sources. In Phase I of the study, the primary data source stems from a four-part budget practices survey with open- and closed-ended questions. This was completed by school business officials of the districts targeted for the study. The four parts of the survey include questions about demographics and budget methods; budget decision-making criteria; a self-assessment of budgeting practices; and open-ended questions about budget practices. The final section of the survey included qualitative open-ended questions concerning practices, methods, and budget decision processes. The survey data collection for the school district budgeting survey was conducted May through August 2008. In Phase II, the qualitative portion, the data source consisted of the budget survey's open-ended qualitative questions and personal interviews of school business officials within Hampton Roads, Virginia.

Validity of the instrument's contents was increased because it is based on a review of the literature (Wotring, 2007). In order to establish content and face validity, field testing of the instrument occurred. For the overall survey instrument, Cronbach's coefficient alpha reliability calculations were performed, resulting in a .975 alpha level. As a result, the survey results are considered reliable. In addition, the qualitative portion of the study allowed the researcher to validate, clarify, and amplify the responses given on the original survey.
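The .975 figure reported above is Cronbach's coefficient alpha for the overall instrument. For readers unfamiliar with the statistic, a minimal sketch of the standard formula follows; the response matrix here is invented, since the study's item-level data are not reproduced in the article:

import numpy as np

def cronbach_alpha(responses: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of ratings."""
    k = responses.shape[1]
    item_variances = responses.var(axis=0, ddof=1)
    total_score_variance = responses.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_score_variance)

# Hypothetical 5-point ratings from 13 respondents on 8 survey items.
rng = np.random.default_rng(1)
base = rng.integers(2, 6, size=(13, 1))
responses = np.clip(base + rng.integers(-1, 2, size=(13, 8)), 1, 5).astype(float)
print(round(cronbach_alpha(responses), 3))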
A defined protocol was developed and sent to each interviewee prior to the interview. This protocol included a confidentiality statement, an explanation of the research effort, and a statement underscoring the fact that participation was voluntary. Transcripts were created and provided as an appendix to the study (with personally or district-revealing information removed) to maintain confidentiality. Interview participants were given the opportunity to review and amend the interview transcripts as necessary.

School district student achievement statistics are used in the study. These include district-level student achievement reports provided by the Virginia Department of Education (VDOE) concerning Virginia Standards of Learning (SOL) test pass rates for the 2006-2007 school year. Student achievement statistics represent the dependent variable used to calculate correlations between levels of student achievement and school districts' use of performance-based budgeting methods. The data enabled the researcher to conduct statistical analyses (including correlation calculations) to ascertain how performance-based budgeting may be correlated with student achievement. Descriptive statistics (e.g., the frequency of budgeting methods) and means comparisons of groups of variables were calculated (e.g., ANOVA tests to compare survey responses before and after NCLB, 2002).

Given the study's limited breadth and non-experimental design, caution is advised when using the data to generalize and make conclusions about school district changes as a result of NCLB. Since the participants include only Hampton Roads, Virginia, school business officials, it is impossible to generalize on a state or national scale. Nor should this study be construed as indicative of all school districts' budget practices and procedures. Also, a major credibility limitation of this study is that all the data were collected, analyzed, and reviewed by one researcher.

Study Findings

What budget decision-making criteria do the school districts use? Which criteria are the most important?

Examining school district budgeting requires exploring the decision criteria used in making budgeting decisions. Similar to prior research (Smotas, 1996), survey participants were asked to indicate the relative importance of 15 separate criteria on a range of 1 to 4 before and after NCLB. The instrument indicated that a "1" on the scale meant that the criterion was "not relevant," a "2" was classified as "somewhat relevant," a "3" indicated that the criterion was "quite relevant," and a "4" meant "very relevant." Unlike Smotas' research, the survey was modified to ask participants to indicate the relevance of each criterion before and after NCLB to assess whether there was a notable difference between responses.
Before NCLB, the participating school business officials ranked employee compensation, governing board fiscal policies, state and federal laws and regulations, number of students affected, and internal-organizational political pressures as the highest budget criteria (see Table 1). However, currently (post-NCLB), school business officials selected state and federal laws and regulations, accreditation standards, employee compensation, number of students affected, and (tied at a mean of 3.5) governing board fiscal policies and program quality and evaluation results as the most relevant to budget decision-making.
Table 1
Top Five Means (Pre- and Post-NCLB) for Budget Decision-Making Criteria

Pre-NCLB (before 2002)
Employee Compensation: M = 3.62
Governing Board Fiscal Policies: M = 3.54
State and Federal Laws and Regulations: M = 3.54
Number of Students Affected: M = 3.46
Internal-Organizational Political Pressures: M = 3.08

Post-NCLB (2008)
State and Federal Laws and Regulations: M = 3.93
Accreditation Standards: M = 3.93
Employee Compensation: M = 3.79
Number of Students Affected: M = 3.64
Governing Board Fiscal Policies and Program Quality and Evaluation Results: M = 3.50
The change in the relative importance of the selected budget decision-making criteria is noteworthy, as one would expect the results to be similar unless a contravening force explains the change. While causality cannot be proved, it may be that NCLB has been such a force for change. This may help explain the increase in
the relative importance of the budget decision-making criterion of state and federal laws. Additionally, the fact that accreditation standards are among the most important budget decision-making criteria is noteworthy given their plausible relationship to NCLB. It is also notable that program quality and evaluation results have increased in importance, as this suggests that school districts appear to be more concerned with effective programming and evaluation results since NCLB.

Changes in the relative importance of many of the criteria have occurred. In comparing means and calculating the mean percentage change, the impact of matching funds, state and federal laws, curricular trends, program quality and evaluation results, and accreditation standards all appear to increase significantly in importance since NCLB, whereas criteria associated with line-item, incremental budgeting (e.g., past-practice and the principle of least opposition) decrease in relative importance (see Table 2). This suggests that historical budget allocations and vested interests in prior programming have become less important as school districts attempt to improve in order to meet the NCLB requirements for adequate yearly progress and 100% proficiency by all students in reading and mathematics by 2014.

Table 2
Means and Standard Deviations of Budget Criteria and Mean Percentage Change (Pre-NCLB to Current)

Budget decision-making criterion: pre-NCLB (before 2002) M, SD; post-NCLB (currently, 2008) M, SD; mean % change

Accreditation Standards: 3.00, 0.82; 3.93, 0.27; 31.0%*
Administrator's Judgment and Intuition: 2.31, 0.63; 2.14, 0.54; -7.4%
Employee Compensation: 3.62, 0.77; 3.79, 0.58; 4.7%
External-Community Political Pressures: 2.38, 0.96; 2.29, 0.99; -3.8%
Governing Board Fiscal Policies: 3.54, 0.66; 3.50, 0.76; -1.1%
Impact of Matching Funds: 2.77, 0.83; 3.07, 0.83; 10.8%*
Internal-Organizational Political Pressures: 3.08, 0.95; 3.07, 0.73; -0.3%
National and Regional Curricular Trends: 2.46, 0.88; 2.71, 0.91; 10.2%
Non-Student Expenditures: 2.85, 0.80; 2.86, 0.77; 0.4%
Number of Students Affected: 3.46, 0.88; 3.64, 0.63; 5.2%
Past-Practice and Institutional Tradition: 2.31, 0.75; 2.00, 0.78; -13.4%*
Principle of Least Opposition: 1.92, 0.86; 1.71, 0.73; -10.9%
Program Quality and Evaluation Results: 2.62, 0.77; 3.50, 0.52; 33.6%*
Staff Recommendations and/or Needs Assessment: 2.54, 0.66; 2.86, 0.77; 12.6%
State and Federal Laws and Regulations: 3.54, 0.78; 3.93, 0.27; 11.0%*

Note. N = 13 pre-NCLB and 14 post-NCLB. *p ≤ .05.
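The "mean % change" column in Table 2 is the relative change from the pre-NCLB mean to the post-NCLB mean. The asterisks come from the paired-sample t tests discussed below, which require the item-level responses and so cannot be recomputed from the table; the percentage calculation itself, however, can be checked directly from the table's values:

# Mean % change = (post-NCLB mean - pre-NCLB mean) / pre-NCLB mean, using Table 2 values.
rows = {"Accreditation Standards": (3.00, 3.93),
        "State and Federal Laws and Regulations": (3.54, 3.93),
        "Past-Practice and Institutional Tradition": (2.31, 2.00)}
for criterion, (pre, post) in rows.items():
    print(f"{criterion}: {(post - pre) / pre:+.1%}")   # +31.0%, +11.0%, -13.4%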
A series of paired-sample t tests was conducted to evaluate statistically whether the participating school business officials rated the budget decision-making criteria differently prior to NCLB than after NCLB. The results, at a 95% confidence level, indicated that the following budget criteria were more relevant after NCLB: impact of matching funds; state and federal laws and regulations; program quality and evaluation results; and accreditation standards. Past-practice was found to be less important after NCLB.

Overall, these findings suggest a change in the relative importance of school districts' budget decision-making criteria. It appears that the participating school districts consider state and federal regulations and laws, program quality and evaluation results, and accreditation results more important after NCLB in making budget decisions, rather than past practice. NCLB suggests that such a change is needed in order to achieve greater student achievement, the metric by which school districts are held publicly accountable.

Interviews confirmed that in 2008 (after NCLB), criteria including accreditation standards, program quality and evaluation results, and federal and state laws are important considerations in budgeting resources. School business officials shared the following reasons for their importance:

They form really the basis of what we do in building our budget....
- School District Business Official #5 Interview Response

Well they force us to really look at, as far as the accreditation standards and AYP; they force us to look at the indicators. We now are paying a lot more attention; actually using data....
- School District Business Official #4 Interview Response

The focus is on accountability, student achievement... It is, I think, part of an evolutionary process which has occurred.
- School District Business Official #12 Interview Response

It appears that school district business officials view budget decisions through a new lens because they must show more concern with how students perform than they did in the past. In so doing, strategic planning appears to be important, as do indicators of success, inclusive of accreditation standards.

What budgeting methods do the school districts use?

Survey respondents were asked to select from four budgeting methods the one that best described how they formed their school district's budget. The data show that
school districts appear to be moving away from traditional line-item, incremental budgeting towards other, more results-oriented methods. Prior to NCLB, over 85% of the school districts used a line-item, incremental approach; currently, the school districts show almost a 50/50 split between traditional line-item methods and non-traditional methods. Significantly, since NCLB, school systems may be less focused on historical practices and more deliberative in their funding allocations. Consequently, better budgeting practices may support improved educational programming.

Are current school district budgeting practices moving away from traditional methods and towards performance-based budgeting systems?

To examine whether school districts are becoming more performance-based in their budgeting since NCLB, survey participants were asked a number of questions about performance-based methods: did school districts use long-term and annual performance measures, performance baselines, and evaluations in their budgeting methods? In addition, these questions asked if the school districts had a budgeting prioritization process, considered alternative service delivery methods, and linked resources to specific outcome goals. Findings suggest that school districts are using more performance-based budgeting systems. Specifically, the school divisions represented are considering more evaluation results and performance data when budgeting resources. In addition, it appears that school districts have increased their efforts to link funding to specific outcomes, which indicates an increase in performance-based budgeting.

Performance-Based Budgeting (PBB) Pre- and Post-NCLB Comparisons

A quantitative analysis was conducted to determine the degree to which school districts utilize performance budgeting. Each question in this section of the survey addressed a major theme: strategic planning, stakeholder involvement, alternative service delivery, performance goals and indicators, data-informed decision making, program evaluation, resource reallocation, and linking funding and results. Participants were asked to rate on a 5-point scale (1 indicating "not at all" and 5 indicating "always") the school district's practices before and after NCLB. Within each theme (category), a composite mean score was calculated before and after NCLB. A comparison of the mean responses appears in Table 5.

Table 5
PBB Practices Mean Scores and Percentage Change
Performance-based budgeting theme: pre-NCLB (before 2002) M, SD, N; post-NCLB (currently, 2008) M, SD, N; mean % change

Strategic planning: 3.61, 1.21, 12; 4.03, 0.88, 12; 12%*
Stakeholder involvement: 2.86, 0.88, 13; 3.70, 0.67, 13; 29%*
Alternative service delivery: 2.83, 0.73, 12; 3.33, 0.64, 12; 18%*
Performance goals and indicators: 2.81, 0.91, 12; 3.60, 0.56, 12; 28%*
Data-informed decision making: 3.20, 1.24, 12; 4.13, 0.78, 12; 29%*
Program evaluation: 3.00, 1.08, 13; 3.65, 0.95, 13; 22%*
Resource reallocation: 2.72, 0.82, 13; 3.34, 0.72, 13; 23%*
Linking funding and results: 2.53, 0.88, 13; 3.25, 0.81, 13; 28%*
Overall: 2.91, 0.77, 13; 3.59, 0.54, 13; 23%*

Note. *p ≤ .05.
From this analysis, with a 23% mean change overall, it appears that school districts are exhibiting more performance-based budgeting practices since NCLB. To determine if the findings were statistically significant, t tests were performed within each category. All paired-sample t tests returned significant findings. These results suggest that since NCLB, school district business officials have made significant changes regarding all facets of performance-based budgeting: strategic planning, stakeholder involvement, alternative service delivery, performance goals and indicators, data-informed decision making, program evaluation, resource reallocation, and linking funding and results. School districts are more deliberately focusing their resources on achieving improved results.

Does a correlation exist between the school district's use of performance-based budgeting and the level of student achievement?

Survey responses were used to create a metric (the mean score across survey responses related to performance-based budgeting) which measures a school district's use of performance-based budgeting. The interest was in determining whether a linear relationship between performance-based budgeting and student achievement levels existed and whether this relationship was positive, indicating that as performance-based budgeting increases, so does predicted student achievement. A linear regression analysis was then conducted to evaluate the predictive value of the performance-based budgeting mean score (across all questions) on overall student achievement (as measured by the average total pass rates for all Virginia SOL 2006-2007 tests). The scatter plot of the two variables, as shown in Figure 3, indicates that the two variables are linearly related: as the performance-based budgeting score increases, so does student achievement.
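The correlation and regression reported below operate on district-level pairs of performance-based budgeting mean scores and SOL average pass rates. Those district values are not published in the article, so the sketch below uses invented data of the same shape. One side note on the two p values quoted below: with roughly 14 districts, an r of .48 corresponds to a one-tailed p of about .04 and a two-tailed p of about .08, which would reconcile the correlation and regression results.

from scipy import stats

# Invented district-level data: PBB overall mean score and SOL average pass rate (%).
pbb_scores = [2.8, 3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.5, 3.6, 3.7, 3.8, 3.9, 4.1, 4.2]
sol_pass_rates = [78, 80, 83, 79, 85, 82, 86, 84, 88, 83, 87, 90, 89, 91]

r, p_two_tailed = stats.pearsonr(pbb_scores, sol_pass_rates)
slope, intercept, r_value, p_value, std_err = stats.linregress(pbb_scores, sol_pass_rates)
print(f"r = {r:.2f}, two-tailed p = {p_two_tailed:.3f}")
print(f"predicted pass rate = {intercept:.1f} + {slope:.1f} * PBB score")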
Figure 3. Scatter plot depicting the relationship between standardized performance-based budgeting scores and residual student achievement scores (x-axis: performance-based budgeting overall average; y-axis: SOL average pass rate, %, for 2006-2007; R-squared linear = 0.232).

The correlation between the use of performance-based budgeting and the
level of student achievement was .48 (p = .04). However, at the 95% confidence level, the regression test revealed that the overall mean score was not significantly related to student achievement (p = .08). Although the relationship cannot be statistically established at a 95% confidence level from this data set, the pattern suggests that student performance may improve with performance-based budgeting. As high-stakes accountability calls for improvement, it is assumed that resources need to be directed towards programs that are proven to result in increased student achievement, regardless of whether or not they have been funded in the past.

The school business officials interviewed generally agree that budgeting does have an impact on educational programming and human resources, as the following comments suggest:

I think the focus of the budget and the process in and of itself is becoming more and more important and more critical and it is absolutely linked in some way or another to student achievement. You cannot deny it.
- School District Business Official #12 Interview Response

When done well, yes. When you look at the specific programs and you say that program impacts kids....
- School District Business Official #7 Interview Response
[I]n my opinion the most important policy document that the school board approves every year is the budget because it is what funds and drives every decision that gets made in this division. It is the embodiment of where the board, where the board perceived where the public desires, is for spending money or resources for students' education... And, the budget is the most important driver of what goes on in this division.
- School District Business Official #5 Interview Response

When one realizes that school district budgets impact educational programs, attract and retain effective teachers, and provide a strategic plan to guide improvement activities, it is logical to conclude that student achievement is impacted by school district budgeting.

Findings Summary

Overall, the findings indicate that the school districts represented have made a significant change in what they consider to be the most relevant criteria in making budget decisions. The findings presented also suggest that budgeting methods and practices have become more performance-based, a change since NCLB began. Though not a statistically significant finding in this study, a positive, linear relationship appears to exist between the increased use of performance-based budgeting practices and increased student achievement.

School business officials indicate that currently (after NCLB) the five most important budget decision-making criteria are:

1. State and federal laws and regulations;
2. Accreditation standards;
3. Employee compensation;
4. Number of students affected; and
5. Governing board fiscal policies and program quality and evaluation results.
The data also indicate that the impact of matching funds and curricular trends have increased in importance with NCLB, whereas criteria associated with line-item, incremental budgeting (e.g., past-practice and the principle of least opposition) decrease in relative importance. Furthermore, the data suggest that school districts are using more performance-based budgeting processes; they are more likely to conduct evaluations and establish performance baselines and targets. Particular emphasis appears on strategic planning, a review of alternative service delivery, the introduction of performance goals and indicators, and program evaluation. Moreover, it appears that school districts are increasing their efforts to enlarge stakeholder involvement,
are more inclined to consider alternative service delivery, are escalating evaluation efforts, and are attempting to link budget allocations to specific outcomes or results. Establishing this link, together with reallocating resources, is an ongoing process.

The data also reveal that performance-based budgeting may have a positive correlation with student achievement. School district officials see a positive relationship in how their budget provides the programs needed to increase student achievement. Awareness of this connection should encourage improved budgeting practices. Likewise, the quantitative data suggest a positive correlation between performance-based budgeting and student achievement. This is an important finding. In addressing the challenges of meeting high accountability standards for every student, it is important that school districts utilize every possible tool in securing increased student achievement. The data suggest that school business officials have changed their perceptions about the relative priority of certain budget decision-making criteria, and school districts are refining their budgeting processes to become more performance-based with the goal of positively influencing student achievement.
Conclusion

Targeting resources towards the most effective strategies, programs, and initiatives is of vital importance to school administrators. In an era of high-stakes public accountability, the stakes are high for school districts as well as for budget professionals. The community is demanding results and is ready to consider other service providers (e.g., charter schools and online offerings) if public schools cannot deliver. Simply put, to do nothing only guarantees that public education will continue to come under greater scrutiny and control. As administrators, we need to think differently than we have in the past and meet our challenges. Effective budgeting promises to help in these efforts. Indeed, a Center on Education Policy (2008) study confirmed that student achievement has increased (and the achievement gap has decreased) since the introduction of NCLB, and that this has resulted from many interconnected policies and programs. A budget process more focused on educational outcomes can help increase student achievement. As Odden and Monk (1995) indicated, the educational system needs to be restructured so that the significant resources which taxpayers have provided
public education pay off in increased student achievement. To do this, school districts will have to ensure that resources are efficiently and effectively used for teaching and learning. School district budgeting should focus on a framework aimed at achieving this goal. This framework includes an emphasis on transparency and stakeholder involvement, data-driven decision making, a focus on teaching and learning, and the reallocation of resources. This performance-based budgeting structure requires hard work, dedication, and perseverance, but it also affords school districts the opportunity to align resources where they will help address issues like the achievement gap between different categories of students and the need to increase overall student achievement.

References

Archibald, S. (2006). Narrowing in on educational resources that do affect student achievement. Peabody Journal of Education, 81(4), 23-42.
Baker, B. D., Sciarra, D. G., & Farrie, D. (2010). Is school funding fair? A national report card. Retrieved June 30, 2011, from http://schoolfundingfairness.org/National_Report_Card.pdf
Burke, J., & State University of New York, A. (1997, January 1). Performance-funding indicators: Concerns, values, and models for two- and four-year colleges and universities. (ERIC Document Reproduction Service No. ED407910) Retrieved June 21, 2007, from ERIC database.
Center on Education Policy. (2008). Has student achievement increased since 2002? State test score trends through 2006-07. Washington, DC: Author.
Davis, O. A., Dempster, M. A. H., & Wildavsky, A. (1966, September). A theory of the budgetary process. The American Political Science Review, 60(3), 529-547.
Dean, M. M. (2007, November 15). Pa. study: State underfunding schools by $4.6B a year. Philadelphia Daily News. Retrieved November 21, 2007, from http://www.philly.com/daileynews/local/20071116_Pa_study_State_underfunding_schools_by_4_6B_a_year.html
Edwards, B. (2011). EdSource report: School finance highlights 2010-2011. Retrieved June 30, 2011, from http://www.edsource.org/pub11-school-financehighlights.html
Friedman, M., & Finance Project, W. (1996, September 1). A strategy map for results-based budgeting: Moving from theory to practice. (ERIC Document Reproduction Service No. ED400102) Retrieved July 2, 2007, from ERIC database.
Gehring, J. (2002, May 15). Standard & Poor's studies school spending in Pennsylvania. Education Week, 21(36), 20.
Hanushek, E. A. (1989). The impact of differential expenditures on school performance. Educational Researcher, 18, 45-51.
Hartman, W. T. (1988). School district budgeting. Englewood Cliffs, NJ: Prentice-Hall.
Hedges, L. V., Laine, R. D., & Greenwald, R. (1994). Does money matter? A meta-analysis of studies of the effects of differential school inputs on student outcomes. Educational Researcher, 23, 5-14.
Ilon, L., & Normore, A. H. (2006). Relative cost-effectiveness of school resources in improving achievement. Journal of Education Finance, 31(3), 238-254.
Jefferson, A. (2005). Student performance: Is more money the answer? Journal of Education Finance, 21(2), 111-124.
Kehoe, E. (1986). Educational budget preparation: Fiscal and political considerations. In Principles of school business management (Chapter 6). Reston, VA: Association of School Business Officials International.
Mundt, B., Olsen, R., & Steinberg, H. (1982). Managing public resources. New York: Peat Marwick International.
North Central Regional Education Lab, Odden, A., & Archibald, A. (2000). A better return on investment: Reallocating resources to improve student achievement [Booklet with audiotapes]. (ERIC Document Reproduction Service No. ED470931) Retrieved June 13, 2007, from ERIC database.
Odden, A., Borman, G., & Fermanich, M. (2004). Assessing teacher, classroom, and school effects, including fiscal effects. Peabody Journal of Education, 79(4).
Odden, A., Goetz, M. E., & Picus, L. O. (2007, March 14). Paying for school adequacy with the national average expenditures per pupil. School Finance Redesign Project: Center on Reinventing Public Education. Available from http://www.schoolfinanceredesign.org/
Odden, A., Monk, D., Nakib, Y., & Picus, L. (1995, October). The story of the education dollar. Phi Delta Kappan, 77(2), 161-168.
Odden, A., & Picus, L. (2007, August 15). School finance adequacy at a crossroads. Education Week, 26(45), 40.
Okpala, C. O. (2002). Educational resources, student demographics and student achievement. Journal of Education Finance, 27(3), 885-908.
Owings, W. A., & Kaplan, L. S. (2006). American public school finance. Belmont, CA: Thomson Wadsworth.
Siegel, D., & ERIC Development Team. (2003, May). Performance-driven budgeting: The example of New York City's schools. ERIC Digest, 1-8. Eugene, OR: ERIC Clearinghouse on Educational Management. Available from www.eric.ed.gov (ERIC Document Reproduction Service No. ED474305)
Smotas, P. (1996). An analysis of budget decision criteria and selected demographic factors of school business officials of Connecticut school districts. ProQuest Digital Dissertations database.
Wagner, I. D., & Sniderman, S. M. (1984). Budgeting school dollars: A guide to spending and saving. Washington, DC: National School Boards Association.
Wildavsky, A. (2001). Budgeting and governing. New Brunswick, NJ: Transaction Publishers.
Willis, J., Durante, R., & Gazzerro, P. (2007, May 16). Toward effective resource use: Assessing how education dollars are spent. School Finance Redesign Project: Center on Reinventing Public Education.

About the author: Dr. Scott A. Burckbuchler is Superintendent of Schools, Essex County, Virginia. Please address all correspondence to: Superintendent@essex.k12.va.us
How to Create and Use Rubrics for Formative Assessment and Grading
Susan M. Brookhart
Alexandria, VA: Association for Supervision & Curriculum Development, 2013
$27.95, 158 pages

Reviewed by Leslie S. Kaplan, School Administrator (retired), Newport News Public Schools, Newport News, VA, and William A. Owings, Professor of Educational Leadership, Old Dominion University, Norfolk, VA

Who hasn't walked into a classroom and seen highly engaged students involved in a "learning activity" that was fun, interesting, cross-disciplinary, and collaborative – but which had little cognitive value? And, when observing a lesson, who at least once hasn't found teachers and students working from "rubrics" that were more lists of directions for activities, or means to generate grades, than links to the intellectual concepts or complex skills that the curriculum standards were intended to deliver? These questions are rhetorical, but in the Common Core era, amid widespread policy and practitioner interest in effective teaching, here's one that isn't: How do educators help teachers design and conduct learning that deepens and extends students' thinking and academic production?

Susan M. Brookhart, a former K-12 teacher, teacher educator, scholar, and author, proposes using well-designed and appropriate rubrics to help teachers teach, coordinate instruction and assessment, and help students learn. In her new book, How to Create and Use Rubrics for Formative Assessment and Grading, Brookhart defines a rubric as a "coherent set of criteria for students' work that includes descriptions of a performance quality on the criteria" – a definition, Brookhart asserts, that is rarely evident in practice. Well-designed and appropriate rubrics clarify the qualities students' work should have, clearly identify the learning targets and the criteria for success, and provide performance-level descriptions to help students (and their teachers) understand what the desired performance is and what it looks like. When used formatively, rubrics can show students what they need to do next in order to enhance the quality of their performance.
Focusing rubrics on learning – not on tasks – is the book's most important concept. Brookhart's purpose is to clarify what rubrics are (and are not), demonstrate how to construct good rubrics for a variety of contents and grade levels, and explain with words and practical examples how to use them as an instructional strategy to describe, develop, and support learning – that is, to help teachers teach in increasingly effective ways.

Organization of How to Create and Use Rubrics

Each Rubrics chapter contains several figures (tables, not illustrations) that highlight, simplify, and explain the idea under discussion. Self-reflection boxes help readers link the book's content with their own experiences and ideas so as to create personal meaning and relevance (itself a useful learning strategy). The book contains two sections: Part I (All Kinds of Rubrics) has eight chapters, and Part II (How to Use Rubrics) has three. Typically, chapters are 12 to 14 pages long, totaling 127 pages until the appendices.

Part 1. All Kinds of Rubrics

Chapter 1: What Are Rubrics and Why Are They Important? The author asks and answers: What is a rubric? What is its purpose? What are the advantages and disadvantages of analytic and holistic rubrics – and of general and task-specific rubrics? Why are rubrics important? Notably, the author asserts that really good rubrics help teachers avoid confusing the task or activity with the learning goal by keeping the focus primarily on the criteria – the learning – and only secondarily on the doing. Research supporting the claim that rubrics increase student learning is presented.

Chapter 2: Common Misconceptions About Rubrics. Hoping to "sharpen" readers' "radar" so they can avoid rubric pitfalls when they select, adapt, or write their own, this chapter illustrates three important misconceptions: (1) Rubrics (and teachers) should not confuse the learning outcome to be assessed with the tasks used to assess it. (2) Rubrics are not about the assignment's requirements or about counting things; a grade derived from such a "rubric" evaluates student compliance, not learning. (3) Rubrics are descriptive performance ratings – not to be confused with evaluative rating scales. An example of a poor "rubric" illustrates these misconceptions.

Chapter 3: Writing or Selecting Effective Rubrics. Intending to help readers become "more savvy" consumers of rubric resources, this chapter explains how to decide on appropriate criteria (criteria that are definable, observable, distinct from one another yet descriptive of a complete performance, matched to the description of learning in the standard or instructional goal, and able to vary along a continuum from high to low) and how to write performance-level descriptions. The chapter
The chapter also presents two general approaches to designing rubrics – top-down and bottom-up. Brookhart offers a clever "Rubric for Laughing" as a model for understanding how to choose criteria and write performance-level descriptions. She also provides a first draft and a revised version of a rubric for a life cycle project to show how to strengthen weak rubrics.

Chapter 4: General Rubrics for Fundamental Skills
When rubrics clearly characterize what student work should look like, instruction, assessment, and learning improve. This chapter discusses the value of using general analytic rubrics for fundamental skills and presents several excellent examples of rubrics in student-friendly language: the 6+1 Trait Writing rubrics (grades 3-12 and K-2), Mathematics Problem Solving, and a General Rubric for Written Projects. Creativity – as a characteristic of high-quality student work that can be assessed – receives special attention, accompanied by two Creativity rubrics (analytic and holistic). Here, as elsewhere, the emphasis is on learning and intellectual growth rather than on counting usage or mechanics errors. Recent research supporting these rubrics' positive effect on teaching and student learning is included.

Chapter 5: Task-Specific Rubrics and Scoring Schemes for Special Purposes
This chapter recommends that teachers always use general rubrics except in special cases – such as summative grading of students' ability to recall and comprehend a body of knowledge, concepts, and facts. It discusses guidelines for when to use task-specific rubrics, how to use task-specific rubrics or point-based scoring schemes for grading essay or other multipoint test questions, and how to write task-specific and point-based scoring schemes.

Chapter 6: Proficiency-Based Rubrics for Standards-Based Grading
This chapter helps teachers rethink how they teach and grade in a standards-based environment. Consistently using proficiency-based rubrics for all assessments – rubrics that describe student progress in terms of achieving the standard to which the assessment is aligned – changes the reference point for interpreting student performance. For example, a student's test score of 100% represents "Proficiency" only if the tasks and questions allow students to show extended and higher-level thinking – rather than simple recall or recognition of basic facts – on content related to the standard. The chapter explains how to create proficiency-based rubrics in three steps and offers clear models. Additionally, it demonstrates how to use proficiency-based rubrics in formative assessment to help students track their own work and set goals for what they want to learn and how they will know when they have done so. The chapter concludes with a discussion of using proficiency-based rubrics in standards-based grading to simplify teachers' evaluation of student progress and achievement.
Chapter 7: Checklists and Rating Scales: Not Rubrics, but in the Family
This chapter distinguishes checklists and rating scales from rubrics, with which they are often confused (hint: checklists and rating scales lack descriptions of performance quality), and describes situations in which checklists and rating scales can be useful. Suitable examples and comparisons illustrate the reasoning. Advocating for increased student learning over teacher expediency, Brookhart discourages the use of quality ratings (Excellent, Good, Fair, Poor) because they provide a verdict without describing the evidence, do not provide information that will move learning forward, yet lure teachers into thinking that they do.

Chapter 8: More Examples
Closing this section are more samples of rubrics in a variety of content areas and grade levels: elementary reading, middle school science, and high school technology. Offering instances of how rubrics can improve teaching and learning, the chapter argues that even when a student does not meet the criteria, rubrics are beneficial because they give the information necessary for the student's next steps.

Part 2. How to Use Rubrics

Chapter 9: Rubrics and Formative Assessment: Sharing Learning Targets with Students
Sharing learning targets and criteria for success with students is the first and most basic strategy for effective teaching – especially when the learning targets are complex or when several qualities must occur at the same time. This chapter presents a range of formative assessment strategies to make this happen. It defines the difference between "instructional objectives," written for teachers, and "learning targets," written for students. The chapter also shows teachers, step by step, how to make their instructional activities "performances of understanding" that show students what they are supposed to learn, develop that learning through the students' experience doing the work, and give evidence of students' learning by providing work that is available for inspection – and assessment – by both teacher and student.

Chapter 10: Rubrics and Formative Assessment: Feedback and Student Self-Assessment
Declaring that "formative assessment is about forming learning," the chapter begins by explaining formative assessment as an ongoing, systematic process of gathering evidence of learning to improve student achievement. It goes on to describe how to use rubrics to give students feedback that moves them forward, supports self-assessment and goal setting, and helps them ask effective questions about their work. It also presents several strategies and invites readers to devise others that fit their students, content, and grade levels.
Chapter 11: How to Use Rubrics for Grading
Completing the circle, Chapter 11 briefly considers how to use rubrics in the grading process. It asserts that the goal for grading is to have the final grade represent, as accurately and reliably as possible, the information about student learning contained in individual grades or grade sets. It describes how to use rubrics to grade individual assessments and how to combine individual rubric-based grades into a report card grade. A decision tree helps teachers combine individual grades into report card summary grades.

Afterword and Appendices
The Afterword ties up loose ends with a final restatement of purpose and a Self-Reflection on readers' current view of rubrics to compare with the one completed in Chapter 1. Appendix A contains the Six-Point 6+1 Trait Writing Rubrics, Grades 3-12 (pages 128-141); Appendix B contains an Illustrated Six-Point 6+1 Trait Writing Rubric, Grades K-2 (pages 142-153).

Author's Orientation

Brookhart uses a cognitive approach to improve teaching and learning. She views rubrics as a means for teachers to develop their own and their students' intellectual capacities by providing more learning-focused planning for instruction; devising opportunities for students to generate high-level thinking, creativity, and production in the content areas; and encouraging and assessing students' creativity/originality in academic domains – in short, using rubrics to guide learning and provide formative assessments that build students' knowledge and metacognitive capacities.

During instructional planning, teachers think through the evidence students would need to produce or perform to demonstrate higher-level thinking and creativity with the class content tied to standards. Similar to Grant Wiggins and Jay McTighe's Understanding by Design, Brookhart asks teachers to begin with the end in mind. For instance, Chapter 3 advises teachers writing performance-level descriptions first to decide the number of levels that can meaningfully describe performance differences. Next, they anchor the performance level intended for most students to reach – typically "Proficient." Then they work outward, higher and lower, from there. Importantly, rubrics must allow students to go beyond what is expected or required, extending and expanding their thinking and production about the subject to reach "Advanced" levels.
Likewise, Brookhart endorses using good general rubrics that give students ample room for creativity and metacognition. She advises teachers to use the lowest-inference descriptors that still assess important qualities, leaving descriptions open to professional judgment that can recognize complexity and subtlety rather than "locking things down" with overly rigid descriptions. Accordingly, she discourages the use of "bad rubrics," or directions and checklists "masquerading" as rubrics, which actually constrain creativity and metacognition.

Creativity receives special attention. Brookhart believes that teachers sometimes misinterpret creativity as a description of student work that is visually attractive, interesting, persuasive, or exciting. In English Language Arts, mathematics, science, or social studies classes, however, visual arts skills may not be the assignment's learning objective – and should not be on the rubric. Instead, the Creativity/Originality criterion means work that is original, inventive, imaginative, and unique, as compared with work that relies on other people's ideas. Accordingly, creativity can be assessed by looking at the depth and quality of ideas, the variety of sources, the organization and combination of ideas, and the originality of the contribution.

In the same vein, Brookhart believes that assessing student work should emphasize its intellectual merit, not judge it by elements tangential to conceptual development, such as its visual attractiveness, spelling or usage mistakes, neatness, or "effort." Brookhart considers these latter elements work habits to be assessed separately from academic achievement. By way of example, she describes how teachers can evaluate grammar in ways other than counting errors – such as determining how much editing would be needed to make the writing "readable." Similarly, she offers, good rubrics allow multiple paths to quality work.

Contributions to the Field

Schools cannot improve – and students cannot increase their learning and achievement – without upgraded instruction. The Common Core State Standards in English Language Arts and mathematics – adopted in 45 states and the District of Columbia – place substantial expectations on teachers to progressively increase their students' – and, implicitly, their own – intellectual capacities and their expression in all academic disciplines. If teachers are to help students build their content knowledge, engage in critical reading and thinking in a variety of media, use cogent reasoning backed by relevant evidence, and develop the broad range of language and numerical understandings and skills that will enable them to become successful citizens in 21st-century environments, teachers need to be able to do so, too. Many educators will need clear direction and professional development in how to design and assess these enhanced instructional approaches if their teaching is to help their students reach this ambitious goal.
Designing and using well-constructed and appropriate rubrics can be a key tool in this endeavor. In an approach 180 degrees removed from curriculum-narrowing "teaching to the test," rubrics allow teachers and students to raise the learning ceiling, prompting both to develop a progressively more complex and demanding engagement with the subject matter and to transfer it into learning demonstrated in their own work. These formative, interactive, feedback-driven, and often collaborative practices enhance both teaching and learning.

What is more, this approach works. Research confirms that students of varying achievement levels appreciate teachers who model their thinking aloud because it clarifies, confirms, and expands their own metacognition.2 Further, instructional strategies that support rubric-aligned instruction – including receiving criterion-referenced feedback, engaging students in the feedback process, and using graphic organizers to focus and collect specific feedback – increase student achievement.3

The short, targeted chapters, with easy-to-read language and clear definitions, are appropriate for K-12 teachers to use in professional learning communities and academic departments as group study and collective practice for instructional improvement. Likewise, the book can be a valuable component of a teacher preparation program that concentrates on readying teachers for classroom effectiveness. Brookhart's many detailed, real-world examples represent a range of disciplines and grade levels, making the book highly applicable to most teachers. Lastly, Self-Reflection boxes in each chapter make the learning both personal and relevant, creating a space for individual and group engagement to refine and deepen teachers' understanding, build group cohesion, and reinforce adult learning. Our students won't learn to higher levels unless our teachers do.
Notes

2. Davey, B. (1983). Think aloud: Modeling the cognitive processes for reading comprehension. Journal of Reading, 27(1), 44-47; Fisher, D., & Frey, N. (2012). Gifted students' perspectives on an instructional framework for school improvement. NASSP Bulletin, 96(4), 285-301; Wilhelm, J. (2008). Improving comprehension with think-aloud strategies: Modeling what good readers do. New York: Scholastic.

3. Dean, C. B., Hubbell, E. R., Pitler, H., & Stone, B. (2012). Classroom instruction that works: Research-based strategies for increasing student achievement (2nd ed.). Alexandria, VA: ASCD; Kaplan, L. S., & Owings, W. A. (2001). Teacher quality and student achievement: Recommendations for principals. NASSP Bulletin, 85(628), 64-73; Marzano, R. J., Pickering, D. J., & Pollock, J. E. (2001). Classroom instruction that works: Research-based strategies for increasing student achievement. Alexandria, VA: ASCD.

Review Authors

Leslie S. Kaplan, Ed.D., a retired school administrator in Newport News, VA, has provided middle school, high school, and central office instructional and school counseling leadership and program development. Her scholarly publications, co-authored with William A. Owings, appear in numerous professional journals. She and Owings have also co-authored Culture Re-Boot: Reinvigorating School Culture for Improved Student Outcomes; American Public School Finance (2nd Ed.); Leadership and Organizational Behavior in Education: Theory into Practice; American Education: Building a Common Foundation; Teacher Quality, Teaching Quality, and School Improvement; Best Practices, Best Thinking, and Emerging Issues in School Leadership; and Enhancing Teacher and Teaching Quality. Kaplan is co-editor of the Journal for Effective Schools and serves on the National Association of Secondary School Principals (NASSP) Bulletin Editorial Board. Currently a board member for Voices for Virginia's Children, she is a past president of the Virginia Association for Supervision and Curriculum Development and the Virginia Counselors Association.

William A. Owings, Ed.D., is a professor of educational leadership at Old Dominion University in Norfolk, VA. Owings has worked as a public school teacher, an elementary and high school principal, an assistant superintendent, and a superintendent of schools. His scholarly publications co-authored with Leslie S. Kaplan include articles in the Eurasian Journal of Business and Economics, the National Association of Secondary School Principals (NASSP) Bulletin, the Journal of School Leadership, the Journal for Effective Schools, Phi Delta Kappan, Teachers College Record, and the Journal of Education Finance. Owings has served on the state and international boards of the Association for Supervision and Curriculum Development (ASCD), is currently the editor of the Journal for Effective Schools, and serves on the Journal of Education Finance Editorial Advisory Board.
CALL FOR ARTICLES FOR UPCOMING ISSUES!

Detailed information concerning the submission of manuscripts can be found online at http://effectiveschoolsjournal.org

Articles for potential publication in the Journal for Effective Schools may be submitted on an ongoing basis to wowings@odu.edu

Journal for Effective Schools at Old Dominion University
College of Education
Educational Foundations and Leadership
Norfolk, Virginia 23529
Published by the Journal for Effective Schools at Old Dominion University
College of Education
Educational Foundations and Leadership
Norfolk, Virginia 23529