DFG-NSF Conference

US Panelist Papers
An International Conference on Research and Development in Mathematics and Science Education

U.S. Panelist Papers

Table of Contents

A Perspective on Research in Science Education
Richard A. Duschl, King's College London

Teachers' Knowledge of Mathematics and Its Role in Teacher Preparation and Professional Development Programs
Jeremy Kilpatrick, University of Georgia

A Framework for Assessment in Science
Mark Wilson, UC Berkeley

What is the state of research on how to work with various underachieving subpopulations (e.g., minorities, high-poverty students, culturally different students)?
Walter G. Secada, University of Wisconsin – Madison

A Summary of a Study Currently Underway
Jo Ellen Roseman, American Association for the Advancement of Science

What is the Research on the Role and Effectiveness of Using Information and Communication Technology (ICT) in Mathematics and Science Education?
Kathleen Roth, Lessonlab, Inc.

Conceptions of Mathematical Competence
Edward A. Silver, University of Michigan

Research on Technology and the Teaching and Learning of Mathematics
M. Kathleen Heid, The Pennsylvania State University


A Perspective on Research in Science Education

Richard A. Duschl
Center for Informal Learning and Schools
Department of Education and Professional Studies
King's College London
richard.duschl@kcl.ac.uk

Since the 1960s, research and scholarly findings in cognitive and social psychology (Bransford, Brown & Cocking, 1999; Pellegrino et al., 2001), in history, philosophy and sociology of science (Giere, 1988; Longino, 2001; Magnani & Nersessian, 1999), and in science education research (Fraser & Tobin, 1999; Komorek et al., 1999; Millar, Leach & Osborne, 2000; Minstrell & Van Zee, 2000; NRC 1996, 2000; Psillos et al., 2001) have, respectively, developed new models of thinking and reasoning scientifically, new models of science as a way of knowing, and new models of learning and teaching science. When this research is synthesized, I think the messages we receive are:

(1) Curriculum, instruction and assessment frameworks for teaching and learning science should be integrated and focus on three integrated domains:

• the conceptual structures and cognitive processes needed for reasoning scientifically,

• the epistemic frameworks needed for developing and evaluating scientific knowledge and inquiry methods, and

• the social processes and forums that shape how knowledge is communicated, represented, argued and debated.

(2) The conditions for science learning and teaching improve through the establishment of:

• learning environments that promote student-centered learning,

• instructional sequences and contexts that promote integrating science learning across each of the three domains in (1),

• activities and tasks that make students' thinking visible in each of the three domains, and

• teacher assessment practices that monitor learning and provide feedback on thinking and learning in each of the three domains.

Research informs us that students come to our classrooms with a diversity of beliefs about the natural world and a diversity of skills for learning about it. Research also tells us that accomplished teachers know how to take advantage of this diversity to enhance student learning and skill development. Accomplished teachers know how to mediate student learning, that is, to provide the helping hand and probing questions that enable students to move ahead in their understanding of conceptual structures, of criteria for evaluating the status of knowledge claims, and of strategies for communicating knowledge claims to others. Furthermore, accomplished teachers know how to create a classroom climate that promotes the sharing and display of students' ideas, thus making learners' thinking visible and teachers' assessment of inquiry possible.

In thinking about contemporary research agendas for science education, I want to propose that we focus on two general domains. One is the design of classroom learning environments, here meaning the curriculum, instruction and assessment models that comprise the learning environment, that promote learning science through inquiry. Two is how to engage and facilitate students in thinking about the structure, meaning and communication of scientific information and knowledge (i.e., the conceptual, epistemic and social domains). Herein, I maintain, lie the fundamental educational problems for developing and assessing learners' engagement in scientific inquiry. First, create a learning environment that makes it possible to 'listen to inquiry', 'listen to learning', and 'listen to reasoning'. Second, adopt and


employ a set of mediation strategies and assessment criteria that make it possible to give feedback on students' use of scientific information and on their construction and evaluation of scientific knowledge claims.

The research agenda can now be refined by looking at these two problems. What are the characteristics of activity structures that promote listening, inquiry, learning, reasoning, mediation and assessment? What are the conditions of learning (cognitive, epistemic and social) that promote the construction and evaluation of scientific knowledge claims? What are the guiding principles for the design of learning environments that promote accomplished teaching and learning among diverse communities of learners and within diverse contexts and conditions?

To help us think further about possible contemporary research agendas in science education, let's consider the research on design principles by examining two alternative sets of guiding principles that can inform the framing or design of science teaching and learning. Linn and Hsi (2000) present the results of a science education research program based on the Knowledge Integration Environment framework. For them, some guiding principles for the design of curriculum, instruction and assessment frameworks are:

~ Make science accessible
~ Make thinking visible
~ Help students learn from one another
~ Foster lifelong learning

Engle and Conant (2002), based on an analysis of lessons from a Fostering a Community of Learners classroom, propose a more generic set of guiding principles for facilitating learners' productive disciplinary engagement. The four guiding principles they put forth are:

~ Problematizing content
~ Giving students authority
~ Holding students accountable to others and to disciplinary norms
~ Providing relevant resources

There are many proposals for guiding principles I could have selected from other scholars researching the design of science and mathematics learning environments, some developed by researchers named in Table 1. However, we can begin to see from just these two contrasting sets some of the research questions and challenges confronting us as education researchers and developers. What are the conditions for making science accessible? What elements of science (e.g., conceptual, methodological, cognitive, epistemic, representational) do we make accessible? Do these hold for the various contexts of science education (e.g., concept learning, process skill learning, discovery learning, inquiry learning, etc.)? How does problematizing content facilitate, or hinder, making science accessible? What are the discourse processes that help give students authority?

The one principle that fascinates me the most, given my interest in philosophy of science and argumentation discourse, is 'holding students accountable to disciplinary norms'. I would maintain that we know very little about how to integrate domain-specific disciplinary norms into science learning and discourse contexts (Kelly & Duschl, in review). I would also maintain, though, that we are making significant progress. Consider the NSF-funded research projects presented in Table 1. Each of these projects has made important contributions to our understanding of the design of science learning environments and of the factors that facilitate learners' understanding of and thinking about the structure, meaning and communication of scientific information and knowledge. The confluence of the learning sciences, science studies, and educational research and development has generated new and exciting curriculum, instruction and assessment frameworks as well as new research problems and questions.

References

Bransford, J., A. Brown, & R. Cocking (Eds.). 1999. How people learn: Brain, mind, experience, and school. Washington, DC: National Academy Press. (http://www.nap.edu)


Engle, R. & Conant, F. (2002). Guiding principles for fostering productive disciplinary engagement: Explaining an emergent argument in a community of learners classroom. Cognition & Instruction, 20(4), 399-483.

Giere, R. (1988). Explaining science: A cognitive approach. Chicago: University of Chicago Press.

Kelly, G. & Duschl, R. (in review). Toward a research agenda for epistemological studies in science education. Review of Educational Research.

Komorek, M. et al. (1999). Research in science education past, present, and future. Volumes I and II. Proceedings of the Second International Conference of the European Science Education Research Association (ESERA). August 31-September 4, 1999, Kiel, Germany.

Linn, M. & S. Hsi. 2000. Computers, teachers, peers: Science learning partners. Mahwah, NJ: Lawrence Erlbaum Associates.

Longino, H. (2001). The fate of knowledge. Princeton: Princeton University Press.

Magnani, L. & N. Nersessian. (Eds.) 1999. Model-based reasoning in scientific discovery. New York: Kluwer Academic/Plenum Publishers.

Millar, R., J. Leach, & J. Osborne. (Eds.) 2000. Improving science education: The contribution of research. Philadelphia: Open University Press.

Minstrell, J. & E. Van Zee. (Eds.) 2000. Inquiring into inquiry learning and teaching in science. Washington, DC: American Association for the Advancement of Science.

National Research Council. 1996. National Science Education Standards. Washington, DC: National Academy Press. (http://www.nap.edu)

National Research Council. 2000. Inquiry and the National Science Education Standards. Washington, DC: National Academy Press. (http://www.nap.edu)

Pellegrino, J., N. Chudowsky, & R. Glaser. (Eds.) 2001. Knowing what students know: The science and design of educational assessment. Washington, DC: National Academy Press. (http://www.nap.edu)

Psillos, D. et al. (2001). Science education research in the knowledge-based society, Volumes I and II. Proceedings of the Third International Conference of the European Science Education Research Association (ESERA). August 21-24, 2001, Thessaloniki, Greece.


Table 1. NSF-Supported Full-Inquiry Science Units

Curriculum Project & Web Address | Project Leader | Description of Units

LeTUS - Learning Technologies in Urban Schools (www.ils.nwu.edu or www.letus.org) | Louis Gomez, Brian Reiser, & Daniel Edelson, Northwestern U. | Middle & secondary level inquiry-based units on Climate, Air Quality, Solar Energy, Create-a-World, I, Bio

Learning by Design (www.cc.gatech.edu) | Janet Kolodner, Georgia Tech U. | Middle school; physical and earth science units designed around case-based reasoning

WISE - Web-based Inquiry Science Environment (wise.berkeley.edu) | Marcia Linn & Jim Slotta, U. of California, Berkeley | Grades 5-12; 2-week-long capstone units; 13 projects in physical, life and earth science

One Sky, Many Voices (www.onesky.umich.edu) | Nancy Songer, U. of Michigan | K-12; environmental focus; 4-8-week-long units; Kids as Global Scientists, BioKids

MUSE - Modeling for Understanding in Science Education (www.wcer.wisc.edu) | Jim Stewart, U. of Wisconsin | Secondary biology & earth science; extended 9-week courses; units on genetics, evolution

SEPIA - Science Education through Portfolio Instruction and Assessment (www.kcl.ac.uk/depsta/education) | Richard Duschl, King's College London | Middle school; 4-6-week-long problem-based inquiry units; units on flotation, acid/base chemistry, earthquakes & volcanoes

BioLogica (biologica.concord.org) | Concord Consortium | High school; independent inquiry and scientific reasoning; units on genetics, cells, DNA

HI-CE - Center for Highly Interactive Computing in Education (hi-ce.eecs.umich) | Joseph Krajcik, U. of Michigan | Middle and high school; modeling software (e.g., Science Laboratory) to investigate complex problems

BGUILE - Biology GUided Inquiry Learning Environments (www.letus.org/bguile/) | Brian Reiser, Northwestern U. | Middle & high school; scientific investigation and argumentation using puzzling & authentic problems; units on evolution, animal behavior


Teachers' Knowledge of Mathematics and Its Role in Teacher Preparation and Professional Development Programs

Jeremy Kilpatrick
University of Georgia
jkilpat@coe.uga.edu

In a 2000 survey of mathematics teachers in schools across the United States (Malzahn, 2002; Whittington, 2002a, 2002b), teachers at all grade levels tended to rate themselves very well qualified to teach mathematics. Of the elementary school (grades K to 5) teachers surveyed, 54% saw themselves as very well qualified, and only 1% did not consider themselves well qualified (Malzahn, p. 7). Two areas for which they indicated they were not so well prepared were teaching students with limited proficiency in English and using calculators and computers for more than drill and practice, particularly using the Internet (p. 9). Only 45% reported that they had a moderate or substantial need for professional development to deepen their own mathematics content knowledge, which was much less than the number who saw a need to learn how to use technology, to use inquiry strategies, or to teach students with special needs (p. 11).

The percent of middle school (grades 6 to 8) teachers of mathematics who saw themselves as very well qualified to teach various areas of the standard middle school mathematics curriculum ranged from 57% for algebra to 92% for computation. Only 9% did not consider themselves well qualified to teach algebra, and none thought they were not well qualified to teach computation. As a rule, however, they did not consider themselves very well qualified to teach topics from the high school curriculum. The use of technology in support of mathematics learning was also an area in which they tended not to see themselves as especially qualified (Whittington, 2002b, p. 6), although they were more confident than the elementary school teachers. The percent rating themselves as needing professional development to deepen their own mathematics content knowledge was 31%, again much lower than the percent seeing a need for other topics such as using technology (p. 9).

High school (grades 9 to 12) teachers of mathematics were the most likely of all to see themselves as well qualified, but their perceptions varied across curriculum areas. Although 58% had majored in mathematics as undergraduates (Whittington, 2002a, p. 4), the percent rating themselves well qualified to teach various areas of the high school curriculum ranged from 12% for mathematical structures to 94% for pre-algebra and for algebra (p. 6). Less than 10% considered themselves not well qualified to teach any of the major curriculum areas such as algebra, geometry, and functions, and only 23% thought they were not well qualified to teach technology (in this part of the survey it was apparently treated as a curriculum area rather than as a tool for teaching). The percent of high school teachers rating themselves as needing professional development in knowledge of mathematical content was only 32%, once again the lowest of the topics they might have chosen.

This picture of U.S. teachers' self-perceived mathematical knowledge contrasts rather sharply with the perceptions of others. For example, Ma (1999), who surveyed the mathematical knowledge of U.S. and Chinese elementary school teachers, reported that the U.S. teachers tended to lack the profound understanding that the Chinese teachers commonly possessed. Others studying U.S. teachers' knowledge of mathematics for teaching have also found it weak or lacking (for reviews, see Ball, Lubienski, & Mewborn, 2001; Mewborn, in press). When teachers do not know the mathematics they need in order to teach it effectively, the mathematical quality of their lessons declines. Reporting on the TIMSS Video Study, Stigler and Hiebert (1999, p. 65) noted that a group of university mathematics teachers had rated 89% of the eighth-grade lessons taught by a sample of U.S. teachers as low in quality of mathematics content, whereas the corresponding percents for Germany and Japan were 34% and 11%, respectively.

Ball et al. (2001) argue that the problem is not teachers' mathematics knowledge per se but instead their "knowledge of mathematics in and for teaching" (p. 449). In other words, we should view the knowledge base (including cognition and beliefs) for teaching mathematics as involving an application of mathematics. Teachers need to know mathematics in a special way so that they can use it in teaching, just as engineers and accountants need to know mathematics in a special way so that they can use it in



their work. And beyond knowing, understanding, and appreciating the mathematics they will use, teachers also need to know students and need to know classroom practice (see Kilpatrick, Swafford, & Findell, 2001, ch. 10).

Reviewing programs to develop the proficient teaching of mathematics, a recent report from the U.S. National Research Council says the following:

Programs of teacher education and professional development based on research integrate the study of mathematics and the study of students' learning so that teachers will forge connections between the two. Some of these programs begin with mathematical ideas from the school curriculum and ask teachers to analyze those ideas from the learners' perspective. Other programs use students' mathematical thinking as a springboard to motivate teachers' learning of mathematics. Still others begin with teaching practice and move toward a consideration of mathematics and students' thinking. (Kilpatrick et al., 2001, p. 385)

The report goes on to give examples of four types of programs for developing proficiency in teaching that illustrate some possible approaches. The report argues that such proficiency is best developed over time in communities of fellow practitioners and learners who work together on common problems of teaching:

If teachers are going to engage in inquiry, they need repeated opportunities to try out ideas and approaches with their students and continuing opportunities to discuss their experiences with specialists in mathematics, staff developers, and other teachers. These opportunities should not be limited to a period of a few weeks or months; instead, they should be part of the ongoing culture of professional practice. Through inquiry into teaching, teacher learning can become generative, and teachers can continue to learn and grow as professionals. (p. 399)

However teacher preparation programs and continuing professional development programs are organized, development of the teacher's knowledge base seems to demand that programs be appropriate, coherent, collaborative, and sustained. Neither standard university courses in mathematics nor one- or two-day workshops for teachers have proven to be very effective in improving teachers' knowledge of mathematics in and for teaching. Much more is required if that knowledge is to make a difference in practice.

My priorities for further research and development come from the proposed work of the NSF-funded U.S. Center for Proficiency in Teaching Mathematics (see http://www.cptm.us/) and are the following:

1. Where and how do teachers use mathematics during their work?

2. What mathematical knowledge, skills, and dispositions are entailed in teaching mathematics for proficiency?

3. How can teachers, prospective or practicing, develop their mathematical knowledge and use it more effectively in teaching?

4. What constitutes a professional learning opportunity in mathematics that is consonant with making instructional practice and its development central to that learning?

5. What are the characteristics of high-quality, self-sustaining professional development opportunities for P-16 teachers that make those opportunities effective?

6. What does it take to develop productive alliances and collaborations across communities that can contribute to the professional learning of mathematics teachers?

Note

Paper prepared for the German-American Workshop on Quality Development in Mathematics and Science Education, Kiel, Germany, 5–8 March 2003.

References

Ball, D. L., Lubienski, S. T., & Mewborn, D. S. (2001). Research on teaching mathematics: The unsolved problem of teachers' mathematical knowledge. In V. Richardson (Ed.), Handbook of research on teaching (4th ed., pp. 433–456). Washington, DC: American Educational Research Association.


Kilpatrick, J., Swafford, J., & Findell, B. (Eds.). (2001). Adding it up: Helping children learn mathematics. Washington, DC: National Academy Press.

Ma, L. (1999). Knowing and teaching elementary mathematics: Teachers' understanding of fundamental mathematics in China and the United States. Mahwah, NJ: Erlbaum.

Malzahn, K. A. (2002, December). Status of elementary school mathematics teaching (Report from the 2000 National Survey of Science and Mathematics Education). Chapel Hill, NC: Horizon Research. Available: http://2000survey.horizon-research.com/reports/#statusteaching

Mewborn, D. S. (in press). Teaching, teachers' knowledge, and their professional development. In J. Kilpatrick, W. G. Martin, & D. E. Schifter (Eds.), A research companion to Principles and Standards for School Mathematics. Reston, VA: National Council of Teachers of Mathematics.

Stigler, J. W., & Hiebert, J. (1999). The teaching gap: Best ideas from the world's teachers for improving education in the classroom. New York: Free Press.

Whittington, D. (2002a, December). Status of high school mathematics teaching (Report from the 2000 National Survey of Science and Mathematics Education). Chapel Hill, NC: Horizon Research. Available: http://2000survey.horizon-research.com/reports/#statusteaching

Whittington, D. (2002b, December). Status of middle school mathematics teaching (Report from the 2000 National Survey of Science and Mathematics Education). Chapel Hill, NC: Horizon Research. Available: http://2000survey.horizon-research.com/reports/#statusteaching


A Framework for Assessment in Science [1]

Mark Wilson, UC Berkeley
January 2003
mrwilson@socrates.berkeley.edu

The National Research Council's Knowing What Students Know: The Science and Design of Educational Assessment (2001) sets out three desirable features of a balanced assessment system: comprehensiveness, coherence and continuity. A comprehensive assessment system would use a range of measurement methods that produce a variety of measures from which to make educational decisions at different levels. Coherence means that an assessment system should have a well-structured conceptual base that links together the models of student learning that underlie the assessments. Ideally, assessments should also be continuous, in that they measure student progress over time in a linked way. Knowing What Students Know gives examples of assessments that exemplify each of these traits, but concludes that no existing assessment system meets all three criteria.

At this key point, the Center for Assessment and Evaluation of Student Learning (CAESL) proposes to develop, validate and publish a practical but theory-based framework to guide state assessment systems in science so that they come as close as possible to meeting the criteria of comprehensiveness, coherence and continuity. The framework will draw on existing research and will reflect our belief in the need for assessment systems grounded in sound theories of science learning and valid systems of assessment and assessment use. Our goal is to produce a framework in three parts that can be used by policy-makers, state and district education staff, principals, teachers and others involved in the decision-making processes that lead to measuring student achievement in science. The framework will support decision-makers who need to choose a matching set of curriculum and assessments (both formative and summative) or to develop (or draw up specifications for) a new assessment system to match a set of standards and curriculum. As such, the framework will be for use primarily by those involved in choosing, specifying, and designing state or district science assessments, such as State Board of Education members, State Department of Education staff and school district staff. The framework will do three things:

• provide a conceptual framework that will enable users to link together existing standards-based classroom instructional assessments and statewide science tests, as well as national and international measures of science performance (e.g., TIMSS)

• provide a measurement framework for the development of new assessment systems that will link standards-based classroom instructional assessments to larger-scale assessments

• provide a framework for teachers' classroom instructional and assessment practices so that they maintain the linkage of effective science learning, standards and assessment use

Many of the intellectual threads that we need to pull together to tailor these pieces exist within the research of our CAESL partners, but the threads have not previously been woven into a scalable framework that is useful for decision-makers in science education. What is also new about what we propose is that, although some of the ideas described later in this proposal have been tried in certain areas of science education, we propose to generalize the frameworks to be usable across all science domains. For the frameworks to be usable, we will not limit our thinking to the usual print-based informational formats, but will explore how technology can be used to make the frameworks

[1] These notes are the result of a collaborative effort among the following people associated with the Center for Assessment and Evaluation of Student Learning: Joan Herman (UCLA), Steve Schneider (WestEd), Rich Shavelson (Stanford University), Mike Timms (WestEd) and Mark Wilson (UC Berkeley).


more interactive and widely available. We will consider how the CAESL web site and online training modules could be used to deliver our message. We describe below the existing work by CAESL partners that will underpin the framework, how we will build upon this work to construct the framework, and a proposed timeline for the project.

A Framework for Science Achievement

One of the goals of the framework is to link classroom instructional assessments with statewide science tests as well as national measures of science performance (e.g., TIMSS). One mechanism for constructing links between various assessment instruments is to develop a conceptual framework for science achievement that clearly distinguishes between different types of knowledge. Building on previous findings from brain research (Cohen & Tong, 2001; National Research Council, 1999) and cognitive science research (Pellegrino, Chudowsky, & Glaser, 2001; National Research Council, 1999), Shavelson and his students (Li & Shavelson, 2001; Shavelson & Ruiz-Primo, 1999) envision science achievement as comprising four distinct but overlapping types of knowledge: declarative, procedural, schematic, and strategic. Declarative knowledge is "knowing that": scientific definitions and facts, mostly in the form of terms, statements, descriptions, or data. Procedural knowledge is "knowing how": for example, how to design a study that manipulates one relevant variable and controls others; it includes if-then production rules or a sequence of operations (measurements or procedures) to achieve a certain goal. Schematic knowledge is "knowing why": the principles, schemes, and mental models, based on scientifically justifiable "theory" or "conceptions", that explain the physical world. Strategic knowledge is "knowing when, where, and how" to apply domain-specific knowledge and strategies to solve a unique problem or approach a new situation.

Once developed, the framework for science achievement can be used as a method for identifying and analyzing specific test items on different types of assessments. This will enable the assessor to determine the type of knowledge that a specific test item is attempting to measure and to examine the distribution of knowledge constructs embedded within the assessment instrument.
Li and Shavelson (2001) developed a system for logically analyzing science test items using the framework. Li’s (2001) factor analytic and cognitive research findings indicate that the logical analysis process is an effective method for categorizing TIMSS test items into the types of knowledge in the framework. Specifically, Li was able to reliably categorize TIMSS science-test items based on a logical analysis. Confirmatory factor analysis revealed the hypothesized three factors--declarative, procedural, and schematic knowledge--and think-aloud protocols provided further support. We propose using these methods in some combination to evaluate test items and to extend the use of this type of analysis to local and large-scale assessment instruments by creating an easy-to-use framework.
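As a rough illustration of this kind of logical analysis, the idea can be sketched as tagging each item with the knowledge type it targets and then tallying the distribution across the instrument. This is a minimal sketch only, not CAESL's or Li's actual tooling; the item identifiers and tags below are invented.

```python
# Sketch (not CAESL's actual instruments): tag each test item with the
# knowledge type it targets, then tally the distribution across an
# assessment. Item ids and tags are hypothetical.
from collections import Counter

KNOWLEDGE_TYPES = {"declarative", "procedural", "schematic", "strategic"}

def tally_knowledge_types(tagged_items):
    """Count how many items target each knowledge type.

    tagged_items: list of (item_id, knowledge_type) pairs produced by a
    logical analysis of the instrument.
    """
    counts = Counter()
    for item_id, ktype in tagged_items:
        if ktype not in KNOWLEDGE_TYPES:
            raise ValueError(f"unknown knowledge type for item {item_id}: {ktype}")
        counts[ktype] += 1
    return counts

# Hypothetical analysis of a six-item instrument:
items = [
    ("Q1", "declarative"),   # recall a definition
    ("Q2", "declarative"),   # state a fact
    ("Q3", "procedural"),    # design a controlled comparison
    ("Q4", "schematic"),     # explain why an outcome occurs
    ("Q5", "schematic"),     # apply a mental model
    ("Q6", "strategic"),     # decide which approach fits a novel problem
]
print(tally_knowledge_types(items))
```

A tally like this makes visible, for example, an instrument dominated by declarative items, which is exactly the kind of imbalance the framework is meant to surface.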

A Measurement Framework

The purposes of large-scale and classroom assessments are often seen as distinct. On the surface, large-scale assessments, including school district, state and national assessments, are directed at the formative and summative assessment of educational programs, while classroom assessments are primarily focused on the educational status or progress of individual students. Looking beyond this superficial view, however, there need to be several very important links between the two levels in order that they comprise a coordinated system of assessments.

First, the educational advancement of each of these educational programs is composed of the educational advancements of its many individual students; ultimately, the aim of improving educational attainment in any program will depend on the progress of the students. Hence, the constructs being assessed at both levels need to be consistent, although the constructs may need to be more differentiated at the classroom level and less differentiated at the large-scale level. This is the characteristic of a system of coordinated assessments that has been termed coherence by the NRC report mentioned above (NRC, 2001).

Second, the system of assessments must cover the full range of ways of measuring those constructs that is reflective of the instruction that students are being given in the classrooms. This is termed comprehensiveness in the same NRC report. Again, one might expect more differentiation at the classroom level than at the large-scale level, but the restriction in assessment formats at the higher level must not lead to deformation of the measured constructs in large-scale assessment. Otherwise, the lack of fidelity between what is occurring in the classroom and what is being measured from the large-scale perspective will almost invariably lead to erroneous policy decisions.

Third, individual assessments at both levels



need to be seen as part of a continuous stream of evidence that tracks the progress of both individual students and educational programs over time. Of course, this can only be contemplated where there is consistency in the definition of the constructs over time. This has been termed continuity in the same NRC report.

To attain coherence across an assessment system, there needs to be a measurement framework that specifies how it can be achieved. Toward this end, one of our current interests is to investigate the transferability of elements of the BEAR Assessment System (Wilson & Sloane, 2000) beyond its initial implementation in the Issues, Evidence, and You (IEY) curriculum. This curriculum and its accompanying assessment system were developed in tandem with the Science Education for Public Understanding Program (SEPUP) at the Lawrence Hall of Science. Of particular interest for developing a measurement framework for linking across assessments are two components.

The first is what we call progress variables. These are well-thought-out and researched hierarchies of qualitatively different levels of performance related to a central concept or skill in a particular curriculum (and/or assessment tool). For example, the variables that form the heart of IEY are: Designing and Conducting Investigations (DCI), Evidence and Tradeoffs (ET), Understanding Concepts (UC), Communicating Scientific Information (CSI), and Group Interaction (GI). Each of these progress variables is a particular case of the four types of knowledge described in the previous section.

The second component of interest is what we call a link test. In general, the BEAR Assessment System emphasizes embedded assessment; that is, assessment activities that are an integral part of the teaching and learning activities of a particular curriculum.
However, to scaffold these embedded assessments, we have also developed sets of items that are relatively curriculum-independent (insofar as they do not relate to a particular activity or lesson) and are more efficient in producing responses. We call these items link items; a set of such items given at some point in time is a link test. The link items developed as part of the SEPUP Assessment System are open-ended short-answer items related to one of the general themes studied in IEY (Water, Materials Science, Energy), each requiring between one paragraph and one page for an adequate response, where each response is scorable on one or more of the SEPUP variables. In recent work, we have developed a new format for link items that is designed to be interpreted in the same way as open-ended link items, using the framework of progress variables, but also to be more efficiently scorable than open-ended items. We aim to merge the efficiency of multiple-choice (mc) items with the interpretability of the link items through their relationship to an underlying variable. To do so, we have developed a new type of mc item, which we call, generically, mc-link items; they are distinct from regular mc items in that the distractors each relate to different levels of the variable. Although we do not see these items as being reliable enough for interpretation at the individual item level, we believe that small sets of them, all developed with respect to a single variable, will indeed yield reliable diagnostic information. At the same time, we have also been considering the use of sets of link items (mc or open-ended) to help make large-scale assessments more useful in the classroom. We have long believed that link tests, in addition to providing periodic summative classroom assessments within a given curriculum, may also serve as an alternative to traditional standardized tests.
Rather than the relatively atheoretical collection of multiple-choice items that makes up the conventional standardized test, it might be possible to put together collections of link items relating to the variables that are part of the curricula used within a particular system. This is especially plausible because most science curricula would be based on at least a core of similar variables (such as the designing and conducting of scientific investigations), even if each curriculum had one or two variables unique to it. Periodic sampling of different progress variables could allow a system to cover a wide range of the curriculum. The development of our framework will focus on bringing together these different research threads to create a practical measurement framework that will show assessment developers how to define variables that can be tracked through instructional and larger-scale assessments, and how to create items that incorporate these variables in measurable ways that produce reliable and valid inferences about student learning.
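As a rough illustration of how a small set of mc-link items might yield a diagnostic estimate, consider the following sketch. The item keys, the choice-to-level mappings, and the median-aggregation rule are invented for illustration; they are not the actual SEPUP/BEAR scoring procedure.

```python
# Hypothetical sketch: scoring a small set of "mc-link" items in which each
# answer choice maps to a level of an underlying progress variable.
from statistics import median

# Each (invented) item maps answer choices to levels of one progress
# variable, e.g., "Evidence and Tradeoffs" (ET); higher = more sophisticated.
ET_ITEMS = {
    "et_01": {"a": 0, "b": 1, "c": 3, "d": 2},
    "et_02": {"a": 2, "b": 0, "c": 1, "d": 3},
    "et_03": {"a": 1, "b": 3, "c": 2, "d": 0},
}

def estimate_level(responses, items=ET_ITEMS):
    """Aggregate the levels implied by each chosen answer.

    A single item is too unreliable to interpret alone, so the median
    level across the set is reported as a rough diagnostic estimate."""
    levels = [items[item][choice] for item, choice in responses.items()]
    return median(levels)

student = {"et_01": "c", "et_02": "d", "et_03": "c"}  # implies levels 3, 3, 2
print(estimate_level(student))  # median of [3, 3, 2] -> 3
```

The point of the sketch is the design choice it encodes: interpretation happens at the level of the variable, not the item, which is why each distractor must be keyed to a level rather than simply marked right or wrong.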

A Framework for Assessment Use
The framework would not be complete if it did not guide users in how to achieve continuity in their balanced assessment system. While the identification of progress variables and of a way to classify student knowledge works toward this, more specific guidelines that can be used by educators are needed. Current accountability systems mark a significant change in the traditional locus of evaluation responsibility and underscore important insights about where information must be applied if student learning is to be improved. Local schools and teachers must both engage in evaluation and use the data to improve in the long term. It is at the school and classroom levels that students’ needs can best be understood and served. While this is hardly a revolutionary idea in school reform, it is unprecedented in its expectations for local assessment and for teacher capacity to engage in continuous assessment and to collect and use data to inform instruction (Herman & Golan, 1993). The traditional limits of educators’ knowledge of assessment and data usage have been fairly well documented (see, for example, Stiggins, 1988) and form a primary rationale for CAESL’s mission. Beyond calls for increasing teachers’ assessment literacy (Stiggins, 1988) and general research findings about the value of teachers’ formative assessment practices (Black & Wiliam, 1998), the landscape of teacher assessment has not yet been fully charted. Researchers such as Stiggins call for teacher literacy on issues of how to develop and interpret formal types of tests for the traditional purposes of planning instruction, continuously monitoring progress, judging performance, and grading. Adding to the CAESL framework, we plan to develop and advance a comprehensive conception of effective teacher assessment practices that integrates large-scale and classroom assessment to achieve the desired continuity of a balanced assessment system.

References
Black, P., & Wiliam, D. (1998). Inside the black box: Raising standards through classroom assessment. Phi Delta Kappan, October, 139-148.
Cohen, J. D., & Tong, F. (2001). The face controversy. Science, 293, 2405-2407.
Herman, J. L., & Golan, S. (1993). Effects of standardized testing on teaching and schools. Educational Measurement: Issues and Practices, 12(4), 20-25, 41-42.
Li, M. (2001). A framework for science achievement and its link to test items. Unpublished doctoral dissertation, Stanford University, Stanford, CA.
Li, M., & Shavelson, R. J. (2001, April). Using TIMSS items to examine the links between science achievement and assessment methods. Paper presented at the annual meeting of the American Educational Research Association, Seattle, WA.
National Research Council (1999). How people learn: Brain, mind, experience, and school. Committee on Developments in the Science of Learning. J. D. Bransford, A. L. Brown, & R. R. Cocking (Eds.). Commission on Behavioral and Social Sciences and Education. Washington, DC: National Academy Press.
National Research Council (2001). Knowing what students know: The science and design of educational assessment. Committee on the Foundations of Assessment. J. Pellegrino, N. Chudowsky, & R. Glaser (Eds.). Division of Behavioral and Social Sciences and Education. Washington, DC: National Academy Press.
Schultz, S. E. (1999). To group or not to group: Effects of group interaction on students' declarative and procedural knowledge in science. Unpublished doctoral dissertation, Stanford University, Stanford, CA.
Shavelson, R. J., & Ruiz-Primo, M. A. (1999). On the psychometrics of assessing science understanding. In J. J. Mintzes, J. H. Wandersee, & J. D. Novak (Eds.), Assessing science understanding: A human constructivist view (pp. 303-341). New York: Academic Press.
Stiggins, R. (1988, January). Revitalizing classroom assessment: The highest priority. Phi Delta Kappan, 363-368.
Wilson, M., & Sloane, K. (2000). From principles to practice: An embedded assessment system. Applied Measurement in Education, 13(2), 181-208.


What is the state of research on how to work with various underachieving subpopulations (e.g., minorities, high-poverty students, culturally different students)?
DFG/NSF Workshop Topic 3.a, Kiel, Germany, 5-8 March 2003
Walter G. Secada
Diversity in Mathematics Education/Center on Learning and Teaching
University of Wisconsin—Madison
wgsecada@facstaff.wisc.edu

Current research on working with various mathematically- and/or scientifically-underachieving populations is grappling with a variety of theoretical and empirical issues. I will present three such issues as poles of either-or choices for clarity’s sake; of course, the theoretical debates and research findings are much more nuanced than can be presented in a three-page paper.

Policy Goals: Closing the Gap or Raising the Bottom?
Concerns about population-based underachievement derive from a single observation: the existence of social-demographic, group-based differences along a range of outcomes such as achievement, learning with understanding, course taking, persistence to the point of post-secondary degree attainment, and careers. These group-based differences suggest that the problem is one of differential achievement, as opposed to simply one of under-achievement. The distinction between under- and differential achievement is critical, since one could generate four policy goals depending on how the problem is framed.

“Do no harm.” When interventions – be they new policies, new curricula, or new instructional practices – are first proposed, one of the most important criteria for their adoption is that they do no harm. If the problem is seen as one of underachievement, “doing no harm” becomes an issue of ensuring that the populations in question do not get worse along a set of predetermined outcomes. On the other hand, if the issue is one of differential achievement, “do no harm” becomes a matter of ensuring that gaps on those same outcomes are not exacerbated as a result of the intervention. The distinction between “closing the gap” and “underachievement” can be seen very clearly in the case of Sesame Street, whose evaluations found enhanced lower-SES student learning in language arts; hence, underachieving populations actually did better because of Sesame Street. However, a reanalysis of the original evaluation data revealed that the SES achievement gap actually increased; that is, middle-class children learned more from Sesame Street than did poor children. Hence, lower-SES children now enter school at a greater disadvantage relative to their peers than before Sesame Street was introduced.
If “do no harm” means helping underachieving populations grow, Sesame Street is a success; if it means not exacerbating a pre-existing gap, then Sesame Street is a failure. In the case of mathematics and/or science innovations, the distinction between closing the gap and focusing on underachieving student populations has not been fully explored. The few studies of reform curricula and instructional innovations find that lower-SES, African American, limited-English-proficient, and/or female students do better with such interventions than without. With the exception of an exploratory study involving CGI and gender, I have seen no studies that look at whether or not the gap is exacerbated through such interventions.

Designing interventions to actually close the gap versus designing them to “merely” improve achievement. I have seen no interventions, evaluations, and/or studies that are designed to focus on closing the gap in mathematics and/or science outcomes – let alone studies that seek to keep the gap closed once it has been closed. Such studies would be consistent with defining the policy problem as one of differential outcomes. Instead, interventions are designed to improve the performance of one or another subgroup relative to a similar subgroup that does not participate in the intervention – a position that is consistent with policies tied to student underperformance. Moreover, seldom, if ever, are purposeful variations from among a portfolio of interventions studied simultaneously, which would represent a more nuanced articulation of policies tied to underperformance.
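The Sesame Street case above can be made concrete with invented numbers. The figures below are purely illustrative, not the actual evaluation data; they show how an intervention can raise every group's scores and still widen a pre-existing gap.

```python
# Hypothetical before/after scores (illustrative only, not real data).
pre = {"lower_SES": 40, "middle_SES": 50}   # scores before the intervention
gain = {"lower_SES": 8, "middle_SES": 14}   # both groups improve...
post = {g: pre[g] + gain[g] for g in pre}

gap_before = pre["middle_SES"] - pre["lower_SES"]    # 50 - 40 = 10
gap_after = post["middle_SES"] - post["lower_SES"]   # 64 - 48 = 16

# Under an "underachievement" framing the intervention succeeds
# (lower-SES scores rose); under a "differential achievement" framing
# it fails (the gap grew from 10 to 16 points).
print(gap_before, gap_after)  # 10 16
```

The same data thus support opposite verdicts depending on which policy goal is adopted, which is exactly why the two framings must be distinguished before an intervention is evaluated.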

Notions of Equity
Not all forms of student diversity, even those that are socially constructed, are necessarily issues of equity. Equity involves multiple conceptions that compete with one another for dominance, that are often contradictory, and that can result in positioning someone where (s)he would not be comfortable if that position were taken to an extreme. In my own work, I have found at least eight major ideas that seem to undergird people’s discursive practices. Equity in mathematics and science can be thought of as fundamentally an issue of caring, social justice, socially enlightened self-interest, triage, opposition to excellence, democratic participation, equality based on social-demographic groupings (typically race, class, gender, and language), or power. Interestingly, these conceptually distinct ideas are often held by individuals who articulate positions falling under one or another contingent on the context in which they are operating. These notions have historical roots that find expression in other disciplinary fields. What is more, they interact with people’s conceptions of mathematics and of their students in ways that fundamentally trouble work in those domains.

Mechanisms of Inequality
In light of calls for “more scientifically based research” in education, scholarship at the nexus of underachievement, differential achievement, and/or equity will need to seek a better understanding of the mechanisms by which socially based inequality is constructed. As such, this work will need to engage, much more deeply than it has to date, in specifying the processes and/or mechanisms by which inequality is created and in more clearly tying those purported mechanisms to outcomes. Not only is the current disconnect between mechanisms and outcomes no longer viable; scholarly inquiry that moves in this direction will conduct basic research, help mathematics and science educators better understand and engineer interventions with clearly articulated predictions based on those interventions, and help us understand why an intervention worked (or failed). Work on the “mechanisms of inequality” will probably use mixed-methods research: quantitative descriptive studies showing the lay of the land, qualitative studies identifying mechanisms and showing how they function, and mixed-methods studies tying mechanisms to their outcomes and making predictions for how interventions will perturb outcomes and the processes that are tied to those outcomes. Research focused on the mechanisms of inequality will need to address issues of bias in the assessment of student outcomes and propose ways of overcoming those biases. What is more, this scholarship will need to inquire about whether students reason differently in mathematics and/or science based on their backgrounds and to show clearly how such differences in thinking are consequential for learning.
Finally, “mechanisms of inequality” can be specified at multiple levels within the system: the classroom (curriculum, instruction, assessment) and those processes that filter through the classroom (teachers’ conceptions of their students and of mathematics), the department and the school (teachers’ professional communities, school environment, school-level collective norms supporting academics and caring, tracking, placement of students), and the district (funding, policies). One can think of forces or processes that begin outside of these arenas but filter through them, for example: parental involvement in schools (filtering through both the classroom and the school), desegregation lawsuits (the school district), and housing patterns. Careful historical analyses might be used to reveal how particular current-day practices, which are accepted as normal and non-problematic, have resulted in inequality. That is, one can also think of sociohistorical mechanisms of inequality. For example, tracking in the United States began as a system of classifying students that would prepare them for their proper positions in society based on their parents and social backgrounds. Noblesse oblige, of course, meant that some lucky individuals were sponsored for better than they were entitled to based on their backgrounds. Over time, tracking was given a scientific patina through the use of testing (I.Q. tests, specifically) for making placement decisions (though, of course, judgments about student worth or educability still entered such decisions), through the construction of formal syllabi, and through the self-referential validation of beliefs about student educability. Hence, seemingly rational relationships between tracking outcomes and career aspirations replaced vague notions of people’s place in life. Achievement tests replaced I.Q. tests. And the rhetoric involving tracking shifted towards promoting it as a more efficient way of matching people to their reasonable aspirations. That I.Q. tests were themselves biased, as evidenced by how whole banks of items were thrown out when urban Blacks outperformed rural Whites, was not commented on. Nor was the fact that achievement tests were validated based on how well they correlated with I.Q. tests, and that items enter current-day achievement tests based on how well they predict overall test outcomes. One of the most pernicious outcomes of this history, moreover, has been that most current-day practices in school mathematics and science were created on the assumption of a tracked system. Hence, we have developed an entire system of closely interlocked pieces that work synergistically to reinforce one another. Detracking, as an intervention, becomes problematic because educators have not developed the technical knowledge and skills that are needed to work within such a system. One could think of similar historical analyses conducted on institutional practices that constrain curriculum development and other opportunity-to-learn processes.


Background Paper for Topic 3: What does current research tell us about the impact of various approaches in curriculum and instruction?
A Summary of a Study Currently Underway
Jo Ellen Roseman, Director of Project 2061
American Association for the Advancement of Science
jroseman@aaas.org

The Center for Curriculum Materials in Science, represented at the German-American Workshop by Drs. DeBoer, Krajcik, and Roseman, focuses on critical research and development issues related to improving curriculum materials for K-12 science. At the same time, the center will help to foster a new generation of leaders with specific expertise in the analysis of curriculum materials and in their development, evaluation, and implementation. While the center is currently in the process of defining its research agenda, the research study described below, funded by another National Science Foundation program—the Interagency Education Research Initiative—is very much in keeping with the center’s mission and goals and cuts across all of the research topics to be discussed at the Workshop. Project 2061, in collaboration with the University of Delaware and Texas A&M University, is examining the interactions of teaching practices, curriculum materials, and professional development to understand how to provide, on a large scale, interventions that can optimize student achievement in mathematics. Drawing on findings from Project 2061’s evaluation of middle-grades mathematics textbooks, this study is expected to provide empirical evidence that highly rated materials—and professional development related to them—can support effective teacher practices and improved student learning, and to provide insights into how that is accomplished. To examine these complex interactions, the study addresses three key research questions:

• What is the relationship between the fidelity of use of research-based instructional strategies—supported by highly rated curriculum materials—and student learning of specific ideas and skills?

• How do professional development and ongoing support—focused on improving teaching and learning of specific mathematics ideas and skills—build teacher knowledge and lead to more effective teaching practices?

• How can technology help to provide effective teacher professional development and ongoing support cost-effectively on a large scale?

An initial three-year longitudinal study of teachers’ use of four materials (two received high ratings, one received a moderate rating, and one received a low rating in Project 2061’s textbook study) focuses on the role of professional development in helping teachers to use the materials with a high degree of fidelity to the criteria by which the materials were rated. A two-year experimental study will then investigate how technology can help scale up the delivery of the most effective strategies to a more diverse and larger universe of teachers. (see Attachment A)

Materials and Methods Instruments are being developed to examine the key variables in the study—student learning, classroom practice, teacher thinking, professional development—in light of the following specific learning goals, which are commonly found in U.S. national and state mathematics standards:


• Use, interpret, and compare numbers in several equivalent forms such as integers, fractions, decimals, and percents. (Benchmarks, 12B, grades 6-8, #2)
• Symbolic equations can be used to summarize how the quantity of something changes over time or in response to other changes. (Benchmarks, 11C, grades 6-8, #4)
• Comparison of data from two groups should involve comparing both their middles and the spreads around them. (Benchmarks, 9D, grades 6-8, #4)

These learning goals are essential to Project 2061’s conception of science literacy (American Association for the Advancement of Science, 1993).

Student learning. Student achievement of the specified learning goals is a key indicator in the study. To appraise student learning, we are using data from state tests, customized goals-based assessments, and student interviews. While the state tests cover the topics of the learning goals, they are unlikely to probe deeply into students’ understanding of the specific learning goals. Development of goals-based assessments is being guided by the Project 2061 assessment-design procedure, which requires (a) careful attention to the intended meaning of the specific learning goals, their prerequisites, and commonly held student ideas on the topic and (b) analysis and revision of draft items/tasks in light of the Project 2061 criteria and indicators of alignment with those goals. The revised items/tasks will be independently tested and revised based on student interviews on those same ideas. For example, design of assessment items for the learning goal on symbolic equations started with the development of an assessment map that laid out the learning goal, its prerequisites, and the common learning difficulties students have with them. (see Attachment B) Items were then developed to detect which ideas (both correct and incorrect) students have. Pilot studies have shown that the set of items is sufficiently robust to enable us to monitor growth of student understanding across the middle school grades.
Classroom practice. The study is based on the premise that students are more likely to achieve the specific learning goals if instruction aimed at them employs strategies that provide students with a sense of purpose, take account of and build on students’ ideas, engage students with relevant phenomena and problems, help students develop and use ideas and skills, and promote student thinking. These general strategies were elaborated into 25 criteria and accompanying indicators, which were then used to examine the quality of instructional support provided in U.S. science and mathematics textbooks (Kesidou & Roseman, 2002; AAAS, 2000; AAAS, 2002). The complete set of criteria and indicators, along with a summary of findings, is on the Project 2061 website, http://www.project2061.org. To measure teachers’ implementation of these instructional strategies, we have adapted the instructional analysis criteria for analyzing videotaped lessons that focus on the specified learning goals. For example, the following indicators are used to examine whether the teaching conveys a unit purpose to students:
1. The teaching explicitly presents to the students a unit purpose that is aimed at the learning goals and in the form of a statement, problem, question, or representation.
2. The presented purpose is likely to be comprehensible to students (given the students’ grade level and the difficulty of the learning goals).
3. The presented purpose is likely to be interesting and/or motivating to students.
4. The teaching gives students an opportunity to think about and discuss the unit purpose.
5. The teaching is consistent with the presented purpose. If the activity is explicitly identified as a digression, this indicator is not applicable.
6. The teaching (at the end of the unit or chapter) returns to the stated purpose.
In most cases, the indicators are identical to those used for analyzing the support provided in a curriculum material.
In a limited number of cases, where the indicator refers to information a material provides to teachers (such as the indicators for the criterion that addresses whether the material alerts teachers to commonly held student ideas), there is no corresponding observable aspect of teaching. Teacher knowledge of this information can be probed by questionnaire and/or interview.
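To illustrate how indicators like these could function as a lesson-coding instrument, consider the sketch below. The field names and the yes/no/not-applicable coding are assumptions made for illustration; they are not Project 2061's actual analysis utility.

```python
# Illustrative sketch: the six "unit purpose" indicators represented as a
# simple rubric applied to a coded lesson record (field names are invented).
PURPOSE_INDICATORS = [
    "presents_unit_purpose",
    "purpose_comprehensible",
    "purpose_motivating",
    "students_discuss_purpose",
    "teaching_consistent_with_purpose",
    "returns_to_purpose_at_end",
]

def score_lesson(coded_lesson):
    """Count how many indicators a coded lesson satisfies,
    skipping indicators marked not applicable (None)."""
    applicable = [k for k in PURPOSE_INDICATORS if coded_lesson.get(k) is not None]
    met = sum(1 for k in applicable if coded_lesson[k])
    return met, len(applicable)

lesson = {
    "presents_unit_purpose": True,
    "purpose_comprehensible": True,
    "purpose_motivating": False,
    "students_discuss_purpose": True,
    "teaching_consistent_with_purpose": None,  # coded as a digression -> N/A
    "returns_to_purpose_at_end": False,
}
print(score_lesson(lesson))  # (3, 5): met 3 of 5 applicable indicators
```

Representing the not-applicable case explicitly (indicator 5's digression clause) matters for any such instrument: a skipped indicator should shrink the denominator rather than count against the lesson.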


Teacher thinking. Teachers’ implementation of these strategies will be influenced by their knowledge, skills, and habits of mind relative to the learning goals. While we can observe their skill level in the videotapes, knowing something about teacher thinking can help us interpret patterns observed in their practice. Furthermore, a well-designed instrument can more rapidly suggest where professional development needs to start and help us monitor its progress. While several instruments have been used in the past to survey teacher knowledge (Weiss, 2001; Cohen & Hill, 2001), none has the precision needed for our study. The following list of knowledge, skills, and habits of mind related to the learning goals, content alignment, effective instructional practices, and student learning will be elaborated for each learning goal in the study and used to design a teacher thinking instrument:

Knowledge, skills, and habits of mind related to student learning goals
• knowledge of the specific learning goals that are part of the study—the ideas and skills that students are expected to learn and on which they will be assessed
• belief that these ideas and skills are among the most important for mathematics literacy
• knowledge of prerequisites to the learning goals
• knowledge of preconceptions and misconceptions relevant to the learning goals

Knowledge, skills, and habits of mind related to content alignment
• knowledge of what content alignment means
• knowledge of how highly rated materials achieve content alignment
• ability to distinguish activities that align with the learning goals from those that do not
• willingness to focus classroom activities on those that align with the learning goals, their prerequisites, and/or relevant misconceptions

Knowledge, skills, and habits of mind related to effective instructional practices
• knowledge of research-based criteria for examining the likely effectiveness of teaching
• belief that effective teaching meets these criteria
• ability to distinguish teaching that meets these criteria from teaching that does not
• knowledge of how highly rated mathematics materials can support effective teaching
• inclination to increase fidelity of implementation of the criteria
• ability to use feedback to increase fidelity
• ability to use the criteria and highly rated materials to teach effectively

Knowledge, skills, and habits of mind related to student learning
• knowledge of how student learning of the goals is assessed
• ability to distinguish student work that shows evidence of achieving the learning goals from work that does not
• ability to use feedback on student learning to select more helpful materials
• ability to use feedback on student learning to increase fidelity of implementation

We welcome discussion on the design of such an instrument.

Results and Next Steps
Year 1 work focused on recruiting and organizing study participants, developing instruments and rubrics for collecting and analyzing student learning data (on number and algebra learning goals), refining criteria and developing a computer utility for analyzing classroom practice, and administering a baseline student assessment.


We are currently developing assessment items/tasks for assessing student learning of the data and statistics learning goal, analyzing videotapes of the number and algebra lessons, and designing a teacher thinking instrument. As it becomes available, we will use our study data to refine our design of professional development. We are particularly interested in participants’ insights into the design of a teacher thinking instrument that is specific to the learning goals, the content and instructional criteria, and the support provided for them in the study textbooks. We look forward to discussions to enlighten our thinking about this challenging task.

References
American Association for the Advancement of Science. (1993). Benchmarks for science literacy. New York: Oxford University Press.
American Association for the Advancement of Science. (2000). Middle grades mathematics textbooks: A benchmarks-based evaluation. Washington, DC: Author.
American Association for the Advancement of Science. (2002). Middle grades science textbooks: A benchmarks-based evaluation. Retrieved February 4, 2003, from http://www.project2061.org/tools/textbook/mgsci/INDEX.HTM
Cohen, D. K., & Hill, H. C. (2001). Learning policy: When state education reform works. New Haven: Yale University Press.
Kesidou, S., & Roseman, J. E. (2002). How well do middle school science programs measure up? Findings from Project 2061’s curriculum review. Journal of Research in Science Teaching, 39(6), 522-549.
Weiss, I. (2001). Report of the 2000 national survey of science and mathematics education. Chapel Hill, NC: Horizon Research, Inc.


German-American Science and Mathematics Education Research Conference, Kiel, Germany, March 2003
What is the research on the role and effectiveness of using Information and Communication Technology (ICT) in mathematics and science education?
Kathleen Roth, Panelist
LessonLab, Inc., Santa Monica, CA
kathyr@lessonlab.com

In particular, how can Information and Communication Technology be used to provide continuing teacher enhancement opportunities and thus professionalize teaching, e.g., through internet-based learning platforms?

Information and Communication Technology (ICT) is currently being used to support science and mathematics teacher professional development (PD) in a variety of ways. Online courses, workshops, and master’s degree programs are offered by universities, professional organizations, and private companies. Some courses, workshops, and conferences are held via teleconference so that teachers can “attend” and participate in these events in real time from a distance. Professional organizations, school systems, and less formally organized groups of teachers sponsor online teacher discussion groups focused on particular areas of interest. In addition, a variety of special web-based platforms have been designed to support particular kinds of teacher professional development (ClassroomConnect, LessonLab, Inc., Teachscape). ICT is an appealing route to improving teacher professional development. In the United States, it offers a remedy for at least two continuing problems in U.S. teacher professional development. First, most school schedules make it difficult for teachers to find time to meet together and work collaboratively in ongoing professional development activities. The internet enables teachers to access professional development support on their own schedules, while also promising the opportunity for collaboration through online interactions.
Second, the wide accessibility of the internet offers the potential for high-quality professional development opportunities to reach all teachers, including those in underserved, low-performing urban schools and those in remote, rural locations. But do existing ICT applications provide “high quality” professional development experiences? Do they enhance teachers’ learning of subject matter content and pedagogy, their teaching practice, and their students’ learning? Or do they simply provide another avenue for delivering traditional kinds of professional development that research has demonstrated to be ineffective in changing teachers’ practice and improving student learning? What role can ICT play in enhancing teacher professional development in ways that impact science and mathematics teaching practice and student learning? These are the core research questions needing exploration in this area. Before suggesting more specific research questions about the role of ICT in science/mathematics teacher professional development, I will first briefly summarize what research tells us about the kinds of activities that best support teacher learning. I will then consider the implications of this research base for defining the kinds of inquiries regarding the role of ICT in professional development that are needed. A description of our work at LessonLab is provided as an example of the use of ICT in supporting collaborative teacher inquiries into teaching practice and student learning. The paper concludes with a suggested list of more specific research questions needing investigation in this area.


Research about How Teachers Best Learn

Traditional professional development efforts seldom provide teachers with the support necessary to help them teach in ways that result in student learning of the kinds of science or mathematics understandings called for in national and state standards and other reform documents (American Association for the Advancement of Science, 1993; National Council of Teachers of Mathematics, 1991; National Research Council, 1996; National Research Council, 2000). Critics argue that traditional PD efforts fail to affect teaching because they are often short-term, disconnected from teachers' practice, and treat the development of subject matter knowledge and pedagogical strategies as separate objectives (e.g., Ball & Cohen, 1999; Elmore, 2002). There is a growing consensus that effective PD programs do the following:

• engage teachers actively in collaborative, long-term, problem-based inquiries into teaching practice and student learning;

• treat content as central and intertwined with pedagogical issues;

• enable teachers to see these issues as embedded in real classroom contexts, through observations and discussions of each other's teaching (in real time or through video), examination of their own students' work, etc.;

• focus on the specific content and curriculum teachers will be teaching;

• are guided by an articulated model of teacher learning that specifies what knowledge and skills teachers will gain, what activities will lead to this learning, and how this new knowledge and these skills will appear in their teaching practices

(Ball & Cohen, 1999; Carpenter, Fennema, Peterson, Chiang, & Loef, 1989; Cobb, Wood, Yackel, Nicholls, Wheatley, Trigatti, & Perlwitz, 1991; Cohen & Barnes, 1993; Cohen & Hill, 1998; Darling-Hammond & Sykes, 1999; Elmore, 2002; Garet, Porter, Desimone, Birman, & Yoon, 2001; Kennedy, 1998; Lewis & Tsuchida, 1997, 1998; Loucks-Horsley, Hewson, Love, & Stiles, 1998; Putnam & Borko, 2000; Shimahara, 1998; Stigler & Hiebert, 1999; Takemura & Shimizu, 1993; Whitehurst, 2002; Yoshida, 1999). These research findings are endorsed by leading educational organizations, including the Association for Supervision and Curriculum Development (ASCD), the National Staff Development Council (NSDC), and the Learning First Alliance, which have called for a major overhaul of U.S. systems for teacher professional development on the basis of these findings. Despite this growing consensus, there is little empirical research demonstrating the impact of such PD efforts on teaching practice and, more significantly, on student learning. Many studies rely primarily on teachers' self-reports about their learning from PD experiences or on descriptions of changes in teachers' abilities to analyze teaching practice (e.g., Anderson & Bird, 1995; Crockett, 2002; Sherin & van Es, 2002; Smith & Featherstone, 1995; Ogura, 2000; Zannoni & Santagata, 2002). Only rarely do studies document a connection from teacher learning, to impact on teaching practice, to improvement in students' learning. Kennedy (1998) found only four studies of science PD programs for teachers that provided any data about student science learning; none of these programs integrated issues of specific content and pedagogy (Lawrenz & McCreath, 1988; Marek & Methven, 1991; Otto & Schuck, 1983; Rubin & Norman, 1992).
In mathematics, she identified eight such studies and found that the three programs that intertwined teachers' learning of content and pedagogy had the greatest influence on student learning (Carpenter et al., 1989; Cobb et al., 1991; Wood & Sellers, 1996). This limited research base means we must step back from simply advocating for better PD and begin by identifying a framework for investigating the effects of PD. I offer two major issues to consider as we begin the discussions. First, research should focus on programs that are guided by a theory of teacher learning and that incorporate the features of professional development hypothesized to be effective, including but not restricted to the following: engaging teachers actively in collaborative, long-term, problem-based inquiries; treating content learning as central and intertwined with pedagogical issues; and allowing teachers to investigate teaching and learning issues in real classroom contexts, focused on the specific curriculum used in their own classrooms.


Second, research is needed that independently tests the effects of PD, rather than relying only on teacher self-reports. Specifically, we need studies that trace impact from teacher learning about content and pedagogy, through changes in teaching practice, to improvements in student learning. Such programs of research are needed to advance our knowledge of teachers as adult learners and to test rigorously which types of teacher-learning experiences change teaching practices and improve student learning. A broad vision of this path of PD, and the links that need to be investigated, are represented in the diagram below:

What Kinds of ICT are Worth Investigating in Teacher Professional Contexts?

Despite the growing consensus on what constitutes quality PD (and the limited research supporting that consensus), most ICT applications used for science and mathematics teacher PD do not support teachers in engaging in long-term inquiries. It is therefore not worth the time and expense to examine the impact of such programs on teachers' practice and student learning. For example, online courses and workshops that do not give teachers opportunities to engage in classroom- and practice-based inquiries over a long period of time are unlikely to help teachers transform their practice in ways that will affect student learning. As a research community, we should look for ways that ICT can enhance PD programs that embody the most promising forms of PD--teacher collaboration in long-term, classroom-based inquiries that support teacher learning of both content and pedagogy. And we should examine uses of ICT that make such PD activities available to larger numbers of teachers.

An Example: Our Research and Development Efforts at LessonLab, Inc.

At LessonLab, Inc. (www.lessonlab.com), we are investigating the potential of an interactive, web-based video software platform designed specifically to support the kinds of professional learning activities described above while also overcoming some of the barriers that prevent such high-quality PD programs from reaching many teachers. The LessonLab web platform organizes teacher inquiries and learning around a digital library of video-based case studies of classroom practice that is available to teachers online. The software also enables teachers to post their own lessons and supplementary materials for online use in teacher inquiry groups. The videos are full-length, unedited lessons that are time-linked to supporting materials (such as student work, textbook pages, worksheets, teacher lesson plans, teacher commentary on the lesson, commentaries by content experts, video interviews with the teacher and/or students, assessments, links to standards documents, etc.). Because the videos are time-linked to these resources, a teacher can view a resource at the particular moment in the video where it is relevant (or, alternatively, can jump immediately to the video episode that is relevant to a resource he or she is using). These materials provide rich contexts for teacher inquiries into teaching practice and student learning. The video technology in the LessonLab software allows teachers to interact with the video, not just passively watch it. Teachers can easily navigate the video, make notes and video clips in a personal notebook, respond to tasks posed by a PD leader (for example, marking with time codes all the places in the lesson where students appear to be confused), and talk with other teachers in forums where they can make video-linked comments.
In addition, the LessonLab platform supports discussions of practice that are evidence-based and linked to expertise from outside any given teacher inquiry group. Thus, teachers have a rich database of information to explore in support of evidence-based discussions about the case. Instead of talking in general terms about a lesson, teachers have tools that make it easy to revisit particular episodes in the lesson and to share links to those video moments with others. Instead of relying only on expertise within the inquiry group, teachers have access to a knowledge base that includes information that may not exist at the local school site (e.g., content expertise related to standards and curriculum, concrete examples of a variety of instructional strategies, etc.). LessonLab technology is designed to work simultaneously as a storage place for professional knowledge and as an environment that supports teachers' learning from this knowledge base. The LessonLab platform can support many kinds of PD approaches. For example, it can support teacher inquiries into teaching practice based on the Japanese lesson study model (Lewis & Tsuchida, 1998; Shimahara, 1998; Stigler & Hiebert, 1999; Yoshida, 1999). It enables teachers to work together in
collaborative groups over time to learn how to look at teaching and learning more deeply through collaborative inquiries, focusing on how best to support student learning of particular subject matter content. As a second example, it can be used by "teaching coaches" to combine on-line and face-to-face sessions focused on "slowing down" the teaching cycle (plan, teach, and reflect) for collaborative study of lesson videos. A final example is pre-service coursework: using the LessonLab platform, a college instructor can assign students to observe lessons on-line and then return to the classroom for discussions that are linked to the lesson videos they have studied. Instructors can assign tasks, which can be completed online, so that discussions of teaching are rooted in images of practice, not just words. Whatever the PD design--lesson study, coaching, or some other approach--the LessonLab approach to PD uses video technology to help teachers learn to analyze practice, to support the development of a shared language for analyzing teaching practice, and to enable multiple, frequent, and repeatable opportunities to observe other teachers' teaching. A 70-30 mix of on-line to face-to-face interaction enables more teachers to participate in such inquiries, and at times that fit into busy schedules. Teachers can independently watch and analyze videos, complete online tasks, and communicate with other teachers through forums. Face-to-face interactions then focus on discussion and analysis of these shared observations; thus, they can immediately address substantive issues rather than preparatory activities such as watching the video together. In addition to simplifying scheduling for teachers through an on-line solution, we believe that greater use of video in PD offers many learning advantages over real-time observation.
Teaching is a complex process, and it is impossible to detect in real time all of the important classroom events and interactions. Video enables teachers to slow the process down, to look multiple times at a given interaction, to share observations with other teachers, and through this process to develop a shared language for describing teaching practices. For example, certain words are used widely to describe science and mathematics teaching--terms such as "inquiry," "problem solving," or "constructivist." But what counts as an actual instance of these ideas in teaching practice? Until teachers begin examining images of actual teaching and linking these images with theoretical constructs, we will not develop a shared understanding of these constructs and their implications for teaching. One line of research at LessonLab focuses on the role of ICT in making outside expertise available to science teacher inquiry/lesson study groups. We are interested in investigating the usefulness of a set of tasks designed to enhance teachers' ability to "see" science teaching in new ways: from a science content perspective, from a student learning perspective, from a nature of science perspective, and from a pedagogical content knowledge perspective. As a first step in this project, we are currently developing and trying out a set of tasks that can be used to guide teachers' analyses of science teaching, based on conceptual frameworks developed for the TIMSS-R Science Video Study and the Project 2061 Curriculum Materials Analysis Study. In our proposed research project, we will compare the learning of two groups of teachers in a content-focused, lesson study PD program: one group will have access to the conceptual frameworks and tasks to guide their analyses of the video cases, while the other group will examine the standards and research literature to construct their own frameworks for analyzing lessons.
Does the conceptual framework embedded in the tasks help teachers develop a shared language for analyzing teaching? Does it deepen teachers’ ability to analyze teaching? And most important, does this learning lead to changes in teachers’ practice and their students’ learning?

Research Questions

I suggest that the major research question concerning the role of ICT in teacher professional development is: What role can ICT play in enhancing science and mathematics teacher professional development in ways that engage teachers in long-term inquiries into practice that lead to improvements in their knowledge of content and pedagogy, their teaching practice, and their students' learning? Under this broad umbrella, there are many more specific issues needing examination. A few of particular interest to me are:

• How can ICT support teachers in developing a common language and style of discourse for learning from practice? What resources and expertise are needed to make teacher inquiry groups effective? How can ICT help deepen the dialogue about teaching and ground the dialogue in a
shared knowledge base?

• How can ICT enhance teacher learning about science/mathematics content in the context of practice-based inquiry?

• Can ICT help teachers integrate content and pedagogical knowledge into the hybrid knowledge often described in the literature as "pedagogical content knowledge," which includes, among other features, knowledge of how to make content comprehensible to learners at different developmental stages?

• To what extent can practice-based, collaborative teacher inquiries take place on-line versus face-to-face? What is the optimal balance between face-to-face and online interactions?

• What role can video play in supporting teacher learning and teacher change? What are the advantages and pitfalls of video-based teacher inquiries?

• What are the components of an effective teacher learning video case? What do teachers learn from different kinds of cases (e.g., cases of "exemplary" teaching, TIMSS videos of typical teaching in different countries, teachers' own teaching, etc.)?

• What kinds of support do teachers need to learn how to analyze video in ways that will transform how they look at and plan their own teaching? Can ICT applications make more accessible to teachers the kinds of conceptual frameworks that support learning "to see" teaching practice in new and more analytical ways, challenging them to look beneath the surface and to look more closely at student thinking and learning?

References

American Association for the Advancement of Science, Project 2061 (1993). Benchmarks for science literacy. New York: Oxford University Press.
Anderson, L. M., & Bird, T. (1995). How three prospective teachers construed three cases of teaching. Teaching and Teacher Education, 11(5), 479-499.
Ball, D. L., & Cohen, D. K. (1999). Developing practice, developing practitioners: Toward a practice-based theory of professional education. In G. Sykes & L. Darling-Hammond (Eds.), Teaching as the learning profession: Handbook of policy and practice (pp. 3-32). San Francisco: Jossey-Bass.
Carpenter, T. P., Fennema, E., Peterson, P. L., Chiang, C. P., & Loef, M. (1989). Using knowledge of children's mathematics thinking in classroom teaching: An experimental study. American Educational Research Journal, 26(1), 499-531.
Clark, D., & Hollingsworth, H. (2000). Seeing is understanding. Journal of Staff Development (National Staff Development Council), Fall.
Cobb, P., Wood, T., Yackel, E., Nicholls, J., Wheatley, G., Trigatti, B., & Perlwitz, M. (1991). Assessment of a problem-centered second-grade mathematics project. Journal for Research in Mathematics Education, 22, 13-29.
Cohen, D. K., & Barnes, C. A. (1993). Pedagogy and policy. In D. K. Cohen, M. W. McLaughlin, & J. E. Talbert (Eds.), Teaching for understanding: Challenges for policy and practice (pp. 207-239). San Francisco: Jossey-Bass.
Cohen, D. K., & Hill, H. C. (1998). Instructional policy and classroom performance: The mathematics reform in California. Ann Arbor: University of Michigan.
Crockett, M. D. (2002). Inquiry as professional development: Creating dilemmas through teachers' work. Teaching and Teacher Education, 18, 609-624.
Darling-Hammond, L., & Sykes, G. (Eds.). (1999). Teaching as the learning profession: Handbook of policy and practice. San Francisco: Jossey-Bass.
Elmore, R. F. (2002). Bridging the gap between standards and achievement: The imperative for professional development in education.
Washington, D.C.: Albert Shanker Institute.
Garet, M. S., Porter, A. C., Desimone, L., Birman, B. F., & Yoon, K. S. (2001). What makes professional development effective? Results from a national sample of teachers. American Educational Research Journal, 38, 915-945.
Kennedy, M. (1998). Form and substance in inservice teacher education. Madison, WI: National Institute for Science Education, University of Wisconsin-Madison.
Lawrenz, F., & McCreath, H. (1988). Integrating quantitative and qualitative evaluation methods to compare two inservice training programs. Journal of Research in Science Teaching, 25, 397-407.
Lewis, C. C., & Tsuchida, I. (1997). Planned educational change in Japan: The shift to student-centered elementary science. Journal of Educational Policy, 12, 313-331.
Lewis, C. C., & Tsuchida, I. (1998). A lesson is like a swiftly flowing river. American Educator, 22(4), 12-17, 50-52.
Loucks-Horsley, S., Hewson, P. W., Love, N., & Stiles, K. E. (1998). Designing professional development for teachers of science and mathematics. Thousand Oaks, CA: Corwin Press.
Marek, E. A., & Methven, S. B. (1991). Effects of the learning cycle upon student and classroom teacher performance. Journal of Research in Science Teaching, 28(1), 41-53.
National Council of Teachers of Mathematics (1991). Professional standards for teaching mathematics. Reston, VA: National Council of Teachers of Mathematics.
National Research Council (1996). National science education standards. Washington, D.C.: National Academy Press.
National Research Council (2000). How people learn: Brain, mind, experience, and school. Washington, D.C.: National Academy Press.
Ogura, Y. (2000). Japanese expert teachers' science teaching evaluation framework and its application to teacher education. Paper presented at the annual meeting of the National Association for Research in Science Teaching, New Orleans.
Otto, P. B., & Schuck, R. F. (1983). The effect of a teacher questioning strategy training program on teaching behavior, student achievement, and retention.
Journal of Research in Science Teaching, 20(6), 521-528.
Putnam, R. T., & Borko, H. (2000). What do new views of knowledge and thinking have to say about research on teacher learning? Educational Researcher, 29(1), 4-15.
Rubin, R. L., & Norman, J. T. (1992). Systematic modeling versus the learning cycle: Comparative effects on integrated science process skill achievement. Journal of Research in Science Teaching, 20, 715-727.
Sherin, M. G., & Han, S. Y. (2002). Teacher learning in the context of a video club. Paper presented at the annual meeting of the American Educational Research Association, New Orleans, April 1-5.
Sherin, M. G., & van Es, E. A. (2002). Using video to support teachers' ability to interpret classroom interactions. In Society for Information Technology and Teacher Education: Vol. 4. Information Technology and Teacher Education Annual (pp. 2532-2536). Norfolk, VA: Association for the Advancement of Computing in Education.
Shimahara, N. K. (1998). The Japanese model of professional development: Teaching as craft. Teaching and Teacher Education, 14, 451-462.
Smith, S. P., & Featherstone, H. (1995). He knows there's six 100s in 26? An investigation into what it means to 'do mathematics' in a teacher group. NCRTL Craft Paper 95-3.
Stigler, J. W., & Hiebert, J. (1999). The teaching gap: Best ideas from the world's teachers for improving education in the classroom. New York: Free Press.
Takemura, S., & Shimizu, K. (1993). Goals and strategies for science teaching as perceived by elementary teachers in Japan and the United States. Peabody Journal of Education, 68(4), 23-33.
Whitehurst, G. J. (2002). Research on teacher preparation and professional development. Paper presented at the White House Conference on Preparing Tomorrow's Teachers, Washington, D.C., March 5.
Wood, T., & Sellers, P. (1996). Assessment of a problem-centered mathematics program: Third grade. Journal for Research in Mathematics Education, 27, 337-353.
Yoshida, M. (1999). Lesson study: An ethnographic investigation of school-based teacher development in Japan. Doctoral dissertation, University of Chicago.
Zannoni, C., & Santagata, R. (2002, October). The use of LessonLab software for teacher professional development. Paper presented at the XII AIRIPA National Conference, Udine, Italy.


Conceptions of Mathematical Competence Edward A. Silver University of Michigan easilver@umich.edu My assigned topic for this brief paper was, "What models are there to explain the development and the optimization of student competencies and what measurement techniques are there for diagnosing and increasing these competencies? What research is there to inform the development of student competencies in mathematics and how can these be assessed?" Although it is not possible for me to span in a few pages the wide expanse suggested by these questions, I offer a few comments as a place in which to begin our conversation.

What are competencies?

The root word competence is typically used in English to indicate fitness or adequacy. The word carries a similar meaning when applied to the learning of school subjects. To say that someone is competent is to claim that the individual can perform in a manner that meets or exceeds some standard. Although competence has sometimes been tinged with a sense of minimalism in academic contexts, most uses of the word admit several kinds of complexity. For example, competence is not necessarily a unitary phenomenon. Competence is multidimensional: one can speak of a person being competent with a foreign language with respect to several types of linguistic performance -- such as writing, reading, or speaking. Moreover, as the foreign language example also suggests, competence is not an all-or-nothing phenomenon. Competence is layered: one might be more competent in reading a foreign language than in engaging in conversation in the language. Also, competence may vary by degree across individuals, or even within an individual across performance domains. For example, the average German student is likely to be more competent than the average American student in reading, writing, and speaking at least one non-native language. Moreover, some German and some American students are exceptionally competent in French, whereas most students have lower degrees of competence.

What is mathematical competence?

Attempts to describe desired competence in mathematics are as old as the teaching of mathematics itself. Over the years one can find many terms -- such as mathematical literacy, numeracy, and expertise -- used to characterize mathematical competence. Studies intended to inform our understanding of mathematical competence have involved a wide range of theories, designs, and groups of subjects. For example, studies have included a variety of subject populations: children with specific learning disabilities, groups of students who are generally successful or generally unsuccessful in mathematics, individuals or groups with demonstrated expertise or advanced training in mathematics, and persons who use mathematical skills or concepts in the performance of non-academic tasks. Indeed, the three forms of complexity mentioned above have spawned many different models of competence and have plagued attempts to gain consensus about any version. The core issues may be summarized as follows: With respect to what standard(s) shall overall mathematical competence be assessed? Can standards for mathematical competence be developed that reflect the various domains of mathematics (e.g., algebra, geometry) and the variety of forms of mathematical performance (e.g., performing routine procedures, displaying conceptual understanding, posing or solving complex problems)? How shall different degrees of competence be established, and with respect to what standard shall each be assessed? Although no single model of mathematical competence has adequately addressed all the issues and gained widespread acceptance, one recently suggested model has considerable promise. We have adopted this model in our work in the NSF-funded Center for Proficiency in Teaching Mathematics (see http://www.cptm.us). In a report of a committee formed by the (U.S.)
National Research Council (NRC) to synthesize research on mathematics learning over the first eight years of schooling, Kilpatrick, Swafford and Findell (2001)
proposed the notion of mathematical proficiency to capture a comprehensive, composite model of mathematical competence. According to this NRC report, mathematical proficiency has five components:

• Conceptual understanding refers to an integrated and functional grasp of mathematical ideas and encompasses comprehension of mathematical concepts, operations, and relations.

• Procedural fluency refers to knowledge of procedures, knowledge of when and how to use them appropriately, and skill in performing them flexibly, accurately, and efficiently.

• Strategic competence refers to the ability to formulate, represent, and solve mathematical problems.

• Adaptive reasoning refers to the capacity to think logically about the relationships among concepts and situations; it encompasses the processes of reflection, explanation, and justification.

• Productive disposition refers to an inclination to see mathematics as sensible, useful, and worthwhile, coupled with a belief that steady effort in learning mathematics pays off and a tendency to see oneself as an effective learner and doer of mathematics.

This model of mathematical proficiency clearly conveys the multidimensional character of competence noted above. These five strands provide a framework for discussing the knowledge, skills, abilities, and beliefs that constitute mathematical proficiency. Moreover, the model also suggests some aspects of the layering that also characterizes competence. According to Kilpatrick, Swafford and Findell (2001), the five strands are interwoven and interdependent, analogous to the intertwined strands of a rope, in the development of proficiency in mathematics. Some other aspects of layering are less clear in this model and remain to be specified. For example, degrees of proficiency are not specified, nor is it clear the extent to which proficiency might be associated with specific sub-domains of mathematics. Much remains to be done, but this model of mathematical proficiency offers both a useful summary of much of what is known about the development of mathematical competence and a productive starting point for the next phase of work.

What remains to be done?

In the remaining space I will mention only two of the many challenges that remain for those who wish to characterize and assess mathematical competencies. The first I call the measurement challenge; the second I think of as the domains of definition and applicability challenge.

The measurement challenge. In general, the development of measurement approaches and tools has significantly lagged behind the development of theoretical complexity and nuance. For example, the (U.S.) National Assessment of Educational Progress reports student performance with respect to three degrees of competence, called achievement levels: Basic, Proficient, and Advanced (see http://nces.ed.gov/nationsreportcard/mathematics/achieve.asp). But these achievement levels lack definitions that reflect the coherence and integrity of mathematics as a subject domain. Instead, the levels have been derived through a process that combines public and professional judgment with post hoc empirical analyses of actual student performance patterns. But refinement of the reporting of results to reflect careful definitions of competence is not the only challenge we face. Much more work needs to be done to measure competence along each of the strands of proficiency identified above. Of the five strands of mathematical proficiency identified by Kilpatrick, Swafford and Findell (2001), conceptual understanding and procedural fluency have received the most attention from test developers. Most tests available in the United States attempt to assess these dimensions of proficiency, and they often report associated subscores with names that evoke these strands, though the definition used by the test developer may be quite different from the one offered in the NRC report. Some attention has been paid to the strands of strategic competence and adaptive reasoning, mostly through the use of so-called performance assessment tasks.
The term performance assessment (PA) typically refers to a class of assessments based on observation and judgment (Airasian, 1991). That is, in PA an assessor usually observes a performance or the product of a performance and judges its quality. PA has long been used in classrooms by teachers to determine what has been learned and by whom. Performance assessment has also been employed in the external assessment of student learning outcomes. PA received significant attention from educators and assessment specialists during
the latter part of the 1980s and throughout the 1990s (Messick, 1995; Reckase, 1993). A growing dissatisfaction with selected-response testing (e.g., true/false questions and multiple-choice items) and an awareness of advances in research on cognition and instruction also spawned interest in PA. Constructed-response tasks (e.g., tasks calling for brief or extended explanations or justifications) became increasingly popular as a means of capturing much of what is valued instructionally in a form that could be included in an external assessment of student achievement. These tasks provide a basis for assessing the strands of strategic competence and adaptive reasoning. Might productive disposition also be examined in this way? Despite a variety of technical, feasibility, and interpretation issues that have plagued attempts to employ PA on a large scale (e.g., Baxter & Glaser, 1998; Brennan & Johnson, 1995; Gao, Shavelson & Baxter, 1994; Silver et al., 2000), many educators and assessment experts remain enthusiastic about the potential of PA to address many limitations of other forms of assessment. In particular, advances in the cognitive sciences and technology, along with the increasing availability of sophisticated technological tools in educational settings, may provide new opportunities to resolve many of these issues. If so, it may be possible to develop an assessment system for mathematical proficiency that is comprehensive, coherent, and continuous -- key characteristics of balanced assessment systems identified by another committee of the (U.S.) National Research Council and described in detail in Pellegrino, Chudowsky, and Glaser (2001).

The domains of definition and applicability challenge. Most attempts to characterize mathematical competence focus on matters of academic attainment in the mathematics taught in school.
Although this domain of definition is reasonable, it is possible to argue that it is too narrow or that it focuses on the wrong aspects of mathematical competence. Instead one might argue that it is the domain of application that is critical. In such a view, one would consider not only the uses of mathematics within academic work -- in fields highly dependent on mathematics or in mathematics itself -- but also the uses and applications of mathematical ideas, skills, and approaches in a wider range of settings. It is this latter view that is espoused by advocates for a view of mathematical competence that has been called quantitative literacy (Steen, 2001). Quantitative literacy refers to the capacity of individuals to deal with the quantitative aspects of life. This term, and the underlying conception, has a long history that includes consumer-oriented mathematics curriculum materials of the 1930s and thereafter, the notion of a numerate citizenry identified in a 1982 British government report (Cockcroft, 1982), the notion of quantitative literacy used in the international life skills survey of adults, and the notion of mathematical literacy proposed by the OECD Programme for International Student Assessment (PISA). Although each of these versions differs from the others in some ways, they all attempt to capture in some nontrivial way the importance of examining the uses of one's mathematical knowledge in context. In this view, competence may be described in terms of actions (e.g., deciding, judging, interpreting) as well as in terms of domains of knowledge (e.g., space, number, change) and specific mathematical knowledge and skills (e.g., dividing fractions, representing data, characterizing variation).
Although the notion of quantitative literacy has not been defined in a manner that allows either careful description or useful prescription, the examples offered by advocates for quantitative literacy challenge us to examine our conceptions of mathematical competence and our methods for measuring it. To the extent that our view of mathematical competence encompasses knowledge and skills that are useful and useable, we should be able to incorporate this alternative conception in ways that advance our thinking and enhance our development.

Note: Paper prepared for the German-American Workshop on Quality Development in Mathematics and Science Education, Kiel, Germany, 5-8 March 2003.

References

Airasian, P. W. (1991). Classroom assessment. New York: McGraw-Hill.

Baxter, G. P., & Glaser, R. (1998). Investigating the cognitive complexity of science assessments. Educational Measurement: Issues and Practice, 17(3), 37-45.


Brennan, R. L., & Johnson, E. G. (1995). Generalizability of performance assessments. Educational Measurement: Issues and Practice, 14(4), 9-12, 27.

Cockcroft, W. H. (1982). Mathematics counts. London: Her Majesty's Stationery Office.

Gao, X., Shavelson, R. J., & Baxter, G. P. (1994). Generalizability of large-scale performance assessments in science: Promises and problems. Applied Measurement in Education, 7(4), 323-342.

Glaser, R., & Silver, E. A. (1994). Assessment, testing, and instruction: Retrospect and prospect. In L. Darling-Hammond (Ed.), Review of research in education (Vol. 20, pp. 393-419). Washington, DC: American Educational Research Association.

Kilpatrick, J., Swafford, J., & Findell, B. (Eds.). (2001). Adding it up: Helping children learn mathematics. Washington, DC: National Academy Press.

Messick, S. (Ed.). (1995). Values and standards in performance assessment: Issues, findings, and viewpoints. Educational Measurement: Issues and Practice, 14(4).

Pellegrino, J., Chudowsky, N., & Glaser, R. (2001). Knowing what students know: The science and design of educational assessment. Washington, DC: National Academy Press.

Reckase, M. (Ed.). (1993). Special issue: Performance assessment. Journal of Educational Measurement, 30(3).

Silver, E. A., Alacaci, C., & Stylianou, D. (2000). Students' performance on extended constructed-response tasks. In E. A. Silver & P. A. Kenney (Eds.), Results from the Seventh Mathematics Assessment of the National Assessment of Educational Progress (pp. 301-341). Reston, VA: National Council of Teachers of Mathematics.

Steen, L. A. (Ed.). (2001). Mathematics and democracy: The case for quantitative literacy. Princeton, NJ: Woodrow Wilson National Fellowship Foundation.


Research on technology and the teaching and learning of mathematics

M. Kathleen Heid, The Pennsylvania State University, mkh2@psu.edu

As we think about the role and effectiveness of technology in mathematics education, we first need to acknowledge the precipitous rate at which instructional technologies have changed over the past fifteen years. During that time, research on technology and the teaching and learning of mathematics has evolved at a rate similar to that of the technology itself. In this paper, I will suggest some areas of research on the impact of technology in the teaching and learning of mathematics in which the mathematics education community has made considerable progress, and areas in which further research seems promising. [For lengthier descriptions of research on technology and the teaching and learning of mathematics, we are compiling two volumes of research syntheses and descriptions to appear later this year. Also, see Heid, 1997.]

Mathematics education research has contributed to our understanding of the impact and role of technology in mathematics teaching and learning in several important arenas. Hypotheses and theoretical explanations have been developed, empirical research has been conducted both to test and to generate hypotheses, and technology-intensive curricula and tools have been developed and tested. Students' learning of mathematics has been studied in technological environments and in the course of technology-intensive curricula, student-teacher interactions in technology-rich classrooms have been studied, and so have teacher knowledge and beliefs (e.g., Zbiek, 1995). Technology can affect student learning, teacher-student interaction, student access to mathematical ideas, and the nature of mathematics teaching. I will focus this paper on an area of deep concern to mathematics educators worldwide -- the impact of technology on students' mathematical understandings.
As an example of the nature of empirical research in this area, I will discuss some of the accumulating results on the impact of technology on the understanding of number, algebra, and functions.

Empirical research: The example of number operations, functions, and algebra

Research related to the use of technology in the teaching of numbers and operations provides several clear messages. First, the preponderance of research on the topic suggests that use of technological tools for computation is not, in general, harmful to students' development of conceptual or procedural understanding (Damarin et al., 1988; Hedron, 1985; Hembree & Dessart, 1992; Henderson et al., 1985). Moreover, their use has a generally positive effect on problem solving with or without technology. Second, intelligent use of technology requires a flexible understanding of number and number operations. If they are to learn to use technology to investigate mathematical ideas, students will need to have a deeper understanding of mathematics (Ruthven & Chaplin, 1997).

The centrality of algebra and functions in school mathematics, coupled with regular individual classroom access to multi-representational algebra and symbolic manipulation tools, suggests a center-stage role for research on technology and the teaching and learning of algebra. Multi-representational technology raises questions about what students ought to learn about algebra and algebraic reasoning, and in what order. Graphing tools suggest that graphs and functions can be an organizing feature of school mathematics instruction. Questions that research might inform (but not answer) include: What algebra still needs to be taught when accessible technology can perform most of the symbolic manipulation on which students spend a vast majority of their instruction-related time? In what new ways can courses and topics within courses be sequenced? What new topics can be taught? The following sections discuss empirical research that ought to inform those decisions.

Effects on development of traditional algebraic symbolic manipulation and word problem skills

School algebra in the U.S.
has long been defined as techniques for solving equations, producing equivalent algebraic expressions, and applying these techniques to a fixed set of word problem types.
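The kind of symbolic manipulation at issue here -- solving equations and producing equivalent algebraic expressions -- can be sketched with a computer algebra system. SymPy is used below purely as an illustrative stand-in for the classroom CAS tools the studies examined:

```python
from sympy import symbols, solve, factor, expand

x = symbols('x')

# Solving an equation -- a staple school-algebra technique a CAS automates.
roots = solve(x**2 - 5*x + 6, x)    # -> [2, 3]

# Producing equivalent algebraic expressions.
factored = factor(x**2 - 5*x + 6)   # -> (x - 2)*(x - 3)
expanded = expand((x - 2)*(x - 3))  # -> x**2 - 5*x + 6
```

Tools such as the TI-92 or Mathematica perform these same manipulations; the research question is what by-hand facility students still need when such tools are at hand.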


Algebra tutors have been used successfully to improve student performance on standard and novel algebra word problems. In a large number of studies (Johnson et al., 1996; Davis et al., 1994; Dubinsky et al., 1995; Leinbach et al., 1991; Dick & Patton, 1994), technology has been used to focus the course on the development of concepts and on understanding of the applications of calculus. When a CAS is used as a tool throughout calculus so that the course can focus on concepts and applications, a common result is that computer users do significantly better on conceptual tasks with little or no loss in by-hand symbolic manipulation skills (Heid, 1984, 1988; Palmiter, 1986; Schrock, 1989; Rosenberg, 1989; Park, 1993). When the CAS has been used to supplement calculus instruction -- as a lab, on homework assignments, or in a smaller portion of the course -- and the research was designed to measure both conceptual understanding and procedural skills, CAS-using groups have generally (exception: Padgett, 1994) done at least as well as their non-technology counterparts (Campbell, 1994; Cooley, 1995; Hawker, 1986; Judson, 1988; Melin-Conejeros, 1992). CAS-oriented curricula may work differently for higher-ability students (Crocker, 1991). Qualitative differences may explain the nature of the quantitative differences in studies like these. CAS-using groups have been observed using a variety of problem-solving strategies (Parks, 1995), approaching subsequent courses more conceptually (Roddick, 1997), and exhibiting greater facility with multiple representations of functions (Hart, 1991; Porzio, 1994). It is clear that the role of symbolic manipulation in school mathematics must be rethought as students learn to live and work in a technological world. Usiskin's (1995) analysis of the relative roles in the UCSMP curriculum of particular paper-and-pencil and technology-based algorithms provides insight into the intricacies of the issue.
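As one concrete instance of the symbolic work whose role is being rethought, consider the polynomial division that exposes an oblique asymptote. The sketch below uses SymPy, again only as a stand-in for the CAS tools examined in these studies:

```python
from sympy import symbols, div, limit, oo

x = symbols('x')

# Divide the numerator of (x**2 + 1)/x by its denominator:
# (x**2 + 1)/x = x + 1/x.
quotient, remainder = div(x**2 + 1, x, x)    # quotient: x, remainder: 1

# The remainder term 1/x vanishes as x grows, so the line y = x is an
# oblique asymptote of the original function.
gap = limit((x**2 + 1)/x - quotient, x, oo)  # -> 0
```

Whether a student performs this division by hand or delegates it to a machine, the mathematical payoff lies in thinking to divide and then reasoning from the quotient.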
A study by Yerushalmy (1997) identified ways in which students built new understandings on old (in this case, developing an understanding of asymptotes through a by-hand symbolic manipulation, polynomial division). It seems that the important issue is that the student thought to perform a polynomial division and to reason from it. While he could have used technology like the TI-92 to get an equivalent result, the question is whether a student who had learned symbolic manipulation in the context of a CAS would have similarly powerful (or even more powerful) understandings. It seems that technology actually requires that students pay more attention to notation. As students use an increasing range of sophisticated technologies, their ability to adapt to and understand different notations will become more important.

Sequencing

Prior to widespread access to computational tools, one of the most popular sequences within any given mathematical topic was for students to learn procedures first and only afterward learn to apply them or acquire an understanding of the underlying concepts. With computers that could perform most of these routines, a concepts-and-applications-before-skills approach to algebra and to calculus was tested in a range of studies. The studies suggested that symbolic manipulation skills may be learned more quickly after students have developed conceptual understanding through the use of computing tools with the facility of a computer algebra system (Heid, 1988; Heid et al., 1988; Judson, 1990; Palmiter, 1991; Heid, 1992). Similar results were obtained in introductory algebra courses. My colleagues and I have conducted a series of studies of the mathematical understandings of students who completed Computer-Intensive Algebra (Fey et al., 1995) courses.
Computer-Intensive Algebra (CIA) focuses on the development of algebraic concepts such as function, family, equivalence, and system, and gives students constant access to computer algebra systems without directly teaching traditional by-hand algebraic manipulation skills. Students in the CIA classes consistently outperformed their counterparts in traditional algebra classes on measures of concepts, applications, and problem solving, without significantly diminished skills (Heid, 1992; Heid et al., 1988; Matras, 1988; Oosterum, 1990; Sheets, 1993). These studies showed that beginning high school algebra students who studied the CIA curriculum for all but the last six to eight weeks of the school year, and who studied traditional skills for a maximum of six to eight weeks, had a much deeper conceptual understanding of fundamental algebraic ideas (such as function, variable, and mathematical modeling) and scored as well as their traditional-course counterparts on final examinations of by-hand algebraic manipulation skills. O'Callaghan (1998) verified these results in a study of college students enrolled in a beginning algebra course using the CIA text.

Development of conceptual understanding


Research provides evidence in a variety of arenas that technology can be used to develop conceptual understanding, and, as previously discussed, the understanding developed in these studies is typically accompanied by evidence that there was no concomitant detrimental effect on manipulative skills. For example, in a CAS-aided functions course (Hillel et al., 1992), when emphasis was redirected from practice of algebraic manipulation to reading graphs, working between and among representations, and interpreting solutions to equations, students in experimental classes performed better than their counterparts in traditional courses on conceptual questions, and at least as well on technical questions. Not every use of computer algebra systems resulted in advantages for the CAS group, however (Thomas & Rickhuss, 1992).

Development of understandings of specific algebraic concepts

New understandings of variables and functions and tools to support particular concept development

Particular types and uses of computing tools have the potential for enhancing student understanding in particular areas of algebra. The effects of graphing technology on conceptual understandings of graphs and functions are well documented (Kieran, 1993; Leinhart, Zaslavsky, & Stein, 1990; Dunham, 1991; Dunham & Dick, 1994) and include: higher levels of graphical understanding (Browning, 1989); better work in interpreting graphs (Beckmann, 1989; Dugdale, 1986/87; Oosterum, 1990); better work in relating graphs to their symbolic representations (Dugdale, 1989; Rich, 1990; Ruthven, 1990; Shoaf-Grubbs, 1994); deeper understanding of functions (Beckmann, 1989; Rich, 1990); better understanding of connections among a variety of representations (Beckmann, 1989; Browning, 1989; Hart, 1992); and enhanced ability to think about function graphs without software (Yerushalmy, 1997).
In spite of the inherently algebraic structure of spreadsheets, there have been relatively few studies of the effects of spreadsheet use on students' algebraic understandings. Rojano and Sutherland (Rojano, 1996; Sutherland & Rojano, 1993) found that hands-on spreadsheet work aimed at enabling students to express the generality of symbolic relationships improved students' understanding of function and appeared to help students move from thinking about the specific to the general. Other researchers (Capponi & Balacheff, 1989), however, have observed: "...despite the fact that use of a spreadsheet requires manipulation of formulas, there is not a mere transfer of the pupil's algebraic knowledge into the spreadsheet context." A number of studies have targeted the use of special software to enhance students' understandings of function: Yerushalmy's (1997) "Visual Mathematics" program to broaden the scope of traditional actions and objects, the UNIX shells and scripts used by Dubinsky and his colleagues (Ayers et al., 1988) to induce reflective abstraction, and Confrey's Function Probe to support "a 'covariation approach' to functions, which gives equal weight to the relationships in the columns and the rows" (Noss, Healy, & Hoyles, 1997, p. 206). Hazzan and Goldenberg (1997) studied undergraduate mathematics students using prepared dynamic geometry constructions and noted that the students recognized several new manifestations of features of function: the nature of the independent variable, constant function, undefined function, and continuous function. Just as technology can be used to enhance students' understandings of function, it can also be useful in developing students' understandings of variable (e.g., Sutherland, 1991).

Families of functions

At times, computing software can provide students with representations that clarify their thinking about families of functions.
A study conducted by Ruthven (1990) found that students with calculator experience outperformed their peers both in recognizing the global shape of a graph and in picking out local features that could be linked to its algebraic form. Students in a study by O'Keefe (1992) recognized and communicated about characteristics of function families when those families were investigated using Dynagraph, while the same students were often unable to do so with symbolic or graphic representations.

Summary

In summary, myriad studies have examined the effects of technology on the teaching and learning of mathematics. I have pointed to examples of studies in the areas of number, algebra, and functions, but similar bodies of research (of different sizes) have investigated aspects of technology and the teaching and learning of elementary and secondary geometry, of calculus, of mathematical modeling, and of probability and statistics. The reports of many of these studies suggest a difference in performance or understanding but do not explain or investigate the nature of those differences.


The contribution of concepts and theories

More recent work has provided concepts and theories that may be useful in explaining the ways that technology impacts mathematics learning. Some of these theories were created specifically to explain aspects of particular technologies, some explain more general aspects of technology, and some apply to learning more generally. They include theories about differences in the ways that technologies are used (e.g., Pea's (1985, 1987) identification of cognitive technologies as amplifiers and reorganizers); theories about learning that may be particularly impacted by technologies (e.g., see Heid's (2003) description of learning theories applied to uses of CAS in learning mathematics); the concept of registers (Duval, 1995) to explain the role of multiple representations; mediation (e.g., Noss & Hoyles, 1996) to explain the ways technology may change the task; instrumentation (Guin & Trouche) to explain the relationship between the technological tool and the user; technique (Lagrange) to explain the complexity of procedural understanding; milieu (Brousseau, 1997) to explain the interactions between the system of the learner and the system of the software; and the figure/drawing distinction (Parzysz, 1988) that characterizes work in dynamic geometry environments.

General areas of needed research

The existing body of research on technology and the teaching and learning of mathematics gives a fairly robust accounting of different trials of technology use in mathematics instruction. A small selection of those studies has been identified in this paper, mostly for illustrative purposes. For the most part, the studies have not closely examined the ways in which individuals work in technology-intensive mathematics environments. Instead, the studies have identified promising areas for further research.
Series of studies need to be dedicated to characterizing the mathematical understandings that arise from particular uses of technology. How do students work with different types of technology in mathematics classrooms? How are their mathematical understandings impacted by this work? How does technology constrain or mediate that learning? Some of those studies should be targeted on specific aspects of mathematics learning. How do students acquire understanding of mathematical procedures in CAS-rich environments? How does the CAS influence the nature of students' understandings? Other studies need to address the decisions teachers make as they choose to use technology in their mathematics instruction. What influences mathematics teachers to choose technological approaches? How do teachers view the role of technology in their mathematics teaching? How do their beliefs about mathematics and about technology influence how they bring technology into their instruction? Finally, researchers should test developing theories and hypotheses that may relate to technology-intensive mathematics learning. They should consider the usefulness of current theories and, as needed, develop revisions to those theories to improve their usefulness.

References

Ayers, T., Davis, G., Dubinsky, E., & Lewin, P. (1988). Computer experiences in learning composition of functions. Journal for Research in Mathematics Education, 19(3), 236-259.

Beckmann, C. E. (1989). Effects of computer graphics use on student understanding of calculus concepts. Ph.D. dissertation, Western Michigan University.

Boers-VanOosterum, M. A. M. (1990). Understanding of variables and their uses acquired by students in traditional and computer-intensive algebra. Ph.D. dissertation, University of Maryland.

Brousseau, G. (1997). Theory of didactical situations in mathematics (N. Balacheff, M. Cooper, R. Sutherland, & V. Warfield, Trans. and Eds.). Dordrecht: Kluwer.

Browning, C. A. (1989). Characterizing levels of understanding of functions and their graphs. Ph.D. dissertation, Ohio State University.

Campbell, C. P. (1994). A study of the effects of using a computer algebra system in college algebra. Ph.D. dissertation, West Virginia University.

Capponi, N., & Balacheff, N. (1989). Tableur et calcul algébrique [Spreadsheet and algebra]. Educational Studies in Mathematics, 20, 179-210.


Cooley, L. A. (1995). Evaluating the effects on conceptual understanding and achievement of enhancing an introductory calculus course with a computer algebra system. Ph.D. dissertation, New York University.

Crocker, D. A. (1991). A qualitative study of interactions, concept development and problem-solving in a calculus class immersed in the computer algebra system Mathematica. Ph.D. dissertation, The Ohio State University.

Damarin, S. K., Dziak, N. J., Stull, L., & Whiteman, F. (1988). Computer instruction in estimation: Improvement in high school mathematics students. School Science and Mathematics, 88(6), 488-492.

Davis, B., Porta, H., & Uhl, J. (1994). Calculus and Mathematica. Reading, MA: Addison-Wesley.

Dick, T. P., & Patton, C. M. (1994). Calculus of a single variable. Boston: PWS.

Dubinsky, E., Schwingendorf, K., & Mathews, D. (1995). Calculus, concepts, and computers. New York: McGraw-Hill.

Dugdale, S. (1986/87). Pathfinder: A microcomputer experience in interpreting graphs. Journal of Educational Technology Systems, 15(3), 259-280.

Dugdale, S. (1989). Building a qualitative perspective before formalizing procedures: Graphical representations as a foundation for trigonometric identities. In C. A. Maher, G. Goldin, & R. B. Davis (Eds.), Proceedings of the Eleventh Annual Meeting of the North American Chapter of the International Group for the Psychology of Mathematics Education (PME-NA) (pp. 249-255). New Brunswick, NJ: Rutgers University.

Dunham, P. H. (1991). Teaching with graphing calculators: A survey of research on graphing technology. In L. Lum (Ed.), Proceedings of the Fourth International Conference on Technology in Collegiate Mathematics (pp. 89-101). Portland, OR: Addison-Wesley.

Dunham, P. H., & Dick, T. P. (1994). Research on graphing calculators. The Mathematics Teacher, 87(6), 440-445.

Duval, R. (1995). Geometrical pictures: Kinds of representation and specific processings. In R. Sutherland & J. Mason (Eds.),
Exploiting mental imagery with computers in mathematics education (pp. 142-147). Berlin: Springer-Verlag.

Fey, J. T., Heid, M. K., Good, R. A., Sheets, C., Blume, G. W., & Zbiek, R. M. (1995). Concepts in algebra: A technological approach. Dedham, MA: Janson Publications.

Guin & Trouche

Hart, D. K. (1991). Building concept images: Supercalculators and students' use of multiple representations in calculus. Ph.D. dissertation, Oregon State University.

Hawker, C. M. (1986). The effects of replacing some manual skills with computer algebra manipulations on student performance in business calculus. Ph.D. dissertation, Illinois State University.

Hazzan, O., & Goldenberg, E. P. (1997). Students' understanding of the notion of function in dynamic geometry environments. International Journal of Computers for Mathematical Learning, 1, 263-291.

Hedron, R. (1985). The hand-held calculator at the intermediate level. Educational Studies in Mathematics, 16, 163-179.

Heid, M. K. (1984). An exploratory study to examine the effects of resequencing skills and concepts in an applied calculus curriculum through the use of the microcomputer. Ph.D. dissertation, The University of Maryland.

Heid, M. K. (1988). Resequencing skills and concepts in applied calculus using the computer as a tool. Journal for Research in Mathematics Education, 19(1), 3-25.

Heid, M. K. (1992). Final report: Computer-intensive curriculum for secondary school algebra (NSF Project No. MDR 8751499). National Science Foundation.


Heid, M. K. (1997). The technological revolution and the reform of school mathematics. American Journal of Education, 106(1), 5-61.

Heid, M. K. (2003). Theories that inform the use of CAS in the teaching and learning of mathematics. In J. Fey et al. (Eds.), CAS in mathematics education. Reston, VA: National Council of Teachers of Mathematics.

Heid, M. K., Sheets, C., Matras, M. A., & Menasian, J. (1988). Classroom and computer lab interaction in a computer-intensive algebra curriculum. Paper presented at the annual meeting of the American Educational Research Association, New Orleans, LA.

Hembree, R., & Dessart, D. J. (1986). Effects of hand-held calculators in precollege mathematics education: A meta-analysis. Journal for Research in Mathematics Education, 17, 83-99.

Hembree, R., & Dessart, D. J. (1992). Research on calculators in mathematics education. In J. T. Fey & C. R. Hirsch (Eds.), Calculators in mathematics education (pp. 23-32). Reston, VA: National Council of Teachers of Mathematics.

Henderson, R. W., Landesman, E. M., & Kachuck, I. (1985). Computer-video instruction in mathematics: Field test of an interactive approach. Journal for Research in Mathematics Education, 16(3), 207-224.

Hillel, J., Lee, L., Laborde, C., & Linchevski, L. (1992). Basic functions through the lens of computer algebra systems. Journal of Mathematical Behavior, 11, 119-158.

Johnson, P., Heid, M. K., Edwards, B., & Bohidar, N. (1996). Seventeen major calculus reform projects: A guide to choosing a reformed calculus curriculum. Unpublished manuscript.

Judson, P. T. (1988). Effects of modified sequencing of skills and applications in introductory calculus. Ph.D. dissertation, The University of Texas at Austin.

Kieran, C. (1993). Functions, graphing, and technology: Integrating research on learning and instruction. In T. A. Romberg, E. Fennema, & T. P. Carpenter (Eds.), Integrating research on the graphical representations of functions (pp. 189-237).
Hillsdale, NJ: Lawrence Erlbaum Associates.

Lagrange, J.-B.

Leinbach, C. L., Hundhausen, J. R., Ostebee, A. M., Senechal, L. J., & Small, D. B. (Eds.). (1991). The laboratory approach to teaching calculus. Washington, DC: Mathematical Association of America.

Leinhart, G., Zaslavsky, O., & Stein, M. K. (1990). Functions, graphs, and graphing: Tasks, learning, and teaching. Review of Educational Research, 60(1), 1-64.

Matras, M. A. (1988). The effects of curricula on students' ability to analyze and solve problems in algebra. Ph.D. dissertation, University of Maryland.

Melin-Conejeros, J. (1992). The effect of using a computer algebra system in a mathematics laboratory on the achievement and attitude of calculus students. Ph.D. dissertation, The University of Iowa.

Noss, R., Healy, L., & Hoyles, C. (1997). The construction of mathematical meanings: Connecting the visual with the symbolic. Educational Studies in Mathematics, 33, 203-233.

Noss, R., & Hoyles, C. (1996). Windows on mathematical meanings. Dordrecht, The Netherlands: Kluwer.

O'Callaghan, B. R. (1998). Computer-Intensive Algebra and students' conceptual knowledge of functions. Journal for Research in Mathematics Education, 29(1), 21-40.

O'Keefe, J. J. (1992). Using dynamic representation to enhance students' understanding of the concept of function. Ph.D. dissertation, Boston College.

Padgett, E. E. (1994). Calculus I with a laboratory component. Ph.D. dissertation, Baylor University.

Palmiter, J. (1991). Effects of computer algebra systems on concept and skill acquisition in calculus. Journal for Research in Mathematics Education, 22(2), 151-156.


Palmiter, J. R. (1986). The impact of a computer algebra system on college calculus. Ph.D. dissertation, The Ohio State University.

Park, K. (1993). A comparative study of the traditional calculus course vs. the Calculus & Mathematica course. Ph.D. dissertation, University of Illinois at Urbana-Champaign.

Parks, V. W. (1995). Impact of a laboratory approach supported by Mathematica on the conceptualization of limit in a first calculus course. Ph.D. dissertation, Georgia State University.

Parzysz, B. (1988). Knowing vs. seeing: Problems of the plane representation of space geometry figures. Educational Studies in Mathematics, 19(1), 90-111.

Pea, R. D. (1985). Beyond amplification: Using the computer to reorganize mental functioning. Educational Psychologist, 20, 167-182.

Pea, R. D. (1987). Cognitive technologies for mathematics education. In A. H. Schoenfeld (Ed.), Cognitive science and mathematics education. Hillsdale, NJ: Lawrence Erlbaum Associates.

Porzio, D. T. (1994). The effects of differing technological approaches to calculus on students' use and understanding of multiple representations when solving problems. Ph.D. dissertation, The Ohio State University.

Rich, B. (1990). The effect of the use of graphing calculators on the learning of function concepts in precalculus mathematics. Ph.D. dissertation, University of Iowa.

Roddick, C. D. (1997). A comparison study of students from two calculus sequences on their achievement in calculus-dependent courses. Ph.D. dissertation, The Ohio State University.

Rojano, T. (1996). Developing algebraic aspects of problem solving within a spreadsheet environment. In N. Bednarz, C. Kieran, & L. Lee (Eds.), Approaches to algebra: Perspectives for research and teaching (pp. 137-145). Dordrecht: Kluwer Academic Publishers.

Rosenberg, J. P. (1989). A constructivist approach to computer-assisted mathematics instruction. Ph.D. dissertation, Stanford University.

Ruthven, K. (1990). The influence of graphic calculator use on translation from graphic to symbolic forms. Educational Studies in Mathematics, 21, 431-450.

Ruthven, K., & Chaplin, D. (1997). The calculator as a cognitive tool: Upper-primary pupils tackling a realistic number problem. International Journal of Computers for Mathematical Learning, 2, 93-124.

Schrock, C. S. (1989). Calculus and computing: An exploratory study to examine the effectiveness of using a computer algebra system to develop increased conceptual understanding in a first-semester calculus course. Ph.D. dissertation, Kansas State University.

Sheets, C. (1993). Effects of computer learning and problem-solving tools on the development of secondary school students' understanding of mathematical functions. Ph.D. dissertation, University of Maryland.

Shoaf-Grubbs, M. M. (1994). The effect of the graphing calculator on female students' spatial visualization skills and level of understanding in elementary graphing and algebra concepts. CBMS Issues in Mathematics Education, 4, 169-194.

Sutherland, R., & Rojano, T. (1993). A spreadsheet approach to solving algebra problems. Journal of Mathematical Behavior, 12(4), 353-83.

Thomas, P. G., & Rickhuss, M. G. (1992). An experiment in the use of computer algebra in the classroom. Education & Computing, 8, 255-263.

Usiskin, Z. (1995).

Yerushalmy, M. (1997). Reaching the unreachable: Technology and the semantics of asymptotes. International Journal of Computers for Mathematical Learning, 2, 1-25.



Zbiek, R. M. (1995). Her math, their math: An inservice teacher's growing understanding of mathematics and technology and her secondary students' algebra experience. In D. T. Owens, M. K. Reed, & G. M. Millsaps (Eds.), Seventeenth Annual Meeting of the International Group for the Psychology of Mathematics Education, 2 (pp. 214-220). Columbus, OH: ERIC Clearinghouse for Science, Mathematics, and Environmental Education.


