EVALUATION AND ASSESSMENT TECHNIQUES: PART I INSTRUCTIONS: Complete and/or answer the following questions or tasks after each instruction. Why should assessments, learning objectives, and instructional strategies be aligned? What if the components of a course are misaligned? What is the difference between formative and summative assessment? Provide examples. What is the difference between assessment and grading?
What is assessment? Where do we want students to be at the end of a course or a program? And how will we know if they get there? Those two questions are at the heart of assessment. Although there is a lot of buzz about assessment these days, assessment itself is nothing new. If you’ve ever given an exam, led a discussion, or assigned a project – and used what you discovered about student learning to refine your teaching – you’ve engaged in assessment. Assessment is simply the process of collecting information about student learning and performance to improve education. At Carnegie Mellon, we believe that for assessment to be meaningful (not bean counting or teaching to the test!) it must be done thoughtfully and systematically. We also believe it should be driven by faculty so that the information gathered:
- Reflects the goals and values of particular disciplines
- Helps instructors refine their teaching practices and grow as educators
- Helps departments and programs refine their curriculum to prepare students for an evolving workplace
Assessment Basics Assessment is a broad and rapidly growing field, with a strong theoretical and empirical base. However, you don’t have to be an assessment expert to employ sound practices to guide your teaching. Here we present the basic concepts you need to know to become more systematic in your assessment planning and implementation:
- Why should assessments, learning objectives, and instructional strategies be aligned?
- What is the difference between formative and summative assessment?
- What is the difference between assessment and grading?
- Glossary of terms
Why should assessments, learning objectives, and instructional strategies be aligned? Assessments should reveal how well students have learned what we want them to learn while instruction ensures that they learn it. For this to occur, assessments, learning objectives, and instructional strategies need to be closely aligned so that they reinforce one another. To ensure that these three components of your course are aligned, ask yourself the following questions:
- Learning objectives: What do I want students to know how to do when they leave this course?
- Assessments: What kinds of tasks will reveal whether students have achieved the learning objectives I have identified?
- Instructional strategies: What kinds of activities in and out of class will reinforce my learning objectives and prepare students for assessments?
What if the components of a course are misaligned? If assessments are misaligned with learning objectives or instructional strategies, it can undermine both student motivation and learning. Consider these two scenarios: Your objective is for students to learn to apply analytical skills, but your assessment measures only factual recall. Consequently, students hone their analytical skills and are frustrated that the exam does not measure what they learned. Your assessment measures students’ ability to compare and critique the arguments of different authors, but your instructional strategies focus entirely on summarizing the arguments of different authors. Consequently, students do not learn or practice the skills of comparison and evaluation that will be assessed.
What do well-aligned assessments look like? This table presents examples of the kinds of activities that can be used to assess different types of learning objectives (adapted from the revised Bloom’s Taxonomy). Each entry pairs a type of learning objective with examples of appropriate assessments.

Recall, Recognize, Identify: objective test items such as fill-in-the-blank, matching, labeling, or multiple-choice questions that require students to recall or recognize terms, facts, and concepts.

Interpret, Exemplify, Classify, Summarize, Infer, Compare, Explain: activities such as papers, exams, problem sets, class discussions, or concept maps that require students to summarize readings, films, or speeches; compare and contrast two or more theories, events, or processes; classify or categorize cases, elements, or events using established criteria; paraphrase documents or speeches; or find or identify examples or illustrations of a concept or principle.

Apply, Execute, Implement: activities such as problem sets, performances, labs, prototyping, or simulations that require students to use procedures to solve or complete familiar or unfamiliar tasks, or to determine which procedure(s) are most appropriate for a given task.

Analyze, Differentiate, Organize, Attribute: activities such as case studies, critiques, labs, papers, projects, debates, or concept maps that require students to discriminate or select relevant and irrelevant parts, determine how elements function together, or determine bias, values, or underlying intent in presented material.

Evaluate, Check, Critique, Assess: activities such as journals, diaries, critiques, problem sets, product reviews, or studies that require students to test, monitor, judge, or critique readings, performances, or products against established criteria or standards.

Create, Generate, Plan, Produce, Design: activities such as research projects, musical compositions, performances, essays, business plans, website designs, or set designs that require students to make, build, design, or generate something new.
This table does not list all possible examples of appropriate assessments. You can develop and use other assessments – just make sure that they align with your learning objectives and instructional strategies!
What is the difference between formative and summative assessment?
Formative assessment The goal of formative assessment is to monitor student learning to provide ongoing feedback that can be used by instructors to improve their teaching and by students to improve their learning. More specifically, formative assessments:
- help students identify their strengths and weaknesses and target areas that need work
- help faculty recognize where students are struggling and address problems immediately
Formative assessments are generally low stakes, which means that they have low or no point value. Examples of formative assessments include asking students to:
- draw a concept map in class to represent their understanding of a topic
- submit one or two sentences identifying the main point of a lecture
- turn in a research proposal for early feedback
Summative assessment The goal of summative assessment is to evaluate student learning at the end of an instructional unit by comparing it against some standard or benchmark. Summative assessments are often high stakes, which means that they have a high point value. Examples of summative assessments include:
- a midterm exam
- a final project
- a paper
- a senior recital
Information from summative assessments can be used formatively when students or faculty use it to guide their efforts and activities in subsequent courses.
What is the difference between assessment and grading? Assessment and grading are not the same. Generally, the goal of grading is to evaluate individual students’ learning and performance. Although grades are sometimes treated as a proxy for student learning, they are not always a reliable measure. Moreover, they may incorporate criteria – such as attendance, participation, and effort – that are not direct measures of learning. The goal of assessment is to improve student learning. Although grading can play a role in assessment, assessment also involves many ungraded measures of student learning (such as concept maps and CATs). Moreover, assessment goes beyond grading by systematically examining patterns of student learning across courses and programs and using this information to improve educational practices.
PART II: HOW TO ASSESS STUDENTS’ PRIOR KNOWLEDGE RESOURCE LINK: http://www.cmu.edu/teaching/assessment/priorknowledge/index.html What are Performance-Based Prior Knowledge Assessments? Provide examples. Give your own examples of appropriately written questions for self-assessment. What are CATs? List and provide a brief description of the CATs suggested on the site. Give 3 examples of CATs you’ve used in class.
How to Assess Students’ Prior Knowledge In order to gauge how much students have learned, it is not enough to assess their knowledge and skills at the end of the course or program. We also need to find out what they know coming in so that we can identify more specifically the knowledge and skills they have gained during the course or program. You can choose from a variety of methods to assess your students’ prior knowledge and skills. Some methods (e.g., portfolios, pre-tests, auditions) are direct measures of students’ capabilities entering a course or program. Other methods (e.g., students’ self-reports, inventories of prior courses or experiences) are indirect measures. Here are links to a few methods that instructors can employ to gauge students’ prior knowledge.
- Performance-based prior knowledge assessments
- Prior knowledge self-assessments
- Classroom assessment techniques (CATs)
- Concept maps
Performance-Based Prior Knowledge Assessments The most reliable way to assess students’ prior knowledge is to assign a task (e.g., quiz, paper) that gauges their relevant background knowledge. These assessments are for diagnostic purposes only, and they should not be graded. They can help you gain an overview of students’ preparedness, identify areas of weakness, and adjust the pace of the course. To create a performance-based prior knowledge assessment, you should begin by identifying the background knowledge and skills that students will need to succeed in your class. Your assessment can include tasks or questions that test students’ capabilities in these areas.
Prior Knowledge Self-Assessments Prior knowledge self-assessments ask students to reflect and comment on their level of knowledge and skill across a range of items. Questions can focus on knowledge, skills, or experiences that:
- you assume students have acquired and are prerequisites to your course
- you believe are valuable but not essential to the course
- you plan to address in the course
The feedback from this assessment can help you calibrate your course appropriately or direct students to supplemental materials that can help them address weaknesses in their existing skills or knowledge. The advantage of a self-assessment is that it is relatively easy to construct and score. The potential disadvantage of this method is that students may not be able to accurately assess their abilities. However, accuracy improves when the response options clearly differentiate both types and levels of knowledge.
Writing Appropriate Questions for Self-Assessments Writing appropriate questions for prior knowledge self-assessments can seem daunting at first. Identifying specific terms, concepts, or applications of skills to ask about will help you write effective questions.
Examples of questions with possible closed responses:

How familiar are you with "Karnaugh maps"?
1. I have never heard of them or I have heard of them but don't know what they are.
2. I have some idea what they are, but don't know when or how to use them.
3. I have a clear idea what they are, but haven't used them.
4. I can explain what they are and what they do, and I have used them.

Have you designed or built a digital logic circuit?
1. I have neither designed nor built one.
2. I have designed one, but not built one.
3. I have built one, but not designed one.
4. I have both designed and built one.

How familiar are you with a "t-test"?
1. I have never heard of it.
2. I have heard of it, but don't know what it is.
3. I have some idea of what it is, but it's not very clear.
4. I know what it is and could explain what it's used for.
5. I know what it is and when to use it, and I could use it to analyze data.

How familiar are you with Photoshop?
1. I have never used it or I tried using it but couldn't do anything with it.
2. I can do simple edits using preset options to manipulate single images (e.g., standard color, orientation and size manipulations).
3. I can manipulate multiple images using preset editing features to create desired effects.
4. I can easily use precision editing tools to manipulate multiple images for professional-quality output.

For each of the following Shakespearean plays (Hamlet, King Lear, Henry IV, Othello), place a check mark in each cell that describes your experience:
- Have read it
- Have seen a live performance
- Have seen a TV or movie production
- Have written a college-level paper on it
Using Classroom Assessment Techniques Classroom Assessment Techniques (CATs) are a set of specific activities that instructors can use to quickly gauge students’ comprehension. They are generally used to assess students’ understanding of material in the current course, but with minor modifications they can also be used to gauge students’ knowledge coming into a course or program. CATs are meant to provide immediate feedback about the entire class’s level of understanding, not individual students’. The instructor can use this feedback to inform instruction, such as speeding up or slowing the pace of a lecture or explicitly addressing areas of confusion.
Asking Appropriate Questions in CATs Examples of appropriate questions you can ask in the CAT format:
- How familiar are students with important names, events, and places in history that they will need to know as background in order to understand the lectures and readings (e.g., in anthropology, literature, political science)?
- How are students applying knowledge and skills learned in this class to their own lives (e.g., psychology, sociology)?
- To what extent are students aware of the steps they go through in solving problems, and how well can they explain their problem-solving steps (e.g., mathematics, physics, chemistry, engineering)?
- How and how well are students using a learning approach that is new to them (e.g., cooperative groups) to master the concepts and principles in this course?
Using Specific Types of CATs Minute Paper Pose one to two questions in which students identify the most significant things they have learned from a given lecture, discussion, or assignment. Give students one to two minutes to write a response on an index card or paper. Collect their responses and look them over quickly. Their answers can help you to determine if they are successfully identifying what you view as most important.
Muddiest Point This is similar to the Minute Paper but focuses on areas of confusion. Ask your students, “What was the muddiest point in… (today’s lecture, the reading, the homework)?” Give them one to two minutes to write and collect their responses.
Problem Recognition Tasks Identify a set of problems that can be solved most effectively by only one of a few methods that you are teaching in the class. Ask students to identify by name which methods best fit which problems without actually solving the problems. This task works best when only one method can be used for each problem.
Documented Problem Solutions Choose one to three problems and ask students to write down all of the steps they would take in solving them with an explanation of each step. Consider using this method as an assessment of problem-solving skills at the beginning of the course or as a regular part of the assigned homework.
Directed Paraphrasing Select an important theory, concept, or argument that students have studied in some depth and identify a real audience to whom your students should be able to explain this material in their own words (e.g., a grants review board, a city council member, a vice president making a related decision). Provide guidelines about the length and purpose of the paraphrased explanation.
Applications Cards Identify a concept or principle your students are studying and ask students to come up with one to three applications of the principle from everyday experience, current news events, or their knowledge of particular organizations or systems discussed in the course.
Student-Generated Test Questions A week or two prior to an exam, begin to write general guidelines about the kinds of questions you plan to ask on the exam. Share those guidelines with your students and
ask them to write and answer one to two questions like those they expect to see on the exam.
Classroom Opinion Polls When you believe that your students may have pre-existing opinions about course-related issues, construct a very short two- to four-item questionnaire to help uncover students’ opinions.
Creating and Implementing CATs You can create your own CATs to meet the specific needs of your course and students. Below are some strategies that you can use to do this.
- Identify a specific “assessable” question where the students’ responses will influence your teaching and provide feedback to aid their learning.
- Complete the assessment task yourself (or ask a colleague to do it) to be sure that it is doable in the time you will allot for it.
- Plan how you will analyze students’ responses, such as grouping them into the categories “good understanding,” “some misunderstanding,” or “significant misunderstanding.”
- After using a CAT, communicate the results to the students so that they know you learned from the assessment and so that they can identify specific difficulties of their own.
PART III: HOW TO ASSESS STUDENTS’ LEARNING AND PERFORMANCE RESOURCE LINK: http://www.cmu.edu/teaching/assessment/assesslearning/index.html List a few tips on how to create assignments. List a few tips on how to create exams. Compare and contrast the two previous methods. What is the difference between concept maps and concept tests? Provide an example of each. Provide tips on how to assess student group work. Explain what has worked for you. What benefits are there in using rubrics? Provide an example of a rubric you have used.
How to Assess Students’ Learning and Performance Learning takes place in students’ heads where it is invisible to others. This means that learning must be assessed through performance: what students can do with their learning. Assessing students’ performance can involve assessments that are formal or informal, high- or low-stakes, anonymous or public, individual or collective. Here we provide suggestions and strategies for assessing student learning and performance as well as ways to clarify your expectations and performance criteria to students.
- Creating assignments
- Creating exams
- Using classroom assessment techniques
- Using concept maps
- Using concept tests
- Assessing group work
- Creating and using rubrics
Creating Assignments Here are some general suggestions and questions to consider when creating assignments. There are also many other resources in print and on the web that provide examples of interesting, discipline-specific assignment ideas.
Consider your learning objectives. What do you want students to learn in your course? What could they do that would show you that they have learned it? To determine assignments that truly serve your course objectives, it is useful to write out your objectives in this form: I want my students to be able to ____. Use active, measurable verbs as you complete that sentence (e.g., compare theories, discuss ramifications, recommend strategies), and your learning objectives will point you towards suitable assignments.
Design assignments that are interesting and challenging. This is the fun side of assignment design. Consider how to focus students’ thinking in ways that are creative, challenging, and motivating. Think beyond the conventional assignment type! For example, one American historian requires students to write diary entries for a hypothetical Nebraska farmwoman in the 1890s. By specifying that students’ diary entries must demonstrate the breadth of their historical knowledge (e.g., gender, economics, technology, diet, family structure), the instructor gets students to exercise their imaginations while also accomplishing the learning objectives of the course (Walvoord & Anderson, 1989, p. 25).
Double-check alignment. After creating your assignments, go back to your learning objectives and make sure there is still a good match between what you want students to learn and what you are asking them to do. If you find a mismatch, you will need to adjust either the assignments or the learning objectives. For instance, if your goal is for students to be able to analyze and evaluate texts, but your assignments only ask them to summarize texts, you would need to add an analytical and evaluative dimension to some assignments or rethink your learning objectives.
Name assignments accurately. Students can be misled by assignments that are named inappropriately. For example, if you want students to analyze a product’s strengths and weaknesses but you call the assignment a “product description,” students may focus all their energies on the
descriptive, not the critical, elements of the task. Thus, it is important to ensure that the titles of your assignments communicate their intention accurately to students.
Consider sequencing. Think about how to order your assignments so that they build skills in a logical sequence. Ideally, assignments that require the most synthesis of skills and knowledge should come later in the semester, preceded by smaller assignments that build these skills incrementally. For example, if an instructor’s final assignment is a research project that requires students to evaluate a technological solution to an environmental problem, earlier assignments should reinforce component skills, including the ability to identify and discuss key environmental issues, apply evaluative criteria, and find appropriate research sources.
Think about scheduling. Consider your intended assignments in relation to the academic calendar and decide how they can be reasonably spaced throughout the semester, taking into account holidays and key campus events. Consider how long it will take students to complete all parts of the assignment (e.g., planning, library research, reading, coordinating groups, writing, integrating the contributions of team members, developing a presentation), and be sure to allow sufficient time between assignments.
Check feasibility. Is the workload you have in mind reasonable for your students? Is the grading burden manageable for you? Sometimes there are ways to reduce workload (whether for you or for students) without compromising learning objectives. For example, if a primary objective in assigning a project is for students to identify an interesting engineering problem and do some preliminary research on it, it might be reasonable to require students to submit a project proposal and annotated bibliography rather than a fully developed report. If your learning objectives are clear, you will see where corners can be cut without sacrificing educational quality.
Articulate the task description clearly. If an assignment is vague, students may interpret it any number of ways – and not necessarily how you intended. Thus, it is critical to clearly and unambiguously identify the task students are to do (e.g., design a website to help high school students locate environmental resources, create an annotated bibliography of readings on apartheid). It can be helpful to differentiate the central task (what students are supposed to produce) from other advice and information you provide in your assignment description.
Establish clear performance criteria. Different instructors apply different criteria when grading student work, so it’s important that you clearly articulate to students what your criteria are. To do so, think about the best student work you have seen on similar tasks and try to identify the specific characteristics that made it excellent, such as clarity of thought, originality, logical organization, or use of a wide range of sources. Then identify the characteristics
of the worst student work you have seen, such as shaky evidence, weak organizational structure, or lack of focus. Identifying these characteristics can help you consciously articulate the criteria you already apply. It is important to communicate these criteria to students, whether in your assignment description or as a separate rubric or scoring guide. Clearly articulated performance criteria can prevent unnecessary confusion about your expectations while also setting a high standard for students to meet.
Specify the intended audience. Students make assumptions about the audience they are addressing in papers and presentations, which influences how they pitch their message. For example, students may assume that, since the instructor is their primary audience, they do not need to define discipline-specific terms or concepts. These assumptions may not match the instructor’s expectations. Thus, it is important to specify the intended audience in your assignments (see http://wac.colostate.edu/intro/pop10e.cfm), e.g., undergraduates with no biology background, a potential funder who does not know engineering.
Specify the purpose of the assignment. If students are unclear about the goals or purpose of the assignment, they may make unnecessary mistakes. For example, if students believe an assignment is focused on summarizing research as opposed to evaluating it, they may seriously miscalculate the task and put their energies in the wrong place. The same is true if they think the goal of an economics problem set is to find the correct answer, rather than demonstrate a clear chain of economic reasoning. Consequently, it is important to make your objectives for the assignment clear to students.
Specify the parameters. If you have specific parameters in mind for the assignment (e.g., length, size, formatting, citation conventions) you should be sure to specify them in your assignment description. Otherwise, students may misapply conventions and formats they learned in other courses that are not appropriate for yours.
A Checklist for Designing Assignments Here is a set of questions you can ask yourself when creating an assignment. Have I...
- Provided a written description of the assignment (in the syllabus or in a separate document)?
- Specified the purpose of the assignment?
- Indicated the intended audience?
- Articulated the instructions in precise and unambiguous language?
- Provided information about the appropriate format and presentation (e.g., page length, typed, cover sheet, bibliography)?
- Indicated special instructions, such as a particular citation style or headings?
- Specified the due date and the consequences for missing it?
- Articulated performance criteria clearly?
- Indicated the assignment’s point value or percentage of the course grade?
- Provided students (where appropriate) with models or samples?
Creating Exams How can you design fair, yet challenging, exams that accurately gauge student learning? Here are some general guidelines. There are also many resources, in print and on the web, that offer strategies for designing particular kinds of exams, such as multiple-choice exams.
Choose appropriate item types for your objectives. Should you assign essay questions on your exams? Problem sets? Multiple-choice questions? It depends on your learning objectives. For example, if you want students to articulate or justify an economic argument, then multiple-choice questions are a poor choice because they do not require students to articulate anything. However, multiple-choice questions (if well-constructed) might effectively assess students’ ability to recognize a logical economic argument or to distinguish it from an illogical one. If your goal is for students to match technical terms to their definitions, essay questions may not be as efficient a means of assessment as a simple matching task. There is no single best type of exam question: the important thing is that the questions reflect your learning objectives.
Highlight how the exam aligns with course objectives. Identify which course objectives the exam addresses (e.g., “This exam assesses your ability to use sociological terminology appropriately, and to apply the principles we have learned in the course to date”). This helps students see how the components of the course align, reassures them about their ability to perform well (assuming they have done the required work), and activates relevant experiences and knowledge from earlier in the course.
Write instructions that are clear, explicit, and unambiguous. Make sure that students know exactly what you want them to do. Be more explicit about your expectations than you may think is necessary. Otherwise, students may make assumptions that run them into trouble. For example, they may assume – perhaps based on experiences in another course – that an in-class exam is open book or that they can collaborate with classmates on a take-home exam, which you may not allow. Preferably, you should articulate these expectations to students before they take the exam as well as in the exam instructions. You also might want to explain in your instructions how fully you want students to answer questions (for example, to specify if you want answers to be written in paragraphs or bullet points or if you want students to show all steps in problem-solving.)
Write instructions that preview the exam. Students’ test-taking skills may not be very effective, leading them to use their time poorly during an exam. Instructions can prepare students for what they are about to be asked by previewing the format of the exam, including question type and point value
(e.g., there will be 10 multiple-choice questions, each worth two points, and two essay questions, each worth 15 points). This helps students use their time more effectively during the exam.
Word questions clearly and simply. Avoid complex and convoluted sentence constructions, double negatives, and idiomatic language that may be difficult for students, especially international students, to understand. Also, in multiple-choice questions, avoid using absolutes such as “never” or “always,” which can lead to confusion.
Enlist a colleague or TA to read through your exam. Sometimes instructions or questions that seem perfectly clear to you are not as clear as you believe. Thus, it can be a good idea to ask a colleague or TA to read through (or even take) your exam to make sure everything is clear and unambiguous.
Think about how long it will take students to complete the exam. When students are under time pressure, they may make mistakes that have nothing to do with the extent of their learning. Thus, unless your goal is to assess how students perform under time pressure, it is important to design exams that can be reasonably completed in the time allotted. One way to determine how long an exam will take students to complete is to take it yourself and allow students triple the time it took you – or reduce the length or difficulty of the exam.
Consider the point value of different question types. The point value you ascribe to different questions should be in line with their difficulty, as well as the length of time they are likely to take and the importance of the skills they assess. It is not always easy when you are an expert in the field to determine how difficult a question will be for students, so ask yourself: How many subskills are involved? Have students answered questions like this before, or will this be new to them? Are there common traps or misconceptions that students may fall into when answering this question? Needless to say, difficult and complex question types should be assigned higher point values than easier, simpler question types. Similarly, questions that assess pivotal knowledge and skills should be given higher point values than questions that assess less critical knowledge.
Think ahead to how you will score students’ work. When assigning point values, it is useful to think ahead to how you will score students’ answers. Will you give partial credit if a student gets some elements of an answer right? If so, you might want to break the desired answer into components and decide how many points you would give a student for correctly answering each. Thinking this through in advance can make it considerably easier to assign partial credit when you do the actual grading. For example, if a short answer question involves four discrete components, assigning a point value that is divisible by four makes grading easier.
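As a concrete (and purely hypothetical) illustration of that arithmetic, here is a minimal Python sketch of component-wise partial credit; the point value, component names, and scoring rule are assumptions for illustration, not a prescribed method.

    # A minimal sketch of component-wise partial credit for a hypothetical
    # 8-point short-answer question with four discrete components.
    QUESTION_POINTS = 8
    COMPONENTS = ["states assumptions", "sets up the model", "solves correctly", "interprets the result"]
    points_per_component = QUESTION_POINTS / len(COMPONENTS)  # 2 points per component

    def partial_credit(components_correct: int) -> float:
        """Award credit in proportion to the number of components answered correctly."""
        return points_per_component * components_correct

    # A student who handles three of the four components earns 6 of 8 points.
    print(partial_credit(3))  # 6.0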
Creating objective test questions Creating objective test questions – such as multiple-choice questions – can be difficult, but here are some general rules to remember that complement the strategies in the previous section.
- Write objective test questions so that there is one and only one best answer.
- Word questions clearly and simply, avoiding double negatives, idiomatic language, and absolutes such as “never” or “always.”
- Test only a single idea in each item.
- Make sure wrong answers (distractors) are plausible.
- Incorporate common student errors as distractors.
- Make sure the position of the correct answer (e.g., A, B, C, D) varies randomly from item to item (a short sketch of automating this follows the list).
- Include from three to five options for each item.
- Make sure the length of response items is roughly the same for each question.
- Keep the length of response items short.
- Make sure there are no grammatical clues to the correct answer (e.g., the use of “a” or “an” can tip the test-taker off to an answer beginning with a vowel or consonant).
- Format the exam so that response options are indented and in column form.
- In multiple-choice questions, use positive phrasing in the stem, avoiding words like “not” and “except.” If this is unavoidable, highlight the negative words (e.g., “Which of the following is NOT an example of…?”).
- Avoid overlapping alternatives.
- Avoid using “All of the above” and “None of the above” in responses. (In the case of “All of the above,” students only need to know that two of the options are correct to answer the question. Conversely, students only need to eliminate one response to eliminate “All of the above” as an answer. Similarly, when “None of the above” is used as the correct answer choice, it tests students’ ability to detect incorrect answers, but not whether they know the correct answer.)
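One rule above, varying the position of the correct answer from item to item, is easy to automate if you assemble exams from an item bank. The following Python sketch is only an illustration; the item format and field names are my own assumptions, not part of the resource.

    import random

    # A minimal sketch: each hypothetical item stores its stem, the correct
    # answer, and plausible distractors; rendering shuffles the options so the
    # key's position varies randomly from item to item.
    item = {
        "stem": "Which measure of central tendency is most robust to outliers?",
        "correct": "median",
        "distractors": ["mean", "range", "variance"],
    }

    def render_item(item, rng=random):
        options = [item["correct"]] + item["distractors"]
        rng.shuffle(options)                              # randomize the key's position
        key = "ABCD"[options.index(item["correct"])]
        lines = [item["stem"]] + [f"  {letter}. {text}" for letter, text in zip("ABCD", options)]
        return "\n".join(lines), key

    question_text, answer_key = render_item(item)
    print(question_text)
    print("Answer:", answer_key)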
Using Classroom Assessment Techniques: see “Using Classroom Assessment Techniques” in Part II above; the same CATs (Minute Paper, Muddiest Point, Problem Recognition Tasks, Documented Problem Solutions, Directed Paraphrasing, Applications Cards, Student-Generated Test Questions, and Classroom Opinion Polls) apply here for assessing learning in the current course.
Using Concept Maps Concept maps are a graphic representation of students’ knowledge. Having students create concept maps can provide you with insights into how they organize and represent knowledge. This can be a useful strategy for assessing both the knowledge students have coming into a program or course and their developing knowledge of course material.
Concept maps include concepts, usually enclosed in circles or boxes, and relationships between concepts, indicated by a connecting line. Words on the line are linking words and specify the relationship between concepts. See an example (pdf).
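Viewed another way, a concept map is a labeled graph: the concepts are nodes and the linking words label the edges. The short Python sketch below illustrates that structure with hypothetical concepts of my own choosing, not ones taken from the linked example.

    # A minimal sketch of a concept map as (concept, linking phrase, concept) triples.
    concept_map = [
        ("photosynthesis", "occurs in", "chloroplasts"),
        ("photosynthesis", "requires", "sunlight"),
        ("photosynthesis", "produces", "glucose"),
        ("glucose", "fuels", "cellular respiration"),
    ]

    # The concepts are the nodes; the words on each connecting line label the relationship.
    concepts = {c for source, _, target in concept_map for c in (source, target)}

    for source, link, target in concept_map:
        print(f"{source} --[{link}]--> {target}")
    print("Concepts:", sorted(concepts))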
Designing a concept map exercise To structure a concept map exercise for students, follow these three steps: 1. Create a focus question that clearly specifies the issue that the concept map should address, such as “What are the potential effects of cap-and-trade policies?” or “What is materials science?” 2. Tell students (individually or in groups) to begin by generating a list of relevant concepts and organizing them before constructing a preliminary map. 3. Give students the opportunity to revise. Concept maps evolve as they become more detailed and may require rethinking and reconfiguring. Encourage students to create maps that:
- Employ a hierarchical structure that distinguishes concepts and facts at different levels of specificity
- Draw multiple connections, or cross-links, that illustrate how ideas in different domains are related
- Include specific examples of events and objects that clarify the meaning of a given concept
Using concept maps throughout the semester Concept maps can be used at different points throughout the semester to gauge students’ knowledge. Here are some ideas:
- Ask students to create a concept map at the beginning of the semester to assess the knowledge they have coming into a course. This can give you a quick window into the knowledge, assumptions, and misconceptions they bring with them and can help you pitch the course appropriately.
- Assign the same concept map activity several times over the course of the semester. Seeing how the concept maps grow and develop greater nuance and complexity over time helps students (and the instructor) see what they are learning.
- Create a fill-in-the-blank concept map in which some circles are blank or some lines are unlabeled. Give the map to students to complete. You can see an example of this type of concept map exercise at: http://flag.wceruw.org/tools/conmap/solar.php.
Using Concept Tests Concept tests (or ConcepTests) are short, informal, targeted tests that are administered during class to help instructors gauge whether students understand key concepts. They can be used both to assess students’ prior knowledge (coming into a course or unit) or their understanding of content in the current course.
Usually these tests consist of one to five multiple-choice questions. Students are asked to select the best answer and submit it by raising their hands, holding up a color card associated with a response option, or using a remote control device to key in their response. The primary purpose of concept tests is to get a snapshot of the current understanding of the class, not of an individual student. As a result, concept tests are usually ungraded or very low-stakes. They are most valuable in large classes where it is difficult to assess student understanding in real time.
Creating a concept test Creating a good concept test can be time-consuming, so you might want to see if question repositories or fully developed concept tests already exist in your field. If you create your own, you need to begin with a clear understanding of the knowledge and skills that you want your students to acquire. The questions should probe a student's comprehension or application of a concept rather than factual recall. Concept test questions often describe a problem, event, or situation. Examples of appropriate types of questions include:
- asking students to predict the outcome of an event (e.g., What would happen in this experiment? How would changing one variable affect others?)
- asking students to apply rules or principles to new situations (e.g., Which concept is relevant here? How would you apply it?)
- asking students to solve a problem using a known equation or select a procedure to complete a new task (e.g., What procedure would be appropriate to solve this problem?)
The following question stems are used frequently in concept test questions:
- Which of the following best describes…
- Which is the best method for…
- If the value of X was changed to…
- Which of the following is the best explanation for…
- Which of the following is another example of…
- What is the major problem with…
- What would happen if…
When possible, incorrect answers (“distractors”) should be designed to reveal common errors or misconceptions.
Example 1: Mechanics (pdf) This link contains sample items from the Mechanics Baseline Test (Hestenes & Wells, 1992). Example 2: Statics (pdf) This link contains sample items from a Statics Inventory developed by Paul Steif, Carnegie Mellon.
Example 3: Chemistry This links to the Journal of Chemical Education’s site, which contains a library of conceptual questions in different scientific areas.
Implementing concept tests Concept tests can be used in a number of different ways. Some instructors use them at the beginning of class to gauge students’ understanding of readings or homework. Some use them intermittently in class to test students’ comprehension. Based on how well students perform, the instructor may decide to move on in the lecture or pause to review a difficult concept. Another method is to give students the chance to respond to a question individually, then put them in pairs or small groups to compare and discuss their answers. After a short period of time, the students vote again for the answer they think is correct. This gives students the opportunity to articulate their reasoning for a particular answer.
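To sketch the "snapshot of the class" idea in code: the tally below assumes clicker-style responses and an arbitrary 70 percent threshold for moving on; both the data and the threshold are hypothetical illustrations, not recommendations from the resource.

    from collections import Counter

    # A minimal sketch: tally one concept test question and decide whether to
    # move on or pause for peer discussion (threshold chosen arbitrarily).
    responses = ["B", "B", "C", "B", "A", "B", "D", "B", "C", "B"]
    correct = "B"
    THRESHOLD = 0.70

    counts = Counter(responses)
    fraction_correct = counts[correct] / len(responses)

    print("Distribution:", dict(counts))
    print(f"Fraction correct: {fraction_correct:.0%}")

    if fraction_correct >= THRESHOLD:
        print("Most of the class has it: move on.")
    else:
        print("Pause: have students discuss in pairs, then revote.")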
Implementing ConcepTests This site contains strategies for implementing ConcepTests that are drawn from Chemistry but applicable to other disciplines: http://jchemed.chem.wisc.edu/JCEDLib/QBank/collection/ConcepTests/CTinfo. html
Assessing Group Work All of the basic principles of assessment that apply to individual students’ work apply to group work as well. Assessing group work has additional aspects to consider, however. First, depending on the objectives of the assignment, both process- and product-related skills must be assessed. Second, group performance must be translated into individual grades, which raises issues of fairness and equity. Complicating both these issues is the fact that neither group processes nor individual contributions are necessarily apparent in the final product. Thus, instructors need to find ways of obtaining this information. The general principles described in the next few sections can be adapted to the context of specific courses.
Assess process, not just product. If both product and process are important to you, both should be reflected in students’ grades – although the weight you accord each will depend on your learning objectives for the course and for the assignment. Ideally, your grading criteria should be communicated to students in a rubric. This is especially important if you are emphasizing skills that students are not used to being evaluated on, such as the ability to cooperate or meet deadlines.
Ask students to assess their own contribution to the team. Have students evaluate their own teamwork skills and their contributions to the group’s process using a self-assessment of the process skills you are emphasizing. These process
skills may include, among others, respectfully listening to and considering opposing views or a minority opinion, effectively managing conflict around differences in ideas or approaches, keeping the group on track both during and between meetings, promptness in meeting deadlines, and appropriate distribution of research, analysis, and writing.
Hold individuals accountable. To motivate individual students and discourage the free-rider phenomenon, it is important to assess individual contributions and understanding as well as group products and processes. In addition to evaluating the work of the group as a whole, ask individual students to demonstrate their learning. This can be accomplished through independent write-ups, weekly journal entries, content quizzes, or other types of individual assignments.
Ask students to evaluate their group’s dynamics and the contributions of their teammates. Gauge what various group members have contributed to the group (e.g., effort, participation, cooperativeness, accessibility, communication skills) by asking team members to complete an evaluation form for group processes. This is not a foolproof strategy (students may feel social pressure to cover for one another). However, when combined with other factors promoting individual accountability, it can provide you with important information about the dynamics within groups and the contributions of individual members. If you are gathering feedback from external clients – for example, in the context of public reviews of students’ performances or creations – this feedback can also be incorporated into your assessment of group work. Feedback from external clients can address product (e.g., “Does it work?”, “Is it an effective design?”) or process (e.g., the group’s ability to communicate effectively, respond appropriately, or meet deadlines) and can be incorporated formally or informally into the group grade.
Grading Methods for Group Work: instructor and student options for assessing group work.
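As one hedged illustration of translating a group grade into individual grades (a common weighting scheme I am assuming here, not necessarily one of the options on the linked page), the sketch below scales the group's product grade by each member's peer-assessed contribution relative to the team average.

    # A minimal sketch of one possible scheme: weight the group's product grade
    # by each member's peer-evaluation score, normalized to the team average.
    # The names, scores, and 100-point cap are hypothetical.
    group_grade = 88.0  # grade earned by the group's product

    peer_scores = {"Member A": 4.6, "Member B": 4.1, "Member C": 3.0}  # e.g., 1-5 scale
    team_average = sum(peer_scores.values()) / len(peer_scores)

    for member, score in peer_scores.items():
        adjustment = score / team_average            # >1 above-average contribution, <1 below
        individual_grade = min(100.0, group_grade * adjustment)
        print(f"{member}: {individual_grade:.1f}")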
Example of Group and Self-Assessment Tool (download .pdf | download .doc)

Creating and Using Rubrics
A rubric is a scoring tool that explicitly describes the instructor’s performance expectations for an assignment or piece of work. A rubric identifies:
- criteria: the aspects of performance (e.g., argument, evidence, clarity) that will be assessed
- descriptors: the characteristics associated with each dimension (e.g., argument is demonstrable and original, evidence is diverse and compelling)
- performance levels: a rating scale that identifies students’ level of mastery within each criterion
Rubrics can be used to provide feedback to students on diverse types of assignments, from papers, projects, and oral presentations to artistic performances and group projects.
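To make those three components concrete, here is a minimal Python sketch of a rubric as a data structure with a simple scoring pass. The criteria, descriptors, and three-level scale are hypothetical and are not drawn from the sample rubrics referenced below.

    # A minimal sketch of a rubric: each criterion maps performance levels to descriptors.
    rubric = {
        "argument": {
            3: "thesis is demonstrable, original, and consistently developed",
            2: "thesis is clear but unevenly supported",
            1: "thesis is vague or unsupported",
        },
        "evidence": {
            3: "evidence is diverse, relevant, and compelling",
            2: "evidence is relevant but limited",
            1: "evidence is sparse or off-topic",
        },
        "clarity": {
            3: "organization and prose are consistently clear",
            2: "mostly clear, with occasional lapses",
            1: "organization obscures the argument",
        },
    }

    def total_score(ratings):
        """Sum the performance level assigned for each criterion."""
        return sum(ratings[criterion] for criterion in rubric)

    ratings = {"argument": 3, "evidence": 2, "clarity": 2}
    print("Total:", total_score(ratings), "of", 3 * len(rubric))
    for criterion, level in ratings.items():
        print(f"{criterion}: {rubric[criterion][level]}")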
Benefitting from Rubrics A carefully designed rubric can offer a number of benefits to instructors. Rubrics help instructors to:
- reduce the time spent grading by allowing instructors to refer to a substantive description without writing long comments
- help instructors more clearly identify strengths and weaknesses across an entire class and adjust their instruction appropriately
- help to ensure consistency across time and across graders
- reduce the uncertainty which can accompany grading
- discourage complaints about grades
An effective rubric can also offer several important benefits to students. Rubrics help students to:
- understand instructors’ expectations and standards
- use instructor feedback to improve their performance
- monitor and assess their progress as they work towards clearly indicated goals
- recognize their strengths and weaknesses and direct their efforts accordingly
Examples of Rubrics Here we provide a sample set of rubrics designed by faculty at Carnegie Mellon and other institutions. Although your particular field of study or type of assessment may not be represented, viewing a rubric that is designed for a similar assessment may give you ideas for the kinds of criteria, descriptions, and performance levels you might use on your own rubric.
Paper
Example 1: Philosophy Paper This rubric was designed for student papers in a range of courses in philosophy (Carnegie Mellon). Example 2: Psychology Assignment Short, concept application homework assignment in cognitive psychology (Carnegie Mellon). Example 3: Anthropology Writing Assignments This rubric was designed for a series of short writing assignments in anthropology (Carnegie Mellon). Example 4: History Research Paper. This rubric was designed for essays and research papers in history (Carnegie Mellon).
Projects
Example 1: Capstone Project in Design This rubric describes the components and standards of performance from the research phase to the final presentation for a senior capstone project in design (Carnegie Mellon). Example 2: Engineering Design Project This rubric describes performance standards for three aspects of a team project: research and design, communication, and team work.
Oral Presentations
Example 1: Oral Exam This rubric describes a set of components and standards for assessing performance on an oral exam in an upper-division course in history (Carnegie Mellon). Example 2: Oral Communication This rubric is adapted from Huba and Freed, 2000. Example 3: Group Presentations This rubric describes a set of components and standards for assessing group presentations in history (Carnegie Mellon).
Class Participation/Contributions
Example 1: Discussion Class This rubric assesses the quality of student contributions to class discussions. This is appropriate for an undergraduate-level course (Carnegie Mellon). Example 2: Advanced Seminar This rubric is designed for assessing discussion performance in an advanced undergraduate or graduate seminar.
See also "Examples and Tools" section of this site for more rubrics. Examples and Tools One goal of this website is to share course- and program-level assessments that have been developed here at Carnegie Mellon. We encourage Carnegie Mellon instructors and programs to contribute additional assessments.
- Course-level Examples By College
- Course-level Examples By Type
- Program-level Examples By College
- Contribute your assessment tool
PART IV: HOW TO ASSESS YOUR TEACHING RESOURCE LINK: http://www.cmu.edu/teaching/assessment/assessteaching/index.html What are early course evaluations? What do they consist of? Explain the importance of classroom observations. What are a few suggestions on how to go about a classroom observation? What is a student focus group?
How to Assess Your Teaching What are your strengths and weaknesses as an instructor? What can you do to improve your teaching effectiveness? There are a number of methods you can use to assess your teaching in addition to the end-of-semester evaluations conducted by the university. They include:
- Early course evaluations
- One-on-one teaching consultations
- Classroom observations
- Student focus groups
- Course representatives
Early Course Evaluations Why wait until the end of the semester to ask students to evaluate your course? Conducting an early course evaluation allows you to use student feedback to improve a course in progress.
Scheduling an Early Course Evaluation Ideally, you should conduct your evaluation early – for example, in the first three to six weeks of a semester-long course or the first two to three weeks of a mini course. By this time, students will have a reasonable sense of how you teach and evaluate their learning and will be able to make substantive comments. Conducting the evaluation early also gives you time to make adjustments to your teaching and the course and see their impact. For best quality feedback, allow 10 to 15 minutes at the beginning of class for students to complete the form. If you distribute the forms at the end of class, many students will be too rushed and the quality of the feedback will be diminished.
Preparing Students for the Evaluation Let students know that you would like their feedback so that you can create a better learning experience for them. Stress that you want candid and constructive responses that will help you meet this goal. Encourage students to write to you rather than about you. Finally, tell students that you will talk with them about the feedback you receive. This shows them that you are genuinely interested in their feedback and will respond to their comments.
Choosing an Evaluation Form The Eberly Center recommends using an evaluation form that asks open-ended questions. This allows students to address the issues that they perceive to be the most important. You can always add one or two questions about specific issues or concerns you have.
Sample Forms Here we provide examples of early course evaluations for instructors and TAs. In addition, the Eberly Center can provide you with assistance in developing your own form.
Form for Instructors: PDF format | MS Word format
Form for TAs: PDF format | MS Word format
Organizing Student Feedback A pile of open-ended responses can seem daunting, but the data can often be easily organized so that you can identify major themes. The following process has been helpful to many faculty:
1. Starting with the first student’s comments, rewrite an abbreviated version of each main point or idea they touch on (e.g., too fast, homework doesn’t relate to lecture).
2. When you see an identical or similar comment by another student, make a tally mark next to the original abbreviated version to indicate that the comment was repeated.
3. Sort the list into themes (e.g., pace, difficulty, presentation skills, tone).
4. Note the frequency of the different kinds of comments.
There will probably be areas of consensus (high frequency) and divergence (low frequency). Areas of consensus will usually be your highest priority when determining what, if anything, to change.
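The tally-and-theme process above is easy to do by hand; for a large class, the same idea can be sketched in a few lines of Python. The abbreviated comments and theme keywords below are hypothetical examples, not data from any evaluation.

    from collections import Counter

    # A minimal sketch: count repeated abbreviated comments, then group them into themes.
    abbreviated_comments = [
        "too fast", "too fast", "homework doesn't relate to lecture",
        "too fast", "great examples", "hard to hear", "great examples",
    ]

    tally = Counter(abbreviated_comments)  # the "tally marks" next to each comment

    # Sort comments into themes by keyword; anything unmatched lands in "other".
    themes = {
        "pace": ["fast", "slow"],
        "audibility": ["hear", "soft"],
        "materials": ["homework", "examples"],
    }
    frequency_by_theme = {theme: 0 for theme in themes}
    frequency_by_theme["other"] = 0

    for comment, count in tally.items():
        matched = next((t for t, words in themes.items() if any(w in comment for w in words)), "other")
        frequency_by_theme[matched] += count

    print("Tally:", dict(tally))
    print("Frequency by theme:", frequency_by_theme)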
Interpreting Student Feedback One useful way to make sense of student feedback is to group students’ comments into the following categories:
- strengths, which are aspects that students felt you did well or that were positive aspects of the course
- ideas for change, which are aspects that students felt were weak or could be improved
- issues beyond your control, which are aspects that you can’t change
When considering changes, focus on “ideas for change,” but do not lose sight of the strengths students identify! Eberly Center consultants are happy to help you interpret early course evaluations and develop appropriate responses.
Discussing Student Feedback with Your Class A critical part of the early course evaluation process is discussing the feedback with your students and thanking them for their input. This sets a positive tone for the class and shows a fundamental respect for students’ role in making the class work. Here is a suggested process for discussing this feedback with your class:
- Select three to five issues that you want to report to the class.
- Balance the issues so that you present both positive feedback and areas for improvement.
- If you plan to make changes based on the feedback, explain the changes and the rationale behind them. If possible, enlist students’ help in your efforts (e.g., if they reported that you talk too fast or too softly, ask them to indicate with a hand signal or some other sign when they cannot follow or hear).
- If you decide not to make changes in an area students identified as problematic, explain why the changes are not possible or why it is important to do it the way you are currently doing it.
Maintain a positive tone throughout the discussion. It is important not to seem defensive, angry, or over-apologetic because these reactions can undermine students’ perceived value of future evaluations.
One-on-One Teaching Consultations The Eberly Center offers individual consultations to any instructor on campus who would like the opportunity to discuss teaching issues, solve teaching problems, or try new approaches in the classroom. In these consultations we work collaboratively with you to help you identify your strengths as a teacher, collect information (e.g., classroom observations, student focus groups, examination of teaching materials) to reveal the source of problems, and develop productive solutions.
Who We Work With We work with faculty, post-docs, and graduate students at all levels of teaching experience, including instructors who are:
- new to Carnegie Mellon and want to calibrate to our students and the institution
- experienced and successful teachers who want to try new techniques, approaches, or technologies
- encountering difficulties in their courses and want help identifying and addressing problems
- new to teaching and want help getting started (including graduate students who anticipate pursuing an academic career)
How Consultations Work We hold ourselves to high standards when working one on one with instructors. All of our consultations are:
Strictly confidential: We do not disclose any information from our consultations. This includes the identities of those with whom we work, the information they share with us, and the data we gather on their behalf from classroom observations and interactions with TAs and students.
Documented for faculty and graduate student purposes alone: We provide written feedback to the instructors with whom we consult that summarizes and documents the consultation process. We do not write letters of support for reappointment, promotion, or tenure, but faculty can choose to use our documentation as they see fit.
Voluntary: We do not seek out instructors, but we are happy to meet with anyone who contacts us.
Classroom Observations Having a colleague observe your classroom can be a useful way to get immediate feedback about your strengths and weaknesses as an instructor, as well as concrete, contextualized suggestions for improvement. The Eberly Center provides this service to any faculty member or graduate student at Carnegie Mellon, regardless of experience level. The feedback we provide is strictly for you: we ensure strict confidentiality. To ensure that observations are as productive as possible, we recommend that you meet with an Eberly consultant before the first classroom observation to:
- discuss the goals of the course
- discuss the goals of the particular class being observed
- talk about any particular concerns or requirements you have regarding the observation
- share relevant course materials
- discuss specific aspects of your teaching you would like the observer to provide feedback on
You should also meet with the consultant again after the observation to go over feedback, ask questions, discuss applicable strategies, and (if the class was videotaped) review and discuss the videotape together. You can request follow-up observations as well.
Student Focus Groups When conducted thoughtfully by a skilled interviewer, focus groups can be a useful and reliable method for collecting information about students’ experiences in a course or program. Because they allow for clarification and follow-up, they are more effective than surveys for identifying areas of agreement and disagreement across groups of students and for eliciting students’ suggestions for improvement. If you are interested in having a trained Eberly teaching consultant lead a focus group with students from a current class, a previous class, or a particular academic program, please contact us to discuss whether a focus group is appropriate for your particular context.
Course Representatives One way to monitor your effectiveness as a teacher is to ask a student to serve as a student representative or ombuds(wo)man. The representative’s role is to communicate student concerns and feedback anonymously to you. The student you ask to play this role should be someone you consider trustworthy who is respected by classmates (one option is to ask the class to choose their own
representative.) Explain that the representative’s responsibility will be strictly to synthesize and share students’ concerns anonymously, but that he or she is free to decline the responsibility. The role should be strictly voluntary. When you have chosen your course representative, let other students know that you welcome their feedback, which they can convey to you directly or channel to their representative, who will share it anonymously with you. You can then meet or correspond periodically with the course representative to collect student feedback. In larger classes, you might want to designate a team of student representatives who can synthesize the experiences of students in different recitation sections in their feedback to you.