SPRING REPORT
VITTORE | RENAISSANCE LEARNING | 05.2017
TABLE OF CONTENTS

INTRODUCTION
Executive Summary
About Renaissance Learning
Project Description
Hunt Statement

RESEARCH
Literature Review
Competitive Analysis
Research and Analysis Methods

FINDINGS
Cultural Model
Information Flows
Sequence Flow
Design Insights
Personas

IDEATION
Inquire Math
Fantasy Groups
Goal Guide
Mission of the Week
Mistake-it-til-you-Make-it
Testing Bot Buddy
Team Progress Monitoring
Goal Map Challenge

CONCLUSIONS
About Vittore
References

EXECUTIVE SUMMARY
This report summarizes the first three phases of a collaborative project between Renaissance Learning and Vittore, a student design team from Carnegie Mellon University's Master's in Educational Technology and Applied Learning Science (METALS) program. Renaissance approached METALS with the need to update the goal-setting wizard within its assessment suite to meet the needs of teachers using its products. Using data-driven and human-centered research methods, Vittore investigated teacher needs in order to design a goal-setting process that best supports teachers' instructional practices and provides the greatest benefit to learners. The project spans January to August 2017 and involves four major phases in designing a solution: Research, Analysis, Ideation, and Solution Evaluation. This report documents the first three phases.

For the first two phases, primary research from user interviews and secondary research are presented. In the Analysis phase, the workflow for key stakeholders, the types of information that are shared, and the influence of different stakeholders were mapped to determine how goal-setting practices fit into the school-wide system. We also present a synthesis of the most salient insights, including usage barriers to the existing goal-setting wizard, Star's measurement metrics, and the varying degrees of data usage in classrooms.
For the third phase, Ideation, Vittore generated eight product concepts based on these design insights. Each concept attempts to address specific ways in which goal setting and progress monitoring can be made a more central practice in schools. Together, the concepts present a wide variety of potential solutions to the breakdowns and barriers reported in the Analysis phase. Vittore hopes to use these concepts as a starting point to generate more fully developed ideas, to combine solutions that address multiple research insights, and to spark novel solutions that have not yet been developed.

Vittore would like to acknowledge the support we have received from Renaissance Learning throughout this project. The Renaissance team has provided us with resources and valuable insight on research directions, and has thoroughly engaged in the Analysis and Ideation phases. We look forward to continuing the design and development of a solution with the Renaissance team through the remainder of this project.
Our team found that in general, data goals are measured peripherally rather than at the core of student progress monitoring.
MISSION: “To accelerate learning for all children and adults of all ability levels and ethnic and social backgrounds, worldwide.”
ABOUT RENAISSANCE LEARNING
Renaissance Learning is a learning analytics company that produces K-12 educational software. Renaissance products, including the adaptive Star assessments, Accelerated Reader, and Accelerated Math, are used in over 30,000 schools nationwide. Renaissance was founded in 1986 by two parents, Judi and Terry Paul (Where we began, 2017), who wanted to help their son improve his reading skills by providing a reading practice platform to support his teacher. Since then, the assessments, practice programs, and teacher reports have been strengthened through research into the validity and reliability of educational materials, consultations with teachers and educational experts, and technology that meets teacher needs.

The Renaissance interim assessment suite, Star, provides teachers with standards-aligned assessments in addition to goal-setting and progress-monitoring tools that support teachers in making data-driven instructional decisions. Star assessments are used by schools to measure students' knowledge, to identify areas of growth, and to identify students at risk of failure. This allows early intervention that helps minimize student achievement gaps.
Renaissance's evidence-based tools provide teachers with meaningful data to make the best instructional decisions for each student in their classrooms. Thanks to a robust normative growth model, teachers can consider each student's growth individually, allowing them to set achievable goals for every student. With reliable data about student achievement and a comprehensive set of tools to monitor student growth, teachers can help each student on his or her path to mastery.
PROJECT DESCRIPTION
Renaissance Learning's Star Assessments are computer-adaptive tests used to measure student performance in reading, math, and early literacy. These assessments are widely used for Response to Intervention (RTI) and Multi-Tiered Support System (MTSS) purposes. Comparing individual student scores to norms across all students taking Star can aid teachers in identifying students in need of intervention (screening), measuring growth towards state standards (growth and progress monitoring), and evaluating the effectiveness of different interventions.

Star Assessments are designed as tools to help teachers assign students to the proper level of support and type of intervention. Because student support at higher tiers requires additional resources, teachers and schools benefit from knowing whether an intervention is effective and from being able to decide whether a student still requires the resources provided by that tier of support.

Progress goals can help teachers in this decision-making process. By setting a goal for an individual student, the teacher creates a criterion against which to measure student growth. The processes of setting goals and monitoring progress towards them can help teachers determine the effectiveness of their interventions, decide how to allocate resources among students in need of support, and identify the types of interventions that students need to achieve these goals. In addition to giving teachers feedback about intervention efficacy, challenging yet reasonable goals can engage students and promote growth.

Despite the benefits offered by setting goals, few students (3-6% of Star Assessment users) have goals set for them using the goal-setting wizard (GSW) in the Star Assessment toolkit. With approximately 20-30% of students involved in targeted academic interventions, increased use of the GSW could positively impact Star users.

The opportunity to improve the goal-setting process served as the impetus for the project with Vittore. Our team has investigated the goal-setting experience for teachers and students alike to understand current barriers to use of the GSW, to identify opportunities for improvement, and to implement a solution that better meets teachers' goal-setting needs and improves the quality of their intervention decisions.

HUNT STATEMENT

We aim to investigate the current goal setting and intervention processes used by teachers in K-12 classrooms in order to empower them to leverage data-driven approaches to set personalized goals for students, monitor instructional efficacy, and allocate resources for the different tiers of student support.
RESEARCH
This section discusses our research collection and analysis techniques. At the start of our research phase, Vittore conducted a literature review on the themes of MTSS and RTI, goal setting, and growth measurements used in K-12 education. We selected the field research method of contextual inquiry in order to research goal setting and intervention in K-12 firsthand. Research analysis methods including affinity diagramming and process modeling were employed to synthesize data.
CONTENTS
Literature Review
Competitive Analysis
Research and Analysis Methods
LITERATURE REVIEW

To understand the role and function of goal-setting in the RTI/MTSS framework, we investigated three main areas informed by an idealized workflow provided by Renaissance at the start of this project: response to intervention, goal-setting, and growth measurements. To understand research-based recommendations about how goal-setting should work in this context, we were interested in answering several key questions:
1. What are the pedagogical reasons for RTI, and how do goal-setting and growth measurements make intervention programs more robust?
2. What are the benefits of goal-setting, and how should teachers set reasonable goals for students?
3. What is the value of growth measurements, and how can they be interpreted into actionable decision rules regarding instruction?
RTI/MTSS

Response to Intervention (RTI) systems are based on research that has demonstrated the importance of early education for academic success. Juel (1988) showed that children who fail to master basic reading skills by the end of first grade have an 88% probability of continuing to struggle in the fourth grade. Early identification of students with academic problems is therefore paramount to their success, and RTI was developed as this identification method.
The RTI process involves three tiers of instruction, known as multi-tiered support systems (MTSS). Tier 1, core instruction, is the standard level of support for students; approximately 85% of students fit into this category. These students are expected to achieve academic success without additional intervention. Tier 2 adds instructional support to help students whose learning trajectory predicts that academic goals will not be achieved; it is designed to serve about 10% of students. Tier 3 is an intensive intervention designed for students who do not respond to Tier 1 or Tier 2 and who are identified as being at high risk of academic failure.

The RTI process relies on accurate and effective use of screening and progress monitoring assessments so that educators can make data-driven decisions about which students should receive additional support and can best evaluate the efficacy of that support. Students are first screened through benchmarking assessments to identify at-risk students. Progress monitoring then serves as a formative assessment to measure a student's response to intervention.

Computer adaptive testing (CAT) forms the foundation for RTI decisions within Star. CAT involves large item banks across assessment domains and adjusts difficulty based on student responses. The test samples performance towards long-term goals, such as expected growth over the academic year, and pinpoints the skills that represent a student's current achievement level. These screening tests measure performance towards these overarching goals, not just the content or skills the student is learning currently (Stecker, 2005). Because these tests measure general outcomes, they can help teachers predict whether students will meet grade-level goals and plan instruction accordingly.
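To make the adaptive mechanism concrete, the following is a minimal illustrative sketch of a CAT-style loop. It is not Star's actual algorithm: the scale, item bank, step-size update, and simulated student are all hypothetical.

```python
import random

def run_adaptive_test(item_bank, answer_fn, n_items=10):
    """Minimal staircase-style adaptive test.

    item_bank: list of item difficulties on an arbitrary 0-100 scale.
    answer_fn: callable(difficulty) -> bool, True if answered correctly.
    Returns the final ability estimate.
    """
    estimate = 50.0   # start in the middle of the scale
    step = 16.0       # shrink the adjustment after each response
    remaining = sorted(item_bank)
    for _ in range(min(n_items, len(remaining))):
        # pick the unused item whose difficulty is closest to the estimate
        item = min(remaining, key=lambda d: abs(d - estimate))
        remaining.remove(item)
        if answer_fn(item):
            estimate += step   # correct answer -> probe harder items
        else:
            estimate -= step   # incorrect answer -> probe easier items
        step = max(step / 2, 1.0)
    return estimate

# Simulated student whose true ability is about 62 on the same scale.
student = lambda difficulty: random.random() < (0.9 if difficulty <= 62 else 0.2)
print(run_adaptive_test(list(range(0, 101, 5)), student))
```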
GOAL SETTING

Literature provides two main reasons for setting student goals: to provide a metric against which to measure performance and to incentivize students. In the Star RTI framework, the measurement metric is the key rationale, so only literature supporting that aspect of student goal-setting is presented here. Teachers and administrators use performance goals in the RTI workflow to evaluate the efficacy of their instructional interventions, both for individual students at the micro level and for groups at the macro level (Shapiro, 2014). When a goal is established, teaching teams have a method they can use to compare the effectiveness of their intervention methods. As Shapiro (2014) discusses, goals should identify an expected level of performance. If a student is on track to meet his or her goal, a teacher can infer that an intervention is effective. Similarly, if progress measurements do not predict success in meeting goals, teachers can adjust their intervention strategies. Setting a goal provides a means of measuring instructional efficacy: once a teaching team establishes explicit expected outcomes, it has a method to measure whether its intervention plans meet those outcomes. With specific goals, assessments and instruction can be aligned to measure and support progression towards them (Carver, 2001).
According to Dale H. Schunk, "challenging yet attainable" goals should be set for students (1990, p. 81). Goals in this zone of proximal development (Vygotsky, 1978) should be achievable with an effective intervention but provide adequate challenge for students to grow. The literature suggests decision processes that take into account benchmarks, current performance levels, and anticipated growth to set goals that fit the challenging-yet-attainable requirement. Additionally, student motivation and interest in goals depend on the specificity, proximity, and difficulty level of the goal.

GROWTH MEASUREMENTS

In the Star RTI workflow, student performance is measured by comparing student growth to target goals. Renaissance includes growth measurements as a means of contextualizing individual student growth. This is achieved through norm-referenced growth measurements, in which a student's growth is compared to that of peers who began at the same level. Comparisons between students with similar "achievement paths" are considered more useful because they build data-driven expectations from an appropriate sample. Although teaching staff are interested in closing the achievement gap, "the growth necessary for nonproficient students to reach proficiency, absent radical changes to growth rates of students statewide, is likely unattainable for a large percentage of non-proficient students" (Betebenner, 2011, p. 12). Therefore, measuring growth in the context of achievement is more illuminating than measuring achievement alone. Further literature supports growth measurements as dispositional education: growth mindsets can buttress student achievement (Ambrose et al., 2010). For low-achieving students, who may be easily discouraged by performance differences compared to their peers, it is important to establish a distinction between performance levels and growth levels.
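As a sketch of how a norm-referenced growth metric of this kind can be formalized (following the general idea behind Betebenner's Student Growth Percentiles; the notation here is ours, not Renaissance's):

$$\mathrm{SGP} \;=\; 100 \cdot F\!\left(y_{\text{now}} \,\middle|\, y_{\text{prior}}\right),$$

where $y_{\text{prior}}$ is the student's earlier score, $y_{\text{now}}$ is the current score, and $F(\cdot \mid y_{\text{prior}})$ is the cumulative distribution of current scores among students who started at the same prior score (estimated with quantile regression in Betebenner's formulation). An SGP of 60 means the student grew faster than roughly 60% of academic peers who began at the same level, regardless of whether either score meets the grade-level benchmark.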
COMPETITIVE ANALYSIS

To consider Star assessments and the GSW in context, we identified several alternative benchmarking and MTSS-driven competitive products: NWEA MAP, DIBELS, Dreambox, Pearson AIMSweb, FastBridge, and i-Ready. While other educational technology uses goal-setting, the tools most useful to schools are those that support RTI, so the analysis was constrained to these similar tools. We compiled a list of features that teachers considered important based on interviews and rated each product according to the features users consider most important during their workflow. These competitors do not represent all alternative products.

[Feature-comparison matrix: Star and the competitors above are rated on features spanning four workflow stages (benchmarking test; MTSS resource allocation; intervention planning; progress monitoring + group iteration): standards alignment, outer-loop adaptive assessment, MTSS decisions, goal setting, individual skill tracking, group data provided, recommended interventions, test question transparency, growth measurements, timed assessments, and progress monitoring. The ratings are conveyed by shading in the original figure.]

This analysis helped us identify areas for improvement in several ways. Because our interviews were conducted with both Star users and non-Star users, understanding the differences in features and their affiliated teacher use allowed us to discover similarities in usage patterns. The analysis drew attention to how other software approaches some of these important features and how these products are being used in the field. Finally, it helped identify features whose usage patterns did not match expectations.
The gradation in the purple shades represents how well the corresponding feature met user needs. A blank box indicates that the feature is not present. A red-outlined box represents a case in which the matching feature is present but was not used as expected by teachers or did not entirely meet teacher needs. For example, many of the interviewees expressed interest in measuring growth but did not use the growth measurements built into the system.
RESEARCH AND ANALYSIS METHODS

OUR INTERVIEWEES
14 classroom teachers (8 ELA, 6 math): 1 Grade 2, 5 Grade 3, 5 Grade 4, 3 Grade 5
6 intervention teachers: 2 special education, 1 math, 1 reading, 2 general
6 administrators: 3 middle school, 3 elementary
3 states: Pennsylvania, West Virginia, Colorado
RESEARCH METHODS

Vittore employed two key data collection methods: contextual inquiry and think-alouds. We used these methods to understand educators' decision-making processes for placing and teaching students in instructional tiers and classroom groups, as well as their decisions during interventions. Contextual inquiry involves a semi-structured interview with the subject about their work and their work tools, conducted in the context in which the subject completes that work. This method is suited to understanding the work environment and culture that may influence the subject (Beyer & Holtzblatt, 1997).
Think-alouds are a technique in which the subject verbalizes their thought processes as they complete their work (Lewis & Rieman, 1993). Through this, the researcher is privy to the decision-making process and the criteria for making critical decisions in the subject's line of work.

ANALYSIS METHODS

Vittore used five techniques to analyze our data: coding, affinity diagramming, building models, generating insights, and constructing personas. Coding is a process that categorizes quotes and notes from raw interview data into relevant and informative codes determined by the protocol and guiding research questions. Some examples of our codes include GS (Goal Setting), IoR (Interpretation of Reports), and CwS (Communication with Students).
Affinity diagramming involves grouping raw data into summative, hierarchical thematic and insight categories. This is a bottom-up process in which the categories emerged organically as we sifted through our interview notes. Our team also used a theoretical framework informed by Ambrose et al. (2010) and Wiggins and McTighe (2005) to compare grouping organizations. This framework allowed us to consider how our groupings fit into categories of instructional design, including educational goals, instruction, and assessment.
Vittore member Kevin poses with teachers while in the field doing a contextual inquiry session.
Vittore discusses emerging research findings with their working affinity diagram in the background.
Building models allowed us to uncover patterns in user behavior. Vittore used three types of models: cultural models, information flow models, and sequence flow models. The cultural model helps us understand the value points, expectations, and influences among all stakeholders, including administrators, teachers, and students. The information flow model helps us visualize what types of information are conveyed between stakeholders. Sequence flow models were used to take a detailed view of the workflow of an intervention specialist, noting the work processes and interactions between roles and tools. This model is particularly illustrative when compared to the best-practice sequence flow model provided by Renaissance. Personas are models of the users whom we interviewed in our research (Cooper, 1995). Personas are compiled from the behaviors, motivations, and needs of the many users whom we interviewed, and they serve to inform the product design. By assembling and concretizing user patterns into particular personas, the team has a precise way to measure and communicate how well a design concept meets the needs of a concrete type of user.
Vittore member Tianxin hand sketches the first version of our information flow model.
FINDINGS
This section describes our field research findings represented in models, design insight explanations, and personas. Cultural Models, Information Flows, and Sequence Flows are used to describe teacher practices. There are ten design insights total. Each applies to a different stage in the Intervention Cycle we observed during our field research, and each is research-based fodder for our pitches in the Ideation section. Persona creation helped our team elicit user needs, motivations, goals, frustrations, and barriers across user types.
CONTENTS
Models
Design Insights
Personas
CULTURAL MODEL

A cultural model reveals the value points, expectations, and information flows between administrators, teachers, students, and others in schools. For this project, expectations of tool use were also considered. In our cultural model, we found that school education culture impacts the following aspects of progress monitoring and intervention:

1. Selection of and Requirements for Progress Monitoring Tools
Whether or not a tool can correctly predict students' performance on state tests is the primary factor when selecting a progress monitoring product. Local governments may also approve certain tools due to high test reliability and validity.

2. MTSS Decisions
Administrators want a data-driven method to assign students to Tier 1, 2, or 3 support to ensure that students with the greatest need receive adequate support. School administrators require that progress monitoring tools provide an accurate prediction of state test performance and identify students with intervention needs early.

3. Intervention Procedure | Communication with Students and Parents
There are influences between intervention teachers, classroom teachers, and tools. Most influences come from intervention teachers gathering observational and test data about students to help them prepare for interventions and understand the effectiveness of their instruction. Teachers want effective communication among teachers, students, and parents during intervention: it increases the transparency of intervention decisions and builds a supportive environment for learning.

[Cultural model diagram: maps directions of influence, value points, and breakdowns among administrators, teachers (classroom and intervention), students, parents, local government, and tools (benchmark, goal-setting, and progress monitoring tools). Only the annotations below are recoverable from the figure.]

BREAKDOWNS IN THE MODEL
- Administrators want all students to be at state standards, but interventionists care more about individual student growth and performance.
- Teachers want progress monitoring tools to be transparent (i.e., to say what skills have been tested). They hope that time spent on tests is worthwhile.
- Teachers want goals to be actionable for students and to be useful measures of student performance.
- Teachers want to communicate goals to students in a way that is understandable and encouraging. Score goals may not be effective for communication.
INFORMATION FLOW

Information flow models show the communication between people and tools around school education. In our cultural model, we found impacts of the different value points and expectations of stakeholders on the three aspects of progress monitoring and intervention identified in the cultural model:

1. Selection of the benchmarking and progress-monitoring tools that a school uses.
2. RTI/MTSS decisions about how to assign students to Tier 1, 2, or 3 instruction.
3. Intervention procedures, including planning instructional materials, measuring student growth, and motivating students.

In this section, we use information flow models to illustrate how people collaborate and communicate to achieve their objectives. Breakdowns are marked in each model.

Information Flow: Selection of and Requirements for Progress Monitoring Tools
[Diagram: flows among the local government, administrators, and tools (benchmark and progress monitoring tools), covering state standards, state/district requirements, Common Core, and state test predictions from progress monitoring.]

BREAKDOWNS IN THE MODEL
A. Not all students are able to meet state standards by the end of the school year.
B. Consistent tool use may be difficult to ensure for all teachers, and data may not be easy for teachers to interpret.
Information Flow: RTI/MTSS Decisions
[Diagram: flows among administrators, front-line teachers (classroom teachers and intervention specialists), and tools during data meetings, covering benchmark test scores, skill reports, scores from other platforms/tests, internal tools and decision rules, and school-level MTSS decisions at the student level.]

BREAKDOWNS IN THE MODEL
A. Desired goals from administrators may not be realistic for students in intervention.
B. Intervention specialists need more information for instructional design, such as how to group students and select learning goals.
C. Teachers may not be able to understand and use the measurement metrics, such as SGP, to inform their instruction to meet goals.

Information Flow: Intervention Procedure | Communication with Students and Parents
[Diagram: flows among classroom teachers, intervention specialists, students, and parents, mediated by Star benchmark and goal-setting tools, progress monitoring tools, other assessments, and informal classroom assessments. The flows cover goal setting for each student in intervention, skill reports, feedback on tests, communicated goals, feedback and encouragement, behavioral data, observations of students' performance, and evidence that the school is best supporting each child.]

BREAKDOWNS IN THE MODEL
A. Teachers may have some difficulty aligning goals with testing reports.
B. Growth in an intervention may not be reflected in classrooms, so consistent communication between classroom teachers and interventionists is needed.
C. It is hard to make learning goals meaningful for students.
SEQUENCE FLOW

Sequence flow models show the intent, interactions, and tools within workflows. In this model, we narrowed our scope and analyzed the whole intervention and progress monitoring process from an intervention teacher's perspective.

[Diagram: a sequence flow spanning the benchmark week, data interpretation, and a 6-8 week intervention cycle. People include school administrators, data coaches, teachers (including intervention teachers), and students; tools include the Star benchmark assessment, progress monitor assessments, goal-setting tools, and instructional resources. The flow runs from administering and monitoring the benchmark test, through data meetings that analyze scores and decide goals and intervention tiers (Tier 2 and Tier 3 students are notified; Tier 1 students do not need intervention), to planning and preparing skill-based interventions, progress monitor testing, generating individual and group skill reports, investigating whether a student who is not doing well has a learning disability (and, if so, moving the student to the learning support program), and adjusting instruction based on how students are doing.]

BREAKDOWNS IN THE MODEL
A. Score goals are used to predict state test performance and do not speak directly to interventions; teachers use skill reports to prepare interventions, with score goals serving only as references for the trend line.
B. Teachers want to know more details about the test content (e.g., what skills have been tested).
C. Teachers want more resources to guide skill-based interventions.
D. Frequent testing leads to student burnout, but data teams need trend lines to measure intervention effectiveness.
E. The criteria for deciding whether a student should move out of an intervention are not clear.
F. There is a lack of criteria for deciding whether students are improving "enough"; desired goals are sometimes unrealistic for students.
G. It is not clear how intervention specialists adjust instruction according to students' performance.
DESIGN INSIGHTS INTRODUCTION

When generating design insights, we focused on the MTSS workflow within the schools. We found three major phases in which data from Star assessments informed decisions:

1. Data Team Allocation of MTSS Resources
Administrators want a data-driven method to assign students to Tier 1, 2, or 3 support to ensure that the students with the greatest need receive adequate support. To leverage data effectively during the decision-making process, school administrators use scores from benchmarking tests as a prediction of state test performance. These scores are supplemented by data from other assessments or teacher observations.

2. Goal Setting and Intervention Planning
Once a student has been assigned to intervention support, teachers have to decide on an appropriate goal for that student as well as the instructional material that will help them meet that goal. Teachers use instructional planning reports as well as benchmarking scores to group students for instruction in interventions.

3. Progress Monitoring and Intervention Adjustments
Teaching teams rely on progress monitoring to understand whether a student in Tier 2 or Tier 3 instruction is receiving adequate support to get them caught up to their grade level. Progress monitoring assessments are used to measure student growth, qualify students for additional learning support, and may be used to adjust intervention groups or instructional material.

[Intervention Cycle diagram: Test, MTSS Allocation, Intervention Planning, Progress Monitoring. Each insight below maps to a stage of this cycle.]

INSIGHT MAPPING
1. Data visualizations about student performance levels are useful for teachers when deciding which students need support.
2. Teachers are interested in test validity and consider other sources to augment test data.
3. Goals reflect desired (not expected) student performance.
4. Teachers want to implement instruction at appropriate levels of difficulty for specific students and school groups so that all students can better their skills.
5. A number is not enough, but it helps. Teachers want measurable and actionable data.
6. Teachers want growth, but they don't use a reference metric against which to compare student test scores.
7. Teachers want to communicate goals to students in a clear and motivating way.
8. Limitations to using the Student Growth Percentile (SGP) include distrust in the validity of the metric, difficulty in understanding the measurement logic, and a perceived mismatch between SGP, desired goals, and realistic goals.
9. Teachers want to administer assessments in frequencies and durations that will maximize feedback with the smallest loss to instructional time.
10. Teachers distinguish performance measurements from growth measurements to make assessments more equitable across students.
INSIGHT 1:
Data visualizations about student performance levels are useful for teachers when deciding which students need support.

EVIDENCE:
"It's nice for the teachers to see this. When you color code it, you can see it better than just having numbers in there." - 4th Grade ELA Teacher

One challenge of the MTSS system is identifying the students who are most in need of support. As data teams decide which students receive the most targeted interventions, they need methods of comparing across students using multiple metrics. Although each student will have individualized needs, quick methods to assess the level and type of intervention a student requires allow data teams to make these decisions systematically. When it comes to interpreting student test results and acting upon those results, teachers find visual representations that display student skill needs tremendously helpful. Both within the Star tests (as notated by colored groups) and within internal tracking spreadsheets, teachers rely on visual representations of the data to create meaningful groups and target instruction. Color demarcations provide simple cut-offs for making decisions about resource allocation.

This is an example of the Star test documents that an elementary school administrator creates and shares with teachers during data meetings. Blank columns are left intentionally empty to encourage teachers to create goals for students.

WHO SAID IT: CLASSROOM TEACHERS, ADMINISTRATORS
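As an illustration of the color demarcations described above, here is a minimal sketch of score-to-band coding. The cut-off percentiles, band names, and student names are hypothetical, not Star's actual thresholds:

```python
# Hypothetical percentile cut-offs; real benchmarks vary by school and product.
BANDS = [
    (25, "red", "candidate for Tier 2/3 intervention"),
    (40, "yellow", "on watch"),
    (100, "green", "at or above benchmark"),
]

def color_code(percentile):
    """Map a benchmark percentile to a color band for data-meeting spreadsheets."""
    for cutoff, color, meaning in BANDS:
        if percentile <= cutoff:
            return color, meaning
    raise ValueError(f"percentile out of range: {percentile}")

for student, pct in {"Ana": 12, "Ben": 38, "Cara": 71}.items():
    color, meaning = color_code(pct)
    print(f"{student}: {pct}th percentile -> {color} ({meaning})")
```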
INSIGHT 2:
Teachers are interested in test validity and consider other sources to augment test data.

EVIDENCE:
"It's hard to tell if a student really knows a concept on a Star test or if they wanted to be a stinker that day." - Intervention Teacher
"They can have a bad day and score a hundred point difference. That's a huge range. If you know they didn't try their hardest or the room was too noisy...the next day you test them again and they jump up." - Elementary Classroom Teacher

Whether the purpose of the assessment is benchmarking or measuring student progress, instructional teams rely on additional factors, including social and emotional measurements, in-class assessments, and observations, to check the validity of benchmarking assessments. Benchmarking tests are largely used to determine how to allocate resources for different levels of student need. Scores on these assessments guide decisions but are not conclusive for students who straddle the thresholds between adjacent MTSS tiers. Because schools tend to be resource-limited, teachers and administrators interpret benchmarking data alongside social-emotional notes, in-class observations, and attendance data to determine which students have the highest need for MTSS resources. By considering multiple information sources, teachers and administrators feel more confident in their resource allocation decisions.

Recognizing the limitations of benchmarking tests is of value to the instructional team. One example of these limitations is that benchmarking assessments are one-day snapshots of students that might not reflect classroom instructional activities. Teachers also expressed interest in knowing meta-features of the test taking (e.g., how long a student took compared to their peers). Teachers likewise question the validity of making classroom decisions based on progress monitoring data. While intervention efforts focus on building particular skill areas, teachers do not have a good indication that those skills are actually covered in a particular test. This makes it difficult for teachers to interpret whether a skill has been mastered or whether it simply was not assessed.

WHO SAID IT: CLASSROOM TEACHERS, INTERVENTION TEACHERS, ADMINISTRATORS
INSIGHT 3:
Goals reflect desired rather than expected student performance.

EVIDENCE:
"The moderate growth box wasn't enough to get them up to proficiency so we set up more rigorous goals. Kiddos may be growing, but it's not enough." - Instructional Support Teacher
"We set the goal because that's where we truly want them. We want them to be at the 60th percentile." - Middle School Administrator

Goals for students often come from a performance-driven, top-down approach as a response to predictions on state standards. Because schools are expected to have a certain percentage of their students pass the state standardized assessments (Adequate Yearly Progress, 2009) and show adequate growth for their grade level, student goals are set with the aim of achieving these outcomes.

However, these goals are not likely to be attainable for the students in Tier 2 or Tier 3 support. The growth required of these students to reach satisfactory performance on state assessments is extremely high relative to their actual level of performance. In effect, these goals ask students to recover all the prerequisite knowledge expected to have been learned by their grade level in addition to grade-level content.

Intervention teachers are aware of this limitation in the set goals, so they are often forced to track a student's "realistic" goal in addition to the numeric goal that is set in the system. Because the goals are not realistic from the start, schools do not have a good measurement to determine whether their interventions are as useful as they expect.

WHO SAID IT: CLASSROOM TEACHERS, INTERVENTION TEACHERS, ADMINISTRATORS
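A worked example of why such catch-up goals are often unattainable; every number below is hypothetical and chosen only to illustrate the arithmetic:

```python
# Hypothetical scaled-score figures for one school year.
current_score = 420          # student's benchmark scaled score
proficiency_cutoff = 520     # end-of-year score implied by the state standard
typical_annual_growth = 40   # typical growth for peers starting at the same score

required_growth = proficiency_cutoff - current_score           # 100 points
multiple_of_typical = required_growth / typical_annual_growth  # 2.5x

print(f"Required growth: {required_growth} points "
      f"({multiple_of_typical:.1f}x the typical rate for academic peers)")
```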
INSIGHT 4:
Teachers want to implement instruction at appropriate levels of difficulty for specific students and school groups so that all students can better their skills.

EVIDENCE:
"It's important to set goals, but if you set them too high and they fail, students think they are a failure." - 3rd Grade Math Teacher
"We use a scripted intervention. We do as much as we feel is at a student's level and just above to push them a little bit more, but we can cut them off a little bit short." - Intervention Reading Teacher

Teachers often group their students as a means of providing individualized instruction while balancing their time and resources. By grouping students based on missing skills, interests, or other measures, teachers can tailor instruction to the needs of 2-3 students instead of the entire classroom.

Grouped instruction presents several challenges. First, teachers have to decide how best to group students. In making this decision, teachers have to match their observational data about students with performance measurements: particular students might benefit from a group with mixed ability levels, while others might benefit from working with students at the same level so they can have very targeted skill practice. Importantly, the group composition has to allow all members of the group to learn. Second, teachers have to decide on instructional content for each group. Each student has a different set of skills with which they need the most practice. Teachers must balance the groups so they meet time and resource allocations but are small enough that the skills that would benefit the entire group will be enough to help each student learn. Third, teachers have to tie grouped instruction to individual goals and test results. Despite grouped instruction, all goal-setting happens individually. This does not always match teacher desires: they often have the same goal across a group of learners. Interpreting progress towards a goal is therefore tricky: teachers have to determine whether the intervention, the grouping, or something else is responsible for performance.

WHO SAID IT: CLASSROOM TEACHERS, INTERVENTION TEACHERS, ADMINISTRATORS
INSIGHT 5:
A number is not enough, but it helps. Teachers want measurable and actionable data.

EVIDENCE:
"I'm not getting a lot of feedback. Where are they, and what skill set do they need the most?" - 4th Grade ELA Teacher
"We need to really strengthen those building blocks before we can ask students to conceptualize harder things." - Special Education Teacher

The Scaled Score or percentile reported for a student by Star is important to the instructional team because it provides a research-based, numeric foundation on which to make decisions. Teams can use quantitative methods to qualify students for learning support, to determine the overall performance of a class or a school, and to measure changes in performance. Quantitative results from these tests give concrete evidence about student performance.

However, these results are not actionable. They identify who is struggling, but they do not indicate why students are struggling or what instruction would most help them. The instructional team relies on additional reports, outside resources, and other diagnostics to determine why a student receives a particular numeric score. They require resources that identify not only who needs help, but specifically what next steps can be taken to help a student. This is particularly true for intervention teachers interpreting results from low-performing students. Test performance provides little additional information for these teachers: the test likely only communicates that the student is performing poorly in all areas, which is something the teacher already knows. For these students, who likely have the highest need, the skill-based report is not actionable because it does not identify the most important skill gaps.

Measurable data is important not just for rigorous MTSS decision processes that meet state requirements but also as an internal and external communication tool. In data teams, teachers and administrators rely on numeric measures to reflect student growth. These measurements also communicate student progress, and the teaching team's responses to it, to parents.

WHO SAID IT: CLASSROOM TEACHERS, INTERVENTION TEACHERS, ADMINISTRATORS
INSIGHT 6:
Teachers want growth, but they don't use a standardized reference metric against which to compare student test scores.

EVIDENCE:
"We want to see constant and steady progress, and set goals to make sure this progress continues to happen." - Middle School Principal
"I tell the students, 'Any improvement and you're not failing.'" - Elementary School ELA Teacher

Due to the difficulty of achieving the desired goal of "closing the gap" for students in Tier 2 or Tier 3 support, educators, particularly intervention specialists, are more interested in their students demonstrating growth through the intervention process. Rather than expecting mastery of the skill gaps that originally qualified a student for intervention support, teachers consider the intervention effective if the student is making progress.

Although the instructional team is observing student growth, they do not often use norm-referenced growth indicators such as the SGP. Teachers often look for growth in their students, but they do not compare that growth to the expected growth of other students at equivalent performance levels. Because of this, the growth metrics might demonstrate that a student is learning, but they do not indicate whether that learning is happening at an expected rate.

This has the potential to sustain and even increase performance gaps between Tier 1, Tier 2, and Tier 3 students. Even if a student in a tier with high levels of support is growing, this growth might be too slow to catch them up to their classmates.

WHO SAID IT: CLASSROOM TEACHERS, INTERVENTION TEACHERS, ADMINISTRATORS
INSIGHT 7:
Teachers want to communicate goals to students in a clear and motivating way.

EVIDENCE:
"It is important that the students know that the tests matter so they give it their best try." - Intervention Teacher
"It's good to set goals, but if you set them too high and fail, they think they are a failure." - 3rd Grade Math Teacher
"Students present their own goals and data and what they'd like to work towards." - Special Education Teacher
"You let five people skip you in line this year. Are you going to change that? Are you going to do something about that? What do you want to do?" - 5th Grade Math Teacher

The significance of performing well on state tests may not be as obvious to students as it is to educators. This extends to predictive benchmarking assessments: while the measures of performance and growth are useful to teachers and administrators, these numbers do not have any intrinsic meaning to students. Communicating the importance of these numbers therefore has two purposes: first, to ensure that students perform to their full capability because they understand the stakes, and second, to ensure that students are motivated to improve the numbers reported in the system.

One communication method used by nearly all the interviewees was conferencing with students. Following a benchmarking assessment, educators mentioned that they would hold discussions with students about their performance and its causal factors. By associating goals and performance with student actions, the goals, as well as ways to change behavior to meet them, become more realistic. However, this may not be standard practice for all educators, as our interviewees self-selected into these conferences.

Teachers also place high value on goal transparency. By making goals and progress evident, both at the class and the individual level, each student is aware of expectations and can measure themselves against them. This transparency allowed teachers to incentivize performance. Clarity about performance expectations helps align teacher and student expectations and makes the testing process meaningful for students.

WHO SAID IT: CLASSROOM TEACHERS, INTERVENTION TEACHERS, ADMINISTRATORS
INSIGHT 8:
Limitations to using the Student Growth Percentile (SGP) include distrust in the validity of the metric, difficulty in understanding the measurement logic, and a perceived mismatch between SGP, desired goals, and realistic goals.

EVIDENCE:
"You can only get SGP Fall to Winter and Winter to Spring. It only considers two tests." - Middle School Administrator
"We see low SGP when we have kids that have a lot of attention and focusing difficulties. Either their Star is going up and down, one time they're really focused and another time they're not, or the teacher's seeing different behavior in the classroom than what we're seeing in that assessment." - Elementary School Administrator
"The SGP does not show up on the progress monitoring report, so it is not a useful measurement." - Intervention Teacher

The majority of our interviewees did not use SGP, even if they expressed interest in measuring growth for students. The reasons for this fall into three groups.
1. Teachers question the validity of the measurement. Because SGP compares student performance between only the current test and the previous test, a particularly strong or poor testing day could significantly impact the norm-referenced group against which the student is compared, making the growth metric difficult to interpret.
2. Teachers do not understand what SGP represents. Teachers familiar with SGP knew it was intended to measure growth, but few understood how to interpret its calculated value or how to relate it to measuring progress.
3. Setting goals through SGP does not allow the possibility of meeting the desired goals. If a student is very low performing (e.g., at the 2nd percentile), the SGP required for that student to reach the state benchmark is much too high to be reasonable. Interviewees acknowledged this reality but said their ultimate goal is to get students to grade level. SGP paints a bleak picture: even if a student is growing, they are not likely to advance.

WHO SAID IT: INTERVENTION TEACHERS, ADMINISTRATORS
INSIGHT 9:
Teachers want to administer assessments in frequencies and durations that will maximize feedback with the smallest loss to instructional time.

EVIDENCE:
"[Special education teachers are] trying to balance teaching time with not testing too much. And so I think that's a consistent kind of struggle for them." - Administrator
"We Star test once every other week, which feels insane, but that's what we've been told to do so that's what we're doing." - Special Education Teacher

Educators rarely feel that they have enough time. This is exacerbated for students in interventions, who require significant supplemental instruction in hopes of getting them caught up to their grade level. For this same group of students, progress monitoring is imperative to evaluate the effectiveness of an intervention and to identify whether a student requires additional support. Intervention teachers have a difficult time balancing their instruction with measuring its effectiveness.

Adding to this issue is the limited student learning that can occur in these short time spans. Based on school schedules, intervention teachers may only have time to work on a very small number of skills with their students. While students may show improvement with this deliberate practice, the adaptive tests may not be refined enough to detect small improvements in these highly localized areas. The feedback may not be meaningful for teachers if the progress monitoring tests do not cover the areas of practice.

WHO SAID IT: CLASSROOM TEACHERS, INTERVENTION TEACHERS, ADMINISTRATORS
INSIGHT 10:
Teachers distinguish performance measurements from growth measurements to make assessments more equitable across students.

EVIDENCE:
"It's the higher kids who are feeling like they're not achieving anything and just not performing where they should be." - 5th Grade Math Teacher
"It's not okay just to be at grade level. The growth is what's important." - 4th Grade ELA Teacher

In classrooms, teachers are consistently trying to tailor instruction to all levels of students. To maintain student interest and motivation regardless of level, teachers focus on both growth and performance. The class goal tends to be reaching benchmark; individual goals focus on how the student can grow, either to meet that goal or to continue learning. Growth goals are particularly important for very high and very low achieving students, for whom the performance goal is not meaningful: high performing students achieve performance goals far too easily, while those goals are far out of reach for the lowest achieving students. Having a meaningful way to monitor and demonstrate progress to these students can therefore help them remain motivated without "punishing" them for their particular level.

WHO SAID IT: CLASSROOM TEACHERS, ADMINISTRATORS
PERSONAS
[Persona profiles appear as full-page graphics in the original report.]
IDEATION

In this section, we present our product pitches. Vittore looks forward to vetting these ideas with Renaissance Learning and using them as catalysts for further visioning of solutions.

[Intervention Cycle diagram: each pitch is mapped to the stage of the cycle it addresses. Intervention phase 2: Pitch Ideas 1-4. Intervention phase 3: Pitch Ideas 5-8.]

PITCH IDEAS
1. InquireMath
2. Fantasy Groups
3. Goal Guide
4. Mission of the Week
5. Mistake-it-til-you-Make-it
6. Testing Bot Buddy
7. Team Progress Monitoring
8. Goal Map Challenge
PITCH 1: InquireMath

How might we provide teachers with standards-aligned and goal-driven content that is relevant to student motivations?

INSIGHTS: 5, 7
5: A number is not enough, but it helps. Teachers want measurable and actionable data.
7: Teachers want to communicate goals to students in a clear and motivating way.

Mr. Wisdom, Intervention Teacher: "I want to find more inquiry and project-based learning examples for my groups. They should also align with standards so I can target learning goals in my intervention."

SOLUTION
InquireMath is a crowd-sourced library of inquiry and project-based math learning examples which are aligned to state standards and directly mapped to score-driven goals. Currently, when students complete an intervention, a skill report is sent to teachers. With InquireMath, teachers will have a library of project and inquiry-based math lessons which align to state standards that students need to learn next. Teachers can set goals for students to learn specific skills and associated vetted inquiry and project-based activities. Because each activity is connected to targeted state standards, goals for activity completion correspond with an estimated increase in standardized test scores. InquireMath also provides a mobile app for teachers to easily digitize and share inquiry and project-based activities, which can then be rated by other teachers based on their goal achievement results.
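One way to picture the data model behind such a library is sketched below. The class, fields, standard code, and matching heuristic are hypothetical illustrations, not a Renaissance schema:

```python
from dataclasses import dataclass, field

@dataclass
class Activity:
    """A crowd-sourced inquiry/project-based lesson in the InquireMath library."""
    title: str
    standards: list[str]    # state standard codes the activity targets
    skills: list[str]       # skills drawn from student skill reports
    ratings: list[int] = field(default_factory=list)  # peer ratings, 1-5

    def average_rating(self) -> float:
        return sum(self.ratings) / len(self.ratings) if self.ratings else 0.0

def match_activities(library: list[Activity], missing_skills: set[str]) -> list[Activity]:
    """Rank library activities by overlap with a student's missing skills,
    breaking ties by peer rating."""
    scored = [(len(missing_skills & set(a.skills)), a.average_rating(), a) for a in library]
    ranked = sorted(scored, key=lambda t: (t[0], t[1]), reverse=True)
    return [a for overlap, _, a in ranked if overlap]

# Usage: a teacher looks up activities for a student missing two fraction skills.
library = [Activity("Pizza Fractions Project", ["3.NF.A.1"],
                    ["fraction models", "equivalence"], [5, 4])]
print([a.title for a in match_activities(library, {"fraction models", "unit rates"})])
```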
STORYBOARD
1. Mr. Wisdom doesn't know what kind of instructional intervention to use that connects to testable state standards.
2. He has an idea for an inquiry-driven project for his students to learn some of the skills.
3. He can easily take a picture of the project, tie it to particular skills, and post it to the repository to share ideas with other teachers.
4. Other teachers can learn about project ideas, see what skills are relevant, and give feedback on instructional effectiveness.
EVIDENCE SUPPORTING USER RESEARCH
Teachers in our user research expressed a desire to communicate goals to students in a way that is clear and motivating. With InquireMath, abstract scores become real: physical projects and lines of inquiry show students how interesting and meaningful math goal achievement can be. Teachers and interventionists already rely on access to resources for their interventions and instruction, so InquireMath will provide a library of material that is teacher-created and content-aligned. Teachers have told us "I kind of have to build my own curriculum" and "if we can justify it, our school will get it for us if it's going to help our students." With InquireMath, the process of finding instructional material is simplified.
SUPPORTING LITERATURE
Past research shows that students are more engaged in the process of learning math when it is delivered in a context in which they are explorers and ideators: "When students think their role is not to reproduce a method but to come up with an idea, everything changes" (Duckworth, 1991). With InquireMath, teachers can find lessons that will challenge students to come up with these ideas. Projects also provide real-world application contexts for students to learn math and produce significantly higher results than traditional methods, both on standardized tests (Boaler, 1998) and later in life (Boaler, 2005). InquireMath will empower teachers to make standards-aligned content relevant to their students while helping students to achieve higher scores.
STORYBOARD
PITCH 2:
Fantasy Groups
[Figure: Intervention Cycle diagram (MTSS): Intervention Planning, Allocation, Progress Monitoring, Test]
How might we help teachers make and monitor the growth of student groups?
INSIGHTS
Insight 4
4: Teachers want to implement instruction at appropriate levels of difficulty for specific students and school groups so that all students can better their skills.
SOLUTION
Miss Rainey Intervention Teacher “I’ve been grouping students during intervention instruction. I want to know how I can use students’ test results and learning history data to improve my approach.”
Fantasy Groups is a tool that helps teachers play with student groupings and predict progress towards goals, just as a fantasy football scoring predictor does in a sports context. The tool can also help teachers set goals for whole groups based on concept-level improvement, scaled score increase, or SGP per student. Fantasy Groups leverages students' stats, e.g., standardized test scores, Star scores, and other data, to simulate and make predictions about effective groupings so each student has the best chance of success.
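As a rough illustration of the simulation idea, the sketch below scores a candidate grouping from student stats. The prediction rule (mean SGP plus a small bonus for mixed ability levels, echoing Ward, 1987) is an invented heuristic for demonstration, not Renaissance's model.

```python
from statistics import mean, pstdev

def predict_group_growth(group: list[dict]) -> float:
    """Score a candidate group: mean SGP plus a capped bonus for
    heterogeneous scaled scores (mixed-ability groups, per Ward, 1987)."""
    sgps = [s["sgp"] for s in group]
    scores = [s["scaled_score"] for s in group]
    heterogeneity_bonus = min(pstdev(scores) / 10, 5) if len(group) > 1 else 0
    return mean(sgps) + heterogeneity_bonus

# Sample "stats"; a teacher would compare this score across groupings.
students = [
    {"name": "Ana", "scaled_score": 610, "sgp": 45},
    {"name": "Ben", "scaled_score": 560, "sgp": 60},
    {"name": "Cal", "scaled_score": 700, "sgp": 35},
]
print(round(predict_group_growth(students), 1))
```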
Ms. Rainey uses groups in her interventions to try to help students learn skills that the whole group is missing. But she has no way to know if everyone will learn from being in the groups.
With Fantasy Groups, student "stats" in content areas can be calculated. She can virtually put students in groups, and the tool predicts how well the group will work using past Star testing stats.
It also provides suggestions about strategies for making instruction effective for the entire group.
Ms. Rainey is happy because she can make her student groups with confidence and doesn't have to wait out a unit or intervention cycle in order to adjust groups.
EVIDENCE SUPPORTING USER RESEARCH Our research revealed multiple facets of the importance of student grouping. Teachers want to provide group intervention according to student needs, so they are constantly monitoring, evaluating, and deliberating over student grouping decisions both in RTI/MTSS and in the general classroom. Some teachers group students into reading groups according to which concepts they need to improve, and students within each reading group vary in performance level. Star currently suggests that students be grouped by common scores, but Fantasy Groups allows teachers to simulate different student groupings in a way that makes sense for instruction.
SUPPORTING LITERATURE The literature states that groups containing students with varying ability levels are better than homogeneous groups, because students can help each other learn and answer questions. Group rewards also play an "important role in student achievement gains in instructional groups." When teachers make grouping decisions, many factors may be considered, and it is better when placement decisions are made by a team of teachers (Ward, 1987). Fantasy Groups is designed to assist in this decision-making process and create groups that are best for student learning.
PITCH 3:
Goal Guide
How might we help make numerical data easier to understand for teachers?
STORYBOARD
[Figure: Intervention Cycle diagram (MTSS): Intervention Planning, Allocation, Progress Monitoring, Test]
INSIGHTS
Insights 3, 6, 8
3: Goals reflect desired rather than expected student performance.
6: Teachers want growth, but they don't use a reference metric against which to compare student test scores.
8: Limitations to using SGP include distrust in the validity of the metric, difficulty in understanding the measurement logic, and perceived mismatch between SGP, desired goals, and realistic goals.
SOLUTION
Miss Rainey Intervention Teacher "The goal setting tool seems to have useful metrics to define and measure students' progress. I want to know what these metrics mean and how I can make full use of them."
Goal Guide is a tool that teachers can use to help set intervention goals for their students. When setting an intervention goal, the teacher uses Goal Guide to understand the implications and plausibility of the goal. Goal Guide uses worked examples to test the teacher's understanding of what exactly the goal means for students at different levels, including which SGP a student would need in order to reach it.
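The core check Goal Guide performs could look something like the sketch below: given a student's current score and the entered goal, estimate the growth percentile the goal implies. The real SGP methodology uses quantile regression over prior score histories (Betebenner, 2011); here peer_gains is a stand-in list of typical gains for academic peers, so the numbers are illustrative only.

```python
from bisect import bisect_left

def implied_sgp(current_score: int, goal_score: int,
                peer_gains: list[int]) -> int:
    """Percentile of the score gain a student would need to hit the goal."""
    required_gain = goal_score - current_score
    gains = sorted(peer_gains)
    rank = bisect_left(gains, required_gain)
    return round(100 * rank / len(gains))

# A goal requiring a near-100th-percentile gain is a desired outcome,
# not an expected one (Insight 3).
print(implied_sgp(600, 680, peer_gains=[10, 20, 25, 30, 35, 40, 55, 70]))
```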
Ms. Rainey doesn't know how to set a goal that her students will achieve, or how to tell whether her students are growing enough.
Through worked examples, Goal Guide takes Ms. Rainey through a process of deciding on reasonable goals.
The guide also uses worked examples to compare student growth for several student levels.
Now Ms. Rainey can set realistic goals and measure her students’ growth towards those goals.
EVIDENCE SUPPORTING USER RESEARCH In our discussions with teachers and interventionists, we found that teachers are not using the goals they set for their intended purpose of providing a measure of instructional efficacy. The numeric Scaled Score goal is entered as part of a routine procedure wherein each student must achieve the state standards regardless of their current performance level. For some students, this achievement is an impossible ideal. Goal Guide is a solution to help teachers learn how to enter goals that are realistic for their students to achieve.
SUPPORTING LITERATURE Properly setting goals is vital to the RTI process, because "when targets are met or exceeded, professionals can be assured that the intervention has met its established objectives and adjustments of the goals can be made along the way to the outcome" (Shapiro, 2014). Our literature review supports what we found in our user research, namely that many non-proficient students will not be able to attain proficiency (Betebenner, 2011) and that realistic goal setting often requires training (Schunk, 1990). If teachers don't know how to set realistic goals, they cannot make the most of Renaissance's goal-setting options. Goal Guide can train teachers to become masters of realistic goal-setting.
How might we help make goal data actionable and teachable for teachers?
PITCH 4:
Mission of the Week
[Figure: Intervention Cycle diagram (MTSS): Intervention Planning, Allocation, Progress Monitoring, Test]
INSIGHTS
Insights 5, 7, 10
5: A number is not enough, but it helps. Teachers want measurable and actionable data.
7: Teachers want to communicate goals to students in a clear and motivating way.
10: Teachers distinguish performance measurements from growth measurements to make assessments more equitable across students.
SOLUTION
Miss Rainey Intervention Teacher "Learning should be goal-oriented, trackable, and encouraging. I want my students to see themselves improving through hard work."
Mission of the Week is a tool designed to enhance the goal-setting process. It is a student-led, self-growth system with gamification features that sets interim goals for students and tracks their everyday progress. Interim goals are shown as skill-based tasks that accompany the traditional Scaled Score growth target. Each interim task is then tracked with a badge that students win by accomplishing their Missions. Skills are categorized by similarity, and each student can choose the visual theme of their badges depending on their interest in sports, comic books, cute animals, or other fun novelties.
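A minimal sketch of the mission structure follows. The skill names, badge themes, and weekly cadence are illustrative assumptions; the point is only that one numeric goal becomes a sequence of skill-based missions with earnable badges.

```python
# Skills standing in for one Scaled Score goal; these would be derived
# from state standards in the real tool.
skills_toward_goal = ["fractions: equivalence", "fractions: comparison",
                      "ratios: unit rates"]
badge_theme = "space"  # student-chosen: sports, comics, animals, ...

missions = [
    {"week": i + 1, "skill": skill,
     "badge": f"{badge_theme}-badge-{i + 1}", "earned": False}
    for i, skill in enumerate(skills_toward_goal)
]

def complete_mission(missions: list[dict], week: int) -> None:
    # Badge is awarded when the week's mastery check is passed.
    missions[week - 1]["earned"] = True

complete_mission(missions, 1)
print([m["badge"] for m in missions if m["earned"]])  # ['space-badge-1']
```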
STORYBOARD
Ms. Rainey shows her student Scottie his Scaled Score, but he doesn't really know what it means.
With Mission of the Week, the skills Scottie should work on are broken down into categories and levels based on state standards. Ms. Rainey can select the most important ones she wants Scottie to focus on.
Scottie sees these skills as opportunities to earn badges, and is excited about the chance to progress. He can earn different types of badges based on his preferences.
Now Scottie’s progress is meaningful to him, and he is excited to work hard to earn more badges.
EVIDENCE SUPPORTING USER RESEARCH In our research, we found that goals are often not realistic for students and that the purely numeric goals from the Goal-Setting Wizard cannot directly guide teaching, since they do not correspond to skills. On top of that, students may not understand the significance of the numeric goals in the Goal-Setting Wizard. By breaking down numeric goals into component skills, students gain a better understanding of the meaning and relevance of the goals they are reaching for.
SUPPORTING LITERATURE The literature on mastery makes it clear that performance improves when a highly structured activity is paired with an explicit goal (Ericsson, 2008). Within the current model, goal-setting is used to set a target score for a student. Mission of the Week reframes this target as a series of skills to learn. Within this framework, students can receive feedback on how successfully they learned a skill. This is supported by research that emphasizes the importance of setting learning goals for students rather than performance goals (Dweck, 1986). Badging also serves to incentivize students (Jeffrey & Adomdza, publication pending), which aids in their goal buy-in.
How might we help make goal data both actionable and teachable for teachers and relevant and understandable for students?
PITCH 5:
Mistake-it-til-you-Make-it
STORYBOARD
[Figure: Intervention Cycle diagram (MTSS): Intervention Planning, Allocation, Progress Monitoring, Test]
INSIGHTS
Insights 7, 10
7: Teachers want to communicate goals to students in a clear and motivating way.
10: Teachers distinguish performance measurements from growth measurements to make assessments more equitable across students.
SOLUTION
Miss Rainey Intervention Teacher “Students cannot improve if they don’t know what mistakes they have made. They should view mistakes as opportunities for growth.”
After a student takes a Star Assessment, two things happen. First, they are given a Scaled Score representing their proficiency. Second, their teacher is given a report of which skills to teach next. Mistake-it-til-you-Make-it collects information about which concepts the student had the most problems with on the assessment, and then presents this information as a list of concepts along with the number of mistakes made on each. Mistake-it-til-you-Make-it reminds students and teachers that mistakes are good, and that if students got everything right, they wouldn't be learning. Teachers then work with students to set a goal to make fewer mistakes on a concept on the next assessment. Hopefully, next week the students will make new mistakes and have new opportunities to learn.
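The underlying report is essentially a tally of errors by concept, as in the sketch below. The concept tags and the goal-framing rule (aim for at least one fewer mistake next time) are our own illustrative choices, not Star's reporting format.

```python
from collections import Counter

# (concept, correct?) pairs from one Star sitting -- sample data only.
responses = [
    ("fractions", False), ("fractions", False), ("decimals", True),
    ("decimals", False), ("ratios", True),
]

mistakes = Counter(concept for concept, correct in responses if not correct)
for concept, count in mistakes.most_common():
    # Goal framing: make fewer mistakes on this concept next assessment.
    print(f"{concept}: {count} mistakes -- goal: at most {max(count - 1, 0)}")
```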
Scottie takes his Star assessment, but has no idea where he made mistakes.
Mistake-it tracks Scottie’s errors and reports areas with the most errors, as well as tracking the areas where the most gains were made, to Ms. Rainey.
Ms. Rainey presents these mistakes as opportunities to learn. "You don't know how to do this… yet!" They then work together to set goals to overcome these mistakes.
Next assessment, Scottie gets those concepts correct, and his score goes up! He's made new mistakes, but he is excited for the next opportunity to learn!
EVIDENCE
SUPPORTING USER RESEARCH Students in the bottom tier of MTSS often have confidence issues around learning their subjects. They will make great progress within their intervention groups, but when they go back to the classroom they are reminded how far behind they are comparatively and can lose their confidence. By reframing their struggles and mistakes as opportunities to learn, Mistake-it-til-you-Make-it helps teachers instill a growth mindset in their students, so they can say, "I don't know that material yet." This addresses another issue we discovered: many interventionists care about progress rather than final scores. As long as students are learning from the mistakes they made in previous weeks, this growth metric of upward progress is met.
SUPPORTING LITERATURE There is a growing body of literature on the positive effects of a growth mindset: "More growth-minded individuals… showed superior accuracy after mistakes compared with individuals endorsing a more fixed mindset" (Moser, 2011). By changing the student perspective, Mistake-it-til-you-Make-it celebrates student mistakes as opportunities to grow and learn; each concept a student made a mistake on is merely something they have yet to learn. "Every time a student makes a mistake in math, they grow a synapse" (Boaler, 2016). With Mistake-it-til-you-Make-it, Renaissance Learning could help students' brains grow.
How might we help make assessment data actionable for teachers?
PITCH 6:
Testing Bot Buddy
[Figure: Intervention Cycle diagram (MTSS): Intervention Planning, Allocation, Progress Monitoring, Test]
INSIGHTS
Insights 2, 6, 9
2: Teachers are interested in test validity and consider other sources to augment test data.
6: Teachers want growth, but they don't use a reference metric against which to compare student test scores.
9: Teachers want to administer assessments at frequencies and durations that maximize feedback with the smallest loss of instructional time.
Mr. Wisdom Intervention Teacher
"There is much more I can learn about my students through assessment than just their scores. Knowing that they have made mistakes is not enough. I need to know why."
STORYBOARD
Mr. Wisdom sees Star tests as a black box. He wishes he knew what was actually being tested so he knew what his students didn’t understand!
Testing Bot Buddy monitors the time students take on each problem and tracks misconceptions and mistake patterns each student makes throughout the test.
Testing Bot Buddy sends a report to the teachers that identifies these mistake and misconception patterns.
Mr. Wisdom knows how to better target instruction and intervention, and he can communicate with students about their skill gaps as demonstrated on Star!
SOLUTION
Testing Bot Buddy (TBB) is an automated chatbot solution that helps teachers understand where their students are struggling on assessments. During an assessment, Buddy records the problems on which students make mistakes as well as how long students spend on each question. Each multiple-choice question is mapped onto skills at a higher granularity, and each multiple-choice option is associated with a misconception. If teachers know which misconceptions a student frequently fell prey to, they can use this information to guide their instructional planning. Additionally, by monitoring time spent on test questions, TBB helps teachers compare a student's time spent to that of other students within the same percentile. Considering time on assessment questions in conjunction with misconceptions can give teachers insight into student errors.
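The sketch below shows one plausible wiring of the misconception mapping. The item bank, the misconception tags, and the report format are assumptions about how TBB might work, not Star internals.

```python
from collections import Counter

# Each wrong option on an item is tagged with the misconception it signals.
# Hypothetical item bank for illustration only.
ITEM_MISCONCEPTIONS = {
    ("item_12", "B"): "adds denominators when adding fractions",
    ("item_12", "C"): "ignores the denominator entirely",
    ("item_30", "A"): "adds denominators when adding fractions",
}

def misconception_report(answers: list[tuple[str, str, float]]) -> Counter:
    """answers: (item_id, chosen_option, seconds_spent) per question."""
    seen = Counter()
    for item_id, option, seconds in answers:
        tag = ITEM_MISCONCEPTIONS.get((item_id, option))
        if tag:                      # only wrong options carry a tag
            seen[tag] += 1
    return seen

answers = [("item_12", "B", 41.0), ("item_30", "A", 18.5),
           ("item_07", "D", 22.0)]
print(misconception_report(answers).most_common(1))
# [('adds denominators when adding fractions', 2)]
```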
EVIDENCE SUPPORTING USER RESEARCH Teachers expressed interest in seeing which questions students miss so that they have a better idea of what content to cover during instruction. Testing Bot Buddy aims to provide teachers with this finer-grained information: the time spent on each question and the misconception associated with each multiple-choice response.
SUPPORTING LITERATURE Setting appropriate goals involves a decision process that takes into account benchmarks, current performance level, and anticipated growth. Testing Bot Buddy's assessment monitoring gives teachers insight into features of student performance, such as time spent on each question and the misconceptions associated with each multiple-choice response, that are not typically monitored by assessment software. Moreover, self-observation, deliberate attention to one's own behaviors, informs and motivates behavioral change (Schunk, 1990).
PITCH 7:
Team Progress Monitoring
[Figure: Intervention Cycle diagram (MTSS): Intervention Planning, Allocation, Progress Monitoring, Test]
How might we help teachers make and monitor the growth of groups?
INSIGHTS
Insights 4, 9
4: Teachers want to implement instruction at appropriate levels of difficulty for specific students and school groups so that all students can better their skills.
9: Teachers want to administer assessments at frequencies and durations that maximize feedback with the smallest loss of instructional time.
SOLUTION
Mr. Wisdom Intervention Teacher
“I want to address both cognitive and social aspects of learning. Goal setting and progress monitoring for a team may bring extra benefits to each individual student.”
STORYBOARD
Team Progress Monitoring is a group-based progress monitoring practice test interface that allows students to teach each other. Group goals regarding peer interactions, designed to help students achieve their individual goals, are set for progress monitoring assessments. Students work through a computer-assisted interface to solve problems individually on the Star test, but they can ask for and provide assistance to one another, allowing students to learn from each other. A report detailing the types of help requested and given can assist the teacher in monitoring progress, identifying needs, and creating effective groups.
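A minimal sketch of the logging and reporting idea, assuming the interface records each peer exchange as a (helper, helped, concept) triple; the names and concepts are sample data:

```python
from collections import Counter

# Each peer exchange logged by the practice-test interface.
exchanges = [("Dee", "Sam", "unit rates"),
             ("Sam", "Dee", "percents"),
             ("Dee", "Kim", "unit rates")]

help_given = Counter(helper for helper, _, _ in exchanges)
concepts_needing_help = Counter(concept for _, _, concept in exchanges)

print(help_given.most_common())             # who is teaching whom
print(concepts_needing_help.most_common())  # where the group still struggles
```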
Mr. Wisdom splits his students into groups to work on building skills, but each of his students has a different goal. He doesn’t know how to make group work meaningful for students when it comes time for testing.
With Team Progress Monitoring, Mr. Wisdom can set teamwork goals for his students related to asking and answering questions, seeking help, and peer collaboration.
A student taking the test can provide advice to another student, while the system tracks concept mastery and peer-to-peer exchanges.
Now, Mr. Wisdom knows how students are learning from each other to make effective intervention groups and help everyone meet their goals.
EVIDENCE SUPPORTING USER RESEARCH Many of the teachers we interviewed wanted to use groups for instruction, and specifically designed their interventions around what best meets the needs of a group of students. Because of this, students are grouped based on how their needs match one another, rather than based on the ability of the students. With “Team Progress Monitoring,” the progression of the group is measured by how well each student learns skills.
SUPPORTING LITERATURE “The group reward structure plays an important part in students’ achievement gains in instructional groups. Group rewards enhance the learning of individual students only if group members are held individually accountable and rewarded for their own learning as well as for the group’s products and performance” (Seijts & Latham, 2005). With “Team Progress Monitoring”, the students are incentivized to help one another to improve the performance of their team.
How might we help make goal data actionable and teachable for teachers?
PITCH 8:
Goal Map Challenge
[Figure: Intervention Cycle diagram (MTSS): Intervention Planning, Allocation, Progress Monitoring, Test]
INSIGHTS
Insights 4, 7, 9
4: Teachers want to implement instruction at appropriate levels of difficulty for specific students and school groups so that all students can better their skills.
7: Teachers want to communicate goals to students in a clear and motivating way.
9: Teachers want to administer assessments at frequencies and durations that maximize feedback with the smallest loss of instructional time.
SOLUTION
Mr. Wisdom Intervention Teacher
"Prior knowledge is the foundation for further learning. An effective goal should break down the skills and lead students to success step by step."
STORYBOARD
Goal Map Challenge presents students with a map of content goals ordered based on knowledge dependencies. The Goal Map shows mini learning goals in sequential order to help students achieve their goals step by step with a solid foundation. Mini assessments on each learning goal on the map indicate whether the student has mastered a skill in order to move on to new skills. Each of these assessments is a “challenge” for the students to overcome during their journey on the goal map.
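The unlock rule behind the map can be stated in a few lines, as in the sketch below. The prerequisite graph is a hand-authored example; real skill dependencies would come from the curriculum.

```python
# Hand-authored prerequisite map: skill -> skills that must be mastered first.
PREREQS = {
    "equivalent fractions": [],
    "comparing fractions": ["equivalent fractions"],
    "adding fractions": ["equivalent fractions"],
    "adding mixed numbers": ["adding fractions", "comparing fractions"],
}

def unlocked(skill: str, mastered: set[str]) -> bool:
    """A challenge opens only when every prerequisite is mastered."""
    return all(p in mastered for p in PREREQS[skill])

mastered = {"equivalent fractions", "comparing fractions"}
print([s for s in PREREQS if unlocked(s, mastered) and s not in mastered])
# ['adding fractions'] -- 'adding mixed numbers' stays locked
```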
Mr. Wisdom feels it is hard to connect progress monitoring tests to his goals and instruction. Sometimes, his students feel burnt out from all the testing. He wants a better way to use time on tests.
Goal Map Challenge breaks down a primary learning goal into several skill goals in sequential order, in accordance with the intervention curriculum. Mini assessments for each skill goal are designed as challenges for students.
Students have to complete a challenge to get access to the next challenge. The mini assessments help teachers track students’ improvement as well as identify students’ mastery of instructional content.
Students follow the goal map to complete the goal for a given intervention cycle. They also develop skills step by step to build a solid foundation for future learning.
EVIDENCE SUPPORTING USER RESEARCH Our user research led us to see that teachers want goals to align to interventions and assessments within Progress Monitoring. By providing mini-goals and mini-assessments, Goal Map Challenge provides teachers with the ability to assess student growth on very specific skills that they are working on within an intervention. Teachers also want their goals to be actionable to students and to direct their intervention. By providing a map of challenges to complete, this product leads the students down the proper path of learning toward their goals. Teachers also feel they spend too much time testing students, so Goal Map Challenge provides miniature assessments that are easily integrated into the intervention instruction context.
SUPPORTING LITERATURE Learning is an active process in which students build their knowledge like a castle: brick by brick and step by step. When students lack prerequisite skills, their learning is greatly hindered (Ambrose, 2010). With Goal Map Challenge, the students progress their way through a map and cannot access a challenge before they complete all of the prerequisite challenges. This is designed to prevent students from having a shaky foundation built on “Swiss Cheese” knowledge (Khan, 2012).
CONCLUSIONS
This report documents the work of a collaboration between Renaissance Learning and Vittore, a student design team from Carnegie Mellon University's Master's in Educational Technology and Applied Learning Science (METALS) Program. The aim of the project is to meet the goal-setting needs of teachers through Renaissance's software. Findings from user research and the literature were used to determine teacher needs and insights into goal-setting practice. These findings were then used to generate product concepts that address these needs, including ways to help teachers make goal-setting a central practice in their instruction, develop comfort interpreting and acting upon data, and motivate students to reach learning goals. This report is intended as a foundation for the prototyping and evaluation of product concepts through the remainder of the project. The final solution may be an agglomeration of product concepts or may be an idea generated outside of this report. In either case, product generation and evaluation will follow the methodology described in this report.
Based on the results reported here, as well as discussions with Renaissance Learning, the team will prototype and test solutions according to the collaboratively generated design criteria. These prototypes will be tested with users to evaluate current designs and guide future designs. A final prototype and the evaluative process will be presented in a Summer Report in August 2017. Vittore would like to thank the team at Renaissance Learning for their invaluable resources, insights, feedback, and discussions. The team has pushed us to consider our research from new and valuable perspectives that helped the insights and product concepts develop, and we look forward to continuing our work through the remainder of the project. Vittore would also like to thank the METALS faculty, alumni, and students, as well as our interviewees, whose insights have been indispensable.
ABOUT VITTORE
KEVIN DELAND | DEVELOPMENT brings a diverse technical background to the Vittore team, with a B.S. in Electrical and Computer Engineering from Duke University and four years of experience as a Software Engineer at IBM. One of Kevin's driving motivations is to improve STEM education, because he believes sharing with youth the magic of how the world works can enable them to create new ideas and inventions. Kevin also has an artistic flair and loves dancing, writing, and the occasional burst of improvised rap.
SARAH KLEIN | DESIGN is a user experience designer and instructional designer. After graduating from McGill University, Sarah worked as a teacher before transitioning into design. She is interested in innovative projects and emerging technology related to job skills education, project based learning, literacy, and language learning in the K-12 space. Sarah is currently a Social Innovation Fellow at Carnegie Mellon University. Her portfolio is viewable at www.sarah-klein.com
TIANXIN YU | RESEARCH has a background in cognitive science, art, and design. She has a B.S. in Psychology and a B.A. in Art Studies from Peking University, and an M.A. in Product Design from Tsinghua University. She is passionate about exploring cognitive science, technology, and product creation. She has two years of research experience in the Virtual Reality Lab of Peking University and worked for one year as a research intern in Human-Computer Interaction at Microsoft Research Asia. During her school years, Tianxin led an education charity organization in China for three years.
JORDAN MARKS | COORDINATION comes from an engineering background that motivates her to innovate on science and engineering education. After graduating from MIT with a B.S. in Materials Science and Engineering, Jordan worked for several years as a materials engineer for an aerospace company. She appreciates the importance of communicating complex information to educate a variety of audiences, from children learning about engineering to clients learning about specific technical projects.
REFERENCES
Adequate Yearly Progress. (2009). U.S. Department of Education. Retrieved from https://www2.ed.gov/policy/elsec/guid/standardsassessment/guidance_pg5.html
Ambrose, S.A., Bridges, M.W., DiPietro, M., Lovett, M.C., & Norman, M.K. (2010). How learning works: 7 research-based principles for smart teaching. San Francisco, CA: Jossey-Bass.
Anders Ericsson, K. (2008). Deliberate practice and acquisition of expert performance: A general overview. Academic Emergency Medicine, 15(11), 988-994.
Ball, C., & Gettinger, M. (2009). Monitoring children's growth in early literacy skills: Effects of feedback on performance and classroom environments. Education and Treatment of Children, 32(2), 189-212.
Bandura, M., & Dweck, C.S. (1986). The relationship of conceptions of intelligence and achievement goals to achievement-related cognition, affect and behavior. Unpublished manuscript, Harvard University.
Betebenner, D.W. (2008). A primer on student growth percentiles. Dover, NH: National Center for the Improvement of Educational Assessment. Retrieved February 18, 2011.
Betebenner, D. (2009). Norm- and criterion-referenced student growth. Educational Measurement: Issues and Practice, 28(4), 42-51.
Betebenner, D.W. (2011). A technical overview of the student growth percentile methodology: Student growth percentiles and percentile growth projections/trajectories. The National Center for the Improvement of Educational Assessment.
Beyer, H., & Holtzblatt, K. (1997). Contextual design: Defining customer-centered systems. Elsevier.
Boaler, J. (1998). Open and closed mathematics: Student experiences and understandings. Journal for Research in Mathematics Education, 29(1), 41-62.
Boaler, J. (2016). Mathematical mindsets: Unleashing students' potential through creative math, inspiring messages and innovative teaching. San Francisco, CA: Jossey-Bass.
Boaler, J., & Humphreys, C. (2005). Connecting mathematical ideas: Middle school video cases to support teaching and learning. Portsmouth, NH: Heinemann.
Carver, S.M. (2001). Cognition and instruction: Enriching the laboratory school experience of children, teachers, parents, and undergraduates. In S.M. Carver & D. Klahr (Eds.), Cognition and instruction: Twenty-five years of progress (pp. 385-426). Mahwah, NJ: Lawrence Erlbaum Associates.
Cooper, A. (1995). About face: The essentials of user interface design. John Wiley & Sons.
Duckworth, E. (1991). Twenty-four, forty-two and I love you: Keeping it complex. Harvard Educational Review, 61(1), 1-24.
Ericsson, K.A., Krampe, R.T., & Tesch-Römer, C. (1993). The role of deliberate practice in the acquisition of expert performance. Psychological Review, 100(3), 363.
Good, R.H., Gruba, J., & Kaminski, R.A. (2002). Best practices in using Dynamic Indicators of Basic Early Literacy Skills (DIBELS) in an outcomes-driven model.
Jeffrey, S., & Adomdza, G.K. (publication pending). Incentive salience and improved performance.
Juel, C. (1988). Learning to read and write: A longitudinal study of 54 children from first through fourth grades. Journal of Educational Psychology, 80(4), 437.
Khan, S. (2012). The one world schoolhouse: Education reimagined. New York, NY: Twelve.
Lewis, C., & Rieman, J. (1993). Task-centered user interface design: A practical introduction (Chapter 1).
Moser, J., Schroder, H.S., Heeter, C., Moran, T.P., & Lee, Y.H. (2011). Mind your errors: Evidence for a neural mechanism linking growth mindset to adaptive post-error adjustments. Psychological Science, 22, 1484-1489.
Schunk, D.H. (1990). Goal setting and self-efficacy during self-regulated learning. Educational Psychologist, 25(1), 71-86.
Seijts, G.H., & Latham, G.P. (2005). Learning versus performance goals: When should each be used? The Academy of Management Executive, 19(1), 124-131.
Shapiro, E.S. (2008). Best practices in setting progress monitoring goals for academic skill improvement. Best Practices in School Psychology V, 2, 141-157.
Shapiro, E.S. (2014). Tiered instruction and intervention in a response-to-intervention model. RTI Action Network, 381.
Shapiro, E.S., Dennis, M.S., & Fu, Q. (2015). Comparing computer adaptive and curriculum-based measures of math in progress monitoring. School Psychology Quarterly, 30(4), 470.
Stecker, P.M., Fuchs, L.S., & Fuchs, D. (2005). Using curriculum-based measurement to improve student achievement: Review of research. Psychology in the Schools, 42, 795-819.
Vygotsky, L. (1978). Interaction between learning and development. In Mind in society (pp. 79-91). Cambridge, MA: Harvard University Press.
Ward, B. (1987). Instructional grouping in the classroom. Portland, OR.
Where we began. (2017). Retrieved from http://www.renaissance.com/about-us/
Wiggins, G., & McTighe, J. (2005). Understanding by design (Expanded 2nd ed.). Alexandria, VA: Association for Supervision and Curriculum Development.
Zimmerman, B.J., Bandura, A., & Martinez-Pons, M. (1992). Self-motivation for academic attainment: The role of self-efficacy beliefs and personal goal setting. American Educational Research Journal, 29(3), 663-676.