When we use progress monitoring with a student, will we close gaps in learning? Maybe, but maybe not. It all depends on how we respond to the data. So how can we be deliberate and clear about connecting data to real decisions about intervention change?
Progress monitoring is about this: when we do an intervention, we want to know that it is working. If it isn’t, we want to make a change quickly. Because progress monitoring involves collecting student data at a higher frequency—often weekly—it can provide a faster data-based decision cycle in the area of our intervention. With MAP® Reading Fluency™, progress monitoring can be done in oral reading fluency, phonics and word recognition, and phonological awareness.
What does it mean for an intervention to be “working”? An intervention is an increase in intensity and individualization, and its purpose is to accelerate growth over what regular, Tier-1 classroom instruction has been producing. “Working” means that the intervention is boosting the student’s growth in the domain of interest.
Two components are critical for deciding whether an intervention is accelerating a student’s growth sufficiently. One is to define “sufficient,” by setting a goal. The other is to lay out how and when we will conclude that we are or are not on track toward that goal.
Setting the goal
The National Center on Intensive Intervention (NCII) offers three approaches to setting a goal for student growth during an intervention:
- End-of-year benchmarks
- National growth norms
- Intra-individual framework
First, we can use benchmarks as the end goal. This means finding the end-of-year performance we expect of typically achieving students and setting that as the goal for our individual student. From the student’s baseline—their performance now—we draw a line to the goal. This line becomes the goal line, and we track the student’s growth against it.
The second approach is to use growth norms, or the typical slope of growth we see on average for all students. Maybe we find that in oral reading, normal growth for a particular grade and season is an increase of one word correct per minute (WCPM) per week. With this approach, we draw a line with that same slope beginning from the student’s baseline. Notice that the endpoint, or goal, comes second, because it is calculated by following that slope forward for a set period (e.g., 12 weeks).
The final approach to setting a goal for a student’s growth during intervention is more self-referential. Beginning with the student’s past growth rate, we calculate an increase. If the student has been growing at a rate of one WCPM per week already, then we might set the new goal line at a slope of 1.5 or two WCPM per week.
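To make the arithmetic behind these three approaches concrete, here is a minimal sketch in Python. The function names, the 12-week window, and the 1.5× multiplier are illustrative assumptions, not part of any NCII tool; each function returns the slope of the goal line and its endpoint.

```python
def benchmark_goal(baseline, benchmark, weeks):
    """Approach 1: the end-of-year benchmark is the goal; the slope is derived."""
    slope = (benchmark - baseline) / weeks
    return slope, benchmark

def growth_norm_goal(baseline, norm_slope, weeks):
    """Approach 2: apply the typical growth slope; the endpoint is derived."""
    return norm_slope, baseline + norm_slope * weeks

def intra_individual_goal(baseline, past_slope, weeks, multiplier=1.5):
    """Approach 3: increase the student's own past growth rate."""
    slope = past_slope * multiplier
    return slope, baseline + slope * weeks

# A student reading 12 WCPM at baseline, over a hypothetical 12-week window:
print(benchmark_goal(12, 112, 12))         # slope ≈ 8.3 WCPM/week, goal 112
print(growth_norm_goal(12, 1.0, 12))       # slope 1.0, goal 24 WCPM
print(intra_individual_goal(12, 1.0, 12))  # slope 1.5, goal 30 WCPM
```

Note how the approaches differ in what is fixed first: the benchmark approach fixes the endpoint and derives the slope, while the other two fix the slope and derive the endpoint.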
Important considerations for goal setting
To help navigate these three approaches, NCII and other progress monitoring researchers offer some important guidance.
When choosing a goal, we should be sure to find a balance between ambitious and realistic. For some older students with very low initial performance, targeting the benchmark may be unrealistic by end of year. For example, a third-grader who reads 12 WCPM in winter may not reach a benchmark of 112 WCPM in one season. That would require an increase of something like eight WCPM per week. That’s definitely ambitious. It may not be very realistic, though.
At the same time, aiming to simply match typical rates of growth may not be enough. If our third-grader starting at 12 WCPM has a target of just one WCPM of growth per week, we are targeting only about 24 WCPM by the end of the year. That’s still a far cry from where their typical peers will be. The growth necessary to close a gap is inevitably steeper than the normal growth we see for typical students. (Check out the excellent discussion of normal vs. necessary growth by my colleague Michael Dahlin.)
We need to acknowledge what it will take to close a gap, but we also need to be realistic about just how much any intervention can advance a student in a given timeframe. Setting goals for progress monitoring should take both into account.
Setting decision rules
Once we have a goal for how much an intervention will accelerate a student’s growth, we need to clarify how we will make decisions. Eyeballing a set of data points and making gut decisions based on what we see leaves too much room for our own biases. We run the risk of either settling for something that’s not really working or fiddling too early and often with something that is. So how can we set up rules about how we will read the data to inform our decisions?
Oregon, a state known for great research on progress monitoring, offers some guidance to consider. (While Oregon is a leader here, be sure to check your own state, too!) They suggest two possible approaches to setting decision rules. Remember: laying out these decision rules ahead of time moves us away from falling back to our own biases.
The first method is to compare the most recent data points to the goal line. If we are looking at the last four data points on a student’s graph, for example, and they are all below the goal line, then the decision rule is to make a change to the intervention: growth is insufficient. If all four data points are above the goal line instead, then the decision rule is to either fade the intervention out or increase the steepness of the goal line. (Ambitious goals, remember?)
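This four-point comparison is simple enough to state as a rule. Here is one hedged sketch of it, assuming we have the last four scores and the goal line's values for those same weeks (the function name and return labels are illustrative):

```python
def four_point_rule(recent_points, goal_line_values):
    """Compare the last four data points to the goal line.
    Returns 'change' if all four fall below the goal line,
    'raise_or_fade' if all four fall above it, and
    'continue' if the points straddle the line."""
    below = [p < g for p, g in zip(recent_points, goal_line_values)]
    above = [p > g for p, g in zip(recent_points, goal_line_values)]
    if all(below):
        return "change"          # growth is insufficient
    if all(above):
        return "raise_or_fade"   # steepen the goal line or fade the intervention
    return "continue"

# Last four weekly scores vs. the goal line's values for those weeks:
print(four_point_rule([20, 21, 22, 24], [24, 25, 26, 27]))  # → "change"
```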
The second method is slope analysis. After many weeks, we fit a linear trend line to the student’s data to characterize their overall actual growth. Then we compare the steepness of the student’s growth trend to the slope we want: the slope of the goal line. If the student’s slope is shallower than the goal line, then growth is insufficient. It’s time to make a change to the intervention.
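The trend line here is an ordinary least-squares fit. A minimal sketch, using made-up weekly scores and a hypothetical goal slope of 1.5 WCPM per week:

```python
def fitted_slope(scores):
    """Ordinary least-squares slope of weekly scores (weeks 0, 1, 2, ...)."""
    n = len(scores)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(scores) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, scores))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# Eight weeks of hypothetical oral reading scores, in WCPM:
scores = [12, 13, 15, 14, 17, 18, 19, 21]
goal_slope = 1.5
student_slope = fitted_slope(scores)  # 1.25 WCPM/week for this data
if student_slope < goal_slope:
    print("Growth is insufficient; change the intervention.")
```

In practice, most graphing tools and spreadsheets will fit this trend line for you; the point is that the comparison is between two slopes, not two endpoints.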
As we plan out and then navigate our decision rules, we should hold some principles in mind. We want to err on the side of helping the student, first and foremost. That means we may plan to be more liberal in concluding that we should improve the intervention, but more conservative in concluding that the intervention has been successful. As educators, we need to proactively problem-solve when interventions are not working well, and we need to verify that what looks like success in reaching goals is both real and lasting.
Putting it all together to close gaps
Setting a goal and a set of decision rules is not enough, of course. These components only make a difference when they surround solid, research-based interventions and a clear capacity for improving those interventions when needed. When our decision framework says to make a change, we need to know how to increase intensity. What does that look like? It can look like more time on the intervention, or it could look like more opportunities to respond and get feedback (think smaller group). There are other ways to increase intensity, too. Luckily, NCII has a great tool for thinking about intervention intensity, complete with an accompanying video. Spend some time with each.
Progress monitoring, done well, is the heart of data-based problem-solving. It means setting the bar for student learning high and holding ourselves, as teachers, accountable for accomplishing that, through constant use of data. It means a commitment to equity, to the idea that all students deserve whatever support is needed to reach high standards and expectations. If we don’t want to fall back to the kinds of bias that come from eyeballs and gut feelings, then let’s be clear and specific right up front about how we intend to connect data to real decisions.