While completing my undergraduate degree in French education, I decided to take a popular social dance course as an elective. I learned (and have since forgotten) a variety of dances such as the fox trot, tango, and salsa. While I can no longer do them, I do recall that each dance has a unique cultural purpose. Essentially, there’s a time and place for each dance, and the same can be said for assessment data.
Educators understand the benefits of triangulating data and that each data point is, well, one snapshot in time. Capturing multiple snapshots allows you to create the most complete picture and make stronger educational decisions.
Keep in mind, though, that not all data and educational decisions are equal. Data, by design, has specific purposes and should be leveraged differently depending on your educational role. Teachers need data to answer questions like, “Which students are meeting grade-level expectations?” and “Should I increase the intensity of my interventions?” On the other foot, administrators look for data to suggest how many students need additional support beyond that provided by the core reading curriculum, for example, or to understand what their leadership team should prioritize when selecting a new intervention resource.
MAP® Reading Fluency™, our early reading universal and dyslexia screener, provides a variety of data from which to make decisions about resource allocation, instruction, and program evaluation. Data, like dances, has a time and place. You may draw a stare or two if you cut a rug and moonwalk during a waltz. While the wrong dance step is merely awkward, the wrong data can lead to misguided decisions and unintended consequences. Let’s take a look at MAP Reading Fluency data and discuss each report’s purpose so you can avoid those missteps.
MAP Reading Fluency reporting for resource allocation
The screener outcomes section of our Benchmark report is designed specifically to help inform resource allocation.
Regardless of whether students take the universal screener or the dyslexia screener, they’ll be either flagged or not flagged. For students with foundational skills data, the flag is based on a multivariate predictive model suggesting potential future reading difficulties. For students with oral reading data, the flagged or not flagged outcome is based on Hasbrouck and Tindal’s 2017 reading rate norms.
The screener outcome helps teachers and administrators determine who should receive a specific, limited resource that others won’t receive. For example, some students may receive Title I reading services while others receive support during a core reading block.
Our domain scores and percentiles, based on user norms, suggest which resource a student may need. For example, imagine a student is flagged and performed at the 10th percentile in phonological awareness and performed at the 40th percentile for phonics/word recognition. Based on available resources, the student may participate in a more intense intervention for phonological awareness than for phonics/word recognition.
MAP Reading Fluency reporting to guide instructional decisions
In the example I just mentioned, the Benchmark report doesn’t tell a teacher what phonological awareness skills to address. That’s where the Instructional Planning report comes in.
The Instructional Planning report helps answer three questions for teachers:
- What’s each student’s zone of proximal development (ZPD) and how many students share the same ZPD?
- What’s the spring grade-level expectation?
- What instructional activities are suggested and available for immediate access for each group of students who share the same ZPD?
With the Instructional Planning report, you’ll spend less time finding a dance partner and more time on the dance floor. Follow your MTSS protocols and leverage your available resources, including those we’ve linked in this report.
Are you wondering how well students are responding to core instruction or a specific reading intervention? Check out the Progress Monitoring report to find out. Students’ individual progress monitoring data suggests whether the music tempo is too fast or too slow and if their “steps” are improving overall.
Remember to leverage our user norms as you set reasonable goals for your students. Given what you know about your students, the intervention resource, and the intensity of the support, can you expect students to grow more or less than the norm data suggests?
MAP Reading Fluency reporting for program evaluation
What about program evaluation? The Benchmark Matrix report, listed on page 3 of our MAP Reading Fluency Reports Portfolio, shines a spotlight on how well your students are responding to your instructional resources. Students’ performance levels (i.e., “Exceeds,” “Meets,” “Approaching,” “Below”) indicate who’s ready for Dancing with the Stars.
Students at the “Exceeds” or “Meets” performance levels are likely on track to meet grade-level standards. Is it time to research new reading resources for grades K–3? If your data suggests opportunities for growth in phonological awareness, for instance, you’ll want to keep that in mind as you review curricular resources with school and district administrators.
Performance levels also appear on the various versions of the student report (see the table of contents of our reports portfolio), the Term Summary report, and the Term Comparison report. Is it time to evaluate your resources? Then grab your dance shoes and tune into this data.
Get your groove on
There’s a time and place for the Macarena, just like there’s a purpose for MAP Reading Fluency data. Need a dance partner? Contact your NWEA account manager for a data dance lesson.