As NWEA prepares to release 2015 norms this summer, I am exploring the thoughtful use of norms in a series of four posts. The first post focused on understanding what norms are and how to judge the quality and usefulness of normative data. Today’s post will focus on the use of status norms, particularly NWEA status norms. I will look at what they are, how to interpret them, and how they may be used to make decisions about students. In a second post on status norms I will explore how NWEA leverages our norms data to link to instructional content. In the final post I will focus on growth norms.
Status norms describe a student’s current achievement level in the context of a peer group, usually defined by age or grade. For any score on any test that has normative data, the student’s score can be expressed as a percentile. If a student scores at the 60th percentile, the student’s score is equal to or higher than 60 percent of the scores in the norm group. Percentile scores are well-suited to answer questions like, “How does this student’s score compare with other students in the same grade?” The 60th percentile means the student did as well as or better than 60 percent of the peer group. While this understanding of the score is correct, it may not be the whole story.
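To make the definition concrete, here is a minimal sketch (with made-up numbers, not NWEA’s actual norming procedure) of a percentile rank as the share of norm-group scores at or below a student’s score:

```python
def percentile_rank(score, norm_scores):
    """Percent of norm-group scores less than or equal to `score`."""
    at_or_below = sum(1 for s in norm_scores if s <= score)
    return 100 * at_or_below / len(norm_scores)

# A student scoring 215 against a hypothetical ten-score norm sample:
norms = [190, 200, 205, 210, 212, 215, 218, 220, 225, 230]
print(percentile_rank(215, norms))  # → 60.0
```

Six of the ten norm-group scores fall at or below 215, so the student lands at the 60th percentile, exactly the “equal to or higher than 60 percent” reading described above.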
Is a Percentile a Percentile?
If I measure my son’s height on his 10th birthday, I can go online and likely find a table that shows what percentile he is in terms of height. But what if the table is based on children who are ten years and six months old? I don’t know exactly how much my son will grow in six months, but I know the percentile I am seeing for his current height is too low.
Similarly, one caveat to interpreting percentile test scores is knowing whether the student took the test at the same time of year as the students in the study used to create the norms. If my fifth-grade son takes a math test in October, but the fifth-grade norms group took the test in May, the norms group received 20+ weeks of additional math instruction. Is my son’s percentile placement accurate, or should it be higher? Fortunately, NWEA norms have effectively addressed this problem. Not only do we have fall, winter, and spring norms, but our norms account for instructional weeks. That means our norms can precisely answer the question about my son’s performance, as well as questions like, “Since a new student took the math test five weeks after her classmates, how does her score appropriately compare to the other scores?”
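To see why instructional weeks matter, here is a toy illustration (purely hypothetical numbers and a simple linear-growth assumption, not NWEA’s empirical model) of interpolating a norm-group mean between fall and spring so a student tested mid-term is compared against the right point:

```python
def interpolated_norm_mean(fall_mean, spring_mean, weeks_of_instruction,
                           weeks_fall_to_spring=32):
    """Linearly interpolate the norm-group mean for a student tested
    `weeks_of_instruction` weeks after the fall norming point.
    A simplification for illustration; actual norms are empirical."""
    weekly_growth = (spring_mean - fall_mean) / weeks_fall_to_spring
    return fall_mean + weekly_growth * weeks_of_instruction

# Hypothetical fall mean of 205 and spring mean of 213: a student
# tested five weeks into the term is compared against a mean of
print(interpolated_norm_mean(205, 213, 5))  # → 206.25
```

Comparing the late-tested student against the fall mean of 205 would overstate her standing; against the spring mean of 213, it would understate it. Interpolation by instructional weeks splits the difference appropriately.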
Schools and districts often use student percentiles to make decisions about inclusion in or exclusion from programs such as special education and talented and gifted. In making such decisions, schools and districts face several considerations. First, any score, and the percentile or status that goes along with it, reflects student performance at one moment in time. There is something powerful conveyed by saying that a student is at the Xth percentile, but it is important to remember that the percentile is based on one test score. As such, scores and percentiles should be put in the context of other sources of data gathered throughout the year. A second consideration is that scores are estimates of true achievement and have a standard error of measurement (SEM) associated with them. The SEM that NWEA reports for scores allows users to see the percentile as a range as well.
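One way to picture a percentile as a range is to translate score ± 1 SEM into a percentile band. The sketch below uses hypothetical numbers and assumes the norm group’s scores are roughly normal (an assumption made here for illustration; NWEA publishes empirical norms rather than a formula):

```python
from statistics import NormalDist

def percentile_band(score, sem, norm_mean, norm_sd):
    """Approximate percentile range for score ± 1 SEM, assuming a
    normal norm-group distribution (illustration only)."""
    dist = NormalDist(norm_mean, norm_sd)
    low = 100 * dist.cdf(score - sem)
    high = 100 * dist.cdf(score + sem)
    return round(low), round(high)

# Hypothetical: a score of 210 with an SEM of 3, against a norm
# group with mean 205 and standard deviation 15:
print(percentile_band(210, 3, 205, 15))  # → (55, 70)
```

Seen this way, the student’s standing is better described as “somewhere in the mid-50s to 70th percentile” than as a single point, which is exactly why the SEM matters for high-stakes placement decisions.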
The third important consideration for schools and districts is setting the cut lines, in terms of percentiles, for intervention and other programs. This is universal screening, a critical part of a Response to Intervention (RtI) model. For RtI, setting the percentile above which students are simply monitored, or setting the percentiles that trigger various intensity levels of intervention, are high-stakes decisions. MAP and MPG have the rigor needed to provide solid information, and both assessments have met the highest criteria for universal screening tools established by the National Center for Response to Intervention. Status normative data can be powerful and useful, but as with all data it must be handled carefully and thoughtfully.
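The cut-line logic above can be sketched as a simple tiering function. The cut points here (10th and 25th percentiles) are hypothetical defaults chosen only for illustration; real cut lines are a local, high-stakes policy decision, as the post emphasizes:

```python
def rti_tier(percentile, intensive_cut=10, strategic_cut=25):
    """Assign a screening tier from a percentile using hypothetical
    cut lines. Real cut lines are set locally and are high stakes."""
    if percentile < intensive_cut:
        return "intensive intervention"
    if percentile < strategic_cut:
        return "strategic intervention"
    return "monitor"

print(rti_tier(8))   # → intensive intervention
print(rti_tier(40))  # → monitor
```

Shifting either cut line by even a few percentile points changes which students receive services, which is why the screening tool’s rigor, and the SEM-based range around each percentile, deserve careful attention.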