Student proficiency: The “by” vs. “at” year’s end debate

As I mentioned in “Misconceptions preventing innovation and improvement in state assessments,” there’s more we can do to better meet the needs of students, educators, and policymakers. I’d like to look at key questions surrounding emerging through-year assessments: how and when to produce a summative score, and how we evaluate student proficiency.

About a dozen states have developed or are developing through-year assessment models in which tests are administered multiple times during the school year instead of just once in the spring. This approach can yield richer, more actionable data on student progress, shorter assessments, and even fewer assessments administered overall.

But there is debate over how to handle the summative results and whether students still need to be tested at the end of the year if they meet the expectations for proficiency on an earlier administration of the assessment. Should states credit what a student knows and can do by the end of the year, or only at the end of the year? This distinction affects the options available—and the decisions made—for state testing systems.

The advantages of looking at data by year’s end

There are several advantages if states look to ensure students are proficient by the end of the year. Students who show proficiency early could:

  • Advance to other topics, going deeper within grade level or even beyond it, potentially leading to more growth for advanced students
  • Sit out later test administrations
  • Take other types of assessments that can provide additional information about student learning
  • Complete a “check-in” at the end of the school year to ensure continued progress

If states are locked into a model of measuring on-grade proficiency at the end of the year, all kids must take spring assessments, regardless of their earlier performance. This is the status quo because we assume students may forget material and fail to maintain an earlier level of performance. Yet we also assume that a student’s spring performance still holds the following fall, even though ample research shows that students experience summer learning loss. We don’t require students to retest in the fall, just in case.

Where we can see “by the end of the year” in action

Because of the current rhetoric around accepting only springtime performance, most states leveraging through-year models count only the spring administration for accountability purposes. There are two exceptions: In Louisiana, all three test administrations inform a student’s final summative score. Six other states use one of the earliest through-year assessment designs, Dynamic Learning Maps (DLM). DLM instructionally embedded (IE) assessments combine results from fall and spring administrations to produce a summative student score, and the system has passed the assessment portion of peer review for its ELA and math assessments.

Outside of traditional accountability assessments, competency-based education has long valued proficiency demonstrated by the end of the year. In a competency-based system, students demonstrate proficiency when they are ready to show mastery, and assessments are meant to provide timely information that informs their learning along the way. Nearly every state has policy supporting competency-based education, and more districts and schools are implementing its practices. An assessment model that prioritizes proficiency by the end of the year would also help support states, districts, and schools putting those practices in place.

Policy must lead

Changing assessment systems that currently prioritize student proficiency at the end of the school year is, ultimately, a policy decision. There are many defensible measurement models for leveraging earlier performance. State leaders should work in partnership with their educators, school leaders, and community members to consider how policy decisions might affect their current accountability models, including testing logistics, how growth is measured, and more.

We believe states should choose what works best for them while keeping reliability, validity, and high standards for students at the forefront. We’re concerned, however, that debate over whether the two approaches are equally worthwhile is slowing the adoption of innovative through-year models and encouraging states to produce only assessments with traditional, end-of-year summative scores.

What are your thoughts? Do you have ideas on how we can better design assessments to allow students to show proficiency throughout the school year? Let us know. We’re @NWEAPolicy on Twitter.
