Seven Crucial Criteria for a Great Growth Assessment – Post Two

As we’ve written about many times, measuring growth is only useful if it’s done well. It’s not just about comparing one proficiency score to the next. It’s about accurately identifying each student’s current performance level—with the same precision for students well above or well below grade level as for those in the middle of the pack. As educators, you deserve a growth measure that delivers valid data you can trust—and that you can use to support learning for every single student.

In our last blog post, we shared the first three of seven crucial criteria that make up a great growth measure:

  • The test should be based on your educational standards
  • The test should use a scale with equal intervals over time
  • The test should measure a student’s performance correctly, regardless of their grade

Let’s cover the last four criteria for a quality growth measure:

4. The test should have many possible questions, at many different levels of difficulty. Computer adaptive assessments are ideal for measuring growth well because they can home in, quite precisely, on what a student knows and is ready to learn. How? They dynamically adapt throughout the test in response to the student’s answers. A correct answer generates a more difficult test item; an incorrect answer, an easier one. This ongoing adjustment allows the test to pinpoint exactly what the student knows.

But in order to work well, these assessments need a very large number of possible questions. We call this the “item pool”; it’s the collection of questions the test draws from when trying to pinpoint exactly what a student knows. A deep item pool—lots of items about every possible topic, and at all possible levels of difficulty—means that the assessment can show the student enough unique questions to reveal the specifics of what they know and are ready to learn next. And they won’t see the same question twice.
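If you’re curious what that adaptive back-and-forth looks like under the hood, here is a minimal sketch in Python. The item pool, difficulty values, and ability-update rule are all illustrative assumptions, not MAP Growth’s actual algorithm; real adaptive tests rely on far more sophisticated item response theory models.

```python
import random

def run_adaptive_test(item_pool, answer_item, num_questions=20):
    """Pick each question from a (hypothetical) item pool based on how the
    student is doing so far. item_pool maps item IDs to difficulty values on
    a common scale; answer_item returns True/False for a given item."""
    ability = 0.0          # running estimate of the student's level
    step = 1.0             # how far to move the estimate after each answer
    administered = set()   # guarantees no item is shown twice

    for _ in range(num_questions):
        # Choose the unseen item whose difficulty is closest to the current estimate.
        candidates = [i for i in item_pool if i not in administered]
        if not candidates:
            break
        item = min(candidates, key=lambda i: abs(item_pool[i] - ability))
        administered.add(item)

        correct = answer_item(item)
        # A correct answer pushes the next question harder; an incorrect one, easier.
        ability += step if correct else -step
        step *= 0.9        # take smaller steps as the estimate settles

    return ability

# Toy run: 500 hypothetical items, and a student who tends to get easier items right.
pool = {f"item_{n}": random.uniform(-3, 3) for n in range(500)}
estimate = run_adaptive_test(pool, lambda i: pool[i] < 1.2)
print(round(estimate, 2))
```

Notice that the deeper the pool, the more likely the loop is to find a fresh, well-targeted question near the student’s current estimate every time.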

The other advantage of a deep item pool is that it gets rid of that age-old fear: teaching to the test. If you’re using an assessment with a small number of questions, educators may feel pressure to prep kids to answer just those questions well. But with a tremendous number of quality questions, teachers are free to teach the concepts their students need, knowing that the assessment will draw from a large pool of questions to check students’ understanding.

5. The test questions should be unbiased and fair for all students. Remember the old riddle about a father and his son who were rushed to the hospital after a car accident? The doctor walked into the boy’s room and cried, “I cannot operate on this boy—he’s my son!” The riddle asked how this was possible, and people were often at a loss to come up with the answer—the doctor was the boy’s mom. For a long time, many people unconsciously pictured all doctors as male—and they didn’t realize their bias until it was pointed out.

The fact is, everyone has unconscious biases—it’s part of human nature (albeit a part we’re always striving to improve). But if those biases creep into assessment questions, fairness and equity go out the window. And creating test questions that are equally accessible to every student—regardless of cultural, socio-economic, ethnic, and religious background—requires rigorous and systematic review.

A number of organizations publish stringent standards that help test makers eliminate bias and take a consistent approach to writing unbiased items. Forgive the litany of acronyms, but this is crucial for equitable assessment: you need a growth measure that uses Differential Item Functioning (DIF) analysis, as well as bias and sensitivity reviews that follow standards set forth by the American Educational Research Association (AERA), the American Psychological Association (APA), and the National Council on Measurement in Education (NCME).
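For readers who want a feel for what a DIF analysis actually checks, here is a rough sketch of one common approach, the Mantel-Haenszel procedure: students are grouped into strata of similar overall ability, and within each stratum the odds of answering an item correctly are compared between a reference group and a focal group. The counts below are illustrative only; real reviews pair statistics like this with significance tests and human review.

```python
import math

def mantel_haenszel_dif(strata):
    """Mantel-Haenszel odds ratio for one item, comparing a reference group
    and a focal group within strata of students with similar total scores.
    Each stratum is a dict of counts: ref_correct, ref_wrong, focal_correct,
    focal_wrong."""
    num = den = 0.0
    for s in strata:
        total = (s["ref_correct"] + s["ref_wrong"]
                 + s["focal_correct"] + s["focal_wrong"])
        if total == 0:
            continue
        num += s["ref_correct"] * s["focal_wrong"] / total
        den += s["ref_wrong"] * s["focal_correct"] / total
    odds_ratio = num / den
    # ETS reports this on a "delta" scale; values near 0 indicate little DIF.
    mh_d_dif = -2.35 * math.log(odds_ratio)
    return odds_ratio, mh_d_dif

# Illustrative counts for three ability strata (not real assessment data).
strata = [
    {"ref_correct": 40, "ref_wrong": 10, "focal_correct": 38, "focal_wrong": 12},
    {"ref_correct": 30, "ref_wrong": 20, "focal_correct": 24, "focal_wrong": 26},
    {"ref_correct": 15, "ref_wrong": 35, "focal_correct": 14, "focal_wrong": 36},
]
ratio, delta = mantel_haenszel_dif(strata)
print(f"MH odds ratio: {ratio:.2f}, MH D-DIF: {delta:.2f}")
```

An item flagged this way isn’t automatically labeled biased; it typically goes to a bias and sensitivity review panel for human judgment.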

6. The test should have a clear purpose and be built to meet that purpose efficiently. The most important question on any assessment is the one you, the educator, ask yourself before you give the test: “Why am I doing this?” Any test you give should have a clear purpose and be built to fulfill that specific purpose with accuracy and precision—taking exactly as much time as needed: no more, no less.

The more precise a test, the more accurate the score it gives—and the more time it takes to administer. In attempting to protect valuable instructional time, it can be tempting to reach for the shortest assessment available. But if that assessment gives you unreliable or inaccurate data, it’s a waste of time all around.

So when your purpose is to measure growth—and use the information you glean to support student learning—choose an assessment that measures exactly what’s needed to give you valuable data with maximum efficiency: no more and no less testing than needed for the purpose.
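A rough rule of thumb from item response theory makes that tradeoff concrete: measurement error shrinks roughly with the square root of the test information, and information grows with the number of well-targeted items. The per-item information value in this sketch is an illustrative assumption, not a figure from any particular assessment.

```python
import math

def standard_error(num_items, info_per_item=0.5):
    """Approximate standard error of a score when each well-targeted item
    contributes a fixed amount of information (an illustrative assumption)."""
    return 1.0 / math.sqrt(num_items * info_per_item)

for n in (10, 20, 40, 80):
    print(f"{n:>3} items -> standard error about {standard_error(n):.2f}")
```

Doubling the length of a test cuts the error by only about 30 percent, which is why a well-built assessment stops once it reaches the precision its purpose actually requires.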

7. The test should give you context for growth. Once you accurately measure performance and growth, a world of instructional opportunity opens—but only if that growth is put into a useful context. Your assessment data needs context in order for educators at every level to take action based on it.

A teacher benefits from knowing what the student’s score is in relation to all the other students in the classroom. A principal benefits from knowing their school’s relative position within a district, and a district supervisor finds it useful to place a school’s performance in the state and national context.

Another context is established by looking at student growth trajectories and determining if they’re on a path to meet a given achievement standard in time. For instance, is a student on track to be ready for college upon graduation? Is a school on track to meet proficiency benchmarks on state accountability measures? Knowing this context helps you make key decisions about resource allocation and instruction—at the classroom, building, and district level.
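As a simple illustration of trajectory thinking, here is a sketch that projects a student’s growth in a straight line toward a benchmark. The scores, dates, and benchmark are hypothetical, and real growth projections use far richer, norm-based models.

```python
from datetime import date

def on_track(observations, target_score, target_date):
    """Straight-line projection: estimate a growth rate from the first and
    last (date, score) observations, then project forward to the target date."""
    observations = sorted(observations)
    (d0, s0), (d1, s1) = observations[0], observations[-1]
    rate = (s1 - s0) / (d1 - d0).days          # score points gained per day
    projected = s1 + rate * (target_date - d1).days
    return projected, projected >= target_score

# Hypothetical fall and winter scores, projected to a spring benchmark.
obs = [(date(2024, 9, 15), 198), (date(2025, 1, 20), 205)]
projected, ok = on_track(obs, target_score=212, target_date=date(2025, 5, 30))
print(f"Projected spring score: {projected:.0f}; on track: {ok}")
```

The same projection logic scales up: a school or district can ask what share of its students are on pace to reach a benchmark and allocate resources accordingly.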

Imprecise student growth measurement can have devastating consequences—students flailing without adequate help, lost opportunities for enrichment, disengaged students, and discouraged staff. Great schools insist on the best measures to understand all their students, not just those in the middle of the bell curve. Our interim assessment, MAP® Growth™, was designed specifically to measure student growth accurately and deliver the highest-quality data in the industry.
