Computer adaptive interim assessments are versatile tools to support learning. From universal screening to program evaluation and beyond, computer adaptive interim assessments like MAP® Growth™ from NWEA are a great source of data and insights into student achievement and growth.
There are some misconceptions about what these assessments do and how they can be used. As you consider their utility, here are nine facts to keep in mind.
1. Myth: Computer adaptive interim assessments are similar to traditional standardized assessments
Reality: The most advanced computer adaptive interim assessments feature technology-enhanced items beyond simple multiple choice, including drag-and-drop questions and text entry fields. Additionally, these tests’ adaptive nature means that each student has their own unique assessment experience, which makes them a lot more sophisticated—and more accurate—than fixed-form tests.
2. Myth: Getting 100% of the questions correct is the ultimate goal
Reality: Even the most accomplished students won’t get every question correct on a computer adaptive interim assessment. These tests continually adjust the difficulty of test questions based on student responses. They are designed to identify a student’s zone of proximal development, the sweet spot between what a student can access on their own and what they can do with the right instruction and support. That means students will only answer about 50% of MAP Growth questions correctly—and that’s the whole point.
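To see why roughly 50% correct is the goal rather than a problem, here is a minimal, hypothetical sketch of how an adaptive test can home in on a student's level. It uses a simple one-parameter (Rasch) logistic model and a step-up/step-down rule; this is an illustration of the general idea, not NWEA's actual item-selection algorithm.

```python
import math
import random

def p_correct(ability, difficulty):
    """Rasch (1PL) model: probability of answering an item correctly."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def simulate_adaptive_test(true_ability, n_items=40, step=0.6):
    """Toy adaptive test: serve a harder item after a correct answer,
    an easier one after a miss, converging on the student's level."""
    difficulty = 0.0  # start with an average-difficulty item
    correct = 0
    for _ in range(n_items):
        if random.random() < p_correct(true_ability, difficulty):
            correct += 1
            difficulty += step  # got it right: harder next item
        else:
            difficulty -= step  # missed it: easier next item
        step = max(0.1, step * 0.95)  # shrink steps as the estimate settles
    return correct / n_items

random.seed(1)
# Whether the student is low-, average-, or high-achieving, the share
# of correct answers settles near one half.
for ability in (-2.0, 0.0, 2.0):
    print(round(simulate_adaptive_test(ability), 2))
```

Because item difficulty tracks each student's responses, every student ends up answering items near their own level, which is exactly why the percent-correct hovers around 50% for everyone.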
3. Myth: If a student is scoring at the 50th percentile on a computer adaptive interim assessment, it must mean they are at grade level
Reality: Percentile ranks are statistical observations based on the performance of a group of students. A student performing at the 50th percentile is merely scoring the same or higher than half of all students taking the same test. That doesn’t guarantee they are proficient in grade-level content.
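The "same or higher than half" definition can be made concrete with a small sketch. The scores and the at-or-below definition below are illustrative; published norms are built from large national samples, not a single class.

```python
from bisect import bisect_right

def percentile_rank(score, peer_scores):
    """Share of peers scoring at or below `score` (illustrative
    definition; real norm tables use large national samples)."""
    ordered = sorted(peer_scores)
    return 100 * bisect_right(ordered, score) / len(ordered)

# Ten hypothetical peer scores
peers = [188, 192, 195, 201, 204, 207, 210, 214, 220, 226]
print(percentile_rank(204, peers))  # 50.0: at or above half the peers
```

Note that nothing in this calculation refers to grade-level content: the rank only says where a score sits relative to other test-takers.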
Often, proficiency in grade-level content is measured by state summative assessments. NWEA provides linking studies that use performance on MAP Growth to predict a student’s performance on these state assessments. These projections are a better barometer of performance against grade-level content expectations than simply looking at a student’s achievement percentile ranking. There is wide variance from state to state on how achievement percentiles correlate to proficiency benchmarks.
It’s also useful to understand how achievement and growth percentiles work in tandem to create a clear overall picture of student performance. Imagine a student whose MAP Growth score lands them in the 30th percentile for achievement. If they grow at the 50th percentile and stay there through future assessments, they will remain at the 30th percentile for achievement. Students in this situation must achieve above-average growth to improve their overall standing. Average growth won’t produce the necessary results for every student. MAP Growth can help you determine ambitious yet realistic growth targets for students who may need extra support to reach proficiency benchmarks.
4. Myth: Computer adaptive interim assessments are diagnostic tests that precisely identify discrete skills and standards each student has mastered
Reality: These assessments are more like a sophisticated thermometer than an X-ray; the assessment alerts you when there’s an issue requiring attention, but it doesn’t provide a specific diagnosis. These assessments are also designed to efficiently assess broad content areas, like math, reading, and language usage. When used as universal screeners, they are quite helpful in identifying students who would benefit from certain programming, like intervention or enrichment. At the classroom level, interim assessments are useful in helping teachers focus their instruction. For example, a teacher may notice a group of students with lower scores in geometry compared to their overall math scores and invest in reteaching or other supports to help close gaps.
Understanding student-specific skill mastery across an entire subject would require many more questions than typically included in interim assessment test design—and would create much longer assessments as well. Assessing discrete skill or standard mastery is a different purpose and requires a different design. Any claims that a computer adaptive interim assessment provider makes about the ability to report standard mastery should be met with skepticism.
The deeper value computer adaptive interim assessments provide expands as students take additional tests over time, illuminating longitudinal growth season-to-season and year-to-year. This helps you understand if instructional strategies are creating the outcomes you desire for your students.
5. Myth: Scores from computer adaptive interim assessments are the only data point we need to make informed instructional decisions
Reality: A high-quality computer adaptive interim assessment like MAP Growth is great at detecting broad patterns and screening for potential needs. It’s meant to be a starting point to guide further investigation rather than your sole determinant for making high-stakes decisions. The data helps educators ask the right questions about each student to help advance their learning.
Think of it as one bright star in your constellation of assessment data, helping you navigate your path ahead by showing not just where students are, but where their current trajectory is taking them. Any significant decisions you make about a student, whether it’s placing them into a gifted group or an intervention, should be based on additional data sources and observations.
6. Myth: Results from computer adaptive interim assessments are confusing and you need a PhD to decode scores
Reality: MAP Growth reports scores on the RIT (Rasch Unit) scale. The RIT scale is a measurement scale, like inches on a ruler or degrees on a thermometer. It’s an equal interval scale, meaning the difference between scores is the same regardless of whether a student is at the top, middle, or bottom of the scale.
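The equal-interval property is what separates a measurement scale from a ranking. The sketch below illustrates the contrast: on an equal-interval scale a fixed gap means the same thing everywhere, while with percentiles the same 10-point gap covers very different amounts of ground depending on where you are in the distribution. The `rit` mapping here is a hypothetical linear transformation used purely for illustration, and the percentile comparison assumes a normal distribution of scores.

```python
from statistics import NormalDist

def rit(logit):
    """Hypothetical linear mapping from Rasch logits to a RIT-style
    scale (constants are illustrative, not NWEA's)."""
    return 200 + 10 * logit

# Equal-interval: a 10-point gap is the same size everywhere.
low_gap = rit(-2.0) - rit(-3.0)   # near the bottom of the scale
high_gap = rit(3.0) - rit(2.0)    # near the top of the scale
print(low_gap, high_gap)          # both 10.0

# Percentiles are NOT equal-interval: under a normal distribution,
# "10 percentile points" spans far more of the score scale at the top.
z = NormalDist().inv_cdf
print(round(z(0.60) - z(0.50), 2))  # middle of the distribution
print(round(z(0.99) - z(0.89), 2))  # top: a much wider span
```

This is why a 10-point RIT gain means the same amount of learning for a low-achieving student as for a high-achieving one, while a 10-point percentile jump does not.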
MAP Growth profile reports display results in an accessible, visually friendly manner at the student, class, school, and district level. For teachers, multiple views are available to put data in the right format to support the most common uses, like conferences, goal setting, and forming instructional groups. These reports incorporate normative data to make it quick and easy to compare your students’ growth and achievement to other students across the country. They also fold in linking studies to state assessments, the ACT, and the SAT to help you see which students are on track to meet important milestones.
NWEA professional learning workshops focus on taking effective actions, helping you advance your skills wherever you are in your interim assessment learning journey.
7. Myth: Computer adaptive interim assessments will show us how well our students are absorbing the school’s curriculum
Reality: While design and approach can vary between computer adaptive interim assessments, MAP Growth is curriculum agnostic rather than curriculum embedded. That means it’s not designed to tell you how well students are learning specific curriculum content but, rather, to evaluate whether your curriculum is effective at improving overall achievement in each subject assessed. MAP Growth can help validate whether high performance on curriculum-based assessments—including daily assignments, quizzes, and tests—translates to broader math or reading proficiency.
8. Myth: Computer adaptive interim assessments are too hard or too easy for some of our students
Reality: What’s harder, finishing a marathon in three hours or six? It’s a trick question, because difficulty can only be measured relative to the maximum effort level of each participant. A three-hour marathon for an experienced runner may take the same effort as a six-hour marathon for a first-time runner. The same is true for computer adaptive interim assessments. By design, MAP Growth presents approximately the same level of challenge to all students, and each student answers about 50% of questions correctly. The test provides an appropriate level of challenge for each student based on their ability.
9. Myth: Computer adaptive interim assessments don’t tell us much about our high-achieving students
Reality: Adaptive assessments do a much better job at measuring high-achieving students than fixed-form assessments. When the content on an assessment is static or fixed, the outcomes become binary to some extent: students either know what’s there or they don’t. When students answer all of the content on fixed-form assessments correctly, it’s clear they know that content, but it doesn’t tell you much about what they know beyond what is assessed. Because adaptive assessments continue to adjust based on responses, they can go much further in identifying each student’s current ceiling.
Furthermore, through growth norms, MAP Growth measures growth relative to similar-performing peers. Growth for students at the 99th percentile is compared to other students at the same achievement level—comparisons that can be motivating to high-achieving students.
And just because students are at the 99th percentile, it doesn’t mean there’s nothing left to learn.
Learn more
To learn more about MAP Growth—and maybe even dispel a few more myths—check out “The complete guide to MAP Growth.” This comprehensive video describes how the RIT score works, the value and purpose of measuring growth over time, how MAP Growth creates a personalized test experience for students, and much more.
For more information on bringing MAP Growth to your school or district, contact our sales team.