Journal article
“No Fun Games”: Engagement effects of two gameful assessment prototypes
2017
Journal of Research on Technology in Education, 50(2), 134–148.
By: Chase Nordengren

Abstract
Assessments with features of games propose to address student motivation deficits common in traditional assessments. This study examines the impact of two “gameful assessment” prototypes on student engagement and teacher perceptions among 391 Grades 3–7 students and 14 teachers in one Midwestern and one Northwestern school. Using mixed methods, it finds higher satisfaction for students taking gameful assessments, and conflicting attitudes from teachers regarding the impact of gameful assessments on students’ intrinsic motivation and desire to learn. The article concludes by discussing opportunities for continued iteration and innovation in gameful assessment design.
This article was published outside of NWEA.
Related Publications


Exploring the educational impacts of COVID-19
This visualization was developed to provide state-level insights into how students performed on MAP Growth in the 2020–2021 school year. Assessments are one indicator, among many, of the impact of COVID-19 on students. Our goal with this tool is to make visible the data that will inform the academic recovery efforts necessary in the 2022 school year and beyond.
By: Greg King
Topics: COVID-19 & schools, Innovations in reporting & assessment


Executive Summary: Content proximity spring 2022 pilot study
This executive summary outlines results from the Content Proximity spring 2022 pilot study, including information on the validity, reliability, and test score comparability of MAP Growth assessments that leverage this new item-selection algorithm.
By: Patrick Meyer, Ann Hu, Xueming (Sylvia) Li
Products: MAP Growth
Topics: Computer adaptive testing, Innovations in reporting & assessment, Test design


Content Proximity Spring 2022 Pilot Study Research Report
The purpose of this research report is to provide detailed information about updates to the MAP Growth item-selection algorithm. This brief includes results from the Content Proximity pilot study, including information on the validity, reliability, and test score comparability of MAP Growth assessments that leverage this new item-selection algorithm.
By: Patrick Meyer, Ann Hu, Xueming (Sylvia) Li
Products: MAP Growth
Topics: Computer adaptive testing, Innovations in reporting & assessment, Test design


This study compared the test-taking disengagement of students taking a remotely administered adaptive interim assessment in spring 2020 with their disengagement on the same assessment administered in school during fall 2019.
By: Steven Wise, Megan Kuhfeld, John Cronin
Topics: Equity, Innovations in reporting & assessment, School & test engagement


This study evaluates the effects of asking items throughout a reading passage (i.e., embedding items) to achieve a more precise measure of reading comprehension by removing barriers to students demonstrating their understanding. Results showed that embedding comprehension items within reading passages had a significant impact on the measurement of student achievement compared with answering items at the end of the passage.
By: Meg Guerreiro, Elizabeth Barker, Janice Johnson
Topics: Equity, Innovations in reporting & assessment, Reading & language arts


Variation in respondent speed and its implications: Evidence from an adaptive testing scenario
The more frequent collection of response time data is leading to an increased need for an understanding of how such data can be included in measurement models. Models for response time have been advanced, but relatively limited large-scale empirical investigations have been conducted. We take advantage of a large data set from the adaptive NWEA MAP Growth Reading Assessment to shed light on emergent features of response time behavior.
By: Benjamin Domingue, Klint Kanopka, Ben Stenhaug, James Soland, Megan Kuhfeld, Steven Wise, Chris Piech
Topics: School & test engagement, Innovations in reporting & assessment


A method for identifying partial test-taking engagement
This paper describes a method for identifying partial engagement and provides validation evidence to support its use and interpretation. When test events indicate the presence of partial engagement, effort-moderated scores should be interpreted cautiously.
By: Steven Wise, Megan Kuhfeld
Topics: Measurement & scaling, Innovations in reporting & assessment, School & test engagement