Innovations in reporting & assessment
Emerging technologies enable a variety of methods for assessing students and reporting data tailored to the needs of different stakeholders. These approaches can produce assessments that are more engaging for students, along with reporting that provides more insightful, useful information for students, families, and educators.
The rise of computer-based testing has brought with it the capability to measure more aspects of a test event than simply the answers selected or constructed by the test taker. One behavior that has drawn much research interest is the time test takers spend responding to individual multiple-choice items.
By: Steven Wise
The current study outlines a general process for measuring item-level effort that can be applied to an expanded set of item types and test-taking behaviors (such as omitted or constructed responses). This process, which is illustrated with data from a large-scale assessment program, should improve our ability to detect non-effortful test taking and perform individual score validation.
By: Steven Wise, Lingyun Gao
Assessments with game features aim to address the student motivation deficits common in traditional assessments. This study examines the impact of two “gameful assessment” prototypes on student engagement and teacher perceptions among 391 Grades 3–7 students and 14 teachers in one Midwestern and one Northwestern school.
By: Chase Nordengren
This manuscript reports results from two studies conducted during the development of KinderTEK, an iPad-delivered kindergarten mathematics intervention, to determine the relationship between instructor-reported technology experience and intervention implementation, as measured by student use.
By: Lina Shanley, Mari Strand Cary, Ben Clarke, Meg Guerreiro, Michael Thier
In this CASEL Measuring SEL blog, James Soland shares how work with Santa Ana Unified School District led to new insights on how item response times and test metadata can shed light on student SEL.
By: James Soland
When computer-based tests are used, disengagement can be detected through occurrences of rapid-guessing behavior. This empirical study investigated the impact of a new effort-monitoring feature that detects rapid guessing as it occurs and notifies proctors that a test taker has become disengaged.
This paper briefly discusses the trade-offs involved in transitioning to computer-based testing, and then focuses on a relatively unexplored benefit of computer-based tests: the control of construct-irrelevant factors that can threaten test score validity.
By: Steven Wise
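The rapid-guessing detection described in the studies above can be sketched as a simple response-time threshold rule. The function names and the fixed 3-second threshold below are illustrative assumptions, not the studies' actual implementation; published approaches typically derive a separate threshold for each item from its response-time distribution:

```python
# Minimal sketch of rapid-guess flagging, assuming a fixed per-item
# time threshold (real systems derive item-specific thresholds).

def flag_rapid_guesses(response_times, threshold=3.0):
    """Flag each response whose time (in seconds) falls below the threshold."""
    return [t < threshold for t in response_times]

def response_time_effort(response_times, threshold=3.0):
    """Proportion of items answered with solution behavior (not rapid guesses)."""
    flags = flag_rapid_guesses(response_times, threshold)
    return 1 - sum(flags) / len(flags)

times = [12.4, 1.1, 8.0, 0.9, 15.2]   # seconds spent on each item
print(flag_rapid_guesses(times))       # [False, True, False, True, False]
print(response_time_effort(times))     # 0.6
```

A proctor-notification feature like the one studied would simply watch such flags accumulate during a test event and alert when they cross some count.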