Parameter estimation accuracy of the effort-moderated IRT model under multiple assumption violations
By: James Soland, Joseph Rios
This session from the 2020 National Council on Measurement in Education virtual conference presents new research findings on understanding and managing test-taking disengagement.
Soland, J., & Rios, J. (2020, September). Parameter estimation accuracy of the effort-moderated IRT model under multiple assumption violations. National Council on Measurement in Education 2020 virtual conference.
This study examined the utility of response time-based analyses in understanding the behavior of unmotivated test takers. For an adaptive achievement test, patterns of observed rapid-guessing behavior and item response accuracy were compared to the behavior expected under several types of models that have been proposed to represent unmotivated test-taking behavior.
The effective use of student and school descriptive indicators of learning progress: From the conditional growth index to the learning productivity measurement system
Modeling student growth has been a federal policy requirement under No Child Left Behind (NCLB). In addition to tracking student growth, the latest Race to the Top (RTTT) federal education policy stipulates the evaluation of teacher effectiveness from the perspective of the added value that teachers contribute to student learning and growth. Both student growth modeling and teacher value-added modeling are complex undertakings.
By: Yeow Meng Thum
In this podcast, Nate Jensen discusses the value of assessments aligned to the Common Core State Standards and the misconceptions that accompanied the implementation of new assessments in some states.
Learning First Alliance, Get It Right podcast
Mentions: Nate Jensen
This study investigates the use of screening assessments within the increasingly popular Response to Intervention (RTI) framework, specifically seeking to collect concurrent validity evidence on one potential new screening tool, the Independent Reading Level Assessment (IRLA).
By: Beth Tarasawa, Nicole Ralston, Jacqueline Waggoner, Amy Jackson
This study examined the utility of response time‐based analyses in understanding the behavior of unmotivated test takers. For the data from an adaptive achievement test, patterns of observed rapid‐guessing behavior and item response accuracy were compared to the behavior expected under several types of models that have been proposed to represent unmotivated test taking behavior.
This study examined the measurement stability of a set of Rasch measurement scales that have been in place for almost 40 years.
The current study outlines a general process for measuring item-level effort that can be applied to an expanded set of item types and test-taking behaviors (such as omitted or constructed responses). This process, which is illustrated with data from a large-scale assessment program, should improve our ability to detect non-effortful test taking and perform individual score validation.
By: Steven Wise, Lingyun Gao