Measuring the impact of test disengagement on estimates of educational effectiveness
Learn more about our examination of student disengagement and how it may bias estimates of effectiveness based on observed test results.
Predictive analytics in education can offer real benefits, provided educators heed the differences between how these tools are used in industry and how they should be used in schooling. Perhaps most important, teachers already know a great deal about their students: far more than an investor knows about a stock, or a baseball scout knows about an up-and-coming pitcher.
By: James Soland
Whenever the purpose of measurement is to inform an inference about a student’s achievement level, it is important that we be able to trust that the student’s test score accurately reflects what that student knows and can do. Such trust requires the assumption that a student’s test event is not unduly influenced by construct-irrelevant factors that could distort their score. This article examines one such factor, test-taking motivation, which tends to induce a person-specific, systematic negative bias on test scores.
By: Steven Wise
The growing presence of computer-based testing has brought with it the capability to routinely capture the time that test takers spend on individual test items. This, in turn, has led to an increased interest in potential applications of response time in measuring intellectual ability and achievement. Goldhammer (this issue) provides a very useful overview of much of the research in this area, and he provides a thoughtful analysis of the speed-ability trade-off and its impact on measurement.
By: Steven Wise
In this podcast, Nate Jensen discusses the value of assessments aligned to the Common Core State Standards and the misconceptions that accompanied the implementation of new assessments in some states.
Learning First Alliance, Get It Right podcast
Mentions: Nate Jensen
This study investigates the use of screening assessments within the increasingly popular Response to Intervention (RTI) framework, specifically seeking to collect concurrent validity evidence for one potential new screening tool, the Independent Reading Level Assessment (IRLA).
By: Beth Tarasawa, Nicole Ralston, Jacqueline Waggoner, Amy Jackson
Positive student achievement and growth results for students in New York suggest that improvements to the teacher evaluation process that emphasize strong evaluation procedures, the systematic collection of evidence of teacher performance, and the use of data to inform the process hold far more promise for improving educator effectiveness than a narrower, punitive approach.
This study examined the utility of response time-based analyses in understanding the behavior of unmotivated test takers. Using data from an adaptive achievement test, patterns of observed rapid-guessing behavior and item response accuracy were compared to the behavior expected under several types of models that have been proposed to represent unmotivated test-taking behavior.
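The core idea behind response time-based analyses of this kind can be illustrated with a minimal sketch: responses faster than some per-item time threshold are flagged as rapid guesses, and the share of non-flagged responses summarizes a test taker's effort. The threshold value and function names below are hypothetical illustrations, not the specifications used in the study.

```python
# Illustrative sketch of response-time-based rapid-guess flagging.
# The 3-second threshold is a hypothetical assumption for illustration;
# operational thresholds are typically set per item.
RAPID_GUESS_THRESHOLD_SECONDS = 3.0

def flag_rapid_guesses(response_times):
    """Mark each response True if it was faster than the threshold."""
    return [t < RAPID_GUESS_THRESHOLD_SECONDS for t in response_times]

def effort_proportion(response_times):
    """Proportion of items answered with solution behavior (not rapid guesses)."""
    flags = flag_rapid_guesses(response_times)
    return 1 - sum(flags) / len(flags)

times = [12.4, 1.8, 25.0, 2.1, 9.7]  # seconds per item
print(flag_rapid_guesses(times))   # [False, True, False, True, False]
print(effort_proportion(times))    # 0.6
```

In practice, flagged responses can then be compared against model-based expectations (for example, accuracy near chance on flagged items) to judge how well a given model of unmotivated test taking fits the observed behavior.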