Journal article
The phantom collapse of student achievement in New York
2014
By: John Cronin, Nate Jensen

Abstract
When New York state released the first results of the exams under the Common Core State Standards, many wrongly believed that the results showed dramatic declines in student achievement. A closer look at the results suggested that student achievement may actually have increased. Another lesson from the exams is that states need to closely coordinate new data with existing data when they switch to different measuring instruments.
This article was published outside of NWEA. The full text can be found at the link above.
Topics: Measurement & scaling
Related Topics


Effort analysis: Individual score validation of achievement test data
Whenever the purpose of measurement is to inform an inference about a student’s achievement level, it is important that we be able to trust that the student’s test score accurately reflects what that student knows and can do. Such trust requires the assumption that a student’s test event is not unduly influenced by construct-irrelevant factors that could distort his score. This article examines one such factor—test-taking motivation—that tends to induce a person-specific, systematic negative bias on test scores.
By: Steven Wise
Topics: School & test engagement, Innovations in reporting & assessment, Measurement & scaling


Modeling student test-taking motivation in the context of an adaptive achievement test
This study examined the utility of response-time-based analyses in understanding the behavior of unmotivated test takers. For an adaptive achievement test, patterns of observed rapid-guessing behavior and item response accuracy were compared to the behavior expected under several types of models that have been proposed to represent unmotivated test-taking behavior (a rough response-time flag of this kind is sketched below).
By: Steven Wise, G. Gage Kingsbury
Topics: School & test engagement, Growth modeling, Measurement & scaling
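
For readers unfamiliar with the response-time approach, the sketch below flags a response as a rapid guess when its latency falls under a per-item threshold and summarizes a test event as the proportion of non-rapid responses. The threshold rule (a fraction of each item's median time), the data layout, and the function names are assumptions made for illustration; they are not the specific models compared in the article.

```python
# Illustrative sketch of response-time-based rapid-guess flagging.
# The fraction-of-median-time threshold and the data layout are
# assumptions for illustration, not the models compared in the article.

from dataclasses import dataclass
from statistics import median

@dataclass
class Response:
    item_id: str
    seconds: float      # observed response time
    correct: bool       # item response accuracy

def item_thresholds(responses, fraction=0.10):
    """Set a rapid-guess threshold per item as a fraction of its median time."""
    by_item = {}
    for r in responses:
        by_item.setdefault(r.item_id, []).append(r.seconds)
    return {item: fraction * median(times) for item, times in by_item.items()}

def response_time_effort(test_event, thresholds):
    """Proportion of a test taker's responses that are NOT rapid guesses."""
    solution_behavior = [r.seconds >= thresholds[r.item_id] for r in test_event]
    return sum(solution_behavior) / len(solution_behavior)

# Example: one examinee's test event scored against pooled thresholds.
pool = [Response("A", 42, True), Response("A", 3, False),
        Response("B", 30, True), Response("B", 28, False)]
thresholds = item_thresholds(pool)
examinee = [Response("A", 2.0, False), Response("B", 31.0, True)]
print(f"Response-time effort: {response_time_effort(examinee, thresholds):.2f}")
```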


Response time as an indicator of test taker speed: assumptions meet reality
The growing presence of computer-based testing has brought with it the capability to routinely capture the time that test takers spend on individual test items. This, in turn, has led to increased interest in potential applications of response time in measuring intellectual ability and achievement. Goldhammer (this issue) provides a very useful overview of much of the research in this area and offers a thoughtful analysis of the speed-ability trade-off and its impact on measurement.
By: Steven Wise
Topics: School & test engagement, Innovations in reporting & assessment, Measurement & scaling


Is Moneyball the next big thing in education?
Predictive analytics in education can offer a benefit as long as educators heed the differences between how the tools are used in industry and how they should be used differently in schooling. Perhaps most important, teachers already know a great deal about their students — far more than an investor knows about a stock or a baseball scout about an up-and-coming pitcher.
By: James Soland


Using a model of analysts’ judgments to augment an item calibration process
A key finding from behavioral decision-making research is that a parametric model of human decision making often outperforms the decision maker. We exploit this finding by seeking a model that mimics how analysts integrate FT item-level statistics and graphical performance plots to predict an analyst's assignment of an item's status (a hypothetical stand-in for such a model is sketched below).
By: Yeow Meng Thum, Carl Hauser, Wei He, Lingling Ma
Topics: Measurement & scaling
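
The article's model of analyst judgments is not reproduced here. As a hypothetical stand-in, the sketch below fits a logistic regression on a handful of item-level statistics to predict an analyst's accept/reject decision; the feature set, the toy data, and the choice of logistic regression are assumptions made for illustration.

```python
# Hypothetical stand-in for a parametric model of analyst judgments:
# a logistic regression mapping item-level statistics to the analyst's
# accept/reject decision. Features and data are illustrative only.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: point-biserial correlation, |difficulty - target|, sample size (hundreds)
X = np.array([
    [0.45, 0.2, 8.0],
    [0.05, 1.5, 2.0],
    [0.38, 0.4, 6.5],
    [0.12, 0.9, 3.0],
    [0.50, 0.1, 9.0],
    [0.08, 1.2, 2.5],
])
# 1 = analyst accepted the item, 0 = analyst rejected it (toy labels)
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Probability that an analyst would accept a new field-test item
new_item = np.array([[0.33, 0.6, 5.0]])
print(f"P(accept) = {model.predict_proba(new_item)[0, 1]:.2f}")
```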


The effective use of student and school descriptive indicators of learning progress: From the conditional growth index to the learning productivity measurement system
Modeling student growth has been a federal policy requirement under No Child Left Behind (NCLB). In addition to tracking student growth, the latest Race to the Top (RTTT) federal education policy stipulates the evaluation of teacher effectiveness in terms of the value that teachers add to student learning and growth. Student growth modeling and teacher value-added modeling are complex; a minimal sketch of the conditional growth index named in the title appears after this entry.
By: Yeow Meng Thum
Topics: Measurement & scaling, Growth modeling, Student growth & accountability policies
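
As context for the title, the sketch below computes a conditional growth index in its commonly described form: a student's observed growth minus the norm-expected growth for comparable students, divided by the standard deviation of that growth. The numeric inputs are placeholders, not NWEA norm values, and the article's formulation may differ.

```python
# Minimal sketch of a conditional growth index (CGI) as a standardized
# difference between observed and norm-expected growth. The expected-growth
# and SD values here are placeholders, not NWEA norm values.

def conditional_growth_index(observed_growth, expected_growth, growth_sd):
    """Standardize a student's growth against the growth norm for
    comparable students (same grade, subject, starting score, and
    instructional time). A CGI of 0 means growth exactly at the norm."""
    return (observed_growth - expected_growth) / growth_sd

# Example: a student grew 9 points when comparable students are expected
# to grow 6 points with a standard deviation of 4 points.
cgi = conditional_growth_index(observed_growth=9, expected_growth=6, growth_sd=4)
print(f"Conditional growth index: {cgi:.2f}")   # 0.75 -> above-typical growth
```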


The potential of adaptive assessment
In this article, the authors explain how computerized adaptive testing (CAT) provides a more precise, accurate picture of the achievement levels of both low-achieving and high-achieving students by adjusting the difficulty of questions as the test proceeds (a minimal selection-and-scoring loop of this kind is sketched below). The immediate, informative test results enable teachers to differentiate instruction to meet individual students’ current academic needs.
By: G. Gage Kingsbury, Mike Nesterak, Edward Freeman
Topics: Measurement & scaling, Innovations in reporting & assessment, Student growth & accountability policies
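
To make the adaptive mechanism concrete, the sketch below shows one common way an adaptive test can adjust itself: pick the unused item whose difficulty best matches the current ability estimate, then re-estimate ability from the responses so far under a Rasch model. The item bank, grid-search scoring, and fixed test length are assumptions made for illustration, not the authors' algorithm.

```python
# Minimal computerized adaptive testing (CAT) sketch under a Rasch model:
# administer the unused item whose difficulty is closest to the current
# ability estimate, then re-estimate ability from all responses so far.
# Item bank, stopping rule, and grid-search scoring are illustrative.

import math
import random

def p_correct(theta, b):
    """Rasch probability of a correct response to an item of difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_theta(responses):
    """Maximum-likelihood ability estimate over a coarse grid of theta values."""
    grid = [x / 10 for x in range(-40, 41)]        # -4.0 .. 4.0
    def log_lik(theta):
        return sum(math.log(p_correct(theta, b)) if correct
                   else math.log(1.0 - p_correct(theta, b))
                   for b, correct in responses)
    return max(grid, key=log_lik)

def run_cat(item_bank, answer_fn, test_length=5, start_theta=0.0):
    """Administer test_length items adaptively and return the final estimate."""
    theta, responses, remaining = start_theta, [], list(item_bank)
    for _ in range(test_length):
        b = min(remaining, key=lambda diff: abs(diff - theta))  # best-matched item
        remaining.remove(b)
        responses.append((b, answer_fn(b)))
        theta = estimate_theta(responses)
    return theta

# Example: simulate a student whose true ability is 1.0 on a toy item bank.
random.seed(1)
bank = [-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0]

def simulated_student(b):
    return random.random() < p_correct(1.0, b)

print(f"Estimated ability: {run_cat(bank, simulated_student):.1f}")
```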