An intelligent CAT that can deal with disengaged test taking
Wise, S. (2020). An intelligent CAT that can deal with disengaged test taking. In H. Jiao & R. W. Lissitz (Eds.), Application of artificial intelligence to assessment (pp. 161-174). Information Age Publishing.
By: Steven Wise
This book presents varied applications of artificial intelligence (AI) in test development, including research and successful examples of using AI technology in automated item generation, automated test assembly, automated scoring, and computerized adaptive testing.
This book was published outside of NWEA.
This study examined the utility of response time-based analyses in understanding the behavior of unmotivated test takers. For an adaptive achievement test, patterns of observed rapid-guessing behavior and item response accuracy were compared to the behavior expected under several types of models that have been proposed to represent unmotivated test taking behavior.
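Rapid-guessing analyses like the one described here typically begin by flagging responses whose times fall below an item-level threshold and summarizing a test taker's overall effort. The sketch below illustrates that general idea with a simple fixed threshold and the response-time effort (RTE) index; the 3-second threshold and data are illustrative assumptions, not values from the chapter.

```python
# Illustrative sketch of threshold-based rapid-guess flagging and
# response-time effort (RTE). The fixed 3-second threshold is an
# assumption for demonstration; operational work usually derives
# thresholds per item.

def flag_rapid_guesses(response_times, threshold=3.0):
    """Flag each response as a rapid guess if its time (seconds) is below the threshold."""
    return [t < threshold for t in response_times]

def response_time_effort(response_times, threshold=3.0):
    """RTE: proportion of responses that are NOT rapid guesses (1.0 = full effort)."""
    flags = flag_rapid_guesses(response_times, threshold)
    return 1 - sum(flags) / len(flags)

times = [12.4, 1.1, 45.0, 2.3, 30.7, 0.8, 22.5, 19.9]
print(flag_rapid_guesses(times))
print(response_time_effort(times))  # 5 of 8 responses exceed the threshold -> 0.625
```

A low RTE value would then signal a test event whose score may not be trustworthy, which is the kind of evidence the study compares against model-based expectations.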
The effective use of student and school descriptive indicators of learning progress: From the conditional growth index to the learning productivity measurement system
Modeling student growth has been a federal policy requirement under No Child Left Behind (NCLB). In addition to tracking student growth, the Race to the Top (RTTT) federal education policy stipulates the evaluation of teacher effectiveness in terms of the value that teachers add to student learning and growth. Student growth modeling and teacher value-added modeling are complex.
By: Yeow Meng Thum
This article describes the decisions made in the development of CATs that influence and might threaten content alignment. It outlines a process for evaluating alignment that is sensitive to these threats and gives an empirical example of the process.
Propensity score stratification using multilevel models to examine charter school achievement effects
A particular point of debate is the impact of transferring from a traditional public school to a charter school on student achievement and growth. We employ propensity score stratification and multilevel models to balance key covariates between treatment and control groups in a cross-state sample of students, providing a more complex picture of charter school achievement effects in a quasi-experimental context.
By: Beth Tarasawa, Yun Xiang
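Propensity score stratification works by grouping students with similar estimated probabilities of treatment and comparing outcomes within each group. The sketch below shows that stratification step in a minimal form, assuming propensity scores have already been estimated (e.g., from a logistic regression of charter enrollment on student covariates); the data and field names are illustrative, not from the study.

```python
# Minimal sketch of propensity score stratification. Each record is a
# dict with an estimated propensity score, a treatment flag, and an
# outcome; all names and values here are illustrative assumptions.
from statistics import mean

def stratify(records, n_strata=5):
    """Sort records by propensity score and split them into equal-sized strata."""
    ordered = sorted(records, key=lambda r: r["pscore"])
    n = len(ordered)
    bounds = [round(i * n / n_strata) for i in range(n_strata + 1)]
    return [ordered[bounds[i]:bounds[i + 1]] for i in range(n_strata)]

def stratified_effect(records, n_strata=5):
    """Size-weighted average of within-stratum treated-minus-control outcome differences."""
    strata = stratify(records, n_strata)
    total, weighted = 0, 0.0
    for s in strata:
        treated = [r["y"] for r in s if r["treated"]]
        control = [r["y"] for r in s if not r["treated"]]
        if treated and control:  # skip strata lacking one group
            weighted += len(s) * (mean(treated) - mean(control))
            total += len(s)
    return weighted / total

# Toy data: each treated student outscores a matched control by 3 points.
recs = []
for i in range(20):
    p = i / 20
    recs.append({"pscore": p, "treated": False, "y": p * 10})
    recs.append({"pscore": p, "treated": True, "y": p * 10 + 3})
print(round(stratified_effect(recs, n_strata=4), 6))  # → 3.0
```

In practice the within-stratum comparisons would feed into multilevel models, as the article describes, rather than a simple weighted mean.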
Whenever the purpose of measurement is to inform an inference about a student’s achievement level, it is important that we be able to trust that the student’s test score accurately reflects what that student knows and can do. Such trust requires the assumption that a student’s test event is not unduly influenced by construct-irrelevant factors that could distort their score. This article examines one such factor—test-taking motivation—which tends to induce a person-specific, systematic negative bias on test scores.
By: Steven Wise
Predictive analytics in education can offer real benefits, as long as educators heed the differences between how these tools are used in industry and how they should be used in schooling. Perhaps most important, teachers already know a great deal about their students — far more than an investor knows about a stock or a baseball scout about an up-and-coming pitcher.
By: James Soland
The growing presence of computer-based testing has brought with it the capability to routinely capture the time that test takers spend on individual test items. This, in turn, has led to increased interest in potential applications of response time in measuring intellectual ability and achievement. Goldhammer (this issue) provides a very useful overview of much of the research in this area and offers a thoughtful analysis of the speed-ability trade-off and its impact on measurement.
By: Steven Wise