Steven Wise, PhD
Senior Research Fellow
Steven Wise has published extensively over the past three decades in applied measurement, with particular emphasis on computer-based testing and the psychology of test taking. In recent years, his research has focused primarily on practical methods for addressing the measurement problems posed by low examinee engagement on achievement tests.
Dr. Wise sits on the editorial boards of several academic journals and has provided psychometric consultation to a variety of organizations, including the state departments of education in Maryland, Virginia, and Nebraska; the National Assessment Governing Board; the American Board for Certification of Teacher Excellence; and the GED Testing Service. He served as vice president of research at NWEA and as director of the PhD program in assessment and measurement at James Madison University. He holds a PhD in educational psychology, measurement, and statistics from the University of Illinois at Urbana-Champaign.
Research by Steven Wise
Positive achievement and growth results for students in New York suggest that improvements to the teacher evaluation process hold promise for increasing educator effectiveness far more than a narrower, punitive approach. These improvements emphasize strong evaluation procedures, the systematic collection of evidence of teacher performance, and the use of data to inform the process.
This study examined the utility of response time-based analyses in understanding the behavior of unmotivated test takers. For an adaptive achievement test, patterns of observed rapid-guessing behavior and item response accuracy were compared to the behavior expected under several types of models that have been proposed to represent unmotivated test-taking behavior.
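The response time-based approach described here can be illustrated with a minimal sketch. The code below is not the study's published implementation; it shows the general idea of flagging item responses whose latency falls below a threshold as rapid guesses and summarizing effort as the proportion of non-flagged responses (a response-time effort style index). The fixed three-second threshold and the example latencies are assumptions for illustration; operational thresholds are typically set per item.

```python
# Sketch: flag rapid guesses with a fixed response-time threshold and
# summarize test-taking effort as the proportion of solution-behavior
# responses. Threshold and data are hypothetical.

THRESHOLD_SECONDS = 3.0  # assumed constant; real thresholds vary by item

def flag_rapid_guesses(response_times, threshold=THRESHOLD_SECONDS):
    """True where the item latency suggests a rapid guess."""
    return [rt < threshold for rt in response_times]

def response_time_effort(response_times, threshold=THRESHOLD_SECONDS):
    """Proportion of items NOT flagged as rapid guesses (1.0 = full effort)."""
    flags = flag_rapid_guesses(response_times, threshold)
    return 1 - sum(flags) / len(flags)

times = [12.4, 1.1, 8.9, 0.8, 15.2]  # seconds per item (hypothetical)
print(flag_rapid_guesses(times))      # [False, True, False, True, False]
print(round(response_time_effort(times), 2))  # 0.6
```

In practice, flagged responses can then be compared against chance-level accuracy, which is the kind of comparison between observed rapid-guessing behavior and model-expected behavior that the study examines.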
Whenever the purpose of measurement is to inform an inference about a student’s achievement level, it is important that we be able to trust that the student’s test score accurately reflects what that student knows and can do. Such trust requires the assumption that a student’s test event is not unduly influenced by construct-irrelevant factors that could distort their score. This article examines one such factor—test-taking motivation—that tends to induce a person-specific, systematic negative bias on test scores.
By: Steven Wise
The growing presence of computer-based testing has brought with it the capability to routinely capture the time that test takers spend on individual test items. This, in turn, has led to an increased interest in potential applications of response time in measuring intellectual ability and achievement. Goldhammer (this issue) provides a very useful overview of much of the research in this area, and he provides a thoughtful analysis of the speed-ability trade-off and its impact on measurement.
This article describes the decisions made in the development of computerized adaptive tests (CATs) that influence, and might threaten, content alignment. It outlines a process for evaluating alignment that is sensitive to these threats and gives an empirical example of the process.
This integrative review examines the motivational benefits of computerized adaptive tests (CATs) and demonstrates that they can have important advantages over conventional tests, both in identifying instances when examinees are exhibiting low effort and in effectively addressing the validity threat posed by unmotivated examinees.