Journal article
The phantom collapse of student achievement in New York
2014
By: John Cronin, Nate Jensen

Abstract
When New York State released the first results of its exams aligned to the Common Core State Standards, many wrongly believed the results showed dramatic declines in student achievement. A closer look at the results suggested that student achievement may actually have increased. Another lesson from the exams is that when states switch to different measuring instruments, they need to closely coordinate the new data with existing data.
This article was published outside of NWEA. The full text can be found at the link above.
Topics: Measurement & scaling
Related Topics


Three measures of test adaptation based on optimal test information
This study extends the work of Reckase, Zu, and Kim (2019) by introducing three new measures of test adaptation.
By: G. Gage Kingsbury, Steven Wise
Topics: Measurement & scaling


Do response styles affect estimates of growth on social-emotional constructs? Evidence from four years of longitudinal survey scores
In this study, we conducted empirical and simulation analyses in which we scored surveys using item response theory (IRT) models that do and do not account for response styles, and then used those different scores in growth models and compared results.
By: James Soland, Megan Kuhfeld
Topics: Social-emotional learning, Growth modeling, Measurement & scaling


Measuring social-emotional learning – the tradeoff between measuring narrower skills versus broad competencies
Is social-emotional learning (SEL) a set of discrete skills or a broader competency? New research provides insights.
By: Megan Kuhfeld


Validating the SEDA measures of district educational opportunities via a common assessment
This study describes a convergent validity analysis of the SEDA growth estimates in mathematics and English Language Arts (ELA) by comparing the SEDA estimates against estimates derived from NWEA’s MAP Growth assessments.
By: Megan Kuhfeld, Thurston Domina, Paul Hanselman
Topics: Measurement & scaling, Equity, Growth modeling


Using assessment metadata to quantify the impact of test disengagement on estimates of educational effectiveness
In this study, we introduce disengagement metrics for a policy and evaluation audience, including a discussion of how disengagement might bias estimates of educational effectiveness. Analytically, we use data from a state administering a computer-based test to examine the effect of test disengagement on estimates of school contributions to student growth, achievement gaps, and summer learning loss.
By: Megan Kuhfeld, James Soland
Topics: School & test engagement, Measurement & scaling, Student growth & accountability policies


Using assessment metadata to quantify the impact of test disengagement on estimates of educational effectiveness
In this study, we examine the impact of two techniques to account for test disengagement—(a) removing unengaged test takers from the sample and (b) adjusting test scores to remove rapidly guessed items—on estimates of school contributions to student growth, achievement gaps, and summer learning loss.
By: Megan Kuhfeld, James Soland
Topics: School & test engagement, Measurement & scaling, Student growth & accountability policies


Reconciling long-term education policy goals with short-term school accountability models
Schools are increasingly held accountable for their contributions to students’ academic growth in math and reading. Under the Every Student Succeeds Act, most states estimate how much schools improve student achievement over time and use those growth metrics to identify the bottom 5% of schools for remediation.
By: James Soland, Yeow Meng Thum, Greg King
Topics: Student growth & accountability policies, Growth modeling, Measurement & scaling