Journal article

Examining the performance of the trifactor model for multiple raters

December 2021

Published in:

Applied Psychological Measurement, 46(1). https://doi.org/10.1177/01466216211051728

By: James Soland, Megan Kuhfeld


Abstract

Researchers in the social sciences often obtain ratings of a construct of interest provided by multiple raters. While using multiple raters helps avoid the subjectivity of any single person’s responses, rater disagreement can be a problem. A variety of models exist to address rater disagreement in both structural equation modeling and item response theory frameworks. Recently, Bauer et al. (2013) developed a model, referred to as the “trifactor model,” to provide applied researchers with a straightforward way of estimating scores that are purged of rater-idiosyncratic variance. Although the model is intended to be usable and interpretable, little is known about the circumstances under which it performs well and those under which it does not. We conduct simulation studies to examine the performance of the trifactor model under a range of sample sizes and model specifications and then compare model fit, bias, and convergence rates.
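To make the abstract's variance decomposition concrete, the sketch below simulates data consistent with a trifactor structure: each rating loads on a general factor (the target construct), a rater-perspective factor (variance idiosyncratic to each rater), and an item-specific factor. All factor labels, loadings, and sample sizes here are illustrative assumptions, not values from the article's simulation design.

```python
import numpy as np

rng = np.random.default_rng(42)

n_persons = 5000
n_raters = 3   # e.g., self, parent, teacher (hypothetical rater set)
n_items = 4    # same items rated by every rater

# Mutually independent standard-normal latent factors
general = rng.normal(size=(n_persons, 1))                # construct shared by all raters
rater_view = rng.normal(size=(n_persons, n_raters))      # each rater's idiosyncratic perspective
item_spec = rng.normal(size=(n_persons, n_items))        # item wording effects shared across raters

# Hypothetical loadings and residual SD (assumed, for illustration only)
lam_g, lam_r, lam_i, resid_sd = 0.7, 0.5, 0.3, 0.4

# ratings[p, r, i] = rater r's response to item i for person p
ratings = (
    lam_g * general[:, :, None]                          # broadcasts over raters and items
    + lam_r * rater_view[:, :, None]                     # broadcasts over items
    + lam_i * item_spec[:, None, :]                      # broadcasts over raters
    + resid_sd * rng.normal(size=(n_persons, n_raters, n_items))
)
```

Because the factors are independent, two items scored by the same rater share both the general and the rater-perspective factors (implied covariance 0.7² + 0.5² = 0.74), while items scored by different raters share only the general factor (0.7² = 0.49); a fitted trifactor model separates these sources so that the general-factor score is purged of the rater-specific part.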



This article was published outside of NWEA. The full text can be found at the link above.
