SLU's publication database (SLUpub)

Abstract

This article addresses the problem of assessing overall multivariate chance-corrected interobserver agreement when targets have been rated by different sets of judges (not necessarily equal in number). The proposed approach builds on Janson and Olsson's multivariate generalization of Cohen's kappa but incorporates weighting for the number of judges and applies an expression for expected disagreement suited to the case of different judges. The authors argue that the attractiveness of this approach to multivariate agreement measurement lies in the interpretability of the expected- and observed-disagreement terms as average distances between observations, and that addressing agreement without regard to the covariance structure among the variables has advantages in simplicity and interpretability. Correspondences to earlier approaches are noted, and the application of the proposed measure is illustrated with hypothetical data sets.
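The record gives only this verbal summary, not the paper's formulas. For orientation, the sketch below illustrates a kappa-type statistic of the general form 1 − D_obs/D_exp, with both terms computed as average squared Euclidean distances between multivariate observations, which is one plausible reading of the abstract. The function name `multivariate_agreement`, the equal weighting of judge pairs, and the squared-distance metric are illustrative assumptions, not the estimator derived in the paper.

```python
import numpy as np

def multivariate_agreement(ratings):
    """Kappa-type agreement for targets rated by (possibly different)
    sets of judges on several variables.

    ratings: list of 2-D arrays, one per target, each of shape
    (n_judges_for_that_target, n_variables).

    Returns 1 - D_obs / D_exp, where both disagreement terms are
    average squared Euclidean distances between observations
    (a hypothetical reading of the abstract, not the paper's
    exact weighting).
    """
    # Observed disagreement: average squared distance between all
    # pairs of judges who rated the SAME target, pooled over targets.
    obs_sum, obs_pairs = 0.0, 0
    for r in ratings:
        m = r.shape[0]
        for i in range(m):
            for j in range(i + 1, m):
                obs_sum += np.sum((r[i] - r[j]) ** 2)
                obs_pairs += 1
    d_obs = obs_sum / obs_pairs

    # Expected disagreement under chance: average squared distance
    # between observations belonging to DIFFERENT targets.
    exp_sum, exp_pairs = 0.0, 0
    for a in range(len(ratings)):
        for b in range(a + 1, len(ratings)):
            for x in ratings[a]:
                for y in ratings[b]:
                    exp_sum += np.sum((x - y) ** 2)
                    exp_pairs += 1
    d_exp = exp_sum / exp_pairs

    return 1.0 - d_obs / d_exp

# Example: three targets rated on two variables by 3, 2, and 4 judges.
rng = np.random.default_rng(0)
data = [rng.normal(size=(m, 2)) for m in (3, 2, 4)]
print(multivariate_agreement(data))
```

Under this reading, perfect agreement within each target gives a value of 1, while agreement no better than between-target chance gives a value near 0, matching the usual interpretation of chance-corrected coefficients.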

Keywords

interrater reliability; multivariate analysis; rating; measurement

Published in

Educational and Psychological Measurement
2004, volume: 64, number: 1, pages: 62-70
Publisher: SAGE PUBLICATIONS INC

SLU authors

UKÄ research subject area

Probability Theory and Statistics

Publication identifiers

  • DOI: https://doi.org/10.1177/0013164403260195

Permanent link to this page (URI)

https://res.slu.se/id/publ/3095