Research article - Peer-reviewed, 2018

Multiple-rater kappas for binary data: Models and interpretation

Stoyan, Dietrich; Pommerening, Arne; Hummel, Manuela; Kopp-Schneider, Annette


Interrater agreement on binary measurements with more than two raters is often assessed using Fleiss' κ, which is known to be difficult to interpret. In situations where the same raters rate all items, however, the far less known κ suggested by Conger, Hubert, and Schouten is more appropriate. We try to support the interpretation of these characteristics by investigating various models or scenarios of rating. Our analysis, which is restricted to binary data, shows that conclusions concerning interrater agreement by κ heavily depend on the population of items or subjects considered, even if the raters have identical behavior. The standard scale proposed by Landis and Koch, which verbally interprets numerical values of κ, appears to be rather subjective. On the basis of one of the models for rater behavior, we suggest an alternative verbal interpretation for kappa. Finally, we reconsider a classical example from pathology to illustrate the application of our methods and models. We also look for subgroups of raters with similar rating behavior using hierarchical clustering.
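As an illustration of the statistic the abstract discusses, here is a minimal sketch of Fleiss' κ for binary data, written for this page rather than taken from the article (the function name and input layout are our assumptions): ratings are given as an items × raters table of 0/1 values, with the same number of raters per item.

```python
# Hypothetical sketch of Fleiss' kappa for binary ratings (not the authors' code).
# `ratings` is a list of per-item tuples; each tuple holds the 0/1 ratings
# the n raters gave to that item.

def fleiss_kappa_binary(ratings):
    N = len(ratings)       # number of items
    n = len(ratings[0])    # raters per item (assumed constant)
    # Count of "1" ratings per item; the "0" count is n - ones[i]
    ones = [sum(r) for r in ratings]
    # Overall category proportions
    p1 = sum(ones) / (N * n)
    p0 = 1.0 - p1
    # Mean per-item agreement P-bar: for each item, the proportion of
    # agreeing rater pairs among all n*(n-1) ordered pairs
    P_bar = sum(
        (o * (o - 1) + (n - o) * (n - o - 1)) / (n * (n - 1))
        for o in ones
    ) / N
    # Chance agreement and kappa
    P_e = p1 ** 2 + p0 ** 2
    return (P_bar - P_e) / (1.0 - P_e)
```

With perfect agreement on a balanced set of items (e.g. half rated all-1, half all-0) the function returns 1; with two raters who always disagree it returns -1. The Conger-Hubert-Schouten variant the article recommends differs in how chance agreement is computed (per-rater marginals rather than pooled ones) and is not shown here.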


Keywords: binary ratings; carcinoma data; Conger-Hubert-Schouten kappa; Fleiss' kappa; modeling rater behavior

Published in

Biometrical Journal
2018, Volume: 60, Number: 2, Pages: 381-394

    UKÄ Subject classification

    Bioinformatics (Computational Biology)