How to Assess Rater Rankings? A Theoretical and a Simulation Approach Using the Sum of the Pairwise Absolute Row Differences (PARDs)

Larissa Bartok, Matthias Alexander Burzler

Publications: Contribution to journal › Article › Peer reviewed

Abstract

Although the evaluation of inter-rater agreement is often necessary in psychometric procedures (e.g., standard settings or assessment centers), the commonly used measures are not without problems. Existing measures are known to penalize raters in specific settings, and some depend heavily on the marginals and should therefore not be used in ranking settings. This article introduces a new approach based on the probability of consistencies in a setting where n independent raters rank k items. The discrete theoretical probability distribution of the sum of the pairwise absolute row differences (PARDs) is used to evaluate the inter-rater agreement of empirically obtained rating results. This is done by comparing the sum of PARDs computed from an empirically obtained n × k matrix with the theoretically expected distribution of the PARDs sum under the assumption that raters rank the items at random. In this article, the theoretical considerations behind the PARDs approach are presented, and two initial simulation studies are used to investigate the performance of the approach.
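The abstract describes a two-step computation: obtain the PARDs sum from an observed n × k ranking matrix, then locate it within the distribution expected when raters rank at random. The sketch below is a minimal illustration under stated assumptions: the PARDs sum is taken to be the sum, over all rater pairs, of the elementwise absolute differences of their rank rows, and the null distribution is approximated by Monte Carlo simulation rather than by the exact discrete distribution derived in the article. The function names and the example matrix are illustrative, not taken from the paper.

```python
import numpy as np
from itertools import combinations

def pards_sum(R):
    """Sum of pairwise absolute row differences (PARDs) for an n x k
    matrix R whose rows are the raters' rankings of k items.

    Assumed interpretation: sum over all rater pairs (i, j), i < j,
    of sum over items of |R[i, m] - R[j, m]|.
    """
    R = np.asarray(R)
    return sum(np.abs(R[i] - R[j]).sum()
               for i, j in combinations(range(len(R)), 2))

def null_distribution(n, k, draws=10_000, rng=None):
    """Monte Carlo approximation (not the paper's exact derivation) of
    the PARDs-sum distribution under the null model of n raters each
    ranking k items uniformly at random."""
    rng = np.random.default_rng(rng)
    base = np.arange(1, k + 1)
    stats = np.empty(draws)
    for d in range(draws):
        # Each row is an independent random permutation of ranks 1..k.
        R = np.stack([rng.permutation(base) for _ in range(n)])
        stats[d] = pards_sum(R)
    return stats

if __name__ == "__main__":
    # Hypothetical example: 5 raters ranking 4 items with high agreement.
    observed = np.array([
        [1, 2, 3, 4],
        [1, 2, 4, 3],
        [2, 1, 3, 4],
        [1, 3, 2, 4],
        [1, 2, 3, 4],
    ])
    obs = pards_sum(observed)
    null = null_distribution(n=5, k=4, draws=20_000, rng=42)
    # Low PARDs sums signal agreement (identical rankings give 0), so a
    # left-tail comparison asks how often random ranking agrees at least
    # as strongly as the observed matrix.
    p = (null <= obs).mean()
    print(f"observed PARDs sum = {obs}, left-tail p ≈ {p:.4f}")
```

Identical rankings across all raters yield a PARDs sum of zero, so small observed values relative to the simulated null distribution indicate more agreement than random ranking would produce.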

Original language: English
Article number: 37
Number of pages: 16
Journal: Journal of Statistical Theory and Practice
Volume: 14
DOIs
Publication status: Published - 8 June 2020

ÖFOS 2012

  • 101018 Statistics
