Multi-rater delta: extending the delta nominal measure of agreement between two raters to many raters

Authors

Abstract

The need to measure the degree of agreement among R raters who independently classify n subjects within K nominal categories arises frequently in many scientific areas. The most popular measures are Cohen's kappa (R = 2) and Fleiss', Conger's and Hubert's kappa coefficients (R ≥ 2), which have several defects. In 2004 the delta coefficient was defined for the case R = 2; it does not share the defects of the kappa coefficient. This article extends delta from R = 2 to R ≥ 2 raters. The multi-rater delta coefficient has the same advantages as delta with regard to the kappa-type coefficients: i) it is intuitive and easy to interpret, because it refers to the proportion of replies that are concordant and not random; ii) the summands that make up its value allow the agreement in each category to be measured accurately, with no need to collapse categories; iii) it is not affected by marginal imbalance.
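The abstract does not reproduce the multi-rater delta estimator itself, which requires its own fitting procedure. As a point of reference for the kappa-type coefficients it is compared against, here is a minimal sketch of Fleiss' kappa for R raters and K nominal categories; the counts matrix and the function name `fleiss_kappa` are illustrative, not taken from the article.

```python
import numpy as np

def fleiss_kappa(counts):
    """counts[i, j] = number of the R raters who put subject i in category j;
    every row must sum to the same number of raters R."""
    counts = np.asarray(counts, dtype=float)
    n, _ = counts.shape
    R = counts[0].sum()
    # Observed agreement for subject i over the R*(R-1) ordered rater pairs.
    P_i = (np.sum(counts ** 2, axis=1) - R) / (R * (R - 1))
    P_bar = P_i.mean()
    # Chance agreement from the pooled category proportions.
    p_j = counts.sum(axis=0) / (n * R)
    P_e = np.sum(p_j ** 2)
    return (P_bar - P_e) / (1 - P_e)

# Illustrative data: 4 raters classify 4 subjects into 3 categories.
counts = np.array([
    [4, 0, 0],   # unanimous
    [0, 4, 0],   # unanimous
    [2, 2, 0],   # split 2-2
    [1, 1, 2],   # split 1-1-2
])
print(fleiss_kappa(counts))   # about 0.38
```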

Similar articles

Delta: a new measure of agreement between two raters.

The most common measure of agreement for categorical data is the kappa coefficient. However, kappa performs poorly when the marginal distributions are very asymmetric, it is not easy to interpret, and its definition rests on the hypothesis of independence of the responses (which is more restrictive than the hypothesis that kappa has a value of zero). This paper defines a new measure of agreement...
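The poor behaviour under asymmetric marginals can be shown in a few lines. In the illustrative 2x2 table below (counts chosen here, not taken from the paper), the two raters agree on 90% of subjects, yet kappa is slightly negative because the chance-expected agreement computed from the skewed marginals is even higher.

```python
import numpy as np

# Cross-classification of 100 subjects by two raters (illustrative counts).
table = np.array([[90, 5],
                  [5,  0]], dtype=float)
n = table.sum()

p_o = np.trace(table) / n                                   # observed agreement = 0.90
p_e = np.sum(table.sum(axis=1) * table.sum(axis=0)) / n**2  # chance agreement = 0.905
kappa = (p_o - p_e) / (1 - p_e)
print(kappa)   # about -0.05 despite 90% raw agreement
```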


Kappa Test for Agreement Between Two Raters

This module computes power and sample size for the test of agreement between two raters using the kappa statistic. The power calculations are based on the results in Flack, Afifi, Lachenbruch, and Schouten (1988). Calculations are based on ratings for k categories from two raters or judges. You are able to vary category frequencies on a single run of the procedure to analyze a wide...
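The module described above uses the closed-form results of Flack, Afifi, Lachenbruch, and Schouten (1988), which are not reproduced here. A rough, simulation-based alternative for the same kind of question (power to detect agreement beyond chance for a given sample size, category frequencies, and hypothesised level of agreement) could look like the sketch below; the agreement model and all parameter values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def cohen_kappa(r1, r2, k):
    """Cohen's kappa for two integer-coded rating vectors with k categories."""
    n = len(r1)
    table = np.zeros((k, k))
    for a, b in zip(r1, r2):
        table[a, b] += 1
    p_o = np.trace(table) / n
    p_e = np.sum(table.sum(axis=1) * table.sum(axis=0)) / n**2
    return (p_o - p_e) / (1 - p_e)

def simulate_pair(n, probs, agree):
    """Rater 1 draws from `probs`; rater 2 copies rater 1 with probability
    `agree`, otherwise answers independently from the same marginal."""
    r1 = rng.choice(len(probs), size=n, p=probs)
    indep = rng.choice(len(probs), size=n, p=probs)
    return r1, np.where(rng.random(n) < agree, r1, indep)

def power_kappa(n, probs, agree, alpha=0.05, sims=2000):
    k = len(probs)
    # Null: independent raters, so kappa is zero in expectation.
    null = [cohen_kappa(*simulate_pair(n, probs, 0.0), k) for _ in range(sims)]
    crit = np.quantile(null, 1 - alpha)
    # Alternative: the hypothesised extra agreement.
    alt = [cohen_kappa(*simulate_pair(n, probs, agree), k) for _ in range(sims)]
    return np.mean(np.array(alt) > crit)

print(power_kappa(n=50, probs=[0.7, 0.2, 0.1], agree=0.4))
```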


A-Kappa: A measure of Agreement among Multiple Raters

Abstract: Medical data and biomedical studies are often imbalanced with a majority of observations coming from healthy or normal subjects. In the presence of such imbalances, agreement among multiple raters based on Fleiss’ Kappa (FK) produces counterintuitive results. Simulations suggest that the degree of FK’s misrepresentation of the observed agreement may be directly related to the degree o...
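A small worked example (counts chosen here for illustration) makes the point: with 100 subjects rated by 3 raters into two categories, suppose 95 subjects are unanimously rated "normal" and 5 are split 2-1. The mean observed pairwise agreement is (95 × 1 + 5 × 1/3)/100 ≈ 0.967, but the pooled category proportions are about 0.983 and 0.017, so the chance-expected agreement 0.983^2 + 0.017^2 ≈ 0.967 is marginally larger than the observed agreement and FK comes out slightly negative (about -0.02) despite near-perfect raw agreement.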


Agreement Between an Isolated Rater and a Group of Raters

The agreement between two raters judging items on a categorical scale is traditionally measured by Cohen's kappa coefficient. We introduce a new coefficient for quantifying the degree of agreement between an isolated rater and a group of raters on a nominal or ordinal scale. The coefficient, which is defined on a population-based model, requires a specific definition of the co...


A review of agreement measure as a subset of association measure between raters

Agreement can be regarded as a special case of association, and not the other way round. In virtually all life and social science research, subjects are classified into categories by raters, interviewers or observers, and both association and agreement measures can be obtained from the results of such research. The distinction between association and agreement for a given data set is that, ...
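The distinction can be made concrete with a small sketch (the data-generating scheme is invented for illustration): if rater 2 always answers exactly one category "ahead" of rater 1, the two sets of ratings are perfectly associated yet the raters never agree, so an association measure such as Cramér's V equals 1 while Cohen's kappa is negative.

```python
import numpy as np

rng = np.random.default_rng(1)
K, n = 3, 300

# Rater 1 classifies at random; rater 2 is always one category "ahead",
# so the ratings are perfectly associated but never identical.
r1 = rng.integers(0, K, size=n)
r2 = (r1 + 1) % K

table = np.zeros((K, K))
for a, b in zip(r1, r2):
    table[a, b] += 1

# Association: Cramer's V from the chi-square statistic.
expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / n
chi2 = np.sum((table - expected) ** 2 / expected)
cramers_v = np.sqrt(chi2 / (n * (K - 1)))

# Agreement: Cohen's kappa.
p_o = np.trace(table) / n
p_e = np.sum(table.sum(axis=1) * table.sum(axis=0)) / n**2
kappa = (p_o - p_e) / (1 - p_e)

print(cramers_v, kappa)   # V = 1.0, kappa about -0.5
```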

Journal

Journal title: Journal of Statistical Computation and Simulation

Year: 2021

ISSN: 1026-7778, 1563-5163, 0094-9655

DOI: https://doi.org/10.1080/00949655.2021.2013485