Agreement Between an Isolated Rater and a Group of Raters

Author

  • A. Albert
Abstract

The agreement between two raters judging items on a categorical scale is traditionally measured by Cohen’s kappa coefficient. We introduce a new coefficient for quantifying the degree of agreement between an isolated rater and a group of raters on a nominal or ordinal scale. The coefficient, which is defined on a population-based model, requires a specific definition of the concept of perfect agreement but possesses the same properties as Cohen’s kappa coefficient. Further, it reduces to the classical kappa when there is only one rater in the group. Intraclass and weighted versions of the coefficient are also introduced. The new approach overcomes the problem of ...
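The classical Cohen's kappa that the abstract builds on can be computed directly from two raters' labels. The sketch below is illustrative only, assuming the standard definition κ = (p_o − p_e) / (1 − p_e); it does not reproduce the paper's new rater-versus-group coefficient, whose formula is not given in this excerpt.

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters judging the same items on a nominal scale.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement and
    p_e is the agreement expected by chance from the marginal distributions.
    """
    n = len(ratings_a)
    assert n == len(ratings_b) and n > 0
    # Observed proportion of items on which the two raters agree.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement: product of the raters' marginal category frequencies.
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Two raters assigning 10 items to the categories "yes"/"no" (made-up data):
a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no", "yes", "yes"]
b = ["yes", "no",  "no", "yes", "no", "yes", "yes", "no", "yes", "yes"]
print(round(cohens_kappa(a, b), 3))  # 0.583
```

Here p_o = 0.8 and p_e = 0.52, so κ = 0.28 / 0.48 ≈ 0.583: agreement well above chance but far from the perfect agreement (κ = 1) that the paper's coefficient must also redefine for the rater-versus-group setting.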


Related articles

Non-native English Speaking Teachers’ Pragmatic Criteria in the Holistic and Analytic Rating of the Agreement Speech Act Productions of Iranian EFL Learners

Pragmatic rating is considered one of the novel and crucial aspects of second language education, yet it has received little attention in the literature. To address this gap, the current study aimed to inspect the matches and mismatches, to explore rating variations, and to assess rater consistency between the holistic and analytic rating methods of the speech act of agreement in L2 by non-...


Functional Movement Screen in Elite Boy Basketball Players: A Reliability Study

Purpose: To investigate the reliability of the Functional Movement Screen (FMS) in basketball players. Few studies have compared the reliability of the FMS between raters of different experience levels in athletes. The purpose of this study was to compare FMS scoring between beginner and expert raters using video records. Methods: This is a cross-sectional study. The study subjects compris...


Comparison between inter-rater reliability and inter-rater agreement in performance assessment.

INTRODUCTION Over the years, performance assessment (PA) has been widely employed in medical education, the Objective Structured Clinical Examination (OSCE) being an excellent example. Typically, performance assessment involves multiple raters, and therefore consistency among the scores provided by the raters is a precondition for ensuring the accuracy of the assessment. Inter-rater agreement and i...


Raters’ Perception and Expertise in Evaluating Second Language Compositions

The consideration of rater training is very important in the construct validation of a writing test, because it is through training that raters are led to assess compositions on the basis of students’ writing ability rather than their own criteria (Charney, 1984). However, although training has been discussed in the literature on writing assessment, there is little research regarding raters’ pe...


A Study of Raters’ Behavior in Scoring L2 Speaking Performance: Using Rater Discussion as a Training Tool

The studies conducted so far on the effectiveness of resolution methods, including the discussion method, in resolving discrepancies in rating have yielded mixed results. What has gone unnoticed in the literature is the potential of discussion to be used as a training tool rather than a resolution method. The present study addresses this research gap by exploring data on the rating behavi...




Journal:

Volume   Issue

Pages  -

Publication date: 2007