𝔖 Bobbio Scriptorium
✦   LIBER   ✦

The measurement of interobserver agreement based on categorical scales

✍ Scribed by M.A.A. Moussa


Book ID: 103049083
Publisher: Elsevier Science
Year: 1985
Weight: 340 KB
Volume: 19
Category: Article
ISSN: 0010-468X


✦ Synopsis


The Kappa statistic is used to measure interobserver agreement based on categorical scales. The cases of two or more observers with two or more rating categories are considered. Allowance is made for attaching disagreement weights, chosen on rational or clinical grounds, to the different rating categories. Tests of the hypotheses Kappa = 0 and Kappa > 0 are presented.
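The weighted-kappa idea in the synopsis can be sketched in Python. Everything below is illustrative rather than taken from the article: the severity categories, the ratings, and the linear weight function are invented, and agreement weights (1 = full agreement) are used as the complement of the disagreement weights the synopsis mentions.

```python
# Illustrative sketch of unweighted and weighted Cohen's kappa for two
# observers rating the same subjects on a categorical scale.

def cohens_kappa(ratings_a, ratings_b, categories, weight=None):
    """Weighted Cohen's kappa; weight(i, j) is the agreement weight for
    category pair (i, j), defaulting to identity (unweighted kappa)."""
    n = len(ratings_a)
    assert n == len(ratings_b) and n > 0
    k = len(categories)
    index = {c: i for i, c in enumerate(categories)}
    if weight is None:
        weight = lambda i, j: 1.0 if i == j else 0.0  # unweighted

    # Joint distribution p[i][j] and each observer's marginals.
    p = [[0.0] * k for _ in range(k)]
    for a, b in zip(ratings_a, ratings_b):
        p[index[a]][index[b]] += 1.0 / n
    row = [sum(p[i]) for i in range(k)]                        # observer A
    col = [sum(p[i][j] for i in range(k)) for j in range(k)]   # observer B

    # Observed vs. chance-expected weighted agreement.
    p_o = sum(weight(i, j) * p[i][j] for i in range(k) for j in range(k))
    p_e = sum(weight(i, j) * row[i] * col[j] for i in range(k) for j in range(k))
    return (p_o - p_e) / (1.0 - p_e)

# Invented ratings of six subjects by two observers.
a = ["mild", "mild", "moderate", "severe", "mild", "moderate"]
b = ["mild", "moderate", "moderate", "severe", "mild", "mild"]
cats = ["mild", "moderate", "severe"]

kappa = cohens_kappa(a, b, cats)                       # unweighted
w = lambda i, j: 1.0 - abs(i - j) / (len(cats) - 1)    # linear weights
kappa_w = cohens_kappa(a, b, cats, weight=w)
```

With linear weights, disagreements between adjacent categories ("mild" vs. "moderate") are penalized less than extreme ones, so kappa_w exceeds the unweighted kappa on these data.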


📜 SIMILAR VOLUMES


Measures of clinical agreement for nominal …
✍ Louis Cyr; Kennon Francis 📂 Article 📅 1992 🏛 Elsevier Science 🌐 English ⚖ 556 KB

The desire to determine the extent to which inter-rater measurements obtained in a clinical setting are free from measurement error and reflect true scores has spurred a renewed interest in the assessment of reliability. The kappa coefficient is considered the statistic of choice for analyzing the reliability of nominal …
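The significance tests for kappa that the main synopsis mentions can be sketched with the usual large-sample z-test of H0: Kappa = 0. This is a hedged illustration, not either paper's method: the 2x2 table of counts is invented, and the null variance is the standard Fleiss-type formula.

```python
# Sketch of a large-sample test of H0: kappa = 0 for two observers,
# using the standard null variance of the sample kappa.
import math

def kappa_z_test(table):
    """table[i][j] = number of subjects rated category i by observer A
    and category j by observer B. Returns (kappa, z)."""
    n = sum(sum(r) for r in table)
    k = len(table)
    p = [[table[i][j] / n for j in range(k)] for i in range(k)]
    row = [sum(p[i]) for i in range(k)]
    col = [sum(p[i][j] for i in range(k)) for j in range(k)]
    p_o = sum(p[i][i] for i in range(k))           # observed agreement
    p_e = sum(row[i] * col[i] for i in range(k))   # chance agreement
    kappa = (p_o - p_e) / (1 - p_e)
    # Large-sample variance of kappa under H0: kappa = 0.
    var0 = (p_e + p_e**2
            - sum(row[i] * col[i] * (row[i] + col[i]) for i in range(k)))
    var0 /= n * (1 - p_e) ** 2
    return kappa, kappa / math.sqrt(var0)

# Invented 2x2 table: 100 subjects classified by two observers.
table = [[40, 10],
         [5, 45]]
kappa, z = kappa_z_test(table)
```

A one-sided z above 1.645 (or 1.96 two-sided) would lead to rejecting Kappa = 0 in favor of Kappa > 0 at the usual 5% level.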

On population-based measures of agreemen
✍ Kerrie P. Nelson; Don Edwards 📂 Article 📅 2008 🏛 John Wiley and Sons 🌐 French ⚖ 183 KB

The authors describe a model-based kappa statistic for binary classifications which is interpretable in the same manner as Scott's pi and Cohen's kappa, yet does not suffer from the same flaws. They compare this statistic with the data-driven and population-based forms of Scott's pi in …
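The contrast this abstract draws between Scott's pi and Cohen's kappa comes down to how chance agreement is estimated: pi pools the two raters' marginals before squaring, while kappa multiplies each rater's own marginals. A small Python sketch, with binary ratings that are made up rather than taken from the paper:

```python
# Scott's pi vs. Cohen's kappa on the same pair of binary ratings.

def pi_and_kappa(ratings_a, ratings_b):
    n = len(ratings_a)
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    cats = sorted(set(ratings_a) | set(ratings_b))
    pa = {c: ratings_a.count(c) / n for c in cats}   # rater A marginals
    pb = {c: ratings_b.count(c) / n for c in cats}   # rater B marginals
    pe_pi = sum(((pa[c] + pb[c]) / 2) ** 2 for c in cats)  # pooled (Scott)
    pe_k = sum(pa[c] * pb[c] for c in cats)                # product (Cohen)
    return (p_o - pe_pi) / (1 - pe_pi), (p_o - pe_k) / (1 - pe_k)

# Invented binary classifications of eight subjects by two raters.
a = [1, 1, 1, 1, 0, 0, 1, 0]
b = [1, 1, 1, 0, 0, 0, 0, 0]
pi, kappa = pi_and_kappa(a, b)
```

When the raters' marginal rates differ, as here, the two statistics disagree: pi's pooled chance correction is larger, so pi comes out below kappa.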