The measurement of interobserver agreement based on categorical scales
Scribed by M.A.A. Moussa
- Book ID: 103049083
- Publisher: Elsevier Science
- Year: 1985
- Weight: 340 KB
- Volume: 19
- Category: Article
- ISSN: 0010-468X
Synopsis
The Kappa statistic is used to measure interobserver agreement based on categorical scales. The cases of two or more observers and two or more rating categories are considered. Allowance is made for attaching disagreement weights, chosen on rational or clinical grounds, to the different rating categories. Tests of the hypotheses Kappa = 0 and Kappa > 0 are provided.
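The article describes a computer program for these computations; the sketch below is not that program, only a minimal illustration of the two-observer case it covers: Cohen's kappa, a weighted kappa built from user-supplied disagreement weights, and a large-sample z-test of Kappa = 0 using the Fleiss-Cohen-Everitt null variance. The function name, example table, and quadratic weights are hypothetical choices, not taken from the article.

```python
# Sketch (assumed, not the article's program): kappa for two observers,
# an optional disagreement-weighted kappa, and a z-test of H0: kappa = 0.
import numpy as np

def kappa_two_observers(counts, disagreement_weights=None):
    """counts[i, j] = number of subjects placed in category i by observer 1
    and category j by observer 2. Returns (kappa, z) unweighted, or the
    weighted kappa alone when disagreement weights are supplied."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    p = counts / n                                # joint proportions
    row, col = p.sum(axis=1), p.sum(axis=0)       # marginal proportions

    if disagreement_weights is None:
        po = np.trace(p)                          # observed agreement
        pe = row @ col                            # chance-expected agreement
        kappa = (po - pe) / (1 - pe)
        # Null (kappa = 0) variance, Fleiss-Cohen-Everitt large-sample form
        var0 = (pe + pe**2 - np.sum(row * col * (row + col))) / (n * (1 - pe)**2)
        z = kappa / np.sqrt(var0)                 # refer to N(0,1); one-sided for kappa > 0
        return kappa, z

    v = np.asarray(disagreement_weights, dtype=float)   # zeros on the diagonal
    qo = np.sum(v * p)                            # observed weighted disagreement
    qe = np.sum(v * np.outer(row, col))           # chance-expected weighted disagreement
    return 1 - qo / qe

# Hypothetical 3-category cross-classification of 200 subjects
table = [[60, 10, 5],
         [12, 50, 8],
         [4,   9, 42]]
kappa, z = kappa_two_observers(table)
print(f"kappa = {kappa:.3f}, z = {z:.2f}")

# Quadratic disagreement weights grow with the distance between categories
cats = np.arange(3)
v = (cats[:, None] - cats[None, :]) ** 2
print(f"weighted kappa = {kappa_two_observers(table, v):.3f}")
```

With disagreement weights set to 0 on the diagonal and 1 everywhere else, the weighted form reduces to the unweighted kappa, which is a quick sanity check on any weighting scheme.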
SIMILAR VOLUMES
The desire to determine the extent to which inter-rater measurements obtained in a clinical setting are free from measurement error and reflect true scores has spurred a renewed interest in the assessment of reliability. The kappa coefficient is considered the statistic of choice to analyze the reliability of nominal…
The authors describe a model-based kappa statistic for binary classifications which is interpretable in the same manner as Scott's pi and Cohen's kappa, yet does not suffer from the same flaws. They compare this statistic with the data-driven and population-based forms of Scott's pi in…
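The pi-versus-kappa contrast this snippet alludes to comes down to how chance agreement is estimated: Scott's pi pools the two observers' marginals before squaring, while Cohen's kappa multiplies each observer's own marginals. A small assumed example (hypothetical counts, not from the cited work) makes the difference concrete:

```python
# Assumed illustration of Scott's pi vs Cohen's kappa on one binary table.
import numpy as np

table = np.array([[40., 9.],
                  [6., 45.]])          # hypothetical binary ratings, n = 100
p = table / table.sum()
row, col = p.sum(axis=1), p.sum(axis=0)
po = np.trace(p)                        # observed agreement

pe_kappa = row @ col                    # Cohen: product of per-observer marginals
pooled = (row + col) / 2                # Scott: average the marginals first
pe_pi = pooled @ pooled

print(f"kappa = {(po - pe_kappa) / (1 - pe_kappa):.3f}")
print(f"pi    = {(po - pe_pi) / (1 - pe_pi):.3f}")
```

The two statistics coincide when both observers have identical marginal distributions and drift apart as the marginals diverge, which is the flaw the quoted abstract is addressing.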