kappam_gold {kappaGold}    R Documentation
Agreement of a group of nominal-scale raters with a gold standard
Description
First, Cohen's kappa is calculated between each rater and the gold standard,
which is taken from the 1st column. The average of these kappas is returned
as 'kappam_gold0'. The variant setting (robust=) is forwarded to Cohen's
kappa. A bias-corrected version 'kappam_gold' and a corresponding confidence
interval are also provided, via the jackknife method.
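The following is a minimal sketch of this procedure, not the package's
internal implementation; the helper cohen_kappa() is hypothetical and ties
or missing values are ignored for brevity:

# Cohen's kappa for two raters (hypothetical helper, for illustration only)
cohen_kappa <- function(x, y) {
  lev <- union(x, y)
  tab <- table(factor(x, levels = lev), factor(y, levels = lev)) / length(x)
  po  <- sum(diag(tab))                    # observed agreement
  pe  <- sum(rowSums(tab) * colSums(tab))  # chance agreement from margins
  (po - pe) / (1 - pe)
}

# average kappa of each rater against the gold standard (column 1)
avg_kappa_gold <- function(ratings) {
  gold <- ratings[, 1]
  mean(apply(ratings[, -1, drop = FALSE], 2, cohen_kappa, y = gold))
}

# jackknife over subjects yields a bias-corrected estimate
jackknife_kappa <- function(ratings) {
  n         <- nrow(ratings)
  theta_hat <- avg_kappa_gold(ratings)
  theta_i   <- vapply(seq_len(n),
                      function(i) avg_kappa_gold(ratings[-i, , drop = FALSE]),
                      numeric(1))
  n * theta_hat - (n - 1) * mean(theta_i)
}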
Usage
kappam_gold(ratings, robust = FALSE, ratingScale = NULL, conf.level = 0.95)
Arguments
ratings      matrix of ratings, subjects in rows and raters in columns
robust       flag. Use the robust Brennan-Prediger estimate for the random
             chance of agreement?
ratingScale  possible levels for the rating, or NULL (the default)
conf.level   confidence level for the confidence interval
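For orientation, the Brennan-Prediger variant replaces the margin-based
chance agreement with the uniform value 1/q for q rating categories; a
minimal sketch, not the package's exact implementation:

# Brennan-Prediger style kappa: chance agreement fixed at 1/q (sketch)
bp_kappa <- function(x, y, ratingScale = union(x, y)) {
  q  <- length(ratingScale)   # number of rating categories
  po <- mean(x == y)          # observed agreement
  pe <- 1 / q                 # uniform chance agreement
  (po - pe) / (1 - pe)
}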
Value
list of agreement measures: the raw and bias-corrected kappa, together with
a confidence interval. The entry raters gives the number of tested raters,
not counting the reference rater.
Examples
# matrix with subjects in rows and raters in columns.
# 1st column is taken as gold standard
m <- matrix(c("O", "G", "O",
              "G", "G", "R",
              "R", "R", "R",
              "G", "G", "O"), ncol = 3, byrow = TRUE)
kappam_gold(m)
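# The robust variant and an explicit rating scale can be requested in the
# same call; the returned list exposes the entry raters described in the
# Value section (other entry names are not shown here):
res <- kappam_gold(m, robust = TRUE, ratingScale = c("O", "G", "R"))
res$raters  # 2 tested raters; the gold-standard column is not counted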
[Package kappaGold version 0.3.2 Index]