Observer agreement paradoxes in 2x2 tables: comparison of agreement measures

Viswanathan Shankar, Shrikant I Bangdiwala

Abstract

Background: Various measures of observer agreement have been proposed for 2 x 2 tables. We examine the behavior of these alternative measures in such tables.

Methods: The alternative measures of observer agreement were calculated, and the corresponding agreement charts constructed, under various scenarios of marginal distributions (symmetrical or not, balanced or not) and of degree of diagonal agreement, and their behaviors were compared. In particular, two paradoxes previously identified for kappa were examined: (1) low kappa values despite high observed agreement under highly but symmetrically imbalanced marginals, and (2) higher kappa values under asymmetrically imbalanced marginal distributions.
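
For a 2 x 2 table with cell counts n_ij, row totals n_i., column totals n_.j, and grand total N, the measures with closed forms can be summarized as follows (our notation, not the paper's; delta and Aickin's alpha require iterative estimation and are omitted):

    P_o = \frac{n_{11}+n_{22}}{N}, \qquad
    \kappa = \frac{P_o - P_e}{1 - P_e}, \quad P_e = \frac{n_{1\cdot}\,n_{\cdot1} + n_{2\cdot}\,n_{\cdot2}}{N^2},

    B = \frac{n_{11}^2 + n_{22}^2}{n_{1\cdot}\,n_{\cdot1} + n_{2\cdot}\,n_{\cdot2}}, \qquad
    \mathrm{AC}_1 = \frac{P_o - 2\pi(1-\pi)}{1 - 2\pi(1-\pi)}, \quad \pi = \frac{n_{1\cdot}+n_{\cdot1}}{2N}.

Here B is Bangdiwala's statistic, the ratio of the shaded to the maximal rectangle areas in the agreement chart, which reduces to the expression above for 2 x 2 tables.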

Results: Kappa and alpha behave similarly and are affected by the marginal distributions more than the B-statistic, AC1-index, and delta measures are. Delta and kappa yield similar values when the marginal totals are asymmetrically imbalanced, or symmetrical but not excessively imbalanced. The AC1-index and B-statistic yield closer results when the marginal distributions are symmetrically imbalanced and the observed agreement exceeds 50%. The B-statistic and AC1-index also come closer to the observed agreement when subjects are classified mostly in one of the diagonal cells. Finally, the B-statistic is consistent and more stable than kappa under both paradoxes studied.
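
A minimal Python sketch of the closed-form measures above (the function name and layout are ours; delta and alpha, which need iterative fitting, are left out):

    def agreement_measures(table):
        """Observed agreement, Cohen's kappa, Bangdiwala's B, and Gwet's AC1
        for a 2x2 table given as [[n11, n12], [n21, n22]] (rows = rater 1)."""
        (n11, n12), (n21, n22) = table
        n = n11 + n12 + n21 + n22
        r1, r2 = n11 + n12, n21 + n22      # rater 1 marginal totals
        c1, c2 = n11 + n21, n12 + n22      # rater 2 marginal totals
        po = (n11 + n22) / n               # observed agreement
        pe = (r1 * c1 + r2 * c2) / n**2    # kappa's chance-agreement term
        kappa = (po - pe) / (1 - pe)
        # Bangdiwala's B: squared diagonal cells over products of paired
        # marginals (the area ratio from the agreement chart).
        b = (n11**2 + n22**2) / (r1 * c1 + r2 * c2)
        # Gwet's AC1: chance agreement from the average category-1 prevalence.
        pi1 = (r1 + c1) / (2 * n)
        pe_g = 2 * pi1 * (1 - pi1)
        ac1 = (po - pe_g) / (1 - pe_g)
        return {"Po": po, "kappa": kappa, "B": b, "AC1": ac1}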

Conclusions: The B-statistic behaved better than the other measures under all scenarios studied, including varying prevalences, sensitivities, and specificities. We therefore recommend the B-statistic, together with its corresponding agreement chart, as an alternative to kappa when assessing agreement in 2 x 2 tables.
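
As a hypothetical illustration of Paradox 1 (this table is for illustration only, not taken from the paper's Table 2): with symmetrically imbalanced marginals where both raters place 90% of subjects in category 1, kappa falls well below the observed agreement while B and AC1 stay close to it:

    # Both marginals are 90/10: high observed agreement, symmetric imbalance.
    print(agreement_measures([[85, 5], [5, 5]]))
    # Po = 0.90, kappa ~ 0.44, B ~ 0.88, AC1 ~ 0.88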

Figures

Figure 1. Agreement chart for hypothetical data from Table 1, assessing agreement between two raters classifying N units into the same two categories.
Figure 2. Agreement charts (a-c) for Scenarios 1-3 of Table 2, addressing Paradox 1.
Figure 3. Measures of agreement as a function of prevalence, for (a) both sensitivity and specificity set at 95%, (b) sensitivity of 70% and specificity of 95%, (c) sensitivity of 95% and specificity of 70%, and (d) both sensitivity and specificity set at 60%.
Figure 4. Agreement charts (a-e) for Scenarios 4-8 of Table 2, addressing Paradox 2.
Figure 5. Agreement charts (a-b) for Scenarios 9-10 of Table 2, addressing Paradox 2 with high PO.
Figure 6. Agreement charts (a-d) for Scenarios 11-14 of Table 2, addressing PO ≤ 0.5.

Source: PubMed
