Visual agreement analyses of traditional Chinese medicine: a multiple-dimensional scaling approach

Lun-Chien Lo, John Y Chiang, Tsung-Lin Cheng, Pei-Shuan Shieh

Abstract

Studying agreement in traditional Chinese medicine (TCM) with a powerful statistical tool is critical for providing objective evaluations. Several previous studies have been conducted on the consistency of TCM diagnoses, and the results indicate that agreement is low. Traditional agreement measures provide only a single value, which is not sufficient to judge whether the agreement among several raters is strong. In light of this observation, a novel visual agreement analysis for TCM via multiple dimensional scaling (MDS) is proposed in this study. A group of 11 experienced TCM practitioners from the Chinese Medicine Department at Changhua Christian Hospital (CCH) in Taiwan, with clinical experience ranging from 3 to 15 years (mean 5.5 years), were asked to diagnose a total of fifteen tongue images according to the Eight Principles derived from TCM theory. The results of the statistical analysis show that, if clusters are present among the raters in a latent manner, MDS can prove itself an effective distinguisher.
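The idea of revealing latent rater clusters with MDS can be illustrated with a minimal sketch. The data and the disagreement measure below are hypothetical, not taken from the paper: each rater labels the same set of cases, pairwise dissimilarity is the fraction of cases on which two raters disagree, and classical (Torgerson) MDS embeds the raters in two dimensions so that like-minded raters land near each other.

```python
import numpy as np

# Hypothetical categorical ratings: 4 raters x 6 cases (labels 0/1/2).
# Raters A and B agree with each other; C and D form a second camp.
ratings = np.array([
    [0, 1, 1, 2, 0, 1],   # rater A
    [0, 1, 1, 2, 0, 1],   # rater B
    [2, 0, 1, 0, 2, 2],   # rater C
    [2, 0, 2, 0, 2, 2],   # rater D
])

n = ratings.shape[0]
# Dissimilarity = fraction of cases on which a pair of raters disagrees.
D = np.array([[np.mean(ratings[i] != ratings[j]) for j in range(n)]
              for i in range(n)])

# Classical MDS: double-center the squared distance matrix, then take
# the top-2 eigenvectors scaled by the square roots of the eigenvalues.
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
eigvals, eigvecs = np.linalg.eigh(B)
order = np.argsort(eigvals)[::-1][:2]
coords = eigvecs[:, order] * np.sqrt(np.maximum(eigvals[order], 0.0))

# In the 2-D embedding, A sits closer to B than to C or D.
```

A nonmetric variant (as in Kruskal's formulation) optimizes a stress criterion instead of using the spectral solution, but the clustering picture it yields is analogous.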

Figures

Figure 1
MDS graphs for multiple attributes of Eight Principles for 11 TCM practitioners and 15 patients.


Source: PubMed
