Evidence of Validity for a Newly Developed Digital Cognitive Test Battery

Stefan Vermeent, Ron Dotsch, Ben Schmand, Laura Klaming, Justin B. Miller, Gijs van Elswijk

Abstract

Clinical practice still relies heavily on traditional paper-and-pencil testing to assess a patient's cognitive functions. Digital technology has the potential to be an efficient and powerful alternative, but the psychometric properties of many existing digital tests and test batteries have not been properly established. We validated a newly developed digital test battery consisting of digitized versions of conventional neuropsychological tests. Two confirmatory factor analysis models were specified: one based on traditional neuropsychological theory and expert consensus, and one based on the Cattell-Horn-Carroll (CHC) taxonomy. In both models, the outcome measures of the digital tests loaded on the cognitive domains in the same way as established in the neuropsychological literature. Interestingly, no clear distinction could be made between the CHC model and the traditional neuropsychological model in terms of model fit. Taken together, these findings provide preliminary evidence for the structural validity of the digital cognitive test battery.
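
The original article cites R together with the lavaan and nonnest2 packages, so the analysis plausibly followed the pattern sketched below: specify two CFA models over the same battery scores, fit both, inspect descriptive fit indices, and compare the non-nested models with a Vuong test. This is a minimal, hypothetical sketch, not the authors' actual specification; the factor structure, indicator names, and simulated data are illustrative assumptions.

```r
library(lavaan)    # CFA model specification and fitting
library(nonnest2)  # Vuong tests for non-nested models

# Simulated placeholder data; real input would be a data frame of
# per-participant scores from the digital battery. Indicator names are
# hypothetical, loosely based on the battery's tests (RAVLT, ROCFT,
# COWAT, CFT, O-/Star-Cancellation).
set.seed(1)
scores <- as.data.frame(matrix(rnorm(200 * 7), ncol = 7))
names(scores) <- c("ravlt_total", "ravlt_delayed", "rocft_recall",
                   "cowat_total", "cft_total", "oct_time", "sct_time")

# Model A: traditional neuropsychological consensus domains (illustrative)
model_consensus <- '
  memory    =~ ravlt_total + ravlt_delayed + rocft_recall
  language  =~ cowat_total + cft_total
  attention =~ oct_time + sct_time
'

# Model B: CHC-style broad abilities over the same indicators (illustrative)
model_chc <- '
  glr =~ ravlt_total + ravlt_delayed + rocft_recall + cowat_total + cft_total
  gs  =~ oct_time + sct_time
'

fit_consensus <- cfa(model_consensus, data = scores, std.lv = TRUE)
fit_chc       <- cfa(model_chc, data = scores, std.lv = TRUE)

# Descriptive goodness-of-fit indices for each model
fitMeasures(fit_consensus, c("cfi", "tli", "rmsea", "srmr"))
fitMeasures(fit_chc, c("cfi", "tli", "rmsea", "srmr"))

# Vuong test: can the two non-nested models be distinguished at all,
# and if so, does one fit significantly better?
vuongtest(fit_consensus, fit_chc)
```

An outcome like the one reported in the abstract would show acceptable fit indices for both models and a non-significant Vuong comparison, i.e., the two factor structures cannot be clearly distinguished on model fit.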

Keywords: Cattell-Horn-Carroll model; confirmatory factor analysis; digital cognitive test battery; digital testing; structural validity.

Copyright © 2020 Vermeent, Dotsch, Schmand, Klaming, Miller and van Elswijk.

Figures

FIGURE 1
Graphical overview of (A) the neuropsychological consensus model and (B) the CHC model. Single-headed arrows represent factor loadings, double-headed arrows represent covariances. Covariances printed with solid lines were pre-specified. Covariances printed with dashed lines were added after inspecting modification indices. COWAT, Controlled Oral Word Association Test; CFT, Category Fluency Test; OCT, O-Cancellation Test; SCT, Star-Cancellation Test; RAVLT, Rey Auditory Verbal Learning Test; ROCFT, Rey-Osterrieth Complex Figure Test.
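
The dashed-line covariances in the figure were added after inspecting modification indices. Continuing the hypothetical lavaan sketch above, candidate residual covariances could be surfaced roughly as follows:

```r
# Modification indices for the (hypothetical) consensus model, sorted
# so the largest suggested improvements come first.
mi_table <- modindices(fit_consensus, sort. = TRUE)

# Residual covariances use the "~~" operator; a large MI flags a fixed
# parameter whose freeing would most improve model fit.
head(subset(mi_table, op == "~~"))

# A suggested covariance would then be appended to the model string,
# e.g. '  ravlt_total ~~ ravlt_delayed', and the model refit.
```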


Source: PubMed
