Advancing implementation science through measure development and evaluation: a study protocol

Cara C Lewis, Bryan J Weiner, Cameo Stanick, Sarah M Fischer

Abstract

Background: Significant gaps related to measurement issues are among the most critical barriers to advancing implementation science. Three issues motivated the study aims: (a) the lack of stakeholder involvement in defining pragmatic measure qualities; (b) the dearth of measures, particularly for implementation outcomes; and (c) the unknown psychometric and pragmatic strength of existing measures. Aim 1: Establish a stakeholder-driven operationalization of pragmatic measures and develop reliable, valid rating criteria for assessing the construct. Aim 2: Develop reliable, valid, and pragmatic measures of three critical implementation outcomes: acceptability, appropriateness, and feasibility. Aim 3: Identify measures linked to the Consolidated Framework for Implementation Research and the Implementation Outcome Framework that demonstrate both psychometric and pragmatic strength.

Methods/design: For Aim 1, we will conduct (a) interviews with stakeholder panelists (N = 7) and a literature review to populate the pragmatic measure construct criteria, (b) Q-sort activities (N = 20) to clarify the internal structure of the definition, (c) Delphi activities (N = 20) to achieve consensus on the dimension priorities, (d) test-retest and inter-rater reliability assessments of the emergent rating system, and (e) known-groups validity testing of the top three prioritized pragmatic criteria. For Aim 2, our systematic development process involves domain delineation, item generation, substantive validity assessment, structural validity assessment, reliability assessment, and predictive validity assessment. We will also assess discriminant validity, known-groups validity, structural invariance, sensitivity to change, and other pragmatic features. For Aim 3, we will refine our established evidence-based assessment (EBA) criteria, extract the relevant data from the literature, rate each measure using the EBA criteria, and summarize the data.
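To make two of the planned Aim 1 analyses concrete, the minimal Python sketch below shows how inter-rater agreement on a rating system can be quantified with Cohen's kappa and how known-groups validity can be tested with an independent-samples t-test. All data, variable names, and group labels are hypothetical; this is a sketch of the standard techniques named in the protocol, not the authors' analysis code.

```python
# Minimal sketch of two Aim 1 analyses: inter-rater agreement on the
# emergent pragmatic rating system (Cohen's kappa) and known-groups
# validity testing. All ratings below are invented for illustration.
import numpy as np
from scipy import stats

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters on categorical ratings."""
    rater_a, rater_b = np.asarray(rater_a), np.asarray(rater_b)
    categories = np.union1d(rater_a, rater_b)
    # Observed proportion of exact agreement
    p_o = np.mean(rater_a == rater_b)
    # Agreement expected by chance, from each rater's marginal proportions
    p_e = sum(np.mean(rater_a == c) * np.mean(rater_b == c) for c in categories)
    return (p_o - p_e) / (1.0 - p_e)

# Two hypothetical raters scoring ten measures on a 0-3 pragmatic criterion
rater_a = [3, 2, 2, 1, 0, 3, 2, 1, 1, 0]
rater_b = [3, 2, 1, 1, 0, 3, 2, 2, 1, 0]
print(f"Cohen's kappa: {cohens_kappa(rater_a, rater_b):.2f}")

# Known-groups validity: criterion scores should differ between measures
# independently judged pragmatic vs. non-pragmatic (hypothetical groups).
pragmatic = [3, 3, 2, 3, 2]
non_pragmatic = [1, 0, 1, 2, 0]
t, p = stats.ttest_ind(pragmatic, non_pragmatic)
print(f"Known-groups t = {t:.2f}, p = {p:.3f}")
```

Under conventional benchmarks (e.g., Landis and Koch's interpretation, where kappa of 0.61-0.80 indicates substantial agreement), a sufficiently high kappa would support inter-rater reliability, and a significant group difference in the expected direction would support known-groups validity.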

Discussion: The outputs of each aim are expected to advance the field by establishing and guiding a comprehensive, measurement-focused research agenda for implementation science and by providing empirically supported measures, tools, and methods for accomplishing this work.

Figures

Fig. 1: Psychometric-pragmatic grid. Note. Predictions of psychometric and pragmatic strength for measures across domains.
Fig. 2: Nomological network. Note. Included in our nomological network are antecedent constructs that have high relevance for only one focal construct.

