Validation of a detailed scoring checklist for use during advanced cardiac life support certification

Matthew D McEvoy, Jeremy C Smalley, Paul J Nietert, Larry C Field, Cory M Furse, John W Blenko, Benjamin G Cobb, Jenna L Walters, Allen Pendarvis, Nishita S Dalal, John J Schaefer 3rd

Abstract

Introduction: Defining valid, reliable, defensible, and generalizable standards for evaluating learner performance is a key issue in assessing both baseline competence and mastery in medical education. However, before such performance standards can be set, the reliability of the scores yielded by a grading tool must be assessed. Accordingly, the purpose of this study was to assess the reliability of scores generated from a set of grading checklists used by nonexpert raters during simulations of American Heart Association (AHA) Megacodes.

Methods: The reliability of scores generated from a detailed set of checklists, when used by 4 nonexpert raters, was tested by grading team leader performance in 8 Megacode scenarios. Videos of the scenarios were reviewed and rated by trained faculty facilitators and a group of nonexpert raters, both "continuously" and "with pauses." Grades assigned by 2 content experts served as the reference standard, and 4 nonexpert raters were used to test the reliability of the checklists.

Results: Our results demonstrate that nonexpert raters can produce reliable grades when using the checklists under consideration, with excellent intrarater reliability and agreement with the reference standard. The results also demonstrate that nonexpert raters can be trained in the proper use of the checklist in a short amount of time, with no discernible learning curve thereafter. Finally, our results show that a single trained rater can achieve reliable scores of team leader performance during AHA Megacodes when using our checklist in a continuous mode, because measures of agreement in total scoring were very strong [Lin's (Biometrics 1989;45:255-268) concordance correlation coefficient, 0.96; intraclass correlation coefficient, 0.97].
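For readers unfamiliar with these agreement statistics, the following minimal Python sketch illustrates how Lin's concordance correlation coefficient can be computed between one rater's checklist totals and a reference standard. The scores shown are hypothetical and are not data from this study; the authors' actual analysis is not reproduced here.

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient (Biometrics 1989;45:255-268).

    Measures agreement between two sets of scores on the same items,
    penalizing both poor correlation and systematic bias between raters.
    """
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    sx2, sy2 = x.var(), y.var()          # population variances (ddof=0), per Lin
    sxy = ((x - mx) * (y - my)).mean()   # population covariance
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)

# Hypothetical checklist totals for 8 Megacode videos (not study data):
reference = [38, 42, 35, 40, 37, 44, 39, 41]  # content-expert reference standard
rater     = [37, 42, 36, 40, 38, 43, 39, 40]  # one nonexpert rater

print(f"CCC = {lins_ccc(reference, rater):.3f}")
```

A CCC near 1.0, as reported in the study (0.96), indicates that the rater's totals track the reference standard closely in both correlation and absolute level.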

Conclusions: We have shown that our checklists yield reliable scores, are appropriate for use by nonexpert raters, and can be used for continuous assessment of team leader performance during review of a simulated Megacode. This checklist may be more appropriate for use by advanced cardiac life support instructors during Megacode assessments than the current tools provided by the AHA.

Conflict of interest statement

Dr. Schaefer contributed mentorship, review of videos, guidance concerning checklist design, and manuscript review for this study, but pursuant to the Medical University of South Carolina Conflict of Interest (COI) policy, he did not participate in data collection, reduction, or analysis for this study because of his potential COI, which includes simulator patent and copyright royalties. Dr. Schaefer receives patent royalties from Laerdal Medical Corporation (SimMan/Baby/3G), and he is a non-majority owner of SimTunes, a commercial outlet for Medical University of South Carolina licensed, copyrightable simulation training products. These amount to <0.5% of Dr. Schaefer's annual income.

The full contents of this information are included in the attached COI forms. In brief, Dr. McEvoy and Mr. Smalley have non-majority equity interests in Patient Safety Strategies, LLC, a company that markets medical applications for iOS-compatible devices. Neither of these authors has received any remuneration from the company, nor were any company products tested during this investigation. Dr. Schaefer receives patent royalties from Laerdal Medical Corporation (SimMan/Baby/3G) and holds a non-majority ownership interest in SimTunes (an outlet for copyrightable simulation material).

Figures

Figure 1
Agreement with the reference standard by rater and by the order in which the videos were graded during the first round of ‘continuous’ grading. The 8 videos were graded in a different order by each nonexpert rater. Because this figure represents agreement with the reference standard during the first round of grading, it shows that no significant learning curve existed after the checklist and video training program was complete.
Figure 2
An example screenshot of the SimMan software interface. Future testing will be needed to determine whether these checklists can be used reliably in this format while simulations are actually occurring.

Source: PubMed
