Teaching surgical skills: what kind of practice makes perfect?: a randomized, controlled trial

Carol-Anne E Moulton, Adam Dubrowski, Helen Macrae, Brent Graham, Ethan Grober, Richard Reznick

Abstract

Objective: Surgical skills laboratories have become an important venue for early skill acquisition. The principles that govern training in this novel educational environment remain largely unknown; the most common method of training, especially for continuing medical education (CME), is a single multihour event. This study addresses the impact of an alternative method, in which learning is distributed over a number of training sessions, and assesses the acquisition and transfer of a new skill to a life-like model.

Methods: Thirty-eight junior surgical residents, randomly assigned to either massed (1 day) or distributed (weekly) practice regimens, were taught a new skill (microvascular anastomosis). Each group spent the same amount of time in practice. Performance was assessed pretraining, immediately post-training, and 1 month post-training. The ultimate test of anastomotic skill was assessed with a transfer test to a live, anesthetized rat. Previously validated computer-based and expert-based outcome measures were used. In addition, clinically relevant outcomes were assessed.

Results: Both groups showed immediate improvement in performance, but the distributed group performed significantly better on the retention test in most outcome measures (time, number of hand movements, and expert global ratings; all P values <0.05). The distributed group also outperformed the massed group on the live rat anastomosis in all expert-based measures (global ratings, checklist score, final product analysis, and competency for the operating room (OR); all P values <0.05).

Conclusions: Our current model of training surgical skills using short courses (for both CME and structured residency curricula) may be suboptimal. Residents retain and transfer skills better if taught in a distributed manner. Despite the greater logistical challenge, we need to restructure training schedules to allow for distributed practice.

Figures

https://www.ncbi.nlm.nih.gov/pmc/articles/instance/1856544/bin/8FF1.jpg
FIGURE 1. Box plots of all expert-based measures. The bar represents median, the box 25th to 75th percentile, and the whiskers the range of the data. Microsurgical drills (pre, post, and retention tests) and live rat (transfer) performances are plotted for global ratings, checklists, final product analysis, and competency for the distributed (clear) and massed (shaded) groups. Significant (set at P < 0.05) differences between tests and between groups are highlighted with an asterisk.
https://www.ncbi.nlm.nih.gov/pmc/articles/instance/1856544/bin/8FF2.jpg
FIGURE 2. Box plots for both computer-based measures. The bar represents median, the box 25th to 75th percentile, and the whiskers the range of the data. Microsurgical drills (pre, post, and retention tests) and live rat (transfer) performances are plotted for time and number of dominant hand movements for the distributed (clear) and massed (shaded) groups. Significant (set at P < 0.05) differences between tests and between groups are highlighted with an asterisk.
https://www.ncbi.nlm.nih.gov/pmc/articles/instance/1856544/bin/8FF3.jpg
FIGURE 3. Learning curve of outcome measure (A, time to complete drill; B, number of dominant hand movements) versus test for both groups. Shaded areas represent the tests that were subsequently used as pre and post tests. There were 4 training sessions, and each participant was tested on the microsurgical drill immediately before and after each session.

Source: PubMed
