A Study to Evaluate Strategies for Teaching Effective Use of Diagnostic Tests

October 15, 2019 updated by: John Brush, Sentara Norfolk General Hospital
A recent Institute of Medicine monograph brought attention to high rates of diagnostic error and called for better educational efforts to improve diagnostic accuracy.1 Educational methods, however, are rarely tested, and some educational efforts may be ineffective and wasteful.2 In this study, we plan to examine whether explicit instruction on diagnostic methods will affect the diagnostic accuracy of second-year medical students and internal medicine residents.

Study Overview

Status

Completed

Detailed Description

Research has shown that expert diagnosticians use a two-step process to confirm a diagnosis: hypothesis generation to produce diagnostic possibilities, followed by hypothesis verification to confirm the most likely possibility.3-5 The first step appears to be non-analytical, related to pattern recognition. The second step could be performed using analytical reasoning; however, physicians rarely make an overt calculation of conditional probabilities. Instead, experienced clinicians typically use an implicit habit, or heuristic, called "anchoring and adjusting" to incorporate diagnostic testing information into their thinking.6,7 Cognitive psychologists have postulated that anchoring and adjusting provides a way for probability estimates to be updated based on additional new evidence. Most of the discussion in the literature focuses on how this heuristic can lead to biased thinking because of base-rate neglect or anchoring.6 Very little discussion addresses how the heuristic could be improved to yield more accurate probability estimates, or whether proper use of the heuristic could be taught.

The degree to which a diagnostic test should lead to an adjustment of a probability estimate depends on the operating characteristics of the test, that is, its sensitivity and specificity. Likelihood ratios, once understood, are easier to incorporate into one's thinking and thus could be used to calibrate the anchoring and adjusting heuristic.7
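
To make the arithmetic concrete, the sketch below (in Python, with hypothetical sensitivity, specificity, and pretest probability values that are not drawn from the study) shows how a likelihood ratio converts a pretest probability into a posttest probability on the odds scale.

```python
def posttest_probability(pretest_prob, sensitivity, specificity, test_positive=True):
    """Update a pretest probability using the likelihood ratio of a test result.

    LR+ = sensitivity / (1 - specificity); LR- = (1 - sensitivity) / specificity.
    The update is done on the odds scale: posttest odds = pretest odds * LR.
    """
    lr = (sensitivity / (1 - specificity)) if test_positive else ((1 - sensitivity) / specificity)
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1 + posttest_odds)

# Hypothetical example: pretest probability 30%, test with 90% sensitivity and 80% specificity.
print(round(posttest_probability(0.30, 0.90, 0.80, test_positive=True), 2))   # ~0.66
print(round(posttest_probability(0.30, 0.90, 0.80, test_positive=False), 2))  # ~0.05
```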

In this randomized trial, we tested whether explicit conceptual instruction on Bayesian reasoning and likelihood ratios would improve Bayesian updating, compared with a second intervention in which we provided multiple (27) examples of clinical problem solving. The third arm provided minimal teaching about the diagnoses, with no explicit instruction and no worked examples.

Study Type

Interventional

Enrollment (Actual)

65

Phase

  • Not Applicable

Contacts and Locations

This section provides the contact details for those conducting the study, and information on where this study is being conducted.

Study Locations

    • Virginia
      • Norfolk, Virginia, United States, 23507
        • Sentara Norfolk General Hospital

Participation Criteria

Researchers look for people who fit a certain description, called eligibility criteria. Some examples of these criteria are a person's general health condition or prior treatments.

Eligibility Criteria

Ages Eligible for Study

18 years and older (Adult, Older Adult)

Accepts Healthy Volunteers

Yes

Genders Eligible for Study

All

Description

Inclusion Criteria:

  • Medical Student at McMaster University or Eastern Virginia Medical School
  • Completed 18 months of coursework

Study Plan

This section provides details of the study plan, including how the study is designed and what the study is measuring.

How is the study designed?

Design Details

  • Primary Purpose: Health Services Research
  • Allocation: Randomized
  • Interventional Model: Parallel Assignment
  • Masking: Single

Arms and Interventions

Experimental: Analytical
Students will receive brief instruction in probability, sensitivity, specificity, and likelihood ratios, with distributions and calculations. Pretest and posttest probabilities will be computed for two cases for each of the three conditions listed above.

Active Comparator: Experiential
Students will receive brief instruction conceptually discussing sensitivity and specificity (e.g., "a sensitive test will be positive at even low levels of disease. However, this can lead to a number of false positive errors, when the test is positive even when there is no disease. As a result, it is most useful for ruling out a diagnosis"). They will then work through a total of 30 cases, 10 for each condition, in blocked sequence. For each brief written case, they will be asked for a probability of diagnosis after the clinical information is presented. The test result will then be given, and they will be asked for a post-test probability. Their estimate will be compared to the computed value based on published estimates of sensitivity and specificity, and feedback will be provided.

Placebo Comparator: No Explicit Instruction or Examples
Students will receive 3 passages from a clinical text related to each of the 3 conditions in the study and will be asked to study them for 15 minutes each.

Intervention / Treatment (same description for all arms)
The present study is designed to contrast two instructional methods: explicit instruction in likelihood ratios and pretest/posttest probabilities versus implicit instruction based on presentation of multiple cases. These will be compared to a "no intervention" control group.
Other Names:
  • Teaching through examples
  • No active teaching

What is the study measuring?

Primary Outcome Measures

Outcome Measure: The accuracy of participants' probability revisions was assessed by comparing them with posttest probabilities calculated using Bayes' rule. An effect size was calculated to measure how closely students' estimates matched the calculated revision.

Measure Description: To perform the effect size analysis, two transformations were performed. First, the difference between the subjective estimate and the Bayesian calculation of the posttest probability was squared, to remove negative differences and permit combining the effects of positive and negative test results. Second, a correction based on the intrinsic error of a probability estimate was applied by dividing each squared difference by p(1-p). In this manner, we transformed each raw difference to a squared effect size (difference / error of difference). Finally, the square root was computed to transform the data back to an effect size. The resulting effect size was then used for statistical analysis. For this primary analysis, a mixed-model ANOVA was used.

Time Frame: The post-test was taken within 72 hours of completion of the instructional phase.
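
A minimal sketch of this transformation is shown below (Python, with hypothetical numbers; it assumes the p in p(1-p) refers to the Bayesian posttest probability, which the description does not state explicitly).

```python
import math

def probability_effect_size(subjective_estimate, bayesian_posttest):
    """Transform a raw estimation error into the effect size described above.

    Step 1: square the difference so errors from positive and negative tests combine.
    Step 2: divide by p(1 - p), the intrinsic error of a probability estimate
            (p taken here as the Bayesian posttest probability -- an assumption).
    Step 3: take the square root to return to the effect-size scale.
    """
    p = bayesian_posttest
    squared_effect = (subjective_estimate - bayesian_posttest) ** 2 / (p * (1 - p))
    return math.sqrt(squared_effect)

# Hypothetical example: a student estimates 0.80 where Bayes' rule gives 0.66.
print(round(probability_effect_size(0.80, 0.66), 2))  # ~0.30
```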

Collaborators and Investigators

This is where you will find people and organizations involved with this study.

Publications and helpful links

The person responsible for entering information about the study voluntarily provides these publications. These may be about anything related to the study.

Study record dates

These dates track the progress of study record and summary results submissions to ClinicalTrials.gov. Study records and reported results are reviewed by the National Library of Medicine (NLM) to make sure they meet specific quality control standards before being posted on the public website.

Study Major Dates

Study Start (Actual)

May 15, 2018

Primary Completion (Actual)

January 1, 2019

Study Completion (Actual)

October 15, 2019

Study Registration Dates

First Submitted

October 15, 2019

First Submitted That Met QC Criteria

October 15, 2019

First Posted (Actual)

October 17, 2019

Study Record Updates

Last Update Posted (Actual)

October 17, 2019

Last Update Submitted That Met QC Criteria

October 15, 2019

Last Verified

October 1, 2019

More Information

Terms related to this study

Other Study ID Numbers

  • 18-04-EX-0062

Plan for Individual participant data (IPD)

Plan to Share Individual Participant Data (IPD)?

No

Drug and device information, study documents

Studies a U.S. FDA-regulated drug product

No

Studies a U.S. FDA-regulated device product

No

