Radiologist characteristics associated with interpretive performance of diagnostic mammography

Diana L Miglioretti, Rebecca Smith-Bindman, Linn Abraham, R James Brenner, Patricia A Carney, Erin J Aiello Bowles, Diana S M Buist, Joann G Elmore

Abstract

Background: Extensive variability has been noted in the interpretive performance of screening mammography; however, less is known about variability in diagnostic mammography performance.

Methods: We examined the performance of 123 radiologists who interpreted 35,895 diagnostic mammography examinations that were obtained to evaluate a breast problem from January 1, 1996, through December 31, 2003, at 72 facilities that contribute data to the Breast Cancer Surveillance Consortium. We modeled the influence of radiologist characteristics on the sensitivity and false-positive rate of diagnostic mammography, adjusting for patient characteristics by use of a Bayesian hierarchical logistic regression model.
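In outline, a hierarchical logistic regression of this kind (our notation here is a sketch, not the authors' exact specification) models the interpretation of mammogram i by radiologist j as:

    logit(Pr(y_ij = 1)) = x_ij'β + z_j'γ + u_j,    u_j ~ Normal(0, σ²)

where y_ij indicates a positive interpretation, x_ij are patient covariates (age, breast density, time since last mammogram, self-reported lump), z_j are radiologist characteristics, and u_j is a radiologist-level random effect capturing residual between-radiologist variation. Fitting the model separately among women with and without breast cancer yields adjusted estimates of sensitivity and the false-positive rate, respectively.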

Results: The median sensitivity was 79% (range = 27%-100%) and the median false-positive rate was 4.3% (range = 0%-16%). Radiologists in academic medical centers, compared with other radiologists, had higher sensitivity (88%, 95% confidence interval [CI] = 77% to 94%, versus 76%, 95% CI = 72% to 79%; odds ratio [OR] = 5.41, 95% Bayesian posterior credible interval [BPCI] = 1.55 to 21.51) with a smaller increase in their false-positive rates (7.8%, 95% CI = 4.8% to 12.7%, versus 4.2%, 95% CI = 3.8% to 4.7%; OR = 1.73, 95% BPCI = 1.05 to 2.67) and a borderline statistically significant improvement in accuracy (OR = 3.01, 95% BPCI = 0.97 to 12.15). Radiologists spending 20% or more of their time on breast imaging had statistically significantly higher sensitivity than those spending less time on breast imaging (80%, 95% CI = 76% to 83%, versus 70%, 95% CI = 64% to 75%; OR = 1.60, 95% BPCI = 1.05 to 2.44) with a non-statistically significant increase in false-positive rates (4.6%, 95% CI = 4.0% to 5.3%, versus 3.9%, 95% CI = 3.3% to 4.6%; OR = 1.17, 95% BPCI = 0.92 to 1.51). More recent training in mammography and more experience performing breast biopsy examinations were associated with a decreased threshold for recalling patients, resulting in similar statistically significant increases in both sensitivity and false-positive rates.
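The two performance measures follow the standard definitions used in mammography audit studies: sensitivity is computed among women with cancer, and the false-positive rate among women without cancer. A minimal sketch, using hypothetical counts chosen only to illustrate rates near the reported medians:

```python
# Hypothetical counts for illustration only (not from the study):
tp, fn = 79, 21    # true positives / false negatives among 100 exams in women with cancer
fp, tn = 43, 957   # false positives / true negatives among 1,000 exams in women without cancer

# Sensitivity = TP / (TP + FN): proportion of cancers correctly flagged.
sensitivity = tp / (tp + fn)

# False-positive rate = FP / (FP + TN): proportion of cancer-free exams flagged.
false_positive_rate = fp / (fp + tn)

print(f"sensitivity = {sensitivity:.1%}")            # prints "sensitivity = 79.0%"
print(f"false-positive rate = {false_positive_rate:.1%}")  # prints "false-positive rate = 4.3%"
```

A radiologist's recall threshold moves both numbers in the same direction, which is why the abstract reports a lower threshold raising sensitivity and false-positive rates together.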

Conclusions: We found considerable variation in the interpretive performance of diagnostic mammography across radiologists that was not explained by the characteristics of the patients whose mammograms were interpreted. This variability is concerning and likely affects many women with and without breast cancer.

Figures

Fig. 1
Observed (unadjusted) radiologist-specific sensitivity versus false-positive rate and the corresponding receiver operating characteristic curve within the observed range of false-positive rates. The area of a circle is proportional to the number of mammograms from patients with a diagnosis of breast cancer that were interpreted by that radiologist (range = 1-77 mammograms).
Fig. 2
Unadjusted and adjusted sensitivity and false-positive rates for diagnostic mammography by radiologist characteristics. Rates were adjusted for patient age, time since last mammogram, self-report of lump, breast density, and mammography registry. Open squares = unadjusted values; solid diamonds = adjusted values; CI = confidence interval.
Fig. 3
Association between radiologist characteristics and a true-positive (sensitivity) and false-positive mammogram. In addition to the radiologist characteristics indicated, data were also adjusted for patient age, time since last mammogram, report of breast lump, mammographic breast density, and mammography registry. OR = odds ratio. BPCI = Bayesian posterior credible interval.
Fig. 4
Observed and adjusted sensitivity (A) and false-positive rate (B) for each radiologist. Model 1 = observed (unadjusted) rates; model 2 = adjusted for registry and correlation within radiologists; model 3 = additionally adjusted for patient characteristics (age, breast density, time since last mammography examination, and self-reported presence of a breast lump); model 4 = additionally adjusted for radiologist characteristics. Data for each radiologist have been connected with a line.

Source: PubMed
