Baby-MONITOR: a composite indicator of NICU quality

Jochen Profit, Marc A Kowalkowski, John A F Zupancic, Kenneth Pietz, Peter Richardson, David Draper, Sylvia J Hysong, Eric J Thomas, Laura A Petersen, Jeffrey B Gould

Abstract

Background and objectives: NICUs vary in the quality of care delivered to very low birth weight (VLBW) infants. NICU performance on 1 measure of quality only modestly predicts performance on others. Composite measurement of quality of care delivery may provide a more comprehensive assessment of quality. The objective of our study was to develop a robust composite indicator of quality of NICU care provided to VLBW infants that accurately discriminates performance among NICUs.

Methods: We developed a composite indicator, Baby-MONITOR, based on 9 measures of quality chosen by a panel of experts. Measures were standardized, equally weighted, and averaged. We used the California Perinatal Quality Care Collaborative database to perform a cross-sectional analysis of care given to VLBW infants between 2004 and 2010. Performance on the Baby-MONITOR is not an absolute marker of quality but indicates overall performance relative to that of the other NICUs. We used sensitivity analyses, varying assumptions and methods, to assess the robustness of the composite indicator.
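
As a rough illustration of the base-case construction described above (standardize each measure, weight equally, average), the sketch below assumes a pandas DataFrame of risk-adjusted NICU-level rates with one hypothetical column per quality measure, already oriented so that higher values indicate better care. This is a minimal sketch under those assumptions, not the authors' code.

```python
# Minimal sketch of the base-case composite: standardize each measure to a
# z score across NICUs, weight equally, and average.
# Assumption (not from the paper): `nicu_rates` is a pandas DataFrame indexed
# by NICU, one column per quality measure, holding risk-adjusted rates
# oriented so that higher values mean better care.
import pandas as pd

def baby_monitor_base_case(nicu_rates: pd.DataFrame) -> pd.Series:
    """Return one composite score per NICU (relative, not absolute, quality)."""
    z_scores = (nicu_rates - nicu_rates.mean()) / nicu_rates.std(ddof=1)
    return z_scores.mean(axis=1)  # equal weights across the measures
```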

Results: Our sample included 9023 VLBW infants in 22 California regional NICUs. We found significant variations within and between NICUs on measured components of the Baby-MONITOR. Risk-adjusted composite scores discriminated performance among this sample of NICUs. Sensitivity analysis that included different approaches to normalization, weighting, and aggregation of individual measures showed the Baby-MONITOR to be robust (r = 0.89-0.99).
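
The robustness result above is an agreement between the NICU rankings produced by different scoring schemes, summarized as a correlation. The sketch below illustrates one way to compute such an agreement; the specific correlation statistic is an illustrative choice, not necessarily the one used in the paper.

```python
# Agreement between two composite scorings of the same NICUs, summarized
# as a rank correlation (illustrative choice of statistic).
from scipy.stats import spearmanr

def rank_agreement(scores_scheme_a, scores_scheme_b) -> float:
    """Rank correlation between NICU scores from two different schemes."""
    rho, _ = spearmanr(scores_scheme_a, scores_scheme_b)
    return rho
```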

Conclusions: The Baby-MONITOR may be a useful tool to comprehensively assess the quality of care delivered by NICUs.

Keywords: infant; newborn; performance measurement; quality of care.

Copyright © 2014 by the American Academy of Pediatrics.

Figures

FIGURE 1
The base case is obtained by averaging the z scores of the quality measures for each NICU (see Table 2), using 2004 to 2007 data. Note: failure of two 95% intervals to overlap corresponds to a statistically significant difference at approximately the 99% level.
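
A common back-of-the-envelope justification for this note, assuming two independent NICU estimates with roughly equal standard errors $s$: non-overlap of the two 95% intervals means

$$
|\hat{\theta}_1 - \hat{\theta}_2| > 2 \times 1.96\, s = 3.92\, s,
\qquad
\operatorname{SE}(\hat{\theta}_1 - \hat{\theta}_2) = s\sqrt{2} \approx 1.41\, s,
$$

so the test statistic exceeds $3.92 / 1.41 \approx 2.8$, corresponding to a two-sided $p$ of roughly 0.006, i.e., significance at approximately the 99% level.
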
FIGURE 2
The key purpose of this graph is to show the difference of each symbol from the base-case trend line. NICUs are ordered according to the base-case rank. For each NICU, ranks were computed using 5 different weighting and aggregation schemes. The base case uses equal weighting and additive aggregation of z scores. In addition, we used mean and median expert weights, ranks rather than z scores for aggregation, and multiplicative aggregation (see Supplemental Web Appendix, Section 4, Table B for numerical detail). If fewer than 5 symbols are displayed for a NICU, this is attributable to overlap. NICU ranks for all schemes are highly correlated (r = 0.89–0.99). However, of particular interest is the comparison of the base case using additive aggregation (trend line) and the geometric mean using multiplicative aggregation (large white circles). Geometric aggregation penalizes NICUs with extreme performance (NICU K drops from rank 12 to rank 21).
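
The sketch below contrasts the two aggregation schemes compared in this figure. Because z scores can be negative, they are shifted onto a positive scale here before the geometric mean is taken; this particular shift is an illustrative assumption, not the transformation used in the paper.

```python
# Additive (base case) vs multiplicative (geometric mean) aggregation of the
# standardized measure scores, one composite value per NICU.
import numpy as np
import pandas as pd

def additive_score(z: pd.DataFrame) -> pd.Series:
    # Base case: arithmetic mean of the standardized measure scores per NICU.
    return z.mean(axis=1)

def multiplicative_score(z: pd.DataFrame) -> pd.Series:
    # Geometric mean after shifting all scores to be strictly positive
    # (illustrative shift; see lead-in note).
    shifted = z - z.min().min() + 1.0
    return np.exp(np.log(shifted).mean(axis=1))
```

Because a geometric mean is pulled down disproportionately by a single very low component, a NICU with one extreme weakness ranks worse under multiplicative aggregation than under additive aggregation, which is the behavior described for NICU K.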

Source: PubMed
