Utility of deep learning networks for the generation of artificial cardiac magnetic resonance images in congenital heart disease

Gerhard-Paul Diller, Julius Vahle, Robert Radke, Maria Luisa Benesch Vidal, Alicia Jeanette Fischer, Ulrike M M Bauer, Samir Sarikouch, Felix Berger, Philipp Beerbaum, Helmut Baumgartner, Stefan Orwat, German Competence Network for Congenital Heart Defects Investigators

Abstract

Background: Deep learning algorithms are increasingly used for automated medical image analysis and cardiac chamber segmentation. In congenital heart disease especially, obtaining a sufficient number of training images and addressing data anonymity remain concerns.

Methods: Progressive generative adversarial networks (PG-GAN) were trained on cardiac magnetic resonance imaging (MRI) frames from a nationwide prospective study to generate synthetic MRI frames. These synthetic frames were subsequently used to train segmentation networks (U-Net), and both the quality of the synthetic training images and the performance of the resulting segmentation networks were compared with U-Net-based solutions trained entirely on patient data.

Results: Cardiac MRI data from 303 patients with Tetralogy of Fallot were used for PG-GAN training. Using this model, we generated 100,000 synthetic images with a resolution of 256 × 256 pixels in 4-chamber and 2-chamber views. All synthetic samples were classified as anatomically plausible by human observers. The segmentation performance of the U-Net trained on data from 42 separate patients was statistically significantly better than that of the PG-GAN-based training in an external dataset of 50 patients; however, the actual difference in segmentation quality was negligible (< 1% in absolute terms for all models).

Conclusion: We demonstrate the utility of PG-GANs for generating large numbers of realistic-looking cardiac MRI images, even in rare cardiac conditions. The generated images are not subject to data anonymity and privacy concerns and can be shared freely between institutions. Training supervised deep learning segmentation networks on these synthetic data yielded results similar to direct training on original patient data.

Trial registration: ClinicalTrials.gov NCT00266188.

Conflict of interest statement

Not applicable.

Figures

Fig. 1
Study overview illustrating the use of original cardiac magnetic resonance (CMR) images for generation of synthetic short axis (SAX) and long axis (LAX) images using a progressive generative adversarial network (PG-GAN). The resulting images were subjected to visual inspection by CMR experts and general cardiologists. In addition, deep learning segmentation networks (U-Net design) were built based on both PG-GAN-generated and actual CMR frames. The accuracy of the resulting segmentation networks was then compared on a separate data set not used for training of either network (see the sketch below)
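
As a rough illustration of the final comparison step described above, the sketch below computes a Dice overlap between predicted and reference segmentation masks on a held-out set. The abstract does not state which accuracy metric was used; the Dice coefficient shown here is a common choice for cardiac segmentation, and the masks are random stand-ins rather than study data.

```python
import numpy as np

def dice(pred, truth, eps=1e-7):
    """Dice overlap 2*|A∩B| / (|A| + |B|) between two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

def mean_dice(pred_masks, truth_masks):
    """Average Dice over a held-out test set of segmentation masks."""
    return float(np.mean([dice(p, t) for p, t in zip(pred_masks, truth_masks)]))

# Stand-in example: compare two hypothetical models on the same held-out masks
rng = np.random.default_rng(42)
truth = [rng.random((128, 128)) > 0.5 for _ in range(5)]
preds_model_a = [t ^ (rng.random(t.shape) > 0.98) for t in truth]  # ~2% of pixels flipped
preds_model_b = [t ^ (rng.random(t.shape) > 0.97) for t in truth]  # ~3% of pixels flipped
print(mean_dice(preds_model_a, truth), mean_dice(preds_model_b, truth))
```
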
Fig. 2
Illustration of the network design of the U-Net segmentation network. The network accepts a greyscale frame (128 × 128 pixels) and produces segmentation maps of equal size for the heart chambers involved. The network consists of a contracting path with multiple 3 × 3 convolutions followed by ReLU (Rectified Linear Unit) activation and a 2 × 2 max-pooling operation. The number of channels is doubled at each step of the contracting path. In the expanding part, the feature maps are upscaled symmetrically with 2 × 2 up-convolutions. In addition, channels of the expanding path are combined with the corresponding part of the contracting path through concatenation. The number on top corresponds to the number of channels, while the dimensions are given on the left of the respective boxes. For details see Ref. [13]
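
A minimal Keras sketch of a U-Net along the lines of the caption above (128 × 128 greyscale input, 3 × 3 convolutions with ReLU, 2 × 2 max-pooling, channel doubling in the contracting path, 2 × 2 up-convolutions and skip concatenations in the expanding path). The base filter count, depth and number of output classes are illustrative assumptions, not values reported in the paper.

```python
# Assumes TensorFlow/Keras; hyperparameters below are illustrative only.
from tensorflow.keras import layers, Model

def build_unet(input_shape=(128, 128, 1), base_filters=64, depth=4, n_classes=3):
    inputs = layers.Input(shape=input_shape)
    skips, x = [], inputs

    # Contracting path: two 3x3 conv + ReLU blocks, then 2x2 max-pooling;
    # the number of channels doubles at each step.
    for d in range(depth):
        f = base_filters * 2 ** d
        x = layers.Conv2D(f, 3, padding="same", activation="relu")(x)
        x = layers.Conv2D(f, 3, padding="same", activation="relu")(x)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)

    # Bottleneck
    f = base_filters * 2 ** depth
    x = layers.Conv2D(f, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(f, 3, padding="same", activation="relu")(x)

    # Expanding path: 2x2 up-convolutions, concatenation with the
    # corresponding contracting-path feature maps, then 3x3 conv + ReLU.
    for d in reversed(range(depth)):
        f = base_filters * 2 ** d
        x = layers.Conv2DTranspose(f, 2, strides=2, padding="same")(x)
        x = layers.Concatenate()([x, skips[d]])
        x = layers.Conv2D(f, 3, padding="same", activation="relu")(x)
        x = layers.Conv2D(f, 3, padding="same", activation="relu")(x)

    # Per-pixel class probabilities for the segmented heart chambers
    outputs = layers.Conv2D(n_classes, 1, activation="softmax")(x)
    return Model(inputs, outputs)
```
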
Fig. 3
Overview of the training of the progressive generative adversarial network (PG-GAN) using increasing image resolutions of 4 × 4, 8 × 8, 16 × 16, 32 × 32, 64 × 64 and 128 × 128 pixels. Finally, a maximal resolution of 256 × 256 pixels is achieved (right panel)
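
The sketch below illustrates, under simplifying assumptions, how a PG-GAN-style generator grows from 4 × 4 up to the final 256 × 256 resolution by stacking upsampling blocks. The gradual fade-in of new layers used in the actual PG-GAN (Karras et al., Ref. [5]), as well as the filter counts and latent size shown here, are simplifications or assumptions rather than details taken from the paper.

```python
# Assumes TensorFlow/Keras; a simplified, fade-in-free growth schedule.
from tensorflow.keras import layers, Model

def build_generator(target_res=256, latent_dim=512, base_filters=256):
    z = layers.Input(shape=(latent_dim,))
    x = layers.Dense(4 * 4 * base_filters, activation="relu")(z)
    x = layers.Reshape((4, 4, base_filters))(x)

    res, f = 4, base_filters
    while res < target_res:
        # Each stage doubles the spatial resolution: 4 -> 8 -> ... -> 256
        f = max(f // 2, 16)
        x = layers.UpSampling2D(2)(x)
        x = layers.Conv2D(f, 3, padding="same", activation="relu")(x)
        x = layers.Conv2D(f, 3, padding="same", activation="relu")(x)
        res *= 2

    # Single-channel greyscale CMR-like output scaled to [-1, 1]
    img = layers.Conv2D(1, 1, activation="tanh")(x)
    return Model(z, img)
```
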
Fig. 4
Comparison of synthetic cardiac magnetic resonance (CMR) images (top row) produced by the progressive generative adversarial network (PG-GAN) with the actual CMR images from patients with tetralogy of Fallot showing the highest degree of statistical similarity (Wasserstein distance; for details see the Methods section)
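
One plausible way to rank real CMR frames by statistical similarity to a synthetic frame is a one-dimensional Wasserstein distance between pixel-intensity distributions, as sketched below. The exact similarity computation is described in the paper's Methods section and is not reproduced here; treating each frame as a flattened intensity sample, and the helper function name, are assumptions made for illustration.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def closest_real_frames(synthetic_img, real_imgs, top_k=3):
    """Return indices of the real frames most similar to the synthetic one,
    measured by the 1-D Wasserstein distance between flattened pixel values."""
    synth = np.asarray(synthetic_img, dtype=float).ravel()
    dists = [wasserstein_distance(synth, np.asarray(r, dtype=float).ravel())
             for r in real_imgs]
    return np.argsort(dists)[:top_k]

# Example with random stand-in data (256x256 greyscale frames)
rng = np.random.default_rng(0)
fake = rng.random((256, 256))
reals = [rng.random((256, 256)) for _ in range(10)]
print(closest_real_frames(fake, reals))
```
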

References

    1. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. 2019;25(1):44–56. doi: 10.1038/s41591-018-0300-7.
    2. Hosny A, Parmar C, Quackenbush J, Schwartz LH, Aerts HJWL. Artificial intelligence in radiology. Nat Rev Cancer. 2018;18(8):500–510. doi: 10.1038/s41568-018-0016-5.
    3. Diller G-P, Babu-Narayan S, Li W, Radojevic J, Kempny A, Uebing A, et al. Utility of machine learning algorithms in assessing patients with a systemic right ventricle. Eur Heart J Cardiovasc Imaging. 2019;20(8):925–931. doi: 10.1093/ehjci/jey211.
    4. Bai W, Sinclair M, Tarroni G, et al. Automated cardiovascular magnetic resonance image analysis with fully convolutional networks. J Cardiovasc Magn Reson. 2018;20(1):65. doi: 10.1186/s12968-018-0471-x.
    5. Karras T, Aila T, Laine S, Lehtinen J. Progressive growing of GANs for improved quality, stability, and variation. 2017. arXiv:1710.10196.
    6. Diller GP, Orwat S, Vahle J, et al. Prediction of prognosis in patients with tetralogy of Fallot based on deep learning imaging analysis. Heart. 2020. doi: 10.1136/heartjnl-2019-315962.
    7. Volotat AK. GANs library [Internet]. 2020. pp. 1–2. Available from: . Accessed 07 July 2020.
    8. Gulrajani I, Ahmed F, Arjovsky M, Dumoulin V, Courville AC. Improved training of Wasserstein GANs. In: Advances in Neural Information Processing Systems. 2017. pp. 5767–5777.
    9. Diller G-P, Kempny A, Alonso-Gonzalez R, Swan L, Uebing A, Li W, et al. Survival prospects and circumstances of death in contemporary adult congenital heart disease patients under follow-up at a large tertiary centre. Circulation. 2015;132(22):2118–2125. doi: 10.1161/CIRCULATIONAHA.115.017202.
    10. Beerbaum P, Barth P, Kropf S, Sarikouch S, Kelter-Kloepping A, Franke D, et al. Cardiac function by MRI in congenital heart disease: impact of consensus training on interinstitutional variance. J Magn Reson Imaging. 2009;30(5):956–966. doi: 10.1002/jmri.21948.
    11. Sarikouch S, Koerperich H, Dubowy K-O, Boethig D, Boettler P, Mir TS, et al. Impact of gender and age on cardiovascular function late after repair of tetralogy of Fallot: percentiles based on cardiac magnetic resonance. Circ Cardiovasc Imaging. 2011;4(6):703–711. doi: 10.1161/CIRCIMAGING.111.963637.
    12. Orwat S, Diller G-P, Kempny A, Radke R, Peters B, Kühne T, et al. Myocardial deformation parameters predict outcome in patients with repaired tetralogy of Fallot. Heart. 2016;102(3):209–215. doi: 10.1136/heartjnl-2015-308569.
    13. Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. Cham: Springer International Publishing; 2015. pp. 234–241.
    14. Zhang J, Gajjala S, Agrawal P, Tison GH, Hallock LA, Beussink-Nelson L, et al. Fully automated echocardiogram interpretation in clinical practice: feasibility and diagnostic accuracy. Circulation. 2018;138(16):1623–1635. doi: 10.1161/CIRCULATIONAHA.118.034338.
    15. Chen C, Qin C, Qiu H, Tarroni G, Duan J, Bai W, Rueckert D. Deep learning for cardiac image segmentation: a review. 2019. arXiv:1911.03723.
    16. Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A, Bengio Y. Generative adversarial nets. In: Advances in Neural Information Processing Systems. 2014. pp. 2672–2680.
    17. Zhao M, Liu X, Liu H, Wong KKL. Super-resolution of cardiac magnetic resonance images using Laplacian pyramid based on generative adversarial networks. Comput Med Imaging Graph. 2020;80:101698. doi: 10.1016/j.compmedimag.2020.101698.
    18. Diller G-P, Lammers AE, Babu-Narayan S, Li W, Radke RM, Baumgartner H, et al. Denoising and artefact removal for transthoracic echocardiographic imaging in congenital heart disease: utility of diagnosis specific deep learning algorithms. Int J Cardiovasc Imaging. 2019;35(12):2189–2196. doi: 10.1007/s10554-019-01671-0.
    19. Jin C-B, Kim H, Liu M, Jung W, Joo S, Park E, et al. Deep CT to MR synthesis using paired and unpaired data. Sensors (Basel). 2019;19(10):2361. doi: 10.3390/s19102361.
    20. Shin HC, Tenenholtz NA, Rogers JK, Schwarz CG, Senjem ML, Gunter JL, et al. Medical image synthesis for data augmentation and anonymization using generative adversarial networks. Cham: Springer International Publishing; 2018. pp. 1–11.

Source: PubMed
