Automatically discriminating and localizing COVID-19 from community-acquired pneumonia on chest X-rays

Zheng Wang, Ying Xiao, Yong Li, Jie Zhang, Fanggen Lu, Muzhou Hou, Xiaowei Liu

Abstract

The COVID-19 outbreak continues to threaten the health and lives of people worldwide, making it an immediate priority to develop and test a computer-aided detection (CAD) scheme based on deep learning (DL) that automatically localizes and differentiates COVID-19 from community-acquired pneumonia (CAP) on chest X-rays. This study therefore aims to develop and test an efficient and accurate deep learning scheme that assists radiologists in automatically recognizing and localizing COVID-19. A retrospective chest X-ray image dataset was collected from open image data and Xiangya Hospital and divided into a training group and a testing group. The proposed CAD framework comprises two DL steps: the Discrimination-DL and the Localization-DL. The first DL extracts lung features from chest X-ray radiographs for COVID-19 discrimination and was trained on 3548 chest X-ray radiographs. The second DL was trained with 406-pixel patches and applied to the recognized X-ray radiographs to localize the infection and assign it to the left lung, right lung, or both lungs (bipulmonary). X-ray radiographs of CAP and healthy controls were enrolled to evaluate the robustness of the model. Compared with the radiologists' discrimination and localization results, the Discrimination-DL achieved an accuracy of 98.71% for COVID-19 discrimination, while the Localization-DL achieved a localization accuracy of 93.03%. This work demonstrates the feasibility of using a novel deep learning-based CAD scheme to efficiently and accurately distinguish COVID-19 from CAP and to detect its localization with high accuracy and agreement with radiologists.

Keywords: COVID-19; Chest X-ray (CXR); Community-acquired pneumonia (CAP); Computer-aided detection (CAD); Deep learning (DL).

Conflict of interest statement

The authors declare no competing interests.

© 2020 Elsevier Ltd. All rights reserved.

Figures

Fig. 1
Flow diagram of the proposed CAD system illustrating the discrimination and localization of COVID-19 from CAP on chest X-ray radiographs. The Discrimination-DL distinguishes COVID-19 from CAP on chest X-rays, and the Localization-DL is trained to detect the lung localization (i.e., left lung, right lung, or bipulmonary). Abbreviations: Healthy: healthy controls; CAP: community-acquired pneumonia; Left: left lung; Right: right lung.
Fig. 2
Deep learning architectures. (1). We utilized a proposal-of-lung (PoL) regressor with superpixels to generate the Discrimination-DL input; the PoL output is a 2 × 4 matrix giving the coordinates of the bipulmonary regions. (2). The Discrimination-DL adopts a feature pyramid network as its backbone on top of a ResNet architecture and outputs a probability over the cohort categories. (3). The Localization-DL builds its attention modules from the basic unit of a state-of-the-art residual attention network; the located region is encoded as a 1 × 2 vector representing all potential pulmonary locations. Abbreviations: PoL: proposal of lung regressor; CAP: community-acquired pneumonia; Left/L: left lung; Right/R: right lung; Bilateral: bipulmonary.
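As described above, the PoL regressor emits a 2 × 4 coordinate matrix, one row per lung region, that defines the Discrimination-DL input. A minimal NumPy sketch of how such coordinates could be used to crop lung patches; the row layout (x_min, y_min, x_max, y_max) and the function name are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def crop_lung_regions(image, pol_matrix):
    """Crop lung patches from a chest X-ray given a 2x4 PoL matrix.

    Each row of pol_matrix is assumed to be (x_min, y_min, x_max, y_max)
    in pixel coordinates, one row per lung.
    """
    patches = []
    for x_min, y_min, x_max, y_max in pol_matrix:
        patches.append(image[y_min:y_max, x_min:x_max])
    return patches

# Toy example: a 512x512 radiograph and two lung boxes.
image = np.zeros((512, 512), dtype=np.uint8)
pol = np.array([[40, 60, 240, 460],    # left-lung box
                [272, 60, 472, 460]])  # right-lung box
left, right = crop_lung_regions(image, pol)
print(left.shape, right.shape)  # (400, 200) (400, 200)
```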
Algorithm 1
Training procedure for Discrimination-DL.
Algorithm 2
Training procedure for Localization-DL.
Algorithm 3
Testing procedure for the computer-aided diagnosis scheme.
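The testing procedure chains the two networks: the Discrimination-DL assigns the cohort label, and only COVID-19-positive radiographs are passed on to the Localization-DL. A minimal sketch of that control flow, with hypothetical callables standing in for the trained models:

```python
# Cohort and location labels as used in the figures.
COHORTS = ["Healthy", "CAP", "COVID-19"]
LOCATIONS = ["left", "right", "bilateral"]

def cad_inference(image, discrimination_dl, localization_dl):
    """Two-step testing: discriminate first, localize only COVID-19 cases."""
    cohort = COHORTS[discrimination_dl(image)]
    if cohort != "COVID-19":
        return cohort, None  # no localization step for non-COVID cases
    return cohort, LOCATIONS[localization_dl(image)]

# Stub models standing in for the trained networks.
print(cad_inference("xray.png", lambda img: 2, lambda img: 2))
# ('COVID-19', 'bilateral')
```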
Fig. 3
Performance in discriminating COVID-19 from CAP on the testing subset. (A). ROC curve of the Discrimination-DL, with radiologists' performance for comparison; the area under the ROC curve of the Discrimination-DL was 0.99. (B). Per-class ROC curves of the trained Discrimination-DL on the testing subset, with AUCs of 1.00 for COVID-19, 1.00 for healthy controls, and 0.99 for CAP. Abbreviations: ROC: receiver operating characteristic; AUC: area under the ROC curve; Acc: accuracy; CAP: community-acquired pneumonia; Healthy: healthy controls.
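The AUC values reported in the figure can be computed without plotting via the standard rank-sum (Mann-Whitney U) identity: the AUC equals the probability that a randomly chosen positive scores above a randomly chosen negative. A small self-contained sketch:

```python
def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U) identity.

    scores: predicted probabilities; labels: 1 = positive class, 0 = negative.
    Ties count as half a win, matching the trapezoidal ROC area.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A perfectly separating classifier gives AUC = 1.0.
print(auc([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))  # 1.0
```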
Fig. 4
Representative chest X-ray radiographs with their corresponding Grad-CAM images.
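The Grad-CAM images shown here follow the standard recipe: channel weights are the global-average-pooled gradients of the class score, and the heat map is the ReLU of the weighted sum of the feature maps. A minimal NumPy sketch of that computation (the array shapes are illustrative, not the paper's):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heat map from one convolutional layer.

    activations, gradients: arrays of shape (channels, H, W), the feature
    maps and the gradients of the class score w.r.t. those maps.
    """
    weights = gradients.mean(axis=(1, 2))             # alpha_k: pooled gradients
    cam = np.tensordot(weights, activations, axes=1)  # sum_k alpha_k * A^k
    return np.maximum(cam, 0)                         # ReLU keeps positive evidence

A = np.ones((3, 4, 4))
G = np.array([1.0, -2.0, 0.5])[:, None, None] * np.ones((3, 4, 4))
cam = grad_cam(A, G)
print(cam[0, 0])  # weighted sum 1 - 2 + 0.5 = -0.5, clipped to 0.0
```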
Fig. 5
Performance in localizing the infected lungs on the testing subset. (A). ROC curve of the Localization-DL compared with radiologists' performance; the area under the curve was 0.93. (B). Per-case ROC curves of the trained Localization-DL, with AUCs of 0.92 for the left lung, 0.93 for the right lung, and 0.87 for bipulmonary involvement. Abbreviations: ROC: receiver operating characteristic; AUC: area under the ROC curve; Acc: accuracy; Left: left lung; Right: right lung; Bilateral: bipulmonary.
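Since the Localization-DL encodes the located region as a 1 × 2 vector over the two lungs, a per-lung score pair maps naturally onto the three reported labels. A hedged sketch of one such mapping; the 0.5 threshold and the function name are illustrative choices, not taken from the paper:

```python
def localize(lung_probs, threshold=0.5):
    """Map a 1x2 vector of (left, right) involvement probabilities
    to a location label; the threshold is an illustrative choice."""
    left, right = (p >= threshold for p in lung_probs)
    if left and right:
        return "bilateral"
    if left:
        return "left"
    if right:
        return "right"
    return "none"

print(localize([0.92, 0.81]))  # bilateral
print(localize([0.70, 0.12]))  # left
```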
Fig. 6
Several examples illustrating high-level part features with attention masks.
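The attention masks visualized here come from residual attention units, whose characteristic formulation is H = (1 + M) · F: a soft mask M in [0, 1] re-weights the trunk features F without suppressing the identity path. A one-line NumPy sketch of that unit's output:

```python
import numpy as np

def attention_residual(features, mask):
    """Residual attention output H = (1 + M) * F: the soft mask M
    amplifies informative features while the identity term preserves
    the original signal even where M is near zero."""
    return (1.0 + mask) * features

F = np.array([[1.0, 2.0], [3.0, 4.0]])
M = np.array([[0.0, 1.0], [0.5, 0.0]])
print(attention_residual(F, M))  # [[1.  4. ] [4.5 4. ]]
```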


Source: PubMed
