Deep learning based prediction of extraction difficulty for mandibular third molars

Jeong-Hun Yoo, Han-Gyeol Yeom, WooSang Shin, Jong Pil Yun, Jong Hyun Lee, Seung Hyun Jeong, Hun Jun Lim, Jun Lee, Bong Chul Kim

Abstract

This paper proposes a convolutional neural network (CNN)-based deep learning model for predicting the difficulty of extracting a mandibular third molar using a panoramic radiographic image. The applied dataset includes a total of 1053 mandibular third molars from 600 preoperative panoramic radiographic images. The extraction difficulty was evaluated based on the consensus of three human observers using the Pederson difficulty score (PDS). The classification model used a ResNet-34 pretrained on the ImageNet dataset. The correlation between the PDS values determined by the proposed model and those measured by the experts was calculated. The prediction accuracies for C1 (depth), C2 (ramal relationship), and C3 (angulation) were 78.91%, 82.03%, and 90.23%, respectively. The results confirm that the proposed CNN-based deep learning model could be used to predict the difficulty of extracting a mandibular third molar using a panoramic radiographic image.
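The abstract specifies only that the classifier is an ImageNet-pretrained ResNet-34 applied to panoramic images to grade the three Pederson criteria. Below is a minimal PyTorch sketch of such a setup, assuming a shared ResNet-34 backbone with one classification head per criterion (C1 depth, C2 ramal relationship, C3 angulation); the `PedersonNet` name, the three-class split per criterion, and the 224×224 input size are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (assumptions: one ImageNet-pretrained ResNet-34 backbone shared by
# three per-criterion classification heads; three classes per criterion and a
# 224x224 cropped input are illustrative, not taken from the paper).
import torch
import torch.nn as nn
from torchvision import models


class PedersonNet(nn.Module):
    def __init__(self, classes_per_criterion=(3, 3, 3)):
        super().__init__()
        # Requires torchvision >= 0.13 for the weights enum.
        backbone = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
        feat_dim = backbone.fc.in_features           # 512 for ResNet-34
        backbone.fc = nn.Identity()                  # keep only the pooled features
        self.backbone = backbone
        # One linear head per criterion: C1 depth, C2 ramal relationship, C3 angulation.
        self.heads = nn.ModuleList(nn.Linear(feat_dim, n) for n in classes_per_criterion)

    def forward(self, x):
        feats = self.backbone(x)                     # (B, 512)
        return [head(feats) for head in self.heads]  # list of per-criterion logits


if __name__ == "__main__":
    model = PedersonNet()
    dummy = torch.randn(2, 3, 224, 224)              # assumed cropped panoramic ROI
    logits_c1, logits_c2, logits_c3 = model(dummy)
    print(logits_c1.shape, logits_c2.shape, logits_c3.shape)  # each torch.Size([2, 3])
```

With this layout, each head can be trained with its own cross-entropy loss and evaluated independently, which is consistent with the per-criterion accuracies reported in the abstract.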

Conflict of interest statement

The authors declare no competing interests.

Figures

Figure 1
Distribution of the predicted Pederson difficulty score (PDS) for each actual PDS. The predicted distribution was generally close to the actual PDS: the model performed well for PDS 4–7 but overestimated cases with an actual PDS of 3 and underestimated cases with an actual PDS of 8 or 9.
Figure 2
Confusion matrix showing the classification results for each criterion.
Figure 3
Example probability distributions inferred by the proposed model. The blue dashed line indicates the actual score, and the red dashed line indicates the expectation of the predicted PDS. If the model produces a probability distribution as depicted in (A), it receives a score of 1; conversely, if it produces a distribution similar to (B), its score is miscalculated as 2. (A numerical sketch of this expectation step follows the figure list.)
Figure 4
Preprocessing of panoramic images.
Figure 5
Entire diagnosis process adopted in this study.
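The Figure 3 caption describes scoring via the expectation of the predicted probability distribution. The snippet below is a hedged numerical sketch of that step, assuming the predicted score is the probability-weighted mean of per-class score values, rounded to the nearest integer; the class score values (1–3) and the example logits are made up for illustration and are not taken from the paper.

```python
# Hedged sketch of the expectation step described in the Figure 3 caption:
# convert per-criterion softmax probabilities into an expected score and round it.
# The per-class score values and the rounding rule are assumptions for illustration.
import torch


def expected_score(logits: torch.Tensor, class_scores: torch.Tensor) -> torch.Tensor:
    """Expectation of the score under the predicted class distribution."""
    probs = torch.softmax(logits, dim=-1)   # (B, n_classes)
    return probs @ class_scores             # (B,)


if __name__ == "__main__":
    class_scores = torch.tensor([1.0, 2.0, 3.0])   # assumed per-criterion score values

    peaked = torch.tensor([[2.0, 0.5, -1.0]])      # sharply peaked at score 1, like (A)
    exp_a = expected_score(peaked, class_scores).item()
    print(exp_a, round(exp_a))                     # ~1.25 -> rounds to 1

    flat = torch.tensor([[0.2, 0.4, -0.1]])        # flatter distribution, like (B)
    exp_b = expected_score(flat, class_scores).item()
    print(exp_b, round(exp_b))                     # ~1.91 -> rounds to 2
```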

Source: PubMed
