Weakly supervised individual ganglion cell segmentation from adaptive optics OCT images for glaucomatous damage assessment

Somayyeh Soltanian-Zadeh, Kazuhiro Kurokawa, Zhuolin Liu, Furu Zhang, Osamah Saeedi, Daniel X Hammer, Donald T Miller, Sina Farsiu

Abstract

Cell-level quantitative features of retinal ganglion cells (GCs) are potentially important biomarkers for improved diagnosis and treatment monitoring of neurodegenerative diseases such as glaucoma, Parkinson's disease, and Alzheimer's disease. Yet, due to limited resolution, individual GCs cannot be visualized by commonly used ophthalmic imaging systems, including optical coherence tomography (OCT), and assessment is limited to gross layer thickness analysis. Adaptive optics OCT (AO-OCT) enables in vivo imaging of individual retinal GCs. We present WeakGCSeg, an automated method for segmenting GC layer (GCL) somas from AO-OCT volumes based on weakly supervised deep learning, which effectively utilizes weak annotations during training. Experimental results show that WeakGCSeg is on par with or superior to human experts and is superior to other state-of-the-art networks. The automated quantitative features of individual GCL somas show an increase in structure-function correlation in glaucoma subjects compared to using thickness measures from OCT images. Our results suggest that, by automatically quantifying GC morphology, WeakGCSeg can potentially alleviate a major bottleneck in the use of AO-OCT for vision research.

Figures

Fig. 1.
Details of WeakGCSeg for instance segmentation of GCL somas from AO-OCT volumes. (A) Overview of WeakGCSeg. (B) Network architecture. The numbers in parentheses denote the filter size. The number of filters for each conv. layer is written under each level. Nf = 32 is the base number of filters. Black circles denote summation. Conv, convolution; ReLU, rectified linear unit; BN, batch-normalization; S, stride. (C) Post-processing the CNN’s output to segment GCL somas without human supervision. The colored boxes correspond to steps with matching colors. Scale bar: 50 μm.
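The caption of Fig. 1(B) names only the building blocks (Conv, BN, ReLU, strided convolutions, summation shortcuts) and the base filter count Nf = 32; the full layer configuration is given in the figure itself. The snippet below is a minimal sketch of what one such block could look like, assuming a PyTorch-style implementation; the 3x3x3 kernels, the 1x1x1 shortcut projection, and the input patch size are illustrative assumptions, not the paper's exact settings.

```python
# Minimal sketch of one 3D Conv-BN-ReLU block with a summed (residual)
# shortcut, in the spirit of Fig. 1(B). Only Nf = 32 comes from the caption;
# kernel sizes, the shortcut projection, and the patch size are assumptions.
import torch
import torch.nn as nn

NF = 32  # base number of filters, per the caption

class ResidualConvBlock3D(nn.Module):
    """Two Conv-BN-ReLU layers whose output is summed with the block input."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv3d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1)
        self.bn1 = nn.BatchNorm3d(out_ch)
        self.conv2 = nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm3d(out_ch)
        self.relu = nn.ReLU(inplace=True)
        # 1x1x1 projection so the shortcut matches shape before the summation
        self.shortcut = nn.Conv3d(in_ch, out_ch, kernel_size=1, stride=stride)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + self.shortcut(x))  # "black circle" = summation

# Example: first encoder level operating on a single-channel AO-OCT patch.
block = ResidualConvBlock3D(in_ch=1, out_ch=NF, stride=1)
features = block(torch.zeros(1, 1, 32, 64, 64))  # (batch, channel, Z, Y, X)
print(features.shape)  # torch.Size([1, 32, 32, 64, 64])
```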
Fig. 2.
Results on IU’s dataset. (A) Average precision-recall curves of WeakGCSeg compared to average expert grader performances (circle markers). Each plotted curve is the average of eight and five curves at the same threshold values for the 3.75°/12.75° and 8.5° data, respectively. (B) GCL soma diameters across all subjects compared to previously reported values. Circle and square markers denote mean soma diameters from in vivo and histology studies, respectively. Error bars denote one standard deviation. “r” denotes the range of values. P, parasol GCs; M, midget GCs; fm, foveal margin; pm, papillomacular; pr, peripheral retina.
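As described in Fig. 2(A), each reported curve is the average of several per-volume precision-recall curves evaluated at the same detection thresholds. The snippet below is a minimal sketch of that averaging, assuming per-volume TP/FP/FN counts at a shared threshold grid are already available; the counts shown are invented placeholders, not data from the paper.

```python
# Minimal sketch: average per-volume precision-recall curves that were
# computed at the same threshold values (as in Fig. 2(A)).
import numpy as np

def pr_curve(tp, fp, fn):
    """Precision and recall arrays for one volume, one entry per threshold."""
    tp, fp, fn = map(np.asarray, (tp, fp, fn))
    precision = tp / np.maximum(tp + fp, 1)
    recall = tp / np.maximum(tp + fn, 1)
    return precision, recall

# Hypothetical (TP, FP, FN) counts for three volumes at five shared thresholds.
volumes = [
    ([90, 85, 80, 70, 60], [30, 20, 12, 6, 2], [10, 15, 20, 30, 40]),
    ([95, 90, 82, 71, 58], [28, 18, 10, 5, 1], [5, 10, 18, 29, 42]),
    ([88, 84, 79, 69, 55], [25, 17, 11, 7, 3], [12, 16, 21, 31, 45]),
]

curves = [pr_curve(*v) for v in volumes]
mean_precision = np.mean([p for p, _ in curves], axis=0)
mean_recall = np.mean([r for _, r in curves], axis=0)
print(np.round(mean_precision, 3), np.round(mean_recall, 3))
```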
Fig. 3.
En face (XY) and cross-sectional (XZ and YZ) slices illustrate (top) soma detection results compared to the gold-standard manual markings and (bottom) overlay of soma segmentation masks, with each soma represented by a randomly assigned color. Cyan, red, and yellow markers denote TP, FN, and FP, respectively. Only somas with centers located within 5 μm from the depicted slices are marked in the top row. The intensities of AO-OCT images are shown in log-scale. Scale bars: 50 μm and 25 μm for en face and cross-sectional slices, respectively.
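Fig. 3 colors detections as TP, FN, or FP relative to the gold-standard manual markings. The snippet below is a minimal sketch of one way such labels can be produced: detected soma centers are matched one-to-one to manual markings, and pairs farther apart than a distance cutoff are discarded. The Hungarian matching and the cutoff value are illustrative assumptions, not necessarily the paper's exact scoring protocol.

```python
# Minimal sketch of scoring detected soma centers against gold-standard
# manual markings as TP/FP/FN (the cyan/red/yellow markers in Fig. 3).
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def match_somas(detected, manual, max_dist_um=5.0):
    """Match detected soma centers (N,3) to manual markings (M,3), in microns."""
    d = cdist(detected, manual)              # pairwise Euclidean distances
    rows, cols = linear_sum_assignment(d)    # optimal one-to-one assignment
    matched = d[rows, cols] <= max_dist_um   # keep only nearby pairs
    tp = int(matched.sum())
    fp = len(detected) - tp                  # detections with no nearby marking
    fn = len(manual) - tp                    # markings missed by the detector
    return tp, fp, fn

detected = np.array([[10.0, 10.0, 5.0], [30.0, 12.0, 6.0], [55.0, 40.0, 7.0]])
manual = np.array([[11.0, 9.0, 5.5], [31.0, 13.0, 6.0]])
print(match_somas(detected, manual))  # (2, 1, 0)
```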
Fig. 4.
Results on FDA’s healthy and glaucoma subjects. (A) Average precision-recall curves compared to average expert grader performances (circle markers). Each plotted curve is the average of six and 10 curves for the healthy and glaucoma volumes, respectively. (B) En face (XY) and cross-sectional (XZ and YZ) slices illustrating soma detection and segmentation results. See Fig. 3 for further details.
Fig. 5.
Structural and functional characteristics of glaucomatous eyes compared to controls. (A) GCL soma diameters compared to values reported in the literature. (B) Automatically computed cell densities and average diameters for all volumes from FDA’s device. (C) TD measurements versus cell densities and GCL thickness values for four glaucoma subjects. ρ, Pearson correlation coefficient; TD, total deviation. Subjects are shown with different marker shapes.
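Fig. 5(C) reports Pearson correlation coefficients (ρ) between visual-field TD measurements and, respectively, cell density and GCL thickness. The snippet below is a minimal sketch of that computation using scipy; the numerical values are invented placeholders, not data from the paper.

```python
# Minimal sketch of the structure-function comparison in Fig. 5(C):
# Pearson correlation of total deviation (TD) against cell density and
# against GCL thickness. All values below are illustrative placeholders.
import numpy as np
from scipy.stats import pearsonr

td = np.array([-1.2, -3.5, -6.0, -9.8, -14.1])            # dB
cell_density = np.array([14.0, 11.5, 9.0, 6.2, 4.1])      # x10^3 cells/mm^2
gcl_thickness = np.array([38.0, 35.0, 33.5, 30.0, 29.5])  # um

rho_density, _ = pearsonr(td, cell_density)
rho_thickness, _ = pearsonr(td, gcl_thickness)
print(f"rho(TD, density)   = {rho_density:.2f}")
print(f"rho(TD, thickness) = {rho_thickness:.2f}")
```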


Source: PubMed
