Tissue classification for laparoscopic image understanding based on multispectral texture analysis

Yan Zhang, Sebastian J Wirkert, Justin Iszatt, Hannes Kenngott, Martin Wagner, Benjamin Mayer, Christian Stock, Neil T Clancy, Daniel S Elson, Lena Maier-Hein

Abstract

Intraoperative tissue classification is one of the prerequisites for providing context-aware visualization in computer-assisted minimally invasive surgery. As many anatomical structures are difficult to differentiate in conventional RGB medical images, we propose a classification method based on multispectral image patches. In a comprehensive ex vivo study, we show through statistical analysis that (1) multispectral imaging data are superior to RGB data for organ tissue classification when used in conjunction with widely applied feature descriptors and (2) combining the tissue texture with the reflectance spectrum improves the classification performance. The classifier reaches an accuracy of 98.4% on our dataset. Multispectral tissue analysis could thus evolve as a key enabling technique in computer-assisted laparoscopy.

Keywords: multispectral laparoscopy; multispectral texture analysis; tissue classification.

Figures

Fig. 1
Concept for multispectral tissue classification. After multispectral image acquisition, noise is removed and the resulting image is cropped into patches (1). From each of these patches, the LBP texture feature and the AS are calculated (2) and fed into an SVM model to classify the organ characterized by the patch under investigation (3).
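A minimal sketch of this patch-based pipeline is given below, assuming each patch is available as a NumPy array with one axis per spectral band. The function names, LBP parameters, and SVM kernel are illustrative assumptions and not the authors' exact configuration; the sketch uses scikit-image and scikit-learn as stand-ins for whatever implementation the paper used.

```python
# Sketch: per-band LBP texture histograms + average spectrum (AS), fed to an SVM.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def extract_features(patch, n_points=8, radius=1):
    """patch: (H, W, B) multispectral patch.
    Returns concatenated per-band LBP histograms and the average spectrum."""
    lbp_hists = []
    for b in range(patch.shape[2]):
        codes = local_binary_pattern(patch[:, :, b], n_points, radius,
                                     method="uniform")
        hist, _ = np.histogram(codes, bins=n_points + 2,
                               range=(0, n_points + 2), density=True)
        lbp_hists.append(hist)
    avg_spectrum = patch.reshape(-1, patch.shape[2]).mean(axis=0)
    return np.concatenate(lbp_hists + [avg_spectrum])

def train_classifier(patches, labels):
    """patches: list of (H, W, B) arrays; labels: organ class per patch."""
    X = np.stack([extract_features(p) for p in patches])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X, labels)
    return clf
```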
Fig. 2
Setup for capturing multispectral images of kidney tissue. From left to right: the multispectral laparoscope; the three porcine kidneys, originating from three different pigs; and the camera poses, where the red region denotes the tissue, the black bar denotes the rod lens, the yellow region denotes the light source, and the dark background indicates that the images were captured in a dark environment.
Fig. 3
Image annotation is performed by excluding areas not covered by tissue as well as over- and underexposed regions. These regions (e.g., specular reflections) do not contain tissue-specific information and could be excluded automatically in future work. The red overlay indicates the regions to classify. From left to right: colon, gallbladder, liver, and kidney.
Fig. 4
Eight patches from two subsets in S and the associated AS of each patch. The patches on the left and right correspond to tissue areas of approximately 12 mm × 12 mm and 25 mm × 25 mm, respectively. They were extracted from the image corresponding to the central wavelength of 470 nm. In each spectrum plot, the x-axis denotes the wavelength (470, 480, 511, 560, 580, 600, 660, and 700 nm) and the y-axis denotes the image intensity.
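For illustration, a per-patch average spectrum of the kind plotted in Fig. 4 could be obtained as the mean intensity per band; the sketch below is a hypothetical reconstruction in which the wavelength list follows the caption and the variable names are placeholders.

```python
# Sketch: compute and plot the average spectrum of one multispectral patch.
import numpy as np
import matplotlib.pyplot as plt

WAVELENGTHS_NM = [470, 480, 511, 560, 580, 600, 660, 700]  # from the caption

def plot_average_spectrum(patch):
    """patch: (H, W, 8) multispectral patch; plots mean intensity per band."""
    avg_spectrum = patch.reshape(-1, patch.shape[2]).mean(axis=0)
    plt.plot(WAVELENGTHS_NM, avg_spectrum, marker="o")
    plt.xlabel("wavelength (nm)")
    plt.ylabel("mean image intensity")
    plt.show()
```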
Fig. 5
Accuracy obtained for different descriptors using RGB data (rose) and eight-channel multispectral data (red). Each box extends from the first quartile to the third quartile, and the whiskers span the 5th to the 95th percentile; outliers are also shown. In each figure, the horizontal axis shows the feature description methods and the vertical axis indicates the accuracy rate. The black point and the bar within each box denote the mean and the median accuracy, respectively. First row: all camera poses are included in the training set. Second row: the camera pose used in the testing set is excluded from the training set.
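The pose-held-out protocol of the second row could be approximated, under assumptions, with a leave-one-group-out split over camera poses. In the sketch below the feature matrix, labels, and pose identifiers are placeholders, and the classifier mirrors the illustrative SVM used earlier rather than the authors' exact setup.

```python
# Sketch: hold out one camera pose at a time for testing.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def pose_held_out_accuracy(X, y, poses):
    """X: (n_patches, n_features), y: organ labels, poses: camera-pose id per patch.
    Trains on all but one pose, tests on the held-out pose, and averages accuracy."""
    logo = LeaveOneGroupOut()
    accuracies = []
    for train_idx, test_idx in logo.split(X, y, groups=poses):
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        clf.fit(X[train_idx], y[train_idx])
        accuracies.append(clf.score(X[test_idx], y[test_idx]))
    return float(np.mean(accuracies))
```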

Source: PubMed
