TY - JOUR
T1 - Segmentation of Salivary Gland in Tc-99m pertechnetate Quantitative SPECT/CT using Deep Convolutional Neural Networks
JF - Journal of Nuclear Medicine
JO - J Nucl Med
SP - 400
LP - 400
VL - 60
IS - supplement 1
AU - Junyoung Park
AU - Dongkyu Oh
AU - Won Woo Lee
AU - Jae Sung Lee
Y1 - 2019/05/01
UR - http://jnm.snmjournals.org/content/60/supplement_1/400.abstract
N2 - Objectives: Salivary glands are multifunctional organs with protective, digestive, exocrine, and endocrine functions. Dysfunction of the glands has been evaluated using Tc-99m pertechnetate planar scans with limited accuracy. Quantitative single-photon emission computed tomography (SPECT)/computed tomography (CT) is a promising alternative for the functional evaluation of salivary glands [1]. However, manual drawing of the region of interest (ROI) of the whole salivary gland in CT images is required, preventing wide use of this effective approach. Manual drawing usually takes 15 to 20 min per scan and is performed by a nuclear medicine physician; it is a time-consuming and labor-intensive task. In this study, we developed an automated salivary gland segmentation method based on a deep learning approach. Methods: We used retrospectively acquired salivary SPECT/CT data from 335 xerostomic patients (268 for training and 67 for testing). The ROIs were manually drawn by nuclear medicine physicians. A modified 3D U-Net was used to learn an end-to-end mapping between a CT image and the segmented salivary gland volume. Unlike the conventional U-Net, we adopted pre-activation residual blocks and element-wise summation to forward feature maps from one stage of the network to the next. We used a 3D spatial drop-out technique, as it has shown better performance than batch normalization when adjacent voxels within feature maps are strongly correlated. For a quantitative evaluation of segmentation performance, the Dice similarity coefficient between the manual and deep-learning-based automatic segmentations was calculated. We also assessed the correlation and mean absolute percentage error of the percent injected dose (%ID) of the parotid (%PU) and submandibular (%SU) glands obtained with the two segmentation methods. In addition, we performed five-fold cross-validation to confirm the consistency of the performance. Results: The proposed method automatically segmented the salivary glands in the CT images with a high Dice similarity coefficient relative to manual segmentation (mean ± SD = 0.81 ± 0.09 in the main experiment). Automatic segmentation of both the parotid and submandibular glands took only a few seconds per patient, compared with approximately 15 min per scan for manual segmentation. The manual and automatic segmentation methods showed comparable performance, as illustrated in the supporting material; both segmentations captured the high parotid and submandibular uptake in the SPECT images. The %ID values derived using the manual and automatic segmentation methods were strongly correlated (R2 = 0.93 for %PU and R2 = 0.94 for %SU in the main experiment). Scatter and Bland-Altman plots comparing the %ID measurements for each side of the parotid and submandibular glands obtained using manual and deep-learning-generated volumes are shown in the supporting material. The mean absolute percentage errors between the %ID measurements obtained using manual and CNN-based volumes are also reported for all five cross-validation folds.
The absolute difference between the two methods for %PU was 7.75 ± 8.28% in the main experiment. The coefficient of determination, R2, was also consistent across the folds, ranging from 0.93 to 0.94. Discussion: The proposed deep learning approach for the 3D segmentation of salivary glands in CT enables accurate and fast %ID measurement. Therefore, this method should facilitate %ID measurement and the functional evaluation of salivary glands using quantitative SPECT/CT.
ER -
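
For readers who want a concrete picture of the network components named in the abstract, the following Python (PyTorch) sketch illustrates a pre-activation 3D residual block that uses spatial drop-out and an element-wise sum skip connection. It is an illustrative interpretation only: the class name, channel count, kernel size, and drop-out rate are assumptions, not the authors' published architecture or hyperparameters.

```python
# Minimal sketch of a pre-activation 3D residual block with spatial drop-out.
# Hypothetical settings; not the authors' implementation.
import torch
import torch.nn as nn


class PreActResBlock3D(nn.Module):
    """Pre-activation residual block: activation and drop-out precede each
    convolution, and the input is added back via an element-wise sum."""

    def __init__(self, channels: int, p_drop: float = 0.2):
        super().__init__()
        self.body = nn.Sequential(
            nn.ReLU(inplace=True),
            nn.Dropout3d(p=p_drop),  # spatial drop-out: drops entire 3D feature maps
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Dropout3d(p=p_drop),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)  # element-wise sum forwards the input features


# Example: a CT patch shaped (batch, channels, depth, height, width)
block = PreActResBlock3D(channels=16)
y = block(torch.randn(1, 16, 32, 64, 64))
```

The evaluation metrics reported in the abstract can likewise be written down compactly. The sketch below, assuming binary segmentation masks and paired %ID measurements as NumPy arrays, shows the Dice similarity coefficient and the mean absolute percentage error; variable names are illustrative.

```python
# Sketch of the reported evaluation metrics (Dice and MAPE); illustrative only.
import numpy as np


def dice_coefficient(manual: np.ndarray, auto: np.ndarray, eps: float = 1e-8) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    manual = manual.astype(bool)
    auto = auto.astype(bool)
    intersection = np.logical_and(manual, auto).sum()
    return float(2.0 * intersection / (manual.sum() + auto.sum() + eps))


def mean_absolute_percentage_error(id_manual: np.ndarray, id_auto: np.ndarray) -> float:
    """MAPE between %ID values from manual and CNN-based volumes."""
    id_manual = np.asarray(id_manual, dtype=float)
    id_auto = np.asarray(id_auto, dtype=float)
    return float(np.mean(np.abs(id_auto - id_manual) / id_manual) * 100.0)
```

A Dice value of 1.0 indicates perfect overlap with the manual mask; the abstract reports a mean of 0.81 ± 0.09 for the proposed network.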