PT - JOURNAL ARTICLE
AU - Chin, Bennett
AU - Wehrend, Jonathan
AU - Silosky, Michael
AU - Halley, Christopher
AU - Niman, Remy
AU - Moses, Katie
AU - Karki, Ramesh
AU - Xing, Fuyong
TI - Automated Liver Lesion Detection in 68Ga DOTATATE PET/CT: Preliminary Results Using a Deep Learning 3D Fully Convolutional Network
DP - 2021 May 01
TA - Journal of Nuclear Medicine
PG - 1184
VI - 62
IP - supplement 1
4099 - http://jnm.snmjournals.org/content/62/supplement_1/1184.short
4100 - http://jnm.snmjournals.org/content/62/supplement_1/1184.full
SO - J Nucl Med 2021 May 01; 62
AB - Objectives: An automated method for hepatic lesion identification in 68Ga DOTATATE PET/CT may aid physician clinical workflow. Hepatic lesions are important sites of neuroendocrine metastases; however, they are especially challenging to detect because of high normal background liver activity. The purpose of this study was to develop and test a deep learning algorithm to identify hepatic lesions on 68Ga DOTATATE PET/CT.

Methods: 68Ga DOTATATE PET/CT scans (n=60 subjects) were deidentified and reviewed. Manual and semi-automated (MIM v6.9) methods to identify definitely positive lesions and their boundaries were assessed. A 3D U-Net-like neural network was developed to automatically locate individual liver lesions on 68Ga DOTATATE PET/CT [1]. This encoder-decoder architecture contains a downsampling path of 3 stacked residual learning blocks [2], with a 3D convolution of stride 2 connecting adjacent blocks. The upsampling path also contains 3 residual blocks, linked by 3D transposed convolutions with stride 2. Three long-range skip connections directly connect the outputs of the downsampling residual blocks to the outputs of the corresponding upsampling residual blocks.
We also introduced two contextual information aggregation layers [3] in the upsampling path and fused their information with the output of the last residual block, which was fed into a final convolutional layer for lesion detection. The network was trained for up to 100,000 iterations, with early stopping if performance on the validation set did not improve for 20,000 successive iterations. To reduce the effect of noisy predictions, we removed values below a specified threshold (10%) from the prediction map. A prediction was counted as a true positive if its intersection over union (IoU) with the corresponding gold standard exceeded a specified value.

Results: Based on speed and reproducibility of lesion definition, a MIM workflow with modified PERCIST criteria was chosen as the gold standard method to identify definitely positive lesions and define lesion boundaries. Lesions in each transaxial slice were then identified, confirmed, and annotated by two trained physicians. Livers were abnormal in 35 subjects and normal in 25. The data were then randomly split into training (15 abnormal, 15 normal), validation (10 abnormal, 10 normal), and test (10 abnormal) sets. Using the 3D fully convolutional network, preliminary results achieved an F1 score of 0.6 for lesion detection.

Conclusions: This preliminary study demonstrates the feasibility and potential of deep neural networks for automated lesion detection using very limited training data. Ongoing improvements in data annotation methods, increasing sample sizes, and novel training methods are anticipated to yield higher detection performance.

[1] Çiçek Ö et al. 3D U-Net: Learning dense volumetric segmentation from sparse annotation. International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). 2016; 9901:424-432.
[2] He K et al. Deep residual learning for image recognition. IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2016; 770-778.
[3] Chen H et al. DCAN: Deep contour-aware networks for accurate gland segmentation. IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2016; 2487-2496.
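The post-processing and evaluation the abstract describes — removing low values from the prediction map, counting a prediction as a true positive when its IoU with the gold-standard lesion exceeds a cutoff, and summarizing detection performance as an F1 score — can be sketched as below. This is a minimal illustration, not the authors' code: the relative 10% threshold, the 0.5 IoU cutoff, and the greedy one-to-one lesion matching are all assumptions, since the abstract only says values below "a specified threshold" were removed and IoU had to exceed "specified values".

```python
import numpy as np

def filter_prediction_map(pred, threshold=0.10):
    """Zero out low-confidence voxels in a prediction map.
    Assumption: the 10% threshold is taken relative to the map's maximum."""
    out = pred.copy()
    out[out < threshold * out.max()] = 0.0
    return out

def iou(pred_mask, gold_mask):
    """Intersection over union of two binary lesion masks."""
    inter = np.logical_and(pred_mask, gold_mask).sum()
    union = np.logical_or(pred_mask, gold_mask).sum()
    return inter / union if union > 0 else 0.0

def detection_f1(pred_masks, gold_masks, iou_threshold=0.5):
    """Greedily match predicted lesions to gold-standard lesions by IoU
    (assumed matching scheme) and return (precision, recall, F1)."""
    matched_gold = set()
    tp = 0
    for p in pred_masks:
        best_j, best_iou = -1, 0.0
        for j, g in enumerate(gold_masks):
            if j in matched_gold:
                continue
            v = iou(p, g)
            if v > best_iou:
                best_j, best_iou = j, v
        if best_iou > iou_threshold:
            matched_gold.add(best_j)
            tp += 1
    fp = len(pred_masks) - tp   # unmatched predictions
    fn = len(gold_masks) - tp   # missed gold-standard lesions
    precision = tp / (tp + fp) if pred_masks else 0.0
    recall = tp / (tp + fn) if gold_masks else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1
```

For example, one perfectly matched lesion plus one false positive and one missed lesion gives precision = recall = 0.5 and thus F1 = 0.5, in the same range as the 0.6 F1 the abstract reports.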