RT Journal Article
SR Electronic
T1 A fully automated deep-learning based method for lesion segmentation in 18F-DCFPyL PSMA PET images of patients with prostate cancer
JF Journal of Nuclear Medicine
JO J Nucl Med
FD Society of Nuclear Medicine
SP 399
OP 399
VO 60
IS supplement 1
A1 Kevin Leung
A1 Saeed Ashrafinia
A1 Mohammad Salehi Sadaghiani
A1 Pejman Dalaie
A1 Rima Tulbah
A1 Yafu Yin
A1 Ryan VanDenBerg
A1 Jeffrey Leal
A1 Michael Gorin
A1 Yong Du
A1 Martin Pomper
A1 Steven Rowe
A1 Arman Rahmim
YR 2019
UL http://jnm.snmjournals.org/content/60/supplement_1/399.abstract
AB Objectives: Reliable segmentation of prostate cancer (PCa) lesions from 18F-DCFPyL prostate-specific membrane antigen (PSMA) PET images is an important need toward the discovery and validation of imaging biomarkers for the diagnosis and prognosis of PCa [1,2]. Segmentation of PET images is challenging due to their relatively low spatial resolution and high noise levels [2]. Lesion delineation is typically performed manually, but manual segmentation often suffers from inter- and intra-operator variability [2]. In this work, we aimed to develop a fully automated deep-learning based method for lesion delineation in 18F-DCFPyL PET images. Such fully automated segmentation methods could be used to further develop a prognostic tool for PCa and to assist in individualized treatment planning and monitoring.

Methods: 18F-DCFPyL PSMA PET images of 207 patients with PCa were manually segmented by four nuclear medicine physicians. The dataset contained a total of 1,224 PCa lesions, with an average of approximately 6 lesions per patient. A deep convolutional neural network (CNN) was developed to delineate these PCa lesions. The 207 patient images were randomly partitioned into a training set of 145 patients and a test set of 62 patients. The hyperparameters of the network were optimized via 10-fold cross-validation on the training set. The network was trained with a cross-entropy loss function and a first-order stochastic gradient-based optimization algorithm [3]. The proposed method was then evaluated on the test set. Segmentation accuracy was assessed using standard evaluation metrics, including the Dice similarity coefficient (DSC), Jaccard similarity coefficient (JSC), true positive fraction (TPF), and true negative fraction (TNF) [1]. DSC and JSC are measures of overlap; higher values indicate more accurate segmentation. The proposed method was further evaluated on the quantification of PCa lesions in terms of lesion volume, measured in cubic centimeters (cc), and SUVmean. The mean absolute errors between the lesion volume (MAEvol) and SUVmean (MAESUVmean) values derived from the predicted lesion delineations and those derived from the manual segmentation ground truth were quantified; lower values of MAEvol and MAESUVmean indicate more accurate PCa lesion quantification. The proposed method was also compared to commonly used semi-automated thresholding-based techniques using thresholds of 30%, 40%, and 50% of SUVmax.
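As a rough illustration only (not the authors' code), the overlap and quantification metrics described in the Methods can be computed from binary lesion masks along the following lines; the array names, the SUV image, and the voxel volume are assumptions made for this sketch.

    import numpy as np

    def overlap_metrics(pred, truth):
        """DSC, JSC, TPF and TNF for two binary masks (predicted vs. manual)."""
        pred, truth = pred.astype(bool), truth.astype(bool)
        tp = np.sum(pred & truth)      # true positive voxels
        tn = np.sum(~pred & ~truth)    # true negative voxels
        fp = np.sum(pred & ~truth)     # false positive voxels
        fn = np.sum(~pred & truth)     # false negative voxels
        dsc = 2 * tp / (2 * tp + fp + fn)   # Dice similarity coefficient
        jsc = tp / (tp + fp + fn)           # Jaccard similarity coefficient
        tpf = tp / (tp + fn)                # true positive fraction (sensitivity)
        tnf = tn / (tn + fp)                # true negative fraction (specificity)
        return dsc, jsc, tpf, tnf

    def quantification_errors(pred, truth, suv, voxel_volume_cc):
        """Absolute errors in lesion volume (cc) and SUVmean for one lesion;
        averaging these over all test lesions gives MAEvol and MAESUVmean."""
        pred, truth = pred.astype(bool), truth.astype(bool)
        vol_err = abs(pred.sum() - truth.sum()) * voxel_volume_cc
        suv_err = abs(suv[pred].mean() - suv[truth].mean())
        return vol_err, suv_err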
Results: The proposed fully automated deep-learning method yielded a DSC, JSC, MAEvol and MAESUVmean of 0.71 (95% confidence interval (CI): 0.69, 0.72), 0.60 (95% CI: 0.58, 0.61), 0.28 (95% CI: 0.26, 0.30) and 2.23 (95% CI: 2.01, 2.46), respectively, on the test set. The best-performing semi-automated thresholding-based technique (30% of SUVmax) yielded a DSC, JSC, MAEvol and MAESUVmean of 0.66 (95% CI: 0.65, 0.68), 0.53 (95% CI: 0.52, 0.54), 0.58 (95% CI: 0.52, 0.64) and 3.73 (95% CI: 3.44, 4.02), respectively. Overall, the proposed method significantly outperformed the semi-automated thresholding-based methods (paired-sample t-test, p < 0.05), yielding more accurate segmentation and lesion quantification.

Conclusions: A deep-learning based segmentation method for PSMA PET images was developed and showed significant promise towards automated delineation and quantification of PCa lesions.
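For context, a minimal sketch of a fixed-threshold baseline of the kind used for comparison in the Methods (30%, 40%, or 50% of SUVmax) is shown below; the abstract does not state how the semi-automated methods were initialized, so the region-of-interest seed mask and the connected-component step are assumptions.

    import numpy as np
    from scipy import ndimage

    def suvmax_threshold_segmentation(suv, roi_mask, fraction=0.30):
        """Keep voxels above fraction * SUVmax within a rough ROI around the lesion,
        then return the connected component containing the hottest voxel."""
        roi_mask = roi_mask.astype(bool)
        suv_max = suv[roi_mask].max()
        above = (suv >= fraction * suv_max) & roi_mask
        labels, _ = ndimage.label(above)  # 3D connected components
        hottest = np.unravel_index(np.argmax(np.where(roi_mask, suv, -np.inf)), suv.shape)
        return labels == labels[hottest]

Lesion volume and SUVmean can then be read off the resulting mask in the same way as in the quantification sketch above.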