%0 Journal Article %A Kevin Leung %A Wael Marashdeh %A Rick Wray %A Saeed Ashrafinia %A Arman Rahmim %A Martin Pomper %A Abhinav Jha %T A deep-learning-based fully automated segmentation approach to delineate tumors in FDG-PET images of patients with lung cancer %D 2018 %J Journal of Nuclear Medicine %P 323-323 %V 59 %N supplement 1 %X Objectives: Accurate delineation of lung tumors from PET images is important for PET-based radiotherapy-treatment planning and for reliable quantification of metrics such as metabolic tumor volume and radiomics features. However, the high noise and limited resolution of PET images make reliable tumor delineation challenging [1,2]. Deep-learning methods have shown promise in delineating tumors in several imaging modalities, although their value in PET remains to be carefully explored [3]. The purpose of this study was to develop a fully automated deep-learning approach for tumor segmentation of FDG-PET images and to evaluate the approach using realistic simulations and patient data. Methods: A convolutional neural network (CNN)-based deep-learning method was developed that automatically locates and segments tumors on FDG-PET images of patients with lung cancer, outputting a tumor mask (Fig. 1A). The CNN learns feature maps via an encoder network consisting of convolutional layers; the encoder output is then mapped to a lesion mask by the decoding network. The method requires no user input to indicate tumor location and is thus fully automated. The method was first evaluated using realistic simulations, where ground-truth tumor boundaries were known. Using the anthropomorphic XCAT phantom [4], realistic digital phantoms with lung tumors of different sizes and uptakes, all based on existing clinical data, were generated. Projection data for these phantoms were obtained by simulating a PET system that modeled the various image-degrading processes, including noise and blur.
The data were reconstructed using the 2D OSEM algorithm to yield 14,000 simulated images for different phantoms. The realism of these images was evaluated via visual interpretation of randomly selected images by a board-certified nuclear-medicine radiologist. The CNN was trained on 10,000 simulated images by minimizing a loss function quantifying the error between the predicted and true lesion masks. The CNN hyperparameters were then optimized on a validation dataset of 2,000 images, and the optimized CNN was tested on the remaining 2,000 images. The deep-learning approach was next evaluated using existing clinical FDG-PET images from patients with lung cancer. For these images, manual tumor segmentation by a board-certified nuclear-medicine radiologist served as the ground truth. The CNN obtained with the simulated images was fine-tuned on 1,300 patient images and evaluated on 369 patient images. For both simulated and clinical PET images, the training-testing process was repeated for different combinations of training, validation, and testing sets to assess the robustness of the approach to different data splits. The accuracy of the segmentation output was quantified using the Dice similarity coefficient (DSC), Jaccard similarity coefficient (JSC), true positive fraction (TPF), and true negative fraction (TNF) [5]. The CNN accuracy was compared to that of semi-automated thresholding approaches using 30%, 40%, and 50% of SUVmax. Results: The proposed fully automated deep-learning approach yielded a DSC of 0.85±0.15 and 0.88±0.14 on patient and simulated images, respectively, indicating accurate tumor delineation. Results with the other metrics are reported in the supporting data (Table 1). The proposed approach outperformed the semi-automated thresholding methods across these metrics. Representative segmentation results are shown in Figs. 1B-C.
Conclusions: A CNN-based deep-learning approach to segmentation showed significant promise for fully automated delineation of lung tumors in FDG-PET images. %U
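The evaluation pipeline the abstract describes rests on two standard ingredients: overlap metrics (DSC, JSC) comparing a predicted binary mask against a ground-truth mask, and a semi-automated baseline that thresholds the image at a fixed fraction of SUVmax. The sketch below illustrates both on NumPy arrays; the function names and the toy inputs are my own, not from the paper, and the actual study computed these metrics on reconstructed PET volumes.

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient (DSC): 2|A∩B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

def jaccard(pred: np.ndarray, truth: np.ndarray) -> float:
    """Jaccard similarity coefficient (JSC): |A∩B| / |A∪B|."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0

def suvmax_threshold_mask(image: np.ndarray, fraction: float = 0.4) -> np.ndarray:
    """Semi-automated baseline: keep voxels at or above fraction * SUVmax,
    mimicking the 30%/40%/50%-of-SUVmax thresholding the CNN was compared to."""
    return image >= fraction * image.max()
```

Note that the two overlap metrics are monotonically related (DSC = 2·JSC / (1 + JSC)), so they rank methods identically; reporting both, as the abstract does, mainly aids comparison with prior literature.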