PT - JOURNAL ARTICLE
AU - Yousefirizi, Fereshteh
AU - Holloway, Caroline
AU - Alexander, Abraham
AU - Tonseth, Pete
AU - Uribe, Carlos
AU - Rahmim, Arman
TI - Tumor segmentation of multi-centric whole-body PET/CT images from different cancers using a 3D convolutional neural network
DP - 2022 Aug 01
TA - Journal of Nuclear Medicine
PG - 2517--2517
VI - 63
IP - supplement 2
4099 - http://jnm.snmjournals.org/content/63/supplement_2/2517.short
4100 - http://jnm.snmjournals.org/content/63/supplement_2/2517.full
SO - J Nucl Med 2022 Aug 01; 63
AB - Introduction: PET/CT segmentation can be challenging across different tasks due to the presence of noise and partial volume effects. Methods developed on one dataset may have limited generalizability to multi-centric data, given distinct acquisition and reconstruction settings and different cancer types. Tumor segmentation in some cancer types, such as cervical cancer, is particularly challenging due to the proximity of high-uptake organs (e.g. cervix and bladder), where conventional segmentation methods are ineffective. In other cancer types, such as diffuse large B-cell lymphoma (DLBCL), the varied shape and heterogeneity of lesions affects the generalizability of artificial intelligence (AI) approaches for segmentation. In this study, we explore the feasibility of training a convolutional neural network (CNN) to segment tumors in whole-body (WB) PET/CT images of different cancer types collected from different centers; i.e. collective deep learning. We additionally explore aiding the network, via ensemble voting, with dedicated regional 3D U-Nets (Figure 1). Methods: We developed a multi-channel (PET & CT) 3D U-Net with residual layers, supplemented with squeeze & excitation (SE) normalization (Figure 2), trained with a hybrid loss function combining distribution-based (focal), region-based (Dice), and boundary-based (Mumford-Shah (MS)) loss terms. We trained and validated the model on PET/CT images of DLBCL, PMBCL, and NSCLC. Additional scans from these cancers, as well as from cervical cancer patients, including data from previously unseen centers, were used as an external test set (Table 1). We additionally explored aiding our network, via ensemble learning, with regional 3D U-Net models trained on different anatomical regions to refine the segmentations (Jemaa et al. 2020). Specifically, we trained three 3D U-Nets for tumor and lesion segmentation in three anatomical regions of interest (ROIs): head-neck (HN), chest, and abdomen-pelvis (AP). To automatically find references for these anatomical regions, we detected the brain, lungs, liver, and bladder in each patient using (i) the method of Andrearczyk et al. 2020 for HN extraction in PET images; (ii) a 3D active contour method to find the center of mass of the lung area in CT images; (iii) a thresholding technique with morphological operations (Bauer et al. 2012) to detect the liver in PET images; and (iv) a pre-trained 3D U-Net (Farag et al. 2021) to detect the bladder in PET images. For the external test set involving cervical cancer, we used the anterior location of the bladder with respect to the cervix as prior knowledge. We studied our proposed 3D WB U-Net as well as an ensemble model that includes the regional models (Figure 1). Performance was reported in terms of the Dice similarity coefficient (DSC) and Hausdorff distance (HD) (Table 1).
Results: Our results (Table 1) suggest that training a 3D U-Net with SE modules and a hybrid loss on PET/CT images of DLBCL, PMBCL, and NSCLC can yield good segmentation performance, including on data from a cancer type not used in training (cervical) and on data from unseen centers (DLBCL and cervical). The results also showed that the model's performance improved further when followed by the regional models via ensemble learning. Conclusions: Our proposed CNN for segmenting whole-body PET/CT images from different cancer types using collective deep learning has the potential to rapidly assess whole-body tumor burden in PET images. As multi-centric PET/CT data from different cancers are increasingly utilized, there is a real possibility that a single neural network can be trained to quantify tumor burden across different cancers. Our results showed that the WB model trained on PMBCL, DLBCL, and NSCLC data performed better than one trained on DLBCL data alone, which may be due to the diverse size and location of DLBCL lesions. Future efforts include investigating improved, robust application to larger datasets and to cancer types with low tumor burden and small lesions.
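
The hybrid loss in the Methods combines a distribution-based (focal) term, a region-based (Dice) term, and a boundary-based Mumford-Shah (MS) term. The PyTorch sketch below illustrates one plausible single-class formulation; the relative weights, the focal gamma, and the simplified MS term (following Kim & Ye 2019) are assumptions, since the abstract does not give the exact formulation.

    # Minimal sketch of a focal + Dice + Mumford-Shah hybrid loss.
    # Weights and the single-class MS formulation are illustrative assumptions.
    import torch
    import torch.nn.functional as F

    def hybrid_loss(logits, target, image, w_focal=1.0, w_dice=1.0, w_ms=1.0,
                    focal_gamma=2.0, ms_lambda=1e-4, eps=1e-6):
        """logits/target: (B, 1, D, H, W); image: e.g. the PET channel, same shape."""
        prob = torch.sigmoid(logits)

        # Focal loss: cross-entropy down-weighted on easy voxels.
        bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
        p_t = prob * target + (1 - prob) * (1 - target)
        focal = ((1 - p_t) ** focal_gamma * bce).mean()

        # Soft Dice loss over the foreground class.
        inter = (prob * target).sum()
        dice = 1.0 - (2.0 * inter + eps) / (prob.sum() + target.sum() + eps)

        # Mumford-Shah term: piecewise-constant fit of the image under the soft
        # mask, plus a total-variation penalty encouraging smooth boundaries.
        c_in = (prob * image).sum() / (prob.sum() + eps)
        c_out = ((1 - prob) * image).sum() / ((1 - prob).sum() + eps)
        fit = (prob * (image - c_in) ** 2 + (1 - prob) * (image - c_out) ** 2).mean()
        tv = ((prob[:, :, 1:] - prob[:, :, :-1]).abs().mean() +
              (prob[:, :, :, 1:] - prob[:, :, :, :-1]).abs().mean() +
              (prob[..., 1:] - prob[..., :-1]).abs().mean())
        ms = fit + ms_lambda * tv

        return w_focal * focal + w_dice * dice + w_ms * ms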
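The ensemble step combines the whole-body prediction with the regional (HN, chest, AP) model predictions by voting. A minimal NumPy sketch of per-voxel majority voting over the detected regions follows; the function names, the ROI representation as slice tuples, and the strict-majority rule are illustrative assumptions, not the authors' code.

    # Per-voxel majority vote of a WB model and regional models over their ROIs.
    import numpy as np

    def ensemble_vote(wb_mask, regional_masks, region_boxes):
        """wb_mask: binary (D, H, W) whole-body prediction.
        regional_masks: full-size binary masks from the regional models.
        region_boxes: matching tuples of slices delimiting each model's ROI."""
        votes = wb_mask.astype(np.int32).copy()
        counts = np.ones(wb_mask.shape, dtype=np.int32)  # WB model votes everywhere
        for mask, box in zip(regional_masks, region_boxes):
            votes[box] += mask[box].astype(np.int32)
            counts[box] += 1
        # Strict majority; with two voters over a region this reduces to the
        # agreement (intersection) of the WB and regional predictions.
        return (2 * votes > counts).astype(np.uint8)

Averaging soft probabilities before thresholding would be a natural alternative to hard voting here.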
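Performance is reported via the Dice similarity coefficient (DSC) and Hausdorff distance (HD). A short sketch of these two standard metrics for binary 3D masks follows; the voxel-spacing handling is an assumption, and both masks are assumed non-empty.

    # DSC and symmetric HD for binary 3D masks, using SciPy's directed_hausdorff.
    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    def dice_coefficient(pred, gt, eps=1e-6):
        pred, gt = pred.astype(bool), gt.astype(bool)
        return (2.0 * np.logical_and(pred, gt).sum() + eps) / (pred.sum() + gt.sum() + eps)

    def hausdorff_distance(pred, gt, spacing=(1.0, 1.0, 1.0)):
        # Symmetric HD = max of the two directed distances, in physical units.
        p = np.argwhere(pred) * np.asarray(spacing)
        g = np.argwhere(gt) * np.asarray(spacing)
        return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])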