Abstract
3348
Introduction: Large heterogeneous multicentric datasets are indispensable for building a generalizable deep learning model for PET/CT image segmentation. However, legal, ethical, and patient privacy concerns prevent sharing datasets between centers. In the current study, we developed a decentralized federated deep learning framework for multi-institutional PET/CT image segmentation.
Methods: In the current study, we enrolled 220 clinical PET/CT images of head and neck cancer patients gathered from 5 different centers. Eighty percent of each center's data were used for training/validation, whereas the remaining data were kept for testing. All tumors were manually segmented on fused PET/CT images. Image normalization was performed linearly on each dataset with respect to the maximum value within the dataset. All images were cropped to the body contour (head and neck region) and then resized to a 200×200 matrix. PET and CT images were fused using the guided filtering-based fusion (GFF) algorithm. We implemented a novel pure Transformer U-shaped network as the core segmentation algorithm, trained within a parallel federated framework. In parallel federated algorithms, the global model is distributed from the server to the different centers, the models are trained independently at each center, and the locally trained models are then returned to the server, where they are aggregated to update the central global model. These steps are repeated until model convergence. Different image segmentation metrics, including the Dice similarity coefficient (Dice) and the Jaccard coefficient, were used for performance assessment of the algorithms. In addition, SUVmax and SUVmean were calculated for quantitative comparison with respect to the ground truth segmentation. We compared the federated model with a centralized deep learning model, in which all data were pooled at one center and the model was trained on the whole dataset.
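The distribute-train-aggregate loop described above corresponds to the standard federated averaging scheme. A minimal sketch is given below; the model is reduced to a plain parameter vector and `local_train` is a toy least-squares step standing in for each center's full segmentation-network training, so all names and data here are illustrative assumptions, not the study's actual implementation.

```python
import numpy as np

def local_train(global_weights, center_data, lr=0.1, epochs=1):
    # Placeholder for one center's local training: toy gradient descent
    # on least squares; in the study this would be training the
    # Transformer U-shaped network on that center's PET/CT data.
    w = global_weights.copy()
    X, y = center_data
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(global_weights, centers):
    # 1) distribute the global model to each center,
    # 2) train independently at each center,
    # 3) aggregate the returned models on the server
    #    (weighted by local dataset size) to update the global model.
    local_models, sizes = [], []
    for data in centers:
        local_models.append(local_train(global_weights, data))
        sizes.append(len(data[1]))
    return np.average(local_models, axis=0, weights=np.array(sizes, float))

# Toy setup with 5 "centers" holding data from the same underlying task.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
centers = []
for _ in range(5):
    X = rng.normal(size=(40, 2))
    y = X @ true_w + 0.01 * rng.normal(size=40)
    centers.append((X, y))

w = np.zeros(2)
for _ in range(50):  # repeat rounds until convergence
    w = federated_round(w, centers)
```

No raw data leaves a center in this loop; only model parameters travel between the centers and the server, which is what makes the approach compatible with the privacy constraints noted in the Introduction.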
Results: The Dice/Jaccard coefficients were 0.81±0.01/0.71±0.01 and 0.80±0.02/0.70±0.03 for the centralized and federated learning algorithms, respectively. There was no statistically significant difference between the centralized and federated learning algorithms. All images resulted in relative errors of less than 5% for SUVmax and SUVmean for both federated and centralized techniques.
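The Dice and Jaccard coefficients reported above are standard overlap metrics between a predicted mask P and a ground-truth mask G. A minimal sketch with binary NumPy masks (the toy masks below are illustrative, not study data):

```python
import numpy as np

def dice(pred, gt):
    # Dice = 2|P ∩ G| / (|P| + |G|)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def jaccard(pred, gt):
    # Jaccard = |P ∩ G| / |P ∪ G|
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

# Toy 2D masks: an 8x8 predicted square vs. a ground truth shifted by 1 pixel.
pred = np.zeros((16, 16), dtype=bool); pred[4:12, 4:12] = True
gt   = np.zeros((16, 16), dtype=bool); gt[5:13, 5:13] = True
d, j = dice(pred, gt), jaccard(pred, gt)
```

The two metrics are monotonically related (Jaccard = Dice / (2 − Dice)), which is why the centralized and federated models rank the same way under both.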
Conclusions: The developed federated learning-based algorithm exhibited promising performance for head and neck tumor segmentation in PET/CT images and matched the performance of the centralized deep learning model. The proposed approach provides a generalizable PET/CT segmentation model while leveraging large heterogeneous datasets across different centers without sharing patient data.