Abstract
Introduction: The advancement of ultrasensitive, high-resolution, total-body (TB) PET/CT with an extended field of view has significantly broadened the scope of dynamic PET applications. Nevertheless, the mismatch in temporal resolution between PET and CT poses a challenge that can compromise quantitative accuracy, and kinetic-modeling errors are further exacerbated by organ segmentations derived from CT. Our objective is to leverage the enhanced anatomical detail provided by TB PET to perform attenuation correction (AC) and scatter correction (SC), coupled with frame-by-frame multi-organ segmentation, thereby mitigating the impact of the temporal-resolution disparity and improving the precision of quantitative analyses in dynamic PET imaging.
Methods: Deep learning algorithms were developed using static TB PET images from a cohort of 430 patients, all scanned on the United Imaging uExplorer system. The algorithms' efficacy was evaluated on three dynamic TB PET scans, each comprising 92 frames, as depicted in Table 1. First, a 3D U-Net was trained within the nnU-Net framework [1] to perform multi-organ segmentation from non-attenuation- and non-scatter-corrected PET images; ground-truth segmentation maps were generated by applying TotalSegmentator [2] to the corresponding CT images. Second, a dedicated decomposition-based network [3] was trained to perform attenuation and scatter correction. For the dynamic data, organ segmentations were first predicted for each non-corrected frame with the segmentation network, and the attenuation and scatter correction network was then applied to the frames. The algorithms' outputs were compared against segmentation labels manually refined by two physicians. For each frame, Dice coefficients were calculated to assess the agreement between the predicted dynamic organ segmentations and the CT-based organ segmentations. Attenuation- and scatter-corrected PET images were visually assessed against CT-based corrected PET by two board-certified nuclear medicine physicians.
Table 1
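The per-organ, per-frame Dice evaluation described in the Methods can be sketched as follows. This is a minimal illustration only, not the study's actual code: the `organ_labels` mapping, the use of integer label volumes, and the function names are assumptions.

```python
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice coefficient between two binary masks."""
    inter = np.logical_and(pred, ref).sum()
    total = pred.sum() + ref.sum()
    # Convention: two empty masks count as perfect agreement.
    return 2.0 * inter / total if total > 0 else 1.0

def evaluate_frames(pred_frames, ref_mask, organ_labels):
    """Mean Dice per organ across dynamic frames.

    pred_frames  : list of integer label volumes, one per dynamic frame
                   (e.g. predicted by the segmentation network)
    ref_mask     : reference integer label volume (e.g. CT-based labels)
    organ_labels : dict mapping organ name -> integer label (assumed layout)
    """
    scores = {name: [] for name in organ_labels}
    for frame in pred_frames:
        for name, lbl in organ_labels.items():
            scores[name].append(dice(frame == lbl, ref_mask == lbl))
    return {name: float(np.mean(s)) for name, s in scores.items()}
```

Averaging the returned per-organ means over all eight organs would yield a single summary Dice of the kind reported in the Results.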
Results: The trained model achieved an average Dice coefficient of 0.96 across all eight organs and all dynamic frames, compared with 0.77 when the CT-based segmentation maps were applied directly to the dynamic frames. Table 2 lists the Dice coefficients for each of the eight organs under both approaches. Visual assessments of the segmentation and of AC and SC are presented in Figures 1 and 2, respectively.
Table 2
Conclusions: The developed deep learning method holds promise for CT-free multi-organ segmentation, AC, and SC in dynamic TB PET scans. Its potential to improve accuracy and efficiency could broaden the application scope of dynamic PET imaging.
[1] Isensee, Fabian et al. "nnU-Net ..." Nature Methods 18 (2020).
[2] Wasserthal, Jakob et al. "TotalSegmentator ..." Radiology: Artificial Intelligence 5(5) (2022).
[3] Guo, Rui et al. "Using domain ..." Nature Communications 13 (2022).