TY - JOUR
T1 - Attenuation and Scatter Correction for Whole-body PET Using 3D Generative Adversarial Networks
JF - Journal of Nuclear Medicine
JO - J Nucl Med
SP - 174
LP - 174
VL - 60
IS - supplement 1
AU - Xiaofeng Yang
AU - Yang Lei
AU - Xue Dong
AU - Tonghe Wang
AU - Kristin Higgins
AU - Tian Liu
AU - Hyunsuk Shim
AU - Walter Curran
AU - Hui Mao
AU - Jonathon Nye
Y1 - 2019/05/01
UR - http://jnm.snmjournals.org/content/60/supplement_1/174.abstract
N2 - Objectives: Deriving accurate structural maps for attenuation and scatter correction of whole-body PET remains challenging. Common problems include truncation, inter-scan motion, and erroneous transformation of structural voxel intensities to PET mu-map values (e.g., modality artifacts, implanted devices, or contrast agents). This work presents a deep-learning-based method to derive the non-linear difference between PET images with and without attenuation and scatter correction (NAC) for whole-body PET imaging. Methods: We propose to integrate residual-block minimization into a 3D cycle-consistent generative adversarial network (cycle GAN) framework to synthesize attenuation- and scatter-corrected (AC) PET from NAC PET without the use of structural information. The method learns a transformation that minimizes the difference between the synthetic AC PET, generated from NAC PET, and the real AC PET images. It also learns an inverse transformation such that the cycle NAC PET image (the inverse of the synthetic estimate), generated from the synthetic AC PET, is close to the real NAC PET image. To optimize the matching of the synthetic and cycle datasets to their respective real datasets, each transformation is implemented by a generator network whose output is judged by a discriminator. Training the generators takes into account the estimation errors between the synthetic and real datasets, the errors between the cycle and real datasets, and the discriminator feedback. NAC PET images share similar anatomical structure with AC PET images but lack contrast information; therefore, residual blocks were integrated into the generator to capture the difference between the paired NAC and AC PET training images. After training, patches from a new NAC PET image were fed into the trained transformation model to generate synthetic AC PET patches, and the synthetic AC PET image was then reconstructed through a patch fusion process. We conducted a retrospective study with whole-body PET/CT data from 30 subjects to derive a synthetic AC PET dataset (self-AC) and compared it with CT-based AC PET (CT-PET). The method was evaluated with leave-one-out cross-validation. Standardized uptake value (SUV) differences between the self-AC PET and CT-PET images were quantified using the mean absolute error (MAE) and mean error (ME). Results: Comparing self-AC PET with CT-PET, the average ME and MAE over the whole-body region were 2.49±7.98% and 16.55±4.43% across the 30 patient datasets. In selected regions of normal physiologic uptake (brain, lung, heart, left kidney, right kidney, liver, and bladder), the average MEs of the SUV values were -0.66±0.59%, -6.71±5.50%, -1.81±3.11%, -0.68±5.96%, 1.41±6.98%, -3.29±4.58%, and -1.97±6.92%, and the average MAEs were 10.91±2.37%, 23.11±7.27%, 15.16±4.21%, 17.32±7.69%, 16.98±8.18%, 15.27±7.51%, and 18.17±9.98%, respectively. Lesion-based ME and MAE were 0.74±2.17% and 13.06±4.97% for the proposed method, and there were no significant ME differences between the self-AC PET and CT-PET. Conclusion: We proposed a novel deep-learning-based approach that automatically corrects whole-body PET for attenuation and scatter from NAC PET alone, effectively capturing the non-linear relationship between the NAC and CT-PET images for self-attenuation and scatter correction. The method is applicable to PET data collected on any hybrid platform (PET/CT or PET/MRI) and demonstrates excellent accuracy, with mean errors well below the accepted test/retest reliability of PET, indicating a high degree of quantitative equivalence.
ER -
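
The abstract describes a 3D cycle GAN in which a forward generator maps NAC PET patches to AC PET patches, an inverse generator maps them back, discriminators judge both outputs, and residual blocks carry the NAC-to-AC difference. Below is a minimal PyTorch sketch of that kind of training objective, assuming hypothetical network sizes, loss weights, and a 32-voxel cubic patch; it illustrates the general technique and is not the authors' implementation.

```python
# Minimal 3D cycle GAN sketch for NAC -> AC PET patch translation (PyTorch).
# Hypothetical illustration: layer sizes, loss weights, and patch size are assumptions.
import torch
import torch.nn as nn

class ResidualBlock3D(nn.Module):
    """3D residual block: lets the generator model the NAC-to-AC difference."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels, 3, padding=1),
            nn.InstanceNorm3d(channels),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, 3, padding=1),
            nn.InstanceNorm3d(channels),
        )

    def forward(self, x):
        return x + self.body(x)  # residual connection

class Generator3D(nn.Module):
    """Encoder -> residual blocks -> decoder, mapping one PET patch to another."""
    def __init__(self, channels=32, n_res=4):
        super().__init__()
        self.encode = nn.Sequential(nn.Conv3d(1, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.res = nn.Sequential(*[ResidualBlock3D(channels) for _ in range(n_res)])
        self.decode = nn.Conv3d(channels, 1, 3, padding=1)

    def forward(self, x):
        return self.decode(self.res(self.encode(x)))

class Discriminator3D(nn.Module):
    """Patch discriminator judging whether an AC (or NAC) patch looks real."""
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, channels, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(channels, channels * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(channels * 2, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

# Forward/inverse generators and their discriminators.
G_nac2ac, G_ac2nac = Generator3D(), Generator3D()
D_ac, D_nac = Discriminator3D(), Discriminator3D()
l1, mse = nn.L1Loss(), nn.MSELoss()

def generator_loss(nac_patch, ac_patch, lambda_cycle=10.0):
    """Adversarial + synthetic-vs-real + cycle-vs-real terms, as outlined in the abstract."""
    syn_ac = G_nac2ac(nac_patch)      # synthetic AC from NAC
    syn_nac = G_ac2nac(ac_patch)      # synthetic NAC from AC
    cyc_nac = G_ac2nac(syn_ac)        # cycle NAC (should match the real NAC)
    cyc_ac = G_nac2ac(syn_nac)        # cycle AC (should match the real AC)
    pred_ac, pred_nac = D_ac(syn_ac), D_nac(syn_nac)
    adv = mse(pred_ac, torch.ones_like(pred_ac)) + mse(pred_nac, torch.ones_like(pred_nac))
    paired = l1(syn_ac, ac_patch) + l1(syn_nac, nac_patch)   # synthetic vs. real
    cycle = l1(cyc_nac, nac_patch) + l1(cyc_ac, ac_patch)    # cycle vs. real
    return adv + paired + lambda_cycle * cycle

# Smoke test on a random 32^3 patch pair.
nac = torch.randn(1, 1, 32, 32, 32)
ac = torch.randn(1, 1, 32, 32, 32)
print(generator_loss(nac, ac).item())
```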
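
The abstract also states that synthetic AC patches from a new NAC image are reassembled through a patch fusion process. One common fusion scheme is overlap averaging with a sliding window, sketched below; the patch size, stride, and averaging are assumptions, not details given in the abstract.

```python
# Hypothetical patch fusion sketch: overlapping 3D patches are pushed through a trained
# NAC->AC predictor and averaged back into a whole-body volume.
import numpy as np

def fuse_patches(nac_volume, predict_patch, patch=64, stride=32):
    """Slide a cubic window over the volume, predict each patch, and average the overlaps.
    Assumes the stride tiles the volume completely (dimensions compatible with patch/stride)."""
    out = np.zeros_like(nac_volume, dtype=float)
    weight = np.zeros_like(nac_volume, dtype=float)
    zs, ys, xs = nac_volume.shape
    for z in range(0, zs - patch + 1, stride):
        for y in range(0, ys - patch + 1, stride):
            for x in range(0, xs - patch + 1, stride):
                sl = (slice(z, z + patch), slice(y, y + patch), slice(x, x + patch))
                out[sl] += predict_patch(nac_volume[sl])
                weight[sl] += 1.0
    return out / np.maximum(weight, 1e-8)

# Example with an identity "predictor" standing in for the trained generator.
vol = np.random.rand(96, 96, 96)
fused = fuse_patches(vol, lambda p: p, patch=64, stride=32)
print(np.allclose(fused, vol))   # overlap-averaged identity reproduces the input
```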
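
Finally, the evaluation reports mean error (ME) and mean absolute error (MAE) of SUV as percentages relative to CT-based AC PET, over the whole body and within organ regions. A small NumPy sketch of one plausible reading of those metrics follows; the abstract does not give the exact definitions, so the relative-percentage formulation here is an assumption.

```python
# Hypothetical evaluation sketch: ME% and MAE% of self-AC SUV relative to CT-based AC SUV
# within a region mask. Metric definitions are assumptions consistent with the abstract.
import numpy as np

def suv_me_mae(self_ac_suv, ct_ac_suv, mask=None, eps=1e-6):
    """Return (ME%, MAE%) of self-AC SUV vs. CT-based AC SUV over the masked voxels."""
    self_ac_suv = np.asarray(self_ac_suv, dtype=float)
    ct_ac_suv = np.asarray(ct_ac_suv, dtype=float)
    if mask is None:
        mask = np.ones_like(ct_ac_suv, dtype=bool)
    diff = self_ac_suv[mask] - ct_ac_suv[mask]
    ref = np.clip(ct_ac_suv[mask], eps, None)   # avoid division by near-zero uptake
    me = 100.0 * np.mean(diff / ref)
    mae = 100.0 * np.mean(np.abs(diff) / ref)
    return me, mae

# Example with random volumes and a crude cubic "organ" mask.
rng = np.random.default_rng(0)
ct_pet = rng.uniform(0.5, 5.0, size=(64, 64, 64))
self_ac = ct_pet * rng.normal(1.0, 0.05, size=ct_pet.shape)   # ~5% voxel-wise deviation
organ_mask = np.zeros_like(ct_pet, dtype=bool)
organ_mask[20:40, 20:40, 20:40] = True
print(suv_me_mae(self_ac, ct_pet, organ_mask))
```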