%0 Journal Article %A Tiantian Li %A Xuezhu Zhang %A Zhaoheng Xie %A Hongcheng Shi %A Simon Cherry %A Ramsey Badawi %A Jinyi Qi %T Total-Body Parametric Reconstruction with Deep Learning-based Data-driven Motion Compensation %D 2021 %J Journal of Nuclear Medicine %P 60-60 %V 62 %N supplement 1 %X Objectives: The uEXPLORER PET/CT system, with its ultra-high sensitivity and total-body coverage, offers the potential for more accurate in vivo quantification of a wide range of physiological parameters. Motion during dynamic PET data acquisition can blur images and reduce quantitative accuracy. In this work, we apply our previously developed deep learning-based data-driven gating and motion compensation methods to improve the image quality of total-body parametric imaging, and we demonstrate data-driven motion-compensated direct parametric reconstruction using dynamic PET data acquired on the uEXPLORER scanner. Methods: We use our previously developed unsupervised deep neural networks (DNNs) to perform respiratory gating on total-body dynamic PET data and to estimate deformation fields for motion compensation. The proposed motion-compensated total-body parametric imaging was formulated within the maximum-likelihood expectation-maximization (ML-EM) framework, and an iterative optimization algorithm was derived using the principle of optimization transfer. Each iteration of the algorithm decouples into two steps: 1) image reconstruction with motion compensation via the ML-EM update, and 2) kinetic parameter estimation. In the motion-compensated reconstruction step, projection data acquired in all gates are combined to update the activity image using a set of pre-determined deformation fields. In the kinetic parameter estimation step, the time-activity curve at each voxel is modeled by the linear Patlak model, and the likelihood is maximized by an EM-like update.
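The motion-compensated ML-EM update in step 1 pools the gated data through the deformation fields. A minimal toy sketch in NumPy, assuming a 1D "image", a small dense system matrix, and circular integer shifts standing in for the DNN-estimated deformation fields (all dimensions, counts, and operators here are illustrative, not the study's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_bins = 16, 24
A = rng.uniform(25.0, 75.0, (n_bins, n_vox))   # toy system (projection) matrix
shifts = [0, 1, 2]                             # per-gate "deformations" (toy)

def warp(x, s):
    # Deform the reference-gate image into gate g (circular shift as a
    # stand-in for the DNN-estimated deformation field).
    return np.roll(x, s)

def warp_adj(x, s):
    # Adjoint of the warp: for a circular shift this is the inverse shift.
    return np.roll(x, -s)

# Simulate gated Poisson data from a smooth "true" activity image.
x_true = np.exp(-0.5 * ((np.arange(n_vox) - 8.0) / 3.0) ** 2)
y = [rng.poisson(A @ warp(x_true, s)) for s in shifts]

# Motion-compensated ML-EM: one sensitivity image pooled over all gates,
# then multiplicative updates that backproject each gate's data ratio
# through the adjoint warp into the reference-gate frame.
x = np.ones(n_vox)
sens = sum(warp_adj(A.T @ np.ones(n_bins), s) for s in shifts)
for _ in range(200):
    back = np.zeros(n_vox)
    for g, s in enumerate(shifts):
        ybar = A @ warp(x, s)                  # forward-project warped estimate
        back += warp_adj(A.T @ (y[g] / np.maximum(ybar, 1e-12)), s)
    x = x * back / sens
```

Because every gate contributes through its own warp, all counts are used while the estimate stays in the reference-gate geometry, which is the source of the noise advantage over reconstructing the reference gate alone.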
The proposed motion-compensated parametric reconstruction was evaluated on a 1-hr dynamic scan acquired on the uEXPLORER scanner after an intravenous injection of 256 MBq of 18F-FDG. The input function was extracted from a region over the descending aorta. The last 30 min of list-mode data were binned into 6 × 5-min frames, each with 5 respiratory gates. The gate with the highest count level, containing about 50% of the total events, was selected as the reference gate. The motion field between each gate and the reference gate was estimated using the DNN. Direct reconstructions of parametric images from the list-mode data with (direct-MC) and without (direct-ungated) motion compensation, and from the reference-gate data only (direct-ref), were obtained. For comparison, indirect Patlak analyses of the reconstructed frames with (indirect-MC) and without (indirect-ungated) motion compensation, and of the reference-gate reconstructions (indirect-ref), were also conducted. Results: Total-body parametric images showed that the proposed direct-MC method generates images with sharper boundaries in the myocardial region and liver dome than the direct-ungated method and has lower noise than the direct-ref method. The proposed method also showed substantial noise reduction compared with the indirect-MC method. Quantitatively, the direct-MC method provides a better contrast-versus-background-noise tradeoff than the other methods compared. Conclusions: We have proposed a data-driven motion-compensated direct parametric reconstruction framework incorporating deep learning-based respiratory gating and deformation field estimation. We validated the proposed method on a uEXPLORER dynamic dataset and showed its ability to reduce motion artifacts while utilizing all gated PET data. %U
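The indirect Patlak analyses used for comparison fit the linear Patlak model to reconstructed frame values voxel by voxel. A minimal sketch of that fit on a noiseless simulated time-activity curve (the frame times, input-function shape, and kinetic values below are illustrative only, not from the study):

```python
import numpy as np

# Toy frame mid-times over 60 min and a toy plasma input function Cp(t).
t = np.arange(0.5, 60.5, 1.0)                 # minutes
Cp = 10.0 * np.exp(-0.08 * t) + 1.0
intCp = np.cumsum(Cp) * 1.0                   # running integral of Cp (dt = 1 min)

# Simulate a tissue TAC that exactly follows the linear Patlak model:
# C(t) = Ki * integral(Cp) + Vb * Cp(t).
Ki_true, Vb_true = 0.05, 0.3
Ct = Ki_true * intCp + Vb_true * Cp

# Patlak transform: plotting y = C(t)/Cp(t) against x = integral(Cp)/Cp(t)
# gives a line whose slope is the net influx rate Ki and whose intercept
# is the distribution volume Vb. Fit only late frames, when the plot is
# linear (here the last 30 min, matching the data window in the abstract).
late = t >= 30.0
x = intCp[late] / Cp[late]
y = Ct[late] / Cp[late]
Ki_est, Vb_est = np.polyfit(x, y, 1)          # slope = Ki, intercept = Vb
```

The direct approach in the abstract embeds this same linear model inside the reconstruction instead of fitting it to already-reconstructed frames, which is what reduces the noise relative to the indirect variants.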