Abstract
Introduction: Long axial field-of-view (LAFOV) PET systems offer great promise for total-body parametric imaging. However, the estimation of parametric maps may be critically limited by subject motion, which implies a need for accurate total-body motion compensation. As an extension of our previous work [1], we employ conditional generative adversarial networks (cGANs) to generate PET navigators from LAFOV datasets and demonstrate the feasibility of deriving usable motion vectors for subsequent motion correction.
Methods: Fifteen healthy volunteers (26-78 years, weight 53-112 kg) underwent total-body PET examinations on the uEXPLORER PET/CT system after giving informed consent. Subjects were asked to limit voluntary movement throughout the entire scan duration. A 60-min PET list-mode acquisition was initiated with the intravenous injection of [18F]FDG (372 ± 17 MBq). PET list-mode data were rebinned into a dynamic frame sequence (30 × 2 s, 12 × 10 s, 6 × 30 s, 12 × 120 s, 6 × 300 s). The rebinned data were reconstructed into a 150 × 150 × 486 matrix (voxel size 4 × 4 × 4 mm³) using 3D TOF OSEM with all corrections except PSF modeling. Based on our previous study, frames that do not contain sufficient structural information (in our case, < 30-32 s post-injection (p.i.)) were omitted from the training process. The full dataset was randomly split 70%-to-30% into a cGAN training set (10 scans) and a test set (5 scans). Realistic data augmentation (rotation, translation, shearing, brightness changes, and additive noise) was performed to enlarge the training dataset. Based on the assumption that motion within the last 5 minutes of acquisition was minimal, cGAN training was carried out between the last high-count PET frame (55-60 min p.i.) and all earlier frames (0.5-55 min p.i.). The trained cGAN models were subsequently applied to the test datasets to obtain synthetically generated high-count images. These artificial high-count images, generated from the early frames, can be regarded as PET navigators with a temporally invariant activity distribution, thereby facilitating motion correction using standard normalized mutual information techniques. Random artificial motion (0-10 mm translation and 0-10° rotation along each of the three axes) was imposed on the test datasets to further investigate the potential of the cGAN approach to address inter-frame motion. Finally, for each subject, a mean image was created from the motion-imposed dynamic frames both with cGAN-aided motion correction and without motion correction. A visual comparison for a sample subject is shown in Figure 1.
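For concreteness, the framing schedule and the pairing of early frames with the final high-count frame can be sketched as follows. This is a minimal Python reconstruction under the assumptions stated above, not the study's actual code; the names FRAME_SCHEDULE and frame_intervals are illustrative.

```python
# Illustrative sketch (not the authors' implementation) of the dynamic
# framing and cGAN training-pair construction described in the text.

FRAME_SCHEDULE = [(30, 2), (12, 10), (6, 30), (12, 120), (6, 300)]  # (count, duration in s)

def frame_intervals(schedule):
    """Return (start, end) times in seconds p.i. for each dynamic frame."""
    t, intervals = 0, []
    for count, duration in schedule:
        for _ in range(count):
            intervals.append((t, t + duration))
            t += duration
    return intervals

intervals = frame_intervals(FRAME_SCHEDULE)   # 66 frames spanning 0-3600 s
reference = intervals[-1]                     # last high-count frame, 55-60 min p.i.
# Frames before ~30 s p.i. lack structural information and are omitted.
training_pairs = [(early, reference) for early in intervals[:-1] if early[0] >= 30]
```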
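The motion-simulation and navigator-registration step can likewise be sketched with SimpleITK. Note that SimpleITK exposes Mattes mutual information rather than the normalized mutual information metric used in the study, so the metric below is a stand-in; impose_random_rigid_motion and register_navigator are hypothetical helpers.

```python
# Hedged sketch: impose random rigid motion (<= 10 mm, <= 10 degrees per axis)
# on a test frame, then rigidly register the cGAN navigator to the reference.
import numpy as np
import SimpleITK as sitk

def impose_random_rigid_motion(image, rng):
    """Apply a random rigid transform (3-axis translation and rotation)."""
    t = sitk.Euler3DTransform()
    t.SetRotation(*(float(a) for a in np.deg2rad(rng.uniform(-10, 10, 3))))
    t.SetTranslation([float(v) for v in rng.uniform(-10, 10, 3)])  # mm
    return sitk.Resample(image, image, t, sitk.sitkLinear, 0.0), t

def register_navigator(fixed, moving):
    """Rigidly register a moving navigator to the fixed reference navigator."""
    fixed = sitk.Cast(fixed, sitk.sitkFloat32)
    moving = sitk.Cast(moving, sitk.sitkFloat32)
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0, minStep=1e-4,
                                                 numberOfIterations=200)
    reg.SetOptimizerScalesFromPhysicalShift()
    reg.SetInitialTransform(sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY))
    reg.SetInterpolator(sitk.sitkLinear)
    return reg.Execute(fixed, moving)  # rigid transform aligning moving to fixed
```

The recovered transform can then be used to resample the corresponding motion-imposed early frame before the mean image is formed.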
Results: Our data indicate visually acceptable quality of the PET navigators for frames starting from 30-32 s p.i. (Figure 2). The summed images obtained from frames that underwent cGAN-aided motion correction showed a clear improvement over the summed images without motion correction. As anticipated, motion correction improves image sharpness, and a clear improvement can be appreciated in the images processed using the cGAN methodology.
Conclusions: This study demonstrated the utility of cGANs to generate high-quality total-body PET navigators from low-count early images. The derived PET navigators can facilitate accurate correction of rigid-body motion. Future work will examine the application of the cGAN method to non-rigid motion correction.
References: [1] Lalith Kumar Shiyam Sundar, David Iommi, Otto Muzik, Zacharias Chalampalakis, Eva-Maria Klebermass, Marius Hienert, Lucas Rischka, Rupert Lanzenberger, Andreas Hahn, Ekaterina Pataraia, Tatjana Traub-Weidinger, and Thomas Beyer. Conditional Generative Adversarial Networks (cGANs) aided motion correction of dynamic 18F-FDG PET brain studies. The Journal of Nuclear Medicine. 2020.