Abstract
Objectives: Respiratory motion is one of the main sources of motion artifacts in positron emission tomography (PET) imaging. The emission image and patient motion can be estimated simultaneously from respiratory-gated data through a joint estimation framework. However, conventional motion estimation methods based on the registration of pairs of images are sensitive to noise. The goal of this study is to develop a robust joint estimation method that incorporates a deep learning (DL)-based image registration approach for motion estimation.
Methods: In this work, we use our previously proposed unsupervised deep neural network to estimate deformation fields for respiratory-gated images. We propose a joint estimation algorithm by incorporating the learned image registration network into a regularized PET image reconstruction framework. The joint estimation is formulated as a constrained optimization problem in which the moving gated images are related to a fixed image via the deep neural network. The constrained optimization problem is solved by the alternating direction method of multipliers (ADMM) algorithm. Each iteration of the algorithm consists of three separable steps: gated image reconstruction by a maximum a posteriori expectation maximization (MAP-EM) update, motion estimation by regularized DL-based image registration, and regularized image fusion with motion compensation. The effectiveness of the algorithm was demonstrated using simulated and real data. We compared the proposed DL-ADMM joint estimation algorithm with a monotonic iterative joint estimation method using optimization transfer with the EM surrogate function. Motion-compensated reconstructions using deformation fields pre-calculated by DL-based (DL-MC recon) and iterative (iterative-MC recon) image registration were also included for comparison.
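The three-step ADMM iteration described above can be sketched in miniature. The toy below is an illustrative assumption, not the paper's implementation: it uses 1-D "images", integer circular shifts in place of learned deformation fields, an exhaustive-search `register` stub in place of the DL registration network, and a quadratic data term in place of the Poisson MAP-EM update. All function and variable names (`register`, `warp`, `rho`, etc.) are hypothetical.

```python
import numpy as np

# Toy consensus-ADMM sketch of the three-step iteration:
#   (1) per-gate reconstruction (quadratic proxy for MAP-EM),
#   (2) motion estimation (shift search proxy for the DL network),
#   (3) motion-compensated fusion of all gates.
rng = np.random.default_rng(0)
n_gates, n_pix = 4, 32

true_image = np.exp(-((np.arange(n_pix) - 16) ** 2) / 20.0)
shifts = np.array([0, -3, 2, 3])                     # per-gate "respiratory" shift
gated_data = np.stack([np.roll(true_image, s) for s in shifts])
gated_data += 0.05 * rng.standard_normal(gated_data.shape)

ungated = gated_data.mean(axis=0)                    # motion-blurred baseline
ungated_nrmse = np.linalg.norm(ungated - true_image) / np.linalg.norm(true_image)

def register(moving, fixed):
    """Stand-in for the registration network: exhaustive integer-shift search."""
    cands = np.arange(-4, 5)
    errs = [np.sum((np.roll(moving, -s) - fixed) ** 2) for s in cands]
    return int(cands[int(np.argmin(errs))])

def warp(img, shift):
    """Deform a gated image into the fixed (reference) frame."""
    return np.roll(img, -shift)

x = gated_data.copy()          # per-gate reconstructions
z = ungated.copy()             # fused (fixed) image
u = np.zeros_like(x)           # scaled dual variables
rho = 1.0

for _ in range(20):
    # Step 2: motion estimation, anchored so gate 0 is the fixed reference.
    est = np.array([register(x[g], z) for g in range(n_gates)])
    est -= est[0]
    for g in range(n_gates):
        # Step 1: gated "reconstruction", penalized toward the warped fused image.
        target = np.roll(z, est[g])                  # z deformed into gate g
        x[g] = (gated_data[g] + rho * (target - u[g])) / (1.0 + rho)
    # Step 3: motion-compensated fusion across gates.
    z = np.mean([warp(x[g] + u[g], est[g]) for g in range(n_gates)], axis=0)
    for g in range(n_gates):
        u[g] += x[g] - np.roll(z, est[g])

nrmse = np.linalg.norm(z - true_image) / np.linalg.norm(true_image)
```

In this toy setting the fused image `z` recovers the sharp profile and its NRMSE falls below that of the motion-blurred ungated average, mirroring the role of motion-compensated fusion in the full algorithm.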
Results: Our simulation study shows that the proposed DL-ADMM joint estimation method can generate images with sharper boundaries in the myocardium region than the other methods. The resulting normalized root mean square error (NRMSE) values calculated at a matched noise level are 22.74%, 33.32%, 26.01% and 34.37% for the DL-ADMM joint estimation, iterative joint estimation, DL-MC recon and iterative-MC recon, respectively, compared with 44.94% for the ungated reconstruction. The proposed DL-ADMM based motion correction reduces the bias relative to the ungated image without increasing the noise level and outperforms the other methods. In the real data study, our proposed method also provides higher lesion contrast and a sharper liver boundary than the ungated image, with lower noise than the reference gated image. The contrast of the proposed method is also higher than that of the other motion correction methods at any matched noise level.
Conclusions: In this work, we proposed a joint estimation framework that incorporates deep learning-based image registration for motion estimation with guaranteed convergence. We validated the proposed method using simulated and clinical data and showed its ability to reduce motion artifacts while utilizing all of the gated PET data.