TY  - JOUR
T1  - Attenuation Correction of PET/MR Using Cycle-Consistent Adversarial Network
JF  - Journal of Nuclear Medicine
JO  - J Nucl Med
SP  - 171
LP  - 171
VL  - 60
IS  - supplement 1
AU  - Gong, Kuang
AU  - Yang, Jaewon
AU  - Kim, Kyungsang
AU  - El Fakhri, Georges
AU  - Seo, Youngho
AU  - Li, Quanzheng
Y1  - 2019/05/01
UR  - http://jnm.snmjournals.org/content/60/supplement_1/171.abstract
N2  - Objectives: To obtain meaningful quantitative results from PET images, attenuation correction is necessary during image reconstruction. For PET/MR hybrid systems, attenuation correction remains challenging because MR images do not directly reflect attenuation coefficients. We previously developed a convolutional neural network (CNN) method to derive continuous attenuation coefficients for brain PET/MR imaging from Dixon MR images. One requirement of that CNN approach is paired MR and CT images from the same patient, which are difficult to acquire in the large quantities needed for training. In this work, we present a cycle-consistent adversarial network (CycleGAN)-based attenuation correction method for brain PET/MR imaging in which the training MR and CT images need not be paired. Methods: Forty patients who underwent a whole-body PET/CT scan followed by an additional PET/MRI scan, without a second tracer administration, were included in this study. Dixon MR images were acquired during the PET/MRI scan with an average scan duration of 224.6 ± 133.7 s (range, 135-900 s). The average administered FDG dose was 305.2 ± 73.9 MBq (range, 170.2-468.1 MBq). When preparing the network training images, no registration was performed between the CT and MR images, and random rotation and permutation were applied to the training MR and CT images independently. Quantitative analysis was based on five-fold cross-validation. The schematic plot of CycleGAN is shown in Fig. 1(A).
It consists of four networks: two generative networks that generate CT from MR or MR from CT, and two discriminative networks that determine whether the generated pseudo-CT/MR images are similar to true CT/MR images. The final objective function consists of (1) a discriminative loss, which evaluates the quality of the generated pseudo-CT/MR images, and (2) a cycle-consistency loss, which measures whether the generated pseudo-CT/MR images can be translated back to the original MR/CT images. To make full use of axial information and reduce axial aliasing artifacts, five neighboring axial slices were stacked as five input channels. The network was implemented in TensorFlow 1.6. The batch size was set to 10 and 500 epochs were run. The segmentation and atlas methods based on Dixon MR images were adopted as baseline methods. Regional as well as global relative PET errors were used to compare the performance of the different methods. Results: The CycleGAN method can generate pseudo-CT images with more structural detail (as presented in Fig. 1(B)). Quantitative analysis (as presented in Table 1) shows that CycleGAN achieves a smaller whole-brain error (2.62% ± 1.35%) than the segmentation (6.79% ± 3.44%) and atlas (4.36% ± 1.85%) methods. Regional analysis of the cerebellum, brain lobes, and inner regions shown in Table 1, as well as the histogram analysis shown in Fig. 1(D), also demonstrates the improvements of CycleGAN over traditional segmentation- and atlas-based methods. Conclusion: We have proposed a cycle-consistent adversarial network based method to generate continuous attenuation maps for brain PET/MR imaging from Dixon MR images. Analysis using real data sets shows that the CycleGAN method provides a promising deep learning approach for attenuation correction of PET/MR in which no paired MR and CT training images are needed. Acknowledgements: This work was supported by NIH grants R01 AG052653 and P41 EB022544.
N1  - Table 1: Relative PET error for different regions.
ER  - 
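For readers less familiar with the two-generator, two-discriminator objective the abstract describes, the sketch below illustrates the CycleGAN loss terms with a minimal NumPy implementation. It assumes the standard CycleGAN formulation (a least-squares adversarial loss plus an L1 cycle-consistency loss with weight `lam=10.0`); the identity "generators" and constant "discriminators" in the demo are hypothetical stand-ins, not the convolutional networks used in the study. A short demo of stacking five neighboring axial slices as input channels is included at the end.

```python
import numpy as np

def lsgan_loss(d_real, d_fake):
    """Least-squares adversarial loss for one discriminator:
    real outputs pushed toward 1, fake outputs toward 0."""
    return np.mean((d_real - 1.0) ** 2) + np.mean(d_fake ** 2)

def cycle_loss(x, x_reconstructed):
    """L1 cycle-consistency loss: MR -> pseudo-CT -> MR should
    return (approximately) the original image, and vice versa."""
    return np.mean(np.abs(x - x_reconstructed))

def cyclegan_objective(mr, ct, G_mr2ct, G_ct2mr, D_ct, D_mr, lam=10.0):
    """Total objective: adversarial losses in both translation
    directions plus weighted cycle-consistency in both directions."""
    fake_ct = G_mr2ct(mr)        # pseudo-CT generated from MR
    fake_mr = G_ct2mr(ct)        # pseudo-MR generated from CT
    rec_mr = G_ct2mr(fake_ct)    # MR -> pseudo-CT -> reconstructed MR
    rec_ct = G_mr2ct(fake_mr)    # CT -> pseudo-MR -> reconstructed CT
    adv = (lsgan_loss(D_ct(ct), D_ct(fake_ct))
           + lsgan_loss(D_mr(mr), D_mr(fake_mr)))
    cyc = cycle_loss(mr, rec_mr) + cycle_loss(ct, rec_ct)
    return adv + lam * cyc

# Toy demo: identity generators give zero cycle loss; constant 0.5
# discriminator outputs leave only the adversarial terms.
rng = np.random.default_rng(0)
mr = rng.random((4, 4))
ct = rng.random((4, 4))
identity = lambda x: x
d_half = lambda x: np.full(x.shape, 0.5)
loss = cyclegan_objective(mr, ct, identity, identity, d_half, d_half)
print(loss)  # → 1.0 (two lsgan terms of 0.5 each, zero cycle loss)

# Stacking five neighboring axial slices as input channels,
# as described in the abstract:
volume = rng.random((64, 128, 128))   # (axial slices, H, W)
i = 10
stacked = volume[i - 2:i + 3]         # shape (5, H, W): 5-channel input
```

In the actual study the generators and discriminators are deep convolutional networks trained jointly; this sketch only makes explicit how the discriminative and cycle-consistency terms combine into one objective.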