TY - JOUR
T1 - Attenuation Correction for Amyloid PET Imaging Using Deep Learning Based on 3D UTE/Multi-Echo Dixon MR Images
JF - Journal of Nuclear Medicine
JO - J Nucl Med
SP - 111
LP - 111
VL - 61
IS - supplement 1
AU - Kuang Gong
AU - Paul Han
AU - Keith Johnson
AU - Georges El Fakhri
AU - Chao Ma
AU - Quanzheng Li
Y1 - 2020/05/01
UR - http://jnm.snmjournals.org/content/61/supplement_1/111.abstract
N2 - Objectives: The cortical uptake measured by amyloid PET imaging is a powerful biomarker for the diagnosis and treatment monitoring of Alzheimer's disease (AD). Because the cortical regions lie close to bone, the quantitative accuracy of amyloid PET can be strongly affected by inaccurate attenuation correction (AC). This is a particular concern when amyloid imaging is acquired on PET/MR systems, where attenuation maps cannot be obtained from MR images through a simple transform. We previously developed a 3D mUTE sequence for PET/MR AC, which combines 3D UTE with multi-echo Dixon acquisitions and has been shown to better resolve bone and air regions. In this work, the mUTE sequence and a convolutional neural network (CNN) were integrated, denoted CNN-mUTE, to derive accurate and continuous attenuation maps for amyloid PET imaging.
Methods: Thirty-five subjects were scanned with approval from the local IRB. MR acquisitions were performed on a 3T MR scanner using the in-house-developed 3D mUTE sequence, with seven echo times (TE = 70, 2110, 2310, 3550, 3750, 4990, and 5190 µs), a voxel size of 1.875 × 1.875 × 1.875 mm³, and an FOV of 240 × 240 × 240 mm³. A separate ¹¹C-PiB PET/CT scan (15 mCi injection followed by a 70-min dynamic acquisition) was performed for the same subjects on a whole-body PET/CT scanner. Only data from 40-50 min were used in this work to quantify the different AC methods. CT images acquired during the PET/CT scan were used as the training labels. For the CNN method, a 3D U-Net was employed as the network architecture, and the L1 difference between the ground-truth CT and the generated CT was used as the training loss. When preparing the network training pairs, we first registered the mUTE images to the CT images through rigid transformation using the ANTs software. Random rotation and permutation were applied to the training pairs to reduce over-fitting. The network input has seven channels, as each mUTE acquisition produces seven 3D MR images. Quantitative analysis was based on five-fold cross-validation. The network was implemented in TensorFlow 1.12; the training batch size was set to 1 and 500 epochs were run. The atlas method based on Dixon MR images (Atlas-Dixon) was adopted as the baseline. Regional as well as global relative PET errors were used to compare the performance of the methods (illustrative sketches of the training setup and of these metrics follow the record below).
Results: The CNN-mUTE method generates pseudo-CT images with more structural detail (Fig. 1(a)). Inhomogeneity inside the bone region can be observed, owing to the signal present in the mUTE MR images and the strong approximation power of the CNN. Quantitative analysis in Table 1 shows that CNN-mUTE achieves a Dice coefficient of 0.92 for the bone region above the eye. Surface maps of the PET error images in Fig. 1(b) demonstrate that CNN-mUTE produces smaller errors in the cortical regions. Regional analysis of the cerebellum and brain lobes in Fig. 1(c) and the histogram analysis in Fig. 1(d) also demonstrate the improvement of CNN-mUTE over the Atlas-Dixon method.
Conclusion: We have combined the mUTE sequence and the CNN method to generate accurate attenuation maps for amyloid PET imaging. Analysis based on real data sets shows that the CNN-mUTE method has the potential to enable accurate PET/MR amyloid imaging.
Acknowledgements: This work was supported by NIH grants R01 AG052653 and P41 EB022544.
[Table 1 caption: Comparisons of the generated pseudo-CT images based on 35 datasets. The Dice index of bone regions w…]
ER -
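
Below the record, a minimal Python sketch of the training setup described in the Methods: a small 3D U-Net that maps the seven-channel mUTE MR input to a pseudo-CT volume, trained with an L1 (mean-absolute-error) loss and batch size 1. This is not the authors' implementation; the layer widths, network depth, function names, and use of the tf.keras API are illustrative assumptions (the abstract specifies only TensorFlow 1.12, a 3D U-Net, seven input channels, the L1 loss, batch size 1, and 500 epochs).

# Illustrative sketch only: a simplified 3D U-Net for mUTE-to-pseudo-CT mapping.
# Layer widths, depth, and names are assumptions, not the authors' configuration.
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # Two 3x3x3 convolutions with ReLU, as in a standard U-Net stage.
    x = layers.Conv3D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv3D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet3d(in_channels=7, base_filters=16):
    # Input: a 3D volume with one channel per mUTE echo image (seven echoes).
    inputs = layers.Input(shape=(None, None, None, in_channels))

    # Encoder
    e1 = conv_block(inputs, base_filters)
    p1 = layers.MaxPooling3D(2)(e1)
    e2 = conv_block(p1, base_filters * 2)
    p2 = layers.MaxPooling3D(2)(e2)

    # Bottleneck
    b = conv_block(p2, base_filters * 4)

    # Decoder with skip connections
    u2 = layers.Conv3DTranspose(base_filters * 2, 2, strides=2, padding="same")(b)
    d2 = conv_block(layers.concatenate([u2, e2]), base_filters * 2)
    u1 = layers.Conv3DTranspose(base_filters, 2, strides=2, padding="same")(d2)
    d1 = conv_block(layers.concatenate([u1, e1]), base_filters)

    # Single-channel output: the generated pseudo-CT.
    outputs = layers.Conv3D(1, 1, padding="same")(d1)
    return Model(inputs, outputs)

model = build_unet3d()
# L1 loss between generated and ground-truth CT, as stated in the abstract
# ("mae" is Keras' mean absolute error).
model.compile(optimizer="adam", loss="mae")
# model.fit(mute_volumes, ct_volumes, batch_size=1, epochs=500)  # hypothetical arrays

Training on whole volumes with batch size 1, as stated in the abstract, is a common way to keep GPU memory manageable for 3D inputs.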
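
A second sketch covers the evaluation quantities named in the abstract: the Dice coefficient of the bone region between the generated and ground-truth CT, and the relative PET error of a PET image reconstructed with a given attenuation map against the CT-AC reference. The bone threshold, brain mask, and array names are hypothetical choices for illustration; the abstract does not specify them.

# Illustrative sketch of the evaluation metrics; thresholds and names are assumptions.
import numpy as np

def dice_coefficient(pseudo_ct, reference_ct, bone_threshold_hu=300.0):
    # Dice overlap of bone masks obtained by thresholding the CT volumes (in HU).
    pred = pseudo_ct >= bone_threshold_hu
    ref = reference_ct >= bone_threshold_hu
    intersection = np.logical_and(pred, ref).sum()
    return 2.0 * intersection / (pred.sum() + ref.sum())

def relative_pet_error(pet_test_ac, pet_ct_ac, mask):
    # Voxel-wise relative error (%) of PET reconstructed with the test AC map
    # versus the CT-based AC reference, evaluated inside a brain mask.
    diff = pet_test_ac[mask] - pet_ct_ac[mask]
    return 100.0 * diff / pet_ct_ac[mask]

# Example with hypothetical arrays:
# dice = dice_coefficient(pseudo_ct_volume, ct_volume)
# err = relative_pet_error(pet_cnn_mute, pet_ct, brain_mask)
# print(f"Bone Dice = {dice:.2f}, mean relative PET error = {err.mean():.2f}%")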