RT Journal Article
SR Electronic
T1 On the use of routine clinical MR images for synthetic CT generation in the head using a deep learning approach
JF Journal of Nuclear Medicine
JO J Nucl Med
FD Society of Nuclear Medicine
SP 1215
OP 1215
VO 60
IS supplement 1
A1 Haley Massa
A1 Tyler Bradshaw
A1 Samuel Hurley
A1 Steve Cho
A1 Alan McMillan
YR 2019
UL http://jnm.snmjournals.org/content/60/supplement_1/1215.abstract
AB Introduction: MRI is a flexible imaging modality that allows the generation of different depictions of anatomy through the application of specific pulse sequences that differentiate tissue properties such as T1 and T2. Recent work has shown that routine clinical MR images can be translated into synthetic CT (sCT) images for use in PET/MR attenuation correction using deep learning. What has yet to be evaluated is which MR sequence type is best suited for sCT synthesis. The goal of this research is to evaluate the capability of four different MR sequences to generate sCT images using deep learning.

Methods: IRB-approved retrospective clinical image data were obtained from patients who had a clinical MR and CT scan on the same day. MR sequence types included T1-weighted gradient echo (T1), T2-weighted fat-suppressed fast spin echo (T2-FatSat), post-contrast T1-weighted gradient echo (T1-Post), and fast spin echo T2-weighted fluid-attenuated inversion recovery (CUBE-FLAIR). Affine registration (via FSL FLIRT) was first used to co-register and resample each MR to a 1x1x1 mm template space, followed by affine registration of the CT to each MR. The deep learning model was a convolutional neural network encoder-decoder with skip connections and processing blocks analogous to the U-Net structure, except that Inception V3-inspired blocks were used in place of sequential convolution blocks. The model was implemented in Keras with a starting channel size of 32 and a depth of 4. Training was performed over 100 epochs with a batch size of 4 using the Adam optimizer and a mean squared error loss function. Training utilized 50 subjects, with 10 additional subjects used for validation. sCT accuracy was evaluated by comparing sCT images with actual CT images in 13 unique subjects using three measures: structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), and Dice coefficient, each considered for Whole Brain, Air (HU < -500), Soft Tissue (-500 < HU < 300), and Bone (HU > 500). Wilcoxon signed-rank tests (p < 0.05) were performed to compare each sCT image against the sCT image derived from T1-Post, the input previously demonstrated in a similar deep learning approach [Liu et al., Radiology 286(2), 2017]. (Illustrative code sketches of these processing, training, and evaluation steps follow the abstract.)

Results: CUBE-FLAIR had the best-performing PSNR values for Whole Brain (44.27 ± 0.097), Bone (44.57 ± 0.872), and Soft Tissue (35.90 ± 0.509). T1 had the best-performing Bone SSIM (0.880 ± 0.044) and Air PSNR (43.79 ± 1.551). T2-FatSat performed best for Air SSIM (0.285 ± 0.022) and Bone Dice coefficient (0.814 ± 0.051). Compared to T1-Post, several measures were statistically different. For CUBE-FLAIR: all measures of Bone, Soft Tissue PSNR and Dice coefficient, and Whole Brain PSNR. For T1: Whole Brain SSIM, Air PSNR and Dice coefficient, Bone SSIM and Dice coefficient, and Soft Tissue SSIM. For T2-FatSat: Whole Brain SSIM, Air PSNR and Dice coefficient, Bone Dice coefficient, and Soft Tissue SSIM.

Discussion/Conclusion: All input MR sequences produced feasible sCT images, with quantitative measures within reasonable ranges of current state-of-the-art sCT approaches.
Furthermore, no single type of MR input performed vastly better than the others, suggesting that all four of the routine clinical MR input sequences evaluated here are likely suitable for sCT generation and PET/MR attenuation correction. Given the similar performance in sCT generation, future work should evaluate performance for PET/MR attenuation correction in a prospective cohort.
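The Methods describe a two-stage affine registration with FSL FLIRT (each MR to a 1x1x1 mm template space, then the CT to the template-space MR). The sketch below shows one way this could be scripted from Python; the file names, the 1 mm template, and the 12-DOF/trilinear settings are assumptions, since the abstract does not report the exact FLIRT options used.

```python
# Sketch of the two-stage affine registration described in Methods, driving
# FSL FLIRT via subprocess. Requires FSL on the PATH; paths and options are
# assumptions, not the study's exact settings.
import subprocess

def flirt(args):
    """Run an FSL FLIRT command and raise if it fails."""
    subprocess.run(["flirt"] + args, check=True)

def register_pair(mr_path, ct_path, template_path, out_prefix):
    # 1) Affine registration of the MR to a template assumed to be defined
    #    on a 1x1x1 mm grid, so the output MR is resampled to that space.
    flirt(["-in", mr_path, "-ref", template_path,
           "-out", f"{out_prefix}_mr_template.nii.gz",
           "-omat", f"{out_prefix}_mr2template.mat",
           "-dof", "12", "-interp", "trilinear"])  # 12-DOF affine (assumed)

    # 2) Affine registration of the CT to the template-space MR so that
    #    MR/CT pairs are voxelwise aligned for training.
    flirt(["-in", ct_path, "-ref", f"{out_prefix}_mr_template.nii.gz",
           "-out", f"{out_prefix}_ct_in_mr.nii.gz",
           "-omat", f"{out_prefix}_ct2mr.mat",
           "-dof", "12", "-interp", "trilinear"])

register_pair("subject01_t1.nii.gz", "subject01_ct.nii.gz",
              "template_1mm.nii.gz", "subject01")
```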
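The network is described as a U-Net-like encoder-decoder (depth 4, 32 starting channels) with Inception V3-inspired blocks in place of sequential convolution blocks, implemented in Keras. A minimal sketch under those constraints follows; the exact branch composition of the blocks, 2D slice-wise processing, and a 256x256 single-channel input are assumptions rather than the authors' implementation.

```python
# Minimal Keras sketch of a U-Net-like encoder-decoder with Inception-style
# blocks (depth 4, 32 starting channels). Illustrative only.
import tensorflow as tf
from tensorflow.keras import layers, Model

def inception_block(x, filters):
    """Inception V3-inspired block: parallel 1x1, 3x3, factorized 5x5, and pooling paths."""
    b1 = layers.Conv2D(filters // 4, 1, padding="same", activation="relu")(x)
    b2 = layers.Conv2D(filters // 4, 1, padding="same", activation="relu")(x)
    b2 = layers.Conv2D(filters // 4, 3, padding="same", activation="relu")(b2)
    b3 = layers.Conv2D(filters // 4, 1, padding="same", activation="relu")(x)
    b3 = layers.Conv2D(filters // 4, 3, padding="same", activation="relu")(b3)
    b3 = layers.Conv2D(filters // 4, 3, padding="same", activation="relu")(b3)  # 5x5 as two 3x3
    b4 = layers.MaxPooling2D(3, strides=1, padding="same")(x)
    b4 = layers.Conv2D(filters // 4, 1, padding="same", activation="relu")(b4)
    return layers.Concatenate()([b1, b2, b3, b4])

def build_model(input_shape=(256, 256, 1), base_filters=32, depth=4):
    inputs = layers.Input(shape=input_shape)
    x, skips = inputs, []
    # Encoder: inception block + downsampling at each level.
    for level in range(depth):
        x = inception_block(x, base_filters * 2 ** level)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)
    x = inception_block(x, base_filters * 2 ** depth)  # bottleneck
    # Decoder: upsample, concatenate skip connection, inception block.
    for level in reversed(range(depth)):
        x = layers.Conv2DTranspose(base_filters * 2 ** level, 2, strides=2, padding="same")(x)
        x = layers.Concatenate()([x, skips[level]])
        x = inception_block(x, base_filters * 2 ** level)
    outputs = layers.Conv2D(1, 1, activation="linear")(x)  # regress CT intensities
    return Model(inputs, outputs)
```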
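Training is reported as 100 epochs with batch size 4, the Adam optimizer, and a mean squared error loss. Continuing from the network sketch above, a minimal training setup might look as follows; the learning rate and the random stand-in arrays (in place of co-registered slices from the 50 training and 10 validation subjects) are assumptions.

```python
# Training configuration as stated in the abstract (Adam, MSE, 100 epochs,
# batch size 4). Assumes build_model from the previous sketch is in scope.
import numpy as np
import tensorflow as tf

model = build_model()
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),  # learning rate not reported; assumed
              loss="mean_squared_error")

# Stand-in arrays in place of the co-registered MR (input) and CT (target) slices.
x_train = np.random.rand(40, 256, 256, 1).astype("float32")
y_train = np.random.rand(40, 256, 256, 1).astype("float32")
x_val = np.random.rand(8, 256, 256, 1).astype("float32")
y_val = np.random.rand(8, 256, 256, 1).astype("float32")

model.fit(x_train, y_train, validation_data=(x_val, y_val),
          epochs=100, batch_size=4)
```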
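Evaluation compares sCT against reference CT using SSIM, PSNR, and Dice coefficient over whole brain, air (HU < -500), soft tissue (-500 < HU < 300), and bone (HU > 500). The sketch below computes whole-brain SSIM/PSNR and per-class Dice from those thresholds; the HU data range passed to the metrics and the stand-in volumes are assumptions, and per-class SSIM/PSNR would additionally require region masking not shown here.

```python
# Sketch of the sCT-vs-CT evaluation: whole-brain SSIM/PSNR and per-class Dice
# using the HU thresholds given in the abstract. Illustrative only.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks."""
    overlap = np.logical_and(mask_a, mask_b).sum()
    total = mask_a.sum() + mask_b.sum()
    return 2.0 * overlap / total if total > 0 else 1.0

def evaluate(sct, ct, data_range=4000.0):  # HU span for SSIM/PSNR (assumed)
    results = {
        "whole_brain_ssim": structural_similarity(ct, sct, data_range=data_range),
        "whole_brain_psnr": peak_signal_noise_ratio(ct, sct, data_range=data_range),
    }
    # Tissue classes from the HU thresholds given in the abstract.
    classes = {
        "air": lambda x: x < -500,
        "soft_tissue": lambda x: (x > -500) & (x < 300),
        "bone": lambda x: x > 500,
    }
    for name, rule in classes.items():
        results[f"{name}_dice"] = dice(rule(ct), rule(sct))
    return results

# Stand-in volumes for illustration only.
ct = np.random.uniform(-1000, 2000, size=(64, 64, 64))
sct = ct + np.random.normal(0, 50, size=ct.shape)
print(evaluate(sct, ct))
```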
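The per-subject metrics for each input sequence were compared against T1-Post with Wilcoxon signed-rank tests at p < 0.05. A minimal SciPy sketch is shown below, using placeholder values standing in for the 13 test subjects.

```python
# Paired Wilcoxon signed-rank test of one sequence's per-subject metric values
# against T1-Post, at p < 0.05. Values below are placeholders, not study data.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
psnr_t1_post = rng.normal(44.0, 1.0, size=13)     # stand-in per-subject PSNR, T1-Post
psnr_cube_flair = rng.normal(44.3, 1.0, size=13)  # stand-in per-subject PSNR, CUBE-FLAIR

stat, p = wilcoxon(psnr_cube_flair, psnr_t1_post)
print(f"Wilcoxon signed-rank: W={stat:.1f}, p={p:.3f}, significant={p < 0.05}")
```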