Abstract
Objectives: Deriving accurate attenuation maps for PET/MRI remains a challenging problem because MRI voxel intensities are not related to photon attenuation properties, and bone and air produce similarly low signal. In addition, it is desirable to generate attenuation correction (AC) maps from structural MRI data acquired with a clinical sequence, as opposed to dedicated AC pulse sequences that require additional scan time. Methods employing template- and atlas-based segmentation can introduce non-patient-specific features arising from registration errors and interpatient anatomical variability. This work presents a learning-based method to derive patient-specific computed tomography (CT) maps from routine T1-weighted MRI in their native space for attenuation correction of brain PET.
Methods: A patch-based anatomical MRI signature and auto-context models were integrated into a machine learning framework to iteratively predict CT images. The algorithm was trained on a set of paired MRI and CT datasets, with the CT images serving as the regression target for the MRI. Estimation of the pseudo CT consisted of two major stages: a training stage and a prediction stage. In the training stage, patch-based anatomical features with patient-specific information were extracted from the registered training images, and the most robust and informative MR-CT features were identified by feature selection. Each random forest (RF) was trained on multiple features from the MRI and the previously predicted CT (PCT), with the original CT (OCT) as the regression target. In the prediction stage, the selected features were extracted from a new patient's MRI and the previous PCT, then fed into the trained RFs to perform CT prediction and refinement. Leave-one-out cross-validation was performed to evaluate the proposed CT prediction algorithm. Fifteen subjects with FDG brain PET/CT and T1-weighted structural MRI images were retrospectively processed to generate an MRI-derived PCT for comparison with the OCT acquired during their FDG brain PET/CT study. Absolute and relative differences, as well as image and structure similarities between the PCT and OCT images, were quantitatively evaluated using the mean absolute error (MAE) in Hounsfield units (HU), peak signal-to-noise ratio (PSNR), normalized cross-correlation (NCC), and Dice similarity coefficient (DSC). Analysis of covariance and paired-sample t-tests were used for statistical comparison of PET voxel values between PCT- and OCT-based AC reconstructions.
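To make the training/prediction loop and the evaluation metrics concrete, the following is a minimal Python sketch of an auto-context random-forest CT regression on co-registered MRI/CT volumes, together with the MAE, PSNR, NCC, and DSC measures. All names, patch sizes, and stage counts here are illustrative assumptions rather than the paper's implementation; feature selection and registration are omitted for brevity.

```python
# Sketch of auto-context RF CT prediction; HALF and N_STAGES are assumed values.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

HALF = 3      # half-width of the cubic patch (7x7x7 voxels); illustrative
N_STAGES = 3  # number of auto-context refinement iterations; illustrative

def extract_patches(volume, coords, half=HALF):
    """Flatten a cubic intensity patch around each sampled voxel."""
    return np.asarray([volume[z - half:z + half + 1,
                              y - half:y + half + 1,
                              x - half:x + half + 1].ravel()
                       for z, y, x in coords])

def features(mri, pct, coords):
    """Concatenate MRI-patch and previous-PCT-patch features per voxel."""
    return np.hstack([extract_patches(mri, coords),
                      extract_patches(pct, coords)])

def predict_volume(rf, mri, pct, coords):
    """Refine the PCT by predicting CT values at the sampled voxels."""
    out = pct.copy()
    for (z, y, x), v in zip(coords, rf.predict(features(mri, pct, coords))):
        out[z, y, x] = v
    return out

def train_auto_context(mris, octs, coords_list):
    """Train one RF per stage; each stage consumes the previous stage's PCTs."""
    pcts = [np.zeros_like(ct, dtype=float) for ct in octs]  # stage-0 context
    forests = []
    for _ in range(N_STAGES):
        X = np.vstack([features(m, p, c)
                       for m, p, c in zip(mris, pcts, coords_list)])
        y = np.concatenate([[ct[z, y_, x] for z, y_, x in c]
                            for ct, c in zip(octs, coords_list)])
        rf = RandomForestRegressor(n_estimators=100, n_jobs=-1).fit(X, y)
        forests.append(rf)
        pcts = [predict_volume(rf, m, p, c)
                for m, p, c in zip(mris, pcts, coords_list)]
    return forests

# Evaluation metrics, computed over a brain mask as in the paper.
def mae(pct, oct_, mask):
    return np.abs(pct - oct_)[mask].mean()

def psnr(pct, oct_, mask):
    mse = ((pct - oct_)[mask] ** 2).mean()
    return 20 * np.log10((oct_[mask].max() - oct_[mask].min()) / np.sqrt(mse))

def ncc(pct, oct_, mask):
    a, b = pct[mask], oct_[mask]
    return np.mean((a - a.mean()) * (b - b.mean())) / (a.std() * b.std())

def dice(seg_a, seg_b):
    return 2.0 * np.logical_and(seg_a, seg_b).sum() / (seg_a.sum() + seg_b.sum())
```

In this sketch, feeding each stage's forest the previous stage's PCT patches is the auto-context mechanism; under leave-one-out cross-validation, the forests would be trained on 14 subjects' (MRI, OCT) pairs and applied to the held-out subject.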
Results: Comparing the PCT with the OCT, the mean MAE, PSNR, and NCC inside the brain were 39.47 ± 5.72 HU, 24.63 ± 2.39 dB, and 0.98 ± 0.02, respectively, across the 15 patients' data. Specifically, the mean MAE inside the brain between PCT and OCT was 6.79 ± 2.22 HU for air, 27.18 ± 4.85 HU for soft tissue, and 47.74 ± 9.13 HU for bone. The mean DSC between PCT and OCT was 0.97 ± 0.02 for air, 0.95 ± 0.02 for soft tissue, and 0.86 ± 0.06 for bone. For the PET reconstructions, the average difference was less than 1.0% in all brain regions, with no significant difference between PETs reconstructed with PCT- and OCT-based AC. The correlation coefficients derived from joint histograms of PCT- and OCT-based AC PETs were close to 0.99.
Conclusion: This work demonstrates a novel learning-based approach to automatically generate CT images from routine MR images using random forest regression with patch-based anatomical signatures that effectively capture the relationship between the CT and MR images. PET images reconstructed using the PCT exhibit errors well below the accepted test/retest reliability of PET/CT, indicating high quantitative equivalence.