TY - JOUR T1 - PET Attenuation Correction Using MRI-aided Two-Stream Pyramid Attention Network JF - Journal of Nuclear Medicine JO - J Nucl Med SP - 110 LP - 110 VL - 61 IS - supplement 1 AU - Xiaofeng Yang AU - Yang Lei AU - Tonghe Wang AU - Yabo Fu AU - Aparna Kesarwala AU - Tian Liu AU - Kristin Higgins AU - Walter Curran AU - Jonathon Nye AU - Hui Mao Y1 - 2020/05/01 UR - http://jnm.snmjournals.org/content/61/supplement_1/110.abstract N2 - Objectives: Deriving accurate attenuation maps for PET/MRI remains a challenging problem because voxel-wise signal intensities from MRI are unrelated to photon attenuation. In addition, MRI provides poor signal from bone and often suffers from image artifacts at bone/air interfaces. Given the capability of simultaneous PET and MRI data acquisition, it is desirable to use structural MRI data from a clinical sequence to directly guide PET attenuation correction (AC). This work proposes a deep-learning-based method that uses routine T1-weighted MRI to directly aid PET AC without the need for an attenuation coefficient map. Methods: We propose a novel network architecture, called the Two-Stream Pyramid Attention Network (TSPAN), which can utilize any structural MRI from a PET/MRI scan to aid direct estimation of AC PET from non-AC (NAC) PET. The proposed network integrates a self-attention strategy into a pyramid network for deep salient feature extraction. CT-based AC PET images from the same subjects were used as the ground truth for network training. Two dedicated sub-networks with the same pyramid architecture were used to process the NAC PET and MRI independently. The outputs of these two sub-networks were compared with the ground truth to calculate two independent loss terms: the NAC-only loss and the MRI-only loss. Since the feature maps obtained from these two sub-networks are complementary, it is beneficial to combine them in a late fusion sub-network for the final estimation.
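The multi-term training objective described above (NAC-only loss, MRI-only loss, and a loss on the fused final estimate, each measured against the CT-based AC PET ground truth) can be sketched as a weighted sum of per-stream errors. This is a minimal pure-Python illustration, assuming L1-type losses over flattened patches and equal weights; the abstract does not specify the loss form or the weighting:

```python
def l1_loss(pred, target):
    """Mean absolute difference between a predicted and a target patch,
    both given as flat lists of voxel intensities."""
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def total_loss(nac_pred, mri_pred, fused_pred, ct_ac_target,
               weights=(1.0, 1.0, 1.0)):
    """Combine the NAC-only, MRI-only, and late-fusion losses against the
    CT-based AC PET ground truth. Equal weights are an assumption made
    for illustration; the abstract does not state how the terms are weighted."""
    w_nac, w_mri, w_fused = weights
    return (w_nac * l1_loss(nac_pred, ct_ac_target)
            + w_mri * l1_loss(mri_pred, ct_ac_target)
            + w_fused * l1_loss(fused_pred, ct_ac_target))
```

In a real implementation each loss would be computed on 3D patch tensors inside the training loop, but the additive structure of the objective is the same.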
In the late fusion sub-network, the learned feature maps at each corresponding pyramid level were first concatenated and then highlighted by attention gates. The final loss was calculated by comparing the final PET estimation with the ground truth. The network was trained using a 3D-patch-based method. In the testing stage, the final set of synthetic AC PET images was generated by patch-based prediction followed by patch fusion. The trained network was tested on brain datasets from 21 patients using leave-one-out cross-validation. Standardized uptake value (SUV) differences between the MRI-aided AC PET estimation and the ground truth from PET/CT of the same patients were quantitatively evaluated using mean absolute error (MAE), mean error (ME), and normalized cross-correlation (NCC). Statistical analysis was performed to compare the MRI-aided AC PET estimation with the ground truth. Results: The AC PET images generated using the proposed MRI-aided method closely resemble the reference PET/CT images. The average MAE, ME, and NCC between the MRI-aided AC PET and CT-based AC PET images in the whole brain across all patients were 4.09±0.71%, -1.52±0.28%, and 0.958±0.032, respectively. The average ME ranged from -3.68% to 3.66% across all contoured volumes of interest. There was no significant difference between the MRI-aided AC PETs and the reference PET/CTs. Conclusion: We have demonstrated an MRI-aided PET AC method using a novel two-stream pyramid attention network. Our results show that the proposed method generates highly accurate AC PET estimations comparable to those derived from conventional PET/CT AC maps. This approach leverages the superior soft-tissue contrast of the simultaneously acquired MRI data in PET/MRI scans to improve AC PET estimation with precise organ boundary definition.
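The three reported evaluation metrics follow standard formulas. This is a minimal sketch in plain Python, assuming MAE and ME are expressed as percentages of the mean reference SUV (the abstract reports percentages but does not state the normalization, so that choice is an assumption):

```python
import math

def mae_percent(pred, ref):
    """Mean absolute error, as a percentage of the mean reference SUV
    (normalization choice is an assumption for illustration)."""
    n = len(pred)
    mae = sum(abs(p - r) for p, r in zip(pred, ref)) / n
    return 100.0 * mae / (sum(ref) / n)

def me_percent(pred, ref):
    """Signed mean error, as a percentage of the mean reference SUV;
    negative values indicate underestimation relative to CT-based AC."""
    n = len(pred)
    me = sum(p - r for p, r in zip(pred, ref)) / n
    return 100.0 * me / (sum(ref) / n)

def ncc(pred, ref):
    """Normalized cross-correlation between the two (flattened) images;
    1.0 means perfect linear agreement."""
    n = len(pred)
    mp, mr = sum(pred) / n, sum(ref) / n
    num = sum((p - mp) * (r - mr) for p, r in zip(pred, ref))
    den = math.sqrt(sum((p - mp) ** 2 for p in pred)
                    * sum((r - mr) ** 2 for r in ref))
    return num / den
```

In practice these would be computed over the whole-brain voxels and over each contoured volume of interest, exactly as in the reported per-VOI ME range.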
This technique could be used for accurate PET attenuation and partial volume correction in integrated PET/MRI applications. ER -