Abstract
Introduction: In PET/CT systems, CT scans are used to provide anatomical localization and PET attenuation correction (AC). However, artifacts in the CT images, such as beam hardening, metal artifacts, and photon starvation[1], or PET-CT misalignment, can propagate into the PET images and degrade AC accuracy. Moreover, for neurological studies, CT may be unnecessary beyond AC purposes, since CT provides limited anatomical information in the brain and MRI is preferred. Recently, deep learning (DL) neural networks (NN) have been employed to synthesize attenuation maps (μ-DL) directly from PET data[2-4], which has the potential to provide accurate AC while eliminating CT scans and reducing radiation exposure. United Imaging has developed a PET Reconstruction Toolbox (URT) that contains an AI model for synthetic PET-based attenuation map generation. This NN model was trained with datasets acquired only from an Asian population and only on one type of PET/CT scanner, the uMI Panorama. Here, we applied this pre-trained NN to 1) data from a Caucasian population acquired on a uMI Panorama at Huntsman Cancer Institute (HCI) and 2) data from the NeuroEXPLORER (NX), a brain-dedicated next-generation PET system, acquired at the Yale PET Center.
Methods: The neural network in this study was initially trained on 200 anonymized 18F-FDG PET datasets from uMI Panorama systems, using non-attenuation-corrected PET images as inputs and CT-derived attenuation maps (μ-CT) as labels. We used four Panorama FDG datasets from HCI and three NX FDG datasets from Yale. The HCI data consisted of single-bed-position scans of the head and neck, acquired 60-64 minutes post-injection (p.i.), while the NX data were from 70-75 minutes p.i. We compared PET images reconstructed with μ-DL to those reconstructed with μ-CT. Reconstructions were performed using OSEM with 7 iterations and 10 subsets, at voxel sizes of 1.2×1.2×1.4 mm³ for Panorama and 0.5×0.5×0.5 mm³ for NX. For quantitative evaluation, we extracted mean SUVs from 11 brain regions using the URT segmentation tool. One Yale subject's data was excluded from the quantitative analysis due to significant PET-CT misalignment caused by motion. Notably, the Yale studies were dynamic scans, so there were large time differences between the CT and the PET frames used in the NX analyses.
Results: Figure 1 demonstrates that μ-DL images were visually similar to μ-CT for both HCI and Yale data, despite minor variations in skull thickness and air regions. The PET images reconstructed with the two attenuation maps appeared visually similar for both the Panorama and NX systems. We calculated the mean and standard deviation of the percentage difference in SUV across the 11 brain regions. In the HCI data, we observed a -1.8±0.8% difference across these regions from 4 subjects. For two subjects from the Yale NX data, the difference was less than 2% in most brain regions. However, significant PET-CT misalignment was noted in the third NX subject, particularly evident when comparing the μ-CT and μ-DL images. This misalignment led to under- and overestimation of brain regions in the PET images reconstructed with μ-CT. Notably, the surface of the right lateral parietal and temporal lobes showed an overestimation of approximately 10 SUV in the PET image reconstructed with μ-DL compared to μ-CT (Figure 2).
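The region-wise comparison above reduces to a percent-difference statistic over regional mean SUVs. A minimal sketch of that computation is shown below; the region names and SUV values are hypothetical placeholders (the actual regional means come from the URT segmentation tool):

```python
import statistics

# Hypothetical regional mean SUVs for one subject (illustrative values only).
suv_mu_ct = {"frontal": 8.1, "temporal": 7.6, "occipital": 9.0}
suv_mu_dl = {"frontal": 7.9, "temporal": 7.5, "occipital": 8.9}

# Percent difference of the μ-DL reconstruction relative to μ-CT, per region.
pct_diff = [100.0 * (suv_mu_dl[r] - suv_mu_ct[r]) / suv_mu_ct[r]
            for r in suv_mu_ct]

# Summary statistic reported in the abstract: mean ± SD across regions.
mean_diff = statistics.mean(pct_diff)
sd_diff = statistics.stdev(pct_diff)
print(f"SUV difference: {mean_diff:.1f} ± {sd_diff:.1f} %")
```

In the study, this summary was computed over 11 brain regions and averaged across subjects; the sketch uses three regions purely for illustration.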
Conclusions: In this initial study, we demonstrated that a pre-trained deep learning network for synthetic attenuation map generation can provide accurate attenuation information for a different patient population and across different PET systems. A key advantage of μ-DL is its independence from PET-CT alignment issues. Moving forward, we plan to expand this study to a larger patient cohort, encompassing a range of diseases and extending beyond brain regions.
References:
[1] Barrett et al. RadioGraphics, 2004. [2] Shi et al. MICCAI, 2019. [3] Toyonaga et al. EJNMMI, 2022. [4] Shi et al. PMB, 2023.