Abstract
Objectives: Accurate spatial normalization (SN) of amyloid PET images for Alzheimer’s disease assessment without coregistered anatomical magnetic resonance imaging (MRI) of the same individual is technically challenging. In this study, we applied deep neural networks to generate individually adaptive PET templates for robust and accurate SN of amyloid PET without using matched 3D MR images.
Methods: Using 681 pairs of simultaneously acquired 11C-PIB PET and T1-weighted 3D MRI scans from AD, MCI, and cognitively normal (CN) subjects, we trained and tested two deep neural networks, a convolutional auto-encoder (CAE) and a generative adversarial network (GAN), that produce individually adaptive PET templates. Specifically, the networks were trained on 685,100 augmented samples generated by rotating 527 randomly selected datasets (72 AD, 117 MCI, and 338 CN) and validated on 154 datasets (20 AD, 37 MCI, and 97 CN). The input to the supervised neural networks was the 3D PET volume in native space, and the label was the spatially normalized 3D PET image obtained using the transformation parameters from MRI-based SN. To compare the three spatial normalization methods (MRI-based, deep learning-based, and average template-based), we calculated the regional activity concentration of 11C-PIB (kBq/ml) in eight brain regions (frontal, temporal, parietal, occipital, and cingulate cortices, striatum, thalamus, and cerebellum) in each hemisphere using the Automated Anatomical Labeling (AAL) atlas. The standardized uptake value ratio (SUVr) was also obtained by normalizing the activity concentration of each region to that of the cerebellum. Agreement between PET- and MRI-based spatial normalization was assessed using Pearson's correlation and Bland-Altman analysis on the regional activity concentrations and SUVr values.
Results: After training, both the CAE and the modified GAN successfully generated adaptive amyloid PET templates. Although the anatomical details shown in the templates were virtually equivalent, the texture of the GAN-generated templates was more similar to that of MRI-based spatial normalization. The images spatially normalized using the deep learning approaches showed amyloid uptake patterns very similar to the MRI-based spatial normalization results. In addition, the proposed deep learning approach significantly enhanced the quantitative accuracy of MRI-less amyloid PET assessment by reducing the SN error observed when an average amyloid PET template is used (R2 between SUVr with and without MRI: 0.73 with the average template vs. 0.91 with the deep learning-based templates). Given an input image, the trained deep neural networks rapidly (in 0.02 seconds) provide individually adaptive 3D PET templates without any discontinuity between slices.
Conclusions: As the proposed method does not require 3D MRI for the SN of PET images, it has great potential for use in routine analysis of amyloid PET images in clinical practice and research.