PT - JOURNAL ARTICLE
AU - Tzu-An Song
AU - Samadrita Roy Chowdhury
AU - Georges El Fakhri
AU - Quanzheng Li
AU - Joyita Dutta
TI - Super-resolution PET imaging using a generative adversarial network
DP - 2019 May 01
TA - Journal of Nuclear Medicine
PG - 576--576
VI - 60
IP - supplement 1
4099 - http://jnm.snmjournals.org/content/60/supplement_1/576.short
4100 - http://jnm.snmjournals.org/content/60/supplement_1/576.full
SO - J Nucl Med 2019 May 01; 60
AB - Objectives: Partial volume effects arising from the limited spatial resolution of PET scanners are a major deterrent to accurate quantitation based on PET. Common approaches for addressing these limitations include point spread function modeling within the reconstruction framework, post-reconstruction partial volume correction using a segmented anatomical volume with labeled regions of interest, and post-reconstruction deconvolution with stabilizing penalties or priors. Our objective is to harness the latest developments in machine learning to super-resolve PET images using additional anatomical and spatial information. Methods: Super-resolution (SR) imaging refers to the task of converting a low-resolution (LR) image to a high-resolution (HR) one. We modified, characterized, and validated a state-of-the-art generative adversarial network (GAN) architecture for the task of super-resolving PET images. A GAN consists of two separate neural networks: a generator that creates synthetic data (e.g., synthetic high-resolution PET images) and a discriminator that evaluates them. Our optimized GAN architecture consists of a spatially-sparse convolutional neural network (CNN) as the discriminator and an enhanced deep super-resolution CNN as the generator. The two networks are individually pretrained and then jointly trained and validated. Our modified network uses additional anatomical information in the form of high-resolution MR images and additional spatial information in the form of radial and axial location patches. Simulated data for training and validation were generated using the BrainWeb phantom series. LR images based on the ECAT HR+ scanner geometry were reconstructed using OSEM and post-filtered using the spatially variant PSF measured for the ECAT HR+. The uncorrupted ground-truth phantom images were used as the HR counterparts for training and validation. Of 20 BrainWeb phantoms (corresponding to different subject-specific T1-weighted MR scans), 15 were used for training and 5 for validation. All neural networks were implemented in PyTorch on an NVIDIA GPX XS8-2460V4-4GPU workstation. During training, the Adam algorithm was used as the optimizer, and the cost function was calculated as the L2 norm between the SR output and the HR ground truth. Results: Extensive preliminary investigations were conducted to compare different CNN architectures. Optimized versions of the following state-of-the-art networks were compared: a 3-layer SR CNN, a 20-layer very deep SR network, an enhanced deep SR network (containing residual blocks), and a GAN. All CNN methods outperformed joint entropy (JE) and total variation (TV) penalized deconvolution. Among the deep learning architectures optimized and tested, the highest accuracy levels (averaged over epochs) were observed for the proposed GAN architecture. Conclusions: We developed a GAN architecture with anatomical and spatial inputs to create super-resolved brain PET images. Our results indicate that the proposed GAN outperformed all other candidate deep learning techniques as well as penalized deconvolution techniques.
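
The abstract gives only architecture names, not implementation details, but the design it describes can be sketched in PyTorch. Below is a minimal, illustrative sketch of an enhanced-deep-SR-style generator built from residual blocks that accepts the LR PET image concatenated with a co-registered HR MR image and radial/axial location maps, alongside a small strided-CNN discriminator standing in for the spatially-sparse CNN discriminator the authors mention. All class names, channel counts, layer depths, and the choice to inject spatial information as extra input channels are assumptions for illustration, not the authors' implementation.

import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    # EDSR-style residual block: conv -> ReLU -> conv plus an identity skip.
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)


class Generator(nn.Module):
    # Enhanced-deep-SR-style generator. Assumed input channels (4 total):
    # LR PET, HR MR, radial location map, axial location map.
    def __init__(self, in_channels=4, channels=64, num_blocks=16):
        super().__init__()
        self.head = nn.Conv2d(in_channels, channels, kernel_size=3, padding=1)
        self.body = nn.Sequential(*[ResidualBlock(channels) for _ in range(num_blocks)])
        self.tail = nn.Conv2d(channels, 1, kernel_size=3, padding=1)  # 1-channel SR PET

    def forward(self, x):
        feat = self.head(x)
        return self.tail(feat + self.body(feat))  # long skip around the residual body


class Discriminator(nn.Module):
    # Small strided CNN producing per-patch real/fake logits. A plain dense CNN
    # is used here as a stand-in for the spatially-sparse CNN in the abstract.
    def __init__(self, channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=3, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(channels, 2 * channels, kernel_size=3, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(2 * channels, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.net(x)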
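
The abstract states that the two networks are pretrained individually and then trained jointly, with Adam as the optimizer and the L2 norm between the SR output and the HR target as the cost function. A hypothetical joint training step consistent with that description might look as follows; the binary cross-entropy form of the adversarial loss and its weight adv_weight are assumptions, since the abstract does not specify them. Generator pretraining would amount to minimizing the content term alone.

import torch
import torch.nn.functional as F

# Hypothetical joint training step. Assumes lr_pet, hr_mr, radial, axial, and
# hr_pet are (N, 1, H, W) tensors on a common grid, so super-resolution acts
# as deblurring at a fixed matrix size (no upsampling layer is needed).
generator = Generator()
discriminator = Discriminator()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-4)


def joint_training_step(lr_pet, hr_mr, radial, axial, hr_pet, adv_weight=1e-3):
    x = torch.cat([lr_pet, hr_mr, radial, axial], dim=1)  # stack inputs as channels

    # Discriminator update: distinguish ground-truth HR images from SR outputs.
    with torch.no_grad():
        sr_detached = generator(x)
    d_real = discriminator(hr_pet)
    d_fake = discriminator(sr_detached)
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator update: L2 content loss (as stated in the abstract) plus a
    # weighted adversarial term that rewards fooling the discriminator.
    sr = generator(x)
    content_loss = F.mse_loss(sr, hr_pet)  # L2 norm between SR output and HR target
    d_out = discriminator(sr)
    adv_loss = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))
    g_loss = content_loss + adv_weight * adv_loss
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return g_loss.item(), d_loss.item()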