RT Journal Article
SR Electronic
T1 Single Subject Deep Learning-based Partial Volume Correction for PET using Simulated Data and Cycle Consistent Networks
JF Journal of Nuclear Medicine
JO J Nucl Med
FD Society of Nuclear Medicine
SP 520
OP 520
VO 61
IS supplement 1
A1 Wei-jie Chen
A1 Alan McMillan
YR 2020
UL http://jnm.snmjournals.org/content/61/supplement_1/520.abstract
AB Introduction: Positron Emission Tomography (PET) provides an unparalleled metabolic view of the human body, but its spatial resolution is limited by the scanner hardware (particularly on clinical scanners). These resolution limits produce partial volume effects (PVE), in which anatomic boundaries are blurred relative to the ground truth. PVE arise when an object is smaller than twice the full width at half maximum (FWHM) resolution of the imaging system in all three dimensions. While partial volume correction (PVC) methods have been studied previously, they have traditionally been limited by scan time and the anatomic information they require. Magnetic Resonance Imaging (MRI), particularly as enabled by simultaneous PET/MR, can serve as a high-spatial-resolution anatomical prior. In this work, we exploit deep learning (DL) techniques to develop PVC for PET images based on MRI priors alone. Traditional deep learning training is ground-truth-dependent; however, the ground truth of the underlying anatomy is often inaccessible. To overcome this barrier, we transfer a model trained on simulated data, for which ground truth is readily available.

Methods: A cycle-consistent Generative Adversarial Network (cycle-GAN) is shown in Figure 1. This system contains two image-to-image translators (generators), A to B and B to A, with corresponding discriminators that encourage outputs indistinguishable from real images.
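As a minimal illustration (not the authors' implementation), the cycle-consistency idea used to regularize these generators — translate an image to the other domain and back, then penalize any difference from the input — can be sketched with stand-in generators. The L1 norm and the toy translator functions below are assumptions; the abstract does not specify them:

```python
import numpy as np

def cycle_consistency_loss(x, g_ab, g_ba):
    """L1 cycle-consistency loss: translate A -> B -> A and penalize the
    difference between the reconstruction and the input.
    (The L1 norm is an assumption; the abstract does not specify it.)"""
    reconstruction = g_ba(g_ab(x))
    return float(np.mean(np.abs(x - reconstruction)))

# Toy stand-in "generators": an exactly invertible pair, so a perfect
# cycle should give near-zero loss. The real generators are neural nets.
g_ab = lambda x: 2.0 * x + 1.0       # hypothetical A -> B translator
g_ba = lambda y: (y - 1.0) / 2.0     # hypothetical B -> A translator

x = np.random.default_rng(0).random((8, 8))
loss = cycle_consistency_loss(x, g_ab, g_ba)  # ~0: the cycle is perfect
```

In the full model this cycle loss is combined with the adversarial losses supplied by the two discriminators.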
To regularize these generators, a cycle-consistency loss is introduced, based on the insight that if we translate an image to the other domain and back again, we should recover the input image. After training with the Adam optimizer for around eight hours, we use the generator that translates pseudo-PET images (blurred high-resolution images of similar objects) to original (unblurred) images. Models trained for PVC on pseudo-PET images can then be applied to real PET images.

Three training datasets from a single subject were collected (Figure 2): fat-suppressed (water-only) breast MRI, T1-weighted brain MRI, and whole-body fat-suppressed MRI. A fourth, non-human dataset (satellite cloud images) was also obtained. To generate the pseudo-PET datasets, these images were smoothed with Gaussian kernels across a range of FWHM values (3, 5, 7, 9, 11 mm) to better represent non-uniform blurring. The contrast of the brain MRI dataset was inverted to resemble a brain PET image. The satellite-cloud-image dataset served as a baseline for a model isolated from anatomical images. All patient image data were obtained on a Signa PET/MR scanner (GE Healthcare, Waukesha, WI).

Results: As shown in Figure 3, the breast and brain PET images were evaluated across the four models. In the zoomed (4x) images, more accurate boundaries are seen for high-intensity regions. The model trained on the inverted T1 brain dataset removed the most partial volume effect in low-metabolism regions (e.g., fat in the breast, white matter in the brain). Although the model trained on satellite cloud images is able to deblur images, it introduces the most noise.

Conclusion: PVC for PET images can be achieved with a cycle-GAN trained on simulated PET images. This deep learning approach circumvents the absence of PET ground truth by simulating PET-like images from the MR domain.
Furthermore, the deep learning model was trained on datasets from only a single subject, making this approach amenable to many applications in PET. Future work will explore improved simulation inputs for pseudo-PET images and compare performance against existing PVC methods.