TY - JOUR
T1 - Influence of gradient difference loss on MR to PET brain image synthesis using GANs
JF - Journal of Nuclear Medicine
JO - J Nucl Med
SP - 1431
LP - 1431
VL - 61
IS - supplement 1
AU - Clement Hognon
AU - Florent Tixier
AU - Thierry Colin
AU - Olivier Gallinato
AU - Dimitris Visvikis
AU - Vincent Jaouen
Y1 - 2020/05/01
UR - http://jnm.snmjournals.org/content/61/supplement_1/1431.abstract
N2 - Objectives: Considerable improvements in image synthesis have been achieved in recent years using (deep) machine learning. Models based on generative adversarial networks (GANs) now enable the generation of high-definition images capable of fooling the human eye. These methods are increasingly used in medical imaging for various cross-modality image synthesis applications. The performance achieved, however, largely depends on the different terms of the cost function minimized by the neural network. In this work, we explore the potential advantage of embedding a gradient difference loss in the cost function of GANs for MR to PET brain image synthesis, with the aim of enforcing the sharpness of functional regions in synthetic PET outputs. While not clinically meaningful, such synthetic images may be useful for generating additional data required to train various machine learning models or for helping to analyze relationships between the two modalities.
Methods: We used the publicly available PET-SORTEO database (CERMEP, Lyon, France), consisting of 15 highly realistic simulated volumetric brain MR and PET images. We focused the current study on T1-weighted MR to FDG-PET image synthesis. Volumetric images were resliced into 256×256 axial slices to be fed to a 2D image-to-image translation GAN architecture (pix2pix). The loss function of the proposed neural network consisted of the two losses classically found in pix2pix, an Adversarial Loss (AL) and a Pixel Loss (PL), and of an additional Gradient Difference Loss (GDL) enforcing consistency between gradient energies in the two modalities. Network weights were optimized for 100 epochs using an NVIDIA RTX 2070 GPU. The dataset was split between training (10 subjects, 2560 axial slices) and testing (5 subjects, 1280 axial slices). Results using AL+PL+GDL were compared to a baseline image-to-image translation GAN using AL+PL only. Two different weightings λ of the strength of the GDL were considered (λ=10 and λ=16). Quantitative evaluation was carried out using the average root mean square error (RMSE, lower is better) and structural similarity (SSIM, higher is better) between the synthesized outputs and the simulated FDG images. Slice-wise results along the transaxial direction were also studied to analyze the method's performance across the field of view.
Results: Images obtained with AL+PL+GDL showed superior overall quality compared to AL+PL only. Consistently better preservation of uptake in functional regions was achieved, especially in subcortical regions (striatum and thalamus). These visual observations were supported quantitatively both in terms of RMSE for the GDL weighting λ=10 (15.35 ± 1.17 for AL+PL against 15.07 ± 1.16 for AL+PL+GDL) and SSIM (0.68 ± 0.18 for AL+PL against 0.77 ± 0.24 for AL+PL+GDL). Visual quality seemed to increase with the higher GDL weighting λ=16, although this observation was not reflected quantitatively in the metrics studied.
Conclusion: This preliminary study suggests a potential benefit of encouraging the preservation of gradient energy in the brain for MR to PET translation GANs. Sharpness and finer details of functional regions were better recovered compared to the baseline. Results were obtained on realistic simulated images. Future experiments will be conducted on real clinical data using larger cohorts and additional quantitative metrics to further characterize the benefits of GDL for MR to PET image translation. Acknowledgments: This work has received support from the French National Research Agency under the "Investissements d'avenir" program bearing the reference ANR-17-RHUS-0005.
ER -
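
The abstract does not include an implementation, but the generator objective it describes (AL + PL + GDL) can be sketched as follows. This is a minimal illustration assuming a PyTorch pix2pix setup; the function names (gradient_difference_loss, generator_loss) and the pixel-loss weight lambda_pixel=100 (the usual pix2pix default) are assumptions for illustration, not the authors' code. The GDL follows the common formulation of Mathieu et al. (2015), penalizing mismatches between finite-difference gradient magnitudes; lambda_gdl=10 matches one of the two weightings reported above.

    # Minimal sketch (assumed PyTorch): pix2pix-style generator loss
    # with an added Gradient Difference Loss (GDL). Illustrative only.
    import torch
    import torch.nn.functional as F

    def gradient_difference_loss(pred, target, alpha=1.0):
        """Penalize differences between spatial gradient magnitudes of
        predicted and target images (Mathieu et al., 2015 formulation)."""
        # Finite differences along rows (y) and columns (x) of NCHW tensors.
        pred_dy = torch.abs(pred[:, :, 1:, :] - pred[:, :, :-1, :])
        pred_dx = torch.abs(pred[:, :, :, 1:] - pred[:, :, :, :-1])
        target_dy = torch.abs(target[:, :, 1:, :] - target[:, :, :-1, :])
        target_dx = torch.abs(target[:, :, :, 1:] - target[:, :, :, :-1])
        grad_diff_y = torch.abs(target_dy - pred_dy) ** alpha
        grad_diff_x = torch.abs(target_dx - pred_dx) ** alpha
        return grad_diff_y.mean() + grad_diff_x.mean()

    def generator_loss(disc_fake_logits, fake_pet, real_pet,
                       lambda_pixel=100.0, lambda_gdl=10.0):
        """AL + PL + GDL, as described in the abstract."""
        # Adversarial loss: the generator tries to make D output "real".
        al = F.binary_cross_entropy_with_logits(
            disc_fake_logits, torch.ones_like(disc_fake_logits))
        # Pixel loss: L1 distance, as in the original pix2pix.
        pl = F.l1_loss(fake_pet, real_pet)
        # Gradient difference loss between synthetic and simulated PET.
        gdl = gradient_difference_loss(fake_pet, real_pet)
        return al + lambda_pixel * pl + lambda_gdl * gdl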
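
The slice-wise evaluation (average RMSE, lower is better; SSIM, higher is better, along the transaxial direction) could be computed along these lines. This is a sketch assuming NumPy arrays and scikit-image; evaluate_slices and the shared data_range convention are assumptions, not the authors' evaluation code.

    # Minimal sketch (assumed NumPy + scikit-image): per-slice RMSE and
    # SSIM between synthesized and simulated FDG-PET volumes.
    import numpy as np
    from skimage.metrics import structural_similarity

    def evaluate_slices(synth_volume, ref_volume):
        """synth_volume, ref_volume: float arrays of shape (n_slices, 256, 256),
        sliced along the transaxial direction."""
        rmse_per_slice, ssim_per_slice = [], []
        # Assumption: both volumes share the reference intensity scale.
        data_range = ref_volume.max() - ref_volume.min()
        for synth, ref in zip(synth_volume, ref_volume):
            rmse_per_slice.append(np.sqrt(np.mean((synth - ref) ** 2)))
            ssim_per_slice.append(
                structural_similarity(ref, synth, data_range=data_range))
        return np.asarray(rmse_per_slice), np.asarray(ssim_per_slice)

    # Usage: report mean ± std over the test slices, as in the abstract.
    # rmse, ssim = evaluate_slices(synth, ref)
    # print(f"RMSE {rmse.mean():.2f} ± {rmse.std():.2f}, "
    #       f"SSIM {ssim.mean():.2f} ± {ssim.std():.2f}")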