Journal of Nuclear Medicine
Meeting Report: Physics, Instrumentation & Data Sciences - Image Generation

Use of Deep Image-to-Image Translations to assess Complementary Value of Imaging Modalities: Application to PET and CT images in Head and Neck Cancer

Seyed Masoud Rezaeijo, Ali Mahboubisarighieh, Shabnam Jafarpoor Nesheli, Hossein Shaverdi, Mahdi Hosseinzadeh, Ilker Hacihaliloglu, Arman Rahmim and Mohammad R Salmanpour
Journal of Nuclear Medicine June 2023, 64 (supplement 1) P1585;
Seyed Masoud Rezaeijo, Department of Medical Physics, School of Medicine, Ahvaz Jundishapur University of Medical Sciences, Ahvaz, Iran
Ali Mahboubisarighieh, Kharazmi University
Shabnam Jafarpoor Nesheli, Faculty of Engineering, University of Science and Culture, Tehran, Iran
Hossein Shaverdi, Technological Virtual Collaboration (TECVICO Corp.), Vancouver, BC, Canada
Mahdi Hosseinzadeh, Technological Virtual Collaboration (TECVICO Corp.)
Ilker Hacihaliloglu, Department of Radiology, Department of Medicine, University of British Columbia, Vancouver, BC, Canada
Arman Rahmim, University of British Columbia
Mohammad R Salmanpour, University of British Columbia, BC Research Center

Abstract

P1585

Introduction: Deep learning methods based on generative adversarial networks (GANs) have shown great potential for synthesizing medical images and for transferring context between medical imaging modalities. While it is premature to consider entirely replacing one modality with another (especially when the underlying imaging principles are very distinct), it is desirable to quantify how truly complementary two modalities are. We believe that deep learning-based image-to-image translation can play a role to this end, since two modalities may be entirely distinct in their original scales (e.g., PET and CT). As a demonstration, we investigate the relative complementarity of 18F-FDG PET and CT imaging in head and neck cancer by synthesizing 3D PET images from CT images and comparing them with real 3D PET images.

Methods: We studied a large dataset of 865 head and neck cancer patients with paired PET and CT images of size 167×167×129, collected from The Cancer Imaging Archive database. PET images were first resampled and registered to CT, and then both image sets were enhanced and normalized. We utilized Pix2Pix and a Dual-Cycle Generative Adversarial Network (Dual-CycleGAN) for image-to-image translation between domains. Pix2Pix works in a pairwise fashion: it needs corresponding images from the two domains to learn to translate from one to the other. CycleGAN, by contrast, needs no corresponding images from the two domains; it learns a two-way mapping between them. In the current study, we applied Pix2Pix and Dual-CycleGAN to synthesize 3D PET images from 3D CT images. The Pix2Pix parameters were experimentally tuned using an L1 loss function and the Adam optimizer with a beta of 0.5 and a learning rate of 0.0002. The Dual-CycleGAN parameters were experimentally optimized using six independent loss terms to balance quantitative/qualitative loss functions: adversarial, dual cycle-consistent, voxel-wise, gradient-difference, perceptual, and structural-similarity losses, all with default initialization parameters. A novelty of this study is the use of 3D images for training instead of 2D images; both networks were trained for 4000 epochs with a batch size of 8. The Mean Absolute Error (MAE), Percent Mean Absolute Error (PMAE), Root-Mean-Square Error (RMSE), Structural Similarity Index (SSIM), and Pearson Correlation Coefficient (PCC) metrics were used to evaluate and compare the models. The dataset was split into 679 and 186 subsets for training and testing purposes, respectively.
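The evaluation metrics named above can be sketched in NumPy as follows. This is an illustrative simplification, not the authors' implementation: the PMAE normalization (by the reference image's dynamic range) is an assumption, and the single-window "global" SSIM here illustrates the SSIM formula, whereas standard SSIM averages the statistic over local windows.

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean Absolute Error between two image volumes."""
    return float(np.mean(np.abs(y_true - y_pred)))

def pmae(y_true, y_pred):
    """Percent MAE; normalization by the reference dynamic range is assumed."""
    rng = float(y_true.max() - y_true.min())
    return 100.0 * mae(y_true, y_pred) / rng

def rmse(y_true, y_pred):
    """Root-Mean-Square Error."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def pcc(y_true, y_pred):
    """Pearson Correlation Coefficient over flattened volumes."""
    return float(np.corrcoef(y_true.ravel(), y_pred.ravel())[0, 1])

def ssim_global(y_true, y_pred):
    """Single-window (global) SSIM, illustrating the SSIM formula only."""
    data_range = float(y_true.max() - y_true.min())
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_x, mu_y = y_true.mean(), y_pred.mean()
    var_x, var_y = y_true.var(), y_pred.var()
    cov = np.mean((y_true - mu_x) * (y_pred - mu_y))
    return float((2 * mu_x * mu_y + c1) * (2 * cov + c2)
                 / ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)))
```

In practice, a windowed SSIM (e.g., scikit-image's `structural_similarity`) would be used on full 3D volumes; the functions above only make the reported quantities concrete.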

Results: The Dual-CycleGAN model achieved an MAE of 5.45±3.52, RMSE of 14.9±6.0, SSIM of 0.84±0.10, and PCC of 0.82±0.16; the Pix2Pix model achieved an MAE of 5.88±3.65, RMSE of 15.8±5.7, SSIM of 0.83±0.09, and PCC of 0.84±0.14. Specifically, the PMAE between synthetic PET and true PET was (26.0±6.9)% for Dual-CycleGAN and (26.5±7.1)% for Pix2Pix. This indicates that significant information from PET can be recovered from CT, yet the complementary value of PET to CT remains substantial at around 25%, though use of (1-SSIM) suggests a complementary value of around 16%. Ultimately, this problem can be framed in terms of a task of interest with its specific metrics, which can be applied in the proposed framework for complementarity assessments.
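The two complementarity figures quoted above follow from simple arithmetic on the reported metrics (values copied from the results; averaging the two models' PMAE is an illustrative choice, not the authors' stated procedure):

```python
# Reported mean PMAE (%) between synthetic and real PET, per model.
pmae_dual_cyclegan = 26.0
pmae_pix2pix = 26.5

# Reading PMAE directly as the fraction of PET signal not recoverable
# from CT gives the ~25% complementarity figure.
complementarity_pmae = (pmae_dual_cyclegan + pmae_pix2pix) / 2.0  # 26.25%

# The structural alternative, 1 - SSIM, with the reported Dual-CycleGAN
# SSIM of 0.84, gives the ~16% figure.
ssim_dual_cyclegan = 0.84
complementarity_ssim = (1.0 - ssim_dual_cyclegan) * 100.0  # ~16%
```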

Conclusions: Our study indicates that the optimized Dual-CycleGAN and Pix2Pix models recovered considerable PET information from CT images in the synthesized PET images. Meanwhile, the complementary value of PET relative to CT was estimated at around 25%, based on comparison between the synthesized and real PET images.

© 2025 SNMMI