Meeting Report | Instrumentation & Data Analysis Track

A deep-learning-based fully automated segmentation approach to delineate tumors in FDG-PET images of patients with lung cancer

Kevin Leung, Wael Marashdeh, Rick Wray, Saeed Ashrafinia, Arman Rahmim, Martin Pomper and Abhinav Jha
Journal of Nuclear Medicine May 2018, 59 (supplement 1) 323;
  • Kevin Leung, Johns Hopkins University, Baltimore, MD, United States
  • Wael Marashdeh, Johns Hopkins University, Baltimore, MD, United States
  • Rick Wray, Johns Hopkins, New York, NY, United States
  • Saeed Ashrafinia, Johns Hopkins University School of Medicine, Baltimore, MD, United States
  • Arman Rahmim, Johns Hopkins University, Baltimore, MD, United States
  • Martin Pomper, Johns Hopkins Medical Institutions, Baltimore, MD, United States
  • Abhinav Jha, Johns Hopkins University, Baltimore, MD, United States

Abstract


Objectives: Accurate delineation of lung tumors from PET images is important for PET-based radiotherapy-treatment planning and for reliable quantification of metrics such as metabolic tumor volume and radiomics features. However, the high noise and limited resolution of PET images make reliable tumor delineation challenging[1,2]. Deep-learning methods have shown promise in delineating tumors in several imaging modalities, although their value in PET remains to be carefully explored[3]. The purpose of this study was to develop a fully automated deep-learning approach for tumor segmentation of FDG-PET images and to evaluate the approach using realistic simulations and patient data.

Methods: A convolutional neural network (CNN)-based deep-learning method was developed that automatically locates and segments tumors on FDG-PET images of patients with lung cancer, outputting a tumor mask (Fig. 1A). The CNN architecture learns the feature maps via an encoder network consisting of convolutional layers. The encoder output is mapped to a lesion mask in the decoding network. The method requires no user inputs to indicate tumor location and is thus fully automated. The method was first evaluated using realistic simulations, where ground-truth tumor boundaries were known. Using the anthropomorphic XCAT phantom[4], realistic digital phantoms with lung tumors of different sizes and uptakes, all based on existing clinical data, were generated. Projection data for these phantoms were obtained by simulating a PET system modeling the various image-degrading processes, including noise and blur. The data were reconstructed using the 2D OSEM algorithm to yield 14,000 simulated images for different phantoms. The realism of these images was evaluated via visual interpretation of randomly selected images by a board-certified nuclear-medicine radiologist. The CNN was trained on 10,000 simulated images by minimizing a loss function quantifying the error between the predicted and true lesion masks. The CNN hyperparameters were then optimized on a validation dataset of 2,000 images. The optimized CNN was tested on the remaining 2,000 images. The deep-learning approach was next evaluated using existing clinical FDG-PET images from patients with lung cancer. For these images, manual tumor segmentation by a board-certified nuclear-medicine radiologist was used as the ground truth. The CNN obtained with the simulated images was fine-tuned using 1,300 patient images and evaluated on 369 patient images.
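The encoder-decoder idea described above can be illustrated with a minimal numpy-only forward pass: a convolutional encoder, a pooling step, and a decoder that upsamples back to the image grid and emits a per-voxel tumor probability. This is purely a sketch of the architectural pattern; the kernel sizes, depth, weights, and random initialization here are illustrative assumptions, not the authors' trained network.

```python
import numpy as np

def conv2d(x, k):
    """3x3 convolution with zero padding, stride 1 (output same size as input)."""
    H, W = x.shape
    xp = np.pad(x, 1)
    out = np.zeros((H, W))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * xp[i:i + H, j:j + W]
    return out

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def segment(img, rng):
    """Toy encoder-decoder: conv -> 2x2 max-pool -> upsample -> conv -> sigmoid."""
    k_enc = rng.standard_normal((3, 3)) * 0.1   # encoder kernel (untrained)
    k_dec = rng.standard_normal((3, 3)) * 0.1   # decoder kernel (untrained)
    f = relu(conv2d(img, k_enc))                # encoder feature map
    H, W = f.shape
    p = f.reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))    # 2x2 max-pool
    up = np.repeat(np.repeat(p, 2, axis=0), 2, axis=1)      # nearest-neighbor upsample
    return sigmoid(conv2d(up, k_dec))           # per-voxel tumor probability

rng = np.random.default_rng(0)
img = rng.random((64, 64))       # stand-in for one FDG-PET slice
mask = segment(img, rng) > 0.5   # binarize into a tumor mask
```

In the actual method, the kernels would be learned by minimizing a loss between the predicted and true lesion masks; here they are random, so the output mask is meaningless but the data flow matches the description.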
For both simulated and clinical PET images, the training-testing process was repeated for different combinations of training, validation and testing sets to assess the robustness of the approach to different data combinations. The accuracy of the segmentation output obtained with the approach was quantified using the metrics of dice similarity coefficient (DSC), Jaccard similarity coefficient (JSC), true positive fraction (TPF) and true negative fraction (TNF)[5]. The CNN accuracy was compared to semi-automated thresholding approaches using 30%, 40% and 50% of SUVmax.
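The four evaluation metrics and the semi-automated thresholding baselines named above have standard definitions that can be computed directly from binary masks. The following self-contained sketch (toy data, not from the study) shows both.

```python
import numpy as np

def seg_metrics(pred, truth):
    """DSC, JSC, TPF (sensitivity), and TNF (specificity) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    return {
        "DSC": 2 * tp / (2 * tp + fp + fn),
        "JSC": tp / (tp + fp + fn),
        "TPF": tp / (tp + fn),
        "TNF": tn / (tn + fp),
    }

def threshold_segment(suv, frac):
    """Semi-automated baseline: keep voxels at or above frac * SUVmax."""
    return suv >= frac * suv.max()

# Toy example: a hot 4x4 "tumor" (SUV 8.0) on a cold background (SUV 1.0)
suv = np.full((10, 10), 1.0)
suv[3:7, 3:7] = 8.0
truth = suv > 4.0

pred40 = threshold_segment(suv, 0.40)   # 40%-of-SUVmax baseline
m = seg_metrics(pred40, truth)
```

On this idealized toy phantom the 40% threshold recovers the tumor exactly (all four metrics equal 1.0); on real PET images, noise and blur degrade the thresholding baselines, which is the gap the CNN approach targets.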

Results: The proposed fully automated deep-learning approach yielded a DSC of 0.85±0.15 and 0.88±0.14 on patient and simulated images, respectively, indicating accurate tumor delineation. Results with the other metrics are reported in supporting data (Table 1). The proposed approach outperformed the semi-automated thresholding methods across the various metrics. Representative segmentation results are shown in Figs. 1B-C.

Conclusions: A CNN-based deep-learning approach to segmentation showed significant promise for fully automated delineation of lung tumors in FDG-PET images.
