Meeting Report | Physics, Instrumentation & Data Sciences

A deep learning-based approach for lesion classification in 3D 18F-DCFPyL PSMA PET images of patients with prostate cancer

Kevin Leung, Mohammad Salehi Sadaghiani, Pejman Dalaie, Rima Tulbah, Yafu Yin, Ryan VanDenBerg, Jeffrey Leal, Saeed Ashrafinia, Michael Gorin, Yong Du, Steven Rowe and Martin Pomper
Journal of Nuclear Medicine May 2020, 61 (supplement 1) 527;
Author affiliations:
1. Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States (Leung, Pomper)
2. The Russell H. Morgan Department of Radiology, Johns Hopkins University, Baltimore, MD, United States (Leung, Salehi Sadaghiani, Dalaie, Tulbah, VanDenBerg, Leal, Ashrafinia, Gorin, Du, Rowe, Pomper)
3. The First Hospital of China Medical University, Shenyang, China (Yin)
Abstract

Objectives: Reliable classification of prostate cancer (PCa) lesions on 18F-DCFPyL prostate-specific membrane antigen (PSMA) PET images is an important clinical need for the diagnosis and prognosis of PCa (1). For this purpose, the PSMA reporting and data system (PSMA-RADS) was developed to classify findings on PSMA-targeted PET scans into categories that reflect the likelihood of PCa (1). Deep learning-based methods have recently shown promise in classification tasks in medical imaging (2,3). In this work, we aimed to develop an automated deep learning-based method for lesion classification in 18F-DCFPyL PET images.

Methods: 18F-DCFPyL PSMA PET images of 267 patients with PCa were manually segmented by four nuclear medicine physicians. Each segmented region was assigned to one of nine possible PSMA-RADS categories (Table 1). The dataset contained 3,724 PCa lesions, with approximately 14 lesions per patient on average. A deep 3D convolutional neural network (3D-CNN) was developed to classify these lesions (Fig. 1a). The 3D PET images were cropped to yield cubic volumes of interest (VOIs) centered on each lesion as the input to the network. VOI sizes of 8³, 16³, 32³, 64³, and 128³ voxels were investigated, and the size that gave the best overall accuracy on the validation set was used in the final model. The VOI containing the lesion and the corresponding manual segmentation were given as inputs to the network, and the output of the 3D-CNN was the predicted PSMA-RADS score. The 3,724 PCa lesions were randomly partitioned into training, validation, and test sets of 2,607, 559, and 558 lesions, respectively (a 70%/15%/15% split). The network was trained with a cross-entropy loss function and a first-order stochastic gradient-based optimization algorithm (4). Early stopping based on the validation-set error was applied to prevent overfitting (5). The method was then evaluated on the test set using standard metrics, including overall accuracy, the confusion matrix, and the area under the receiver operating characteristic curve (AUROC). Overall accuracy was defined as the number of correctly classified observations divided by the total number of observations.
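The abstract does not include implementation details beyond the description above. The sketch below illustrates the described pipeline in PyTorch: cropping a cubic VOI around each lesion center, a small 3D-CNN that takes the PET VOI and its manual segmentation as a two-channel input, cross-entropy training with Adam (a first-order stochastic gradient-based optimizer consistent with the abstract's description), and early stopping on the validation loss. The architecture, layer sizes, learning rate, and helper names such as crop_voi and PSMA3DCNN are illustrative assumptions, not the authors' actual model.

```python
import numpy as np
import torch
import torch.nn as nn

NUM_CLASSES = 9  # PSMA-RADS 1A, 1B, 2, 3A, 3B, 3C, 3D, 4, 5


def crop_voi(volume, center, size=128):
    """Crop a size^3 cubic VOI around a lesion center (z, y, x),
    zero-padding wherever the cube extends past the image boundary."""
    out = np.zeros((size, size, size), dtype=np.float32)
    half = size // 2
    src, dst = [], []
    for c, dim in zip(center, volume.shape):
        lo, hi = c - half, c + half
        src.append(slice(max(lo, 0), min(hi, dim)))
        dst.append(slice(max(lo, 0) - lo, size - (hi - min(hi, dim))))
    out[tuple(dst)] = volume[tuple(src)]
    return out


class PSMA3DCNN(nn.Module):
    """Small 3D CNN; the 2-channel input stacks the PET VOI and its manual
    segmentation, and the output is a 9-way PSMA-RADS logit vector."""

    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(2, 16, 3, padding=1), nn.BatchNorm3d(16), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.BatchNorm3d(32), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, 3, padding=1), nn.BatchNorm3d(64), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):                      # x: (batch, 2, D, H, W)
        return self.classifier(self.features(x).flatten(1))  # raw logits


def train(model, train_loader, val_loader, max_epochs=100, patience=10):
    """Cross-entropy training with Adam and early stopping on validation loss."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()
    best_val, epochs_without_improvement = float("inf"), 0
    for epoch in range(max_epochs):
        model.train()
        for x, y in train_loader:              # x: float VOIs, y: class indices 0-8
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
        model.eval()
        with torch.no_grad():
            val_loss = sum(loss_fn(model(x), y).item() for x, y in val_loader)
        if val_loss < best_val:
            best_val, epochs_without_improvement = val_loss, 0
            torch.save(model.state_dict(), "best_model.pt")
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break                          # early stopping
    model.load_state_dict(torch.load("best_model.pt"))
    return model
```

A two-channel input tensor for a single lesion would then be built as, for example, torch.from_numpy(np.stack([crop_voi(pet, ctr), crop_voi(mask, ctr)]))[None], where pet, mask, and ctr are the patient's PET volume, manual segmentation, and lesion center.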

Results: Using an input VOI size of 128³ voxels, the proposed method yielded an overall accuracy of 67.3% (95% confidence interval [CI]: 63.4%, 71.2%) and AUROCs of 0.97, 0.94, 0.92, 0.90, 0.76, 0.96, 0.99, 0.89, and 0.89 for PSMA-RADS-1A, -1B, -2, -3A, -3B, -3C, -3D, -4, and -5, respectively, on the validation set (Fig. 1b). The 128³-voxel input significantly outperformed the smaller VOI sizes of 8³, 16³, and 32³ voxels in terms of overall accuracy (paired-sample t-test, p < 0.05) (Fig. 1c). On the test set, the 128³-voxel input yielded an overall accuracy of 67.4% (95% CI: 63.5%, 71.3%) and AUROCs of 0.93, 0.95, 0.89, 0.93, 0.82, 0.98, 0.96, 0.88, and 0.90 for PSMA-RADS-1A, -1B, -2, -3A, -3B, -3C, -3D, -4, and -5, respectively (Fig. 1d). The confusion matrix for the proposed method on the test set is shown in Fig. 1e.
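For context, a minimal sketch of how the reported metrics (overall accuracy with a 95% CI, one-vs-rest AUROC per PSMA-RADS category, confusion matrix, and a paired-sample t-test comparing VOI sizes) could be computed with scikit-learn and SciPy. The normal-approximation CI and the use of matched per-split accuracies as the paired samples are assumptions; the abstract does not specify how its CIs or the paired samples were obtained.

```python
import numpy as np
from scipy import stats
from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score

PSMA_RADS = ["1A", "1B", "2", "3A", "3B", "3C", "3D", "4", "5"]


def evaluate(y_true, y_pred, y_prob):
    """y_true, y_pred: integer labels (0-8); y_prob: (n_lesions, 9) softmax scores."""
    acc = accuracy_score(y_true, y_pred)
    half_width = 1.96 * np.sqrt(acc * (1 - acc) / len(y_true))  # normal-approx. 95% CI
    print(f"Overall accuracy: {acc:.3f} "
          f"(95% CI: {acc - half_width:.3f}, {acc + half_width:.3f})")
    for k, name in enumerate(PSMA_RADS):       # one-vs-rest AUROC per category
        auc = roc_auc_score((np.asarray(y_true) == k).astype(int), y_prob[:, k])
        print(f"AUROC PSMA-RADS-{name}: {auc:.2f}")
    print(confusion_matrix(y_true, y_pred))


def compare_voi_sizes(acc_large, acc_small):
    """Paired-sample t-test between matched accuracy estimates for two VOI sizes."""
    t_stat, p_value = stats.ttest_rel(acc_large, acc_small)
    return p_value                             # p < 0.05 => significant difference
```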

Conclusions: A deep learning-based approach for lesion classification in PSMA PET images was developed and showed significant promise towards automated classification of PCa lesions.
