TY - JOUR
T1 - Unsupervised background removal by dual-modality PET/CT guidance: application to PSMA imaging of metastases
JF - Journal of Nuclear Medicine
JO - J Nucl Med
SP - 36
LP - 36
VL - 62
IS - supplement 1
AU - Ivan Klyuzhin
AU - Yixi Xu
AU - Sara Harsini
AU - Anthony Ortiz
AU - Carlos Uribe
AU - Juan Lavista Ferres
AU - Arman Rahmim
Y1 - 2021/05/01
UR - http://jnm.snmjournals.org/content/62/supplement_1/36.abstract
N2 - Objectives: Supervised detection and segmentation of metastatic cancer lesions is an area of active research in medical imaging, including targeted PET/CT imaging of prostate-specific membrane antigen (PSMA). However, because metastases can occur in unpredictable locations, supervised learning methods may require very large collections of segmented images to achieve high levels of performance, and building such datasets requires significant time and resources. As an alternative, we aimed to develop a novel unsupervised framework that subtracts healthy tracer-uptake patterns via deep learning with dual-modality PET/CT guidance, with application to PSMA PET imaging. After removal of the normal background, cancer metastases become more prominent in the residual images. Our method does not require existing lesion segmentations and can leverage lesion-negative images. Methods: We exploit the observation that cancer metastases appear in semi-random locations, whereas patterns of physiological tracer uptake are more consistent between patients. Hence, we attempt to encapsulate the physiological patterns within a fully convolutional neural network. To achieve this, a U-net was trained to "predict" PET images (axial slices) from two inputs: the corresponding CT image and a blurred version of the same PET image (Fig. 1). Blurring was used to obfuscate lesions and to prevent the U-net from learning an identity transformation. Residual images, with the background removed and lesions enhanced, were obtained by subtracting the predicted images from the original images. Our study involved 526 whole-body PET/CT image pairs acquired with the 18F-DCFPyL radiotracer. The transaxial PET and CT image matrix size was 192×192 pixels (3.64×3.64 mm/pixel). Of these, 400 images were used for U-net training, 50 for validation, and 76 for testing. The U-net was implemented in TensorFlow, and the Adam optimizer was used to minimize a mean absolute error loss. Training was performed for 150 epochs in batches of 64 images. The method was evaluated on the test set, which contained 126 prostate cancer lesions segmented by a nuclear medicine resident. In the original and residual test images, we assessed the local background variability B_std, defined as the standard deviation of background standardized uptake values (SUVs) outside a given lesion mask. In addition, the lesion contrast-to-noise ratio (CNR) was evaluated as (L_max - B_mean)/B_std, where L_max is the lesion SUV_max and B_mean is the background SUV_mean.
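The following is a minimal illustrative sketch, not the authors' released code, of the dual-input setup described in the Methods: a small U-net with skip connections maps a 2-channel input (CT slice stacked with a blurred copy of the same PET slice) to the PET slice, trained with Adam and a mean absolute error loss as stated in the abstract. The layer widths, blur sigma, and all names (build_unet, make_input, x_train, etc.) are assumptions for illustration only.

```python
# Illustrative sketch (assumed architecture, not the authors' code): a small
# U-net predicting a 192x192 PET slice from [CT slice, blurred PET slice].
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
from scipy.ndimage import gaussian_filter

def build_unet(shape=(192, 192, 2)):
    inp = tf.keras.Input(shape=shape)
    # Encoder: two downsampling stages.
    c1 = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
    p1 = layers.MaxPooling2D()(c1)
    c2 = layers.Conv2D(64, 3, padding="same", activation="relu")(p1)
    p2 = layers.MaxPooling2D()(c2)
    # Bottleneck.
    b = layers.Conv2D(128, 3, padding="same", activation="relu")(p2)
    # Decoder with skip connections back to the encoder features.
    u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(b)
    c3 = layers.Conv2D(64, 3, padding="same", activation="relu")(
        layers.Concatenate()([u2, c2]))
    u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(c3)
    c4 = layers.Conv2D(32, 3, padding="same", activation="relu")(
        layers.Concatenate()([u1, c1]))
    out = layers.Conv2D(1, 1, activation="linear")(c4)  # predicted PET slice
    return tf.keras.Model(inp, out)

def make_input(ct_slice, pet_slice, sigma=8.0):
    """Stack a CT slice with a blurred PET slice (sigma is an assumption)."""
    blurred = gaussian_filter(pet_slice, sigma=sigma)  # obfuscates lesions
    return np.stack([ct_slice, blurred], axis=-1)

model = build_unet()
# Adam optimizer and mean absolute error loss, matching the abstract.
model.compile(optimizer="adam", loss="mae")
# Training (data loading omitted); 150 epochs, batch size 64:
# model.fit(x_train, y_train, epochs=150, batch_size=64,
#           validation_data=(x_val, y_val))
# Residual image = original PET minus the predicted physiological background:
# residual = pet_original - model.predict(x_test)[..., 0]
```

The key design point, per the abstract, is that the blurred PET channel carries the coarse physiological uptake while hiding small lesions, so the network can only reconstruct the consistent background, and the subtraction leaves lesions in the residual.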
Results: The U-net was able to predict full-resolution PET images with high accuracy (Fig. 2). Small regions of physiological uptake, e.g. the lacrimal glands, were reproduced in the predicted PET images, owing to the consistency of uptake patterns and the anatomical reference provided by CT. In contrast, prostate cancer lesions were strongly attenuated or absent. The U-net could not accurately predict the SUV distribution within the kidneys and ureters. B_std in the residual images remained unchanged or was reduced for all but one lesion; on average, B_std was reduced by 36.2% (median: 33.2%) compared to the original images (Fig. 3; p < 0.0001, paired t-test). Lesion CNR increased on average by 69.3% (median: 33.2%; p < 0.0001). The fitted linear relationship was CNR_R = 1.57 × CNR_O (fit p < 0.001), where CNR_R and CNR_O are the CNR values in the residual and original images, respectively (Fig. 3). Conclusions: Deep convolutional neural networks can accurately reconstruct full-resolution physiological PSMA-PET uptake patterns from a pair of CT and blurred PET images. The residual images reduce the background variability around lesions and improve contrast-to-noise ratios. Hence, residual images can be used as a visual aid for radiology readers or as input to automated lesion detection. The method is unsupervised and does not require annotations. Future work will focus on improved network architectures and semi-supervised extensions of the method for automated lesion segmentation.
ER -
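A short sketch of the evaluation metrics defined in the abstract, B_std (standard deviation of background SUVs outside the lesion mask) and CNR = (L_max - B_mean)/B_std, is given below. The dilation margin used to delimit the "local" background neighbourhood and the function name lesion_metrics are assumptions; the abstract does not specify how the background region was delineated.

```python
# Sketch of the abstract's metrics under an assumed background definition:
# a dilated shell around the lesion mask serves as the local background.
import numpy as np
from scipy.ndimage import binary_dilation

def lesion_metrics(suv, lesion_mask, margin=5):
    """Return (B_std, CNR) for one lesion.

    suv         -- SUV image (2D or 3D ndarray)
    lesion_mask -- boolean ndarray, True inside the lesion
    margin      -- dilation iterations defining the background shell (assumed)
    """
    # Local background: voxels near the lesion but outside its mask.
    neighbourhood = binary_dilation(lesion_mask, iterations=margin)
    background = suv[neighbourhood & ~lesion_mask]
    b_mean, b_std = background.mean(), background.std()
    l_max = suv[lesion_mask].max()  # lesion SUV_max
    return b_std, (l_max - b_mean) / b_std

# Example comparison on original vs. residual images for one lesion:
# b_std_orig, cnr_orig = lesion_metrics(suv_original, mask)
# b_std_res,  cnr_res  = lesion_metrics(suv_residual, mask)
```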