RT Journal Article
SR Electronic
T1 Deep learning-based denoising for PennPET Explorer data
JF Journal of Nuclear Medicine
JO J Nucl Med
FD Society of Nuclear Medicine
SP 574
OP 574
VO 60
IS supplement 1
A1 Jing Wu
A1 Margaret Daube-Witherspoon
A1 Hui Liu
A1 Wenzhuo Lu
A1 John Onofrey
A1 Joel Karp
A1 Chi Liu
YR 2019
UL http://jnm.snmjournals.org/content/60/supplement_1/574.abstract
AB Objectives: Promising results have been reported for low-statistics PET data denoising using a deep convolutional neural network (CNN) trained with standard-statistics images as the target and low-statistics images as the input. The trained network can generate denoised low-statistics images with a noise level similar to that of the standard-statistics images. The long axial field-of-view (AFOV) of the PennPET Explorer scanner provides very high sensitivity and permits low-dose, fast, and/or late imaging. A CNN trained on high-count data from the PennPET scanner should be able to generate virtual-high-statistics images that permit ultra-low-dose, ultra-fast, and/or ultra-late imaging. In this study, we investigated and optimized a fully 3D U-Net architecture for denoising PennPET Explorer data.

Methods: Two subjects were injected with 15 mCi FDG and scanned in a single bed position on the time-of-flight (TOF) PennPET Explorer scanner (currently with a 64-cm AFOV). Subject #1 was scanned for 20 min starting 105 min post-injection. Subject #2 was scanned for 20 min starting 85 min post-injection, and an ultra-late 60-min scan was then acquired after 10 half-lives of FDG (~18 hr post-injection). For each subject, 10 low-count samples (5% of the full counts, 1 min of data) were generated by independent sampling from the full-count (20-min) data. All images were reconstructed with list-mode TOF OSEM (25 subsets, 4 iterations). The image matrix was 288×288×320 with a 2×2×2 mm³ voxel size, cropped to 144×272×320 for denoising. A fully 3D U-Net was trained for denoising by minimizing the L2 loss function with Adam optimization. The 10 low-count images and the 1 full-count image from subject #1 were used as the input and target for training, respectively, and the 10 low-count images from subject #2 were used for testing. To augment the training dataset, a patch-based method with a 64×64×16 patch size was used. To reduce tile-edge artifacts caused by the overlapping-tile strategy, a 144×272×160 patch size was used in testing. Regions of interest (ROIs) over the cerebellum, bone marrow, myocardium, aorta wall, and lung were drawn on the full-count image of subject #2. Quantification accuracy was evaluated as the relative bias of the ROI mean value, with the full-count image as ground truth. Background noise was evaluated as the normalized standard deviation in the lung ROI. Finally, the trained network was applied to the ultra-late scan of subject #2 to generate a low-noise virtual-high-statistics image.

Results: The trained 3D U-Net generated denoised low-count images with image quality comparable to the full-count image. The aorta wall could be clearly identified in the denoised images, but not in the original low-count images. Quantitatively, the noise level in the low-count images was reduced by a factor of about 4.5 after denoising, to a level similar to that of the full-count image. The relative bias in all ROIs was below 6% in the denoised low-count images.
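As an illustration of the patch-based training described in the Methods above, a minimal sketch is given below, assuming a PyTorch implementation. The network depth, channel widths, learning rate, batch size, and iteration count are illustrative assumptions, not the authors' configuration; only the L2 (MSE) objective, Adam optimization, and the 64×64×16 training patch size come from the abstract.

```python
# Illustrative sketch only: patch-based training of a 3D U-Net denoiser with an
# L2 loss and Adam, as outlined in the Methods. Architecture and hyperparameters
# are assumptions, not the authors' configuration.
import numpy as np
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # Two 3x3x3 convolutions with batch norm and ReLU (a common U-Net building block).
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
    )


class UNet3D(nn.Module):
    # Compact two-level 3D U-Net; the actual network may be deeper and wider.
    def __init__(self, ch=16):
        super().__init__()
        self.enc1 = conv_block(1, ch)
        self.enc2 = conv_block(ch, 2 * ch)
        self.bottom = conv_block(2 * ch, 4 * ch)
        self.pool = nn.MaxPool3d(2)
        self.up2 = nn.ConvTranspose3d(4 * ch, 2 * ch, kernel_size=2, stride=2)
        self.dec2 = conv_block(4 * ch, 2 * ch)
        self.up1 = nn.ConvTranspose3d(2 * ch, ch, kernel_size=2, stride=2)
        self.dec1 = conv_block(2 * ch, ch)
        self.out = nn.Conv3d(ch, 1, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottom(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.out(d1)


def sample_patches(low, full, patch=(16, 64, 64), n=4):
    # Randomly crop matched low-count/full-count patches (z, y, x order here);
    # the abstract reports a 64x64x16 training patch size.
    zs, ys, xs = patch
    lows, fulls = [], []
    for _ in range(n):
        z = np.random.randint(0, low.shape[0] - zs + 1)
        y = np.random.randint(0, low.shape[1] - ys + 1)
        x = np.random.randint(0, low.shape[2] - xs + 1)
        lows.append(low[z:z + zs, y:y + ys, x:x + xs])
        fulls.append(full[z:z + zs, y:y + ys, x:x + xs])
    to_tensor = lambda a: torch.from_numpy(np.stack(a)[:, None].astype(np.float32))
    return to_tensor(lows), to_tensor(fulls)


if __name__ == "__main__":
    # Stand-in random volumes; in practice these would be the reconstructed
    # low-count inputs and the full-count target from subject #1.
    low_img = np.random.rand(160, 144, 272).astype(np.float32)
    full_img = np.random.rand(160, 144, 272).astype(np.float32)

    model = UNet3D()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)  # learning rate assumed
    loss_fn = nn.MSELoss()                               # L2 loss, as in the abstract

    for step in range(20):                               # iteration count reduced for demonstration
        x, y = sample_patches(low_img, full_img)
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
```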
Finally, a low-noise virtual-high-statistics ultra-late image was generated using the optimized 3D U-Net, although further network training with images whose noise levels match those of ultra-late scans may be needed.

Conclusions: A fully 3D U-Net is able to predict full-count images from low-count data acquired on the PennPET Explorer scanner, although additional human data are needed to further optimize the network. The high sensitivity of long-AFOV scanners, coupled with an optimized CNN, may permit ultra-low-dose, ultra-fast, and/or ultra-late imaging. The optimized 3D U-Net can further be used to generate high-quality images from ultra-late scans, which may enable significant clinical and research applications.
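As a complement, the ROI-based evaluation described in the Methods (relative bias of the ROI mean against the full-count ground truth, and normalized standard deviation in the lung ROI as a background-noise measure) could be computed roughly as in the NumPy sketch below. The array names and the toy spherical ROI are hypothetical placeholders, and the exact metric definitions used by the authors may differ.

```python
# Illustrative sketch of the evaluation metrics described in the Methods:
# relative bias of the ROI mean (full-count image as ground truth) and
# normalized standard deviation in the lung ROI. Names and masks are placeholders.
import numpy as np


def relative_bias(test_img, full_img, roi_mask):
    # Relative bias (%) of the ROI mean value against the full-count ground truth.
    test_mean = test_img[roi_mask].mean()
    full_mean = full_img[roi_mask].mean()
    return 100.0 * (test_mean - full_mean) / full_mean


def normalized_std(test_img, lung_mask):
    # Normalized standard deviation (%) in the lung ROI as a background-noise measure.
    vals = test_img[lung_mask]
    return 100.0 * vals.std() / vals.mean()


if __name__ == "__main__":
    # Stand-in volumes and a toy spherical ROI; in practice these would be the
    # reconstructed images and the manually drawn ROIs (cerebellum, bone marrow,
    # myocardium, aorta wall, lung) from subject #2.
    shape = (160, 144, 272)
    full_img = np.random.rand(*shape)
    denoised_img = full_img + 0.05 * np.random.randn(*shape)
    zz, yy, xx = np.ogrid[:shape[0], :shape[1], :shape[2]]
    roi = (zz - 80) ** 2 + (yy - 72) ** 2 + (xx - 136) ** 2 < 10 ** 2

    print("relative bias (%):", relative_bias(denoised_img, full_img, roi))
    print("normalized std (%):", normalized_std(denoised_img, roi))
```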