PT - JOURNAL ARTICLE
AU - Wenyi Shao
AU - Yong Du
TI - SPECT Image Reconstruction by Deep Learning Using a Two-Step Training Method
DP - 2019 May 01
TA - Journal of Nuclear Medicine
PG - 1353--1353
VI - 60
IP - supplement 1
4099 - http://jnm.snmjournals.org/content/60/supplement_1/1353.short
4100 - http://jnm.snmjournals.org/content/60/supplement_1/1353.full
SO - J Nucl Med 2019 May 01; 60
AB - Objectives: Machine learning techniques have been widely used for image analysis and outcome prediction in medical imaging, and have recently been applied to image reconstruction in CT, MRI, and PET. However, they have not yet been used for SPECT image reconstruction. In this work, we aim to develop the first deep learning network, named SPECTNet, for SPECT image reconstruction.
Methods: The challenge of training a regression neural network for image reconstruction is the large number of outputs (equal to the number of pixels in the image, for example 128×128), which makes it difficult for the network to converge. To solve this problem, instead of creating a direct mapping from the projection-data domain to the image domain, we first build a mapping from the projection data to a compressed image domain with far fewer outputs (for example 16×16), and then decompress this result to a normal (128×128) image. Since the number of outputs of the projection-data-to-compressed-image network is significantly reduced, it converges much more easily and quickly. The training of such a network, however, proceeds in reverse order. The first step is to train an image compressor-decompressor system by training an auto-encoder (an encoder followed by a decoder) whose output is targeted to be as close as possible to its input; this is an unsupervised learning process. In our study, 25,000 customized digital phantom images were used to train the auto-encoder. Once the auto-encoder was trained, the 25,000 phantom images were compressed into feature images using the trained encoder. These feature images can be thought of as representations of the original images in a lower-dimensional space. Note that, unlike traditional image compression, the feature images only need to be interpretable by the machine, not by humans, so the compression ratio can be very high. In the second step, a neural network comprising 7 convolutional layers and 2 fully connected layers was trained to map the SPECT projection data to the feature images obtained in the first step. Finally, this convolutional neural network is connected to (followed by) the decoder extracted from the auto-encoder to form the complete SPECTNet image reconstruction system. At inference, the input to SPECTNet is the projection data and the attenuation map, and the output is the activity image.
Results: The trained system was validated by reconstructing simulated SPECT projection data never used in training. SPECTNet reconstructed the images accurately: the mean squared error between the SPECTNet reconstruction and the phantom was 2.5×10⁻³, compared with 4.3×10⁻³ for conventional OSEM reconstruction with attenuation and resolution compensation. Additionally, the SPECTNet reconstruction does not show the ring artifact present in the OSEM result.
For noisy data (noise added to the projection data), the background of the OSEM reconstruction becomes inhomogeneous, whereas the SPECTNet image does not show this effect; the variance of the SPECTNet reconstruction is 46.3, compared with 107.2 for OSEM.
Conclusions: In this work we developed a machine learning based SPECT reconstruction method, SPECTNet, that splits a complex deep learning network into two subsystems and trains them separately, which significantly reduces the training difficulty. The results indicate that SPECTNet can accurately reconstruct SPECT images and is less sensitive to noise than OSEM.
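
The first training step described in the abstract (unsupervised training of an image compressor-decompressor) can be illustrated with a minimal PyTorch sketch. The 128×128 image size, the 16×16 feature-image size, and the use of 25,000 phantom images come from the abstract; the layer configuration, optimizer settings, and the random stand-in data below are illustrative assumptions, not the authors' actual architecture.

# Step 1 (sketch): unsupervised auto-encoder training on phantom images.
# Image sizes (128x128 -> 16x16) follow the abstract; layer widths,
# learning rate, and the synthetic data are assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compress a 1x128x128 image to a 1x16x16 feature image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
            nn.Conv2d(32, 1, 3, stride=2, padding=1),              # 32 -> 16
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Decompress a 1x16x16 feature image back to a 1x128x128 image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 64
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),              # 64 -> 128
        )
    def forward(self, z):
        return self.net(z)

encoder, decoder = Encoder(), Decoder()
optimizer = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in for the 25,000 digital phantom images mentioned in the abstract.
phantom_images = torch.rand(64, 1, 128, 128)

for epoch in range(5):
    # Unsupervised: the reconstruction target is the input itself.
    recon = decoder(encoder(phantom_images))
    loss = loss_fn(recon, phantom_images)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The trained encoder then turns each phantom into its 16x16 feature image,
# which becomes the regression target for step 2.
feature_targets = encoder(phantom_images).detach()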
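
The second step can be sketched in the same hedged way. The 7-convolutional-layer / 2-fully-connected-layer layout and the use of the projection data plus attenuation map as input are taken from the abstract; the projection binning (assumed here to be a 128×128 sinogram stacked with the 128×128 attenuation map as two channels), the channel widths, and the optimizer settings are assumptions. The frozen decoder from step 1 is appended to form the end-to-end SPECTNet.

# Step 2 (sketch): map projections + attenuation map to feature images,
# then append the frozen decoder from step 1 to form SPECTNet.
# The 7-conv / 2-FC layout follows the abstract; everything else is assumed.
import torch
import torch.nn as nn

class ProjectionToFeature(nn.Module):
    """7 convolutional layers + 2 fully connected layers -> 1x16x16 feature image."""
    def __init__(self, in_channels=2):  # channel 0: projections, channel 1: attenuation map
        super().__init__()
        chans = [in_channels, 16, 16, 32, 32, 64, 64, 64]
        convs = []
        for i in range(7):
            stride = 2 if i < 4 else 1  # first 4 convs downsample 128x128 -> 8x8
            convs += [nn.Conv2d(chans[i], chans[i + 1], 3, stride=stride, padding=1), nn.ReLU()]
        self.convs = nn.Sequential(*convs)
        self.fc = nn.Sequential(
            nn.Linear(64 * 8 * 8, 512), nn.ReLU(),
            nn.Linear(512, 16 * 16),
        )
    def forward(self, x):
        h = self.convs(x).flatten(1)
        return self.fc(h).view(-1, 1, 16, 16)

class SPECTNet(nn.Module):
    """Full reconstruction network: projection-to-feature CNN followed by the step-1 decoder."""
    def __init__(self, cnn, decoder):
        super().__init__()
        self.cnn, self.decoder = cnn, decoder
        for p in self.decoder.parameters():  # decoder weights come from step 1 and stay fixed
            p.requires_grad_(False)
    def forward(self, proj_and_mu):
        return self.decoder(self.cnn(proj_and_mu))

cnn = ProjectionToFeature()
optimizer = torch.optim.Adam(cnn.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Illustrative stand-ins: projections and attenuation maps stacked as 2 channels,
# paired with the feature-image targets produced by the trained encoder in step 1.
proj_and_mu = torch.rand(64, 2, 128, 128)
feature_targets = torch.rand(64, 1, 16, 16)

for epoch in range(5):
    pred = cnn(proj_and_mu)
    loss = loss_fn(pred, feature_targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Inference: stack the measured projections and attenuation map; the output is
# the reconstructed 128x128 activity image.
# spectnet = SPECTNet(cnn, decoder)   # decoder taken from the step-1 sketch
# activity = spectnet(proj_and_mu[:1])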