RT Journal Article
SR Electronic
T1 Use of deep convolutional neural networks to predict Parkinson’s disease progression from DaTscan SPECT images
JF Journal of Nuclear Medicine
JO J Nucl Med
FD Society of Nuclear Medicine
SP 29
OP 29
VO 59
IS supplement 1
A1 Ivan Klyuzhin
A1 Nikolay Shenkov
A1 Arman Rahmim
A1 Vesna Sossi
YR 2018
UL http://jnm.snmjournals.org/content/59/supplement_1/29.abstract
AB Objectives: Brain SPECT imaging with Ioflupane I 123 (DaTscan) is used clinically to measure striatal dopamine transporter levels and to aid the diagnosis of Parkinson’s disease (PD). It has been shown that mean tracer binding ratios (BRs) in manually defined regions of interest (ROIs) may also help predict the rate of disease progression, e.g. cognitive decline (Caspell-Garcia, 2017) [1]. However, the mean BR may neglect other disease-relevant image features, such as the shape and texture of the tracer distribution. Recently, deep learning has gained popularity as a promising approach that may lead to fundamental advances in imaging-based biomarkers (Vieira, 2017) [2]. Our objective was to test whether deep convolutional neural networks (CNNs) can be trained to extract salient information from DaTscan images: features that are invariant to translation, rotation, and blurring. We trained CNNs that operate on voxels within simple bounding-box (BB) ROIs to predict cognitive decline in de-novo PD patients, and compared the prediction accuracy of the CNNs with that of conventional mean BRs within manually placed ROIs (via a logistic classifier).

Methods: Baseline DaTscan images (91×109×91 voxels, 2.0 mm voxel size) for 370 de-novo PD subjects, as well as mean BR values in the putamen and caudate, were obtained from the PPMI database (www.ppmi-info.org). For CNN training, the images were cropped to a BB (32×48×16 voxels) that encompassed the striatum. The predicted metric was the clinical diagnosis of cognitive impairment at 2-year follow-up: 305 subjects were not cognitively impaired (NCI) and 65 were impaired (CI). The subjects were split into stratified, equally sized train and test sets with approximately equal NCI-to-CI ratios (Fig. 1). The train BRs were used to fit a logistic classifier. The train BB images were augmented for CNN training by applying random rotations, translations, and blurring, yielding 50,000 images. The test set was used to measure the receiver operating characteristic (ROC) area under the curve (AUC) of the logistic and CNN classifiers. A CNN with 3 convolution layers (Fig. 2) was implemented using the TensorFlow library (https://www.tensorflow.org). Each convolution layer was followed by a rectified linear unit (ReLU) activation and a max-pooling layer. Cross-entropy was used as the loss function. Each training iteration included 50 augmented BB images.

Results: The train and test losses plotted against the iteration number (Fig. 3A) demonstrate the CNN learning process: the test loss decreased as the train loss was optimized. The AUC of the CNN increased with the iteration number (Fig. 3B) and stabilized after approximately 400 iterations (~20,000 train BB images). Multiple training repetitions produced similar results. The mean CNN AUC over iterations 400-600 was 0.69±0.02, outperforming the AUC of 0.63 obtained with conventional mean BR values (Fig. 3C). When age was included as a covariate, the AUC of both classifiers was 0.74, similar to previous findings with mean BRs (Schrag, 2017) [3].
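The abstract fixes the BB input size (32×48×16 voxels), three convolution layers each followed by ReLU and max pooling, a cross-entropy loss, batches of 50 augmented images, and augmentation by random rotation, translation, and blurring, but does not give filter counts, kernel sizes, the classification head, or the augmentation parameter ranges. The following is a minimal sketch under those stated constraints, using TensorFlow's Keras API and scipy.ndimage; every unspecified value is an illustrative assumption, not the authors' implementation.

```python
# Minimal sketch of the described pipeline, assuming TensorFlow 2.x and SciPy.
# Filter counts, kernel sizes, the dense head, and the augmentation ranges
# are NOT given in the abstract; they are illustrative assumptions.
import numpy as np
from scipy import ndimage
import tensorflow as tf

def augment(vol, rng):
    """Random rotation, translation, and blurring of one 32x48x16 BB volume."""
    vol = ndimage.rotate(vol, rng.uniform(-10, 10),       # assumed angle range
                         axes=(0, 1), reshape=False, order=1)
    vol = ndimage.shift(vol, rng.uniform(-2, 2, size=3),  # assumed shift range
                        order=1)
    vol = ndimage.gaussian_filter(vol, sigma=rng.uniform(0.0, 1.0))
    return vol

def build_cnn(input_shape=(32, 48, 16, 1)):
    """Three Conv3D+ReLU+max-pool blocks, sigmoid output, cross-entropy loss."""
    model = tf.keras.Sequential([
        tf.keras.layers.Conv3D(8, 3, padding="same", activation="relu",
                               input_shape=input_shape),
        tf.keras.layers.MaxPooling3D(2),
        tf.keras.layers.Conv3D(16, 3, padding="same", activation="relu"),
        tf.keras.layers.MaxPooling3D(2),
        tf.keras.layers.Conv3D(32, 3, padding="same", activation="relu"),
        tf.keras.layers.MaxPooling3D(2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),   # P(CI at 2 years)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(name="auc")])
    return model

# Each training iteration uses 50 augmented BB images, matching the abstract:
# model.fit(x_aug, y_aug, batch_size=50, ...)
```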
Conclusions: Deep CNNs can be trained to predict cognitive impairment in PD subjects from DaTscan SPECT images. The network learned disease-related image features that are invariant to rotation, translation, and blurring. The AUC comparison suggests that CNNs may predict disease progression better than mean BRs. Thus, in conjunction with simplified BB ROIs, deep CNNs offer a fully automated approach to DaTscan image analysis. Since the image features learned by the CNN are rotation- and translation-invariant, the DaTscan images do not need to be realigned, and labor-intensive manual ROI placement is not required.
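For the mean-BR baseline referenced in the AUC comparison, a plausible sketch is a logistic classifier fit on the train-set putamen and caudate BRs and scored by test-set ROC AUC; scikit-learn as the toolkit, the feature layout, and the handling of the age covariate are assumptions, since the abstract names none of these.

```python
# Hedged sketch of the mean-BR baseline: logistic regression on putamen and
# caudate mean BRs, evaluated by ROC AUC on the held-out test set.
# scikit-learn and the variable layout are assumptions, not the authors' code.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def baseline_auc(br_train, y_train, br_test, y_test):
    """br_*: (n_subjects, 2) arrays of [putamen, caudate] mean BRs;
    y_*: binary labels (1 = cognitively impaired at 2-year follow-up)."""
    clf = LogisticRegression().fit(br_train, y_train)
    return roc_auc_score(y_test, clf.predict_proba(br_test)[:, 1])

# Including age as a covariate (the AUC-0.74 variant) adds one column:
# br_train_age = np.column_stack([br_train, age_train])
```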