PT - JOURNAL ARTICLE
AU - Hyun Gee Ryoo
AU - Hongyoon Choi
AU - Teck Huat Wong
AU - Seung Kwan Kang
AU - Jae Sung Lee
AU - Dong Soo Lee
TI - Development of a deep learning-based interpretation model for brain perfusion SPECT leveraging unstructured reading reports
DP - 2019 May 01
TA - Journal of Nuclear Medicine
PG - 402
VI - 60
IP - supplement 1
4099 - http://jnm.snmjournals.org/content/60/supplement_1/402.short
4100 - http://jnm.snmjournals.org/content/60/supplement_1/402.full
SO - J Nucl Med. 2019 May 01; 60
AB - Objectives: Basal/acetazolamide brain perfusion single photon emission computed tomography (SPECT) is a diagnostic imaging tool for evaluating cerebrovascular functional status. In routine clinical practice, the images are interpreted by visual analysis and summarized in unstructured text reports. Here, we developed a deep learning model for the interpretation of brain perfusion SPECT. Because ground-truth labeling for training deep learning models has become a bottleneck for clinical application, we propose a pipeline for developing a deep convolutional neural network (CNN) for SPECT that leverages the extraction of structured labels from a large database of unstructured text reports. Methods: A total of 7345 reading reports and images of basal/acetazolamide stress brain perfusion SPECT were retrospectively collected. A Long Short-Term Memory (LSTM) network was used to extract structured labels for abnormalities of each vascular territory from the text reports. Five hundred reports were selected, and a nuclear medicine physician manually labeled the basal perfusion and vascular reserve status of 9 brain regions corresponding to vascular territories based on these reports. After training of the LSTM model, the remaining 6845 unstructured reports were converted into structured labels for the basal perfusion and vascular reserve status of the 9 brain regions.
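The label-extraction step above maps each report onto per-territory targets. A minimal sketch of that target encoding, assuming hypothetical territory names (the abstract does not enumerate the 9 regions) and binary normal/abnormal findings:

```python
# Sketch of the structured-label encoding described in the abstract:
# 9 vascular-territory regions x 2 tasks (basal perfusion, vascular
# reserve) -> an 18-dimensional binary target vector per study.
# Region names below are placeholders, not from the abstract.
REGIONS = [f"territory_{i}" for i in range(1, 10)]
TASKS = ["basal_perfusion", "vascular_reserve"]

def encode_labels(findings):
    """findings: {region: {task: 'normal' | 'abnormal'}} as parsed from
    a report (e.g. by the LSTM). Returns a flat 0/1 list, 1 = abnormal."""
    vec = []
    for region in REGIONS:
        status = findings.get(region, {})
        for task in TASKS:
            vec.append(1 if status.get(task) == "abnormal" else 0)
    return vec

# Example: a report flagging one territory's vascular reserve as abnormal
labels = encode_labels({"territory_3": {"vascular_reserve": "abnormal"}})
```

Such a fixed-length vector can serve both as the LSTM's output target and as the 3D-CNN's ground-truth label, which is what lets one network's predictions become the other's training signal.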
To discriminate brain perfusion SPECT images with abnormal basal perfusion and vascular reserve from normal images, a 3-dimensional CNN (3D-CNN) was trained using the 6845 pairs of basal and acetazolamide perfusion SPECT images as inputs and the corresponding structured labels of the 9 brain regions as targets. The performance of the automatic image interpretation was measured by the area under the receiver-operating characteristic curve (AUROC) on an independent test set of 500 manually labeled cases. Results: The LSTM network achieved high performance in extracting structured labels from the unstructured text reports of brain perfusion SPECT. The accuracy was 95.3±2.3% (mean ± standard deviation; range: 89.0-99.0%), and the AUROC was 0.97±0.02 (range: 0.93-1.00) for the basal perfusion and vascular reserve of the 9 brain regions. The resulting structured labels, representing the basal perfusion and vascular reserve state of the 9 brain regions, were used as ground-truth labels for 3D-CNN training. The 3D-CNN achieved comparable performance in discriminating images with abnormal perfusion from normal images and performed better at determining vascular reserve status than basal perfusion status. Performance was highest for discriminating the vascular reserve of the internal carotid artery territories, with an accuracy of 89.0±2.5% (range: 87.2-90.8%) and an AUROC of 0.91±0.01 (range: 0.90-0.92). Conclusions: We propose a pipeline for developing a deep learning-based image interpretation system for brain perfusion SPECT using image data combined with unstructured text reports. Considering that labels of medical images in clinical routine are usually unstructured and complicated, an additional deep learning model that synthesizes structured labels may facilitate the development of efficient CNN-based image interpretation models.
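The AUROC reported above for both the LSTM and the 3D-CNN can be computed from score ranks alone. A minimal NumPy sketch (not the authors' evaluation code) using the rank-sum (Mann-Whitney) identity:

```python
import numpy as np

def auroc(y_true, y_score):
    """Area under the ROC curve via the rank-sum (Mann-Whitney)
    identity, averaging ranks over tied scores."""
    y_true = np.asarray(y_true, dtype=bool)
    y_score = np.asarray(y_score, dtype=float)
    order = np.argsort(y_score)
    ranks = np.empty(len(y_score))
    ranks[order] = np.arange(1, len(y_score) + 1)
    for s in np.unique(y_score):          # assign average rank to ties
        tied = y_score == s
        ranks[tied] = ranks[tied].mean()
    n_pos = y_true.sum()
    n_neg = len(y_true) - n_pos
    # Fraction of (positive, negative) pairs ranked correctly
    return (ranks[y_true].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

print(auroc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

In a multi-region setting like this one, the AUROC would be computed per region and per task (basal perfusion, vascular reserve), which matches the per-territory ranges reported in the Results.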
Based on our results, a deep learning-based interpretation system for brain perfusion SPECT images is expected to be useful for supporting human visual interpretation.