TY - JOUR
T1 - Strategy to develop convolutional neural network-based classifier for diagnosis of whole-body FDG PET images
JF - Journal of Nuclear Medicine
JO - J Nucl Med
SP - 326
LP - 326
VL - 59
IS - supplement 1
AU - Keisuke Kawauchi
AU - Kenji Hirata
AU - Seiya Ichikawa
AU - Osamu Manabe
AU - Kentaro Kobayashi
AU - Shiro Watanabe
AU - Miki Haseyama
AU - Takahiro Ogawa
AU - Ren Togo
AU - Tohru Shiga
AU - Chietsugu Katoh
Y1 - 2018/05/01
UR - http://jnm.snmjournals.org/content/59/supplement_1/326.abstract
N2 - Objectives: As the number of PET-CT scanners increases and FDG PET-CT becomes a common study in oncology, demand for artificial intelligence (AI) is growing rapidly, both to compensate for the relative shortage of radiologists and nuclear medicine physicians and to prevent human oversight. The convolutional neural network (CNN) is a deep learning technique well suited to image classification, recognizing complex visual patterns in a manner similar to human perception. Although the ultimate goal of AI diagnosis may be to write a radiology report entirely by itself, as a first step the current study aimed to develop a CNN-based diagnosis system for whole-body FDG PET-CT that predicts whether further review by a physician is required. Methods: In this retrospective, single-center study, we investigated 1,053 consecutive FDG PET-CT studies performed at our institute between January 2017 and June 2017. The scanner was either a Siemens Biograph 64 PET-CT scanner (N=879) or a Philips GEMINI TF-64 scanner (N=174). A nuclear medicine physician classified all cases into the following 3 categories: (1) patient presenting with no obvious abnormality, (2) patient presenting with malignant uptake, or (3) equivocal. Maximum intensity projection (MIP) images (matrix size, 168 × 168) were generated from each study and rotated in 10-degree increments to produce 19 images per patient (~20,000 images in total). The data were then augmented to 45,000 images by adding noise and applying parallel translation. A CNN was employed as an automated classifier for (1) vs. (2) vs. (3). The CNN was trained and validated on a randomly selected 70% of the images, while the remaining 30% were held out for testing. The process was repeated 5 times to calculate the accuracy. The experiment was performed under the following environment: OS, Windows 10 Pro 64-bit; CPU, Intel Core i7-6700K; GPU, 2 × NVIDIA GeForce GTX 1060 6GB; framework, Keras 2.0.2 and TensorFlow 1.3.0; language, Python 3.5.2; CNN, original CNN (convolution layers, 5; max-pooling layers, 4); optimizer, Adam. Results: A total of 45,000 images were provided, and each category consisted of approximately 15,000 images. The CNN required ~5 minutes to train on each fold dataset and <30 seconds for prediction. When images of patients presenting with no obvious abnormality were given to the trained model, the accuracy was up to 95.7±3.5%. Similarly, the accuracies for images of patients presenting with malignant uptake and for equivocal images were 93.2±3.9% and 87.8±5.3%, respectively. On patient-based analysis, nearly all patients were correctly categorized, resulting in an overall accuracy of 98%. Conclusions: The convolutional neural network-based system successfully classified whether FDG PET images required further evaluation by a nuclear medicine physician. Such a system could reduce physicians' burden and the risk of oversight.
ER -
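
For illustration, a minimal sketch of the MIP step described in the abstract: rotating a 3D PET volume in 10-degree increments and projecting by voxel-wise maximum to obtain 19 views per patient. The function name mip_views, the (z, y, x) axis ordering, the projection axis, and the interpolation settings are assumptions not stated in the abstract.

    from scipy.ndimage import rotate

    def mip_views(volume, n_views=19, step_deg=10):
        # volume: 3D PET array assumed ordered (z, y, x)
        views = []
        for i in range(n_views):
            # rotate about the craniocaudal (z) axis, i.e. within the (y, x) plane
            rotated = rotate(volume, angle=i * step_deg, axes=(1, 2),
                             reshape=False, order=1)
            # maximum intensity projection, assumed along the anteroposterior axis
            views.append(rotated.max(axis=1))
        return views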
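
Likewise, a hedged sketch of the augmentation step (noise plus parallel translation) that expanded the ~20,000 MIP images to 45,000. The shift range and noise level below are assumptions; the abstract does not specify them.

    import numpy as np
    from scipy.ndimage import shift

    def augment(image, rng, max_shift=8, noise_sigma=0.02):
        # random parallel translation of up to max_shift pixels per axis (assumed range)
        dy, dx = rng.randint(-max_shift, max_shift + 1, size=2)
        shifted = shift(image, (dy, dx), order=0, mode='constant', cval=0.0)
        # additive Gaussian noise scaled to the image's peak intensity (assumed model)
        noisy = shifted + rng.normal(0.0, noise_sigma * image.max(), size=image.shape)
        return np.clip(noisy, 0.0, None)

    # usage: augment(mip_image, np.random.RandomState(0))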
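
The abstract specifies an "original CNN" with 5 convolution layers, 4 max-pooling layers, and the Adam optimizer, trained on 168 × 168 MIP images with 3 output classes. A minimal Keras sketch under those constraints follows; the filter counts, kernel sizes, single-channel input, and dense head are assumptions, since the abstract gives only the layer counts and optimizer.

    from keras.models import Sequential
    from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

    model = Sequential([
        Conv2D(32, (3, 3), activation='relu', input_shape=(168, 168, 1)),
        MaxPooling2D((2, 2)),
        Conv2D(64, (3, 3), activation='relu'),
        MaxPooling2D((2, 2)),
        Conv2D(64, (3, 3), activation='relu'),
        MaxPooling2D((2, 2)),
        Conv2D(128, (3, 3), activation='relu'),
        MaxPooling2D((2, 2)),
        Conv2D(128, (3, 3), activation='relu'),
        Flatten(),
        Dense(256, activation='relu'),
        # 3 classes: (1) no abnormality, (2) malignant uptake, (3) equivocal
        Dense(3, activation='softmax'),
    ])
    model.compile(optimizer='adam', loss='categorical_crossentropy',
                  metrics=['accuracy'])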
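
Finally, a sketch of the evaluation protocol as described: a random 70%/30% split repeated 5 times, with accuracy summarized across repetitions. X, y, build_model, the epoch count, and the use of validation_split to carve validation data out of the 70% are all placeholders or assumptions.

    import numpy as np
    from sklearn.model_selection import ShuffleSplit

    def evaluate_5x(X, y, build_model, epochs=10):
        # X: (n, 168, 168, 1) MIP images; y: (n, 3) one-hot labels;
        # build_model: factory returning a fresh compiled model (e.g. the sketch above)
        splitter = ShuffleSplit(n_splits=5, test_size=0.30, random_state=0)
        accuracies = []
        for train_idx, test_idx in splitter.split(X):
            model = build_model()
            model.fit(X[train_idx], y[train_idx], epochs=epochs,
                      validation_split=0.1, verbose=0)
            _, acc = model.evaluate(X[test_idx], y[test_idx], verbose=0)
            accuracies.append(acc)
        return np.mean(accuracies), np.std(accuracies)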