TY - JOUR
T1 - Experience with Machine Learning to detect Breast CA and other tissue in PET/CT
JF - Journal of Nuclear Medicine
JO - J Nucl Med
SP - 1209
LP - 1209
VL - 60
IS - supplement 1
AU - Mohammad Salehi Sadaghiani
AU - Lilja Solnes
AU - Jeffrey Leal
Y1 - 2019/05/01
UR - http://jnm.snmjournals.org/content/60/supplement_1/1209.abstract
N2 - Objectives: To evaluate a semantic segmentation convolutional neural network for identifying regions of 18F-FDG avidity in PET/CT cases of breast CA. Pre-screening of imaging studies by trained artificial intelligence systems has the potential to increase a radiologist's efficiency and accuracy by preferentially highlighting areas of abnormal avidity over areas of normal avidity, allowing the radiologist to immediately focus their attention on the areas with the highest probability of representing disease. Methods: Fifty-three baseline PET/CT studies from an IRB-approved multi-institutional clinical trial of breast CA were used in this study. Data were processed using in-house software (Auto-PERCIST™) to generate voxel classification maps of all voxels within the study volume. Voxels exterior to the body were auto-detected and classified as 'background', whereas voxels within the body were initially classified as 'nominal'. A 'reference' value was calculated using a 3 cm diameter spherical volume-of-interest (VOI) placed in the liver. All clusters (of 7 or more connected voxels) with a mean SUL (SUV corrected for lean body mass) greater than a calculated threshold (mean + 2 SD of the 'reference' VOI) were identified and presented to a radiologist for classification into one of 13 tissue-type categories (for a total of 16 categories) [Figure 1-A]. The paired PET image data and classification maps were then exported for use as training and testing data. Seventy percent (70%) of the paired data were randomly selected for training, with the remaining 30% reserved for validation. Using MATLAB 2018b, we developed a semantic segmentation convolutional neural network, which we trained on the previously described data. Our network consisted of 17 layers, with 5 distinct convolutional layers of progressively decreasing filter size. The network was trained for 32 epochs of 385 iterations, for a total of 12,320 iterations, with a learning rate of 0.001. Results: Validation testing of the trained neural network demonstrated a high degree of global accuracy (97.3%) in the detection of the 16 classification categories. The weighted Intersection over Union (IoU), a ratio of correctly classified voxels, was 95.7%, and the mean Boundary F1 score (a measure of boundary accuracy) was 74.6% [Table 1-A]. For individual tissue types, accuracy rates ranged between 62.8% and 99.3% [Table 1-B]. Review of the resulting auto-classified / auto-segmented image sets [Figure 1-B] allowed visual confirmation of the accuracy of the tissue detection, while also highlighting areas requiring further refinement, as evidenced by noise and boundary discordance. Conclusions: Machine Learning holds much promise as a means of increasing efficiency and accuracy in the interpretation and evaluation of medical images. In this work, we have demonstrated that these techniques have applicability in the realm of molecular imaging. We see a potential use of Machine Learning as a tool for pre-processing image data in a way that assists the physician by both highlighting areas of abnormal hyper-avidity and suppressing signal from areas of normal avidity. At this time, however, we recognize that much work remains, in particular in network design, in the encoding of source and classification training data, and in the post-processing and interpretation of auto-segmented data.
ER -
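As a rough illustration of the thresholding step described in the Methods: the liver 'reference' VOI supplies a mean and standard deviation of SUL, and candidate lesions are clusters of 7 or more connected voxels above mean + 2 SD. The following is a minimal MATLAB sketch, not the Auto-PERCIST™ implementation (which is in-house software); all variable names are hypothetical, and applying the threshold voxelwise before the cluster-size filter, rather than to cluster means, is an assumption.

% Hypothetical inputs: sulVolume is a 3-D array of SUL values for one
% PET study; liverMask is a logical mask of a 3 cm diameter spherical
% VOI placed in the liver.
refMean   = mean(sulVolume(liverMask));
refSD     = std(sulVolume(liverMask));
threshold = refMean + 2*refSD;            % mean + 2 SD of the 'reference' VOI

% Voxelwise threshold, then drop clusters smaller than 7 connected voxels
% (26-connectivity in 3-D; requires the Image Processing Toolbox).
aboveThresh = sulVolume > threshold;
candidates  = bwareaopen(aboveThresh, 7, 26);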
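The abstract specifies a 17-layer network with 5 convolutional layers of progressively decreasing filter size, trained for 32 epochs at a learning rate of 0.001, but not the exact layer sequence, filter sizes, solver, input size, or class names. The MATLAB R2018b sketch below is one plausible reading under those constraints; the placeholder class names, data paths, and every unstated hyperparameter are illustrative assumptions (requires the Deep Learning and Computer Vision Toolboxes).

% 16 classes per the abstract; the 14 tissue-type names are not listed
% there, so generic placeholders are used.
classNames = ["background"; "nominal"; "tissue" + string((1:14)')];
numClasses = 16;

layers = [
    imageInputLayer([128 128 1])                   % input size is a guess
    convolution2dLayer(11, 16, 'Padding','same')   % filter sizes shrink:
    batchNormalizationLayer                        % 11 -> 9 -> 7 -> 5 -> 3
    reluLayer
    convolution2dLayer(9, 32, 'Padding','same')
    batchNormalizationLayer
    reluLayer
    convolution2dLayer(7, 32, 'Padding','same')
    batchNormalizationLayer
    reluLayer
    convolution2dLayer(5, 64, 'Padding','same')
    batchNormalizationLayer
    reluLayer
    dropoutLayer(0.5)
    convolution2dLayer(3, numClasses, 'Padding','same')
    softmaxLayer
    pixelClassificationLayer];                     % 17 layers in total

% Hypothetical data layout: one 2-D slice per image, with a matching
% label map encoding the 16 classes as integers 0..15.
imds = imageDatastore("train/pet/");
pxds = pixelLabelDatastore("train/labels/", classNames, 0:15);
trainData = pixelLabelImageDatastore(imds, pxds);

opts = trainingOptions('sgdm', ...       % solver is an assumption
    'InitialLearnRate', 1e-3, ...        % learning rate from the abstract
    'MaxEpochs', 32, ...                 % epochs from the abstract
    'Verbose', true);

net = trainNetwork(trainData, layers, opts);

The 385 iterations per epoch quoted in the abstract imply a mini-batch size such that training-set size divided by mini-batch size is about 385; since neither figure is given, the mini-batch size is left at MATLAB's default here.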
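The Results quote global accuracy, weighted IoU, and mean Boundary F1 score, which match the dataset statistics reported by MATLAB's evaluateSemanticSegmentation, so the validation step may have resembled the sketch below; the paths are hypothetical and the network and class names carry over from the previous sketch.

% Run the trained network over the held-out 30% and score it against
% the radiologist-classified label maps.
imdsVal   = imageDatastore("val/pet/");
pxdsTruth = pixelLabelDatastore("val/labels/", classNames, 0:15);

pxdsResults = semanticseg(imdsVal, net, 'WriteLocation', tempdir);
metrics = evaluateSemanticSegmentation(pxdsResults, pxdsTruth);

metrics.DataSetMetrics   % GlobalAccuracy, WeightedIoU, MeanBFScore, ...
metrics.ClassMetrics     % per-tissue Accuracy, IoU, and boundary F1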