Abstract
3009
Objectives: The medical community currently lacks a reliable gold standard for diagnosing Alzheimer’s Disease (AD), as well as an exact understanding of its cause. As a result, multiple different brain tracers are commonly used. Since amyloid and tau tracers measure different molecular changes associated with AD, it is helpful to measure both of these predisposing factors for an informed AD diagnosis. However, this requires scheduling multiple imaging sessions, which is costly and may be difficult for participants. We therefore explored whether it might be possible to inject both tracers simultaneously and use deep learning to “disentangle” them into separate “pure” amyloid and tau images, similar to identifying different speakers in a crowded room (the so-called “cocktail party problem”). In this study, we demonstrate a deep learning approach, DualNet, that predicts separate amyloid and tau PET scans from a combined dual-tracer image simulated from scans in the ADNI database.
Methods: From ADNI, we identified 482 pairs of normalized amyloid (AV-45) and tau (AV-1451) scans (264/133/37/6/42 pairs for a CDR score breakdown of 0/0.5/1/2/unknown). Each tau-amyloid pair represents a unique patient; for each patient, we selected the amyloid and tau scans closest in time to one another out of all relevant scans for that patient. The input to the model is the summation of the amyloid and tau scans, and the output is the separate amyloid and tau scans. Since the tau and amyloid scans are not acquired concurrently, we account for this by using the co-registered, standard-voxel-size images available in ADNI and normalizing the sum of voxel values to 1. For the DualNet model, we use an architecture that follows the general residual UNet structure introduced in Chen et al.’s RED-CNN, except that it uses a VGG-11 encoder pre-trained on ImageNet, similar to Iglovikov et al.’s TernausNet. We also use Oktay et al.’s attention gate mechanisms for our residual connections, while removing the batchnorm layer present in Oktay et al.’s original implementation. Lastly, to simulate possible imbalances between amyloid and tau counts during PET acquisition, we experiment with training our model to handle 1:1, 1:3, and 3:1 ratios of amyloid to tau in the combined dual-dose scan. To compare models, we used a baseline that apportions the combined image to amyloid and tau in proportion to each tracer’s overall counts, and assessed quality using the mean absolute error (L1 loss) and mean squared error (MSE).
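The dual-tracer simulation and the count-proportional baseline described above can be sketched as follows. This is a minimal illustration, not the study's code: the function names and the array-based formulation are assumptions, and the real pipeline operates on co-registered ADNI volumes rather than random arrays.

```python
import numpy as np

def simulate_dual_scan(amyloid, tau, ratio=(1.0, 1.0)):
    """Combine co-registered amyloid and tau volumes into one simulated
    dual-tracer image, weighting by the dose ratio (e.g. 1:1, 1:3, 3:1)
    and normalizing the sum of voxel values to 1, as in Methods."""
    w_a, w_t = ratio
    combined = w_a * amyloid + w_t * tau
    return combined / combined.sum()

def baseline_split(combined, amyloid, tau):
    """Count-proportional baseline: attribute the combined image to each
    tracer in proportion to that tracer's overall counts."""
    frac_a = amyloid.sum() / (amyloid.sum() + tau.sum())
    return frac_a * combined, (1.0 - frac_a) * combined

def l1_loss(pred, target):
    """Mean absolute error between predicted and true scans."""
    return float(np.abs(pred - target).mean())

def mse_loss(pred, target):
    """Mean squared error between predicted and true scans."""
    return float(((pred - target) ** 2).mean())
```

A baseline prediction for a pair of scans would then be obtained by forming the combined image and splitting it back with `baseline_split`, scoring the result with `l1_loss` and `mse_loss` against the true (normalized) scans.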
Results: Overall, DualNet performs significantly better than the baseline model, with 44% and 71% reductions in L1 and MSE loss, respectively, as well as a 32% increase in PSNR. Furthermore, from a qualitative perspective, the vast majority of predicted scans are visually indistinguishable from the true amyloid and tau scans. However, in patients with a higher CDR score, the model does not perform as well on these less-represented (class-imbalanced) cases as it does on average. In patients with a CDR score of 2.0, the model performs the worst: the average L1 loss for amyloid and tau scans is 33% higher than over the full dataset, and the improvement over the baseline shrinks to 28%.
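For reference, the PSNR figure reported above can be computed from the MSE as follows. This is a minimal sketch; the peak value is taken as the maximum of the reference image, which is one common convention, and the abstract does not state which convention was used.

```python
import numpy as np

def psnr(pred, target):
    """Peak signal-to-noise ratio in dB, with the peak taken as the
    maximum voxel value of the reference image (one common convention)."""
    mse = ((pred - target) ** 2).mean()
    return 10.0 * np.log10(target.max() ** 2 / mse)
```

Higher PSNR indicates a closer match; a small perturbation of the reference yields a higher PSNR than a larger one.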
Conclusions: On average, starting from a combined amyloid-tau image, the DualNet deep learning method can produce separate amyloid and tau scans that show a marked improvement over the baseline model and that rate highly in qualitative assessment. Future work will include clinical assessment of the predicted scans along with quantitative SUVR measurements, as well as more realistic simulations of the combined images using acquired list-mode data for each tracer. Overall, the method offers the potential to simplify the molecular imaging commonly done in AD, reducing costs as well as patient burden, and as such may be helpful for both clinical assessment and drug trials.