Abstract
P1391
Introduction: The progression of Alzheimer’s disease (AD), which is biologically marked by the pathological distribution of neurofibrillary tau tangles and amyloid beta (Aβ) plaques, can be predicted with the help of graph neural networks (GNNs). While GNNs provide an effective and accurate data-driven approach to model the complex patterns of tau spread, potentiated by Aβ, along the connectome of the human brain, their black-box nature yields relatively few neurobiological insights. The lack of deducible intuition behind the predicted tau propagation patterns necessitates the use of explainable artificial intelligence (XAI) methods to evaluate the strategies used by the GNN. Here we investigate the use of three graph-based explainability models (DeepLIFT, GradCAM and GNNExplainer) on the standardized uptake value ratios (SUVRs) of follow-up tau PET scans predicted from baseline scans, to identify the key subset of nodes and edges that is meaningfully indicative of the future spatiotemporal trajectory of tau.
Methods: The data used in this study consist of tau and Aβ SUVRs from 66 regions of interest (ROIs) derived from two-timepoint tau and amyloid PET imaging using the 18F-flortaucipir and Pittsburgh compound B radiotracers, respectively, sourced from the Harvard Aging Brain Study (HABS). This longitudinal dataset of 163 subjects (102 females) is split into 112 training (70 females) and 51 test (32 females) subsets and passed to a 3-layer graph isomorphism network (GIN) as the GNN. An L2 loss function is regularized with a physics-informed term based on the Fisher-Kolmogorov-Petrovsky-Piskunov (FKPP) network diffusion equation, which models tau production and diffusion. A population-level connectivity profile (edges) between the ROIs (nodes) is obtained using diffusion tensor imaging (DTI). The predicted follow-up tau SUVRs are independently passed, along with the input tau and Aβ SUVRs and the trained GIN, to each of three graph explainer models from the PyTorch-based Dive Into Graphs (DIG) library: (i) DeepLIFT, which attributes the prediction to input features by backpropagating contribution scores relative to a reference input; (ii) GradCAM, which uses the gradients of the prediction at the final graph convolutional layer to produce a localization map; and (iii) GNNExplainer, which is model-agnostic and solves an optimization problem to identify the subgraph of nodes and edges most critical to the prediction. The resultant edge-importance mask from each explainer is used to identify the corresponding subset of critical nodes, which is then compared with the established biologically relevant ROIs of tau propagation.
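As a rough illustration of the physics-informed regularization described above, the sketch below adds to the L2 prediction loss the residual of a network FKPP reaction-diffusion model, dc/dt = -beta·L·c + alpha·c⊙(1-c), with L the graph Laplacian of the DTI connectome; the exact residual form, the diffusion/production coefficients and the inter-scan interval are illustrative assumptions rather than the study's precise formulation.

```python
import torch

def physics_informed_loss(pred_tau, target_tau, baseline_tau, laplacian,
                          delta_t=1.0, alpha=0.5, beta=0.5, lam=0.1):
    """L2 prediction loss regularized by a network FKPP residual (illustrative)."""
    # Data-fitting term: L2 error between predicted and observed follow-up SUVRs
    data_loss = torch.mean((pred_tau - target_tau) ** 2)

    # Finite-difference rate of change of tau between the two timepoints
    dtau_dt = (pred_tau - baseline_tau) / delta_t

    # FKPP right-hand side: diffusion along the connectome plus logistic production
    fkpp_rhs = -beta * (laplacian @ pred_tau) + alpha * pred_tau * (1.0 - pred_tau)

    # Physics residual: the prediction should approximately follow the FKPP dynamics
    physics_loss = torch.mean((dtau_dt - fkpp_rhs) ** 2)

    return data_loss + lam * physics_loss
```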
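The final step of converting each explainer's edge-importance mask into a subset of critical nodes could look something like the following sketch; the top-k selection rule and all names are hypothetical choices for illustration, not taken from the study.

```python
import torch

def critical_nodes_from_edge_mask(edge_index, edge_mask, top_k=20):
    """Nodes incident to the top-k most important edges of an explainer mask.

    edge_index: [2, n_edges] COO connectivity of the DTI-derived graph
    edge_mask:  [n_edges] edge-importance scores returned by a graph explainer
    """
    # Indices of the k edges the explainer scores as most important
    top_edges = torch.topk(edge_mask, k=min(top_k, edge_mask.numel())).indices

    # Critical nodes are the ROIs touched by at least one of those edges
    return torch.unique(edge_index[:, top_edges])
```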
Results: To assess the relative performance of the three explainability models, we calculated each model’s average importance score over the consensus set of important nodes and edges identified across all three models, with the node importance, edge importance and proportion of inter-lobe edge inference ordered as GNNExplainer (0.2151, 0.2332, 49.27%) > GradCAM (0.1796, 0.1382, 26.08%) > DeepLIFT (0.1465, 0.1491, 34.78%). In addition, the temporal lobe held the largest proportion of important ROIs (26.31%), followed by the occipital lobe (18.42%) and the limbic region (18.42%).
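For concreteness, the summary metrics reported above (mean node and edge importance over the consensus set, and the share of inter-lobe edges) could be computed along the lines of the sketch below; the consensus index sets, lobe labels and function name are hypothetical placeholders rather than the study's actual implementation.

```python
import numpy as np

def explainer_summary(node_scores, edge_scores, consensus_nodes, consensus_edges,
                      edge_index, node_lobe):
    """Mean importance over consensus node/edge sets and inter-lobe edge share.

    node_scores / edge_scores: importances from one explainer
    consensus_nodes / consensus_edges: indices of important nodes/edges pooled
        across all three explainers (hypothetical inputs)
    edge_index: [2, n_edges] endpoints of each edge; node_lobe: lobe label per ROI
    """
    mean_node_importance = node_scores[consensus_nodes].mean()
    mean_edge_importance = edge_scores[consensus_edges].mean()

    # An edge counts as inter-lobe when its two endpoint ROIs lie in different lobes
    src, dst = edge_index[:, consensus_edges]
    inter_lobe_share = float(np.mean(node_lobe[src] != node_lobe[dst]))

    return mean_node_importance, mean_edge_importance, inter_lobe_share
```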
Conclusions: The distinct pattern of pathological tau propagation can be interpreted using graph explainers that provide an importance mapping of the nodes and edges driving the GNN prediction. Our analysis suggests that the model-agnostic GNNExplainer, which assesses only the relations between the input and predicted data, performs interpretability tasks better than the model-specific frameworks DeepLIFT and GradCAM, which rely on the back-propagated activations and gradient weights within the model. Graph-domain XAI techniques, in particular model-agnostic approaches, are therefore promising for shedding light on tau spread and for furthering our understanding of AD progression.