CI-GNN: A Granger Causality-Inspired Graph Neural Network for
Interpretable Brain Network-Based Psychiatric Diagnosis
- URL: http://arxiv.org/abs/2301.01642v3
- Date: Sun, 28 Jan 2024 08:56:23 GMT
- Title: CI-GNN: A Granger Causality-Inspired Graph Neural Network for
Interpretable Brain Network-Based Psychiatric Diagnosis
- Authors: Kaizhong Zheng, Shujian Yu, Badong Chen
- Abstract summary: We propose a Granger causality-inspired graph neural network (CI-GNN) to explain brain-network-based psychiatric diagnosis.
CI-GNN learns disentangled subgraph-level representations α and β that encode, respectively, the causal and non-causal aspects of the original graph.
We empirically evaluate the performance of CI-GNN against three baseline GNNs and four state-of-the-art GNN explainers on synthetic data and three large-scale brain disease datasets.
- Score: 40.26902764049346
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There is a recent trend to leverage the power of graph neural networks (GNNs)
for brain-network based psychiatric diagnosis, which, in turn, also motivates an
urgent need for psychiatrists to fully understand the decision behavior of the
used GNNs. However, most of the existing GNN explainers are either post-hoc in
which another interpretive model needs to be created to explain a well-trained
GNN, or do not consider the causal relationship between the extracted
explanation and the decision, such that the explanation itself contains
spurious correlations and suffers from weak faithfulness. In this work, we
propose a Granger causality-inspired graph neural network (CI-GNN), a built-in
interpretable model that is able to identify the most influential subgraph
(i.e., functional connectivity within brain regions) that is causally related
to the decision (e.g., major depressive disorder patients or healthy controls),
without the training of an auxiliary interpretive network. CI-GNN learns
disentangled subgraph-level representations α and β that encode,
respectively, the causal and non-causal aspects of the original graph under a graph
variational autoencoder framework, regularized by a conditional mutual
information (CMI) constraint. We theoretically justify the validity of the CMI
regulation in capturing the causal relationship. We also empirically evaluate
the performance of CI-GNN against three baseline GNNs and four state-of-the-art
GNN explainers on synthetic data and three large-scale brain disease datasets.
We observe that CI-GNN achieves the best performance in a wide range of metrics
and provides more reliable and concise explanations that are supported by clinical
evidence. The source code and implementation details of CI-GNN are freely
available at GitHub repository (https://github.com/ZKZ-Brain/CI-GNN/).
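The CMI constraint is the key regularizer in the abstract above: it controls how much information one latent block carries about the target once the other block is known. As a rough illustration of what a conditional mutual information score I(X; Y | Z) measures, here is a closed-form estimator under a jointly Gaussian assumption. This is only a sketch of the quantity being constrained, not the estimator CI-GNN actually uses; all variable names are illustrative.

```python
import numpy as np

def gaussian_cmi(x, y, z):
    """Estimate I(X; Y | Z) in nats, assuming the stacked samples
    are jointly Gaussian. Each argument is an (n_samples, dim) array.

    Closed form for Gaussians:
        I(X;Y|Z) = 0.5 * [ logdet(S_xz) + logdet(S_yz)
                           - logdet(S_z) - logdet(S_xyz) ]
    where S_* are covariance matrices of the stacked variables.
    """
    def logdet_cov(*arrs):
        data = np.hstack(arrs)
        cov = np.atleast_2d(np.cov(data, rowvar=False))
        _, logdet = np.linalg.slogdet(cov)
        return logdet

    return 0.5 * (logdet_cov(x, z) + logdet_cov(y, z)
                  - logdet_cov(z) - logdet_cov(x, y, z))

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=(n, 2))
# X and Y are both driven by Z plus independent noise,
# so X is (approximately) independent of Y given Z:
x = z + rng.normal(size=(n, 2))
y = z + rng.normal(size=(n, 2))
cmi_small = gaussian_cmi(x, y, z)   # close to 0

# Now make Y depend on X directly, beyond what Z explains:
y2 = x + 0.1 * rng.normal(size=(n, 2))
cmi_large = gaussian_cmi(x, y2, z)  # clearly positive
```

Driving an estimate like this toward zero is one way to enforce that a latent block (e.g., the non-causal β) adds no information about the label beyond what the causal block α already provides.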
Related papers
- Graph Neural Network Causal Explanation via Neural Causal Models [14.288781140044465]
Graph neural network (GNN) explainers identify the important subgraph that ensures the prediction for a given graph.
We propose name, a GNN causal explainer via causal inference.
name significantly outperforms existing GNN explainers in exact ground-truth explanation identification.
arXiv Detail & Related papers (2024-07-12T15:56:33Z)
- Information Flow in Graph Neural Networks: A Clinical Triage Use Case [49.86931948849343]
Graph Neural Networks (GNNs) have gained popularity in healthcare and other domains due to their ability to process multi-modal and multi-relational graphs.
We investigate how the flow of embedding information within GNNs affects the prediction of links in Knowledge Graphs (KGs).
Our results demonstrate that incorporating domain knowledge into the GNN connectivity leads to better performance than using the same connectivity as the KG or allowing unconstrained embedding propagation.
arXiv Detail & Related papers (2023-09-12T09:18:12Z)
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- Transferability of coVariance Neural Networks and Application to Interpretable Brain Age Prediction using Anatomical Features [119.45320143101381]
Graph convolutional networks (GCN) leverage topology-driven graph convolutional operations to combine information across the graph for inference tasks.
We have studied GCNs with covariance matrices as graphs in the form of coVariance neural networks (VNNs).
VNNs inherit the scale-free data processing architecture from GCNs and here, we show that VNNs exhibit transferability of performance over datasets whose covariance matrices converge to a limit object.
arXiv Detail & Related papers (2023-05-02T22:15:54Z)
- Generalizing Graph Neural Networks on Out-Of-Distribution Graphs [51.33152272781324]
Most Graph Neural Networks (GNNs) are proposed without considering the distribution shifts between training and testing graphs.
In such a setting, GNNs tend to exploit subtle statistical correlations in the training set for prediction, even when those correlations are spurious.
We propose a general causal representation framework, called StableGNN, to eliminate the impact of spurious correlations.
arXiv Detail & Related papers (2021-11-20T18:57:18Z)
- Ego-GNNs: Exploiting Ego Structures in Graph Neural Networks [12.97622530614215]
We show that Ego-GNNs are capable of recognizing closed triangles, which is essential given the prominence of transitivity in real-world graphs.
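The transitivity property referred to here can be made concrete: a closed triangle corresponds to a closed walk of length 3, so plain adjacency algebra already counts them, even though standard message-passing GNNs are known to struggle with triangle detection. A minimal NumPy illustration of the counting identity (not the Ego-GNN method itself):

```python
import numpy as np

# Adjacency matrix of a 4-node graph: a triangle on nodes
# 0-1-2, plus a pendant node 3 attached to node 2.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])

# Each triangle contributes 6 closed walks of length 3
# (3 starting nodes x 2 directions), so:
triangles = np.trace(np.linalg.matrix_power(A, 3)) // 6
# triangles == 1
```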
arXiv Detail & Related papers (2021-07-22T23:42:23Z)
- GNNLens: A Visual Analytics Approach for Prediction Error Diagnosis of Graph Neural Networks [42.222552078920216]
Graph Neural Networks (GNNs) aim to extend deep learning techniques to graph data.
GNNs behave like a black box with their details hidden from model developers and users.
It is therefore difficult to diagnose possible errors of GNNs.
This paper fills the research gap with an interactive visual analysis tool, GNNLens, to assist model developers and users in understanding and analyzing GNNs.
arXiv Detail & Related papers (2020-11-22T16:09:08Z)
- Understanding Graph Isomorphism Network for rs-fMRI Functional Connectivity Analysis [49.05541693243502]
We develop a framework for analyzing fMRI data using the Graph Isomorphism Network (GIN).
One of the important contributions of this paper is the observation that the GIN is a dual representation of convolutional neural network (CNN) in the graph space.
We exploit CNN-based saliency map techniques for the GNN, which we tailor to the proposed GIN with one-hot encoding.
arXiv Detail & Related papers (2020-01-10T23:40:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.