Explainable Spatio-Temporal Graph Neural Networks
- URL: http://arxiv.org/abs/2310.17149v1
- Date: Thu, 26 Oct 2023 04:47:28 GMT
- Title: Explainable Spatio-Temporal Graph Neural Networks
- Authors: Jiabin Tang and Lianghao Xia and Chao Huang
- Abstract summary: We propose an Explainable Spatio-Temporal Graph Neural Networks (STExplainer) framework that enhances STGNNs with inherent explainability.
Our framework integrates a unified spatio-temporal graph attention network with a positional information fusion layer as the STG encoder and decoder, respectively.
We demonstrate that STExplainer outperforms state-of-the-art baselines in terms of predictive accuracy and explainability metrics.
- Score: 16.313146933922752
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spatio-temporal graph neural networks (STGNNs) have gained popularity as a
powerful tool for effectively modeling spatio-temporal dependencies in diverse
real-world urban applications, including intelligent transportation and public
safety. However, the black-box nature of STGNNs limits their interpretability,
hindering their application in scenarios related to urban resource allocation
and policy formulation. To bridge this gap, we propose an Explainable
Spatio-Temporal Graph Neural Networks (STExplainer) framework that enhances
STGNNs with inherent explainability, enabling them to provide accurate
predictions and faithful explanations simultaneously. Our framework integrates
a unified spatio-temporal graph attention network with a positional information
fusion layer as the STG encoder and decoder, respectively. Furthermore, we
propose a structure distillation approach based on the Graph Information
Bottleneck (GIB) principle with an explainable objective, which is instantiated
by the STG encoder and decoder. Through extensive experiments, we demonstrate
that our STExplainer outperforms state-of-the-art baselines in terms of
predictive accuracy and explainability metrics (i.e., sparsity and fidelity) on
traffic and crime prediction tasks. Furthermore, our model exhibits superior
representation ability in alleviating data missing and sparsity issues. The
implementation code is available at: https://github.com/HKUDS/STExplainer.
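The abstract describes structure distillation guided by the Graph Information Bottleneck: the encoder scores which edges of the spatio-temporal graph matter for a prediction and keeps only a sparse, explanatory subgraph. The following is a minimal sketch of that general idea only; the class name, the Gumbel-sigmoid relaxation, and the beta-weighted penalty are assumptions made for illustration, not the authors' implementation (which is in the repository linked above).

```python
# Minimal sketch of GIB-style structure distillation for a spatio-temporal graph.
# Illustrative only; module names and hyper-parameters are assumptions, not the
# authors' code -- see https://github.com/HKUDS/STExplainer for the real thing.
import torch
import torch.nn as nn

class GIBStructureDistiller(nn.Module):
    """Scores each edge, samples a sparse explanatory subgraph with a
    Gumbel-sigmoid relaxation, and exposes a sparsity penalty that stands in
    for the information-bottleneck regularizer."""

    def __init__(self, node_dim: int, temperature: float = 0.5):
        super().__init__()
        self.edge_scorer = nn.Sequential(
            nn.Linear(2 * node_dim, node_dim), nn.ReLU(), nn.Linear(node_dim, 1)
        )
        self.temperature = temperature

    def forward(self, h: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        # h: [num_nodes, node_dim] node embeddings from the STG encoder
        # edge_index: [2, num_edges] source/target indices
        src, dst = edge_index
        logits = self.edge_scorer(torch.cat([h[src], h[dst]], dim=-1)).squeeze(-1)
        if self.training:  # logistic noise keeps the edge mask differentiable
            u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
            logits = logits + torch.log(u) - torch.log(1 - u)
        return torch.sigmoid(logits / self.temperature)  # soft keep-probability per edge

    def sparsity_penalty(self, edge_mask: torch.Tensor) -> torch.Tensor:
        # Encourages a compact explanatory subgraph (the "bottleneck" term).
        return edge_mask.mean()

# Assumed training objective: task loss on predictions made from the masked graph,
# plus the bottleneck penalty weighted by an illustrative hyper-parameter beta:
# loss = task_loss(pred_from_masked_graph, target) + beta * distiller.sparsity_penalty(edge_mask)
```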
Related papers
- DGNN: Decoupled Graph Neural Networks with Structural Consistency
between Attribute and Graph Embedding Representations [62.04558318166396]
Graph neural networks (GNNs) demonstrate a robust capability for representation learning on graphs with complex structures.
A novel GNNs framework, dubbed Decoupled Graph Neural Networks (DGNN), is introduced to obtain a more comprehensive embedding representation of nodes.
Experimental results conducted on several graph benchmark datasets verify DGNN's superiority in the node classification task.
arXiv Detail & Related papers (2024-01-28T06:43:13Z) - Towards Robust Fidelity for Evaluating Explainability of Graph Neural Networks [32.345435955298825]
Graph Neural Networks (GNNs) are neural models that leverage the dependency structure in graphical data via message passing among the graph nodes.
A main challenge in studying GNN explainability is to provide fidelity measures that evaluate the performance of such explanation functions.
This paper studies this foundational challenge, spotlighting the inherent limitations of prevailing fidelity metrics.
arXiv Detail & Related papers (2023-10-03T06:25:14Z) - ST-MLP: A Cascaded Spatio-Temporal Linear Framework with
Channel-Independence Strategy for Traffic Forecasting [47.74479442786052]
Current research on Spatio-Temporal Graph Neural Networks (STGNNs) often prioritizes complex designs, leading to computational burdens with only minor enhancements in accuracy.
We propose ST-MLP, a concise cascaded spatio-temporal model based solely on Multi-Layer Perceptron (MLP) modules and linear layers.
Empirical results demonstrate that ST-MLP outperforms state-of-the-art STGNNs and other models in terms of accuracy and computational efficiency.
arXiv Detail & Related papers (2023-08-14T23:34:59Z) - Graph Neural Processes for Spatio-Temporal Extrapolation [36.01312116818714]
We study the task of spatio-temporal extrapolation, which generates data at target locations from surrounding contexts in a graph.
Existing methods either use learning-based models such as Neural Networks or statistical approaches such as Gaussian Processes for this task.
We propose Spatio-Temporal Graph Neural Processes (STGNP), a neural latent variable model that provides both capabilities simultaneously.
arXiv Detail & Related papers (2023-05-30T03:55:37Z) - DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z) - Space-Time Graph Neural Networks with Stochastic Graph Perturbations [100.31591011966603]
Space-time graph neural networks (ST-GNNs) learn efficient graph representations of time-varying data.
In this paper we revisit the properties of ST-GNNs and prove that they are stable to stochastic graph perturbations.
Our analysis suggests that ST-GNNs are suitable for transfer learning on time-varying graphs.
arXiv Detail & Related papers (2022-10-28T16:59:51Z) - Graph Neural Networks for Multi-Robot Active Information Acquisition [15.900385823366117]
A team of mobile robots, communicating through an underlying graph, estimates a hidden state expressing a phenomenon of interest.
Existing approaches are either not scalable, unable to handle dynamic phenomena or not robust to changes in the communication graph.
We propose an Information-aware Graph Block Network (I-GBNet) that aggregates information over the graph representation and provides sequential decision-making in a distributed manner.
arXiv Detail & Related papers (2022-09-24T21:45:06Z) - Spatio-Temporal Latent Graph Structure Learning for Traffic Forecasting [6.428566223253948]
We propose a new traffic forecasting framework, Spatio-Temporal Latent Graph Structure Learning networks (ST-LGSL).
The model employs a graph generator based on a Multilayer Perceptron (MLP) and K-Nearest Neighbors (kNN), which learns latent graph topological information from the entire dataset.
By combining a ground-truth adjacency matrix with the kNN similarity metric, ST-LGSL aggregates topological information that reflects both geography and node similarity (a minimal sketch of this kNN graph construction follows the list below).
arXiv Detail & Related papers (2022-02-25T10:02:49Z) - Generative Counterfactuals for Neural Networks via Attribute-Informed
Perturbation [51.29486247405601]
We design a framework to generate counterfactuals for raw data instances with the proposed Attribute-Informed Perturbation (AIP).
By utilizing generative models conditioned with different attributes, counterfactuals with desired labels can be obtained effectively and efficiently.
Experimental results on real-world texts and images demonstrate the effectiveness, sample quality as well as efficiency of our designed framework.
arXiv Detail & Related papers (2021-01-18T08:37:13Z) - Information Obfuscation of Graph Neural Networks [96.8421624921384]
We study the problem of protecting sensitive attributes by information obfuscation when learning with graph structured data.
We propose a framework to locally filter out pre-determined sensitive attributes via adversarial training with the total variation and the Wasserstein distance.
arXiv Detail & Related papers (2020-09-28T17:55:04Z)
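As flagged in the ST-LGSL entry above, the sketch below illustrates latent graph construction from an MLP embedding plus kNN similarity, fused with a geographic adjacency matrix. All names (knn_adjacency, LatentGraphGenerator) and the equal-weight fusion are assumptions for illustration, not ST-LGSL's actual design.

```python
# Illustrative sketch of kNN-based latent graph construction in the spirit of
# ST-LGSL's graph generator (assumed details; not the authors' code).
import torch
import torch.nn as nn

def knn_adjacency(node_emb: torch.Tensor, k: int = 8) -> torch.Tensor:
    """Build a sparse adjacency from cosine similarity of learned node embeddings,
    keeping the top-k most similar neighbours per node."""
    sim = torch.nn.functional.cosine_similarity(
        node_emb.unsqueeze(1), node_emb.unsqueeze(0), dim=-1
    )  # [N, N] pairwise similarity
    topk = sim.topk(k + 1, dim=-1).indices  # +1 because each node matches itself
    adj = torch.zeros_like(sim)
    adj.scatter_(1, topk, 1.0)
    adj.fill_diagonal_(0.0)
    return adj

class LatentGraphGenerator(nn.Module):
    """An MLP embeds each node's historical series; kNN on the embeddings yields
    a latent topology that is mixed with a geography-based adjacency matrix."""
    def __init__(self, in_len: int, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_len, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))

    def forward(self, history: torch.Tensor, geo_adj: torch.Tensor, k: int = 8) -> torch.Tensor:
        # history: [N, in_len] per-node time series; geo_adj: [N, N] distance-based graph
        latent_adj = knn_adjacency(self.mlp(history), k)
        return 0.5 * (latent_adj + geo_adj)  # simple fusion of node similarity and geography
```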