Out-of-Distribution Generalized Dynamic Graph Neural Network with
Disentangled Intervention and Invariance Promotion
- URL: http://arxiv.org/abs/2311.14255v2
- Date: Fri, 8 Mar 2024 06:25:50 GMT
- Title: Out-of-Distribution Generalized Dynamic Graph Neural Network with
Disentangled Intervention and Invariance Promotion
- Authors: Zeyang Zhang, Xin Wang, Ziwei Zhang, Haoyang Li, Wenwu Zhu
- Abstract summary: Dynamic graph neural networks (DyGNNs) have demonstrated powerful predictive abilities by exploiting graph and temporal dynamics.
Existing DyGNNs fail to handle distribution shifts, which naturally arise in dynamic graphs.
- Score: 61.751257172868186
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Dynamic graph neural networks (DyGNNs) have demonstrated powerful predictive
abilities by exploiting graph structural and temporal dynamics. However,
existing DyGNNs fail to handle distribution shifts, which naturally arise in
dynamic graphs, mainly because the patterns they exploit may be variant with
respect to labels under distribution shifts. In this paper, we propose
Disentangled Intervention-based Dynamic graph Attention networks with
Invariance Promotion (I-DIDA) to handle spatio-temporal distribution shifts in
dynamic graphs by discovering and utilizing invariant patterns, i.e.,
structures and features whose predictive abilities are stable across
distribution shifts. Specifically, we first propose a disentangled
spatio-temporal attention network to capture the variant and invariant
patterns. By utilizing the disentangled patterns, we design a spatio-temporal
intervention mechanism to create multiple interventional distributions, and an
environment inference module to infer the latent spatio-temporal environments.
We then minimize the variance of predictions across these intervened
distributions and environments, so that our model makes predictions based on
invariant patterns whose predictive abilities remain stable under distribution
shifts. Extensive
experiments demonstrate the superiority of our method over state-of-the-art
baselines under distribution shifts. To the best of our knowledge, our work
is the first study of spatio-temporal distribution shifts in dynamic graphs.
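To make the objective concrete, here is a minimal PyTorch-style sketch of the invariance-promotion idea: variant pattern representations are permuted across samples to create interventional distributions, and the variance of the resulting risks is penalized so the predictor relies on the invariant patterns. All names here (`inv_repr`, `var_repr`, `head`, `n_interv`, `lam`) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def invariance_promotion_loss(inv_repr, var_repr, labels, head,
                              n_interv=4, lam=1.0):
    """inv_repr, var_repr: [N, d] disentangled invariant/variant summaries;
    head: callable mapping (invariant, variant) representations to logits."""
    # Base risk: predict from invariant patterns paired with the observed
    # variant patterns.
    base_risk = F.cross_entropy(head(inv_repr, var_repr), labels)

    # Spatio-temporal intervention (sketched): permute the variant part
    # across samples to create multiple interventional distributions.
    risks = []
    for _ in range(n_interv):
        perm = torch.randperm(var_repr.size(0))
        logits = head(inv_repr, var_repr[perm])  # intervene on variant part only
        risks.append(F.cross_entropy(logits, labels))
    risks = torch.stack(risks)

    # Penalizing the variance of risks across interventions encourages
    # predictions that remain stable under distribution shifts.
    return base_risk + lam * risks.var()
```

In the paper, the inferred environments provide a second partition over which an analogous variance penalty can be applied; a per-environment form of that penalty is sketched after the related-papers list below.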
Related papers
- Topology-Aware Dynamic Reweighting for Distribution Shifts on Graph [24.44321658238713]
Graph Neural Networks (GNNs) are widely used for node classification tasks but often fail to generalize when training and test nodes come from different distributions.
We introduce the Topology-Aware Dynamic Reweighting (TAR) framework, which dynamically adjusts sample weights through gradient flow in the Wasserstein space during training.
We demonstrate the superiority of our framework through standard testing on four graph OOD datasets and three class-imbalanced node classification datasets.
arXiv Detail & Related papers (2024-06-03T07:32:05Z) - Graphs Generalization under Distribution Shifts [11.963958151023732]
We introduce a novel framework, namely Graph Learning Invariant Domain genERation (GLIDER).
Our model outperforms baseline methods on node-level OOD generalization across domains under simultaneous distribution shifts in node features and topological structures.
arXiv Detail & Related papers (2024-03-25T00:15:34Z) - Spectral Invariant Learning for Dynamic Graphs under Distribution Shifts [57.19908334882441]
Dynamic graph neural networks (DyGNNs) currently struggle with handling distribution shifts that are inherent in dynamic graphs.
We propose to study distribution shifts on dynamic graphs in the spectral domain for the first time.
arXiv Detail & Related papers (2024-03-08T04:07:23Z) - Advective Diffusion Transformers for Topological Generalization in Graph
Learning [69.2894350228753]
We show how graph diffusion equations extrapolate and generalize in the presence of varying graph topologies.
We propose a novel graph encoder backbone, Advective Diffusion Transformer (ADiT), inspired by advective graph diffusion equations.
arXiv Detail & Related papers (2023-10-10T08:40:47Z) - Dynamic Causal Explanation Based Diffusion-Variational Graph Neural
Network for Spatio-temporal Forecasting [60.03169701753824]
We propose a novel Dynamic Diffusion-Variational Graph Neural Network (DVGNN) for spatio-temporal forecasting.
The proposed DVGNN model outperforms state-of-the-art approaches and achieves outstanding Root Mean Squared Error results.
arXiv Detail & Related papers (2023-05-16T11:38:19Z) - Temporal Domain Generalization with Drift-Aware Dynamic Neural Network [12.483886657900525]
We propose a Temporal Domain Generalization with Drift-Aware Dynamic Neural Network (DRAIN) framework.
Specifically, we formulate the problem within a Bayesian framework that jointly models the relation between data and model dynamics.
It captures the temporal drift of model parameters and data distributions and can predict models in the future without the presence of future data.
arXiv Detail & Related papers (2022-05-21T20:01:31Z) - Handling Distribution Shifts on Graphs: An Invariance Perspective [78.31180235269035]
We formulate the OOD problem on graphs and develop a new invariant learning approach, Explore-to-Extrapolate Risk Minimization (EERM).
EERM resorts to multiple context explorers that are adversarially trained to maximize the variance of risks from multiple virtual environments (a sketch of this variance-of-risks objective appears after this list).
We prove the validity of our method by theoretically showing its guarantee of a valid OOD solution.
arXiv Detail & Related papers (2022-02-05T02:31:01Z) - Discovering Invariant Rationales for Graph Neural Networks [104.61908788639052]
Intrinsic interpretability of graph neural networks (GNNs) means finding a small subset of the input graph's features, the rationale, that guides the model's prediction.
We propose a new strategy of discovering invariant rationale (DIR) to construct intrinsically interpretable GNNs.
arXiv Detail & Related papers (2022-01-30T16:43:40Z)