Environment-Aware Dynamic Graph Learning for Out-of-Distribution
Generalization
- URL: http://arxiv.org/abs/2311.11114v1
- Date: Sat, 18 Nov 2023 16:31:10 GMT
- Title: Environment-Aware Dynamic Graph Learning for Out-of-Distribution
Generalization
- Authors: Haonan Yuan, Qingyun Sun, Xingcheng Fu, Ziwei Zhang, Cheng Ji, Hao
Peng, Jianxin Li
- Abstract summary: We study the out-of-distribution (OOD) generalization on dynamic graphs from the environment learning perspective.
We propose an Environment-Aware dynamic Graph LEarning (EAGLE) framework for OOD generalization by modeling complex coupled environments and exploiting spatio-temporal invariant patterns.
To the best of our knowledge, we are the first to study OOD generalization on dynamic graphs from the environment learning perspective.
- Score: 41.58330883016538
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dynamic graph neural networks (DGNNs) are increasingly pervasive in
exploiting spatio-temporal patterns on dynamic graphs. However, existing works
fail to generalize under distribution shifts, which are common in real-world
scenarios. As the generation of dynamic graphs is heavily influenced by latent
environments, investigating their impacts on the out-of-distribution (OOD)
generalization is critical. However, it remains unexplored with the following
two major challenges: (1) How to properly model and infer the complex
environments on dynamic graphs with distribution shifts? (2) How to discover
invariant patterns given inferred spatio-temporal environments? To solve these
challenges, we propose a novel Environment-Aware dynamic Graph LEarning (EAGLE)
framework for OOD generalization by modeling complex coupled environments and
exploiting spatio-temporal invariant patterns. Specifically, we first design
the environment-aware EA-DGNN to model environments by multi-channel
environment disentangling. Then, we propose an environment instantiation
mechanism for environment diversification with inferred distributions. Finally,
we discriminate spatio-temporal invariant patterns for out-of-distribution
prediction by the invariant pattern recognition mechanism and perform
fine-grained node-wise causal interventions with a mixture of instantiated
environment samples. Experiments on real-world and synthetic dynamic graph
datasets demonstrate the superiority of our method against state-of-the-art
baselines under distribution shifts. To the best of our knowledge, we are the
first to study OOD generalization on dynamic graphs from the environment
learning perspective.
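The abstract sketches a three-stage pipeline: disentangle node representations into multiple environment channels, instantiate additional environments from the inferred distributions, and recognize invariant patterns while intervening node-wise with mixtures of instantiated environment samples. The snippet below is a minimal sketch of that pipeline in plain PyTorch; the module names (MultiChannelEnvEncoder, instantiate_environments, variance_penalty), the toy dimensions, the Gaussian instantiation, and the variance penalty are illustrative assumptions, not the paper's actual EA-DGNN implementation.

```python
# Hedged sketch of an EAGLE-style pipeline. All names, shapes, and objectives
# below are assumptions for illustration; consult the paper for the real method.
import torch
import torch.nn as nn


class MultiChannelEnvEncoder(nn.Module):
    """Disentangles node features into K environment channels (illustrative)."""

    def __init__(self, in_dim: int, hid_dim: int, num_envs: int):
        super().__init__()
        self.channels = nn.ModuleList(
            [nn.Linear(in_dim, hid_dim) for _ in range(num_envs)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [num_nodes, in_dim] -> [num_nodes, num_envs, hid_dim]
        return torch.stack([torch.relu(f(x)) for f in self.channels], dim=1)


def instantiate_environments(z_k: torch.Tensor, num_samples: int) -> torch.Tensor:
    """Sample new environment embeddings from a Gaussian fitted to one inferred
    channel -- an assumed stand-in for the paper's environment instantiation."""
    mu, std = z_k.mean(dim=0), z_k.std(dim=0) + 1e-6
    return mu + std * torch.randn(num_samples, mu.shape[0])


def variance_penalty(risks: torch.Tensor) -> torch.Tensor:
    """Penalize the variance of risks across intervened environments,
    a common invariance surrogate (not the paper's exact objective)."""
    return risks.var()


if __name__ == "__main__":
    x = torch.randn(100, 16)            # toy node features
    y = torch.randint(0, 2, (100,))     # toy node labels
    enc = MultiChannelEnvEncoder(in_dim=16, hid_dim=32, num_envs=4)
    head = nn.Linear(32, 2)
    ce = nn.CrossEntropyLoss()

    z = enc(x)                          # [100, 4, 32]
    z_inv = z.mean(dim=1)               # pool channels into an "invariant" view

    # Emulate node-wise interventions: swap each channel for freshly
    # instantiated environment samples and record the resulting risk.
    intervened_risks = []
    for k in range(z.size(1)):
        z_int = z.clone()
        z_int[:, k] = instantiate_environments(z[:, k].detach(), x.size(0))
        intervened_risks.append(ce(head(z_int.mean(dim=1)), y))
    intervened_risks = torch.stack(intervened_risks)

    loss = ce(head(z_inv), y) + 0.5 * variance_penalty(intervened_risks)
    loss.backward()
    print(f"total loss: {loss.item():.4f}")
```

The per-channel encoders and the variance term are only meant to make the abstract's "disentangle, instantiate, intervene" structure concrete; the actual framework operates on dynamic graphs with temporal message passing rather than on static feature vectors.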
Related papers
- SPARTAN: A Sparse Transformer Learning Local Causation [63.29645501232935]
Causal structures play a central role in world models that flexibly adapt to changes in the environment.
We present the SPARse TrANsformer World model (SPARTAN), a Transformer-based world model that learns local causal structures between entities in a scene.
By applying sparsity regularisation on the attention pattern between object-factored tokens, SPARTAN identifies sparse local causal models that accurately predict future object states.
arXiv Detail & Related papers (2024-11-11T11:42:48Z)
- IENE: Identifying and Extrapolating the Node Environment for Out-of-Distribution Generalization on Graphs [10.087216264788097]
We propose IENE, an OOD generalization method on graphs based on node-level environmental identification and extrapolation techniques.
It strengthens the model's ability to extract invariance from two granularities simultaneously, leading to improved generalization.
arXiv Detail & Related papers (2024-06-02T14:43:56Z)
- Improving out-of-distribution generalization in graphs via hierarchical semantic environments [5.481047026874547]
We propose a novel approach to generate hierarchical environments for each graph.
We introduce a new learning objective that guides our model to learn the diversity of environments within the same hierarchy.
Our framework achieves up to 1.29% and 2.83% improvement over the best baselines on IC50 and EC50 prediction tasks, respectively.
arXiv Detail & Related papers (2024-03-04T07:03:10Z)
- Graph Out-of-Distribution Generalization via Causal Intervention [69.70137479660113]
We introduce a conceptually simple yet principled approach for training robust graph neural networks (GNNs) under node-level distribution shifts.
Our method resorts to a new learning objective derived from causal inference that coordinates an environment estimator and a mixture-of-expert GNN predictor.
Our model effectively enhances generalization under various types of distribution shifts and yields up to a 27.4% accuracy improvement over state-of-the-art methods on graph OOD generalization benchmarks.
arXiv Detail & Related papers (2024-02-18T07:49:22Z)
- Out-of-Distribution Generalized Dynamic Graph Neural Network with Disentangled Intervention and Invariance Promotion [61.751257172868186]
Dynamic graph neural networks (DyGNNs) have demonstrated powerful predictive abilities by exploiting graph and temporal dynamics.
Existing DyGNNs fail to handle distribution shifts, which naturally exist in dynamic graphs.
arXiv Detail & Related papers (2023-11-24T02:42:42Z)
- Dynamic Causal Explanation Based Diffusion-Variational Graph Neural Network for Spatio-temporal Forecasting [60.03169701753824]
We propose a novel Dynamic Diffusion-Variational Graph Neural Network (DVGNN) for spatio-temporal forecasting.
The proposed DVGNN model outperforms state-of-the-art approaches and achieves outstanding Root Mean Squared Error results.
arXiv Detail & Related papers (2023-05-16T11:38:19Z)
- Handling Distribution Shifts on Graphs: An Invariance Perspective [78.31180235269035]
We formulate the OOD problem on graphs and develop a new invariant learning approach, Explore-to-Extrapolate Risk Minimization (EERM).
EERM resorts to multiple context explorers that are adversarially trained to maximize the variance of risks from multiple virtual environments.
We theoretically prove that our method guarantees a valid OOD solution.
arXiv Detail & Related papers (2022-02-05T02:31:01Z)
- LEADS: Learning Dynamical Systems that Generalize Across Environments [12.024388048406587]
We propose LEADS, a novel framework that leverages the commonalities and discrepancies among known environments to improve model generalization.
We show that this new setting can exploit knowledge extracted from environment-dependent data and improves generalization for both known and novel environments.
arXiv Detail & Related papers (2021-06-08T17:28:19Z)