Graphs Generalization under Distribution Shifts
- URL: http://arxiv.org/abs/2403.16334v1
- Date: Mon, 25 Mar 2024 00:15:34 GMT
- Title: Graphs Generalization under Distribution Shifts
- Authors: Qin Tian, Wenjun Wang, Chen Zhao, Minglai Shao, Wang Zhang, Dong Li
- Abstract summary: We introduce a novel framework, namely Graph Learning Invariant Domain genERation (GLIDER).
Our model outperforms baseline methods on node-level OOD generalization across domains under simultaneous distribution shifts on node features and topological structures.
- Score: 11.963958151023732
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional machine learning methods rely heavily on the independent and identically distributed (i.i.d.) assumption, which imposes limitations when the test distribution deviates from the training distribution. To address this crucial issue, out-of-distribution (OOD) generalization, which aims to achieve satisfactory generalization performance under unknown distribution shifts, has made significant progress. However, OOD generalization on graph-structured data remains relatively unexplored due to two primary challenges. Firstly, distribution shifts on graphs often occur simultaneously on node attributes and graph topology. Secondly, capturing invariant information amidst diverse distribution shifts proves to be a formidable challenge. To overcome these obstacles, in this paper, we introduce a novel framework, namely Graph Learning Invariant Domain genERation (GLIDER). The goal is to (1) diversify variations across domains by modeling the potential seen or unseen variations of attribute distribution and topological structure and (2) minimize the discrepancy between these variations in a representation space whose target is to predict semantic labels. Extensive experimental results indicate that our model outperforms baseline methods on node-level OOD generalization across domains under simultaneous distribution shifts on node features and topological structures.
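As a rough illustration of goals (1) and (2), the sketch below approximates domain diversification by perturbing node attributes and dropping edges, and penalizes the discrepancy between the resulting views while a shared head predicts labels. It is a minimal PyTorch sketch; the view generator, discrepancy term, and hyperparameters are our assumptions, not GLIDER's actual implementation.

```python
import torch
import torch.nn.functional as F

def make_view(x, edge_index, noise_std=0.1, edge_drop=0.2):
    """Simulate one synthetic domain: shift attributes and topology."""
    x_aug = x + noise_std * torch.randn_like(x)          # attribute shift
    keep = torch.rand(edge_index.size(1), device=edge_index.device) > edge_drop
    return x_aug, edge_index[:, keep]                    # topology shift

def glider_style_loss(encoder, head, x, edge_index, y, n_views=3, lam=1.0):
    """Task loss on each synthetic domain plus a pairwise discrepancy
    penalty that pulls the domains' representations together."""
    zs = [encoder(*make_view(x, edge_index)) for _ in range(n_views)]
    task = sum(F.cross_entropy(head(z), y) for z in zs) / n_views
    disc = sum(F.mse_loss(zs[i], zs[j])
               for i in range(n_views) for j in range(i + 1, n_views))
    return task + lam * disc
```

Here `encoder` is any GNN mapping (features, edges) to node embeddings; swapping the MSE term for an adversarial or MMD discrepancy is the more common choice in invariant-representation work.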
Related papers
- GDDA: Semantic OOD Detection on Graphs under Covariate Shift via Score-Based Diffusion Models [8.562907330207716] (arXiv, 2024-10-23)
Out-of-distribution (OOD) detection poses a significant challenge for Graph Neural Networks (GNNs).
Most existing OOD detection methods on graphs primarily focus on identifying instances in test data domains.
In this work, we address both types of shifts simultaneously and introduce a novel challenge for OOD detection on graphs.
- Topology-Aware Dynamic Reweighting for Distribution Shifts on Graph [24.44321658238713] (arXiv, 2024-06-03)
Graph Neural Networks (GNNs) are widely used for node classification tasks but often fail to generalize when training and test nodes come from different distributions.
We introduce the Topology-Aware Dynamic Reweighting (TAR) framework, which dynamically adjusts sample weights through gradient flow in the Wasserstein space during training.
Our framework's superiority is demonstrated through standard testing on four graph OOD datasets and three class-imbalanced node classification datasets.
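As a crude stand-in for the reweighting idea, the toy step below shifts per-sample weight mass toward high-loss nodes with an exponentiated-gradient update; the actual TAR method performs gradient flow in Wasserstein space with topological structure taken into account, which this sketch does not capture.

```python
import torch
import torch.nn.functional as F

def reweighted_step(model, opt, x, y, w, eta=0.05):
    """One training step with dynamically reweighted per-sample losses.
    `w` is a probability vector over training nodes (an illustrative
    proxy for TAR's Wasserstein gradient flow, not the published one)."""
    losses = F.cross_entropy(model(x), y, reduction="none")
    (w * losses).sum().backward()            # weighted empirical risk
    opt.step()
    opt.zero_grad()
    with torch.no_grad():                    # move mass to hard samples
        w = w * torch.exp(eta * losses.detach())
        w = w / w.sum()
    return w
```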
- Out-of-Distribution Generalized Dynamic Graph Neural Network with Disentangled Intervention and Invariance Promotion [61.751257172868186] (arXiv, 2023-11-24)
Dynamic graph neural networks (DyGNNs) have demonstrated powerful predictive abilities by exploiting graph and temporal dynamics.
Existing DyGNNs fail to handle distribution shifts, which naturally exist in dynamic graphs.
- Algorithmic Fairness Generalization under Covariate and Dependence Shifts Simultaneously [28.24666589680547] (arXiv, 2023-11-23)
We introduce a simple but effective approach that aims to learn a fair and invariant classifier.
By augmenting various synthetic data domains through the model, we learn a fair and invariant classifier in source domains.
This classifier can then be generalized to unknown target domains while preserving both predictive performance and fairness.
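Read as a recipe, this amounts to adding a fairness penalty to the task loss on every (real or synthetic) source domain. The demographic-parity gap below is one common surrogate and an assumption on our part; the paper's exact objective may differ.

```python
import torch
import torch.nn.functional as F

def fair_invariant_loss(model, domains, gamma=1.0):
    """Average task loss across source domains plus a fairness surrogate.
    `domains` is a list of (x, y, s) batches with s a binary sensitive
    attribute; the model is assumed to be a binary classifier."""
    total = 0.0
    for x, y, s in domains:
        logits = model(x)
        task = F.cross_entropy(logits, y)
        p = torch.softmax(logits, dim=-1)[:, 1]      # P(y_hat = 1)
        gap = (p[s == 1].mean() - p[s == 0].mean()).abs()
        total = total + task + gamma * gap           # demographic parity
    return total / len(domains)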
- Advective Diffusion Transformers for Topological Generalization in Graph Learning [69.2894350228753] (arXiv, 2023-10-10)
We show how graph diffusion equations extrapolate and generalize in the presence of varying graph topologies.
We propose a novel graph encoder backbone, Advective Diffusion Transformer (ADiT), inspired by advective graph diffusion equations.
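For orientation, a plain (non-advective) discretized graph diffusion step looks like the following; ADiT's advection term, which transports features along learned directions, is not reproduced here.

```python
import numpy as np

def diffusion_steps(adj, x, tau=0.2, n_steps=10):
    """Explicit Euler steps of the graph heat equation
    dX/dt = -(I - A_hat) X, with A_hat the symmetrically normalized
    adjacency. Pure diffusion smooths features toward neighborhood
    averages; the advective variant adds a transport term on top."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg, dtype=float)
    d_inv_sqrt[deg > 0] = deg[deg > 0] ** -0.5
    a_hat = d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    for _ in range(n_steps):
        x = x + tau * (a_hat @ x - x)
    return x
```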
- Evaluating Robustness and Uncertainty of Graph Models Under Structural Distributional Shifts [43.40315460712298] (arXiv, 2023-02-27)
In node-level problems of graph learning, distributional shifts can be especially complex.
We propose a general approach for inducing diverse distributional shifts based on graph structure.
We show that simple models often outperform more sophisticated methods on the considered structural shifts.
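One concrete way to induce such a shift is to hold out the nodes that are most extreme under a structural statistic; the paper uses properties such as popularity, locality, and density, while the degree-based split below is a simplified assumption.

```python
import numpy as np

def structural_split(adj, test_frac=0.2):
    """Train on low-degree nodes, test on hubs: a structure-driven
    distribution shift that leaves node features untouched."""
    degree = np.asarray(adj.sum(axis=1)).ravel()
    order = np.argsort(degree)                     # ascending degree
    n_test = max(1, int(len(order) * test_frac))
    return order[:-n_test], order[-n_test:]        # train_idx, test_idx
```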
- Invariance Principle Meets Out-of-Distribution Generalization on Graphs [66.04137805277632] (arXiv, 2022-02-11)
The complex nature of graphs thwarts the adoption of the invariance principle for OOD generalization.
Domain or environment partitions, which are often required by OOD methods, can be expensive to obtain for graphs.
We propose a novel framework to explicitly model this process using a contrastive strategy.
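The summary does not spell out the contrastive component, but the generic shape of such a strategy, pulling together representations of same-label samples across putative environments, looks like the following assumed supervised-contrastive form (not the authors' exact loss).

```python
import torch

def supervised_contrastive(z, y, temp=0.5):
    """z: (n, d) L2-normalized embeddings; y: (n,) labels. Positives are
    other samples sharing a label; everything else acts as a negative."""
    sim = z @ z.t() / temp
    sim.fill_diagonal_(-float("inf"))              # exclude self-pairs
    log_p = sim - sim.logsumexp(dim=1, keepdim=True)
    pos = (y[:, None] == y[None, :]).float().fill_diagonal_(0)
    return -(log_p * pos).sum(1).div(pos.sum(1).clamp(min=1)).mean()
```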
- Handling Distribution Shifts on Graphs: An Invariance Perspective [78.31180235269035] (arXiv, 2022-02-05)
We formulate the OOD problem on graphs and develop a new invariant learning approach, Explore-to-Extrapolate Risk Minimization (EERM).
EERM resorts to multiple context explorers that are adversarially trained to maximize the variance of risks from multiple virtual environments.
We prove the validity of our method by theoretically showing its guarantee of a valid OOD solution.
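The core objective is compact: a predictor minimizes the mean plus variance of per-environment risks, while adversarial explorers generate environments that maximize that variance. A schematic version of the predictor's side, assuming the environment views are already given:

```python
import torch
import torch.nn.functional as F

def eerm_style_loss(model, envs, beta=1.0):
    """Mean + variance of risks over K virtual environments. In EERM the
    environments come from adversarially trained context explorers that
    maximize this variance; here they are assumed to be pre-generated."""
    risks = torch.stack([F.cross_entropy(model(x), y) for x, y in envs])
    return risks.mean() + beta * risks.var()
```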
- Instrumental Variable-Driven Domain Generalization with Unobserved Confounders [53.735614014067394] (arXiv, 2021-10-04)
Domain generalization (DG) aims to learn from multiple source domains a model that can generalize well on unseen target domains.
We propose an instrumental variable-driven DG method (IV-DG) by removing the bias of the unobserved confounders with two-stage learning.
In the first stage, it learns the conditional distribution of the input features of one domain given the input features of another domain.
In the second stage, it estimates the relationship by predicting labels with the learned conditional distribution.
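The two stages mirror classical two-stage least squares: first model one domain's features conditioned on another domain's features (the instrument), then fit the label predictor on the conditional estimates. A linear toy analogue, with all modeling choices ours rather than IV-DG's:

```python
import numpy as np

def two_stage_iv(x_instr, x_domain, y):
    """Stage 1: regress one domain's features on the instrument domain's
    features, yielding confounder-reduced fitted features. Stage 2:
    regress labels on the fitted features (toy 2SLS-style analogue)."""
    w1, *_ = np.linalg.lstsq(x_instr, x_domain, rcond=None)
    x_fitted = x_instr @ w1
    w2, *_ = np.linalg.lstsq(x_fitted, y, rcond=None)
    return w1, w2
```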
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364] (arXiv, 2020-10-09)
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.