Graph Invariant Learning with Subgraph Co-mixup for Out-Of-Distribution
Generalization
- URL: http://arxiv.org/abs/2312.10988v1
- Date: Mon, 18 Dec 2023 07:26:56 GMT
- Title: Graph Invariant Learning with Subgraph Co-mixup for Out-Of-Distribution
Generalization
- Authors: Tianrui Jia, Haoyang Li, Cheng Yang, Tao Tao, Chuan Shi
- Abstract summary: We propose a novel graph invariant learning method based on invariant and variant patterns co-mixup strategy.
Our method significantly outperforms state-of-the-art methods under various distribution shifts.
- Score: 51.913685334368104
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Graph neural networks (GNNs) have been demonstrated to perform well in graph
representation learning, but they often lack generalization capability when
tackling out-of-distribution (OOD) data. Graph invariant learning methods,
backed by the invariance principle among defined multiple environments, have
shown effectiveness in dealing with this issue. However, existing methods
heavily rely on well-predefined or accurately generated environment partitions,
which are hard to obtain in practice, leading to sub-optimal OOD
generalization performance. In this paper, we propose a novel graph invariant
learning method based on invariant and variant patterns co-mixup strategy,
which is capable of jointly generating mixed multiple environments and
capturing invariant patterns from the mixed graph data. Specifically, we first
adopt a subgraph extractor to identify invariant subgraphs. Subsequently, we
design a novel co-mixup strategy, i.e., jointly conducting environment Mixup
and invariant Mixup. For the environment Mixup, we mix the variant
environment-related subgraphs so as to generate sufficiently diverse
environments, which is important for guaranteeing the quality of graph
invariant learning. For the invariant Mixup, we mix the invariant subgraphs,
further encouraging the model to capture invariant patterns behind graphs while
discarding spurious correlations for OOD generalization. We demonstrate that the
proposed environment Mixup and invariant Mixup can mutually promote each other.
Extensive experiments on both synthetic and real-world datasets demonstrate
that our method significantly outperforms state-of-the-art methods under various
distribution shifts.
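To make the co-mixup idea concrete, below is a minimal PyTorch sketch of how the two Mixups described in the abstract could be wired together. It is not the authors' implementation: the toy encoder, the score-based invariant/variant split, the pooled-representation interpolation, and all names and shapes are assumptions made purely for illustration.

```python
# Illustrative sketch of subgraph co-mixup on a pair of graphs (a, b).
# NOT the paper's code: extractor, encoder, and mixing choices are assumed.
import torch
import torch.nn as nn

class ToyGNNEncoder(nn.Module):
    """Placeholder encoder: linear layer + mean pooling over a node subset."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)

    def forward(self, x, node_mask):
        # x: [num_nodes, in_dim]; node_mask: [num_nodes] bool selecting a subgraph
        h = torch.relu(self.lin(x[node_mask]))
        return h.mean(dim=0)  # pooled subgraph representation [hid_dim]

def split_invariant_variant(scores, ratio=0.5):
    """Assumed stand-in for the subgraph extractor: keep the top-`ratio`
    nodes (by a learned importance score) as the invariant subgraph."""
    k = max(1, int(ratio * scores.numel()))
    inv_mask = torch.zeros_like(scores, dtype=torch.bool)
    inv_mask[scores.topk(k).indices] = True
    return inv_mask, ~inv_mask  # invariant mask, variant mask

def co_mixup(enc, xa, xb, scores_a, scores_b, lam=0.5):
    """Jointly perform the two Mixups on a graph pair:
    - environment Mixup: interpolate the *variant* subgraph representations
      to synthesize a new, more diverse environment;
    - invariant Mixup: interpolate the *invariant* subgraph representations
      to encourage environment-independent, label-relevant patterns."""
    inv_a, var_a = split_invariant_variant(scores_a)
    inv_b, var_b = split_invariant_variant(scores_b)
    env_mix = lam * enc(xa, var_a) + (1 - lam) * enc(xb, var_b)  # environment Mixup
    inv_mix = lam * enc(xa, inv_a) + (1 - lam) * enc(xb, inv_b)  # invariant Mixup
    return env_mix, inv_mix

if __name__ == "__main__":
    torch.manual_seed(0)
    enc = ToyGNNEncoder(in_dim=8, hid_dim=16)
    xa, xb = torch.randn(12, 8), torch.randn(9, 8)
    # Node importance scores would normally come from the learned extractor.
    sa, sb = torch.rand(12), torch.rand(9)
    env_repr, inv_repr = co_mixup(enc, xa, xb, sa, sb, lam=0.7)
    print(env_repr.shape, inv_repr.shape)  # torch.Size([16]) torch.Size([16])
```

In this reading, a downstream classifier would be trained on the invariant-Mixup representations while the environment-Mixup representations supply the diverse environments needed by the invariance objective; how the two branches are actually combined into a loss is specified in the paper, not here.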
Related papers
- GeoMix: Towards Geometry-Aware Data Augmentation [76.09914619612812]
Mixup has shown considerable success in mitigating the challenges posed by limited labeled data in image classification.
We propose Geometric Mixup (GeoMix), a simple and interpretable Mixup approach leveraging in-place graph editing.
arXiv Detail & Related papers (2024-07-15T12:58:04Z)
- Improving out-of-distribution generalization in graphs via hierarchical semantic environments [5.481047026874547]
We propose a novel approach to generate hierarchical environments for each graph.
We introduce a new learning objective that guides our model to learn the diversity of environments within the same hierarchy.
Our framework achieves up to 1.29% and 2.83% improvement over the best baselines on IC50 and EC50 prediction tasks, respectively.
arXiv Detail & Related papers (2024-03-04T07:03:10Z)
- Does Invariant Graph Learning via Environment Augmentation Learn Invariance? [39.08988313527199]
Invariant graph representation learning aims to learn the invariance among data from different environments for out-of-distribution generalization on graphs.
We develop a set of minimal assumptions, including variation sufficiency and variation consistency, for feasible invariant graph learning.
We show that extracting the maximally invariant subgraph to the proxy predictions provably identifies the underlying invariant subgraph for successful OOD generalization.
arXiv Detail & Related papers (2023-10-29T14:57:37Z)
- SIGMA: Scale-Invariant Global Sparse Shape Matching [50.385414715675076]
We propose a novel mixed-integer programming (MIP) formulation for generating precise sparse correspondences for non-rigid shapes.
We show state-of-the-art results for sparse non-rigid matching on several challenging 3D datasets.
arXiv Detail & Related papers (2023-08-16T14:25:30Z)
- SwinGNN: Rethinking Permutation Invariance in Diffusion Models for Graph Generation [15.977241867213516]
Diffusion models based on permutation-equivariant networks can learn permutation-invariant distributions for graph data.
We propose a non-invariant diffusion model, called SwinGNN, which employs an efficient edge-to-edge 2-WL message passing network.
arXiv Detail & Related papers (2023-07-04T10:58:42Z)
- Mind the Label Shift of Augmentation-based Graph OOD Generalization [88.32356432272356]
LiSA exploits Label-invariant Subgraphs of the training graphs to construct Augmented environments.
LiSA generates diverse augmented environments with a consistent predictive relationship.
Experiments on node-level and graph-level OOD benchmarks show that LiSA achieves impressive generalization performance with different GNN backbones.
arXiv Detail & Related papers (2023-03-27T00:08:45Z)
- Decorr: Environment Partitioning for Invariant Learning and OOD Generalization [10.799855921851332]
Invariant learning methods are aimed at identifying a consistent predictor across multiple environments.
When environments aren't inherent in the data, practitioners must define them manually.
This environment partitioning affects invariant learning's efficacy but remains underdiscussed.
In this paper, we suggest partitioning the dataset into several environments by isolating low-correlation data subsets.
arXiv Detail & Related papers (2022-11-18T06:49:35Z)
- Invariance Principle Meets Out-of-Distribution Generalization on Graphs [66.04137805277632]
The complex nature of graphs thwarts the adoption of the invariance principle for OOD generalization.
Domain or environment partitions, which are often required by OOD methods, can be expensive to obtain for graphs.
We propose a novel framework to explicitly model this process using a contrastive strategy.
arXiv Detail & Related papers (2022-02-11T04:38:39Z)
- Improving the Sample-Complexity of Deep Classification Networks with Invariant Integration [77.99182201815763]
Leveraging prior knowledge on intraclass variance due to transformations is a powerful method to improve the sample complexity of deep neural networks.
We propose a novel monomial selection algorithm based on pruning methods to allow an application to more complex problems.
We demonstrate the improved sample complexity on the Rotated-MNIST, SVHN and CIFAR-10 datasets.
arXiv Detail & Related papers (2022-02-08T16:16:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.