Does Invariant Graph Learning via Environment Augmentation Learn
Invariance?
- URL: http://arxiv.org/abs/2310.19035v1
- Date: Sun, 29 Oct 2023 14:57:37 GMT
- Title: Does Invariant Graph Learning via Environment Augmentation Learn
Invariance?
- Authors: Yongqiang Chen, Yatao Bian, Kaiwen Zhou, Binghui Xie, Bo Han, James
Cheng
- Abstract summary: Invariant graph representation learning aims to learn the invariance among data from different environments for out-of-distribution generalization on graphs.
We develop a set of minimal assumptions, including variation sufficiency and variation consistency, for feasible invariant graph learning.
We show that extracting the maximally invariant subgraph with respect to the proxy predictions provably identifies the underlying invariant subgraph for successful OOD generalization.
- Score: 39.08988313527199
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Invariant graph representation learning aims to learn the invariance among
data from different environments for out-of-distribution generalization on
graphs. As the graph environment partitions are usually expensive to obtain,
augmenting the environment information has become the de facto approach.
However, the usefulness of the augmented environment information has never been
verified. In this work, we find that it is fundamentally impossible to learn
invariant graph representations via environment augmentation without additional
assumptions. Therefore, we develop a set of minimal assumptions, including
variation sufficiency and variation consistency, for feasible invariant graph
learning. We then propose a new framework, Graph invAriant Learning Assistant
(GALA). GALA incorporates an assistant model that needs to be sensitive to
graph environment changes or distribution shifts. The correctness of the
assistant model's proxy predictions can then differentiate the variations in
the spurious subgraphs. We show that extracting the maximally invariant
subgraph with respect to the proxy predictions provably identifies the
underlying invariant subgraph for successful OOD generalization under the
established minimal assumptions.
Extensive experiments on datasets including DrugOOD with various graph
distribution shifts confirm the effectiveness of GALA.
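To make the recipe concrete, here is a minimal sketch of the two steps the abstract describes: group training graphs by whether a shift-sensitive assistant model predicts them correctly, then align same-class representations across the two groups. Everything here (the function names, the contrastive form, the PyTorch framing) is an illustrative assumption, not the authors' code.

```python
import torch
import torch.nn.functional as F

def split_by_proxy_correctness(assistant_logits, labels):
    """Group samples by whether the assistant's proxy prediction is correct.

    A shift-sensitive assistant tends to be right on graphs that follow the
    dominant spurious pattern and wrong on those that conflict with it, so
    the two groups differ mainly in their spurious subgraphs.
    """
    return assistant_logits.argmax(dim=-1) == labels

def cross_group_contrastive_loss(z, labels, in_correct_group, tau=0.5):
    """Pull together same-class graphs drawn from opposite groups.

    Features that stay predictive across both groups are, under the paper's
    variation sufficiency/consistency assumptions, the invariant ones.
    """
    z = F.normalize(z, dim=-1)
    sim = torch.exp(z @ z.t() / tau)
    not_self = ~torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    loss, pairs = z.new_zeros(()), 0
    for i in range(z.size(0)):
        # Positives: same label, opposite proxy-correctness group.
        pos = ((labels == labels[i])
               & (in_correct_group != in_correct_group[i])
               & not_self[i])
        if pos.any():
            loss = loss - torch.log(sim[i][pos] / sim[i][not_self[i]].sum()).mean()
            pairs += 1
    return loss / max(pairs, 1)
```

In the paper's setting, z would be the representation of the extracted candidate invariant subgraph, trained jointly with an ordinary classification loss on that subgraph.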
Related papers
- Mitigating Graph Covariate Shift via Score-based Out-of-distribution Augmentation [16.59129444793973]
Distribution shifts between training and testing datasets significantly impair model performance in graph learning.
We introduce a novel approach using score-based graph generation strategies that synthesize unseen environmental features while preserving the validity and stable features of overall graph patterns.
arXiv Detail & Related papers (2024-10-23T02:09:02Z)
- Invariant Graph Learning Meets Information Bottleneck for Out-of-Distribution Generalization [9.116601683256317]
In this work, we propose a novel framework called Invariant Graph Learning based on Information Bottleneck theory (InfoIGL); see the objective sketch after this entry.
Specifically, InfoIGL introduces a redundancy filter to compress task-irrelevant information related to environmental factors.
Experiments on both synthetic and real-world datasets demonstrate that our method achieves state-of-the-art performance under OOD generalization.
arXiv Detail & Related papers (2024-08-03T07:38:04Z)
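Read as an information-bottleneck objective, the redundancy filter trades prediction against compression. Below is a rough sketch under my own assumptions (a per-node attention `node_att` in [0, 1] stands in for the paper's filter, and its mean plays the compression role); this is not InfoIGL's actual loss.

```python
import torch.nn.functional as F

def bottleneck_loss(logits, labels, node_att, beta=0.1):
    # Prediction term: retain label-relevant information.
    prediction = F.cross_entropy(logits, labels)
    # Compression term: shrink total attention mass so task-irrelevant
    # (environmental) detail is filtered out of the representation.
    compression = node_att.mean()
    return prediction + beta * compression
```

Here `beta` sets how aggressively environment-related detail is squeezed out, mirroring the usual bottleneck trade-off.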
- Discovering Invariant Neighborhood Patterns for Heterophilic Graphs [32.315495035666636]
We propose a novel Invariant Neighborhood Pattern Learning (INPL) framework to alleviate the distribution shift problem on non-homophilous graphs.
We show that INPL achieves state-of-the-art performance for learning on large non-homophilous graphs.
arXiv Detail & Related papers (2024-03-15T02:25:45Z)
- GSINA: Improving Subgraph Extraction for Graph Invariant Learning via Graph Sinkhorn Attention [52.67633391931959]
Graph invariant learning (GIL) has been an effective approach to discovering the invariant relationships between graph data and its labels.
We propose a novel graph attention mechanism called Graph Sinkhorn Attention (GSINA).
GSINA is able to obtain meaningful, differentiable invariant subgraphs with controllable sparsity and softness; see the Sinkhorn sketch after this entry.
arXiv Detail & Related papers (2024-02-11T12:57:16Z)
- Graph Invariant Learning with Subgraph Co-mixup for Out-Of-Distribution Generalization [51.913685334368104]
We propose a novel graph invariant learning method based on an invariant and variant patterns co-mixup strategy; see the sketch after this entry.
Our method significantly outperforms the state of the art under various distribution shifts.
arXiv Detail & Related papers (2023-12-18T07:26:56Z)
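A hedged, representation-level reading of the co-mixup idea (the paper operates on actual subgraphs; mixing pooled embeddings is my simplification): keep each graph's invariant part but splice in another graph's variant part, so the same label-relevant pattern is seen under swapped spurious contexts.

```python
import torch

def co_mixup(z_inv, z_var, lam=0.7):
    """z_inv, z_var: (batch, d) pooled invariant / variant embeddings.

    Labels follow the invariant part and are left unchanged.
    """
    perm = torch.randperm(z_inv.size(0))
    # Each sample keeps its own invariant features but receives a randomly
    # paired sample's variant features.
    return lam * z_inv + (1 - lam) * z_var[perm]
```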
- Graph Out-of-Distribution Generalization with Controllable Data Augmentation [51.17476258673232]
Graph Neural Networks (GNNs) have demonstrated extraordinary performance in classifying graph properties.
Due to selection bias between the training and testing data, distribution deviation is widespread.
We propose OOD calibration to measure the distribution deviation of virtual samples.
arXiv Detail & Related papers (2023-08-16T13:10:27Z)
- Unleashing the Power of Graph Data Augmentation on Covariate Distribution Shift [50.98086766507025]
We propose a simple yet effective data augmentation strategy, Adversarial Invariant Augmentation (AIA); see the sketch after this entry.
AIA aims to extrapolate and generate new environments while preserving the original stable features during augmentation.
arXiv Detail & Related papers (2022-11-05T07:55:55Z)
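The adversarial half of AIA reads like a standard gradient-ascent augmentation loop, which is easy to sketch; the `stable_mask` gating and all names here are my assumptions, not AIA's actual design.

```python
import torch
import torch.nn.functional as F

def adversarial_augment(model, x, y, stable_mask, step=0.01, n_steps=5):
    """Perturb x to increase the task loss (extrapolating toward unseen
    environments) while leaving features flagged as stable untouched.

    stable_mask: same shape as x, 1 where a feature is stable, else 0.
    """
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(n_steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += step * grad.sign() * (1 - stable_mask)  # ascend off-mask only
    return (x + delta).detach()
```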
- Invariance Principle Meets Out-of-Distribution Generalization on Graphs [66.04137805277632]
The complex nature of graphs thwarts the adoption of the invariance principle for OOD generalization.
Domain or environment partitions, which are often required by OOD methods, can be expensive to obtain for graphs.
We propose a novel framework that explicitly models this process using a contrastive strategy.
arXiv Detail & Related papers (2022-02-11T04:38:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.