Does Invariant Graph Learning via Environment Augmentation Learn Invariance?
- URL: http://arxiv.org/abs/2310.19035v1
- Date: Sun, 29 Oct 2023 14:57:37 GMT
- Title: Does Invariant Graph Learning via Environment Augmentation Learn Invariance?
- Authors: Yongqiang Chen, Yatao Bian, Kaiwen Zhou, Binghui Xie, Bo Han, James Cheng
- Abstract summary: Invariant graph representation learning aims to learn the invariance among data from different environments for out-of-distribution generalization on graphs.
We develop a set of minimal assumptions, including variation sufficiency and variation consistency, for feasible invariant graph learning.
We show that extracting the maximally invariant subgraph with respect to the proxy predictions provably identifies the underlying invariant subgraph for successful OOD generalization.
- Score: 39.08988313527199
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Invariant graph representation learning aims to learn the invariance among
data from different environments for out-of-distribution generalization on
graphs. As the graph environment partitions are usually expensive to obtain,
augmenting the environment information has become the de facto approach.
However, the usefulness of the augmented environment information has never been
verified. In this work, we find that it is fundamentally impossible to learn
invariant graph representations via environment augmentation without additional
assumptions. Therefore, we develop a set of minimal assumptions, including
variation sufficiency and variation consistency, for feasible invariant graph
learning. We then propose a new framework Graph invAriant Learning Assistant
(GALA). GALA incorporates an assistant model that is required to be sensitive to
graph environment changes or distribution shifts. The correctness of the
assistant model's proxy predictions can therefore differentiate the variations
in spurious subgraphs. We show that extracting the maximally invariant subgraph
with respect to the proxy predictions provably identifies the underlying
invariant subgraph for successful OOD generalization under the established
minimal assumptions.
Extensive experiments on datasets including DrugOOD with various graph
distribution shifts confirm the effectiveness of GALA.
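
The mechanism outlined in the abstract, an assistant model whose proxy-prediction correctness splits the training data into environment-like groups, over which an invariance objective is then enforced, can be made concrete with a short sketch. The code below is a minimal, hypothetical illustration of that idea only, not the authors' implementation: the function names `proxy_environments` and `gala_style_loss` are invented here, and a VREx-style variance penalty is swapped in as the invariance objective, which differs from GALA's actual objective.

```python
import torch
import torch.nn.functional as F

# Minimal sketch (not the paper's code): an ERM-biased "assistant" model's
# prediction correctness splits samples into two proxy environments, and the
# main model is penalized for risk variance across those groups.

def proxy_environments(assistant_logits, labels):
    """Group samples by whether the assistant predicts them correctly.

    Correct vs. incorrect proxy predictions are assumed to reflect different
    variations of the spurious subgraph.
    """
    correct = assistant_logits.argmax(dim=-1) == labels
    return correct, ~correct

def gala_style_loss(model_logits, labels, groups, penalty_weight=1.0):
    """Mean risk plus a VREx-style variance penalty across proxy groups."""
    risks = [
        F.cross_entropy(model_logits[mask], labels[mask])
        for mask in groups
        if mask.any()
    ]
    risks = torch.stack(risks)
    penalty = risks.var() if risks.numel() > 1 else risks.new_zeros(())
    return risks.mean() + penalty_weight * penalty

# Toy usage with random tensors standing in for graph-encoder outputs.
torch.manual_seed(0)
labels = torch.randint(0, 2, (32,))
assistant_logits = torch.randn(32, 2)                  # frozen, shift-sensitive assistant
model_logits = torch.randn(32, 2, requires_grad=True)  # stands in for the main model
groups = proxy_environments(assistant_logits, labels)
loss = gala_style_loss(model_logits, labels, groups)
loss.backward()                                        # gradients reach the main model
```

Grouping by assistant correctness stands in for the unavailable environment labels; any invariance regularizer that compares risks across the two groups could be slotted into `gala_style_loss`.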
Related papers
- Generative Risk Minimization for Out-of-Distribution Generalization on Graphs [71.48583448654522]
We propose an innovative framework, named Generative Risk Minimization (GRM), designed to generate an invariant subgraph for each input graph to be classified, rather than extracting one.
We conduct extensive experiments across a variety of real-world graph datasets for both node-level and graph-level OOD generalization.
arXiv Detail & Related papers (2025-02-11T21:24:13Z)
- A Unified Invariant Learning Framework for Graph Classification [25.35939628738617]
Invariant learning aims to recognize stable features in graph data for classification.
We introduce the Unified Invariant Learning framework for graph classification.
We present both theoretical and empirical evidence to confirm our method's ability to recognize superior stable features.
arXiv Detail & Related papers (2025-01-22T02:45:21Z)
- diffIRM: A Diffusion-Augmented Invariant Risk Minimization Framework for Spatiotemporal Prediction over Graphs [6.677219861416146]
Spatiotemporal prediction over graphs (GSTP) is challenging because real-world data suffers from the out-of-distribution (OOD) problem.
In this study, we propose a diffusion-augmented invariant risk minimization (diffIRM) framework that combines diffusion-based data augmentation with invariant risk minimization.
arXiv Detail & Related papers (2024-12-31T06:45:47Z)
- Invariant Graph Learning Meets Information Bottleneck for Out-of-Distribution Generalization [9.116601683256317]
In this work, we propose a novel framework, called Invariant Graph Learning based on Information bottleneck theory (InfoIGL).
Specifically, InfoIGL introduces a redundancy filter to compress task-irrelevant information related to environmental factors.
Experiments on both synthetic and real-world datasets demonstrate that our method achieves state-of-the-art performance under OOD generalization.
arXiv Detail & Related papers (2024-08-03T07:38:04Z)
- Discovering Invariant Neighborhood Patterns for Heterophilic Graphs [32.315495035666636]
We propose a novel Invariant Neighborhood Pattern Learning (INPL) framework to alleviate the distribution shift problem on non-homophilous graphs.
We show that INPL can achieve state-of-the-art performance for learning on large non-homophilous graphs.
arXiv Detail & Related papers (2024-03-15T02:25:45Z)
- GSINA: Improving Subgraph Extraction for Graph Invariant Learning via Graph Sinkhorn Attention [52.67633391931959]
Graph invariant learning (GIL) has been an effective approach to discovering the invariant relationships between graph data and its labels.
We propose a novel graph attention mechanism called Graph Sinkhorn Attention (GSINA).
GSINA is able to obtain meaningful, differentiable invariant subgraphs with controllable sparsity and softness; a toy sketch of the underlying Sinkhorn normalization appears after this list.
arXiv Detail & Related papers (2024-02-11T12:57:16Z)
- Graph Invariant Learning with Subgraph Co-mixup for Out-Of-Distribution Generalization [51.913685334368104]
We propose a novel graph invariant learning method based on a co-mixup strategy over invariant and variant patterns.
Our method significantly outperforms state-of-the-art baselines under various distribution shifts.
arXiv Detail & Related papers (2023-12-18T07:26:56Z)
- Graph Out-of-Distribution Generalization with Controllable Data Augmentation [51.17476258673232]
Graph Neural Networks (GNNs) have demonstrated extraordinary performance in classifying graph properties.
Due to the selection bias of training and testing data, distribution deviation is widespread.
We propose OOD calibration to measure the distribution deviation of virtual samples.
arXiv Detail & Related papers (2023-08-16T13:10:27Z)
- Unleashing the Power of Graph Data Augmentation on Covariate Distribution Shift [50.98086766507025]
We propose a simple-yet-effective data augmentation strategy, Adversarial Invariant Augmentation (AIA).
AIA aims to extrapolate and generate new environments, while concurrently preserving the original stable features during the augmentation process.
arXiv Detail & Related papers (2022-11-05T07:55:55Z)
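
The GSINA entry above describes differentiable subgraph extraction with controllable sparsity via Sinkhorn attention. As a hedged illustration of the generic building block only, the sketch below applies Sinkhorn normalization to a matrix of edge scores; the `temperature` and `n_iters` parameters are illustrative defaults, and GSINA's actual attention formulation differs in its marginals and sparsity control.

```python
import torch

def sinkhorn(scores, temperature=0.5, n_iters=20):
    """Sinkhorn normalization: alternate row and column normalization in log
    space to turn raw scores into a soft, approximately doubly-stochastic
    matrix. A lower temperature yields a sparser (more peaked) result.
    """
    log_alpha = scores / temperature
    for _ in range(n_iters):
        log_alpha = log_alpha - log_alpha.logsumexp(dim=-1, keepdim=True)  # rows
        log_alpha = log_alpha - log_alpha.logsumexp(dim=-2, keepdim=True)  # columns
    return log_alpha.exp()

# Toy usage: a differentiable soft mask over a 4x4 block of edge scores.
torch.manual_seed(0)
edge_scores = torch.randn(4, 4, requires_grad=True)
soft_mask = sinkhorn(edge_scores)        # rows and columns each sum to ~1
soft_mask.diagonal().sum().backward()    # gradients flow back to the raw scores
```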