Joint Learning of Label and Environment Causal Independence for Graph
Out-of-Distribution Generalization
- URL: http://arxiv.org/abs/2306.01103v3
- Date: Tue, 31 Oct 2023 18:40:49 GMT
- Title: Joint Learning of Label and Environment Causal Independence for Graph
Out-of-Distribution Generalization
- Authors: Shurui Gui, Meng Liu, Xiner Li, Youzhi Luo, Shuiwang Ji
- Abstract summary: We propose to incorporate label and environment causal independence (LECI) to fully make use of label and environment information.
LECI significantly outperforms prior methods on both synthetic and real-world datasets.
- Score: 60.4169201192582
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We tackle the problem of graph out-of-distribution (OOD) generalization.
Existing graph OOD algorithms either rely on restricted assumptions or fail to
exploit environment information in training data. In this work, we propose to
simultaneously incorporate label and environment causal independence (LECI) to
fully make use of label and environment information, thereby addressing the
challenges faced by prior methods on identifying causal and invariant
subgraphs. We further develop an adversarial training strategy to jointly
optimize these two properties for causal subgraph discovery with theoretical
guarantees. Extensive experiments and analysis show that LECI significantly
outperforms prior methods on both synthetic and real-world datasets,
establishing LECI as a practical and effective solution for graph OOD
generalization.
Our code is available at https://github.com/divelab/LECI.
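The adversarial training strategy described in the abstract can be sketched, loosely and under assumptions, as a min-max objective: a subgraph selector is scored by its prediction loss while two adversaries (an environment discriminator on the causal subgraph and a label classifier on the spurious remainder) enter with reversed sign. All names below (`leci_loss`, `lambda_e`, `lambda_y`) are illustrative and not taken from the official implementation at https://github.com/divelab/LECI.

```python
# Hypothetical sketch of a LECI-style joint adversarial objective.
# Via gradient reversal, the subgraph selector is rewarded when both
# adversaries fail: the environment discriminator cannot recover the
# environment from the causal subgraph (environment independence), and
# the label classifier cannot recover the label from the spurious part
# (label independence).

def leci_loss(pred_loss: float, env_adv_loss: float, label_adv_loss: float,
              lambda_e: float = 1.0, lambda_y: float = 1.0) -> float:
    """Combine the main prediction loss with two adversarial penalties.

    The adversarial terms are subtracted: minimizing the total with
    respect to the selector's parameters maximizes the adversaries'
    losses, which is what a gradient-reversal layer implements.
    """
    return pred_loss - lambda_e * env_adv_loss - lambda_y * label_adv_loss


# Toy numeric check of the sign convention: strong adversaries
# (high adversarial losses for the selector to exploit) lower the total.
total = leci_loss(1.0, 0.5, 0.25)  # 1.0 - 0.5 - 0.25 = 0.25
```

In practice the weights `lambda_e` and `lambda_y` would trade off how strongly each independence property is enforced against raw predictive accuracy; the actual paper derives these terms with theoretical guarantees rather than as a heuristic sum.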
Related papers
- Improving Graph Out-of-distribution Generalization on Real-world Data [25.328653597674197]
This paper presents the theorems of environment-label dependency and mutable rationale invariance.
Based on these analytic investigations, a novel variational-inference-based method named "Probability Dependency on Environments and Rationales for OOD Graphs on Real-world Data" is introduced.
arXiv Detail & Related papers (2024-07-14T13:48:25Z)
- Empowering Graph Invariance Learning with Deep Spurious Infomax [27.53568333416706]
We introduce a novel graph invariance learning paradigm, which induces a robust and general inductive bias.
EQuAD shows stable and enhanced performance across different degrees of bias in synthetic datasets and on challenging real-world datasets, with improvements of up to 31.76%.
arXiv Detail & Related papers (2024-07-13T14:18:47Z)
- IENE: Identifying and Extrapolating the Node Environment for Out-of-Distribution Generalization on Graphs [10.087216264788097]
We propose IENE, an OOD generalization method on graphs based on node-level environmental identification and extrapolation techniques.
It strengthens the model's ability to extract invariance from two granularities simultaneously, leading to improved generalization.
arXiv Detail & Related papers (2024-06-02T14:43:56Z)
- Graph Out-of-Distribution Generalization via Causal Intervention [74.77883794668324]
We introduce a conceptually simple yet principled approach for training robust graph neural networks (GNNs) under node-level distribution shifts.
Our method resorts to a new learning objective derived from causal inference that coordinates an environment estimator and a mixture-of-expert GNN predictor.
Our model can effectively enhance generalization under various types of distribution shifts and yields up to 27.4% accuracy improvement over state-of-the-art methods on graph OOD generalization benchmarks.
arXiv Detail & Related papers (2024-02-18T07:49:22Z)
- Graph Out-of-Distribution Generalization with Controllable Data Augmentation [51.17476258673232]
Graph Neural Network (GNN) has demonstrated extraordinary performance in classifying graph properties.
Due to the selection bias of training and testing data, distribution deviation is widespread.
We propose OOD calibration to measure the distribution deviation of virtual samples.
arXiv Detail & Related papers (2023-08-16T13:10:27Z)
- MARIO: Model Agnostic Recipe for Improving OOD Generalization of Graph Contrastive Learning [18.744939223003673]
We investigate the problem of out-of-distribution (OOD) generalization for unsupervised learning methods on graph data.
We propose a Model-Agnostic Recipe for Improving OOD generalizability.
arXiv Detail & Related papers (2023-07-24T18:05:22Z)
- Graph Structure and Feature Extrapolation for Out-of-Distribution Generalization [54.64375566326931]
Out-of-distribution (OOD) generalization deals with the prevalent learning scenario where test distribution shifts from training distribution.
We propose to achieve graph OOD generalization with the novel design of non-Euclidean-space linear extrapolation.
Our design tailors OOD samples for specific shifts without corrupting underlying causal mechanisms.
arXiv Detail & Related papers (2023-06-13T18:46:28Z)
- Learning Strong Graph Neural Networks with Weak Information [64.64996100343602]
We develop a principled approach to the problem of graph learning with weak information (GLWI).
We propose D^2PT, a dual-channel GNN framework that performs long-range information propagation not only on the input graph with incomplete structure but also on a global graph that encodes global semantic similarities.
arXiv Detail & Related papers (2023-05-29T04:51:09Z)
- Invariance Principle Meets Out-of-Distribution Generalization on Graphs [66.04137805277632]
The complex nature of graphs thwarts the adoption of the invariance principle for OOD generalization.
Domain or environment partitions, which are often required by OOD methods, can be expensive to obtain for graphs.
We propose a novel framework to explicitly model this process using a contrastive strategy.
arXiv Detail & Related papers (2022-02-11T04:38:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.