Decorr: Environment Partitioning for Invariant Learning and OOD Generalization
- URL: http://arxiv.org/abs/2211.10054v2
- Date: Wed, 22 May 2024 08:34:24 GMT
- Title: Decorr: Environment Partitioning for Invariant Learning and OOD Generalization
- Authors: Yufan Liao, Qi Wu, Zhaodi Wu, Xing Yan
- Abstract summary: Invariant learning methods are aimed at identifying a consistent predictor across multiple environments.
When environments aren't inherent in the data, practitioners must define them manually.
This environment partitioning affects invariant learning's efficacy but remains underdiscussed.
In this paper, we suggest partitioning the dataset into several environments by isolating low-correlation data subsets.
- Score: 10.799855921851332
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Invariant learning methods, aimed at identifying a consistent predictor across multiple environments, are gaining prominence in out-of-distribution (OOD) generalization. Yet, when environments aren't inherent in the data, practitioners must define them manually. This environment partitioning--algorithmically segmenting the training dataset into environments--crucially affects invariant learning's efficacy but remains underdiscussed. Proper environment partitioning could broaden the applicability of invariant learning and enhance its performance. In this paper, we suggest partitioning the dataset into several environments by isolating low-correlation data subsets. Through experiments with synthetic and real data, our Decorr method demonstrates superior performance in combination with invariant learning. Decorr mitigates the issue of spurious correlations, aids in identifying stable predictors, and broadens the applicability of invariant learning methods.
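The core idea of Decorr, partitioning the training set into low-correlation subsets, can be sketched as a greedy local search. The names (`decorr_partition`, `mean_abs_corr`) and the reassignment procedure below are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def mean_abs_corr(X):
    """Mean absolute off-diagonal Pearson correlation among features."""
    if len(X) < 3:
        return 0.0  # too few samples for a meaningful estimate
    c = np.corrcoef(X, rowvar=False)
    off_diag = c[~np.eye(c.shape[0], dtype=bool)]
    return float(np.nanmean(np.abs(off_diag)))

def decorr_partition(X, k=2, n_iter=10, seed=0):
    """Greedily assign each sample to the environment whose total
    within-subset feature correlation increases least."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, size=len(X))
    for _ in range(n_iter):
        changed = False
        for i in range(len(X)):
            best, best_cost = labels[i], np.inf
            for e in range(k):
                trial = labels.copy()
                trial[i] = e
                cost = sum(mean_abs_corr(X[trial == j]) for j in range(k))
                if cost < best_cost:
                    best, best_cost = e, cost
            if best != labels[i]:
                labels[i] = best
                changed = True
        if not changed:
            break  # converged: no sample wants to move
    return labels
```

The resulting environment labels would then be fed to any invariant learning method (e.g. IRM) in place of manually defined environments.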
Related papers
- Graph Invariant Learning with Subgraph Co-mixup for Out-Of-Distribution Generalization [51.913685334368104]
We propose a novel graph invariant learning method based on a co-mixup strategy over invariant and variant patterns.
Our method significantly outperforms state-of-the-art methods under various distribution shifts.
arXiv Detail & Related papers (2023-12-18T07:26:56Z)
- Towards Fair Disentangled Online Learning for Changing Environments [28.207499975916324]
We argue that changing environments in online learning can be attributed to partial changes in learned parameters that are specific to environments.
We propose a novel algorithm under the assumption that data collected at each time can be disentangled with two representations.
A novel regret metric is proposed that takes a mixed form of dynamic and static regret, followed by a fairness-aware long-term constraint.
arXiv Detail & Related papers (2023-05-31T19:04:16Z)
- Conformal Inference for Invariant Risk Minimization [12.049545417799125]
The application of machine learning models can be significantly impeded by the occurrence of distributional shifts.
One way to tackle this problem is to use invariant learning, such as invariant risk minimization (IRM), to acquire an invariant representation.
This paper develops methods for obtaining distribution-free prediction regions to describe uncertainty estimates for invariant representations.
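IRM, named above, is commonly trained with the IRMv1 penalty: the squared gradient of each environment's risk with respect to a fixed scalar classifier w, evaluated at w = 1. A minimal numpy sketch for squared loss (the function name `irm_penalty` is ours, and this is the standard IRMv1 formulation rather than anything specific to the conformal inference paper):

```python
import numpy as np

def irm_penalty(pred, y):
    """IRMv1 penalty for squared loss: with risk R(w) = mean((w*pred - y)^2),
    the gradient at w = 1 is 2 * mean((pred - y) * pred); return its square."""
    grad = 2.0 * np.mean((pred - y) * pred)
    return grad ** 2
```

A perfect predictor incurs zero penalty, while a predictor that only fits by rescaling the invariant representation is penalized, which is what drives the representation toward invariance across environments.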
arXiv Detail & Related papers (2023-05-22T03:48:38Z)
- A step towards the applicability of algorithms based on invariant causal learning on observational data [0.0]
In this paper, we show how to apply Invariant Causal Prediction (ICP) efficiently when integrated with existing causal discovery methods, as well as with our own method for causal discovery.
arXiv Detail & Related papers (2023-04-05T08:15:57Z)
- Boosting Differentiable Causal Discovery via Adaptive Sample Reweighting [62.23057729112182]
Differentiable score-based causal discovery methods learn a directed acyclic graph from observational data.
We propose a model-agnostic framework to boost causal discovery performance by dynamically learning the adaptive weights for the Reweighted Score function, ReScore.
arXiv Detail & Related papers (2023-03-06T14:49:59Z)
- Unleashing the Power of Graph Data Augmentation on Covariate Distribution Shift [50.98086766507025]
We propose a simple yet effective data augmentation strategy, Adversarial Invariant Augmentation (AIA).
AIA aims to extrapolate and generate new environments, while concurrently preserving the original stable features during the augmentation process.
arXiv Detail & Related papers (2022-11-05T07:55:55Z)
- Differentiable Invariant Causal Discovery [106.87950048845308]
Learning causal structure from observational data is a fundamental challenge in machine learning.
This paper proposes Differentiable Invariant Causal Discovery (DICD) to avoid learning spurious edges and wrong causal directions.
Extensive experiments on synthetic and real-world datasets verify that DICD outperforms state-of-the-art causal discovery methods up to 36% in SHD.
arXiv Detail & Related papers (2022-05-31T09:29:07Z)
- ZIN: When and How to Learn Invariance by Environment Inference? [24.191152823045385]
Invariant learning methods have been proposed to learn robust and invariant models based on environment partitions.
We show that learning invariant features under this circumstance is fundamentally impossible without further inductive biases or additional information.
We propose a framework to jointly learn environment partition and invariant representation, assisted by additional auxiliary information.
arXiv Detail & Related papers (2022-03-11T10:00:33Z)
- Learning Conditional Invariance through Cycle Consistency [60.85059977904014]
We propose a novel approach to identify meaningful and independent factors of variation in a dataset.
Our method involves two separate latent subspaces for the target property and the remaining input information.
We demonstrate on synthetic and molecular data that our approach identifies more meaningful factors which lead to sparser and more interpretable models.
arXiv Detail & Related papers (2021-11-25T17:33:12Z)
- Predict then Interpolate: A Simple Algorithm to Learn Stable Classifiers [59.06169363181417]
Predict then Interpolate (PI) is an algorithm for learning correlations that are stable across environments.
We prove that by interpolating the distributions of the correct predictions and the wrong predictions, we can uncover an oracle distribution where the unstable correlation vanishes.
arXiv Detail & Related papers (2021-05-26T15:37:48Z)
- Environment Inference for Invariant Learning [9.63004099102596]
We propose EIIL, a framework for domain-invariant learning that incorporates Environment Inference.
We show that EIIL outperforms invariant learning methods on the CMNIST benchmark without using environment labels.
We also establish connections between EIIL and algorithmic fairness, which enables EIIL to improve accuracy and calibration in a fair prediction problem.
arXiv Detail & Related papers (2020-10-14T17:11:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.