Meta-Learned Invariant Risk Minimization
- URL: http://arxiv.org/abs/2103.12947v1
- Date: Wed, 24 Mar 2021 02:52:48 GMT
- Title: Meta-Learned Invariant Risk Minimization
- Authors: Jun-Hyun Bae, Inchul Choi, Minho Lee
- Abstract summary: Empirical Risk Minimization (ERM) based machine learning algorithms have suffered from weak generalization performance on out-of-distribution (OOD) data.
In this paper, we propose a novel meta-learning based approach for IRM.
We show that our algorithm not only has better OOD generalization performance than IRMv1 and all IRM variants, but also addresses the weakness of IRMv1 with improved stability.
- Score: 12.6484257912092
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Empirical Risk Minimization (ERM) based machine learning algorithms have
suffered from weak generalization performance on out-of-distribution (OOD)
data. To address this problem, the Invariant Risk Minimization (IRM) objective
was proposed to find an invariant optimal predictor that is less affected by
changes in the data distribution. However, even with such progress,
IRMv1, the practical formulation of IRM, still shows performance degradation
when training data are scarce, and even fails to generalize to OOD data when
the number of spurious correlations is larger than the number of environments.
In this paper, to address such problems, we propose a novel meta-learning based
approach for IRM. In this method, we do not assume linearity of the classifier
for ease of optimization, and we solve the ideal bi-level IRM objective with
the Model-Agnostic Meta-Learning (MAML) framework. Our method is
more robust to the data with spurious correlations and can provide an invariant
optimal classifier even when data from each distribution are scarce. In
experiments, we demonstrate that our algorithm not only has better OOD
generalization performance than IRMv1 and all IRM variants, but also addresses
the weakness of IRMv1 with improved stability.
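To make the approach concrete, here is a minimal sketch of one way to couple the bi-level IRM objective with MAML-style inner/outer updates: the classifier is adapted to each environment with an inner gradient step, and the featurizer is updated so that the adapted classifiers do well on held-out query data. The synthetic environments, network sizes, learning rates, and single inner step are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

def make_env(n, flip_p):
    """Toy environment: the invariant feature tracks the label; the spurious
    feature tracks it with environment-specific reliability (1 - flip_p)."""
    y = torch.randint(0, 2, (n,)).float()
    inv = y + 0.1 * torch.randn(n)
    flip = (torch.rand(n) < flip_p).float()
    spur = y * (1 - flip) + (1 - y) * flip + 0.1 * torch.randn(n)
    return torch.stack([inv, spur], 1), y

envs = [make_env(512, 0.10), make_env(512, 0.25)]

featurizer = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 8))
classifier = nn.Linear(8, 1)  # adapted per environment in the inner loop
outer_opt = torch.optim.Adam(
    list(featurizer.parameters()) + list(classifier.parameters()), lr=1e-3)
inner_lr = 0.1

for step in range(300):
    outer_loss = 0.0
    for x, y in envs:
        # Support/query split within each environment, as in MAML.
        xs, ys, xq, yq = x[:256], y[:256], x[256:], y[256:]
        w, b = classifier.weight, classifier.bias
        # Inner loop: one gradient step adapts the classifier to this
        # environment on the support set; create_graph=True keeps the graph
        # so the outer step can differentiate through the adaptation.
        s_logits = (featurizer(xs) @ w.t() + b).squeeze(1)
        s_loss = F.binary_cross_entropy_with_logits(s_logits, ys)
        gw, gb = torch.autograd.grad(s_loss, (w, b), create_graph=True)
        w2, b2 = w - inner_lr * gw, b - inner_lr * gb
        # Outer loss: risk of the adapted classifier on the query set.
        # Summed over environments, this pushes the featurizer toward a
        # representation whose optimal classifier transfers across
        # environments -- the bi-level IRM objective, without IRMv1's
        # linear-classifier gradient penalty.
        q_logits = (featurizer(xq) @ w2.t() + b2).squeeze(1)
        outer_loss = outer_loss + F.binary_cross_entropy_with_logits(q_logits, yq)
    outer_opt.zero_grad()
    outer_loss.backward()
    outer_opt.step()
```

Because the outer step differentiates through the inner adaptation, no linearity assumption on the classifier is needed, which is what distinguishes this bi-level treatment from IRMv1's gradient-penalty surrogate.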
Related papers
- A Stochastic Approach to Bi-Level Optimization for Hyperparameter Optimization and Meta Learning [74.80956524812714]
We tackle the general differentiable meta learning problem that is ubiquitous in modern deep learning.
These problems are often formalized as bi-level optimizations (BLO).
We introduce a novel perspective by turning a given BLO problem into a stochastic optimization, where the inner loss function becomes a smooth probability distribution and the outer loss becomes an expected loss over the inner distribution.
arXiv Detail & Related papers (2024-10-14T12:10:06Z) - On the Performance of Empirical Risk Minimization with Smoothed Data [59.3428024282545]
We show that Empirical Risk Minimization (ERM) is able to achieve sublinear error whenever a class is learnable with iid data.
arXiv Detail & Related papers (2024-02-22T21:55:41Z) - What Is Missing in IRM Training and Evaluation? Challenges and Solutions [41.56612265456626]
Invariant risk minimization (IRM) has received increasing attention as a way to acquire environment-agnostic data representations and predictions.
Recent works have found that the optimality of the originally-proposed IRM optimization (IRMv1) may be compromised in practice.
We identify and resolve three practical limitations in IRM training and evaluation.
arXiv Detail & Related papers (2023-03-04T07:06:24Z) - The Missing Invariance Principle Found -- the Reciprocal Twin of
Invariant Risk Minimization [7.6146285961466]
Invariant Risk Minimization (IRM) can fail to generalize to out-of-distribution (OOD) data.
We show that MRI-v1 can guarantee invariant predictors given sufficient environments.
We also demonstrate that MRI strongly outperforms IRM and achieves near-optimal OOD generalization in image-based problems.
arXiv Detail & Related papers (2022-05-29T00:14:51Z) - Risk Minimization from Adaptively Collected Data: Guarantees for
Supervised and Policy Learning [57.88785630755165]
Empirical risk minimization (ERM) is the workhorse of machine learning, but its model-agnostic guarantees can fail when we use adaptively collected data.
We study a generic importance sampling weighted ERM algorithm for using adaptively collected data to minimize the average of a loss function over a hypothesis class (a minimal sketch of this weighting appears after this list).
For policy learning, we provide rate-optimal regret guarantees that close an open gap in the existing literature whenever exploration decays to zero.
arXiv Detail & Related papers (2021-06-03T09:50:13Z) - Heterogeneous Risk Minimization [25.5458915855661]
Invariant learning methods for out-of-distribution generalization leverage multiple training environments to find invariant relationships.
Modern datasets, however, are assembled by merging data from multiple sources without explicit source labels.
We propose the Heterogeneous Risk Minimization (HRM) framework to jointly learn the latent heterogeneity among the data and the invariant relationship (a crude sketch appears after this list).
arXiv Detail & Related papers (2021-05-09T02:51:36Z) - Fairness and Robustness in Invariant Learning: A Case Study in Toxicity
Classification [13.456851070400024]
Invariant Risk Minimization (IRM) is a domain generalization algorithm that employs a causal discovery inspired method to find robust predictors.
We show that IRM achieves better out-of-distribution accuracy and fairness than Empirical Risk Minimization (ERM) methods.
arXiv Detail & Related papers (2020-11-12T16:42:14Z) - Empirical or Invariant Risk Minimization? A Sample Complexity
Perspective [49.43806345820883]
It is unclear when Invariant Risk Minimization (IRM) should be preferred over the widely-employed Empirical Risk Minimization (ERM) framework.
We find that, depending on the type of data generation mechanism, the two approaches might have very different finite-sample and asymptotic behavior.
We further investigate how different factors -- the number of environments, complexity of the model, and IRM penalty weight -- impact the sample complexity of IRM in relation to its distance from the OOD solutions.
arXiv Detail & Related papers (2020-10-30T17:55:30Z) - The Risks of Invariant Risk Minimization [52.7137956951533]
Invariant Risk Minimization is an objective based on the idea of learning deep, invariant features of data.
We present the first analysis of classification under the IRM objective--as well as these recently proposed alternatives--under a fairly natural and general model.
We show that IRM can fail catastrophically unless the test data are sufficiently similar to the training distribution--this is precisely the issue that it was intended to solve.
arXiv Detail & Related papers (2020-10-12T14:54:32Z) - Least Squares Regression with Markovian Data: Fundamental Limits and
Algorithms [69.45237691598774]
We study the problem of least squares linear regression where the data-points are dependent and are sampled from a Markov chain.
We establish sharp information-theoretic minimax lower bounds for this problem in terms of the mixing time $\tau_{\mathsf{mix}}$.
We propose an algorithm based on experience replay--a popular reinforcement learning technique--that achieves a significantly better error rate (a minimal sketch appears after this list).
arXiv Detail & Related papers (2020-06-16T04:26:50Z)
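For the "Risk Minimization from Adaptively Collected Data" entry above, a minimal sketch of importance-sampling weighted ERM: each collected sample is reweighted by the inverse of the probability with which it was collected. For simplicity the collection probability decays over time rather than depending on past observations, and the linear model, constants, and closed-form solve are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 10_000, 3
theta_star = np.array([1.0, -2.0, 0.5])

X = rng.normal(size=(n, d))
y = X @ theta_star + 0.1 * rng.normal(size=n)

# Collection probability decays toward a floor over time, mimicking
# exploration that decays to zero.
p = np.maximum(0.05, 1.0 / np.sqrt(np.arange(1, n + 1)))
collected = rng.random(n) < p
w = 1.0 / p  # importance weight = inverse collection propensity

# Importance-sampling weighted ERM over linear predictors: weighted least
# squares on the collected subset, solving (X^T W X) theta = X^T W y.
Xc, yc, wc = X[collected], y[collected], w[collected]
Xw = Xc * wc[:, None]
theta_hat = np.linalg.solve(Xc.T @ Xw, Xw.T @ yc)
print("parameter error:", np.linalg.norm(theta_hat - theta_star))
```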
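For the "Heterogeneous Risk Minimization" entry above, a crude sketch of the two-stage idea: infer pseudo-environments from pooled, source-unlabeled data, then learn a predictor that is invariant across them. The median-residual split and the IRMv1-style gradient penalty are illustrative stand-ins for HRM's actual heterogeneity-identification and invariant-learning components.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

def make_source(n, flip_p):
    y = torch.randint(0, 2, (n,)).float()
    inv = y + 0.1 * torch.randn(n)
    flip = (torch.rand(n) < flip_p).float()
    spur = y * (1 - flip) + (1 - y) * flip + 0.1 * torch.randn(n)
    return torch.stack([inv, spur], 1), y

# Pool two sources with different spurious reliability, then discard the
# source labels, as in the HRM setting.
xa, ya = make_source(500, 0.05)
xb, yb = make_source(500, 0.40)
X, Y = torch.cat([xa, xb]), torch.cat([ya, yb])

# Stage 1 (heterogeneity identification, crudely simplified): fit pooled ERM,
# then split samples at the median residual as pseudo-environments.
erm = nn.Linear(2, 1)
opt = torch.optim.Adam(erm.parameters(), lr=1e-2)
for _ in range(300):
    loss = F.binary_cross_entropy_with_logits(erm(X).squeeze(1), Y)
    opt.zero_grad(); loss.backward(); opt.step()
with torch.no_grad():
    resid = (torch.sigmoid(erm(X).squeeze(1)) - Y).abs()
env_id = (resid > resid.median()).long()

# Stage 2 (invariant learning): IRMv1-style penalty, i.e. the squared gradient
# of each environment's risk with respect to a dummy classifier scale.
model = nn.Linear(2, 1)
opt2 = torch.optim.Adam(model.parameters(), lr=1e-2)
scale = torch.tensor(1.0, requires_grad=True)  # dummy multiplier, never updated
for _ in range(500):
    total = 0.0
    for e in (0, 1):
        m = env_id == e
        logits = model(X[m]).squeeze(1) * scale
        risk = F.binary_cross_entropy_with_logits(logits, Y[m])
        grad = torch.autograd.grad(risk, scale, create_graph=True)[0]
        total = total + risk + 10.0 * grad ** 2
    opt2.zero_grad(); total.backward(); opt2.step()
```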
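For the "Least Squares Regression with Markovian Data" entry above, a minimal sketch of the experience-replay idea: store past (x, y) pairs in a buffer and update on uniformly sampled past pairs, decorrelating consecutive SGD steps. The AR(1) covariate chain, buffer size, and step size are illustrative assumptions, not the paper's algorithm or constants.

```python
import numpy as np

rng = np.random.default_rng(0)
d, steps, buffer_size, lr = 5, 20_000, 500, 0.01
w_star = rng.normal(size=d)  # ground-truth regressor (illustrative)

x = rng.normal(size=d)  # current state of the covariate chain
w = np.zeros(d)         # running estimate
buffer = []

for t in range(steps):
    # AR(1) covariate chain: consecutive samples are strongly correlated,
    # which is what degrades plain SGD on the freshest point.
    x = 0.9 * x + np.sqrt(1 - 0.9 ** 2) * rng.normal(size=d)
    y = x @ w_star + 0.1 * rng.normal()
    buffer.append((x.copy(), y))
    if len(buffer) > buffer_size:
        buffer.pop(0)
    # Experience replay: update on a uniformly sampled stored pair instead
    # of the latest (correlated) one.
    xb, yb = buffer[rng.integers(len(buffer))]
    w -= lr * (xb @ w - yb) * xb  # SGD step on the squared loss

print("parameter error:", np.linalg.norm(w - w_star))
```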