Minimax Regret Optimization for Robust Machine Learning under
Distribution Shift
- URL: http://arxiv.org/abs/2202.05436v1
- Date: Fri, 11 Feb 2022 04:17:22 GMT
- Title: Minimax Regret Optimization for Robust Machine Learning under
Distribution Shift
- Authors: Alekh Agarwal and Tong Zhang
- Abstract summary: We consider learning scenarios where the learned model is evaluated under an unknown test distribution.
We show that the DRO formulation does not guarantee uniformly small regret under distribution shift.
We propose an alternative method called Minimax Regret Optimization (MRO), which minimizes the worst-case regret rather than the worst-case risk.
- Score: 38.30154154957721
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we consider learning scenarios where the learned model is
evaluated under an unknown test distribution which potentially differs from the
training distribution (i.e. distribution shift). The learner has access to a
family of weight functions such that the test distribution is a reweighting of
the training distribution under one of these functions, a setting typically
studied under the name of Distributionally Robust Optimization (DRO). We
consider the problem of deriving regret bounds in the classical learning theory
setting, and require that the resulting regret bounds hold uniformly for all
potential test distributions. We show that the DRO formulation does not
guarantee uniformly small regret under distribution shift. We instead propose
an alternative method called Minimax Regret Optimization (MRO), and show that
under suitable conditions this method achieves uniformly low regret across all
test distributions. We also adapt our technique to have stronger guarantees
when the test distributions are heterogeneous in their similarity to the
training data. Given the widespread optimization of worst-case risks in current
approaches to robust machine learning, we believe that MRO can be a strong
alternative to address distribution shift scenarios.
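To make the contrast concrete, here is a minimal numerical sketch (not the paper's algorithm): over a finite model class and a finite family of candidate test distributions, DRO picks the model with the smallest worst-case risk, while MRO picks the model with the smallest worst-case regret, i.e. excess risk relative to the best model under each distribution separately. All sizes and loss values below are illustrative placeholders.

```python
# Minimal sketch contrasting DRO and MRO over a finite family of
# reweightings and a finite model class; all values are placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_models, n_weights = 200, 5, 3

# losses[m, i]: loss of model m on training example i
losses = rng.uniform(size=(n_models, n_samples))
# weights[k, i]: k-th candidate reweighting of the training distribution,
# normalized so each candidate is a proper distribution over examples
weights = rng.uniform(size=(n_weights, n_samples))
weights /= weights.sum(axis=1, keepdims=True)

# risks[m, k]: risk of model m under candidate test distribution k
risks = losses @ weights.T

# DRO: minimize the worst-case risk across candidate distributions
dro_model = int(np.argmin(risks.max(axis=1)))

# MRO: minimize the worst-case regret, i.e. risk minus the best
# achievable risk under each candidate distribution
regret = risks - risks.min(axis=0, keepdims=True)
mro_model = int(np.argmin(regret.max(axis=1)))

print(f"DRO picks model {dro_model}, MRO picks model {mro_model}")
```

Because regret subtracts each distribution's attainable minimum, MRO's objective is not dominated by a single hard distribution on which every model does poorly, which is the failure mode of DRO that the paper highlights.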
Related papers
- Generalizing to any diverse distribution: uniformity, gentle finetuning and rebalancing [55.791818510796645]
We aim to develop models that generalize well to any diverse test distribution, even if the latter deviates significantly from the training data.
Various approaches like domain adaptation, domain generalization, and robust optimization attempt to address the out-of-distribution challenge.
We adopt a more conservative perspective by accounting for the worst-case error across all sufficiently diverse test distributions within a known domain.
arXiv Detail & Related papers (2024-10-08T12:26:48Z)
- Dr. FERMI: A Stochastic Distributionally Robust Fair Empirical Risk Minimization Framework [12.734559823650887]
In the presence of distribution shifts, fair machine learning models may behave unfairly on test data.
Existing algorithms require full access to the data and cannot be used when only small batches are available.
This paper proposes the first distributionally robust fairness framework with convergence guarantees that do not require knowledge of the causal graph.
arXiv Detail & Related papers (2023-09-20T23:25:28Z)
- Learning Against Distributional Uncertainty: On the Trade-off Between Robustness and Specificity [24.874664446700272]
This paper studies a new framework that unifies three existing approaches to learning under distributional uncertainty and addresses two challenges they face.
The asymptotic properties (e.g., consistency and asymptotic normality), non-asymptotic properties (e.g., unbiasedness and error bounds), and a Monte-Carlo-based solution method of the proposed model are studied.
arXiv Detail & Related papers (2023-01-31T11:33:18Z)
- Distributionally Robust Multiclass Classification and Applications in Deep Image Classifiers [9.979945269265627]
We develop a Distributionally Robust Optimization (DRO) formulation for Multiclass Logistic Regression (MLR).
By adopting a novel random training method, we demonstrate reductions in test error rate of up to 83.5% and in loss of up to 91.3% compared with baseline methods.
arXiv Detail & Related papers (2022-10-15T05:09:28Z)
- Learnable Distribution Calibration for Few-Shot Class-Incremental Learning [122.2241120474278]
Few-shot class-incremental learning (FSCIL) faces challenges of memorizing old class distributions and estimating new class distributions given few training samples.
We propose a learnable distribution calibration (LDC) approach, with the aim of systematically solving these two challenges within a unified framework.
arXiv Detail & Related papers (2022-10-01T09:40:26Z)
- Distributionally Robust Models with Parametric Likelihood Ratios [123.05074253513935]
Three simple ideas allow us to train models with DRO using a broader class of parametric likelihood ratios.
We find that models trained with the resulting parametric adversaries are consistently more robust to subpopulation shifts than other DRO approaches; a sketch of the adversarial reweighting idea appears after this list.
arXiv Detail & Related papers (2022-04-13T12:43:12Z)
- Learning Calibrated Uncertainties for Domain Shift: A Distributionally Robust Learning Approach [150.8920602230832]
We propose a framework for learning calibrated uncertainties under domain shifts.
In particular, the density ratio estimation reflects the closeness of a target (test) sample to the source (training) distribution; a density-ratio sketch appears after this list.
We show that our proposed method generates calibrated uncertainties that benefit downstream tasks.
arXiv Detail & Related papers (2020-10-08T02:10:54Z)
- Distributional Reinforcement Learning via Moment Matching [54.16108052278444]
We formulate a method that learns a finite set of statistics from each return distribution via neural networks.
Our method can be interpreted as implicitly matching all orders of moments between a return distribution and its Bellman target; a moment-matching sketch appears after this list.
Experiments on the suite of Atari games show that our method outperforms the standard distributional RL baselines.
arXiv Detail & Related papers (2020-07-24T05:18:17Z)
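Below is a minimal sketch of the adversarial-reweighting idea behind parametric likelihood-ratio DRO, referenced above. The architectures, data, learning rates, and alternation schedule are illustrative placeholders, not the paper's recipe.

```python
# Sketch: a small adversary network assigns normalized weights to examples;
# the model minimizes the reweighted loss while the adversary maximizes it.
import torch
import torch.nn as nn

model = nn.Linear(10, 2)                     # hypothetical classifier
adversary = nn.Sequential(nn.Linear(10, 1))  # hypothetical weight network
opt_model = torch.optim.SGD(model.parameters(), lr=1e-2)
opt_adv = torch.optim.SGD(adversary.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss(reduction="none")

x = torch.randn(64, 10)                      # placeholder batch
y = torch.randint(0, 2, (64,))

for step in range(100):
    # Adversary step: ascend on the reweighted loss (model held fixed).
    # Softmax keeps the likelihood-ratio weights positive and normalized.
    w = torch.softmax(adversary(x).squeeze(-1), dim=0)
    adv_objective = (w * loss_fn(model(x), y).detach()).sum()
    opt_adv.zero_grad()
    (-adv_objective).backward()
    opt_adv.step()

    # Model step: descend on the reweighted loss (weights held fixed)
    w = torch.softmax(adversary(x).squeeze(-1), dim=0).detach()
    model_objective = (w * loss_fn(model(x), y)).sum()
    opt_model.zero_grad()
    model_objective.backward()
    opt_model.step()
```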
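Next, a minimal sketch of density-ratio estimation via a domain discriminator, one standard way to score how close a target sample is to the source distribution; the paper's exact estimator and calibration procedure may differ, and all data here is synthetic.

```python
# Sketch: train a classifier to distinguish source from target samples,
# then read a density-ratio score off its predicted probabilities.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, size=(500, 5))  # placeholder training features
target = rng.normal(0.5, 1.0, size=(500, 5))  # placeholder shifted test features

# Discriminator: label 1 = source, label 0 = target
X = np.vstack([source, target])
y = np.concatenate([np.ones(len(source)), np.zeros(len(target))])
disc = LogisticRegression().fit(X, y)

# With balanced classes, p(source|x) / p(target|x) estimates the density
# ratio p_source(x) / p_target(x); high values mean x looks in-distribution
p_source = disc.predict_proba(target)[:, 1]
ratio = p_source / (1.0 - p_source)
print("mean closeness-to-source score on target samples:", ratio.mean())
```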
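Finally, a minimal sketch of the moment-matching idea from the distributional RL entry: represent a return distribution by a finite set of particles and move them toward Bellman target particles by minimizing a kernel MMD. The kernel, bandwidth, reward, and target particles are illustrative assumptions, not the paper's configuration.

```python
# Sketch: minimize the squared MMD between learnable return particles
# and Bellman target particles for a single state-action pair.
import torch

def gaussian_kernel(a, b, bandwidth=1.0):
    # k(a_i, b_j) = exp(-(a_i - b_j)^2 / (2 * bandwidth^2))
    return torch.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * bandwidth**2))

def mmd2(particles, targets, bandwidth=1.0):
    # Squared MMD between two empirical distributions given by particles
    k_pp = gaussian_kernel(particles, particles, bandwidth).mean()
    k_tt = gaussian_kernel(targets, targets, bandwidth).mean()
    k_pt = gaussian_kernel(particles, targets, bandwidth).mean()
    return k_pp + k_tt - 2 * k_pt

# Learnable particles standing in for a network's output at one state-action
particles = torch.zeros(16, requires_grad=True)
opt = torch.optim.Adam([particles], lr=0.05)

reward, gamma = 1.0, 0.99
next_particles = torch.randn(16)             # placeholder next-state particles
bellman_target = reward + gamma * next_particles

for _ in range(200):
    loss = mmd2(particles, bellman_target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```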