Omnipredictors for Constrained Optimization
- URL: http://arxiv.org/abs/2209.07463v2
- Date: Thu, 16 Feb 2023 18:30:22 GMT
- Title: Omnipredictors for Constrained Optimization
- Authors: Lunjia Hu, Inbal Livni-Navon, Omer Reingold, Chutong Yang
- Abstract summary: We show how to obtain omnipredictors for constrained optimization problems, relying on appropriate variants of multicalibration.
We also investigate the implications of this notion when the constraints used are so-called group fairness notions.
- Score: 5.969079530101132
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The notion of omnipredictors (Gopalan, Kalai, Reingold, Sharan and Wieder
ITCS 2021) suggested a new paradigm for loss minimization. Rather than
learning a predictor based on a known loss function, omnipredictors can easily
be post-processed to minimize any one of a rich family of loss functions
compared with the loss of hypotheses in a class $\mathcal C$. It has been shown
that such omnipredictors exist and are implied (for all convex and Lipschitz
loss functions) by the notion of multicalibration from the algorithmic fairness
literature. In this paper, we introduce omnipredictors for constrained
optimization and study their complexity and implications. The notion that we
introduce allows the learner to be unaware of the loss function that will be
later assigned as well as the constraints that will be later imposed, as long
as the subpopulations that are used to define these constraints are known. We
show how to obtain omnipredictors for constrained optimization problems,
relying on appropriate variants of multicalibration. We also investigate the
implications of this notion when the constraints used are so-called group
fairness notions.
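To make the (unconstrained) omniprediction paradigm concrete, the sketch below illustrates the generic loss-specific post-processing step: a single predicted probability is converted into an action for whichever loss is chosen after training, by minimizing the expected loss under that probability. This is a minimal illustrative sketch, not the paper's construction; the function name, the grid of candidate actions, and the example losses are assumptions for illustration, and the constrained setting studied here (constraints over known subpopulations) is not modeled.

```python
import numpy as np

def post_process(p_hat, loss, actions=np.linspace(0.0, 1.0, 101)):
    """Choose an action for a given loss from a predicted probability p_hat.

    For a binary outcome y, the expected loss of action t under predicted
    probability p is  p * loss(t, 1) + (1 - p) * loss(t, 0); here the
    minimizer is approximated by a grid search over candidate actions.
    """
    expected = [p_hat * loss(t, 1) + (1.0 - p_hat) * loss(t, 0) for t in actions]
    return actions[int(np.argmin(expected))]

# The same prediction serves two losses that are only chosen after training.
squared = lambda t, y: (t - y) ** 2   # expected loss minimized at t = p
absolute = lambda t, y: abs(t - y)    # expected loss minimized at t in {0, 1}

p = 0.3  # hypothetical output of a (multi)calibrated predictor on some input x
print(post_process(p, squared))   # ~0.3
print(post_process(p, absolute))  # 0.0, since p < 0.5
```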
Related papers
- Swap Agnostic Learning, or Characterizing Omniprediction via
Multicalibration [6.91367883100748]
We introduce and study Swap Agnostic Learning.
Despite the strength of the adversary, we demonstrate the feasibility of Swap Agnostic Learning for any convex loss.
arXiv Detail & Related papers (2023-02-13T22:25:53Z) - Loss Minimization through the Lens of Outcome Indistinguishability [11.709566373491619]
We present a new perspective on convex loss and the recent notion of Omniprediction.
By design, Loss OI implies omniprediction in a direct and intuitive manner.
We show that Loss OI can be achieved for the important set of losses arising from Generalized Linear Models, without requiring full multicalibration.
arXiv Detail & Related papers (2022-10-16T22:25:27Z) - A Dimensionality Reduction Method for Finding Least Favorable Priors
with a Focus on Bregman Divergence [108.28566246421742]
This paper develops a dimensionality reduction method that allows us to move the optimization to a finite-dimensional setting with an explicit bound on the dimension.
In order to make progress on the problem, we restrict ourselves to Bayesian risks induced by a relatively large class of loss functions, namely Bregman divergences.
arXiv Detail & Related papers (2022-02-23T16:22:28Z) - Efficient and Differentiable Conformal Prediction with General Function
Classes [96.74055810115456]
We propose a generalization of conformal prediction to multiple learnable parameters.
We show that it achieves approximate valid population coverage and near-optimal efficiency within the function class.
Experiments show that our algorithm is able to learn valid prediction sets and improve the efficiency significantly.
arXiv Detail & Related papers (2022-02-22T18:37:23Z) - Lazy Lagrangians with Predictions for Online Learning [24.18464455081512]
We consider the general problem of online convex optimization with time-varying additive constraints.
A novel primal-dual algorithm is designed by combining a Follow-The-Regularized-Leader iteration with prediction-adaptive dynamic steps.
Our work extends the FTRL framework to this constrained OCO setting and outperforms the respective state-of-the-art greedy-based solutions; a generic FTRL sketch appears after this list.
arXiv Detail & Related papers (2022-01-08T21:49:10Z) - Omnipredictors [19.735769148626588]
Loss minimization is a dominant paradigm in machine learning.
We introduce the notion of an $(\mathcal{L},\mathcal{C})$-omnipredictor, which could be used to optimize any loss in a family.
We show that such "loss-oblivious" learning is feasible through a connection to multicalibration.
arXiv Detail & Related papers (2021-09-11T23:28:49Z) - Adapting to Misspecification in Contextual Bandits [82.55565343668246]
We introduce a new family of oracle-efficient algorithms for $\varepsilon$-misspecified contextual bandits.
We obtain the first algorithm that achieves the optimal $O(d\sqrt{T} + \varepsilon\sqrt{d}T)$ regret bound for unknown misspecification level.
arXiv Detail & Related papers (2021-07-12T21:30:41Z) - Beyond Value-Function Gaps: Improved Instance-Dependent Regret Bounds
for Episodic Reinforcement Learning [50.44564503645015]
We provide improved gap-dependent regret bounds for reinforcement learning in finite episodic Markov decision processes.
We prove tighter upper regret bounds for optimistic algorithms and accompany them with new information-theoretic lower bounds for a large class of MDPs.
arXiv Detail & Related papers (2021-07-02T20:36:05Z) - PAC-Bayes unleashed: generalisation bounds with unbounded losses [12.078257783674923]
We present new PAC-Bayesian generalisation bounds for learning problems with unbounded loss functions.
This extends the relevance and applicability of the PAC-Bayes learning framework.
arXiv Detail & Related papers (2020-06-12T15:55:46Z) - Approximation Schemes for ReLU Regression [80.33702497406632]
We consider the fundamental problem of ReLU regression.
The goal is to output the best fitting ReLU with respect to square loss, given access to draws from some unknown distribution.
arXiv Detail & Related papers (2020-05-26T16:26:17Z) - Excursion Search for Constrained Bayesian Optimization under a Limited
Budget of Failures [62.41541049302712]
We propose a novel decision maker grounded in control theory that controls the amount of risk we allow in the search as a function of a given budget of failures.
Our algorithm uses the failures budget more efficiently in a variety of optimization experiments, and generally achieves lower regret, than state-of-the-art methods.
arXiv Detail & Related papers (2020-05-15T09:54:09Z)
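As referenced in the Lazy Lagrangians entry above, a plain Follow-The-Regularized-Leader (FTRL) iteration for online convex optimization can be sketched as follows. This is a minimal generic sketch with linearized losses, a quadratic regularizer, and an L2-ball feasible set; the prediction-adaptive steps and the primal-dual handling of time-varying constraints from that paper are not modeled, and the function names, step size, and feasible set are illustrative assumptions.

```python
import numpy as np

def project_l2_ball(x, radius=1.0):
    """Euclidean projection onto the centered L2 ball of the given radius."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def ftrl(grad_oracle, dim, horizon, eta=0.1, radius=1.0):
    """FTRL with linearized losses and a quadratic regularizer:
    x_{t+1} = Proj( -eta * sum of past (sub)gradients ) onto the L2 ball.
    """
    grad_sum = np.zeros(dim)
    x = np.zeros(dim)
    iterates = []
    for t in range(horizon):
        g = grad_oracle(x, t)        # (sub)gradient of the t-th loss at x_t
        grad_sum += g
        x = project_l2_ball(-eta * grad_sum, radius)
        iterates.append(x)
    return iterates

# Example usage: online linear losses f_t(x) = <c_t, x> with random cost vectors.
rng = np.random.default_rng(0)
costs = rng.normal(size=(100, 5))
xs = ftrl(lambda x, t: costs[t], dim=5, horizon=100)
```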