Sample Efficient Omniprediction and Downstream Swap Regret for Non-Linear Losses
- URL: http://arxiv.org/abs/2502.12564v1
- Date: Tue, 18 Feb 2025 06:01:52 GMT
- Title: Sample Efficient Omniprediction and Downstream Swap Regret for Non-Linear Losses
- Authors: Jiuyao Lu, Aaron Roth, Mirah Shi
- Abstract summary: "Decision swap regret" generalizes both prediction for downstream swap regret and omniprediction.
We give algorithms for obtaining it for arbitrary multi-dimensional Lipschitz loss functions in online adversarial settings.
- Score: 4.509643050721454
- Abstract: We define "decision swap regret" which generalizes both prediction for downstream swap regret and omniprediction, and give algorithms for obtaining it for arbitrary multi-dimensional Lipschitz loss functions in online adversarial settings. We also give sample complexity bounds in the batch setting via an online-to-batch reduction. When applied to omniprediction, our algorithm gives the first polynomial sample-complexity bounds for Lipschitz loss functions -- prior bounds either applied only to linear loss (or binary outcomes) or scaled exponentially with the error parameter even under the assumption that the loss functions were convex. When applied to prediction for downstream regret, we give the first algorithm capable of guaranteeing swap regret bounds for all downstream agents with non-linear loss functions over a multi-dimensional outcome space: prior work applied only to linear loss functions, modeling risk neutral agents. Our general bounds scale exponentially with the dimension of the outcome space, but we give improved regret and sample complexity bounds for specific families of multidimensional functions of economic interest: constant elasticity of substitution (CES), Cobb-Douglas, and Leontief utility functions.
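For reference, the three utility families named in the abstract have standard forms; the parameterization below (weights $\alpha_i > 0$ and exponent $\rho$) is generic and not taken from the paper:
- CES: $u(x) = \left(\sum_{i=1}^{d} \alpha_i x_i^{\rho}\right)^{1/\rho}$, with $\rho \le 1$, $\rho \neq 0$
- Cobb-Douglas: $u(x) = \prod_{i=1}^{d} x_i^{\alpha_i}$
- Leontief: $u(x) = \min_{i} x_i / \alpha_i$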
Related papers
- Refined Risk Bounds for Unbounded Losses via Transductive Priors [58.967816314671296]
We revisit the sequential variants of linear regression with the squared loss, classification problems with hinge loss, and logistic regression.
Our key tools are based on the exponential weights algorithm with carefully chosen transductive priors.
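For orientation, a minimal sketch of the generic exponential weights update is given below in Python; the transductive priors that are the paper's key tool are not reproduced here, and the uniform prior and fixed learning rate are illustrative assumptions.

```python
import numpy as np

def exponential_weights(losses, eta=0.5, prior=None):
    """Generic exponential weights over a finite set of experts.

    losses: array of shape (T, n) giving the loss of each of n experts per round.
    eta: learning rate (illustrative choice, not taken from the paper).
    prior: initial weights over experts; uniform if None.
    Returns the sequence of probability distributions played.
    """
    T, n = losses.shape
    w = np.ones(n) if prior is None else np.asarray(prior, dtype=float)
    plays = []
    for t in range(T):
        p = w / w.sum()                    # distribution over experts at round t
        plays.append(p)
        w = w * np.exp(-eta * losses[t])   # multiplicative weight update
    return np.array(plays)
```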
arXiv Detail & Related papers (2024-10-29T00:01:04Z)
- LEARN: An Invex Loss for Outlier Oblivious Robust Online Optimization [56.67706781191521]
We present a robust online optimization framework in which an adversary can introduce outliers by corrupting the loss functions in an arbitrary number of rounds k, unknown to the learner.
arXiv Detail & Related papers (2024-08-12T17:08:31Z)
- Generalization Analysis for Contrastive Representation Learning [80.89690821916653]
Existing generalization error bounds depend linearly on the number $k$ of negative examples.
We establish novel generalization bounds for contrastive learning which do not depend on $k$, up to logarithmic terms.
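One common way to write the contrastive loss with $k$ negative examples (the notation here is assumed for illustration and is not taken from the paper) is
$\ell(f; x, x^{+}, x^{-}_{1}, \ldots, x^{-}_{k}) = \log\left(1 + \sum_{i=1}^{k} \exp\left(f(x)^{\top} f(x^{-}_{i}) - f(x)^{\top} f(x^{+})\right)\right)$,
which makes explicit where the number $k$ of negatives enters the bound.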
arXiv Detail & Related papers (2023-02-24T01:03:56Z)
- Omnipredictors for Constrained Optimization [5.969079530101132]
We show how to obtain omnipredictors for constrained optimization problems, relying on appropriate variants of multicalibration.
We also investigate the implications of this notion when the constraints used are so-called group fairness notions.
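For orientation, one common formulation of (approximate) multicalibration with respect to a class $\mathcal{C}$ of functions requires, for every $c \in \mathcal{C}$ and every value $v$ in the range of the predictor $p$, that
$\left|\,\mathbb{E}\left[(y - p(x))\, c(x) \mid p(x) = v\right]\right| \le \alpha$;
this is stated here only as background, and the variants used in the paper may differ in their conditioning and error accounting.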
arXiv Detail & Related papers (2022-09-15T17:04:49Z)
- Beyond Lipschitz: Sharp Generalization and Excess Risk Bounds for Full-Batch GD [31.80268332522017]
We provide sharp path-dependent generalization and excess error guarantees for full-batch Gradient Descent (GD) with smooth losses (possibly non-Lipschitz).
Our full-batch generalization error and excess risk bounds are significantly tighter than existing bounds for GD when the loss is smooth (but possibly non-Lipschitz).
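For concreteness, the full-batch GD iteration analyzed by such bounds is the standard update on the empirical risk $\widehat{L}(w) = \frac{1}{n}\sum_{i=1}^{n} \ell(w; z_i)$, namely $w_{t+1} = w_t - \eta\, \nabla \widehat{L}(w_t)$; the step-size schedule $\eta$ used in the paper is not reproduced here.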
arXiv Detail & Related papers (2022-04-26T17:05:57Z)
- Sparsest Univariate Learning Models Under Lipschitz Constraint [31.28451181040038]
We propose continuous-domain formulations for one-dimensional regression problems.
We control the Lipschitz constant explicitly using a user-defined upper-bound.
We show that both problems admit global minimizers that are continuous and piecewise-linear.
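A generic way to write such a Lipschitz-constrained regression problem (illustrative only; the paper's exact continuous-domain formulation and regularizer are not reproduced here) is
$\min_{f} \sum_{i=1}^{n} \left(f(x_i) - y_i\right)^2 \quad \text{s.t.} \quad |f(x) - f(x')| \le L\,|x - x'| \text{ for all } x, x'$,
where $L$ is the user-defined upper bound on the Lipschitz constant.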
arXiv Detail & Related papers (2021-12-27T07:03:43Z)
- Reinforcement Learning in Linear MDPs: Constant Regret and Representation Selection [136.4014229319618]
We study the role of the representation of state-action value functions in regret minimization in finite-horizon Markov Decision Processes (MDPs) with linear structure.
We first derive a necessary condition on the representation, called universally spanning optimal features (UNISOFT), to achieve constant regret in any MDP with linear reward function.
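As background, a standard way the linear structure is formalized (the paper may differ in details) is via a known feature map $\phi$ with $r(s,a) = \phi(s,a)^{\top} \theta$ and $\mathbb{P}(s' \mid s,a) = \phi(s,a)^{\top} \mu(s')$ for unknown parameters $\theta$ and measures $\mu(\cdot)$.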
arXiv Detail & Related papers (2021-10-27T22:07:08Z)
- Universal Online Convex Optimization Meets Second-order Bounds [74.0120666722487]
We propose a simple strategy for universal online convex optimization.
The key idea is to construct a set of experts to process the original online functions, and deploy a meta-algorithm over the linearized losses.
In this way, we can plug in off-the-shelf online solvers as black-box experts to deliver problem-dependent regret bounds.
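A minimal sketch of this expert/meta-algorithm pattern, using exponential weights over the linearized losses $\langle \nabla f_t(x_t), x \rangle$, is given below; the expert interface, learning rate, and weighting scheme are illustrative assumptions and differ from the paper's construction.

```python
import numpy as np

def meta_over_linearized_losses(experts, grad, T, eta=0.1):
    """Combine black-box OCO experts via exponential weights on linearized losses.

    experts: objects exposing predict() -> point (np.ndarray) and update(gradient).
    grad: callable grad(t, x) returning the gradient of the round-t loss at x.
    The surrogate loss of expert i in round t is <g_t, x_i>, where g_t is the
    gradient evaluated at the meta-algorithm's own play x_t.
    """
    n = len(experts)
    w = np.ones(n)                       # weights over experts
    plays = []
    for t in range(T):
        points = np.array([e.predict() for e in experts])  # each expert proposes a point
        p = w / w.sum()
        x_t = p @ points                 # play the weighted combination of expert points
        g_t = grad(t, x_t)               # gradient of the true loss at the played point
        lin_losses = points @ g_t        # linearized (surrogate) loss of each expert
        w = w * np.exp(-eta * lin_losses)  # Hedge update on the surrogate losses
        for e in experts:
            e.update(g_t)                # let each expert run its own black-box update
        plays.append(x_t)
    return plays
```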
arXiv Detail & Related papers (2021-05-08T11:43:49Z)
- Approximation Schemes for ReLU Regression [80.33702497406632]
We consider the fundamental problem of ReLU regression.
The goal is to output the best-fitting ReLU with respect to the square loss, given draws from some unknown distribution.
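Concretely, in a standard squared-loss formulation (assumed here) the learner seeks a weight vector $w$ minimizing $\mathbb{E}_{(x,y) \sim \mathcal{D}}\left[\left(\max(0, \langle w, x \rangle) - y\right)^{2}\right]$ given i.i.d. draws from the unknown distribution $\mathcal{D}$.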
arXiv Detail & Related papers (2020-05-26T16:26:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.