Accelerated Rates between Stochastic and Adversarial Online Convex
Optimization
- URL: http://arxiv.org/abs/2303.03272v1
- Date: Mon, 6 Mar 2023 16:41:57 GMT
- Title: Accelerated Rates between Stochastic and Adversarial Online Convex
Optimization
- Authors: Sarah Sachs, Hedi Hadiji, Tim van Erven, Cristobal Guzman
- Abstract summary: We establish novel regret bounds for online convex optimization in a setting that interpolates between i.i.d. and fully adversarial losses.
In the fully i.i.d. case, our regret bounds match the rates one would expect from results in acceleration, and we also recover the optimal stochastically accelerated rates via online-to-batch conversion.
- Score: 2.628557920905129
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Stochastic and adversarial data are two widely studied settings in online
learning. But many optimization tasks are neither i.i.d. nor fully adversarial,
which makes it of fundamental interest to get a better theoretical
understanding of the world between these extremes. In this work we establish
novel regret bounds for online convex optimization in a setting that
interpolates between stochastic i.i.d. and fully adversarial losses. By
exploiting smoothness of the expected losses, these bounds replace a dependence
on the maximum gradient length by the variance of the gradients, which was
previously known only for linear losses. In addition, they weaken the i.i.d.
assumption by allowing, for example, adversarially poisoned rounds, which were
previously considered in the related expert and bandit settings. In the fully
i.i.d. case, our regret bounds match the rates one would expect from results in
stochastic acceleration, and we also recover the optimal stochastically
accelerated rates via online-to-batch conversion. In the fully adversarial case
our bounds gracefully deteriorate to match the minimax regret. We further
provide lower bounds showing that our regret upper bounds are tight for all
intermediate regimes in terms of the stochastic variance and the adversarial
variation of the loss gradients.
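As a rough illustration of the online-to-batch conversion mentioned above (a standard reduction, not the authors' accelerated variant), the following Python sketch runs plain projected online gradient descent on i.i.d. stochastic losses and returns the averaged iterate; averaging turns a regret bound Regret_T into an excess-risk bound of order Regret_T / T. The gradient oracle, box domain, and step-size schedule are illustrative assumptions.

```python
import numpy as np

def ogd_online_to_batch(grad_oracle, x0, T, step_size=0.5):
    """Projected online gradient descent plus online-to-batch averaging.

    grad_oracle(x) returns a stochastic gradient of the expected loss at x.
    The averaged iterate inherits an excess-risk bound of order Regret_T / T.
    """
    x = np.array(x0, dtype=float)
    running_sum = np.zeros_like(x)
    for t in range(1, T + 1):
        running_sum += x                       # accumulate the played iterates
        g = grad_oracle(x)                     # stochastic gradient at the current point
        x = x - (step_size / np.sqrt(t)) * g   # OGD step with a decaying step size
        x = np.clip(x, -1.0, 1.0)              # projection onto a box domain (an assumption)
    return running_sum / T                     # online-to-batch estimate: the average iterate

# Toy usage: minimize E[(x - Z)^2] with Z ~ N(0.3, 0.1); the average iterate approaches 0.3.
rng = np.random.default_rng(0)
x_bar = ogd_online_to_batch(lambda x: 2.0 * (x - rng.normal(0.3, 0.1, size=x.shape)),
                            x0=[0.0], T=2000)
print(x_bar)
```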
Related papers
- LEARN: An Invex Loss for Outlier Oblivious Robust Online Optimization [56.67706781191521]
An adversary can introduce outliers by corrupting the loss functions in an arbitrary number k of rounds, with k unknown to the learner.
We present a robust online optimization framework for this outlier-corrupted setting.
arXiv Detail & Related papers (2024-08-12T17:08:31Z) - Distributionally Robust Optimization with Bias and Variance Reduction [9.341215359733601]
We show that Prospect, a gradient-based algorithm, enjoys linear convergence for smooth regularized losses.
We also show that Prospect can converge 2-3$\times$ faster than gradient-based baselines.
arXiv Detail & Related papers (2023-10-21T00:03:54Z) - Improving Adaptive Online Learning Using Refined Discretization [44.646191058243645]
We study unconstrained Online Linear Optimization with Lipschitz losses.
Motivated by the pursuit of instance optimality, we propose a new algorithm.
Central to these results is a continuous time approach to online learning.
arXiv Detail & Related papers (2023-09-27T21:54:52Z) - Expressive Losses for Verified Robustness via Convex Combinations [67.54357965665676]
We study the relationship between the over-approximation coefficient and performance profiles across different expressive losses.
We show that, while expressivity is essential, better approximations of the worst-case loss are not necessarily linked to superior robustness-accuracy trade-offs.
arXiv Detail & Related papers (2023-05-23T12:20:29Z) - Distributed Online Non-convex Optimization with Composite Regret [31.53784277195043]
We propose a novel composite regret that incorporates a network-level regret for distributed online optimization with general loss functions.
To our knowledge, this is the first regret bound for distributed online non-convex learning.
arXiv Detail & Related papers (2022-09-21T04:16:33Z) - Regret Minimization and Convergence to Equilibria in General-sum Markov
Games [57.568118148036376]
We present the first algorithm for learning in general-sum Markov games that provides sublinear regret guarantees when executed by all agents.
Our algorithm is decentralized, computationally efficient, and does not require any communication between agents.
arXiv Detail & Related papers (2022-07-28T16:27:59Z) - Between Stochastic and Adversarial Online Convex Optimization: Improved
Regret Bounds via Smoothness [2.628557920905129]
We establish novel regret bounds for online convex optimization in a setting that interpolates between i.i.d. and fully adversarial losses.
To accomplish this goal, we introduce two key quantities associated with the loss sequence, that we call the cumulative variance and the adversarial variation.
In the fully i.i.d. case, our bounds match the rates one would expect from results in acceleration, and in the fully adversarial case they gracefully deteriorate to match the minimax regret.
arXiv Detail & Related papers (2022-02-15T16:39:33Z) - Adaptivity and Non-stationarity: Problem-dependent Dynamic Regret for Online Convex Optimization [70.4342220499858]
We introduce novel online algorithms that can exploit smoothness and replace the dependence on $T$ in dynamic regret with problem-dependent quantities.
Our results are adaptive to the intrinsic difficulty of the problem, since the bounds are tighter than existing results for easy problems and safeguard the same rate in the worst case.
arXiv Detail & Related papers (2021-12-29T02:42:59Z) - Minimax Optimal Quantile and Semi-Adversarial Regret via
Root-Logarithmic Regularizers [31.102181563065844]
Quantile (and, more generally, KL) regret bounds relax the goal of competing against the best individual expert to only competing against a majority of experts on adversarial data.
More recently, the semi-adversarial paradigm (Bilodeau, Negrea, and Roy 2020) provides an alternative relaxation of adversarial online learning by considering data that may be neither fully adversarial nor stochastically (i.i.d.) generated.
We achieve the minimax optimal regret in both paradigms using FTRL with separate, novel, root-logarithmic regularizers, both of which can be interpreted as yielding variants of NormalHedge.
arXiv Detail & Related papers (2021-10-27T22:38:52Z) - High-probability Bounds for Non-Convex Stochastic Optimization with
Heavy Tails [55.561406656549686]
We consider non-convex stochastic optimization using first-order algorithms for which the gradient estimates may have heavy tails.
We show that a combination of gradient clipping, momentum, and normalized gradient descent converges to critical points with high probability, with the best-known iteration complexity for smooth losses (a generic sketch combining these ingredients appears after this list).
arXiv Detail & Related papers (2021-06-28T00:17:01Z) - Online nonparametric regression with Sobolev kernels [99.12817345416846]
We derive regret upper bounds for the classes of Sobolev spaces $W_p^\beta(\mathcal{X})$, $p\geq 2$, $\beta>\frac{d}{p}$.
The upper bounds are supported by a minimax regret analysis, which reveals that in the cases $\beta>\frac{d}{2}$ or $p=\infty$ these rates are (essentially) optimal.
arXiv Detail & Related papers (2021-02-06T15:05:14Z)
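Regarding the heavy-tails entry above, the following is a hedged sketch of one generic way to combine gradient clipping, momentum, and normalized steps for non-convex stochastic optimization. It illustrates the ingredients named in that abstract rather than reproducing the authors' exact algorithm; the hyperparameters lr, beta, and clip_level, and the toy objective, are assumptions.

```python
import numpy as np

def clipped_normalized_momentum_sgd(grad_oracle, x0, steps, lr=0.01, beta=0.9, clip_level=1.0):
    """Generic sketch: clip each stochastic gradient, average with momentum,
    then take a normalized (fixed-length) step. Not the paper's exact method."""
    x = np.array(x0, dtype=float)
    m = np.zeros_like(x)
    for _ in range(steps):
        g = grad_oracle(x)
        g_norm = np.linalg.norm(g)
        if g_norm > clip_level:                # clip heavy-tailed gradient estimates
            g = g * (clip_level / g_norm)
        m = beta * m + (1.0 - beta) * g        # exponential-moving-average momentum
        m_norm = np.linalg.norm(m)
        if m_norm > 0.0:
            x = x - lr * (m / m_norm)          # normalized step of fixed length lr
    return x

# Toy usage on the smooth non-convex objective f(x) = sum(x^2) + cos(sum(x)),
# with heavy-tailed (Student-t) gradient noise.
rng = np.random.default_rng(1)
grad = lambda x: 2.0 * x - np.sin(np.sum(x)) + rng.standard_t(df=3, size=x.shape)
print(clipped_normalized_momentum_sgd(grad, x0=[2.0, -1.5], steps=5000))
```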