Online Convex Optimization with a Separation Oracle
- URL: http://arxiv.org/abs/2410.02476v2
- Date: Mon, 7 Oct 2024 17:15:37 GMT
- Title: Online Convex Optimization with a Separation Oracle
- Authors: Zakaria Mhammedi
- Abstract summary: We introduce a new projection-free algorithm for Online Convex Optimization (OCO) with a state-of-the-art regret guarantee.
Our algorithm achieves a regret bound of $\widetilde{O}(\sqrt{dT} + \kappa d)$, while requiring only $\widetilde{O}(1)$ calls to a separation oracle per round.
- Score: 10.225358400539719
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this paper, we introduce a new projection-free algorithm for Online Convex Optimization (OCO) with a state-of-the-art regret guarantee among separation-based algorithms. Existing projection-free methods based on the classical Frank-Wolfe algorithm achieve a suboptimal regret bound of $O(T^{3/4})$, while more recent separation-based approaches guarantee a regret bound of $O(\kappa \sqrt{T})$, where $\kappa$ denotes the asphericity of the feasible set, defined as the ratio of the radii of the containing and contained balls. However, for ill-conditioned sets, $\kappa$ can be arbitrarily large, potentially leading to poor performance. Our algorithm achieves a regret bound of $\widetilde{O}(\sqrt{dT} + \kappa d)$, while requiring only $\widetilde{O}(1)$ calls to a separation oracle per round. Crucially, the main term in the bound, $\widetilde{O}(\sqrt{d T})$, is independent of $\kappa$, addressing the limitations of previous methods. Additionally, as a by-product of our analysis, we recover the $O(\kappa \sqrt{T})$ regret bound of existing OCO algorithms with a more straightforward analysis and improve the regret bound for projection-free online exp-concave optimization. Finally, for constrained stochastic convex optimization, we achieve a state-of-the-art convergence rate of $\widetilde{O}(\sigma/\sqrt{T} + \kappa d/T)$, where $\sigma$ represents the noise in the stochastic gradients, while requiring only $\widetilde{O}(1)$ calls to a separation oracle per iteration.
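The abstract does not spell out the update rule, but the separation-based template it improves on can be illustrated concretely: run online gradient descent, and whenever an iterate leaves the feasible set, pull it back using the separation oracle only as a membership test. The Python sketch below is an assumption-laden toy (the hypercube oracle, the bisection tolerance, and all function names are ours, not the paper's algorithm); it only shows how a few oracle calls per round can stand in for an exact Euclidean projection.

```python
import numpy as np

def separation_oracle(x):
    """Toy separation oracle for the hypercube [-1, 1]^d: returns None if
    x is feasible, else the outward normal of a violated face."""
    i = int(np.argmax(np.abs(x)))
    if abs(x[i]) <= 1.0:
        return None
    g = np.zeros_like(x)
    g[i] = np.sign(x[i])
    return g

def pull_back(y, oracle, tol=1e-6):
    """Scale y toward the origin until it is feasible, using the oracle
    only as a membership test (bisection on the scaling factor).
    Assumes the feasible set contains a small ball around the origin."""
    if oracle(y) is None:
        return y
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if oracle(mid * y) is None:
            lo = mid
        else:
            hi = mid
    return lo * y

def ogd_with_separation_oracle(gradients, eta, d):
    """Projection-free online gradient descent: each round takes a gradient
    step, then O(log(1/tol)) separation-oracle calls replace the projection."""
    x = np.zeros(d)
    plays = []
    for g in gradients:
        plays.append(x.copy())
        x = pull_back(x - eta * g, separation_oracle)
    return plays
```

Bisection to tolerance `tol` costs about $\log_2(1/\mathrm{tol})$ oracle calls per round, which is the $\widetilde{O}(1)$ per-round oracle budget the abstract refers to.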
Related papers
- p-Mean Regret for Stochastic Bandits [52.828710025519996]
We introduce a simple, unified UCB-based algorithm that achieves novel $p$-mean regret bounds.
Our framework encompasses both average cumulative regret and Nash regret as special cases.
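The summary above names only the template; for orientation, here is a minimal sketch of the plain UCB loop that such algorithms build on (the exploration constant `c` and the `pull` reward callback are illustrative assumptions; the $p$-mean-specific confidence tuning from the paper is not reproduced).

```python
import numpy as np

def ucb(pull, n_arms, horizon, c=2.0):
    """Plain UCB sketch: play each arm once, then pick the arm maximizing
    empirical mean plus a confidence width."""
    counts = np.zeros(n_arms)
    means = np.zeros(n_arms)
    for t in range(horizon):
        if t < n_arms:
            arm = t                                   # initialization: try every arm
        else:
            width = np.sqrt(c * np.log(t + 1) / counts)
            arm = int(np.argmax(means + width))
        r = pull(arm)                                 # observe a stochastic reward
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]  # incremental mean update
    return means, counts
```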
arXiv Detail & Related papers (2024-12-14T08:38:26Z)
- Optimal and Efficient Algorithms for Decentralized Online Convex Optimization [51.00357162913229]
Decentralized online convex optimization (D-OCO) is designed to minimize a sequence of global loss functions using only local computations and communications.
We develop a novel D-OCO algorithm that can reduce the regret bounds for convex and strongly convex functions to $\widetilde{O}(n\rho^{-1/4}\sqrt{T})$ and $\widetilde{O}(n\rho^{-1/2}\log T)$, respectively.
Our analysis reveals that the projection-free variant can achieve $O(nT^{3/4})$ and $O(n
arXiv Detail & Related papers (2024-02-14T13:44:16Z)
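To make the D-OCO setting concrete, here is a one-round sketch of the basic gossip-based template (the doubly stochastic mixing matrix `P`, the Euclidean-ball feasible set, and the step size are illustrative assumptions; the paper refines this template to obtain its improved dependence on the network parameter $\rho$).

```python
import numpy as np

def d_oco_round(X, P, local_grads, eta, radius=1.0):
    """One round of gossip-based decentralized online gradient descent.
    X: (n, d) array, one iterate per node; P: (n, n) doubly stochastic
    gossip matrix encoding the communication graph."""
    X_new = P @ X                        # consensus: average with neighbors
    for i, grad in enumerate(local_grads):
        X_new[i] -= eta * grad(X[i])     # local gradient step on node i's loss
        nrm = np.linalg.norm(X_new[i])
        if nrm > radius:                 # keep the iterate in a Euclidean ball
            X_new[i] *= radius / nrm
    return X_new
```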
- Nearly Minimax Optimal Regret for Learning Linear Mixture Stochastic Shortest Path [80.60592344361073]
We study the Stochastic Shortest Path (SSP) problem with a linear mixture transition kernel.
An agent repeatedly interacts with an environment and seeks to reach a certain goal state while minimizing the cumulative cost.
Existing works often assume a strictly positive lower bound on the cost function or an upper bound on the expected length of the optimal policy.
arXiv Detail & Related papers (2024-02-14T07:52:00Z)
- Accelerating Inexact HyperGradient Descent for Bilevel Optimization [84.00488779515206]
We present a method for solving general non-strongly-concave bilevel optimization problems.
Our results also improve upon the existing complexity for finding second-order stationary points in non-strongly-concave problems.
arXiv Detail & Related papers (2023-06-30T20:36:44Z)
- Projection-free Online Exp-concave Optimization [21.30065439295409]
We present an LOO-based ONS-style algorithm which, using $O(T)$ calls to a linear optimization oracle (LOO) in total, guarantees a worst-case regret bound of $\widetilde{O}(n^{2/3}T^{2/3})$.
Our algorithm is most attractive in the important and plausible scenario of low-dimensional data.
arXiv Detail & Related papers (2023-02-09T18:58:05Z)
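For context on the ONS-style algorithm above, a bare-bones Online Newton Step loop is sketched below (the ball-shaped feasible set and the crude rescaling standing in for the exact generalized projection are simplifying assumptions; the paper's point is to implement that projection step via LOO calls instead).

```python
import numpy as np

def online_newton_step(grad_at, d, horizon, gamma=0.5, eps=1.0, radius=1.0):
    """Bare-bones Online Newton Step for exp-concave losses: maintain the
    curvature matrix A_t = eps*I + sum of g g^T and take Newton-style steps.
    The rescaling below is a crude stand-in for the generalized projection."""
    A = eps * np.eye(d)
    x = np.zeros(d)
    for t in range(horizon):
        g = grad_at(t, x)                             # gradient of round-t loss at x
        A += np.outer(g, g)
        x = x - (1.0 / gamma) * np.linalg.solve(A, g) # Newton-style step
        nrm = np.linalg.norm(x)
        if nrm > radius:
            x *= radius / nrm
    return x
```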
- Exploiting the Curvature of Feasible Sets for Faster Projection-Free Online Learning [8.461907111368628]
We develop new efficient projection-free algorithms for Online Convex Optimization (OCO).
We develop an OCO algorithm that makes two calls to an LO oracle per round and achieves the near-optimal $\widetilde{O}(\sqrt{T})$ regret.
We also present an algorithm for general convex sets that makes an expected $\widetilde{O}(d)$ number of calls to an LO oracle per round.
arXiv Detail & Related papers (2022-05-23T17:13:46Z)
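As a reference point for the LO-oracle algorithms above, the classical online Frank-Wolfe template is sketched below (the oracle signature, the $1/\sqrt{t}$ step schedule, and the assumption that the origin is feasible are ours; the paper's two-calls-per-round, curvature-exploiting scheme is more refined).

```python
import numpy as np

def online_frank_wolfe(lo_oracle, grad_at, d, horizon):
    """Online Frank-Wolfe sketch: each round calls the linear-optimization
    oracle once on an accumulated gradient and moves toward its output;
    convex combinations keep the iterate feasible without projections."""
    x = np.zeros(d)                      # assumes the origin is feasible
    acc_grad = np.zeros(d)
    plays = []
    for t in range(1, horizon + 1):
        plays.append(x.copy())
        acc_grad += grad_at(t, x)
        v = lo_oracle(acc_grad)          # argmin over the set of <acc_grad, u>
        step = 1.0 / np.sqrt(t)
        x = (1.0 - step) * x + step * v  # stays in the set by convexity
    return plays
```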
- Iterative Hard Thresholding with Adaptive Regularization: Sparser Solutions Without Sacrificing Runtime [17.60502131429094]
We propose a simple modification to the iterative hard thresholding algorithm, which recovers asymptotically sparser solutions as a function of the condition number.
Our proposed algorithm, regularized IHT, returns a solution with sparsity $O(s\kappa)$.
Our algorithm significantly improves over ARHT, which also finds a solution of sparsity $O(s\kappa)$, in terms of runtime.
arXiv Detail & Related papers (2022-04-11T19:33:15Z)
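To fix ideas for the paper above, vanilla iterative hard thresholding is sketched below (the step size, iteration budget, and gradient callback are illustrative; the paper's regularized IHT adds an adaptive regularization term to this loop, which is not reproduced here).

```python
import numpy as np

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries of x and zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    out[idx] = x[idx]
    return out

def iht(grad, x0, s, eta, iters):
    """Vanilla iterative hard thresholding: a gradient step followed by
    hard thresholding to sparsity s, repeated for a fixed budget."""
    x = x0.copy()
    for _ in range(iters):
        x = hard_threshold(x - eta * grad(x), s)
    return x
```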
- New Projection-free Algorithms for Online Convex Optimization with Adaptive Regret Guarantees [21.30065439295409]
We present new efficient projection-free algorithms for online convex optimization (OCO).
Our algorithms are based on the online gradient descent algorithm, with a novel and efficient approach to computing so-called infeasible projections.
We present algorithms which, using $O(T)$ calls to the separation oracle in total, guarantee $O(\sqrt{T})$ adaptive regret and $O(T^{3/4})$ adaptive expected regret.
arXiv Detail & Related papers (2022-02-09T20:56:16Z)
- Private Stochastic Convex Optimization: Optimal Rates in $\ell_1$ Geometry [69.24618367447101]
Up to logarithmic factors, the optimal excess population loss of any $(\varepsilon,\delta)$-differentially private optimizer is $\sqrt{\log(d)/n} + \sqrt{d}/(\varepsilon n)$.
We show that when the loss functions satisfy additional smoothness assumptions, the excess loss is upper bounded (up to logarithmic factors) by $\sqrt{\log(d)/n} + (\log(d)/(\varepsilon n))^{2/3}$.
arXiv Detail & Related papers (2021-03-02T06:53:44Z)
- Optimal Regret Algorithm for Pseudo-1d Bandit Convex Optimization [51.23789922123412]
We study online learning with bandit feedback (i.e., the learner has access only to a zeroth-order oracle), where the cost/reward functions admit a "pseudo-1d" structure.
We show a lower bound of $\min(\sqrt{dT}, T^{3/4})$ on the regret of any algorithm, where $T$ is the number of rounds.
We propose a new algorithm that combines randomized online gradient descent with a kernelized exponential weights method to exploit the pseudo-1d structure effectively.
arXiv Detail & Related papers (2021-02-15T08:16:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.