Exploiting the Curvature of Feasible Sets for Faster Projection-Free
Online Learning
- URL: http://arxiv.org/abs/2205.11470v1
- Date: Mon, 23 May 2022 17:13:46 GMT
- Title: Exploiting the Curvature of Feasible Sets for Faster Projection-Free
Online Learning
- Authors: Zakaria Mhammedi
- Abstract summary: We develop new efficient projection-free algorithms for Online Convex Optimization (OCO).
We develop an OCO algorithm that makes two calls to an LO Oracle per round and achieves the near-optimal $\widetilde{O}(\sqrt{T})$ regret whenever the feasible set is strongly convex.
We also present an algorithm for general convex sets that makes an expected $\widetilde{O}(d)$ calls to an LO Oracle per round.
- Score: 8.461907111368628
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we develop new efficient projection-free algorithms for Online
Convex Optimization (OCO). Online Gradient Descent (OGD) is an example of a
classical OCO algorithm that guarantees the optimal $O(\sqrt{T})$ regret bound.
However, OGD and other projection-based OCO algorithms need to perform a
Euclidean projection onto the feasible set $\mathcal{C}\subset \mathbb{R}^d$
whenever their iterates step outside $\mathcal{C}$. For various sets of
interest, this projection step can be computationally costly, especially when
the ambient dimension is large. This has motivated the development of
projection-free OCO algorithms that swap Euclidean projections for often much
cheaper operations such as Linear Optimization (LO). However, state-of-the-art
LO-based algorithms only achieve a suboptimal $O(T^{3/4})$ regret for general
OCO. In this paper, we leverage recent results in parameter-free Online
Learning, and develop an OCO algorithm that makes two calls to an LO Oracle per
round and achieves the near-optimal $\widetilde{O}(\sqrt{T})$ regret whenever
the feasible set is strongly convex. We also present an algorithm for general
convex sets that makes $\widetilde O(d)$ expected number of calls to an LO
Oracle per round and guarantees a $\widetilde O(T^{2/3})$ regret, improving on
the previous best $O(T^{3/4})$. We achieve the latter by approximating any
convex set $\mathcal{C}$ by a strongly convex one, where LO can be performed
using $\widetilde {O}(d)$ expected number of calls to an LO Oracle for
$\mathcal{C}$.
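To make the projection-versus-LO distinction concrete, here is a minimal Python sketch (not the paper's algorithm) using the unit Euclidean ball as a toy strongly convex feasible set: projected OGD must compute a Euclidean projection after every gradient step, while a Frank-Wolfe-style update only queries a Linear Optimization oracle. Step sizes and the loss sequence are illustrative assumptions.
```python
# Minimal sketch contrasting a projection step with an LO-oracle call on the
# unit Euclidean ball C = {x : ||x|| <= 1}, a strongly convex feasible set.
import numpy as np

def euclidean_projection(y):
    """Project y onto the unit ball; the step projection-based OCO must do."""
    norm = np.linalg.norm(y)
    return y if norm <= 1.0 else y / norm

def lo_oracle(g):
    """LO oracle for the unit ball: argmin_{x in C} <g, x> = -g / ||g||."""
    norm = np.linalg.norm(g)
    return np.zeros_like(g) if norm == 0.0 else -g / norm

def ogd_step(x, grad, eta):
    """Online Gradient Descent: gradient step followed by a projection."""
    return euclidean_projection(x - eta * grad)

def online_frank_wolfe_step(x, grad, gamma):
    """Frank-Wolfe-style update: one LO call instead of a projection."""
    v = lo_oracle(grad)                   # direction returned by the LO oracle
    return (1.0 - gamma) * x + gamma * v  # convex combination stays inside C

# Toy run on linear losses f_t(x) = <g_t, x>.
rng = np.random.default_rng(0)
d, T = 5, 100
x_ogd = np.zeros(d)
x_fw = np.zeros(d)
for t in range(1, T + 1):
    g = rng.normal(size=d)  # loss gradient revealed at round t
    x_ogd = ogd_step(x_ogd, g, eta=1.0 / np.sqrt(t))
    x_fw = online_frank_wolfe_step(x_fw, g, gamma=1.0 / np.sqrt(t))
```
For the unit ball both oracles are cheap, but for sets such as nuclear-norm balls or polytopes the projection can be far more expensive than a single LO call, which is the motivation for the projection-free algorithms developed in the paper.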
Related papers
- Online Convex Optimization with a Separation Oracle [10.225358400539719]
We introduce a new projection-free algorithm for Online Convex Optimization (OCO) with a state-of-the-art regret guarantee.
Our algorithm achieves a regret bound of $\widetilde{O}(\sqrt{dT} + \kappa d)$, while requiring only $\widetilde{O}(1)$ calls to a separation oracle per round.
arXiv Detail & Related papers (2024-10-03T13:35:08Z) - Nearly Optimal Regret for Decentralized Online Convex Optimization [53.433398074919]
Decentralized online convex optimization (D-OCO) aims to minimize a sequence of global loss functions using only local computations and communications.
We develop novel D-OCO algorithms that can respectively reduce the regret bounds for convex and strongly convex functions.
Our algorithms are nearly optimal in terms of $T$, $n$, and $\rho$.
arXiv Detail & Related papers (2024-02-14T13:44:16Z) - Efficiently Learning One-Hidden-Layer ReLU Networks via Schur
Polynomials [50.90125395570797]
We study the problem of PAC learning a linear combination of $k$ ReLU activations under the standard Gaussian distribution on $\mathbb{R}^d$ with respect to the square loss.
Our main result is an efficient algorithm for this learning task with sample and computational complexity $(dk/\epsilon)^{O(k)}$, where $\epsilon>0$ is the target accuracy.
arXiv Detail & Related papers (2023-07-24T14:37:22Z) - An Oblivious Stochastic Composite Optimization Algorithm for Eigenvalue
Optimization Problems [76.2042837251496]
We introduce two oblivious mirror descent algorithms based on a complementary composite setting.
Remarkably, both algorithms work without prior knowledge of the Lipschitz constant or smoothness of the objective function.
We show how to extend our framework to scale and demonstrate the efficiency and robustness of our methods on large scale semidefinite programs.
arXiv Detail & Related papers (2023-06-30T08:34:29Z) - Projection-Free Online Convex Optimization via Efficient Newton
Iterations [10.492474737007722]
This paper presents new projection-free algorithms for Online Convex Optimization (OCO) over a convex domain $\mathcal{K} \subset \mathbb{R}^d$.
arXiv Detail & Related papers (2023-06-19T18:48:53Z) - A One-Sample Decentralized Proximal Algorithm for Non-Convex Stochastic
Composite Optimization [10.762749887051546]
We propose two single time-scale algorithms: Prox-DASA and Prox-DASA-GT.
Unlike prior work, our algorithms achieve comparable complexity without requiring large batch sizes, more complex per-iteration operations, or stronger assumptions.
arXiv Detail & Related papers (2023-02-20T05:16:18Z) - Projection-free Online Exp-concave Optimization [21.30065439295409]
We present an LOO-based ONS-style algorithm which, using overall $O(T)$ calls to a LOO, guarantees a worst-case regret bounded by $\widetilde{O}(n^{2/3}T^{2/3})$.
Our algorithm is most interesting in an important and plausible low-dimensional data scenario.
arXiv Detail & Related papers (2023-02-09T18:58:05Z) - Mind the gap: Achieving a super-Grover quantum speedup by jumping to the
end [114.3957763744719]
We present a quantum algorithm that has rigorous runtime guarantees for several families of binary optimization problems.
We show that the algorithm finds the optimal solution in time $O^*(2^{(0.5-c)n})$ for an $n$-independent constant $c$.
We also show that for a large fraction of random instances from the $k$-spin model and for any fully satisfiable or slightly frustrated $k$-CSP formula, the same runtime guarantee holds.
arXiv Detail & Related papers (2022-12-03T02:45:23Z) - New Projection-free Algorithms for Online Convex Optimization with
Adaptive Regret Guarantees [21.30065439295409]
We present new efficient projection-free algorithms for online convex optimization (OCO).
Our algorithms are based on the online gradient descent algorithm with a novel and efficient approach to computing so-called infeasible projections.
We present algorithms which, using overall $O(T)$ calls to the separation oracle, guarantee $O(\sqrt{T})$ adaptive regret and $O(T^{3/4})$ adaptive expected regret.
arXiv Detail & Related papers (2022-02-09T20:56:16Z) - Randomized Exploration for Reinforcement Learning with General Value
Function Approximation [122.70803181751135]
We propose a model-free reinforcement learning algorithm inspired by the popular randomized least squares value iteration (RLSVI) algorithm.
Our algorithm drives exploration by simply perturbing the training data with judiciously chosen i.i.d. scalar noises.
We complement the theory with an empirical evaluation across known difficult exploration tasks.
arXiv Detail & Related papers (2021-06-15T02:23:07Z) - Bayesian Optimistic Optimisation with Exponentially Decaying Regret [58.02542541410322]
The current practical BO algorithms have regret bounds ranging from $\mathcal{O}(\frac{\log N}{\sqrt{N}})$ to $\mathcal{O}(e^{-\sqrt{N}})$, where $N$ is the number of evaluations.
This paper explores the possibility of improving the regret bound in the noiseless setting by intertwining concepts from BO and tree-based optimistic optimisation.
We propose the BOO algorithm, a first practical approach which can achieve an exponential regret bound with order $\mathcal{O}(N^{-\sqrt{N}})$.
arXiv Detail & Related papers (2021-05-10T13:07:44Z)