Revisiting Projection-free Online Learning: the Strongly Convex Case
- URL: http://arxiv.org/abs/2010.07572v2
- Date: Tue, 23 Feb 2021 15:28:57 GMT
- Title: Revisiting Projection-free Online Learning: the Strongly Convex Case
- Authors: Dan Garber and Ben Kretzu
- Abstract summary: Projection-free optimization algorithms are mostly based on the classical Frank-Wolfe method.
The Online Frank-Wolfe method achieves a faster rate of $O(T^{2/3})$ on strongly convex functions.
- Score: 21.30065439295409
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Projection-free optimization algorithms, which are mostly based on the
classical Frank-Wolfe method, have gained significant interest in the machine
learning community in recent years due to their ability to handle convex
constraints that are popular in many applications, but for which computing
projections is often computationally impractical in high-dimensional settings,
and hence prohibit the use of most standard projection-based methods. In
particular, a significant research effort was put on projection-free methods
for online learning. In this paper we revisit the Online Frank-Wolfe (OFW)
method suggested by Hazan and Kale [Hazan12] and fill a gap that has been
left unnoticed for several years: OFW achieves a faster rate of $O(T^{2/3})$ on
strongly convex functions (as opposed to the standard $O(T^{3/4})$ for convex
but not strongly convex functions), where $T$ is the sequence length. This is
somewhat surprising since it is known that for offline optimization, in
general, strong convexity does not lead to faster rates for Frank-Wolfe. We
also revisit the bandit setting under strong convexity and prove a similar
bound of $\tilde O(T^{2/3})$ (instead of $O(T^{3/4})$ without strong
convexity). Hence, in the current state-of-affairs, the best projection-free
upper-bounds for the full-information and bandit settings with strongly convex
and nonsmooth functions match up to logarithmic factors in $T$.
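The OFW template discussed above replaces the projection step of standard online gradient methods with a single linear-optimization (LMO) call per round. Below is a minimal Python sketch of that template, assuming a user-supplied gradient oracle and LMO; the surrogate objective and the step-size and regularization schedules are illustrative placeholders, not the tuned choices from the paper's analysis.

```python
import numpy as np

def l1_ball_lmo(g, radius=1.0):
    """Linear optimization oracle over the l1 ball:
    returns a vertex v minimizing <g, v>."""
    i = int(np.argmax(np.abs(g)))
    v = np.zeros_like(g)
    v[i] = -radius * np.sign(g[i])
    return v

def online_frank_wolfe(grad_fns, lin_opt, x0, T):
    """Minimal sketch of the Online Frank-Wolfe template:
    one linear-optimization call per round, no projections.
    Schedules below are illustrative placeholders; the paper's
    analysis dictates the choices yielding O(T^{3/4}) regret for
    convex losses and O(T^{2/3}) for strongly convex ones."""
    x = x0.copy()
    grad_sum = np.zeros_like(x0)
    iterates = [x.copy()]
    for t in range(1, T + 1):
        g = grad_fns[t - 1](x)             # play x_t, observe gradient of f_t
        grad_sum += g
        # Gradient of a regularized surrogate
        #   F_t(y) = eta_t * <grad_sum, y> + ||y - x0||^2.
        eta_t = t ** (-0.75)               # illustrative schedule
        surrogate_grad = eta_t * grad_sum + 2.0 * (x - x0)
        v = lin_opt(surrogate_grad)        # single LMO call over the set
        sigma_t = min(1.0, 2.0 / (t + 2))  # Frank-Wolfe style step size
        x = x + sigma_t * (v - x)          # convex combination stays feasible
        iterates.append(x.copy())
    return iterates
```

For instance, with `lin_opt=l1_ball_lmo` each round costs one argmax over gradient coordinates, whereas an exact l1-ball projection would require a dedicated routine; avoiding that cost in high dimensions is precisely what motivates projection-free methods.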
Related papers
- Methods for Convex $(L_0,L_1)$-Smooth Optimization: Clipping, Acceleration, and Adaptivity [50.25258834153574]
We focus on the class of (strongly) convex $(L_0,L_1)$-smooth functions and derive new convergence guarantees for several existing methods.
In particular, we derive improved convergence rates for Gradient Descent with Smoothed Gradient Clipping and for Gradient Descent with Polyak Stepsizes.
arXiv Detail & Related papers (2024-09-23T13:11:37Z) - Two-Timescale Gradient Descent Ascent Algorithms for Nonconvex Minimax Optimization [77.3396841985172]
We provide a unified analysis of two-timescale gradient descent ascent (TTGDA) for solving structured nonconvex minimax optimization problems.
Our contribution is to design TTGDA algorithms that are effective beyond the convex-concave setting.
arXiv Detail & Related papers (2024-08-21T20:14:54Z) - Universal Online Learning with Gradient Variations: A Multi-layer Online Ensemble Approach [57.92727189589498]
We propose an online convex optimization approach with two different levels of adaptivity.
We obtain $\mathcal{O}(\log V_T)$, $\mathcal{O}(d \log V_T)$ and $\hat{\mathcal{O}}(\sqrt{V_T})$ regret bounds for strongly convex, exp-concave and convex loss functions, respectively.
arXiv Detail & Related papers (2023-07-17T09:55:35Z) - Sarah Frank-Wolfe: Methods for Constrained Optimization with Best Rates and Practical Features [65.64276393443346]
The Frank-Wolfe (FW) method is a popular approach for solving optimization problems with structured constraints.
We present two new variants of the FW algorithm for stochastic finite-sum minimization.
arXiv Detail & Related papers (2023-04-23T20:05:09Z) - Improved Dynamic Regret for Online Frank-Wolfe [54.690867216880356]
We investigate the dynamic regret of online Frank-Wolfe (OFW), which is an efficient projection-free algorithm for online convex optimization.
In this paper, we derive improved dynamic regret bounds for OFW by extending the fast convergence rates of FW from offline optimization to online optimization.
arXiv Detail & Related papers (2023-02-11T07:19:51Z) - Projection-free Adaptive Regret with Membership Oracles [31.422532403048738]
Most iterative algorithms require the computation of projections onto convex sets, which can be computationally expensive.
Recent work by [GK22] gave sublinear adaptive regret guarantees with projection-free algorithms based on the Frank-Wolfe approach.
We give projection-free algorithms that are based on a different technique, inspired by [Mhammedi22], that replaces projections by set-membership computations.
arXiv Detail & Related papers (2022-11-22T23:53:06Z) - Frank Wolfe Meets Metric Entropy [0.0]
We provide a technique for establishing domain specific and easy-to-estimate lower bounds for Frank-Wolfe.
We show that a dimension-free linear upper bound must fail not only in the worst case, but also in the average case.
We also establish this phenomenon for the nuclear norm ball.
arXiv Detail & Related papers (2022-05-17T21:23:36Z) - Projection-free Online Learning over Strongly Convex Sets [24.517908972536432]
We study the special case of online learning over strongly convex sets, for which we first prove that OFW can enjoy a better regret bound of $O(T^{2/3})$ for general convex losses.
For strongly convex losses, we show that a variant of OFW achieves a regret bound of $O(T^{2/3})$ over general convex sets and a better regret bound of $O(\sqrt{T})$ over strongly convex sets.
arXiv Detail & Related papers (2020-10-16T05:42:50Z) - Projection Efficient Subgradient Method and Optimal Nonsmooth Frank-Wolfe Method [54.93433440034386]
We find a feasible $\epsilon$-suboptimal solution using only $O(\epsilon^{-1})$ PO calls and optimal $O(\epsilon^{-2})$ FO calls.
Our experiments confirm that these methods achieve significant speedups over the state-of-the-art, for a problem with costly PO and LMO calls.
arXiv Detail & Related papers (2020-10-05T08:16:56Z) - Faster Projection-free Online Learning [34.96927532439896]
We give an efficient projection-free algorithm that guarantees $T^{2/3}$ regret for general online convex optimization.
Our algorithm is derived using the Follow-the-Perturbed-Leader method and is analyzed using an online primal-dual framework.
arXiv Detail & Related papers (2020-01-30T21:18:39Z)
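To make the Follow-the-Perturbed-Leader idea in the last entry concrete, here is a minimal sketch assuming the same kind of linear-optimization oracle `lin_opt` as above; the perturbation scale `eta` and sample count are illustrative placeholders, not the tuned parameters or exact estimator from that paper.

```python
import numpy as np

def ftpl_projection_free(grad_fns, lin_opt, dim, T, eta=1.0, n_samples=8, seed=0):
    """Sketch of projection-free online learning via Follow-the-Perturbed-Leader:
    each round solves linear optimization over perturbed cumulative gradients
    and plays the sample mean of the solutions.  Illustrative only."""
    rng = np.random.default_rng(seed)
    grad_sum = np.zeros(dim)
    # First prediction: average LMO solutions over pure noise.
    x = np.mean([lin_opt(rng.standard_normal(dim))
                 for _ in range(n_samples)], axis=0)
    plays = [x]
    for t in range(T):
        g = grad_fns[t](x)          # observe gradient of loss f_t at x_t
        grad_sum += g
        # Next play: sample-mean approximation of the perturbed leader.
        xs = [lin_opt(eta * grad_sum + rng.standard_normal(dim))
              for _ in range(n_samples)]
        x = np.mean(xs, axis=0)
        plays.append(x)
    return plays
```

Each sample is an LMO output, i.e., a point of the feasible set, so their mean lies in its convex hull and remains feasible; this is how the FTPL approach avoids projections entirely.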