Projection-Free Online Convex Optimization via Efficient Newton
Iterations
- URL: http://arxiv.org/abs/2306.11121v1
- Date: Mon, 19 Jun 2023 18:48:53 GMT
- Title: Projection-Free Online Convex Optimization via Efficient Newton
Iterations
- Authors: Khashayar Gatmiry and Zakaria Mhammedi
- Abstract summary: This paper presents new projection-free algorithms for Online Convex Optimization (OCO) over a convex domain $\mathcal{K} \subset \mathbb{R}^d$.
- Score: 10.492474737007722
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents new projection-free algorithms for Online Convex
Optimization (OCO) over a convex domain $\mathcal{K} \subset \mathbb{R}^d$.
Classical OCO algorithms (such as Online Gradient Descent) typically need to
perform Euclidean projections onto the convex set $\mathcal{K}$ to ensure feasibility
of their iterates. Alternative algorithms, such as those based on the
Frank-Wolfe method, swap potentially-expensive Euclidean projections onto
$\mathcal{K}$ for linear optimization over $\mathcal{K}$. However, such
algorithms have a sub-optimal regret in OCO compared to projection-based
algorithms. In this paper, we look at a third type of algorithms that output
approximate Newton iterates using a self-concordant barrier for the set of
interest. The use of a self-concordant barrier automatically ensures
feasibility without the need for projections. However, the computation of the
Newton iterates requires a matrix inverse, which can still be expensive. As our
main contribution, we show how the stability of the Newton iterates can be
leveraged to compute the inverse Hessian only a vanishing fraction of the
rounds, leading to a new efficient projection-free OCO algorithm with a
state-of-the-art regret bound.
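To make the mechanism concrete, here is a minimal sketch for a polytope $\{x : Ax \le b\}$ with its standard log-barrier: the iterates take Newton steps on the barrier-regularized FTRL objective, and the expensive Hessian build-and-factorize step is refreshed only every few rounds. This is an illustration of the general idea, not the paper's algorithm; the step size `eta`, the fixed refresh schedule, and the absence of damping are simplifying assumptions.

```python
import numpy as np

def barrier_grad(x, A, b):
    """Gradient of the log-barrier R(x) = -sum(log(b - Ax)) on {x : Ax < b}."""
    return A.T @ (1.0 / (b - A @ x))

def barrier_hess(x, A, b):
    """Hessian of the same log-barrier: A^T diag(1/s^2) A with slacks s = b - Ax."""
    s = b - A @ x
    return A.T @ ((1.0 / s**2)[:, None] * A)

def lazy_newton_oco(A, b, x0, loss_grads, eta=0.05, refresh=10):
    """Newton steps on the barrier-regularized FTRL objective
    F_t(x) = eta * <sum of past gradients, x> + R(x).
    The expensive part (building and factorizing the Hessian) runs only
    every `refresh` rounds; in between, the stale Cholesky factor is reused."""
    x = np.asarray(x0, dtype=float).copy()
    G = np.zeros_like(x)
    chol = None
    for t, g_t in enumerate(loss_grads):
        G += g_t                                   # accumulate observed loss gradients
        if t % refresh == 0:                       # infrequent Hessian refresh
            chol = np.linalg.cholesky(barrier_hess(x, A, b))
        rhs = eta * G + barrier_grad(x, A, b)      # gradient of F_t at x
        # Solve H d = rhs with the (possibly stale) factor H ~ L L^T.
        x = x - np.linalg.solve(chol.T, np.linalg.solve(chol, rhs))
        yield x
```

Because the barrier blows up at the boundary of $\mathcal{K}$, suitably damped Newton steps remain feasible without any projection; the damping and step-size control that the paper's analysis relies on are omitted above for brevity.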
Related papers
- Nonsmooth Projection-Free Optimization with Functional Constraints [12.20060970177791]
This paper presents a subgradient-based algorithm for constrained nonsmooth convex optimization.
Our proposed algorithm can handle nonsmooth problems with general convex functional inequality constraints.
Similar performance is observed when deterministic subgradients are replaced with stochastic subgradients.
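For intuition, one classical subgradient-based way to handle a single functional constraint $g(x) \le 0$ without projections is the switching subgradient method sketched below. The surveyed paper's algorithm is more general; treat this only as a baseline illustration, with all names generic.

```python
import numpy as np

def switching_subgradient(f_subgrad, g, g_subgrad, x0, steps, tol=1e-3):
    """Classical switching subgradient method for min f(x) s.t. g(x) <= 0.
    Feasible (up to tol) iterates descend on f; infeasible ones descend on g.
    Purely illustrative; the surveyed algorithm differs in its details."""
    x = np.asarray(x0, dtype=float)
    for k in range(1, steps + 1):
        d = f_subgrad(x) if g(x) <= tol else g_subgrad(x)
        step = 1.0 / np.sqrt(k)                         # standard diminishing step size
        x = x - step * d / (np.linalg.norm(d) + 1e-12)  # normalized subgradient step
    return x
```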
arXiv Detail & Related papers (2023-11-18T23:06:33Z) - Stochastic Optimization for Non-convex Problem with Inexact Hessian
Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) and adaptive regularization using cubics (ARC) have proven to have very appealing theoretical properties.
We show that TR and ARC methods can simultaneously accommodate inexact computations of the Hessian, gradient, and function values.
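For context, ARC methods choose each step by (approximately) minimizing a cubic-regularized model of $f$; the inexactness studied here enters through the approximations $g_k \approx \nabla f(x_k)$, $H_k \approx \nabla^2 f(x_k)$, and inexact function values:

```latex
% ARC: each step (approximately) minimizes a cubic-regularized model of f,
% built from possibly inexact gradient g_k and Hessian H_k:
m_k(s) = f(x_k) + g_k^\top s + \tfrac{1}{2}\, s^\top H_k\, s
         + \tfrac{\sigma_k}{3}\, \|s\|^3,
\qquad x_{k+1} = x_k + s_k, \quad s_k \approx \arg\min_s m_k(s).
```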
arXiv Detail & Related papers (2023-10-18T10:29:58Z) - Accelerating Cutting-Plane Algorithms via Reinforcement Learning
Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
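As a reference point, a generic cutting-plane loop with a separation oracle looks as follows; the helper names are hypothetical, and the surveyed paper's contribution is to learn, via a reinforcement-learning surrogate, which cuts to generate inside such a loop.

```python
def cutting_plane(separation_oracle, center_of, initial_cuts, max_iters=100):
    """Generic cutting-plane loop (all helper names here are hypothetical).
    separation_oracle(x) returns None if x is acceptable, else a violated cut.
    center_of(cuts) picks the next query point (e.g. an analytic center); the
    RL surrogate in the surveyed paper guides which cuts get generated."""
    cuts = list(initial_cuts)
    for _ in range(max_iters):
        x = center_of(cuts)              # query point inside current localization set
        cut = separation_oracle(x)
        if cut is None:
            return x                     # no violated cut: done
        cuts.append(cut)                 # shrink the localization set and repeat
    return None
```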
arXiv Detail & Related papers (2023-07-17T20:11:56Z) - Fast Computation of Optimal Transport via Entropy-Regularized Extragradient Methods [75.34939761152587]
Efficient computation of the optimal transport distance between two distributions serves as an algorithmic subroutine that empowers various applications.
This paper develops a scalable first-order optimization-based method that computes optimal transport to within $\varepsilon$ additive accuracy.
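One standard formulation behind such methods is the entropy-regularized OT program below (with regularization weight $\eta > 0$); the paper solves a formulation of this kind with extragradient updates rather than Sinkhorn iterations.

```latex
% Entropy-regularized optimal transport between histograms \mu and \nu
% with cost matrix C and regularization weight \eta > 0:
\min_{\Pi \in \Pi(\mu,\nu)} \;\langle C, \Pi \rangle
   \;+\; \eta \sum_{i,j} \Pi_{ij} \log \Pi_{ij},
\qquad
\Pi(\mu,\nu) = \{\, \Pi \ge 0 \;:\; \Pi \mathbf{1} = \mu,\ \Pi^\top \mathbf{1} = \nu \,\}.
```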
arXiv Detail & Related papers (2023-01-30T15:46:39Z) - Projection-free Adaptive Regret with Membership Oracles [31.422532403048738]
Most iterative algorithms require the computation of projections onto convex sets, which can be computationally expensive.
Recent work by GK22 gave sublinear adaptive regret guarantees with projection-free algorithms based on the Frank-Wolfe approach.
We give projection-free algorithms that are based on a different technique, inspired by Mhammedi22, that replaces projections by set-membership computations.
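A minimal sketch of the set-membership primitive, assuming access to a known interior point: bisect along a segment to find (approximately) where it crosses the boundary. The surveyed algorithms build on this kind of Minkowski-gauge computation, though their details differ.

```python
import numpy as np

def pull_inside(y, x_interior, is_member, iters=30):
    """Approximate a projection onto a convex set C using only a membership
    oracle: bisect along the segment from a known interior point x_interior
    to y for the largest scaling that stays inside C (a Minkowski-gauge
    computation; the surveyed algorithms refine this basic idea)."""
    if is_member(y):
        return y                          # already feasible
    lo, hi = 0.0, 1.0                     # x_interior is in C; y is not
    for _ in range(iters):                # each iteration costs one membership call
        mid = 0.5 * (lo + hi)
        if is_member(x_interior + mid * (y - x_interior)):
            lo = mid                      # still inside: push toward y
        else:
            hi = mid                      # outside: pull back toward the interior
    return x_interior + lo * (y - x_interior)
```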
arXiv Detail & Related papers (2022-11-22T23:53:06Z) - Exploiting the Curvature of Feasible Sets for Faster Projection-Free
Online Learning [8.461907111368628]
We develop new efficient projection-free algorithms for Online Convex Optimization (OCO).
We develop an OCO algorithm that makes two calls to a linear optimization (LO) oracle per round and achieves the near-optimal $\widetilde{O}(\sqrt{T})$ regret.
We also present an algorithm for general convex sets that makes an expected $\widetilde{O}(d)$ number of calls to an LO oracle per round.
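The LO-oracle primitive these algorithms build on is the Frank-Wolfe-style update sketched below: one linear optimization over $\mathcal{K}$ replaces a projection, and a convex combination keeps the iterate feasible.

```python
def lo_oracle_step(x, grad, lin_opt, gamma):
    """One projection-free update from a linear optimization (LO) oracle:
    v = argmin_{u in K} <grad, u>, then a convex combination toward v.
    This Frank-Wolfe-style primitive underlies the algorithms above."""
    v = lin_opt(grad)                     # the single expensive oracle call
    return (1.0 - gamma) * x + gamma * v  # stays in K since K is convex
```

A common choice for the mixing weight in the online setting is a schedule on the order of $\gamma_t \propto 1/\sqrt{t}$.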
arXiv Detail & Related papers (2022-05-23T17:13:46Z) - A Fully Single Loop Algorithm for Bilevel Optimization without Hessian
Inverse [121.54116938140754]
We propose a new Hessian-inverse-free fully single-loop algorithm for bilevel optimization problems.
We show that our algorithm converges at the rate of $O(\epsilon^{-2})$.
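To see why avoiding the Hessian inverse matters, recall the implicit-function expression for the hypergradient of the bilevel objective $\Phi(x) = f(x, y^*(x))$; single-loop inverse-free methods approximate the inverse-Hessian-vector product iteratively rather than forming the inverse:

```latex
% Bilevel objective \Phi(x) = f(x, y^*(x)) with y^*(x) = argmin_y g(x, y).
% The implicit-function hypergradient contains an inverse lower-level Hessian:
\nabla \Phi(x) = \nabla_x f(x, y^*)
  - \nabla^2_{xy} g(x, y^*)\,\big[\nabla^2_{yy} g(x, y^*)\big]^{-1} \nabla_y f(x, y^*).
```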
arXiv Detail & Related papers (2021-12-09T02:27:52Z) - Efficient Projection-Free Online Convex Optimization with Membership
Oracle [11.745866777357566]
We present a new reduction that turns any algorithm A defined on a Euclidean ball into an algorithm on a constrained set C contained within the ball.
Our reduction requires O(T log T) calls to a Membership Oracle on C after T rounds, and no linear optimization on C is needed.
arXiv Detail & Related papers (2021-11-10T17:22:29Z) - Newton-LESS: Sparsification without Trade-offs for the Sketched Newton
Update [88.73437209862891]
In second-order optimization, a potential bottleneck can be computing the Hessian matrix of the optimized function at every iteration.
We show that the Gaussian sketching matrix can be drastically sparsified, significantly reducing the computational cost of sketching.
We prove that Newton-LESS enjoys nearly the same problem-independent local convergence rate as Gaussian embeddings.
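Below is a rough sketch of one sketched Newton step for least squares, using a simple sparse random sketch as a stand-in for the paper's leverage-score-sparsified (LESS) construction; the sketch distribution and normalization here are illustrative assumptions, not the paper's.

```python
import numpy as np

def sketched_newton_step(A, b, x, sketch_size, nnz=8, rng=None):
    """One sketched Newton step for least squares f(x) = 0.5 * ||Ax - b||^2.
    The Hessian A^T A is replaced by (SA)^T (SA) for a sparse random sketch S,
    in the spirit of Newton-LESS. The +/-1 sparse sketch below is an
    illustrative stand-in, not the paper's leverage-score construction; it
    assumes sketch_size is large enough that the sketched Hessian is invertible."""
    rng = rng or np.random.default_rng()
    n, d = A.shape
    S = np.zeros((sketch_size, n))
    for i in range(sketch_size):          # each sketch row touches only `nnz` samples
        cols = rng.choice(n, size=nnz, replace=False)
        S[i, cols] = rng.choice([-1.0, 1.0], size=nnz) / np.sqrt(nnz)
    SA = S @ A                            # m x d sketched data, m << n
    H_sketch = SA.T @ SA                  # cheap surrogate for A^T A
    grad = A.T @ (A @ x - b)              # the exact gradient is still used
    return x - np.linalg.solve(H_sketch, grad)
```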
arXiv Detail & Related papers (2021-07-15T17:33:05Z) - Minimax Optimization with Smooth Algorithmic Adversaries [59.47122537182611]
We propose a new algorithm for the min-player against smooth algorithms deployed by an adversary.
Our algorithm is guaranteed to make monotonic progress (having no limit cycles) and to find an appropriate stationary point in a polynomial number of iterations.
arXiv Detail & Related papers (2021-06-02T22:03:36Z) - Efficient Projection-Free Algorithms for Saddle Point Problems [39.88460595129901]
We study projection-free algorithms for convex-strongly-concave saddle point problems with complicated constraints.
Our method combines Conditional Gradient Sliding with Mirror-Prox; we show that it requires only $\widetilde{O}(1/\sqrt{\epsilon})$ gradient evaluations and $\widetilde{O}(1/\epsilon^2)$ linear optimizations in the batch setting.
arXiv Detail & Related papers (2020-10-21T15:05:53Z)