Second-order Conditional Gradient Sliding
- URL: http://arxiv.org/abs/2002.08907v3
- Date: Mon, 28 Aug 2023 21:10:38 GMT
- Title: Second-order Conditional Gradient Sliding
- Authors: Alejandro Carderera and Sebastian Pokutta
- Abstract summary: We present the \emph{Second-Order Conditional Gradient Sliding} (SOCGS) algorithm.
The SOCGS algorithm converges quadratically in primal gap after a finite number of linearly convergent iterations.
It is useful when the feasible region can only be accessed efficiently through a linear optimization oracle.
- Score: 79.66739383117232
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Constrained second-order convex optimization algorithms are the method of
choice when a high accuracy solution to a problem is needed, due to their local
quadratic convergence. These algorithms require the solution of a constrained
quadratic subproblem at every iteration. We present the \emph{Second-Order
Conditional Gradient Sliding} (SOCGS) algorithm, which uses a projection-free
algorithm to solve the constrained quadratic subproblems inexactly. When the
feasible region is a polytope the algorithm converges quadratically in primal
gap after a finite number of linearly convergent iterations. Once in the
quadratic regime the SOCGS algorithm requires $\mathcal{O}(\log(\log
1/\varepsilon))$ first-order and Hessian oracle calls and $\mathcal{O}(\log
(1/\varepsilon) \log(\log 1/\varepsilon))$ linear minimization oracle calls to
achieve an $\varepsilon$-optimal solution. This algorithm is useful when the
feasible region can only be accessed efficiently through a linear optimization
oracle, and computing first-order information of the function, although
possible, is costly.
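To make the mechanism concrete, here is a minimal Python sketch of the core idea: an outer Newton-type loop whose constrained quadratic subproblem is solved inexactly by vanilla Frank-Wolfe using only the linear minimization oracle (LMO). This is an illustration under simplifying assumptions, not the paper's exact method: it omits the ingredients SOCGS needs for its guarantees (lazified oracle calls and the inexactness criterion tied to the primal gap), and all names are illustrative.

```python
import numpy as np

def fw_quadratic_subproblem(grad, hess, x, lmo, steps=50):
    """Approximately minimize the local quadratic model
        m(y) = <grad, y - x> + 0.5 (y - x)^T hess (y - x)
    over the feasible region using only a linear minimization
    oracle (LMO): vanilla Frank-Wolfe, hence projection-free."""
    y = x.copy()
    for t in range(steps):
        g = grad + hess @ (y - x)   # gradient of the quadratic model at y
        v = lmo(g)                  # vertex minimizing <g, v> over the polytope
        gamma = 2.0 / (t + 2)       # standard open-loop Frank-Wolfe step size
        y = y + gamma * (v - y)
    return y

def socgs_sketch(f_grad, f_hess, x0, lmo, outer_iters=20):
    """Outer loop: build a second-order model at the current iterate and
    solve it inexactly with the projection-free subroutine above."""
    x = x0
    for _ in range(outer_iters):
        x = fw_quadratic_subproblem(f_grad(x), f_hess(x), x, lmo)
    return x

# Toy usage: minimize ||x - b||^2 over the probability simplex.
b = np.array([0.8, 0.5, -0.2])
f_grad = lambda x: 2.0 * (x - b)
f_hess = lambda x: 2.0 * np.eye(3)
lmo = lambda g: np.eye(3)[np.argmin(g)]  # simplex LMO: all mass on best vertex
x_opt = socgs_sketch(f_grad, f_hess, np.ones(3) / 3, lmo)
```

The point of the construction is that each outer iteration touches the feasible region only through `lmo`, so no projection onto the polytope is ever computed.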
Related papers
- Obtaining Lower Query Complexities through Lightweight Zeroth-Order Proximal Gradient Algorithms [65.42376001308064]
We propose two variance-reduced zeroth-order (ZO) gradient estimators for complex gradient problems.
We improve the state-of-the-art function query complexities from $\mathcal{O}\left(\min\left\{\frac{d n^{1/2}}{\epsilon^2}, \frac{d}{\epsilon^3}\right\}\right)$ to $\tilde{\mathcal{O}}\left(\frac{d}{\epsilon^2}\right)$.
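As background for what a ZO estimator is (this is the classical two-point random-direction estimator, not the paper's variance-reduced construction; all names are illustrative), a minimal sketch:

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4, num_dirs=10, rng=np.random.default_rng(0)):
    """Classical two-point zeroth-order gradient estimator: average of
    directional finite differences along random Gaussian directions,
    using only function evaluations (no gradients)."""
    d = x.shape[0]
    g = np.zeros(d)
    for _ in range(num_dirs):
        u = rng.standard_normal(d)            # random direction
        g += (f(x + mu * u) - f(x)) / mu * u  # forward-difference estimate
    return g / num_dirs

# Usage: one zeroth-order gradient-descent step on a quadratic.
f = lambda x: np.sum((x - 1.0) ** 2)
x = np.zeros(3)
x -= 0.1 * zo_gradient(f, x)
```

Estimators of this form have variance that grows with the dimension $d$; reducing that variance is what drives the improved query complexity above.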
arXiv Detail & Related papers (2024-10-03T15:04:01Z)
- A Fully Parameter-Free Second-Order Algorithm for Convex-Concave Minimax Problems with Optimal Iteration Complexity [2.815239177328595]
We propose a Lipschitz-free cubic regularization (LF-CR) algorithm for solving the convex-concave minimax optimization problem.
We also propose a fully parameter-free cubic regularization (FF-CR) algorithm that does not require any parameters of the problem.
To the best of our knowledge, the proposed FF-CR algorithm is the first completely parameter-free second-order algorithm for solving convex-concave minimax optimization problems.
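For context, the classical (Nesterov-Polyak) cubic-regularized Newton step requires the Hessian Lipschitz constant $M$ as input; removing the need for such parameters is exactly the point of the LF-CR and FF-CR algorithms. A minimal sketch of the classical step, with an illustrative gradient-descent solver for the model (not the paper's method):

```python
import numpy as np

def cubic_reg_step(grad, hess, M, iters=200, lr=0.01):
    """Solve the cubic-regularized model
        min_s  <grad, s> + 0.5 s^T hess s + (M / 6) ||s||^3
    by plain gradient descent on the model (a simple but valid solver).
    Note the step depends on the parameter M, which must be known."""
    s = np.zeros_like(grad)
    for _ in range(iters):
        # gradient of the model: grad + hess s + (M/2) ||s|| s
        g_model = grad + hess @ s + 0.5 * M * np.linalg.norm(s) * s
        s -= lr * g_model
    return s

# Usage: x_next = x + cubic_reg_step(f_grad(x), f_hess(x), M=10.0)
```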
arXiv Detail & Related papers (2024-07-04T01:46:07Z)
- A quantum central path algorithm for linear optimization [5.450016817940232]
We propose a novel quantum algorithm for solving linear optimization problems by quantum-mechanical simulation of the central path.
This approach yields an algorithm for solving linear optimization problems involving $m$ constraints and $n$ variables to $\varepsilon$-optimality.
In the standard gate model (i.e., without access to quantum RAM), our algorithm can obtain highly precise solutions to LO problems using at most $\mathcal{O}\left(\sqrt{m+n}\,\textsf{nnz}(A)\,\frac{R_1}{\cdots}\right)$ …
arXiv Detail & Related papers (2023-11-07T13:26:20Z)
- Near-Optimal Nonconvex-Strongly-Convex Bilevel Optimization with Fully First-Order Oracles [14.697733592222658]
We show that a first-order method can converge at the near-optimal $\tilde{\mathcal{O}}(\epsilon^{-2})$ rate of second-order methods.
Our analysis further leads to simple first-order algorithms that achieve similar convergence rates for finding second-order stationary points.
arXiv Detail & Related papers (2023-06-26T17:07:54Z)
- Linear Query Approximation Algorithms for Non-monotone Submodular Maximization under Knapsack Constraint [16.02833173359407]
This work introduces two constant-factor approximation algorithms with linear query complexity for non-monotone submodular maximization over a ground set of size $n$ subject to a knapsack constraint.
$\mathsf{DLA}$ is a deterministic algorithm that provides an approximation factor of $6+\epsilon$, while $\mathsf{RLA}$ is a randomized algorithm with an approximation factor of $4+\epsilon$.
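For intuition, a plain cost-density greedy for the knapsack constraint is sketched below; it is only a building block, and the constant-factor guarantees for non-monotone objectives require the additional machinery of $\mathsf{DLA}$/$\mathsf{RLA}$. All names are illustrative.

```python
def density_greedy(gain, cost, budget, ground_set):
    """Cost-density greedy under a knapsack constraint: repeatedly add the
    element with the best marginal-gain-to-cost ratio that still fits."""
    S, spent = set(), 0.0
    while True:
        best, best_ratio = None, 0.0
        for e in ground_set - S:
            if spent + cost[e] <= budget:
                ratio = gain(S, e) / cost[e]  # marginal gain per unit cost
                if ratio > best_ratio:
                    best, best_ratio = e, ratio
        if best is None:
            return S
        S.add(best)
        spent += cost[best]

# Toy usage: weighted coverage objective with heterogeneous costs.
cover = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c"}}
def gain(S, e):
    covered = set().union(*(cover[s] for s in S)) if S else set()
    return len(cover[e] - covered)
cost = {1: 1.0, 2: 1.5, 3: 0.5}
S = density_greedy(gain, cost, budget=2.0, ground_set={1, 2, 3})
```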
arXiv Detail & Related papers (2023-05-17T15:27:33Z)
- A Newton-CG based barrier-augmented Lagrangian method for general nonconvex conic optimization [53.044526424637866]
In this paper we consider finding an approximate second-order stationary point (SOSP) of general nonconvex conic optimization, that is, minimizing a twice differentiable function subject to conic constraints.
In particular, we propose a Newton-CG based barrier-augmented Lagrangian method for finding an approximate SOSP.
arXiv Detail & Related papers (2023-01-10T20:43:29Z)
- Explicit Second-Order Min-Max Optimization Methods with Optimal Convergence Guarantee [86.05440220344755]
We propose and analyze inexact regularized Newton-type methods for finding a global saddle point of \emph{convex-concave} unconstrained min-max optimization problems.
We show that the proposed methods generate iterates that remain within a bounded set and that the iterates converge to an $\epsilon$-saddle point within $O(\epsilon^{-2/3})$ iterations in terms of a restricted gap function.
arXiv Detail & Related papers (2022-10-23T21:24:37Z)
- A Projection-free Algorithm for Constrained Stochastic Multi-level Composition Optimization [12.096252285460814]
We propose a projection-free conditional gradient-type algorithm for constrained stochastic multi-level composition optimization.
We show that the number of calls to the stochastic first-order oracle and the linear-minimization oracle required by the proposed algorithm are of order $\mathcal{O}_T(\epsilon^{-2})$ and $\mathcal{O}_T(\epsilon^{-3})$, respectively.
arXiv Detail & Related papers (2022-02-09T06:05:38Z)
- A Momentum-Assisted Single-Timescale Stochastic Approximation Algorithm for Bilevel Optimization [112.59170319105971]
We propose a new algorithm -- the Momentum-assisted Single-Timescale Stochastic Approximation (MSTSA) algorithm -- for tackling bilevel optimization problems.
MSTSA allows us to control the error in its iterates caused by inexact solutions of the lower-level subproblem.
arXiv Detail & Related papers (2021-02-15T07:10:33Z)
- Private Stochastic Convex Optimization: Optimal Rates in Linear Time [74.47681868973598]
We study the problem of minimizing the population loss given i.i.d. samples from a distribution over convex loss functions.
A recent work of Bassily et al. has established the optimal bound on the excess population loss achievable given $n$ samples.
We describe two new techniques for deriving differentially private convex optimization algorithms, both achieving the optimal bound on excess loss while using $O(\min\{n, n^2/d\})$ gradient computations.
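For orientation, the standard baseline in this setting is noisy clipped gradient descent (the Gaussian mechanism applied to gradients); the paper's contribution is reaching the optimal excess loss with only $O(\min\{n, n^2/d\})$ gradient computations. A minimal sketch of the baseline, not the paper's techniques (all names and parameters are illustrative):

```python
import numpy as np

def noisy_gd(grads, x0, lr, clip, sigma, steps, rng=np.random.default_rng(0)):
    """Baseline private SGD: clip each per-sample gradient to L2 norm
    `clip`, then add Gaussian noise calibrated to that sensitivity."""
    x = x0
    for _ in range(steps):
        g = grads(x)                                # per-sample grads, shape (n, d)
        norms = np.linalg.norm(g, axis=1, keepdims=True)
        g = g / np.maximum(1.0, norms / clip)       # clip each row to norm <= clip
        noise = rng.standard_normal(x.shape) * sigma * clip
        x = x - lr * (g.mean(axis=0) + noise / len(g))
    return x

# Toy usage: per-sample gradients of squared loss on random data.
rng = np.random.default_rng(1)
X, y = rng.standard_normal((100, 5)), rng.standard_normal(100)
grads = lambda w: 2 * (X @ w - y)[:, None] * X      # shape (100, 5)
w_priv = noisy_gd(grads, np.zeros(5), lr=0.05, clip=1.0, sigma=1.0, steps=50)
```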
arXiv Detail & Related papers (2020-05-10T19:52:03Z)