A Newton-CG based augmented Lagrangian method for finding a second-order
stationary point of nonconvex equality constrained optimization with
complexity guarantees
- URL: http://arxiv.org/abs/2301.03139v1
- Date: Mon, 9 Jan 2023 01:39:46 GMT
- Title: A Newton-CG based augmented Lagrangian method for finding a second-order
stationary point of nonconvex equality constrained optimization with
complexity guarantees
- Authors: Chuan He, Zhaosong Lu and Ting Kei Pong
- Abstract summary: We first propose a new Newton-CG method for finding an approximate SOSP of unconstrained optimization.
We then propose a Newton-CG based augmented Lagrangian method for finding an approximate SOSP of nonconvex equality constrained optimization.
Preliminary numerical results also demonstrate the superiority of our proposed methods.
- Score: 0.5013248430919224
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we consider finding a second-order stationary point (SOSP) of
nonconvex equality constrained optimization when a nearly feasible point is
known. In particular, we first propose a new Newton-CG method for finding an
approximate SOSP of unconstrained optimization and show that it enjoys a
substantially better complexity than the Newton-CG method [56]. We then propose
a Newton-CG based augmented Lagrangian (AL) method for finding an approximate
SOSP of nonconvex equality constrained optimization, in which the proposed
Newton-CG method is used as a subproblem solver. We show that under a
generalized linear independence constraint qualification (GLICQ), our AL method
enjoys a total inner iteration complexity of $\widetilde{\cal
O}(\epsilon^{-7/2})$ and an operation complexity of $\widetilde{\cal
O}(\epsilon^{-7/2}\min\{n,\epsilon^{-3/4}\})$ for finding an
$(\epsilon,\sqrt{\epsilon})$-SOSP of nonconvex equality constrained
optimization with high probability, which are significantly better than the
ones achieved by the proximal AL method [60]. Besides, we show that it has a
total inner iteration complexity of $\widetilde{\cal O}(\epsilon^{-11/2})$ and
an operation complexity of $\widetilde{\cal
O}(\epsilon^{-11/2}\min\{n,\epsilon^{-5/4}\})$ when the GLICQ does not hold. To
the best of our knowledge, all the complexity results obtained in this paper
are new for finding an approximate SOSP of nonconvex equality constrained
optimization with high probability. Preliminary numerical results also
demonstrate the superiority of our proposed methods over the ones in [56,60].
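To make the structure of the approach concrete, below is a minimal, illustrative Python sketch of an augmented Lagrangian (AL) outer loop whose subproblems are minimized by a Newton-CG method with a negative-curvature fallback. This is not the authors' algorithm: the capped-CG safeguards, randomized perturbations, and tolerance schedules that drive the stated complexity guarantees are omitted, and all names, constants (penalty growth factor 10, feasibility ratio 0.25), update rules, and the toy problem are assumptions for illustration only.

```python
import numpy as np

def cg_solve(hv, g, tol, max_iter=100):
    """CG on H d = -g using only Hessian-vector products hv(v); exits with
    the current search direction if nonpositive curvature is detected
    (a simplified stand-in for the capped-CG safeguards in the paper)."""
    d = np.zeros_like(g)
    r = g.copy()                      # residual of H d + g; starts at g
    p = -r
    rs = r @ r
    for _ in range(max_iter):
        hp = hv(p)
        curv = p @ hp
        if curv <= 1e-12 * (p @ p):   # nonpositive-curvature direction found
            return p, 'neg_curv'
        alpha = rs / curv
        d = d + alpha * p
        r = r + alpha * hp
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol:
            return d, 'sol'
        p = -r + (rs_new / rs) * p
        rs = rs_new
    return d, 'sol'

def newton_cg(f, grad, hess_vec, x, g_tol=1e-6, max_iter=200):
    """Inexact Newton: solve H d = -g by CG, fall back to a descent
    direction when the Hessian is indefinite, then Armijo backtracking."""
    for _ in range(max_iter):
        g = grad(x)
        gn = np.linalg.norm(g)
        if gn <= g_tol:
            break
        d, _ = cg_solve(lambda v: hess_vec(x, v), g, tol=0.5 * gn)
        slope = g @ d
        if slope >= 0:                # ensure descent (covers neg-curvature exits)
            d, slope = (-d, -slope) if slope > 0 else (-g, -gn ** 2)
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + 1e-4 * t * slope and t > 1e-12:
            t *= 0.5                  # Armijo backtracking
        x = x + t * d
    return x

def al_newton_cg(f, c, grad_f, jac_c, hv_L, x, lam, rho=10.0, outer=30, eps=1e-6):
    """AL outer loop: each subproblem min_x f(x) + lam.c(x) + (rho/2)||c(x)||^2
    goes to newton_cg; first-order multiplier update, rho grows if
    feasibility stalls."""
    feas = np.linalg.norm(c(x))
    for _ in range(outer):
        lam_k, rho_k = lam.copy(), rho                    # freeze subproblem data
        L = lambda z: f(z) + lam_k @ c(z) + 0.5 * rho_k * (c(z) @ c(z))
        gL = lambda z: grad_f(z) + jac_c(z).T @ (lam_k + rho_k * c(z))
        x = newton_cg(L, gL, lambda z, v: hv_L(z, lam_k, rho_k, v), x, g_tol=eps)
        lam = lam + rho * c(x)                            # multiplier update
        new_feas = np.linalg.norm(c(x))
        stat = np.linalg.norm(grad_f(x) + jac_c(x).T @ lam)
        if new_feas <= eps and stat <= eps:               # approximate KKT point
            break
        if new_feas > 0.25 * feas:                        # feasibility stalled:
            rho *= 10.0                                   # tighten the penalty
        feas = new_feas
    return x, lam

if __name__ == "__main__":
    # Toy instance: minimize the nonconvex f(x) = -x0*x1 on the unit circle
    # x0^2 + x1^2 = 1; the SOSP is x0 = x1 = 1/sqrt(2) with f = -1/2.
    f = lambda x: -x[0] * x[1]
    grad_f = lambda x: np.array([-x[1], -x[0]])
    Hf = np.array([[0.0, -1.0], [-1.0, 0.0]])
    c = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 1.0])
    jac_c = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]]])
    def hv_L(x, lam, rho, v):
        # Hessian of the AL: Hf + 2*(lam + rho*c(x))*I + rho*J^T J
        J = jac_c(x)
        return Hf @ v + 2.0 * (lam[0] + rho * c(x)[0]) * v + rho * J.T @ (J @ v)
    x, lam = al_newton_cg(f, c, grad_f, jac_c, hv_L,
                          x=np.array([0.9, 0.3]), lam=np.zeros(1))
    print("x =", x, "f(x) =", f(x), "||c(x)|| =", np.linalg.norm(c(x)))
```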
Related papers
- Self-concordant Smoothing for Large-Scale Convex Composite Optimization [0.0]
We introduce a notion of self-concordant smoothing for minimizing the sum of two convex functions, one of which is smooth and the other may be nonsmooth.
We prove the convergence of two resulting algorithms: Prox-N-SCORE, a proximal Newton algorithm, and Prox-GGN-SCORE, a proximal generalized Gauss-Newton algorithm.
arXiv Detail & Related papers (2023-09-04T19:47:04Z) - Accelerating Inexact HyperGradient Descent for Bilevel Optimization [84.00488779515206]
We present a method for solving general nonconvex-strongly-convex bilevel optimization problems.
Our results also improve upon the existing complexity for finding second-order stationary points in nonconvex-strongly-convex bilevel problems.
arXiv Detail & Related papers (2023-06-30T20:36:44Z) - A Newton-CG based barrier-augmented Lagrangian method for general nonconvex conic optimization [53.044526424637866]
In this paper we consider finding an approximate second-order stationary point (SOSP) of general nonconvex conic optimization that minimizes a twice differentiable function subject to constraints.
In particular, we propose a Newton-CG based barrier-augmented Lagrangian method for finding an approximate SOSP.
arXiv Detail & Related papers (2023-01-10T20:43:29Z) - Explicit Second-Order Min-Max Optimization Methods with Optimal Convergence Guarantee [86.05440220344755]
We propose and analyze inexact regularized Newton-type methods for finding a global saddle point of convex-concave unconstrained min-max optimization problems.
We show that the proposed methods generate iterates that remain within a bounded set and that the iterates converge to an $\epsilon$-saddle point within $O(\epsilon^{-2/3})$ iterations in terms of a restricted gap function.
arXiv Detail & Related papers (2022-10-23T21:24:37Z) - A Newton-CG based barrier method for finding a second-order stationary
point of nonconvex conic optimization with complexity guarantees [0.5922488908114022]
We consider finding an approximate second-order stationary point (SOSP) of nonconvex conic optimization that minimizes a twice differentiable function over the intersection of an affine subspace and a convex cone.
Our method is not only iteration-complexity optimal but also achieves an operation complexity of $\widetilde{\cal O}(\epsilon^{-3/2}\min\{n,\epsilon^{-1/4}\})$, which matches the best known complexity.
arXiv Detail & Related papers (2022-07-12T17:21:45Z) - Accelerated first-order methods for convex optimization with locally
Lipschitz continuous gradient [0.0]
We first consider unconstrained convex optimization with locally Lipschitz continuous gradient (LLCG) and propose accelerated proximal gradient (APG) methods for solving it.
The proposed APG methods are equipped with a verifiable termination criterion and enjoy an operation complexity of ${\cal O}(\varepsilon^{-1/2}\log \varepsilon^{-1})$.
Preliminary numerical results are presented to demonstrate the performance of our proposed methods.
arXiv Detail & Related papers (2022-06-02T10:34:26Z) - Second-Order Information in Non-Convex Stochastic Optimization: Power
and Limitations [54.42518331209581]
We design an algorithm which finds an $\epsilon$-approximate stationary point (with $\|\nabla F(x)\|\le \epsilon$) using stochastic gradient and Hessian-vector products, and study the complexity of finding $(\epsilon,\gamma)$-approximate second-order stationary points.
Our lower bounds here are novel even in the noiseless case.
arXiv Detail & Related papers (2020-06-24T04:41:43Z) - Second-order Conditional Gradient Sliding [79.66739383117232]
We present the Second-Order Conditional Gradient Sliding (SOCGS) algorithm.
The SOCGS algorithm converges quadratically in primal gap after a finite number of linearly convergent iterations.
It is useful when the feasible region can only be accessed efficiently through a linear optimization oracle.
arXiv Detail & Related papers (2020-02-20T17:52:18Z) - Complexity of Finding Stationary Points of Nonsmooth Nonconvex Functions [84.49087114959872]
We provide the first non-asymptotic analysis for finding stationary points of nonsmooth, nonconvex functions.
In particular, we study Hadamard semi-differentiable functions, perhaps the largest class of nonsmooth functions for which the chain rule of calculus holds.
arXiv Detail & Related papers (2020-02-10T23:23:04Z)