Incremental Quasi-Newton Methods with Faster Superlinear Convergence Rates
- URL: http://arxiv.org/abs/2402.02359v1
- Date: Sun, 4 Feb 2024 05:54:51 GMT
- Title: Incremental Quasi-Newton Methods with Faster Superlinear Convergence Rates
- Authors: Zhuanghua Liu and Luo Luo and Bryan Kian Hsiang Low
- Abstract summary: We consider the finite-sum optimization problem, where each component function is strongly convex and has Lipschitz continuous gradient and Hessian.
The recently proposed incremental quasi-Newton method is based on the BFGS update and achieves a local superlinear convergence rate.
This paper proposes a more efficient quasi-Newton method by incorporating the symmetric rank-1 update into the incremental framework.
- Score: 50.36933471975506
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the finite-sum optimization problem, where each component
function is strongly convex and has Lipschitz continuous gradient and Hessian.
The recently proposed incremental quasi-Newton method is based on the BFGS update
and achieves a local superlinear convergence rate that depends on the condition
number of the problem. This paper proposes a more efficient quasi-Newton method
by incorporating the symmetric rank-1 update into the incremental framework,
which yields a condition-number-free local superlinear convergence rate.
Furthermore, we can boost our method by applying a block update to the Hessian
approximation, which leads to an even faster local convergence rate. The numerical
experiments show that the proposed methods significantly outperform the baseline
methods.
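To make the incremental framework concrete, below is a minimal Python sketch of a cyclic incremental quasi-Newton loop in which each component's Hessian approximation is refreshed with the textbook symmetric rank-1 (SR1) correction $B^{+} = B + \frac{(y - Bs)(y - Bs)^{\top}}{(y - Bs)^{\top} s}$. This is an illustration of the general idea, not the paper's exact algorithm; the names grad_i, n, x0, the identity initialization, and the safeguard threshold are assumptions made for the example.

```python
# Minimal sketch (assumed interface: grad_i(i, x) returns the gradient of the
# i-th component at x), not the paper's exact method: a cyclic incremental
# quasi-Newton loop where each per-component Hessian approximation B_i is
# refreshed with a symmetric rank-1 (SR1) update.
import numpy as np

def sr1_update(B, s, y, tol=1e-8):
    """Textbook SR1 update: B + (y - B s)(y - B s)^T / ((y - B s)^T s)."""
    r = y - B @ s
    denom = r @ s
    # Standard safeguard: skip the update when the denominator is tiny.
    if abs(denom) < tol * np.linalg.norm(r) * np.linalg.norm(s):
        return B
    return B + np.outer(r, r) / denom

def incremental_qn_sr1(grad_i, x0, n, num_epochs=50):
    """Cyclic incremental quasi-Newton iteration with per-component SR1 updates."""
    d = x0.shape[0]
    x = x0.copy()
    z = np.tile(x0, (n, 1))                           # point stored for each component
    B = np.stack([np.eye(d) for _ in range(n)])       # per-component Hessian approximations
    g = np.stack([grad_i(i, x0) for i in range(n)])   # stored component gradients
    for _ in range(num_epochs):
        for i in range(n):
            # Aggregated quasi-Newton step over all stored surrogates:
            #   x = (sum_j B_j)^{-1} * sum_j (B_j z_j - g_j)
            B_sum = B.sum(axis=0)
            rhs = np.einsum('jab,jb->a', B, z) - g.sum(axis=0)
            x = np.linalg.solve(B_sum, rhs)
            # Refresh component i at the new point via an SR1 correction.
            s = x - z[i]
            y_new = grad_i(i, x)
            B[i] = sr1_update(B[i], s, y_new - g[i])
            z[i], g[i] = x, y_new
    return x
```

The block-update variant mentioned in the abstract can be read as replacing the rank-1 correction with a higher-rank (block) one applied to each $B_i$; that refinement, and the condition-number-free rate analysis, are beyond this sketch.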
Related papers
- Online Learning Guided Quasi-Newton Methods with Global Non-Asymptotic Convergence [20.766358513158206]
We prove a global convergence rate of $O(\min\{1/k,\ \sqrt{d}/k^{1.25}\})$ in terms of the duality gap.
These are the first global convergence results to demonstrate a provable advantage of a quasi-Newton method over the extragradient method.
arXiv Detail & Related papers (2024-10-03T16:08:16Z)
- Incremental Gauss--Newton Methods with Superlinear Convergence Rates [16.92437325972209]
We introduce a novel Incremental Gauss--Newton (IGN) method with an explicit superlinear convergence rate.
In particular, we formulate our problem as a nonlinear least-squares problem with finite-sum structure.
We also provide a mini-batch extension to our IGN method that obtains an even faster superlinear convergence rate.
arXiv Detail & Related papers (2024-07-03T15:26:34Z)
- Stochastic Newton Proximal Extragradient Method [18.47705532817026]
We propose a novel stochastic Newton proximal extragradient method that improves on existing convergence bounds.
We accomplish this by extending the Hybrid Proximal Extragradient (HPE) framework.
arXiv Detail & Related papers (2024-06-03T16:06:23Z)
- Online Learning Guided Curvature Approximation: A Quasi-Newton Method with Global Non-Asymptotic Superlinear Convergence [22.286753988825048]
We present the first globally convergent quasi-Newton method with an explicit non-asymptotic superlinear convergence rate.
Unlike classical quasi-Newton methods, we build our algorithm upon the hybrid proximal extragradient method.
arXiv Detail & Related papers (2023-02-16T20:58:09Z)
- Hessian Averaging in Stochastic Newton Methods Achieves Superlinear Convergence [69.65563161962245]
We consider minimizing a smooth and strongly convex objective function using a stochastic Newton method.
We show that there exists a universal weighted averaging scheme that transitions to local convergence at an optimal stage.
arXiv Detail & Related papers (2022-04-20T07:14:21Z)
- On the Convergence of Stochastic Extragradient for Bilinear Games with Restarted Iteration Averaging [96.13485146617322]
We present an analysis of the Stochastic ExtraGradient (SEG) method with a constant step size, and present variations of the method that yield favorable convergence.
We prove that, when augmented with iteration averaging, SEG converges to the Nash equilibrium, and that this rate is provably accelerated by incorporating a scheduled restarting procedure.
arXiv Detail & Related papers (2021-06-30T17:51:36Z)
- Acceleration Methods [57.202881673406324]
We first use quadratic optimization problems to introduce two key families of acceleration methods.
We discuss momentum methods in detail, starting with the seminal work of Nesterov.
We conclude by discussing restart schemes, a set of simple techniques for reaching nearly optimal convergence rates.
arXiv Detail & Related papers (2021-01-23T17:58:25Z)
- IDEAL: Inexact DEcentralized Accelerated Augmented Lagrangian Method [64.15649345392822]
We introduce a framework for designing primal methods under the decentralized optimization setting where local functions are smooth and strongly convex.
Our approach consists of approximately solving a sequence of sub-problems induced by the accelerated augmented Lagrangian method.
When coupled with accelerated gradient descent, our framework yields a novel primal algorithm whose convergence rate is optimal and matched by recently derived lower bounds.
arXiv Detail & Related papers (2020-06-11T18:49:06Z)
- Non-asymptotic Superlinear Convergence of Standard Quasi-Newton Methods [26.328847475942894]
We prove the non-asymptotic superlinear convergence rate of the Broyden class of quasi-Newton algorithms.
Our results are the first to provide a non-asymptotic superlinear convergence rate for quasi-Newton methods.
arXiv Detail & Related papers (2020-03-30T16:42:41Z)
- Complexity Guarantees for Polyak Steps with Momentum [76.97851351276165]
We study a class of methods, based on Polyak steps, where knowledge of the strong convexity parameter is substituted by that of the optimal value, $f_*$.
We first show slightly improved convergence bounds over those previously known for the classical case of simple gradient descent with Polyak steps; we then derive an accelerated gradient method with Polyak steps and momentum, along with convergence guarantees (a minimal sketch of the basic Polyak step appears below).
arXiv Detail & Related papers (2020-02-03T17:50:28Z)
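As a rough illustration of the Polyak step referenced in the last entry above, here is a minimal Python sketch of gradient descent whose step size uses the optimal value $f_*$ in place of the usual problem constants. The names f, grad_f, f_star, and x0 are assumptions for the example, and this is the plain gradient method only, not the accelerated variant with momentum.

```python
# Minimal sketch of gradient descent with the Polyak step size
#   eta_k = (f(x_k) - f_*) / ||grad f(x_k)||^2,
# which substitutes knowledge of the optimal value f_* for other problem constants.
# Assumed names: f, grad_f, f_star, x0.
import numpy as np

def polyak_gradient_descent(f, grad_f, f_star, x0, iters=1000):
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        g = grad_f(x)
        g_norm_sq = float(g @ g)
        if g_norm_sq == 0.0:                 # stationary point reached
            break
        eta = (f(x) - f_star) / g_norm_sq    # Polyak step size
        x = x - eta * g
    return x
```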