Riemannian accelerated gradient methods via extrapolation
- URL: http://arxiv.org/abs/2208.06619v1
- Date: Sat, 13 Aug 2022 10:31:09 GMT
- Title: Riemannian accelerated gradient methods via extrapolation
- Authors: Andi Han, Bamdev Mishra, Pratik Jawanpuria, Junbin Gao
- Abstract summary: We show that when the iterates are generated by the Riemannian gradient descent method, the accelerated scheme achieves the optimal convergence rate asymptotically.
Our experiments verify the practical benefit of the novel acceleration strategy.
- Score: 40.23168342389821
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a simple acceleration scheme for Riemannian
gradient methods by extrapolating iterates on manifolds. We show that when the
iterates are generated by the Riemannian gradient descent method, the accelerated
scheme achieves the optimal convergence rate asymptotically and is
computationally more favorable than the recently proposed Riemannian Nesterov
accelerated gradient methods. Our experiments verify the practical benefit of
the novel acceleration strategy.
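To give a concrete feel for the idea of extrapolating iterates on a manifold, the snippet below is a minimal sketch, not the authors' exact scheme. It assumes the unit sphere with the projection retraction, a Rayleigh-quotient objective f(x) = x^T A x, and a simple two-point extrapolation of consecutive iterates; the names `riemannian_grad`, `retract`, `rgd_with_extrapolation` and the parameters `eta`, `gamma` are illustrative choices, not taken from the paper.

```python
# Minimal sketch (not the paper's exact scheme): Riemannian gradient descent on
# the unit sphere with a simple two-point extrapolation of consecutive iterates.
# Assumptions: objective f(x) = x^T A x (Rayleigh quotient), projection
# retraction, hand-picked step size `eta` and extrapolation weight `gamma`.
import numpy as np

def riemannian_grad(A, x):
    # Euclidean gradient 2*A*x projected onto the tangent space of the sphere at x
    egrad = 2.0 * A @ x
    return egrad - (x @ egrad) * x

def retract(x, v):
    # Projection retraction: step in the tangent direction, renormalize onto the sphere
    y = x + v
    return y / np.linalg.norm(y)

def rgd_with_extrapolation(A, x0, eta=0.05, gamma=0.3, iters=500):
    x_prev, x = x0, x0
    for _ in range(iters):
        x_new = retract(x, -eta * riemannian_grad(A, x))   # plain RGD step
        disp = x_new - x_prev                              # displacement between iterates
        disp = disp - (x_new @ disp) * x_new               # project onto tangent space at x_new
        x_prev, x = x_new, retract(x_new, gamma * disp)    # extrapolate along the displacement
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((20, 20))
A = (M + M.T) / 2.0                      # symmetric test matrix
x0 = rng.standard_normal(20)
x0 /= np.linalg.norm(x0)
x_final = rgd_with_extrapolation(A, x0)
print(x_final @ A @ x_final)             # should approach the smallest eigenvalue of A
```

The abstract indicates that the paper's scheme extrapolates the Riemannian gradient descent iterates so that the resulting sequence attains the optimal rate asymptotically; the momentum-style step above only conveys the general flavor of reusing previous iterates on the manifold.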
Related papers
- Incremental Quasi-Newton Methods with Faster Superlinear Convergence
Rates [50.36933471975506]
We consider the finite-sum optimization problem, where each component function is strongly convex and has Lipschitz continuous gradient and Hessian.
The recently proposed incremental quasi-Newton method is based on the BFGS update and achieves a local superlinear convergence rate.
This paper proposes a more efficient quasi-Newton method by incorporating the symmetric rank-1 update into the incremental framework.
arXiv Detail & Related papers (2024-02-04T05:54:51Z) - Acceleration and Implicit Regularization in Gaussian Phase Retrieval [5.484345596034159]
In this setting, we prove that methods with Polyak or Nesterov momentum have an implicit regularization similar to that of gradient descent, which keeps the iterates in a benign region where the cost function is strongly convex and smooth.
Experimental evidence shows that these methods converge faster than gradient descent in practice.
arXiv Detail & Related papers (2023-11-21T04:10:03Z) - A Variational Perspective on High-Resolution ODEs [10.036727981085223]
We consider unconstrained minimization of convex functions using a forced Euler-Lagrange equation.
We obtain a faster convergence rate for gradient norm minimization using Nesterov's accelerated gradient method.
arXiv Detail & Related papers (2023-11-03T16:00:40Z) - Stochastic Gradient Descent for Gaussian Processes Done Right [86.83678041846971]
We show that when done right -- by which we mean using specific insights from the optimisation and kernel communities -- gradient descent is highly effective.
We introduce a stochastic dual descent algorithm, explain its design in an intuitive manner and illustrate the design choices.
Our method places Gaussian process regression on par with state-of-the-art graph neural networks for molecular binding affinity prediction.
arXiv Detail & Related papers (2023-10-31T16:15:13Z) - Accelerated gradient methods for nonconvex optimization: Escape
trajectories from strict saddle points and convergence to local minima [9.66353475649122]
This paper considers the problem of understanding the behavior of a general class of accelerated gradient methods on smooth nonconvex functions.
It shows that Nesterov's accelerated gradient method (NAG) with variable momentum avoids strict saddle points almost surely.
arXiv Detail & Related papers (2023-07-13T19:11:07Z) - A Riemannian Accelerated Proximal Extragradient Framework and its
Implications [52.31775617527208]
We revisit the Accelerated Hybrid Proximal Extragradient (A-HPE) method of Monteiro and Svaiter (2013), a powerful framework for obtaining accelerated Euclidean methods.
arXiv Detail & Related papers (2021-11-04T11:32:20Z) - Acceleration Methods [57.202881673406324]
We first use quadratic optimization problems to introduce two key families of acceleration methods.
We discuss momentum methods in detail, starting with the seminal work of Nesterov.
We conclude by discussing restart schemes, a set of simple techniques for reaching nearly optimal convergence rates (a minimal sketch of momentum with restarts appears after this list).
arXiv Detail & Related papers (2021-01-23T17:58:25Z) - IDEAL: Inexact DEcentralized Accelerated Augmented Lagrangian Method [64.15649345392822]
We introduce a framework for designing primal methods under the decentralized optimization setting where local functions are smooth and strongly convex.
Our approach consists of approximately solving a sequence of sub-problems induced by the accelerated augmented Lagrangian method.
When coupled with accelerated gradient descent, our framework yields a novel primal algorithm whose convergence rate is optimal and matched by recently derived lower bounds.
arXiv Detail & Related papers (2020-06-11T18:49:06Z)
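The "Acceleration Methods" survey entry above mentions momentum methods and restart schemes; as referenced there, the following is a minimal sketch of Nesterov's accelerated gradient with a function-value restart on a toy quadratic. The objective, step size, and restart rule are illustrative assumptions, not taken from any paper in this list.

```python
# Minimal sketch of Nesterov's accelerated gradient with a function-value restart.
# The quadratic objective, step size (1/L), and restart rule are illustrative choices.
import numpy as np

def nesterov_with_restart(f, grad, x0, step, iters=500):
    x, y, t = x0.copy(), x0.copy(), 1.0
    f_prev = f(x)
    for _ in range(iters):
        x_new = y - step * grad(y)                       # gradient step at the look-ahead point
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)    # momentum / extrapolation step
        x, t = x_new, t_new
        f_cur = f(x)
        if f_cur > f_prev:                               # restart: reset momentum when f increases
            y, t = x.copy(), 1.0
        f_prev = f_cur
    return x

Q = np.diag(np.linspace(1.0, 100.0, 50))                 # toy ill-conditioned quadratic
f = lambda x: 0.5 * x @ (Q @ x)
grad = lambda x: Q @ x
x_final = nesterov_with_restart(f, grad, np.ones(50), step=1.0 / 100.0)
print(f(x_final))                                        # should be close to 0
```

Restarting whenever the objective increases is one of the simple heuristics such surveys refer to; gradient-based restart conditions are a common alternative.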