Randomized Gradient Descents on Riemannian Manifolds: Almost Sure Convergence to Global Minima in and beyond Quantum Optimization
- URL: http://arxiv.org/abs/2405.12039v1
- Date: Mon, 20 May 2024 14:06:45 GMT
- Title: Randomized Gradient Descents on Riemannian Manifolds: Almost Sure Convergence to Global Minima in and beyond Quantum Optimization
- Authors: Emanuel Malvetti, Christian Arenz, Gunther Dirr, Thomas Schulte-Herbrüggen
- Abstract summary: We study randomization of the tangent space directions of gradient flows for minimizing smooth cost functions.
We prove that convergence to local optima can be obtained almost surely despite the existence of saddle points.
We discuss the time required by the algorithm to pass a saddle point in a simple two-dimensional setting.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We analyze the convergence properties of gradient descent algorithms on Riemannian manifolds. We study randomization of the tangent space directions of Riemannian gradient flows for minimizing smooth cost functions (of Morse--Bott type) to obtain convergence to local optima. We prove that through randomly projecting Riemannian gradients according to the Haar measure, convergence to local optima can be obtained almost surely despite the existence of saddle points. As an application we consider ground state preparation through quantum optimization over the unitary group. In this setting one can efficiently approximate the Haar-random projections by implementing unitary 2-designs on quantum computers. We prove that the respective algorithm almost surely converges to the global minimum that corresponds to the ground state of a desired Hamiltonian. Finally, we discuss the time required by the algorithm to pass a saddle point in a simple two-dimensional setting.
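As a concrete illustration of the procedure summarized above, the following is a minimal numerical sketch (an assumption-laden stand-in, not the authors' implementation) of randomized Riemannian gradient descent on the unitary group for ground-state preparation. At every step the Riemannian gradient of $f(U) = \langle \psi_0 | U^\dagger H U | \psi_0 \rangle$ is projected onto a single randomly drawn tangent direction before the retraction. A Gaussian skew-Hermitian direction is used here in place of the Haar-random projections / unitary 2-designs discussed in the abstract, and the Hamiltonian, step size, and iteration count are arbitrary illustrative choices.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def random_tangent_direction(n):
    """Normalized skew-Hermitian matrix: a random direction in the tangent
    space of U(n) at the identity (illustrative stand-in for a Haar-random
    projection / 2-design)."""
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    S = (A - A.conj().T) / 2.0
    return S / np.linalg.norm(S)

def randomized_descent(H, psi0, steps=5000, eta=0.1):
    """Randomized Riemannian gradient descent for f(U) = <psi0|U^dag H U|psi0>."""
    n = H.shape[0]
    U = np.eye(n, dtype=complex)
    rho0 = np.outer(psi0, psi0.conj())
    for _ in range(steps):
        G = H @ U @ rho0                                   # Euclidean gradient w.r.t. conj(U)
        Omega = (U.conj().T @ G - G.conj().T @ U) / 2.0    # Riemannian gradient is U @ Omega
        D = random_tangent_direction(n)
        coeff = np.real(np.trace(D.conj().T @ Omega))      # component of the gradient along D
        U = U @ expm(-eta * coeff * D)                     # retraction via the matrix exponential
    return U

# Toy example: random Hermitian "Hamiltonian" on a 4-dimensional space.
n = 4
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + A.conj().T) / 2.0
psi0 = np.zeros(n, dtype=complex)
psi0[0] = 1.0
U = randomized_descent(H, psi0)
energy = np.real(psi0.conj() @ U.conj().T @ H @ U @ psi0)
print("final energy:", energy, " ground-state energy:", np.min(np.linalg.eigvalsh(H)))
```

Because the random projection preserves the descent direction only in expectation, individual runs fluctuate, but the printed energy should approach the smallest eigenvalue of H. The saddle points of this cost correspond to excited eigenstates, which the randomization is meant to help escape, in line with the almost-sure convergence claim of the abstract.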
Related papers
- Riemannian Bilevel Optimization [35.42472057648458]
We focus in particular on batch and gradient-based methods, with the explicit goal of avoiding second-order information.
We propose and analyze $\mathrm{RF^2SA}$, a method that leverages first-order gradient information.
We provide explicit convergence rates for reaching $\epsilon$-stationary points under various setups.
arXiv Detail & Related papers (2024-05-22T20:49:01Z)
- Riemannian stochastic optimization methods avoid strict saddle points [68.80251170757647]
We show that policies under study avoid strict saddle points / submanifolds with probability 1.
This result provides an important sanity check as it shows that, almost always, the limit state of an algorithm can only be a local minimizer.
arXiv Detail & Related papers (2023-11-04T11:12:24Z)
- Zeroth-order Riemannian Averaging Stochastic Approximation Algorithms [19.99781875916751]
We show that \texttt{Zo-RASA} achieves optimal sample complexities for generating $\epsilon$-approximate first-order stationary solutions.
We improve the algorithm's practicality by using retractions and vector transports instead of exponential mappings and parallel transports.
arXiv Detail & Related papers (2023-09-25T20:13:36Z)
- Curvature-Independent Last-Iterate Convergence for Games on Riemannian Manifolds [77.4346324549323]
We show that a step size agnostic to the curvature of the manifold achieves a curvature-independent and linear last-iterate convergence rate.
To the best of our knowledge, the possibility of curvature-independent rates and/or last-iterate convergence has not been considered before.
arXiv Detail & Related papers (2023-06-29T01:20:44Z)
- Sample Complexity for Quadratic Bandits: Hessian Dependent Bounds and Optimal Algorithms [64.10576998630981]
We show the first tight characterization of the optimal Hessian-dependent sample complexity.
A Hessian-independent algorithm universally achieves the optimal sample complexities for all Hessian instances.
The optimal sample complexities achieved by our algorithm remain valid for heavy-tailed noise distributions.
arXiv Detail & Related papers (2023-06-21T17:03:22Z)
- Min-Max Optimization Made Simple: Approximating the Proximal Point Method via Contraction Maps [77.8999425439444]
We present a first-order method that admits near-optimal convergence rates for convex/concave min-max problems.
Our work is based on the fact that the update rule of the Proximal Point method can be approximated to within any desired accuracy.
arXiv Detail & Related papers (2023-01-10T12:18:47Z)
- First-Order Algorithms for Min-Max Optimization in Geodesic Metric Spaces [93.35384756718868]
Min-max algorithms have so far been analyzed primarily in the Euclidean setting.
We prove that the Riemannian corrected extragradient (RCEG) method achieves last-iterate convergence at a linear rate.
arXiv Detail & Related papers (2022-06-04T18:53:44Z)
- Convergence of adaptive algorithms for weakly convex constrained optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first- and second-order moment parameters, and possibly unbounded optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z)
- Stochastic Zeroth-order Riemannian Derivative Estimation and Optimization [15.78743548731191]
We propose an oracle version of the Gaussian smoothing function to overcome the difficulty posed by the non-linearity of the manifold (a schematic zeroth-order estimator is sketched after this list).
We demonstrate the applicability of our algorithms through numerical results and real-world applications: black-box stiffness control for robotics and black-box attacks on neural networks.
arXiv Detail & Related papers (2020-03-25T06:58:19Z)
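Relating to the last entry above (zeroth-order Riemannian optimization), here is a schematic sketch of a zeroth-order Riemannian step on the unit sphere: a two-point Gaussian-smoothing estimate of the gradient along one random tangent direction, followed by a projection retraction. This is a generic illustration under simplifying assumptions (sphere manifold, projection retraction, hand-picked smoothing radius and step size), not the oracle construction from that paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def tangent_gaussian(x):
    """Standard Gaussian vector projected onto the tangent space of the
    unit sphere at x (the component along x is removed)."""
    u = rng.normal(size=x.shape)
    return u - (u @ x) * x

def retract(x, v):
    """Projection retraction: move in the tangent direction, then renormalize."""
    y = x + v
    return y / np.linalg.norm(y)

def zo_step(f, x, mu=1e-4, eta=5e-3):
    """One zeroth-order descent step: a two-point Gaussian-smoothing estimate
    of the Riemannian gradient along a single random tangent direction."""
    u = tangent_gaussian(x)
    g_est = (f(retract(x, mu * u)) - f(x)) / mu * u   # gradient estimate from function values only
    return retract(x, -eta * g_est)

# Toy example: minimize x^T A x on the unit sphere; the optimum is the
# smallest eigenvalue of A, attained at the corresponding unit eigenvector.
n = 10
B = rng.normal(size=(n, n))
A = (B + B.T) / 2.0
f = lambda x: x @ A @ x
x = np.ones(n) / np.sqrt(n)
for _ in range(20000):
    x = zo_step(f, x)
print("final value:", f(x), " lambda_min:", np.min(np.linalg.eigvalsh(A)))
```

Only function values of f are queried, which is the defining feature of the zeroth-order setting; with the step size fixed, the printed value should settle near the smallest eigenvalue of A rather than converge to it exactly.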