A diffusion-map-based algorithm for gradient computation on manifolds and applications
- URL: http://arxiv.org/abs/2108.06988v5
- Date: Mon, 5 Jun 2023 08:09:57 GMT
- Title: A diffusion-map-based algorithm for gradient computation on manifolds and applications
- Authors: Alvaro Almeida Gomez, Antônio J. Silva Neto, Jorge P. Zubelli
- Abstract summary: We recover the gradient of a given function defined on interior points of a Riemannian submanifold of Euclidean space.
The approach is based on estimates of the Laplace-Beltrami operator proposed in diffusion-maps theory.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We recover the Riemannian gradient of a given function defined on interior points of a Riemannian submanifold of Euclidean space from a sample of function evaluations at points in the submanifold. The approach is based on estimates of the Laplace-Beltrami operator proposed in diffusion-maps theory. The Riemannian gradient estimates involve no differential terms. Analytical convergence results for the Riemannian gradient expansion are proved. We apply the Riemannian gradient estimate in a gradient-based algorithm, obtaining a derivative-free optimization method. We test and validate several applications, including tomographic reconstruction from an unknown random angle distribution and the sphere packing problem in dimensions 2 and 3.
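A minimal numerical sketch of the kind of kernel-weighted, derivative-free gradient estimate the abstract describes is given below. The bandwidth `eps`, the Gaussian kernel normalization, the 1/(2*eps) scaling, and the unit-circle example are assumptions of this sketch, not the paper's exact formulas.

```python
import numpy as np

def gradient_estimate(x, samples, f_vals, f_x, eps):
    """Kernel-weighted, derivative-free estimate of the Riemannian gradient
    at x from function values at nearby sample points.

    Hedged sketch: the Gaussian kernel and the 1/(2*eps) scaling follow the
    usual diffusion-maps conventions, not necessarily the paper's exact
    normalization.
    """
    diffs = samples - x                                  # displacements x_j - x
    w = np.exp(-np.sum(diffs**2, axis=1) / (4.0 * eps))  # Gaussian kernel weights
    # The kernel-weighted average of displacement * function increment
    # approximates 2*eps times the gradient (up to the projection below).
    g = (w[:, None] * diffs * (f_vals - f_x)[:, None]).sum(axis=0) / w.sum()
    g /= 2.0 * eps
    # Project onto the tangent space of the unit sphere at x (example manifold).
    return g - np.dot(g, x) * x

# Example: f(x) = x[0] on the unit circle; Riemannian gradient at x is
# (1 - x0^2, -x0*x1).
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 4000)
samples = np.stack([np.cos(theta), np.sin(theta)], axis=1)
f_vals = samples[:, 0]
x = np.array([np.cos(0.3), np.sin(0.3)])
print(gradient_estimate(x, samples, f_vals, x[0], eps=1e-3))
```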
Related papers
- Generalizing Stochastic Smoothing for Differentiation and Gradient Estimation [59.86921150579892]
We deal with the problem of gradient estimation for differentiable relaxations of algorithms, operators, simulators, and other non-differentiable functions.
We develop variance reduction strategies for differentiable sorting and ranking, differentiable shortest-paths on graphs, differentiable rendering for pose estimation, as well as differentiable cryo-ET simulations.
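As a concrete illustration of variance-reduced gradient estimation for non-differentiable objectives, here is a minimal antithetic (two-point) Gaussian-smoothing estimator; it is a generic textbook construction, not the paper's specific relaxations.

```python
import numpy as np

def smoothed_grad(f, x, sigma=0.1, n=64, rng=None):
    """Two-point (antithetic) Gaussian-smoothing gradient estimator for a
    possibly non-differentiable f: estimates the gradient of E[f(x + sigma*z)].
    """
    rng = rng or np.random.default_rng()
    z = rng.standard_normal((n, x.size))
    # Antithetic pairs f(x + sigma z) - f(x - sigma z) cancel the zeroth-order
    # term and reduce variance relative to the one-point estimator.
    fwd = np.array([f(x + sigma * zi) for zi in z])
    bwd = np.array([f(x - sigma * zi) for zi in z])
    return ((fwd - bwd)[:, None] * z).mean(axis=0) / (2.0 * sigma)

# Example: a hinge-like objective, non-differentiable at 0.
f = lambda v: float(np.sum(np.maximum(v, 0.0)))
print(smoothed_grad(f, np.array([0.5, -0.3, 0.0]), sigma=0.05))  # ~ [1, 0, 0.5]
```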
arXiv Detail & Related papers (2024-10-10T17:10:00Z)
- Stochastic Modified Flows for Riemannian Stochastic Gradient Descent [0.6445605125467574]
We show that RSGD can be approximated by the solution to the RSMF driven by an infinite-dimensional Wiener process.
RSGD is built using the concept of a retraction map, that is, a cost-efficient approximation of the exponential map.
We prove quantitative bounds for the weak error of the diffusion approximation under assumptions on the retraction map, the geometry of the manifold, and the random estimators of the gradient.
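A minimal sketch of one retraction-based RSGD step, using the unit sphere and the normalization retraction as an assumed example manifold (the paper's analysis covers general retractions):

```python
import numpy as np

def rsgd_step(x, euclid_grad, lr):
    """One Riemannian SGD step on the unit sphere. The retraction here is
    metric projection (normalization), a standard cheap stand-in for the
    exponential map."""
    g_tan = euclid_grad - np.dot(euclid_grad, x) * x  # tangent-space projection
    y = x - lr * g_tan                                # step in the tangent space
    return y / np.linalg.norm(y)                      # retract back to the sphere

# Minimize f(x) = <a, x> on the sphere; the optimum is x = -a/||a||.
a = np.array([1.0, 2.0, 2.0])
x = np.array([1.0, 0.0, 0.0])
for _ in range(200):
    x = rsgd_step(x, a, lr=0.1)
print(x, -a / np.linalg.norm(a))  # the two should agree
```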
arXiv Detail & Related papers (2024-02-02T14:29:38Z)
- Decentralized Riemannian Conjugate Gradient Method on the Stiefel Manifold [59.73080197971106]
This paper presents a first-order conjugate optimization method that converges faster than the steepest descent method.
It aims to achieve global convergence over the Stiefel manifold.
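A minimal single-agent sketch of the Stiefel-manifold ingredients such a method relies on, namely tangent-space projection and a QR-based retraction; the decentralized communication and conjugate-direction updates of the paper are omitted.

```python
import numpy as np

def stiefel_proj(X, G):
    """Project an ambient gradient G onto the tangent space of the Stiefel
    manifold {X : X^T X = I} at X (embedded geometry)."""
    sym = 0.5 * (X.T @ G + G.T @ X)
    return G - X @ sym

def stiefel_retract(X, V):
    """QR-based retraction: a first-order approximation of the exponential map."""
    Q, R = np.linalg.qr(X + V)
    # Fix column signs (R with positive diagonal) so the retraction is continuous.
    return Q * np.sign(np.sign(np.diag(R)) + 0.5)

# One Riemannian descent step for f(X) = -trace(X^T A X) (leading subspace).
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5)); A = A + A.T
X, _ = np.linalg.qr(rng.standard_normal((5, 2)))
G = -2.0 * (A @ X)                       # Euclidean gradient of f
X = stiefel_retract(X, -0.1 * stiefel_proj(X, G))
print(np.round(X.T @ X, 6))              # still (numerically) orthonormal
```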
arXiv Detail & Related papers (2023-08-21T08:02:16Z)
- Curvature-Independent Last-Iterate Convergence for Games on Riemannian Manifolds [77.4346324549323]
We show that a step size agnostic to the curvature of the manifold achieves a curvature-independent and linear last-iterate convergence rate.
To the best of our knowledge, the possibility of curvature-independent rates and/or last-iterate convergence has not been considered before.
arXiv Detail & Related papers (2023-06-29T01:20:44Z)
- First-Order Algorithms for Min-Max Optimization in Geodesic Metric Spaces [93.35384756718868]
Min-max algorithms have mainly been analyzed in the Euclidean setting.
We prove that the Riemannian corrected extragradient (RCEG) method achieves last-iterate convergence at a linear rate.
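For illustration, here is a minimal Euclidean sketch of the extragradient scheme that RCEG adapts to Riemannian geometry, on a bilinear saddle-point problem; the retractions and the Riemannian correction are omitted.

```python
def extragradient_step(x, y, grad_x, grad_y, lr):
    """One extragradient step for min_x max_y f(x, y): take an exploration
    step, then update from the gradients evaluated at the midpoint."""
    xh, yh = x - lr * grad_x(x, y), y + lr * grad_y(x, y)    # exploration step
    return x - lr * grad_x(xh, yh), y + lr * grad_y(xh, yh)  # corrected step

# Bilinear game f(x, y) = x*y: plain gradient descent-ascent spirals,
# while extragradient converges to the saddle point (0, 0).
gx = lambda x, y: y   # df/dx
gy = lambda x, y: x   # df/dy
x, y = 1.0, 1.0
for _ in range(300):
    x, y = extragradient_step(x, y, gx, gy, lr=0.3)
print(x, y)  # -> approaches (0, 0)
```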
arXiv Detail & Related papers (2022-06-04T18:53:44Z)
- A Riemannian smoothing steepest descent method for non-Lipschitz optimization on submanifolds [15.994495285950293]
A smoothing steepest descent method is proposed to minimize a non-Lipschitz function on submanifolds.
We prove that any accumulation point of the sequence generated by the smoothing method is a limiting stationary point of the original non-Lipschitz problem.
Numerical experiments are conducted to demonstrate the efficiency of the proposed method.
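A minimal sketch of the smoothing idea: replace the non-Lipschitz term |x_i|^p by the smooth surrogate (x_i^2 + mu^2)^(p/2), run Riemannian steepest descent on the surrogate, and shrink mu. The sphere, step size, and mu schedule below are illustrative assumptions, not the paper's scheme.

```python
import numpy as np

def smoothed_lp_grad(x, p, mu):
    """Gradient of the smooth surrogate (x_i^2 + mu^2)^(p/2) of |x_i|^p,
    which is non-Lipschitz at 0 for p < 1."""
    return p * x * (x**2 + mu**2) ** (p / 2.0 - 1.0)

# Minimize sum_i |x_i|^(1/2) over the unit sphere (sparsity-seeking):
# Riemannian steepest descent on the surrogate while decreasing mu.
x = np.array([0.6, 0.5, 0.5, 0.5]); x /= np.linalg.norm(x)
mu = 1.0
for _ in range(2000):
    g = smoothed_lp_grad(x, 0.5, mu)
    g -= np.dot(g, x) * x            # project onto the sphere's tangent space
    x = x - 0.01 * g
    x /= np.linalg.norm(x)           # retraction back to the sphere
    mu = max(0.995 * mu, 0.05)       # decrease the smoothing parameter
print(np.round(x, 3))                # mass concentrates on one coordinate
```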
arXiv Detail & Related papers (2021-04-09T05:38:28Z)
- Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box Optimization Framework [100.36569795440889]
This work studies zeroth-order (ZO) optimization, which does not require first-order gradient information.
We show that with a graceful design in coordinate importance sampling, the proposed ZO optimization method is efficient in terms of both iteration complexity and function query cost.
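A minimal sketch of a zeroth-order estimator with coordinate importance sampling; the sampling distribution and finite-difference radius below are illustrative assumptions, not the paper's design.

```python
import numpy as np

def zo_grad_importance(f, x, probs, mu=1e-4, n=16, rng=None):
    """Zeroth-order gradient estimate: sample coordinates with importance
    weights `probs`, estimate each sampled partial derivative with a central
    finite difference, and reweight by 1/probs for unbiasedness."""
    rng = rng or np.random.default_rng()
    g = np.zeros_like(x)
    idx = rng.choice(x.size, size=n, p=probs)
    for i in idx:
        e = np.zeros_like(x); e[i] = mu
        d = (f(x + e) - f(x - e)) / (2.0 * mu)   # ~ i-th partial derivative
        g[i] += d / (probs[i] * n)               # importance-weight correction
    return g

# Example: quadratic with very uneven curvature; sampling is biased toward
# the coordinate that matters (weights chosen here by assumed prior knowledge).
scales = np.array([100.0, 1.0, 1.0, 1.0])
f = lambda v: float(np.sum(scales * v**2))
probs = scales / scales.sum()
# Unbiased but sparse estimate; the true gradient is [200, 2, 2, 2].
print(zo_grad_importance(f, np.ones(4), probs))
```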
arXiv Detail & Related papers (2020-12-21T17:29:58Z)
- Nearest Neighbour Based Estimates of Gradients: Sharp Nonasymptotic Bounds and Applications [0.6445605125467573]
Gradient estimation is of crucial importance in statistics and learning theory.
We consider here the classic regression setup, where a real-valued, square-integrable random variable $Y$ is to be predicted.
We prove nonasymptotic bounds improving upon those obtained for alternative estimation methods.
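One standard nearest-neighbour construction is a local least-squares fit, sketched below; the paper's exact estimator and its nonasymptotic bounds differ in detail.

```python
import numpy as np

def knn_gradient(x, X, y, k=30):
    """Nearest-neighbour gradient estimate: fit an affine model
    y ~ a + g.(X - x) on the k nearest neighbours of x by least squares;
    the slope g estimates the gradient of the regression function at x."""
    d2 = np.sum((X - x) ** 2, axis=1)
    nn = np.argsort(d2)[:k]                      # indices of the k nearest points
    A = np.hstack([np.ones((k, 1)), X[nn] - x])  # design matrix [1, X_j - x]
    coef, *_ = np.linalg.lstsq(A, y[nn], rcond=None)
    return coef[1:]                              # slope part = gradient estimate

# Example: noisy samples of f(x) = sin(x0) + x1^2.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (5000, 2))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.01 * rng.standard_normal(5000)
print(knn_gradient(np.array([0.2, 0.5]), X, y))  # ~ [cos(0.2), 1.0]
```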
arXiv Detail & Related papers (2020-06-26T15:19:43Z)
- Stochastic Zeroth-order Riemannian Derivative Estimation and Optimization [15.78743548731191]
We propose an oracle version of the Gaussian smoothing function to overcome the difficulty posed by the non-linearity of the manifold.
We demonstrate the applicability of our algorithms with numerical results and real-world applications: black-box stiffness control for robotics and black-box attacks on neural networks.
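A minimal sketch of zeroth-order Gaussian smoothing adapted to a manifold: sample directions in the tangent space and probe the function along a retraction. The sphere and the constants are assumptions of this sketch, not the paper's oracle construction.

```python
import numpy as np

def zo_riemannian_grad(f, x, mu=1e-3, n=64, rng=None):
    """Zeroth-order Riemannian gradient estimate on the unit sphere: draw
    Gaussian directions in the tangent space at x, probe f along the
    retraction, and average the finite-difference payoffs."""
    rng = rng or np.random.default_rng()
    g = np.zeros_like(x)
    for _ in range(n):
        u = rng.standard_normal(x.size)
        u -= np.dot(u, x) * x                          # project into tangent space
        step = (x + mu * u) / np.linalg.norm(x + mu * u)  # retraction
        g += (f(step) - f(x)) / mu * u
    return g / n

# Example: f(x) = <a, x> on the sphere; Riemannian gradient = a - <a,x> x.
a = np.array([1.0, 2.0, 2.0])
f = lambda v: float(np.dot(a, v))
x = np.array([1.0, 0.0, 0.0])
print(zo_riemannian_grad(f, x), a - np.dot(a, x) * x)  # the two should agree
```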
arXiv Detail & Related papers (2020-03-25T06:58:19Z)
- SLEIPNIR: Deterministic and Provably Accurate Feature Expansion for Gaussian Process Regression with Derivatives [86.01677297601624]
We propose a novel approach for scaling GP regression with derivatives based on quadrature Fourier features.
We prove deterministic, non-asymptotic and exponentially fast decaying error bounds which apply for both the approximated kernel as well as the approximated posterior.
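A minimal sketch of quadrature Fourier features for a 1-D RBF kernel, with deterministic Gauss-Hermite nodes replacing Monte Carlo frequencies; the derivative features and the error bounds of the paper are omitted, and the grid size is an assumption.

```python
import numpy as np

# k(x, y) = exp(-(x-y)^2/2) = E_{w~N(0,1)}[cos(w (x - y))]; replace the
# expectation by Gauss-Hermite quadrature instead of random sampling.
nodes, weights = np.polynomial.hermite_e.hermegauss(16)
weights = weights / np.sqrt(2.0 * np.pi)   # normalize to an N(0,1) expectation

def features(x):
    """Feature map phi with phi(x).phi(y) ~ k(x, y)."""
    wx = np.outer(x, nodes)
    return np.hstack([np.sqrt(weights) * np.cos(wx),
                      np.sqrt(weights) * np.sin(wx)])

# Bayesian linear regression in feature space = approximate GP regression.
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, 200)
y = np.sin(X) + 0.1 * rng.standard_normal(200)
Phi, noise = features(X), 0.1
A = Phi.T @ Phi + noise**2 * np.eye(Phi.shape[1])
w_mean = np.linalg.solve(A, Phi.T @ y)
print(features(np.array([0.5])) @ w_mean)  # posterior mean ~ sin(0.5)
```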
arXiv Detail & Related papers (2020-03-05T14:33:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.