From the Greene--Wu Convolution to Gradient Estimation over Riemannian Manifolds
- URL: http://arxiv.org/abs/2108.07406v1
- Date: Tue, 17 Aug 2021 02:16:15 GMT
- Title: From the Greene--Wu Convolution to Gradient Estimation over Riemannian Manifolds
- Authors: Tianyu Wang, Yifeng Huang and Didong Li
- Abstract summary: Greene and Wu introduced a convolution, known as the Greene-Wu (GW) convolution.
In this paper, we introduce a reformulation of the GW convolution.
Also enabled by our new reformulation, an improved method for gradient estimation is introduced.
- Score: 9.173528450234906
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Over a complete Riemannian manifold of finite dimension, Greene and Wu
introduced a convolution, known as the Greene-Wu (GW) convolution. In this paper,
we introduce a reformulation of the GW convolution. Using our reformulation,
many properties of the GW convolution can be easily derived, including a new
formula for how the curvature of the space affects the curvature of the
function through the GW convolution. Also enabled by our new reformulation, an
improved method for gradient estimation over Riemannian manifolds is
introduced. Theoretically, our gradient estimation method improves the order of
estimation error from $O \left( \left( n + 3 \right)^{3/2} \right)$ to $O
\left( n^{3/2} \right)$, where $n$ is the dimension of the manifold.
Empirically, our method outperforms the best existing method for gradient
estimation over Riemannian manifolds, as evidenced by thorough experimental
evaluations.
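As an illustration of the kind of zeroth-order gradient estimation the abstract refers to, here is a minimal sketch on the unit sphere. It uses a generic finite-difference estimator along random tangent directions; the sampling scheme, smoothing parameter mu, and scaling constant are assumptions for illustration, not the paper's GW-based construction.

```python
import numpy as np

def tangent_projection(x, v):
    """Project an ambient vector v onto the tangent space of the unit sphere at x."""
    return v - np.dot(x, v) * x

def exp_map_sphere(x, u):
    """Exponential map on the unit sphere: follow the geodesic from x along tangent u."""
    t = np.linalg.norm(u)
    return x if t < 1e-12 else np.cos(t) * x + np.sin(t) * (u / t)

def zo_gradient_sphere(f, x, mu=1e-3, num_samples=500, rng=None):
    """Zeroth-order Riemannian gradient estimate at x:
        grad f(x) ~ (m / mu) * E[(f(Exp_x(mu * u)) - f(x)) * u],
    with u uniform on the unit sphere of the tangent space (dimension m)."""
    rng = np.random.default_rng() if rng is None else rng
    m = x.size - 1                      # intrinsic dimension of the sphere
    fx, est = f(x), np.zeros_like(x)
    for _ in range(num_samples):
        u = tangent_projection(x, rng.standard_normal(x.shape))
        u /= np.linalg.norm(u)          # uniform direction in the tangent space
        est += (f(exp_map_sphere(x, mu * u)) - fx) * u
    return (m / (mu * num_samples)) * est
```

For example, with f(x) = a @ x the true Riemannian gradient is the tangent projection of a, which the estimate approaches as num_samples grows and mu shrinks.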
Related papers
- Stochastic Gradient Descent for Gaussian Processes Done Right [86.83678041846971]
We show that when done right -- by which we mean using specific insights from the optimisation and kernel communities -- gradient descent is highly effective.
We introduce a stochastic dual descent algorithm, explain its design in an intuitive manner and illustrate the design choices.
Our method places Gaussian process regression on par with state-of-the-art graph neural networks for molecular binding affinity prediction.
arXiv Detail & Related papers (2023-10-31T16:15:13Z)
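As a toy illustration of the dual view described above, here is a minimal sketch that fits GP regression by plain gradient descent on the dual objective (full-batch steps with an RBF kernel; the paper's stochastic dual descent adds minibatching, momentum, and iterate averaging, which are omitted here):

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    """Squared-exponential kernel matrix between row-stacked inputs A and B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq / lengthscale**2)

def dual_descent_gp_mean(X, y, noise=0.1, lr=None, steps=2000):
    """Fit GP regression by gradient descent on the dual objective
        L(alpha) = 0.5 * alpha^T (K + noise^2 I) alpha - y^T alpha,
    whose minimiser alpha* = (K + noise^2 I)^{-1} y gives the posterior mean."""
    K = rbf_kernel(X, X)
    A = K + noise**2 * np.eye(len(X))
    if lr is None:
        lr = 1.0 / np.linalg.norm(A, 2)   # step size from the largest eigenvalue
    alpha = np.zeros(len(X))
    for _ in range(steps):
        alpha -= lr * (A @ alpha - y)     # gradient of the dual objective
    return lambda Xs: rbf_kernel(Xs, X) @ alpha   # posterior-mean predictor
```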
- Extragradient Type Methods for Riemannian Variational Inequality Problems [25.574847201669144]
We show that the average-rate convergence of both REG and RPEG is $O\left(\frac{1}{T}\right)$, aligning with the rate in the Euclidean case.
These results are enabled by judiciously addressing the holonomy effect so that additional observations can be reduced.
arXiv Detail & Related papers (2023-09-25T14:08:02Z)
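For orientation, here is a minimal Euclidean sketch of the extragradient update underlying REG/RPEG; the Riemannian versions replace these straight-line steps with exponential maps and parallel transport, which this toy version omits:

```python
import numpy as np

def extragradient(F, z0, lr=0.1, steps=500):
    """Extragradient for the VI: find z* with <F(z*), z - z*> >= 0 for all z.
    Take a half step to a lookahead point, then update with the operator
    evaluated there."""
    z = z0.astype(float)
    for _ in range(steps):
        z_half = z - lr * F(z)       # lookahead (extrapolation) step
        z = z - lr * F(z_half)       # update using the lookahead operator value
    return z

# Bilinear saddle point min_x max_y x*y, whose operator is F(z) = (y, -x):
F = lambda z: np.array([z[1], -z[0]])
print(extragradient(F, np.array([1.0, 1.0])))   # converges toward (0, 0)
```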
- Decentralized Riemannian Conjugate Gradient Method on the Stiefel Manifold [59.73080197971106]
This paper presents a first-order conjugate optimization method that converges faster than the steepest descent method.
It aims to achieve global convergence over the Stiefel manifold.
arXiv Detail & Related papers (2023-08-21T08:02:16Z)
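The paper's method is a decentralized conjugate gradient scheme; the sketch below shows only the two standard Stiefel ingredients it builds on, a tangent-space projection and a QR retraction, inside plain (centralised) Riemannian gradient descent:

```python
import numpy as np

def stiefel_tangent_projection(X, G):
    """Project a Euclidean gradient G onto the tangent space of the
    Stiefel manifold {X : X^T X = I} at X."""
    S = X.T @ G
    return G - X @ (S + S.T) / 2

def qr_retraction(X, V):
    """Map X + V back onto the Stiefel manifold via thin QR."""
    Q, R = np.linalg.qr(X + V)
    return Q * np.sign(np.diag(R))   # fix the sign ambiguity of QR

# Toy use: a 3-dimensional leading invariant subspace of a symmetric matrix M,
# by Riemannian gradient descent on f(X) = -trace(X^T M X).
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 20)); M = (M + M.T) / 2
X, _ = np.linalg.qr(rng.standard_normal((20, 3)))
for _ in range(500):
    G = -2.0 * M @ X                                   # Euclidean gradient of f
    X = qr_retraction(X, -0.01 * stiefel_tangent_projection(X, G))
```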
- Curvature-Independent Last-Iterate Convergence for Games on Riemannian Manifolds [77.4346324549323]
We show that a step size agnostic to the curvature of the manifold achieves a curvature-independent and linear last-iterate convergence rate.
To the best of our knowledge, the possibility of curvature-independent rates and/or last-iterate convergence has not been considered before.
arXiv Detail & Related papers (2023-06-29T01:20:44Z)
- Riemannian accelerated gradient methods via extrapolation [40.23168342389821]
We show that when the iterates are generated from the gradient descent method, the accelerated scheme achieves the optimal convergence rate asymptotically.
Our experiments verify the practical benefit of the novel acceleration strategy.
arXiv Detail & Related papers (2022-08-13T10:31:09Z)
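A minimal Euclidean sketch of extrapolation-based acceleration in the spirit of this approach (regularized nonlinear acceleration over stored iterates; the Riemannian version would first map the iterates into a common tangent space, and the regularisation constant here is an assumption):

```python
import numpy as np

def nonlinear_acceleration(iterates, reg=1e-8):
    """Extrapolate a sequence of optimiser iterates (rows of `iterates`):
    find affine weights c (summing to 1) that minimise the norm of the
    combined residuals r_i = x_{i+1} - x_i, then return sum_i c_i x_i."""
    X = np.asarray(iterates, dtype=float)
    R = np.diff(X, axis=0)                           # residual vectors
    RRt = R @ R.T
    RRt += reg * np.trace(RRt) * np.eye(len(RRt))    # Tikhonov regularisation
    c = np.linalg.solve(RRt, np.ones(len(RRt)))
    c /= c.sum()                                     # enforce sum(c) = 1
    return c @ X[:-1]
```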
- A Riemannian Accelerated Proximal Extragradient Framework and its Implications [52.31775617527208]
We revisit the Accelerated Hybrid Proximal Extragradient (A-HPE) method of Monteiro and Svaiter (2013), a powerful framework for obtaining accelerated Euclidean methods.
arXiv Detail & Related papers (2021-11-04T11:32:20Z)
- Projective Manifold Gradient Layer for Deep Rotation Regression [49.85464297105456]
Regressing rotations on the SO(3) manifold using deep neural networks is an important yet unsolved problem.
We propose a manifold-aware gradient that directly backpropagates into deep network weights.
arXiv Detail & Related papers (2021-10-22T08:34:15Z)
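The projective manifold gradient layer itself is specific to the paper; the sketch below only shows two standard ingredients of rotation regression that such a layer interacts with, an SVD projection onto SO(3) and the tangent-space projection of a Euclidean gradient:

```python
import numpy as np

def project_to_so3(M):
    """Closest rotation to a 3x3 matrix in Frobenius norm: SVD projection
    with a determinant correction so the result lies in SO(3), not O(3)."""
    U, _, Vt = np.linalg.svd(M)
    d = np.sign(np.linalg.det(U @ Vt))   # flip one singular vector if det = -1
    return U @ np.diag([1.0, 1.0, d]) @ Vt

def riemannian_gradient_so3(R, G):
    """Project a Euclidean gradient G onto the tangent space of SO(3) at R
    (the tangent space is R times the skew-symmetric matrices)."""
    A = R.T @ G
    return R @ (A - A.T) / 2
```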
- A diffusion-map-based algorithm for gradient computation on manifolds and applications [0.0]
We recover the gradient of a given function defined on interior points of a Riemannian submanifold of Euclidean space.
This approach is based on estimates of the Laplace-Beltrami operator proposed in diffusion-maps theory.
arXiv Detail & Related papers (2021-08-16T09:35:22Z)
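A rough sketch of the pipeline this describes: build a diffusion-maps graph Laplacian as a Laplace-Beltrami estimate, then read off gradient components through the product rule. The kernel scaling and epsilon normalisation below are assumptions, and the output is accurate only up to a kernel-dependent constant factor:

```python
import numpy as np

def diffusion_laplacian(X, eps):
    """Graph-Laplacian approximation of the Laplace-Beltrami operator from
    points X on a submanifold, with the alpha = 1 normalisation of diffusion
    maps that removes the sampling-density bias."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-D2 / (4.0 * eps))
    q = K.sum(axis=1)
    K = K / np.outer(q, q)                  # density normalisation
    P = K / K.sum(axis=1, keepdims=True)    # Markov normalisation
    return (P - np.eye(len(X))) / eps       # generator, up to a constant

def gradient_from_laplacian(X, f, eps):
    """Gradient components via the product rule ("carre du champ"):
        <grad f, grad g> = 0.5 * (L(f g) - f L(g) - g L(f)),
    taking the ambient coordinate functions as g.  `f` holds function
    values at the rows of X."""
    L = diffusion_laplacian(X, eps)
    G = np.empty_like(X)
    for i in range(X.shape[1]):
        g = X[:, i]
        G[:, i] = 0.5 * (L @ (f * g) - f * (L @ g) - g * (L @ f))
    return G
```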
- On the Convergence of Gradient Descent in GANs: MMD GAN As a Gradient Flow [26.725412498545385]
We show that a parametric kernelized gradient flow mimics the min-max game in gradient-regularized $\mathrm{MMD}$ GAN.
We then derive an explicit condition which ensures that gradient descent on the space of the generator in regularized $\mathrm{MMD}$ GAN is globally convergent to the target distribution.
arXiv Detail & Related papers (2020-11-04T16:55:00Z)
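For reference, the quantity driving the gradient flow above is the (squared) maximum mean discrepancy; a minimal biased estimator with a Gaussian kernel (the bandwidth is an arbitrary choice here):

```python
import numpy as np

def mmd2(X, Y, bandwidth=1.0):
    """Biased estimate of squared MMD between samples X and Y (row-stacked)
    under a Gaussian kernel."""
    def k(A, B):
        D2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-D2 / (2 * bandwidth**2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()
```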
- SLEIPNIR: Deterministic and Provably Accurate Feature Expansion for Gaussian Process Regression with Derivatives [86.01677297601624]
We propose a novel approach for scaling GP regression with derivatives based on quadrature Fourier features.
We prove deterministic, non-asymptotic and exponentially fast decaying error bounds which apply for both the approximated kernel as well as the approximated posterior.
arXiv Detail & Related papers (2020-03-05T14:33:20Z)
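A minimal sketch of quadrature Fourier features in one dimension, using Gauss-Hermite nodes to discretise the Bochner integral of the RBF kernel (the multi-dimensional and derivative extensions of the paper are omitted):

```python
import numpy as np

def quadrature_fourier_features(x, lengthscale=1.0, num_nodes=16):
    """Deterministic Fourier features for the 1D RBF kernel via Gauss-Hermite
    quadrature of its spectral (Bochner) integral, so that
        phi(x) @ phi(x') ~= exp(-(x - x')**2 / (2 * lengthscale**2))."""
    t, w = np.polynomial.hermite.hermgauss(num_nodes)
    omegas = np.sqrt(2.0) / lengthscale * t      # quadrature frequencies
    scale = np.sqrt(w / np.sqrt(np.pi))          # quadrature weights as amplitudes
    x = np.atleast_1d(x)[:, None]
    return np.concatenate([scale * np.cos(x * omegas),
                           scale * np.sin(x * omegas)], axis=1)
```

Unlike random Fourier features, the nodes and weights here are deterministic, which is what yields the exponentially decaying approximation error mentioned in the abstract.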