Approximating the Riemannian Metric from Point Clouds via Manifold
Moving Least Squares
- URL: http://arxiv.org/abs/2007.09885v2
- Date: Fri, 20 Nov 2020 06:29:20 GMT
- Title: Approximating the Riemannian Metric from Point Clouds via Manifold
Moving Least Squares
- Authors: Barak Sober, Robert Ravier, Ingrid Daubechies
- Abstract summary: We present a naive algorithm that yields approximate geodesic distances with a provable rate of convergence of $O(h^{k-1})$.
We demonstrate the potential of the proposed method and its robustness to noise in numerical simulations.
- Score: 2.2774471443318753
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The approximation of both geodesic distances and shortest paths on point
cloud sampled from an embedded submanifold $\mathcal{M}$ of Euclidean space has
been a long-standing challenge in computational geometry. Given a sampling
resolution parameter $ h $, state-of-the-art discrete methods yield $ O(h) $
provable approximations. In this paper, we investigate the convergence of such
approximations made by Manifold Moving Least-Squares (Manifold-MLS), a method
developed by Sober \& Levin in 2019 that constructs an approximating manifold
$\mathcal{M}^h$ from a given point cloud. We show that, provided
$\mathcal{M}\in C^{k}$ is closed (i.e., $\mathcal{M}$ is a compact manifold
without boundary), the Riemannian metric of $\mathcal{M}^h$ approximates the
Riemannian metric of $\mathcal{M}$.
Explicitly, given points $ p_1, p_2 \in \mathcal{M}$ with geodesic distance $
\rho_{\mathcal{M}}(p_1, p_2) $, we show that their corresponding points $
p_1^h, p_2^h \in \mathcal{M}^h$ have a geodesic distance of $
\rho_{\mathcal{M}^h}(p_1^h,p_2^h) = \rho_{\mathcal{M}}(p_1, p_2)(1 +
O(h^{k-1})) $ (i.e., the Manifold-MLS is nearly an isometry). We then use this
result, as well as the fact that $ \mathcal{M}^h $ can be sampled with any
desired resolution, to devise a naive algorithm that yields approximate
geodesic distances with a rate of convergence $O(h^{k-1})$. We demonstrate the
potential of the proposed method and its robustness to noise in numerical
simulations.
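For context, the discrete baseline the abstract contrasts with estimates geodesic distances by shortest paths in a proximity graph built over the samples. The sketch below illustrates that baseline on a point cloud from the unit circle; it is not the paper's Manifold-MLS construction, and all function names and parameters (`knn_graph`, `graph_geodesic`, `k=4`) are our own illustrative choices.

```python
# Illustrative sketch (not the paper's method): graph-based geodesic
# approximation on a point cloud via a k-NN graph and Dijkstra.
import heapq
import math

def knn_graph(points, k):
    """Adjacency list: connect each point to its k nearest neighbours."""
    n = len(points)
    adj = [[] for _ in range(n)]
    for i in range(n):
        dists = sorted(
            (math.dist(points[i], points[j]), j) for j in range(n) if j != i
        )
        for d, j in dists[:k]:
            adj[i].append((j, d))
            adj[j].append((i, d))  # keep the graph symmetric
    return adj

def graph_geodesic(adj, src, dst):
    """Dijkstra shortest-path length: the discrete geodesic estimate."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, math.inf):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return math.inf

# Sample the unit circle, where true geodesic distance is arc length.
n = 400
pts = [(math.cos(2 * math.pi * i / n), math.sin(2 * math.pi * i / n))
       for i in range(n)]
adj = knn_graph(pts, k=4)
est = graph_geodesic(adj, 0, n // 2)  # antipodal points: true distance is pi
print(est)  # close to pi; the gap is the chord-vs-arc discretisation error
```

The error of such a graph estimate shrinks only linearly in the sampling resolution (the $O(h)$ rate the abstract mentions), since each edge replaces an arc by its chord; the paper's point is that computing on the smooth surrogate $\mathcal{M}^h$ instead improves this to $O(h^{k-1})$.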
Related papers
- Dimension-free Private Mean Estimation for Anisotropic Distributions [55.86374912608193]
Previous private estimators on distributions over $\mathbb{R}^d$ suffer from a curse of dimensionality.
We present an algorithm whose sample complexity has improved dependence on dimension.
arXiv Detail & Related papers (2024-11-01T17:59:53Z) - Efficient Continual Finite-Sum Minimization [52.5238287567572]
We propose a key twist into the finite-sum minimization, dubbed as continual finite-sum minimization.
Our approach significantly improves upon the $\mathcal{O}(n/\epsilon)$ FOs that $\mathrm{StochasticGradientDescent}$ requires.
We also prove that there is no natural first-order method with $\mathcal{O}\left(n/\epsilon^{\alpha}\right)$ gradient complexity for $\alpha < 1/4$, establishing that the first-order complexity of our method is nearly tight.
arXiv Detail & Related papers (2024-06-07T08:26:31Z) - Efficient Sampling on Riemannian Manifolds via Langevin MCMC [51.825900634131486]
We study the task of efficiently sampling from a Gibbs distribution $d\pi^* = e^{-h}\, d\mathrm{vol}_g$ over a Riemannian manifold $M$ via (geometric) Langevin MCMC.
Our results apply to general settings where $\pi^*$ can be non-log-concave and $M$ can have negative Ricci curvature.
arXiv Detail & Related papers (2024-02-15T22:59:14Z) - Provably learning a multi-head attention layer [55.2904547651831]
Multi-head attention layer is one of the key components of the transformer architecture that sets it apart from traditional feed-forward models.
In this work, we initiate the study of provably learning a multi-head attention layer from random examples.
We prove computational lower bounds showing that in the worst case, exponential dependence on $m$ is unavoidable.
arXiv Detail & Related papers (2024-02-06T15:39:09Z) - Accelerated Methods for Riemannian Min-Max Optimization Ensuring Bounded
Geometric Penalties [21.141544548229774]
We study problems of the form $\min_x \max_y f(x, y)$, where $\mathcal{M}$ and $\mathcal{N}$ are Hadamard manifolds.
We show globally accelerated convergence with reduced gradient-convergence constants.
arXiv Detail & Related papers (2023-05-25T15:43:07Z) - Metricizing the Euclidean Space towards Desired Distance Relations in
Point Clouds [1.2366208723499545]
We attack unsupervised learning algorithms, specifically the $k$-Means and density-based (DBSCAN) clustering algorithms.
We show that the results of clustering algorithms may not generally be trustworthy, unless there is a standardized and fixed prescription to use a specific distance function.
arXiv Detail & Related papers (2022-11-07T16:37:29Z) - Local approximation of operators [0.0]
We study the problem of determining the degree of approximation of a non-linear operator between metric spaces $\mathfrak{X}$ and $\mathfrak{Y}$.
We establish constructive methods to do this efficiently, i.e., with the constants involved in the estimates on the approximation on $\mathbb{S}^d$ being $\mathcal{O}(d^{1/6})$.
arXiv Detail & Related papers (2022-02-13T19:28:34Z) - Threshold Phenomena in Learning Halfspaces with Massart Noise [56.01192577666607]
We study the problem of PAC learning halfspaces on $\mathbb{R}^d$ with Massart noise under Gaussian marginals.
Our results qualitatively characterize the complexity of learning halfspaces in the Massart model.
arXiv Detail & Related papers (2021-08-19T16:16:48Z) - Non-Parametric Estimation of Manifolds from Noisy Data [1.0152838128195467]
We consider the problem of estimating a $d$-dimensional sub-manifold of $\mathbb{R}^D$ from a finite set of noisy samples.
We show that the estimation yields convergence rates of $n^{-\frac{k}{2k+d}}$ for the point estimation and $n^{-\frac{k-1}{2k+d}}$ for the estimation of the tangent space.
arXiv Detail & Related papers (2021-05-11T02:29:33Z) - Data-driven Efficient Solvers for Langevin Dynamics on Manifold in High
Dimensions [12.005576001523515]
We study the Langevin dynamics of a physical system with manifold structure $\mathcal{M}\subset\mathbb{R}^p$.
We leverage the corresponding Fokker-Planck equation on the manifold $\mathcal{N}$ in terms of the reaction coordinates $\mathsf{y}$.
We propose an implementable, unconditionally stable, data-driven finite volume scheme for this Fokker-Planck equation.
arXiv Detail & Related papers (2020-05-22T16:55:38Z) - On the Complexity of Minimizing Convex Finite Sums Without Using the
Indices of the Individual Functions [62.01594253618911]
We exploit the finite noise structure of finite sums to derive a matching $O(n^2)$ upper bound under the global oracle model.
Following a similar approach, we propose a novel adaptation of SVRG which is both compatible with oracles and achieves complexity bounds of $\tilde{O}(n^2 + n\sqrt{L/\mu})\log(1/\epsilon)$ and $O(n\sqrt{L/\epsilon})$, for $\mu > 0$ and $\mu = 0$, respectively.
arXiv Detail & Related papers (2020-02-09T03:39:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.