Strong uniform convergence of Laplacians of random geometric and
directed kNN graphs on compact manifolds
- URL: http://arxiv.org/abs/2212.10287v1
- Date: Tue, 20 Dec 2022 14:31:06 GMT
- Title: Strong uniform convergence of Laplacians of random geometric and
directed kNN graphs on compact manifolds
- Authors: Hélène Guérin, Dinh-Toan Nguyen and Viet-Chi Tran
- Abstract summary: We study the almost sure uniform convergence of the random walk generator to the diffusive Laplace-Beltrami operator when $n$ tends to infinity.
This work extends known results of the past 15 years.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Consider $n$ points independently sampled from a density $p$ of class
$\mathcal{C}^2$ on a smooth compact $d$-dimensional sub-manifold $\mathcal{M}$
of $\mathbb{R}^m$, and consider the generator of a random walk visiting these
points according to a transition kernel $K$. We study the almost sure uniform
convergence of this operator to the diffusive Laplace-Beltrami operator when
$n$ tends to infinity. This work extends known results of the past 15 years. In
particular, our result does not require the kernel $K$ to be continuous, which
covers the cases of walks exploring $k$NN-random and geometric graphs, and
convergence rates are given. The distance between the random walk generator and
the limiting operator is separated into several terms: a statistical term,
related to the law of large numbers, which is treated with concentration tools, and an
approximation term that we control with tools from differential geometry. The
convergence of $k$NN Laplacians is detailed.
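As a concrete illustration of the $k$NN case, here is a minimal numerical sketch, not the paper's exact estimator: it assumes a uniform density on the circle $S^1\subset\mathbb{R}^2$, uses a uniform kernel over the $k$ nearest neighbours, rescales by the standard ball-kernel constant $2(d+2)/\rho_k^2$ (an assumption of this sketch), and compares the generator applied to $f=\cos$ with its Laplace-Beltrami image $-\cos$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample n points uniformly on the unit circle S^1 in R^2 (d = 1, m = 2).
n, k, d = 2000, 50, 1
theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)

# Test function and its Laplace-Beltrami image on S^1: Delta cos = -cos.
f = np.cos(theta)
lap_f_true = -np.cos(theta)

# Ambient pairwise distances; k nearest neighbours and kNN radius rho_k.
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
idx = np.argsort(D, axis=1)[:, 1:k + 1]   # k nearest neighbours, self excluded
rho = D[np.arange(n), idx[:, -1]]         # distance to the k-th neighbour

# Random-walk generator of the kNN graph, rescaled so that it matches the
# Laplace-Beltrami operator for a uniform kernel on a ball of radius rho_k:
# E[f(Y) - f(x)] ~ rho^2 / (2(d + 2)) * (Delta f)(x).
L_f = (f[idx].mean(axis=1) - f) * (2 * (d + 2)) / rho**2

print("sup-norm error:", np.max(np.abs(L_f - lap_f_true)))
```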
Related papers
- Further Understanding of a Local Gaussian Process Approximation: Characterising Convergence in the Finite Regime [1.3518297878940662]
We show that common choices of kernel functions for a highly accurate and massively scalable GPnn regression model exhibit gradual convergence to limiting behaviour as the dataset size $n$ increases.
Similar bounds can be found under model misspecification and combined to give overall rates of convergence of both MSE and an important calibration metric.
arXiv Detail & Related papers (2024-04-09T10:47:01Z) - Efficient Sampling on Riemannian Manifolds via Langevin MCMC [51.825900634131486]
We study the task of efficiently sampling from a Gibbs distribution $d\pi^* = e^{h}\,d\mathrm{vol}_g$ over a Riemannian manifold $M$ via (geometric) Langevin MCMC.
Our results apply to general settings where $\pi^*$ can be non-exponential and $M$ can have negative Ricci curvature.
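The geometric step on a general $M$ requires geodesics/exponential maps; as a hedged illustration of the plain method, the sketch below runs the flat special case $M=\mathbb{R}^2$ with an assumed potential $h(x)=-\|x\|^2/2$ (so that $\pi^*$ is the standard Gaussian), using the unadjusted Langevin iteration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Flat special case M = R^2; the paper's geometric version replaces the
# Euclidean step below by a geodesic step (exponential map) on M.
def grad_h(x):
    # Assumed potential h(x) = -||x||^2 / 2, so pi* ~ e^h is standard Gaussian.
    return -x

eta, n_steps, burn_in = 1e-2, 20_000, 5_000
x = np.zeros(2)
samples = np.empty((n_steps, 2))
for t in range(n_steps):
    # Unadjusted Langevin step: drift along grad h plus Gaussian noise.
    x = x + eta * grad_h(x) + np.sqrt(2.0 * eta) * rng.standard_normal(2)
    samples[t] = x

print("empirical mean:", samples[burn_in:].mean(axis=0))       # ~ (0, 0)
print("empirical variances:", samples[burn_in:].var(axis=0))   # ~ (1, 1)
```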
arXiv Detail & Related papers (2024-02-15T22:59:14Z) - A Unified Framework for Uniform Signal Recovery in Nonlinear Generative
Compressed Sensing [68.80803866919123]
Under nonlinear measurements, most prior results are non-uniform, i.e., they hold with high probability for a fixed $\mathbf{x}^*$ rather than for all $\mathbf{x}^*$ simultaneously.
Our framework accommodates GCS with 1-bit/uniformly quantized observations and single index models as canonical examples.
We also develop a concentration inequality that produces tighter bounds for product processes whose index sets have low metric entropy.
arXiv Detail & Related papers (2023-09-25T17:54:19Z) - A Nearly Tight Bound for Fitting an Ellipsoid to Gaussian Random Points [50.90125395570797]
This nearly establishes a conjecture of Saunderson, Chandrasekaran, Parrilo, and Willsky (2012), within logarithmic factors.
The latter conjecture has attracted significant attention over the past decade, due to its connections to machine learning and sum-of-squares lower bounds for certain statistical problems.
arXiv Detail & Related papers (2022-12-21T17:48:01Z) - A Law of Robustness beyond Isoperimetry [84.33752026418045]
We prove a Lipschitzness lower bound $\Omega(\sqrt{n/p})$ on the robustness of interpolating neural network parameters on arbitrary distributions.
We then show the potential benefit of overparametrization for smooth data when $n=\mathrm{poly}(d)$.
We disprove the potential existence of an $O(1)$-Lipschitz robust interpolating function when $n=\exp(\omega(d))$.
arXiv Detail & Related papers (2022-02-23T16:10:23Z) - Random matrices in service of ML footprint: ternary random features with
no performance loss [55.30329197651178]
We show that the eigenspectrum of $\mathbf{K}$ is independent of the distribution of the i.i.d. entries of $\mathbf{w}$.
We propose a novel random features technique, called Ternary Random Features (TRF).
The computation of the proposed random features requires no multiplication and a factor of $b$ fewer bits for storage compared to classical random features.
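A minimal sketch of the idea under illustrative assumptions (the weight distribution and the quantization threshold below are not taken from the paper): ternary projection weights in $\{-1,0,+1\}$ and a ternary activation, so that computing the features needs only additions and comparisons.

```python
import numpy as np

rng = np.random.default_rng(2)

def ternary_random_features(X, n_features, sparsity=0.5):
    """Sketch of a ternary random feature map: projection entries in
    {-1, 0, +1} and a ternary activation, so the map uses additions and
    comparisons only (no multiplications)."""
    d = X.shape[1]
    # Ternary weights: P(0) = sparsity, P(+1) = P(-1) = (1 - sparsity) / 2.
    W = rng.choice([-1, 0, 1], size=(d, n_features),
                   p=[(1 - sparsity) / 2, sparsity, (1 - sparsity) / 2])
    Z = X @ W                    # with ternary W this is add/subtract only
    t = np.median(np.abs(Z))     # illustrative dead-zone threshold
    return np.sign(Z) * (np.abs(Z) > t)  # values in {-1, 0, +1}

X = rng.standard_normal((100, 32))
Phi = ternary_random_features(X, n_features=256)
K_approx = Phi @ Phi.T / Phi.shape[1]    # random-feature kernel estimate
print(K_approx.shape)                    # (100, 100)
```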
arXiv Detail & Related papers (2021-10-05T09:33:49Z) - How many moments does MMD compare? [7.919213739992465]
The operator $\mathcal{F}^{-1}$ acts on smooth functions in the same way as an integral operator associated with $K$.
We show that kernels defined by pseudo-differential operators are able to approximate uniformly any continuous Mercer kernel on a compact set.
arXiv Detail & Related papers (2021-06-27T16:44:17Z) - Non-Parametric Estimation of Manifolds from Noisy Data [1.0152838128195467]
We consider the problem of estimating a $d$-dimensional sub-manifold of $\mathbb{R}^D$ from a finite set of noisy samples.
We show that the estimation yields rates of convergence of $n^{-\frac{k}{2k+d}}$ for the point estimation and $n^{-\frac{k-1}{2k+d}}$ for the estimation of the tangent space.
arXiv Detail & Related papers (2021-05-11T02:29:33Z) - From Smooth Wasserstein Distance to Dual Sobolev Norm: Empirical
Approximation and Statistical Applications [18.618590805279187]
We show that $\mathsf{W}_p^{(\sigma)}$ is controlled by a $p$th-order smooth dual Sobolev norm $\mathsf{d}_p^{(\sigma)}$.
We derive the limit distribution of $\sqrt{n}\,\mathsf{d}_p^{(\sigma)}(\hat{\mu}_n,\mu)$ in all dimensions $d$, when $\mu$ is sub-Gaussian.
arXiv Detail & Related papers (2021-01-11T17:23:24Z) - Fast Convergence of Langevin Dynamics on Manifold: Geodesics meet
Log-Sobolev [31.57723436316983]
One approach to sampling from a high-dimensional distribution $e^{-f}$ for some function $f$ is the Langevin algorithm.
Our work generalizes the results of [53], where $f$ is defined on a manifold $M$ rather than $\mathbb{R}^n$.
arXiv Detail & Related papers (2020-10-11T15:02:12Z) - Linear Time Sinkhorn Divergences using Positive Features [51.50788603386766]
Solving optimal transport with an entropic regularization requires computing an $n\times n$ kernel matrix that is repeatedly applied to a vector.
We propose to use instead ground costs of the form $c(x,y)=-\log\langle\varphi(x),\varphi(y)\rangle$, where $\varphi$ is a map from the ground space onto the positive orthant $\mathbb{R}^r_+$, with $r\ll n$.
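To see why such costs give linear-time iterations: with regularization $\varepsilon=1$, the Gibbs kernel satisfies $e^{-c(x,y)}=\langle\varphi(x),\varphi(y)\rangle$, so Sinkhorn's matrix-vector products factor through the $r$-dimensional feature space and cost $O((n+m)r)$. The sketch below uses an arbitrary positive feature map, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative positive feature map phi: R^d -> R^r_+ (the paper designs
# specific positive features, e.g. for the Gaussian kernel).
def phi(X, W):
    return np.exp(X @ W)              # entrywise positive, shape (n, r)

n, m, d, r = 500, 400, 10, 32
X, Y = rng.standard_normal((n, d)), rng.standard_normal((m, d))
W = rng.standard_normal((d, r)) / np.sqrt(d)
a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)   # uniform marginals

# With c(x, y) = -log<phi(x), phi(y)> and eps = 1, the Gibbs kernel is
# exactly K = Px @ Py.T (rank <= r), so K v and K^T u never form K.
Px, Py = phi(X, W), phi(Y, W)
u, v = np.ones(n), np.ones(m)
for _ in range(200):
    u = a / (Px @ (Py.T @ v))         # K v    in O((n + m) r)
    v = b / (Py @ (Px.T @ u))         # K^T u  in O((n + m) r)

row_marginals = u * (Px @ (Py.T @ v)) # row sums of the transport plan
print("max row-marginal error:", np.max(np.abs(row_marginals - a)))
```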
arXiv Detail & Related papers (2020-06-12T10:21:40Z)