Eigen-convergence of Gaussian kernelized graph Laplacian by manifold heat interpolation
- URL: http://arxiv.org/abs/2101.09875v1
- Date: Mon, 25 Jan 2021 03:22:18 GMT
- Title: Eigen-convergence of Gaussian kernelized graph Laplacian by manifold heat interpolation
- Authors: Xiuyuan Cheng, Nan Wu
- Abstract summary: We study the spectral convergence of the graph Laplacian to the Laplace-Beltrami operator.
Data are uniformly sampled on a $d$-dimensional manifold.
We prove new point-wise and Dirichlet form convergence rates for the density-corrected graph Laplacian.
- Score: 16.891059233061767
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This work studies the spectral convergence of the graph Laplacian to the
Laplace-Beltrami operator when the graph affinity matrix is constructed from
$N$ random samples on a $d$-dimensional manifold embedded in a possibly high
dimensional space. By analyzing Dirichlet form convergence and constructing
candidate approximate eigenfunctions via convolution with manifold heat kernel,
we prove that, with Gaussian kernel, one can set the kernel bandwidth parameter
$\epsilon \sim (\log N/ N)^{1/(d/2+2)}$ such that the eigenvalue convergence
rate is $N^{-1/(d/2+2)}$ and the eigenvector convergence in 2-norm has rate
$N^{-1/(d+4)}$; when $\epsilon \sim N^{-1/(d/2+3)}$, both eigenvalue and
eigenvector rates are $N^{-1/(d/2+3)}$. These rates hold up to a $\log N$ factor
and are proved for finitely many low-lying eigenvalues. The result holds for
un-normalized and random-walk graph Laplacians when data are uniformly sampled
on the manifold, as well as the density-corrected graph Laplacian (where the
affinity matrix is normalized by the degree matrix from both sides) with
non-uniformly sampled data. As an intermediate result, we prove new point-wise
and Dirichlet form convergence rates for the density-corrected graph Laplacian.
Numerical results are provided to verify the theory.
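As a toy illustration of the abstract's bandwidth scaling (a minimal sketch, not the authors' code: the unit-circle data, sample size, and kernel constant below are assumptions), one can sample uniformly from the circle ($d=1$), build the Gaussian-kernel random-walk graph Laplacian with $\epsilon \sim (\log N/N)^{1/(d/2+2)}$, and compare its low-lying eigenvalues with the Laplace-Beltrami spectrum $0, 1, 1, 4, 4, \ldots$ on $S^1$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 2000, 1  # N samples on a d=1 manifold (the unit circle in R^2)

theta = rng.uniform(0.0, 2.0 * np.pi, N)
X = np.column_stack([np.cos(theta), np.sin(theta)])

# Bandwidth at the rate from the abstract: eps ~ (log N / N)^(1/(d/2+2)).
eps = (np.log(N) / N) ** (1.0 / (d / 2.0 + 2.0))

# Gaussian affinity W_ij = exp(-|x_i - x_j|^2 / (4 eps)); for this kernel
# the moment constant m_2 / (2 m_0) equals 1, so no extra rescaling below.
G = X @ X.T
sq = np.clip(2.0 - 2.0 * G, 0.0, None)  # |x_i - x_j|^2 for unit vectors
W = np.exp(-sq / (4.0 * eps))

# Random-walk Laplacian (I - D^{-1} W) / eps, computed via its symmetric
# conjugate D^{-1/2} W D^{-1/2}, which has the same eigenvalues.
deg = W.sum(axis=1)
S = W / np.sqrt(np.outer(deg, deg))
lam = np.linalg.eigvalsh((np.eye(N) - S) / eps)

print("low-lying eigenvalues:", np.round(lam[:7], 3))
print("Laplace-Beltrami on the unit circle: 0, 1, 1, 4, 4, 9, 9")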
Related papers
- Improved convergence rate of kNN graph Laplacians [11.93971616098517]
We consider a general class of $k$NN graphs where the graph affinity has the form $W_{ij} = \epsilon^{-d/2}\,(\cdots)$.
We prove the point-wise convergence of the $k$NN graph Laplacian to the limiting manifold operator.
arXiv Detail & Related papers (2024-10-30T17:01:00Z)
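For the $k$NN-graph entry above, a common self-tuned construction in the same spirit is sketched below; the per-point bandwidth taken from the $k$-th-neighbor distance is an illustrative assumption and is not claimed to match the paper's exact affinity class or its $\epsilon^{-d/2}$ normalization.

```python
import numpy as np

def knn_affinity(X, k=16):
    """Self-tuned kNN affinity: each point gets bandwidth rho_i equal to
    its distance to the k-th nearest neighbor (illustrative choice)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    rho = np.sqrt(np.sort(sq, axis=1)[:, k])  # column 0 is the point itself
    W = np.exp(-sq / np.outer(rho, rho))
    np.fill_diagonal(W, 0.0)
    return W

X = np.random.default_rng(1).normal(size=(500, 3))
W = knn_affinity(X)
print(W.shape, float(W.max()))
```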
- A Unified Framework for Uniform Signal Recovery in Nonlinear Generative Compressed Sensing [68.80803866919123]
Under nonlinear measurements, most prior results are non-uniform, i.e., they hold with high probability for a fixed $\mathbf{x}^*$ rather than for all $\mathbf{x}^*$ simultaneously.
Our framework accommodates GCS with 1-bit/uniformly quantized observations and single index models as canonical examples.
We also develop a concentration inequality that produces tighter bounds for product processes whose index sets have low metric entropy.
arXiv Detail & Related papers (2023-09-25T17:54:19Z)
- Efficient Signed Graph Sampling via Balancing & Gershgorin Disc Perfect Alignment [51.74913666829224]
We show that for datasets with strong inherent anti-correlations, a suitable graph contains both positive and negative edge weights.
We propose a linear-time signed graph sampling method centered on the concept of balanced signed graphs.
Experimental results show that our signed graph sampling method outperformed existing fast sampling schemes noticeably on various datasets.
arXiv Detail & Related papers (2022-08-18T09:19:01Z)
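The signed-graph entry above builds on balanced signed graphs; by Harary's theorem, a signed graph is balanced exactly when its nodes split into two camps with positive edges inside camps and negative edges across. A minimal balance test by sign-aware two-coloring (the function and example graphs are illustrative, not from the paper):

```python
from collections import deque

def is_balanced(n, signed_edges):
    """Check balance (Harary) by 2-coloring: positive edges keep the
    color, negative edges flip it; any inconsistency means unbalanced."""
    adj = [[] for _ in range(n)]
    for u, v, s in signed_edges:  # s is +1 or -1
        adj[u].append((v, s))
        adj[v].append((u, s))
    color = [None] * n
    for start in range(n):
        if color[start] is not None:
            continue
        color[start] = 0
        q = deque([start])
        while q:
            u = q.popleft()
            for v, s in adj[u]:
                want = color[u] if s > 0 else 1 - color[u]
                if color[v] is None:
                    color[v] = want
                    q.append(v)
                elif color[v] != want:
                    return False  # inconsistent cycle => unbalanced
    return True

# Triangle with two negative edges is balanced; with one, it is not.
print(is_balanced(3, [(0, 1, 1), (1, 2, -1), (0, 2, -1)]))  # True
print(is_balanced(3, [(0, 1, 1), (1, 2, 1), (0, 2, -1)]))   # False
```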
- Bi-stochastically normalized graph Laplacian: convergence to manifold Laplacian and robustness to outlier noise [10.418647759223965]
Bi-stochastic normalization provides an alternative normalization of graph Laplacians in graph-based data analysis.
We prove the convergence of bi-stochastically normalized graph Laplacian to manifold (weighted-)Laplacian with rates.
When the manifold data are corrupted by outlier noise, we prove point-wise consistency of the graph Laplacian.
arXiv Detail & Related papers (2022-06-22T21:08:24Z)
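For the bi-stochastic entry above, the standard way to obtain a bi-stochastic normalization of a symmetric affinity matrix is Sinkhorn-Knopp-type diagonal scaling; the toy affinity and damped iteration below are an illustrative sketch, not the paper's estimator.

```python
import numpy as np

def sinkhorn_scaling(W, n_iter=1000, tol=1e-12):
    """Find eta > 0 with diag(eta) W diag(eta) (approximately) bi-stochastic,
    i.e., eta_i * (W @ eta)_i = 1 for all i (symmetric W assumed)."""
    eta = np.ones(W.shape[0])
    for _ in range(n_iter):
        eta_new = np.sqrt(eta / (W @ eta))  # damped fixed-point update
        if np.max(np.abs(eta_new - eta)) < tol:
            return eta_new
        eta = eta_new
    return eta

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 2))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-sq / 0.5)                       # symmetric Gaussian affinity
eta = sinkhorn_scaling(W)
B = eta[:, None] * W * eta[None, :]         # bi-stochastic normalization
print("max |row sum - 1|:", float(np.abs(B.sum(axis=1) - 1.0).max()))
```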
- Structure Learning in Graphical Models from Indirect Observations [17.521712510832558]
This paper considers learning the graphical structure of a $p$-dimensional random vector $X \in \mathbb{R}^p$ using both parametric and non-parametric methods.
Under mild conditions, we show that our graph-structure estimator recovers the correct structure.
arXiv Detail & Related papers (2022-05-06T19:24:44Z)
- Random matrices in service of ML footprint: ternary random features with no performance loss [55.30329197651178]
We show that the eigenspectrum of $\mathbf{K}$ is independent of the distribution of the i.i.d. entries of $\mathbf{w}$.
We propose a novel random features technique called Ternary Random Features (TRF).
Computing the proposed random features requires no multiplication and a factor of $b$ fewer bits of storage compared to classical random features.
arXiv Detail & Related papers (2021-10-05T09:33:49Z)
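To make the ternary-features entry above concrete, here is a rough sketch in which the random projection matrix is drawn from $\{-1, 0, +1\}$, so a hardware implementation needs only additions and sign flips; the sparsity level and cosine nonlinearity are assumptions, not the paper's tuned design.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, N_feat = 200, 32, 512
X = rng.normal(size=(n, p))

# Ternary weights: P(+1) = P(-1) = (1 - s) / 2 and P(0) = s (sparsity s).
s = 0.5
w = rng.choice([-1.0, 0.0, 1.0], size=(p, N_feat),
               p=[(1 - s) / 2, s, (1 - s) / 2])

# In exact arithmetic X @ w uses only additions/subtractions of inputs,
# so dedicated hardware can drop multipliers and store 2-bit weights.
Z = np.cos(X @ w / np.sqrt(p))   # random-feature map (nonlinearity assumed)
K_approx = Z @ Z.T / N_feat      # implicit kernel these features induce
print(K_approx.shape)
```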
- Random Geometric Graphs on Euclidean Balls [2.28438857884398]
We consider a latent space model for random graphs where a node $i$ is associated to a random latent point $X_i$ on the Euclidean unit ball.
For certain link functions, the model generates graphs whose degree distribution has power-law-type tails.
arXiv Detail & Related papers (2020-10-26T17:21:57Z)
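A minimal generator for the latent-space model in the entry above: latent points sampled uniformly on the Euclidean unit ball, edges drawn independently through a distance-based link function. The particular link below is only an assumed example; the paper derives power-law-type degree tails for certain link functions, not necessarily this one.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 1000, 3

# Uniform latent points on the unit ball: random direction times a radius
# with density proportional to r^(d-1).
G = rng.normal(size=(n, d))
U = G / np.linalg.norm(G, axis=1, keepdims=True)
X = U * (rng.uniform(size=(n, 1)) ** (1.0 / d))

# Edge i~j with probability p(|X_i - X_j|) for a link function p.
dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
p_link = 1.0 / (1.0 + (dist / 0.2) ** 3)   # illustrative link choice
A = rng.uniform(size=(n, n)) < p_link
A = np.triu(A, 1)                          # one Bernoulli draw per pair
A = (A | A.T).astype(int)                  # symmetric, zero diagonal
print("mean degree:", float(A.sum(axis=1).mean()))
```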
- Learning Sparse Graph Laplacian with K Eigenvector Prior via Iterative GLASSO and Projection [58.5350491065936]
We consider a structural assumption on the graph Laplacian matrix $L$.
The first $K$ eigenvectors of $L$ are pre-selected, e.g., based on domain-specific criteria.
We design an efficient hybrid graphical lasso/projection algorithm to compute the most suitable graph Laplacian matrix $L^* \in \mathcal{H}_{u}^{+}$ given $\bar{C}$.
arXiv Detail & Related papers (2020-10-25T18:12:50Z)
- Lipschitz regularity of graph Laplacians on random data clouds [1.2891210250935146]
We prove high probability interior and global Lipschitz estimates for solutions of graph Poisson equations.
Our results can be used to show that graph Laplacian eigenvectors are, with high probability, essentially Lipschitz regular with constants depending explicitly on their corresponding eigenvalues.
arXiv Detail & Related papers (2020-07-13T20:43:19Z)
- Linear Time Sinkhorn Divergences using Positive Features [51.50788603386766]
Solving optimal transport with an entropic regularization requires computing an $n \times n$ kernel matrix that is repeatedly applied to a vector.
We propose to use instead ground costs of the form $c(x,y) = -\log\langle \varphi(x), \varphi(y)\rangle$ where $\varphi$ is a map from the ground space onto the positive orthant $\mathbb{R}^r_+$, with $r \ll n$.
arXiv Detail & Related papers (2020-06-12T10:21:40Z)
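The point of the positive-features entry above is that with $c(x,y) = -\log\langle\varphi(x),\varphi(y)\rangle$ the Gibbs kernel factorizes as $K = \Phi_x \Phi_y^\top$ with $r$ columns, so each Sinkhorn iteration needs only matrix-vector products of size $n \times r$. A sketch under an assumed positive feature map $\varphi$ (softplus of a shared random projection, not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(5)
n, r = 5000, 64

# Assumed positive feature map: softplus of a shared random projection,
# so that K = Px @ Py.T has strictly positive entries (never formed).
W_proj = rng.normal(size=(2, r)) / np.sqrt(2.0)

def phi(P):
    return np.log1p(np.exp(P @ W_proj))  # entrywise positive features

X = rng.normal(size=(n, 2))
Y = rng.normal(size=(n, 2)) + 1.0
Px, Py = phi(X), phi(Y)  # ground cost c(x, y) = -log <phi(x), phi(y)>

a = np.full(n, 1.0 / n)  # source marginal
b = np.full(n, 1.0 / n)  # target marginal
v = np.ones(n)
for _ in range(200):                 # each step is O(n r), not O(n^2)
    u = a / (Px @ (Py.T @ v))        # K v    computed as  Px (Py^T v)
    v = b / (Py @ (Px.T @ u))        # K^T u  computed as  Py (Px^T u)

row_sums = u * (Px @ (Py.T @ v))     # marginals of diag(u) K diag(v)
print("max marginal error:", float(np.abs(row_sums - a).max()))
```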
- A Random Matrix Analysis of Random Fourier Features: Beyond the Gaussian Kernel, a Precise Phase Transition, and the Corresponding Double Descent [85.77233010209368]
This article characterizes the exact asymptotics of random Fourier feature (RFF) regression in the realistic setting where the number of data samples $n$, their dimension $p$, and the number of random features $N$ are all large and comparable.
This analysis also provides accurate estimates of training and test regression errors for large $n,p,N$.
arXiv Detail & Related papers (2020-06-09T02:05:40Z)
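Finally, for the RFF entry above, the classical Rahimi-Recht random Fourier feature map for the Gaussian kernel is easy to sketch in the regime the analysis targets, where $n$, $p$, and $N$ are comparable; the toy sizes and bandwidth below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
n, p, N_feat = 400, 50, 600   # n, p, N all large and comparable
gamma = 0.1                   # target kernel k(x, y) = exp(-gamma |x - y|^2)

X = rng.normal(size=(n, p)) / np.sqrt(p)

# Rahimi-Recht features: omega ~ N(0, 2 gamma I), uniform phase shifts.
omega = rng.normal(scale=np.sqrt(2.0 * gamma), size=(p, N_feat))
phase = rng.uniform(0.0, 2.0 * np.pi, N_feat)
Z = np.sqrt(2.0 / N_feat) * np.cos(X @ omega + phase)

K_rff = Z @ Z.T                                    # feature-space kernel
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_true = np.exp(-gamma * sq)
print("mean abs error:", float(np.abs(K_rff - K_true).mean()))
```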
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.