Bi-stochastically normalized graph Laplacian: convergence to manifold Laplacian and robustness to outlier noise
- URL: http://arxiv.org/abs/2206.11386v3
- Date: Wed, 17 Jul 2024 22:24:56 GMT
- Title: Bi-stochastically normalized graph Laplacian: convergence to manifold Laplacian and robustness to outlier noise
- Authors: Xiuyuan Cheng, Boris Landa
- Abstract summary: Bi-stochastic normalization provides an alternative normalization of graph Laplacians in graph-based data analysis.
We prove the convergence of the bi-stochastically normalized graph Laplacian to the manifold (weighted-)Laplacian with rates.
When the manifold data are corrupted by outlier noise, we prove point-wise consistency of the graph Laplacian.
- Score: 10.418647759223965
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bi-stochastic normalization provides an alternative normalization of graph Laplacians in graph-based data analysis and can be computed efficiently by Sinkhorn-Knopp (SK) iterations. This paper proves the convergence of bi-stochastically normalized graph Laplacian to manifold (weighted-)Laplacian with rates, when $n$ data points are i.i.d. sampled from a general $d$-dimensional manifold embedded in a possibly high-dimensional space. Under certain joint limit of $n \to \infty$ and kernel bandwidth $\epsilon \to 0$, the point-wise convergence rate of the graph Laplacian operator (under 2-norm) is proved to be $ O( n^{-1/(d/2+3)})$ at finite large $n$ up to log factors, achieved at the scaling of $\epsilon \sim n^{-1/(d/2+3)} $. When the manifold data are corrupted by outlier noise, we theoretically prove the graph Laplacian point-wise consistency which matches the rate for clean manifold data plus an additional term proportional to the boundedness of the inner-products of the noise vectors among themselves and with data vectors. Motivated by our analysis, which suggests that not exact bi-stochastic normalization but an approximate one will achieve the same consistency rate, we propose an approximate and constrained matrix scaling problem that can be solved by SK iterations with early termination. Numerical experiments support our theoretical results and show the robustness of bi-stochastically normalized graph Laplacian to high-dimensional outlier noise.
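The normalization at the core of the abstract is straightforward to prototype. Below is a minimal sketch (not the authors' released code; the function name, the Gaussian-kernel convention, the symmetric form of the SK iteration, and the stopping rule are all illustrative assumptions) of bi-stochastically normalizing a kernel affinity matrix by Sinkhorn-Knopp iterations with early termination, then forming the graph Laplacian.

```python
# Minimal sketch, assuming a Gaussian kernel and the symmetric form of
# Sinkhorn-Knopp (SK); not the authors' implementation. Names, the stopping
# rule, and the sign/scaling conventions are illustrative assumptions.
import numpy as np

def bistochastic_graph_laplacian(X, eps, max_iter=500, tol=1e-8):
    """X: (n, D) array of data points; eps: kernel bandwidth epsilon."""
    n = X.shape[0]
    # Gaussian affinity K_ij = exp(-||x_i - x_j||^2 / eps).
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq_dists / eps)
    # Symmetric SK: find eta > 0 so that W = diag(eta) K diag(eta)
    # has (approximately) unit row and column sums.
    eta = np.ones(n)
    for _ in range(max_iter):
        # Geometric-mean damping keeps the symmetric iteration from
        # oscillating; the fixed point satisfies eta_i * (K @ eta)_i = 1.
        eta_new = np.sqrt(eta / (K @ eta))
        # Early termination: per the paper's analysis, an approximate
        # bi-stochastic scaling achieves the same consistency rate.
        if np.max(np.abs(eta_new - eta) / eta) < tol:
            eta = eta_new
            break
        eta = eta_new
    W = eta[:, None] * K * eta[None, :]  # approximately doubly stochastic
    L = (np.eye(n) - W) / eps            # graph Laplacian; sign/scaling conventions vary
    return L, W
```

The early stopping above is only a stand-in for the approximate, constrained matrix scaling problem the paper proposes; the abstract states that SK iterations with early termination solve it, but the exact constraint set and termination criterion are specified in the paper, not here.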
Related papers
- High-probability Convergence Bounds for Nonlinear Stochastic Gradient Descent Under Heavy-tailed Noise [59.25598762373543]
We show high-probability convergence guarantees for nonlinear SGD on streaming data in the presence of heavy-tailed noise.
We demonstrate, analytically and numerically, that the parameter $\eta$ can be used to select the preferred setting for a given problem.
arXiv Detail & Related papers (2023-10-28T18:53:41Z) - Rates of Convergence for Regression with the Graph Poly-Laplacian [3.222802562733786]
Higher-order regularity can be obtained by replacing the Laplacian regulariser with a poly-Laplacian regulariser.
We consider graph poly-Laplacian regularisation in a fully supervised, non-parametric, noise corrupted, regression problem.
arXiv Detail & Related papers (2022-09-06T08:59:15Z) - Optimal Extragradient-Based Bilinearly-Coupled Saddle-Point Optimization [116.89941263390769]
We consider the smooth convex-concave bilinearly-coupled saddle-point problem, $\min_{\mathbf{x}}\max_{\mathbf{y}}\; F(\mathbf{x}) + H(\mathbf{x},\mathbf{y}) - G(\mathbf{y})$, where one has access to first-order oracles for $F$, $G$ as well as the bilinear coupling function $H$.
We present an accelerated gradient-extragradient (AG-EG) descent-ascent algorithm that combines extragradient steps with Nesterov's acceleration.
arXiv Detail & Related papers (2022-06-17T06:10:20Z) - Nonconvex Stochastic Scaled-Gradient Descent and Generalized Eigenvector
Problems [98.34292831923335]
Motivated by the problem of online correlation analysis, we propose the Stochastic Scaled-Gradient Descent (SSD) algorithm.
We bring these ideas together in an application to online correlation analysis, deriving for the first time an optimal one-time-scale algorithm with an explicit rate of local convergence to normality.
arXiv Detail & Related papers (2021-12-29T18:46:52Z) - Learning Sparse Graph with Minimax Concave Penalty under Gaussian Markov
Random Fields [51.07460861448716]
This paper presents a convex-analytic framework to learn sparse graphs from data.
We show that overall convexity is guaranteed by a decomposition that applies a transform to the upper-triangular part of the corresponding matrix.
arXiv Detail & Related papers (2021-09-17T17:46:12Z) - Spectral clustering under degree heterogeneity: a case for the random
walk Laplacian [83.79286663107845]
This paper shows that graph spectral embedding using the random walk Laplacian produces vector representations which are completely corrected for node degree.
In the special case of a degree-corrected block model, the embedding concentrates about $K$ distinct points, representing communities.
arXiv Detail & Related papers (2021-05-03T16:36:27Z) - Convergence Rates of Stochastic Gradient Descent under Infinite Noise
Variance [14.06947898164194]
Heavy tails emerge in stochastic gradient descent (SGD) in various scenarios.
We provide convergence guarantees for SGD under a state-dependent and heavy-tailed noise with a potentially infinite variance.
Our results indicate that even under heavy-tailed noise with infinite variance, SGD can converge to the global optimum.
arXiv Detail & Related papers (2021-02-20T13:45:11Z) - Eigen-convergence of Gaussian kernelized graph Laplacian by manifold
heat interpolation [16.891059233061767]
We study the spectral convergence of graph Laplacian to the Laplace-Beltrami operator.
Data are uniformly sampled on a $d$-dimensional manifold.
We prove new point-wise and Dirichlet form convergence rates for the density-corrected graph Laplacian.
arXiv Detail & Related papers (2021-01-25T03:22:18Z) - Testing correlation of unlabeled random graphs [18.08210501570919]
We study the problem of detecting the edge correlation between two random graphs with $n$ unlabeled nodes.
This is formalized as a hypothesis testing problem, where under the null hypothesis, the two graphs are independently generated.
Under the alternative, the two graphs are edge-correlated under some latent node correspondence, but have the same marginal distributions as the null.
arXiv Detail & Related papers (2020-08-23T19:19:45Z) - Does the $\ell_1$-norm Learn a Sparse Graph under Laplacian Constrained
Graphical Models? [13.572602792770288]
We consider the problem of learning a graph under the Laplacian constrained Gaussian graphical models.
We show that a large regularization parameter will surprisingly lead to a complete graph, i.e., every pair of nodes is connected by an edge.
arXiv Detail & Related papers (2020-06-26T12:06:10Z) - A Random Matrix Analysis of Random Fourier Features: Beyond the Gaussian
Kernel, a Precise Phase Transition, and the Corresponding Double Descent [85.77233010209368]
This article characterizes the exact asymptotics of random Fourier feature (RFF) regression, in the realistic setting where the number of data samples $n$, their dimension $p$, and the dimension of the feature space $N$ are all large and comparable.
This analysis also provides accurate estimates of training and test regression errors for large $n,p,N$.
arXiv Detail & Related papers (2020-06-09T02:05:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.