Rates of Convergence for Regression with the Graph Poly-Laplacian
- URL: http://arxiv.org/abs/2209.02305v1
- Date: Tue, 6 Sep 2022 08:59:15 GMT
- Title: Rates of Convergence for Regression with the Graph Poly-Laplacian
- Authors: Nicolás García Trillos, Ryan Murray, Matthew Thorpe
- Abstract summary: Higher order regularity can be obtained via replacing the Laplacian regulariser with a poly-Laplacian regulariser.
We consider graph poly-Laplacian regularisation in a fully supervised, non-parametric, noise corrupted, regression problem.
- Score: 3.222802562733786
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the (special) smoothing spline problem one considers a variational problem
with a quadratic data fidelity penalty and Laplacian regularisation. Higher
order regularity can be obtained via replacing the Laplacian regulariser with a
poly-Laplacian regulariser. The methodology is readily adapted to graphs and
here we consider graph poly-Laplacian regularisation in a fully supervised,
non-parametric, noise corrupted, regression problem. In particular, given a
dataset $\{x_i\}_{i=1}^n$ and a set of noisy labels
$\{y_i\}_{i=1}^n\subset\mathbb{R}$ we let $u_n:\{x_i\}_{i=1}^n\to\mathbb{R}$ be
the minimiser of an energy which consists of a data fidelity term and an
appropriately scaled graph poly-Laplacian term. When $y_i = g(x_i)+\xi_i$, for
iid noise $\xi_i$, and using the geometric random graph, we identify (with high
probability) the rate of convergence of $u_n$ to $g$ in the large data limit
$n\to\infty$. Furthermore, our rate, up to logarithms, coincides with the known
rate of convergence in the usual smoothing spline model.
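The variational problem described above can be sketched in a few lines. The following is a minimal illustration, not the paper's construction: it uses an $\varepsilon$-ball geometric graph with unit weights, the unnormalised graph Laplacian $L = D - W$, and solves the quadratic first-order condition of $\frac{1}{n}\|u-y\|^2 + \tau \langle u, L^m u\rangle$ directly. The parameter values (`eps`, `m`, `tau`) are illustrative placeholders, not the scalings analysed in the paper.

```python
import numpy as np

def poly_laplacian_fit(X, y, eps=0.1, m=2, tau=1e-5):
    """Minimise (1/n)||u - y||^2 + tau * <u, L^m u> over u on the point cloud X."""
    # Geometric random graph: connect points within distance eps (unit weights).
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = (D2 < eps ** 2).astype(float)
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W  # unnormalised graph Laplacian
    n = len(y)
    # Setting the gradient to zero gives the linear system (I + n*tau*L^m) u = y;
    # the matrix is symmetric positive definite, so the solve is well posed.
    A = np.eye(n) + n * tau * np.linalg.matrix_power(L, m)
    return np.linalg.solve(A, y)

# Example: denoise samples of g(x) = sin(2*pi*x) corrupted by iid Gaussian noise.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.normal(size=200)
u = poly_laplacian_fit(X, y)
```

Since $L^m$ is positive semi-definite, the estimator is a shrinkage of $y$: high-frequency graph modes are damped most, which is what yields the denoising behaviour quantified by the convergence rate above.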
Related papers
- Data subsampling for Poisson regression with pth-root-link [53.63838219437508]
We develop and analyze data subsampling techniques for Poisson regression.
In particular, we consider the Poisson generalized linear model with identity- and square-root link functions.
arXiv Detail & Related papers (2024-10-30T10:09:05Z) - Convergence rates for Poisson learning to a Poisson equation with measure data [2.7961972519572447]
We prove discrete to continuum convergence rates for Poisson Learning, a graph-based semi-supervised learning algorithm.
We show how to regularize the graph Poisson equation via mollification with the graph heat kernel.
We obtain convergence rates that scale, up to logarithmic factors, like $O(\varepsilon^{\frac{1}{d+2}})$ for general data distributions, and $O(\varepsilon^{\frac{2-\sigma}{d+4}})$ for uniformly distributed data.
arXiv Detail & Related papers (2024-07-09T11:54:34Z) - Hypergraph $p$-Laplacian regularization on point clouds for data interpolation [3.79830302036482]
Hypergraphs are widely used to model higher-order relations in data.
We define the $\varepsilon_n$-ball hypergraph and the $k_n$-nearest neighbor hypergraph on a point cloud.
We prove the variational consistency between the hypergraph $p$-Laplacian regularization and the $p$-Laplacian regularization in a semi-supervised setting.
arXiv Detail & Related papers (2024-05-02T09:17:32Z) - High-probability Convergence Bounds for Nonlinear Stochastic Gradient Descent Under Heavy-tailed Noise [59.25598762373543]
We establish high-probability convergence guarantees for learning on streaming data in the presence of heavy-tailed noise.
We demonstrate analytically that $ta$ can be used to identify the preferred choice of setting for a given problem.
arXiv Detail & Related papers (2023-10-28T18:53:41Z) - Efficient Signed Graph Sampling via Balancing & Gershgorin Disc Perfect
Alignment [51.74913666829224]
We show that for datasets with strong inherent anti-correlations, a suitable graph contains both positive and negative edge weights.
We propose a linear-time signed graph sampling method centered on the concept of balanced signed graphs.
Experimental results show that our signed graph sampling method outperformed existing fast sampling schemes noticeably on various datasets.
arXiv Detail & Related papers (2022-08-18T09:19:01Z) - Bi-stochastically normalized graph Laplacian: convergence to manifold Laplacian and robustness to outlier noise [10.418647759223965]
Bi-stochastic normalization provides an alternative normalization of graph Laplacians in graph-based data analysis.
We prove the convergence of bi-stochastically normalized graph Laplacian to manifold (weighted-)Laplacian with rates.
When the manifold data are corrupted by outlier noise, we theoretically prove the graph Laplacian point-wise consistency.
arXiv Detail & Related papers (2022-06-22T21:08:24Z) - Learning Sparse Graph with Minimax Concave Penalty under Gaussian Markov
Random Fields [51.07460861448716]
This paper presents a convex-analytic framework to learn sparse graphs from data.
We show that a triangular convexity decomposition is guaranteed by a transformation corresponding to its upper part.
arXiv Detail & Related papers (2021-09-17T17:46:12Z) - Minimax Optimal Regression over Sobolev Spaces via Laplacian
Regularization on Neighborhood Graphs [25.597646488273558]
We study the statistical properties of Laplacian smoothing, a graph-based approach to nonparametric regression.
We prove that Laplacian smoothing is manifold-adaptive.
arXiv Detail & Related papers (2021-06-03T01:20:41Z) - Accelerated Gradient Tracking over Time-varying Graphs for Decentralized Optimization [59.65871549878937]
We prove that the practical single-loop accelerated gradient tracking needs $O(\frac{\gamma}{(1-\sigma_\gamma)^2}\sqrt{\frac{L}{\epsilon}})$ iterations.
Our convergence rates improve significantly over the ones of $O(\frac{1}{\epsilon^{5/7}})$ and $O((\frac{L}{\mu})^{5/7}\frac{1}{(1-\sigma)^{1.5}}\log\frac{1}{\epsilon})$.
arXiv Detail & Related papers (2021-04-06T15:34:14Z) - Optimal Robust Linear Regression in Nearly Linear Time [97.11565882347772]
We study the problem of high-dimensional robust linear regression where a learner is given access to $n$ samples from the generative model $Y = \langle X, w^* \rangle + \epsilon$
We propose estimators for this problem under two settings: (i) $X$ is L4-L2 hypercontractive, $\mathbb{E}[XX^\top]$ has bounded condition number and $\epsilon$ has bounded variance and (ii) $X$ is sub-Gaussian with identity second moment and $\epsilon$ is
arXiv Detail & Related papers (2020-07-16T06:44:44Z) - Does the $\ell_1$-norm Learn a Sparse Graph under Laplacian Constrained
Graphical Models? [13.572602792770288]
We consider the problem of learning a graph under the Laplacian constrained Gaussian graphical models.
We show that a large regularization parameter will surprisingly lead to a complete graph, i.e., every pair of vertices is connected by an edge.
arXiv Detail & Related papers (2020-06-26T12:06:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.