Convergence rates for Poisson learning to a Poisson equation with measure data
- URL: http://arxiv.org/abs/2407.06783v1
- Date: Tue, 9 Jul 2024 11:54:34 GMT
- Title: Convergence rates for Poisson learning to a Poisson equation with measure data
- Authors: Leon Bungert, Jeff Calder, Max Mihailescu, Kodjo Houssou, Amber Yuan
- Abstract summary: We prove discrete to continuum convergence rates for Poisson Learning, a graph-based semi-supervised learning algorithm.
We show how to regularize the graph Poisson equation via mollification with the graph heat kernel.
We obtain convergence rates that scale, up to logarithmic factors, like $O(\varepsilon^{\frac{1}{d+2}})$ for general data distributions, and $O(\varepsilon^{\frac{2-\sigma}{d+4}})$ for uniformly distributed data.
- Score: 2.7961972519572447
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we prove discrete to continuum convergence rates for Poisson Learning, a graph-based semi-supervised learning algorithm that is based on solving the graph Poisson equation with a source term consisting of a linear combination of Dirac deltas located at labeled points and carrying label information. The corresponding continuum equation is a Poisson equation with measure data in a Euclidean domain $\Omega \subset \mathbb{R}^d$. The singular nature of these equations is challenging and requires an approach with several distinct parts: (1) We prove quantitative error estimates when convolving the measure data of a Poisson equation with (approximately) radial functions supported on balls. (2) We use quantitative variational techniques to prove discrete to continuum convergence rates on random geometric graphs with bandwidth $\varepsilon>0$ for bounded source terms. (3) We show how to regularize the graph Poisson equation via mollification with the graph heat kernel, and we study fine asymptotics of the heat kernel on random geometric graphs. Combining these three pillars we obtain $L^1$ convergence rates that scale, up to logarithmic factors, like $O(\varepsilon^{\frac{1}{d+2}})$ for general data distributions, and $O(\varepsilon^{\frac{2-\sigma}{d+4}})$ for uniformly distributed data, where $\sigma>0$. These rates are valid with high probability if $\varepsilon\gg\left({\log n}/{n}\right)^q$ where $n$ denotes the number of vertices of the graph and $q \approx \frac{1}{3d}$.
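The graph Poisson equation at the heart of Poisson Learning can be sketched in a few lines of numpy. The uniform kernel, the Jacobi-style solver, and all parameter values below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def poisson_learning(X, labeled_idx, labels, eps, n_iter=2000):
    """Toy Poisson Learning on an eps-graph: solve L u = F, where F
    places centered one-hot label vectors (Dirac sources) at the
    labeled points, then classify each node by argmax over columns."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = (d2 < eps ** 2).astype(float)          # uniform kernel weights
    np.fill_diagonal(W, 0.0)
    D = W.sum(1)
    D[D == 0] = 1.0                            # guard against isolated nodes
    k = labels.max() + 1
    onehot = np.eye(k)[labels]
    F = np.zeros((n, k))
    F[labeled_idx] = onehot - onehot.mean(0)   # center so each column sums to zero
    u = np.zeros((n, k))
    for _ in range(n_iter):                    # Jacobi iteration for L = D - W
        u = (W @ u + F) / D[:, None]
        u -= u.mean(0)                         # fix the additive constant
    return u.argmax(1)
```

Centering the one-hot sources is essential: the graph Poisson equation is only solvable when the right-hand side has zero mean, which is why the abstract's source term is a linear combination of Dirac deltas with weights summing to zero per class.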
Related papers
- Dimension-free Private Mean Estimation for Anisotropic Distributions [55.86374912608193]
Previous private estimators on distributions over $\mathbb{R}^d$ suffer from a curse of dimensionality.
We present an algorithm whose sample complexity has improved dependence on dimension.
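For context on the dimension dependence being improved, the classical Gaussian-mechanism baseline adds isotropic noise calibrated to the clipped $L^2$ sensitivity. This is a standard textbook sketch, not the paper's estimator, and the clipping radius is an assumed hyperparameter:

```python
import numpy as np

def private_mean(X, clip, eps, delta, seed=0):
    """(eps, delta)-DP mean via the Gaussian mechanism: clip each row to
    L2 norm `clip`, average, and add isotropic Gaussian noise scaled to
    the replace-one sensitivity 2*clip/n of the clipped mean."""
    rng = np.random.default_rng(seed)
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    Xc = X * np.minimum(1.0, clip / norms)
    sens = 2.0 * clip / len(X)
    sigma = sens * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return Xc.mean(0) + rng.normal(0.0, sigma, X.shape[1])
```

Because the noise is isotropic, its magnitude is driven by the worst-case direction; that is exactly the dependence an anisotropy-aware estimator can improve.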
arXiv Detail & Related papers (2024-11-01T17:59:53Z) - Data subsampling for Poisson regression with pth-root-link [53.63838219437508]
We develop and analyze data subsampling techniques for Poisson regression.
In particular, we consider the Poisson generalized linear model with identity (ID) and square-root link functions.
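A minimal maximum-likelihood fit for the square-root-link case can be sketched as follows. The warm start via the variance-stabilizing square-root transform and plain gradient descent are illustrative choices, and the subsampling here is uniform row sampling, not the paper's scheme:

```python
import numpy as np

def fit_poisson_sqrt_link(X, y, n_steps=3000, lr=0.01, subsample=None, seed=0):
    """Poisson GLM with square-root link, sqrt(mu_i) = x_i . beta:
    gradient descent on the negative log-likelihood sum(mu - y log mu),
    optionally restricted to a uniform subsample of the rows."""
    if subsample is not None:
        idx = np.random.default_rng(seed).choice(len(X), subsample, replace=False)
        X, y = X[idx], y[idx]
    # warm start: the square root is variance-stabilizing for Poisson data
    beta, *_ = np.linalg.lstsq(X, np.sqrt(y), rcond=None)
    for _ in range(n_steps):
        eta = X @ beta
        mu = eta ** 2 + 1e-12
        # gradient of sum(mu - y log mu) with mu = eta^2, averaged over rows
        grad = X.T @ (2.0 * eta * (1.0 - y / mu)) / len(X)
        beta -= lr * grad
    return beta
```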
arXiv Detail & Related papers (2024-10-30T10:09:05Z) - Continuum limit of $p$-biharmonic equations on graphs [3.79830302036482]
The behavior of the solution is investigated when the random geometric graph is considered and the number of data points goes to infinity.
We show that the continuum limit is an appropriately weighted $p$-biharmonic equation with homogeneous Neumann boundary conditions.
arXiv Detail & Related papers (2024-04-30T16:29:44Z) - A Unified Framework for Uniform Signal Recovery in Nonlinear Generative
Compressed Sensing [68.80803866919123]
Under nonlinear measurements, most prior results are non-uniform, i.e., they hold with high probability for a fixed $\mathbf{x}^*$ rather than for all $\mathbf{x}^*$ simultaneously.
Our framework accommodates GCS with 1-bit/uniformly quantized observations and single index models as canonical examples.
We also develop a concentration inequality that produces tighter bounds for product processes whose index sets have low metric entropy.
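As a point of reference for the 1-bit setting, the simplest (non-uniform) estimator is back-projection, which recovers only the signal direction; this is a classical baseline, not the paper's uniform-recovery framework:

```python
import numpy as np

def one_bit_direction(A, y):
    """Back-projection for 1-bit measurements y = sign(A x) with Gaussian
    A: E[A^T y / m] is proportional to x / ||x||, so the normalized
    back-projection estimates the signal direction (its norm is lost
    to the quantization)."""
    z = A.T @ y / len(y)
    return z / np.linalg.norm(z)
```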
arXiv Detail & Related papers (2023-09-25T17:54:19Z) - Efficient Sampling of Stochastic Differential Equations with Positive
Semi-Definite Models [91.22420505636006]
This paper deals with the problem of efficient sampling from a differential equation, given the drift function and the diffusion matrix.
It is possible to obtain independent and identically distributed (i.i.d.) samples at precision $\varepsilon$ with a cost that is $m^2 d \log(1/\varepsilon)$.
Our results suggest that as the true solution gets smoother, we can circumvent the curse of dimensionality without requiring any sort of convexity.
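For contrast with the i.i.d.-sampling cost above, the classical path-simulation baseline that such methods sidestep is Euler-Maruyama; a generic sketch (the drift, diffusion, and step counts below are arbitrary):

```python
import numpy as np

def euler_maruyama(drift, diffusion, x0, T, n_steps, seed=0):
    """Simulate dX_t = drift(X_t) dt + diffusion(X_t) dW_t with the
    Euler-Maruyama scheme. x0 may be a vector of initial conditions,
    in which case independent paths are advanced in parallel."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), size=x.shape)
        x = x + drift(x) * dt + diffusion(x) * dw
    return x
```

Note that the paths produced this way are correlated with the discretization error, whereas the paper's goal is genuinely i.i.d. samples at prescribed precision.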
arXiv Detail & Related papers (2023-03-30T02:50:49Z) - Rates of Convergence for Regression with the Graph Poly-Laplacian [3.222802562733786]
Higher order regularity can be obtained via replacing the Laplacian regulariser with a poly-Laplacian regulariser.
We consider graph poly-Laplacian regularisation in a fully supervised, non-parametric, noise corrupted, regression problem.
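The fully supervised problem has a closed-form solution; a toy numpy sketch on a given weight matrix W (the penalty strength `lam` and order `m` are arbitrary illustrative choices):

```python
import numpy as np

def poly_laplacian_regression(W, y, lam=1.0, m=2):
    """Graph regression with a poly-Laplacian penalty: minimize
    ||u - y||^2 + lam * u^T L^m u over u, where L = D - W is the
    graph Laplacian. The first-order condition gives
    (I + lam * L^m) u = y."""
    L = np.diag(W.sum(1)) - W
    A = np.eye(len(y)) + lam * np.linalg.matrix_power(L, m)
    return np.linalg.solve(A, y)
```

Raising the Laplacian to the power $m$ penalizes higher-order roughness, which is what yields the higher-order regularity mentioned above.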
arXiv Detail & Related papers (2022-09-06T08:59:15Z) - A Law of Robustness beyond Isoperimetry [84.33752026418045]
We prove a Lipschitzness lower bound $\Omega(\sqrt{n/p})$ on the robustness of interpolating neural network parameters on arbitrary distributions.
We then show the potential benefit of overparametrization for smooth data when $n=\mathrm{poly}(d)$.
We disprove the potential existence of an $O(1)$-Lipschitz robust interpolating function when $n=\exp(\omega(d))$.
arXiv Detail & Related papers (2022-02-23T16:10:23Z) - Accelerated Gradient Tracking over Time-varying Graphs for Decentralized Optimization [59.65871549878937]
We prove that the practical single loop accelerated gradient tracking needs $O\left(\left(\frac{\gamma}{1-\sigma_\gamma}\right)^2\sqrt{\frac{L}{\epsilon}}\right)$.
Our convergence rates improve significantly over the ones of $O\left(\frac{1}{\epsilon^{5/7}}\right)$ and $O\left(\left(\frac{L}{\mu}\right)^{5/7}\frac{1}{(1-\sigma)^{1.5}}\log\frac{1}{\epsilon}\right)$.
arXiv Detail & Related papers (2021-04-06T15:34:14Z) - Random Geometric Graphs on Euclidean Balls [2.28438857884398]
We consider a latent space model for random graphs where a node $i$ is associated to a random latent point $X_i$ on the Euclidean unit ball.
For certain link functions, the model considered here generates graphs whose degree distributions have power-law-type tails.
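The latent space model can be sampled directly; the hard threshold link below is just one admissible choice of link function:

```python
import numpy as np

def rgg_on_ball(n, d, link, seed=0):
    """Latent-space random graph: latent points X_i uniform on the unit
    ball in R^d; edge i~j is drawn independently with probability
    link(||X_i - X_j||)."""
    rng = np.random.default_rng(seed)
    # uniform on the ball: uniform direction (normalized Gaussian)
    # times radius U^(1/d)
    g = rng.normal(size=(n, d))
    r = rng.uniform(size=n) ** (1.0 / d)
    X = g / np.linalg.norm(g, axis=1, keepdims=True) * r[:, None]
    dist = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    U = rng.uniform(size=(n, n))
    A = np.triu((U < link(dist)).astype(int), 1)  # sample upper triangle
    return X, A + A.T                             # symmetrize, zero diagonal
```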
arXiv Detail & Related papers (2020-10-26T17:21:57Z) - Lipschitz regularity of graph Laplacians on random data clouds [1.2891210250935146]
We prove high probability interior and global Lipschitz estimates for solutions of graph Poisson equations.
Our results can be used to show that graph Laplacian eigenvectors are, with high probability, essentially Lipschitz regular with constants depending explicitly on their corresponding eigenvalues.
arXiv Detail & Related papers (2020-07-13T20:43:19Z) - Rates of Convergence for Laplacian Semi-Supervised Learning with Low
Labeling Rates [3.867363075280544]
We study graph-based Laplacian semi-supervised learning at low labeling rates.
At very low label rates, Laplacian learning becomes degenerate and the solution is roughly constant with spikes at each labeled data point.
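The degeneracy can be reproduced with the basic harmonic-extension formulation of Laplacian learning (a standard construction; the eps-graph in the test is an arbitrary example): clamp the labels and require $Lu = 0$ at unlabeled nodes, then inspect how flat the solution is away from the labeled points.

```python
import numpy as np

def laplace_learning(W, labeled_idx, labeled_vals):
    """Laplacian learning: u equals the given values on labeled nodes
    and is graph-harmonic (L u = 0) on unlabeled nodes, obtained by
    solving the reduced linear system L_ff u_f = -L_fl u_l."""
    n = len(W)
    L = np.diag(W.sum(1)) - W
    u = np.zeros(n)
    u[labeled_idx] = labeled_vals
    free = np.setdiff1d(np.arange(n), labeled_idx)
    u[free] = np.linalg.solve(L[np.ix_(free, free)],
                              -L[np.ix_(free, labeled_idx)] @ labeled_vals)
    return u
```

By the maximum principle the solution stays between the label values; at very low label rates it concentrates near its mean, with spikes only at the labeled nodes.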
arXiv Detail & Related papers (2020-06-04T10:46:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.