Minimax Rates for the Estimation of Eigenpairs of Weighted Laplace-Beltrami Operators on Manifolds
- URL: http://arxiv.org/abs/2506.00171v1
- Date: Fri, 30 May 2025 19:19:25 GMT
- Title: Minimax Rates for the Estimation of Eigenpairs of Weighted Laplace-Beltrami Operators on Manifolds
- Authors: Nicolás García Trillos, Chenghui Li, Raghavendra Venkatraman
- Abstract summary: We study the problem of estimating eigenpairs of elliptic differential operators from samples of a distribution $\rho$ supported on a manifold $M$. We find that eigenpairs of graph Laplacians induce manifold agnostic estimators with an error of approximation that, up to logarithmic corrections, matches our lower bounds.
- Score: 7.639886528552829
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the problem of estimating eigenpairs of elliptic differential operators from samples of a distribution $\rho$ supported on a manifold $M$. The operators discussed in the paper are relevant in unsupervised learning and in particular are obtained by taking suitable scaling limits of widely used graph Laplacians over data clouds. We study the minimax risk for this eigenpair estimation problem and explore the rates of approximation that can be achieved by commonly used graph Laplacians built from random data. More concretely, assuming that $\rho$ belongs to a certain family of distributions with controlled second derivatives, and assuming that the $d$-dimensional manifold $M$ where $\rho$ is supported has bounded geometry, we prove that the statistical minimax rate for approximating eigenvalues and eigenvectors in the $H^1(M)$-sense is $n^{-2/(d+4)}$, a rate that matches the minimax rate for a closely related density estimation problem. We then revisit the literature studying Laplacians over proximity graphs in the large data limit and prove that, under slightly stronger regularity assumptions on the data generating model, eigenpairs of graph Laplacians induce manifold agnostic estimators with an error of approximation that, up to logarithmic corrections, matches our lower bounds. Our analysis allows us to expand the existing literature on graph-based learning in at least two significant ways: 1) we consider stronger norms to measure the error of approximation than the ones that had been analyzed in the past; 2) our rates of convergence are uniform over a family of smooth distributions and do not just apply to densities with special symmetries, and, as a consequence of our lower bounds, are essentially sharp when the connectivity of the graph is sufficiently high.
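The pipeline the abstract describes (build a proximity graph over the samples, then use eigenpairs of its Laplacian as estimators of the eigenpairs of the limiting weighted Laplace-Beltrami operator) can be sketched in a few lines. This is a minimal illustration using an indicator-kernel $\varepsilon$-graph over samples from the unit circle; the bandwidth, sample size, and scaling constant below are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

# Illustrative sketch (assumed parameters): samples of rho on the unit
# circle, a 1-dimensional manifold M embedded in R^2.
rng = np.random.default_rng(0)
n, eps = 400, 0.3
theta = rng.uniform(0.0, 2.0 * np.pi, n)
X = np.column_stack([np.cos(theta), np.sin(theta)])

# eps-graph: connect distinct points at Euclidean distance less than eps.
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
W = ((D < eps) & (D > 0)).astype(float)

# Unnormalized graph Laplacian L = deg - W, rescaled by n * eps^(d+2)
# with d = 1 so its eigenvalues approximate those of the limiting
# operator up to a kernel-dependent constant (scaling is an assumption).
L = np.diag(W.sum(axis=1)) - W
L = L / (n * eps**3)

evals = np.sort(np.linalg.eigvalsh(L))
# evals[0] is ~0 (constant eigenvector); higher eigenvalues mirror the
# near-degenerate pairs of the circle's Laplace-Beltrami spectrum.
```

Eigenvectors of `L` would similarly serve as estimators of the limiting eigenfunctions, with the $H^1(M)$-error rates analyzed in the paper.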
Related papers
- Central limit theorems for the eigenvalues of graph Laplacians on data clouds [6.993491018326815]
We consider the Laplacian operator $\Delta_n$ associated to an $\varepsilon$-graph over $X_n$. A formal argument allows us to interpret this variance as the dissipation of a suitable energy with respect to the Fisher-Rao geometry. We provide a statistical interpretation of the geometric variance in terms of a Cramér-Rao lower bound for the estimation of the eigenvalues of certain weighted Laplace-Beltrami operators.
arXiv Detail & Related papers (2025-07-24T21:03:20Z) - Statistical Estimation Under Distribution Shift: Wasserstein Perturbations and Minimax Theory [24.540342159350015]
We focus on Wasserstein distribution shifts, where every data point may undergo a slight perturbation.
We consider perturbations that are either independent or coordinated joint shifts across data points.
We analyze several important statistical problems, including location estimation, linear regression, and non-parametric density estimation.
arXiv Detail & Related papers (2023-08-03T16:19:40Z) - Curvature-Independent Last-Iterate Convergence for Games on Riemannian Manifolds [77.4346324549323]
We show that a step size agnostic to the curvature of the manifold achieves a curvature-independent and linear last-iterate convergence rate.
To the best of our knowledge, the possibility of curvature-independent rates and/or last-iterate convergence has not been considered before.
arXiv Detail & Related papers (2023-06-29T01:20:44Z) - General Gaussian Noise Mechanisms and Their Optimality for Unbiased Mean Estimation [58.03500081540042]
A classical approach to private mean estimation is to compute the true mean and add unbiased, but possibly correlated, Gaussian noise to it.
We show that for every input dataset, an unbiased mean estimator satisfying concentrated differential privacy introduces approximately at least as much error.
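The classical approach summarized above can be sketched directly; the data range, clipping, and noise scale below are illustrative assumptions rather than the privacy calibration analyzed in the paper.

```python
import numpy as np

# Illustrative Gaussian mechanism for private mean estimation: with data
# in [0, 1], replacing one record changes the empirical mean by at most
# 1/n, so unbiased Gaussian noise is calibrated to that sensitivity.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=1000)
n = len(x)
sensitivity = 1.0 / n          # worst-case change of the mean per record
sigma = 2.0 * sensitivity      # noise scale (assumed privacy calibration)
private_mean = float(x.mean() + rng.normal(0.0, sigma))
```

The paper's point is a converse: any unbiased mean estimator satisfying concentrated differential privacy must introduce approximately at least this much error.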
arXiv Detail & Related papers (2023-01-31T18:47:42Z) - Kernel-based off-policy estimation without overlap: Instance optimality beyond semiparametric efficiency [53.90687548731265]
We study optimal procedures for estimating a linear functional based on observational data.
For any convex and symmetric function class $\mathcal{F}$, we derive a non-asymptotic local minimax bound on the mean-squared error.
arXiv Detail & Related papers (2023-01-16T02:57:37Z) - Variance estimation in graphs with the fused lasso [7.732474038706013]
We develop a linear time estimator for the homoscedastic case that can consistently estimate the variance in general graphs.
We show that our estimator attains minimax rates for the chain and 2D grid graphs when the mean signal has total variation with canonical scaling.
arXiv Detail & Related papers (2022-07-26T03:50:51Z) - Bi-stochastically normalized graph Laplacian: convergence to manifold Laplacian and robustness to outlier noise [10.418647759223965]
Bi-stochastic normalization provides an alternative normalization of graph Laplacians in graph-based data analysis.
We prove the convergence of bi-stochastically normalized graph Laplacian to manifold (weighted-)Laplacian with rates.
When the manifold data are corrupted by outlier noise, we theoretically prove the graph Laplacian point-wise consistency.
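Bi-stochastic normalization, as summarized above, rescales a kernel matrix so that every row and column sums to one. A minimal sketch via symmetric Sinkhorn-Knopp iterations follows; the Gaussian bandwidth, iteration count, and damped update are generic assumptions, not the paper's exact scheme.

```python
import numpy as np

# Strictly positive Gaussian affinity matrix over a synthetic point cloud
# (bandwidth 0.5 is an illustrative choice).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
K = np.exp(-D2 / 0.5)

# Symmetric Sinkhorn-Knopp: find d > 0 with diag(d) K diag(d) doubly
# stochastic. The damped update d <- sqrt(d / (K d)) has fixed point
# d_i * (K d)_i = 1, i.e. every row (and, by symmetry, every column)
# of the rescaled matrix sums to 1.
d = np.ones(len(K))
for _ in range(1000):
    d = np.sqrt(d / (K @ d))

W = d[:, None] * K * d[None, :]         # bi-stochastic affinity
L_bistoch = np.eye(len(K)) - W          # bi-stochastically normalized Laplacian
```

Compared with the usual random-walk or symmetric normalization, the resulting `L_bistoch` is symmetric with constant row sums, which is the structure whose convergence to the weighted manifold Laplacian the paper quantifies.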
arXiv Detail & Related papers (2022-06-22T21:08:24Z) - Robust Linear Predictions: Analyses of Uniform Concentration, Fast Rates and Model Misspecification [16.0817847880416]
We offer a unified framework that includes a broad variety of linear prediction problems on a Hilbert space.
We show that for misspecification level $\epsilon$, these estimators achieve an error rate of $O\left(\max\left\{|\mathcal{O}|^{1/2} n^{-1/2}, |\mathcal{I}|^{1/2} n^{-1}\right\} + \epsilon\right)$, matching the best-known rates in the literature.
arXiv Detail & Related papers (2022-01-06T08:51:08Z) - The Performance of the MLE in the Bradley-Terry-Luce Model in $\ell_{\infty}$-Loss and under General Graph Topologies [76.61051540383494]
We derive novel, general upper bounds on the $\ell_{\infty}$ estimation error of the Bradley-Terry-Luce model.
We demonstrate that the derived bounds perform well and in some cases are sharper compared to known results.
arXiv Detail & Related papers (2021-10-20T23:46:35Z) - Optimal policy evaluation using kernel-based temporal difference methods [78.83926562536791]
We use reproducing kernel Hilbert spaces for estimating the value function of an infinite-horizon discounted Markov reward process.
We derive a non-asymptotic upper bound on the error with explicit dependence on the eigenvalues of the associated kernel operator.
We prove minimax lower bounds over sub-classes of MRPs.
arXiv Detail & Related papers (2021-09-24T14:48:20Z) - Near-optimal inference in adaptive linear regression [60.08422051718195]
Even simple methods like least squares can exhibit non-normal behavior when data is collected in an adaptive manner.
We propose a family of online debiasing estimators to correct these distributional anomalies in least squares estimation.
We demonstrate the usefulness of our theory via applications to multi-armed bandit, autoregressive time series estimation, and active learning with exploration.
arXiv Detail & Related papers (2021-07-05T21:05:11Z) - Lipschitz regularity of graph Laplacians on random data clouds [1.2891210250935146]
We prove high probability interior and global Lipschitz estimates for solutions of graph Poisson equations.
Our results can be used to show that graph Laplacian eigenvectors are, with high probability, essentially Lipschitz regular with constants depending explicitly on their corresponding eigenvalues.
arXiv Detail & Related papers (2020-07-13T20:43:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the listed information and is not responsible for any consequences of its use.