Computational Lower Bounds for Graphon Estimation via Low-degree Polynomials
- URL: http://arxiv.org/abs/2308.15728v4
- Date: Mon, 12 Aug 2024 23:05:48 GMT
- Title: Computational Lower Bounds for Graphon Estimation via Low-degree Polynomials
- Authors: Yuetian Luo, Chao Gao
- Abstract summary: We show that the estimation error rates of low-degree polynomial estimators cannot be significantly better than that of universal singular value thresholding (USVT).
Our results are proved using the recent low-degree polynomial framework of Schramm and Wein (2022), and we overcome a few key challenges in applying it to the general graphon estimation problem.
- Score: 14.908056172167052
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graphon estimation has been one of the most fundamental problems in network analysis and has received considerable attention in the past decade. From the statistical perspective, the minimax error rate of graphon estimation has been established by Gao et al. (2015) for both stochastic block model (SBM) and nonparametric graphon estimation. The statistically optimal estimators are based on constrained least squares and have computational complexity exponential in the dimension. From the computational perspective, the best-known polynomial-time estimator is based on universal singular value thresholding (USVT), but it can only achieve a much slower estimation error rate than the minimax one. The computational optimality of the USVT, or equivalently the existence of a computational barrier in graphon estimation, has been a long-standing open problem. In this work, we provide rigorous evidence for the computational barrier in graphon estimation via low-degree polynomials. Specifically, in SBM graphon estimation, we show that the estimation error rates of low-degree polynomial estimators cannot be significantly better than that of the USVT under a wide range of parameter regimes, and in nonparametric graphon estimation, we show that low-degree polynomial estimators achieve estimation error rates strictly slower than the minimax rate. Our results are proved based on the recent development of low-degree polynomials by Schramm and Wein (2022), while we overcome a few key challenges in applying it to the general graphon estimation problem. By leveraging our main results, we also provide a computational lower bound on the clustering error for community detection in SBM with a growing number of communities, which yields a new piece of evidence for the conjectured Kesten-Stigum threshold for efficient community recovery. Finally, we extend our computational lower bounds to sparse graphon estimation and biclustering.
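To make the USVT baseline concrete, below is a minimal Python sketch of universal singular value thresholding applied to an SBM adjacency matrix; the function name, the threshold constant, and the toy parameters are illustrative choices and not the tuned ones from the paper.

```python
# Minimal USVT sketch: keep large singular values, clip to [0, 1].
import numpy as np

def usvt_estimate(A, threshold_const=2.01):
    """Estimate P = E[A] from a symmetric adjacency matrix A by keeping only
    singular values above threshold_const * sqrt(n) and clipping to [0, 1]."""
    n = A.shape[0]
    U, s, Vt = np.linalg.svd(A)
    keep = s >= threshold_const * np.sqrt(n)
    P_hat = (U[:, keep] * s[keep]) @ Vt[keep, :]
    return np.clip(P_hat, 0.0, 1.0)

# Toy usage: a two-community stochastic block model.
rng = np.random.default_rng(0)
n = 200
z = rng.integers(0, 2, size=n)                    # community labels
B = np.array([[0.6, 0.2], [0.2, 0.5]])            # connectivity matrix
P = B[z][:, z]                                    # true edge probabilities
A = rng.binomial(1, np.triu(P, 1))
A = A + A.T                                       # symmetric adjacency, zero diagonal
print("mean squared error:", np.mean((usvt_estimate(A) - P) ** 2))
```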
Related papers
- Kernel-based off-policy estimation without overlap: Instance optimality
beyond semiparametric efficiency [53.90687548731265]
We study optimal procedures for estimating a linear functional based on observational data.
For any convex and symmetric function class $\mathcal{F}$, we derive a non-asymptotic local minimax bound on the mean-squared error.
arXiv Detail & Related papers (2023-01-16T02:57:37Z) - Variance estimation in graphs with the fused lasso [7.732474038706013]
We develop a linear time estimator for the homoscedastic case that can consistently estimate the variance in general graphs.
We show that our estimator attains minimax rates for the chain and 2D grid graphs when the mean signal has total variation with canonical scaling.
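As a point of reference, a generic difference-based variance estimator over graph edges (not the paper's fused-lasso construction) can be sketched as follows; the function and variable names are illustrative.

```python
import numpy as np

def edge_difference_variance(y, edges):
    """Difference-based variance estimate on a graph: if the mean signal is
    roughly constant across each edge (i, j), then (y_i - y_j)^2 / 2 is an
    approximately unbiased estimate of the noise variance sigma^2."""
    i, j = np.asarray(edges).T
    return 0.5 * np.mean((y[i] - y[j]) ** 2)

# Toy usage on a chain graph with a piecewise-constant mean.
rng = np.random.default_rng(0)
n, sigma = 500, 0.3
mean = np.repeat([0.0, 1.0], n // 2)
y = mean + sigma * rng.normal(size=n)
edges = [(i, i + 1) for i in range(n - 1)]
print(edge_difference_variance(y, edges))  # close to sigma**2 = 0.09
```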
arXiv Detail & Related papers (2022-07-26T03:50:51Z) - Optimal Estimation and Computational Limit of Low-rank Gaussian Mixtures [12.868722327487752]
We propose a low-rank Gaussian mixture model (LrMM) assuming each matrix-valued observation has a planted low-rank structure.
We prove the minimax optimality of a maximum likelihood estimator which, in general, is computationally infeasible.
Our results reveal multiple phase transitions in the minimax error rates and the statistical-to-computational gap.
arXiv Detail & Related papers (2022-01-22T12:43:25Z) - Learning to Estimate Without Bias [57.82628598276623]
The Gauss-Markov theorem states that the weighted least squares estimator is a linear minimum variance unbiased estimator (MVUE) in linear models.
In this paper, we take a first step towards extending this result to nonlinear settings via deep learning with bias constraints.
A second motivation for BCE is in applications where multiple estimates of the same unknown are averaged for improved performance.
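For intuition only, here is a minimal PyTorch-style sketch of a bias-penalized loss; the penalty form, the name `bias_constrained_loss`, and the assumption that each batch shares one underlying parameter are illustrative rather than taken from the paper.

```python
import torch

def bias_constrained_loss(estimates, targets, lam=1.0):
    """Mean squared error plus a squared-bias penalty, where the bias is
    approximated by the mean error over a batch whose samples share the
    same underlying unknown parameter."""
    err = estimates - targets
    return (err ** 2).mean() + lam * err.mean(dim=0).pow(2).sum()

# Example: penalize the systematic offset of biased estimates.
est, tgt = torch.zeros(32, 1) + 0.2, torch.zeros(32, 1)
print(bias_constrained_loss(est, tgt))  # 0.04 + lam * 0.04
```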
arXiv Detail & Related papers (2021-10-24T10:23:51Z) - Non asymptotic estimation lower bounds for LTI state space models with
Cramér-Rao and van Trees [1.14219428942199]
We study the estimation problem for linear time-invariant (LTI) state-space models with Gaussian excitation of an unknown covariance.
We provide non-asymptotic lower bounds for the expected estimation error and the mean square estimation risk of the least squares estimator.
Our results extend and improve existing lower bounds to lower bounds in expectation of the mean square estimation risk.
arXiv Detail & Related papers (2021-09-17T15:00:25Z) - Robust W-GAN-Based Estimation Under Wasserstein Contamination [8.87135311567798]
We study several estimation problems under a Wasserstein contamination model and present computationally tractable estimators motivated by generative adversarial networks (GANs).
Specifically, we analyze properties of Wasserstein GAN-based estimators for adversarial location estimation, covariance matrix estimation, and linear regression.
Our proposed estimators are minimax optimal in many scenarios.
arXiv Detail & Related papers (2021-01-20T05:15:16Z) - Rao-Blackwellizing the Straight-Through Gumbel-Softmax Gradient
Estimator [93.05919133288161]
We show that the variance of the straight-through variant of the popular Gumbel-Softmax estimator can be reduced through Rao-Blackwellization.
This provably reduces the mean squared error.
We empirically demonstrate that this leads to variance reduction, faster convergence, and generally improved performance in two unsupervised latent variable models.
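For context, the straight-through Gumbel-Softmax estimator that the paper Rao-Blackwellizes can be sketched as follows in PyTorch; the Rao-Blackwellized variant itself (averaging the relaxed gradient over Gumbel draws consistent with the sampled category) is not shown, and the function name is illustrative.

```python
import torch
import torch.nn.functional as F

def straight_through_gumbel_softmax(logits, tau=1.0):
    """Sample a one-hot vector: the forward pass is discrete (argmax), while
    the backward pass uses the Gumbel-Softmax relaxation (straight-through)."""
    # Gumbel(0, 1) noise via inverse-CDF sampling (clamped to avoid log(0)).
    u = torch.rand_like(logits).clamp(1e-20, 1 - 1e-7)
    gumbels = -torch.log(-torch.log(u))
    y_soft = F.softmax((logits + gumbels) / tau, dim=-1)
    index = y_soft.argmax(dim=-1, keepdim=True)
    y_hard = torch.zeros_like(logits).scatter_(-1, index, 1.0)
    return y_hard - y_soft.detach() + y_soft  # gradients flow through y_soft
```

PyTorch also ships an equivalent built-in, `torch.nn.functional.gumbel_softmax(logits, tau=tau, hard=True)`.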
arXiv Detail & Related papers (2020-10-09T22:54:38Z) - Learning Minimax Estimators via Online Learning [55.92459567732491]
We consider the problem of designing minimax estimators for estimating parameters of a probability distribution.
We construct an algorithm for finding a mixed-strategy Nash equilibrium.
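As a toy illustration of the online-learning viewpoint (not the paper's algorithm), multiplicative-weights updates on a finite zero-sum game converge, on average, to an approximate mixed-strategy Nash equilibrium; the names, step size, and example below are illustrative.

```python
import numpy as np

def approx_mixed_nash(payoff, iters=5000, eta=0.05):
    """Approximate a mixed-strategy Nash equilibrium of a finite zero-sum game
    (row player minimizes, column player maximizes) via multiplicative weights.
    The time-averaged strategies converge to an approximate equilibrium."""
    m, n = payoff.shape
    row, col = np.ones(m) / m, np.ones(n) / n
    row_sum, col_sum = np.zeros(m), np.zeros(n)
    for _ in range(iters):
        row = row * np.exp(-eta * (payoff @ col))
        row /= row.sum()
        col = col * np.exp(eta * (payoff.T @ row))
        col /= col.sum()
        row_sum += row
        col_sum += col
    return row_sum / iters, col_sum / iters

# Matching pennies: the equilibrium is uniform for both players.
print(approx_mixed_nash(np.array([[1.0, -1.0], [-1.0, 1.0]])))
```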
arXiv Detail & Related papers (2020-06-19T22:49:42Z) - Instability, Computational Efficiency and Statistical Accuracy [101.32305022521024]
We develop a framework that yields guarantees on statistical accuracy based on the interplay between the deterministic convergence rate of the algorithm at the population level and its degree of (in)stability when applied to an empirical object based on $n$ samples.
We provide applications of our general results to several concrete classes of models, including Gaussian mixture estimation, non-linear regression models, and informative non-response models.
arXiv Detail & Related papers (2020-05-22T22:30:52Z) - Computationally efficient sparse clustering [67.95910835079825]
We provide a finite sample analysis of a new clustering algorithm based on PCA.
We show that it achieves the minimax optimal misclustering rate in the regime $\|\theta\| \to \infty$.
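A minimal sketch of the generic "project then cluster" recipe (PCA followed by k-means) is shown below; it is a simplification rather than the paper's exact procedure, and the scikit-learn defaults and toy data are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def pca_kmeans(X, n_clusters=2, n_components=2):
    """Project the rows of X onto the leading principal components,
    then cluster the projected points with k-means."""
    scores = PCA(n_components=n_components).fit_transform(X)
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(scores)

# Toy usage: two groups whose sparse mean vectors differ in a few coordinates.
rng = np.random.default_rng(0)
theta = np.zeros(50)
theta[:5] = 3.0
X = rng.normal(size=(200, 50)) + np.outer(rng.integers(0, 2, 200), theta)
print(np.bincount(pca_kmeans(X)))  # approximate cluster sizes
```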
arXiv Detail & Related papers (2020-05-21T17:51:30Z) - An Optimal Statistical and Computational Framework for Generalized
Tensor Estimation [10.899518267165666]
This paper describes a flexible framework for low-rank tensor estimation problems.
It includes many important instances from applications in computational imaging, genomics, and network analysis.
arXiv Detail & Related papers (2020-02-26T01:54:35Z)