Low-degree lower bounds via almost orthonormal bases
- URL: http://arxiv.org/abs/2509.09353v1
- Date: Thu, 11 Sep 2025 11:07:36 GMT
- Title: Low-degree lower bounds via almost orthonormal bases
- Authors: Alexandra Carpentier, Simone Maria Giancola, Christophe Giraud, Nicolas Verzelen
- Abstract summary: Low-degree polynomials have emerged as a powerful paradigm for providing evidence of statistical--computational gaps across a variety of high-dimensional statistical models. In this work, we propose a more direct proof strategy.
- Score: 47.83594448785856
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Low-degree polynomials have emerged as a powerful paradigm for providing evidence of statistical--computational gaps across a variety of high-dimensional statistical models [Wein25]. For detection problems -- where the goal is to test a planted distribution $\mathbb{P}'$ against a null distribution $\mathbb{P}$ with independent components -- the standard approach is to bound the advantage using an $\mathbb{L}^2(\mathbb{P})$-orthonormal family of polynomials. However, this method breaks down for estimation tasks or more complex testing problems where $\mathbb{P}$ has some planted structures, so that no simple $\mathbb{L}^2(\mathbb{P})$-orthogonal polynomial family is available. To address this challenge, several technical workarounds have been proposed [SW22,SW25], though their implementation can be delicate. In this work, we propose a more direct proof strategy. Focusing on random graph models, we construct a basis of polynomials that is almost orthonormal under $\mathbb{P}$, in precisely those regimes where statistical--computational gaps arise. This almost orthonormal basis not only yields a direct route to establishing low-degree lower bounds, but also allows us to explicitly identify the polynomials that optimize the low-degree criterion. This, in turn, provides insights into the design of optimal polynomial-time algorithms. We illustrate the effectiveness of our approach by recovering known low-degree lower bounds, and establishing new ones for problems such as hidden subcliques, stochastic block models, and seriation models.
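The advantage bound described in the abstract can be made concrete. As a hedged illustration using the standard notation of the low-degree literature (not quoted from the paper): for a degree budget $D$, the low-degree advantage between the planted distribution $\mathbb{P}'$ and the null $\mathbb{P}$ is

```latex
% Standard low-degree advantage between a planted distribution P' and a null P;
% D is the degree budget. Notation follows the usual low-degree literature.
\mathrm{Adv}_{\le D}(\mathbb{P}',\mathbb{P})
  \;=\; \sup_{\substack{f \text{ polynomial} \\ \deg f \le D}}
        \frac{\mathbb{E}_{\mathbb{P}'}[f]}{\sqrt{\mathbb{E}_{\mathbb{P}}[f^2]}}
  \;=\; \Big( \sum_{\alpha:\, \deg h_\alpha \le D} \mathbb{E}_{\mathbb{P}'}[h_\alpha]^2 \Big)^{1/2},
% where {h_alpha} is any L^2(P)-orthonormal basis of polynomials of degree at most D.
```

The closed form on the right is exactly the step that requires an $\mathbb{L}^2(\mathbb{P})$-orthonormal family, and it is this step that an almost orthonormal basis replaces when $\mathbb{P}$ itself carries planted structure.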
Related papers
- Low-Degree Method Fails to Predict Robust Subspace Recovery [11.09360259927697]
We show that the low-degree method and low-degree moments fail to capture algorithms based on anti-concentration. Our results challenge the universality of the low-degree method as a predictor of computational barriers.
arXiv Detail & Related papers (2026-03-03T04:42:13Z) - Sandwiching Polynomials for Geometric Concepts with Low Intrinsic Dimension [23.43080600040766]
We give a new method for constructing low-degree sandwichings that yield greatly improved degree bounds for several fundamental function classes and marginal distributions. Our proof is relatively simple and directly uses the smoothness of the target function's boundary to construct sandwiching Lipschitz functions.
arXiv Detail & Related papers (2026-02-27T16:59:18Z) - Efficient Distribution Learning with Error Bounds in Wasserstein Distance [5.3077298055859545]
Wasserstein distance has emerged as a key metric to quantify distances between probability distributions. We devise a novel framework to approximate an unknown probability distribution $\mathbb{P}$ from a finite set of samples.
arXiv Detail & Related papers (2026-02-08T17:14:30Z) - $p$-Adic Polynomial Regression as Alternative to Neural Network for Approximating $p$-Adic Functions of Many Variables [55.2480439325792]
A regression model is constructed that allows approximating continuous functions with any degree of accuracy. The proposed model can be considered as a simple alternative to possible $p$-adic models based on neural network architecture.
arXiv Detail & Related papers (2025-03-30T15:42:08Z) - Tensor cumulants for statistical inference on invariant distributions [49.80012009682584]
We show that PCA becomes computationally hard at a critical value of the signal's magnitude.
We define a new set of objects, which provide an explicit, near-orthogonal basis for invariants of a given degree.
This basis also lets us analyze a new problem of distinguishing between different ensembles.
arXiv Detail & Related papers (2024-04-29T14:33:24Z) - Computational-Statistical Gaps in Gaussian Single-Index Models [77.1473134227844]
Single-Index Models are high-dimensional regression problems with planted structure.
We show that computationally efficient algorithms, both within the Statistical Query (SQ) and the Low-Degree Polynomial (LDP) frameworks, necessarily require $\Omega(d^{k^\star/2})$ samples.
arXiv Detail & Related papers (2024-03-08T18:50:19Z) - Resilience of Rademacher chaos of low degree [5.3390449198644445]
The resilience of Rademacher chaos is the maximum number of adversarial sign-flips that the chaos can sustain. We provide probabilistic lower-bound guarantees for the resilience of Rademacher chaos of arbitrary yet sufficiently low degree. Our results for decoupled Rademacher chaos of order two and of higher order, whilst established through the same conceptual framework, differ substantially.
arXiv Detail & Related papers (2024-02-16T08:27:55Z) - Multiscale regression on unknown manifolds [13.752772802705978]
We construct low-dimensional coordinates on $\mathcal{M}$ at multiple scales and perform multiscale regression by local fitting.
We analyze the generalization error of our method by proving finite sample bounds in high probability on rich classes of priors.
Our algorithm has quasilinear complexity in the sample size, with constants linear in $D$ and exponential in $d$.
arXiv Detail & Related papers (2021-01-13T15:14:31Z) - Probabilistic Circuits for Variational Inference in Discrete Graphical Models [101.28528515775842]
Inference in discrete graphical models with variational methods is difficult.
Many sampling-based methods have been proposed for estimating the Evidence Lower Bound (ELBO).
We propose a new approach that leverages the tractability of probabilistic circuit models, such as Sum Product Networks (SPNs).
We show that selective-SPNs are suitable as an expressive variational distribution, and prove that when the log-density of the target model is a polynomial, the corresponding ELBO can be computed analytically.
arXiv Detail & Related papers (2020-10-22T05:04:38Z) - Computational Barriers to Estimation from Low-Degree Polynomials [81.67886161671379]
We study the power of low-degree polynomials for the task of detecting the presence of hidden structures.
For a large class of "signal plus noise" problems, we give a user-friendly lower bound for the best possible mean squared error achievable by any degree-$D$ polynomial.
As applications, we give a tight characterization of the low-degree minimum mean squared error for the planted submatrix and planted dense subgraph problems.
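For context on the quantity this entry refers to, here is a sketch using the standard definition from the low-degree estimation literature (not text taken from the abstract): for a scalar quantity of interest $x$ and an observation $Y$, the low-degree minimum mean squared error at degree budget $D$ is

```latex
% Low-degree MMSE for estimating a scalar signal x from an observation Y,
% minimizing over polynomial estimators of degree at most D.
\mathrm{MMSE}_{\le D}
  \;=\; \inf_{\substack{f \text{ polynomial} \\ \deg f \le D}}
        \mathbb{E}\big[ (x - f(Y))^2 \big].
```

A lower bound on this quantity rules out all polynomial estimators of degree at most $D$ at once, which is what makes it a useful proxy for computational hardness of estimation.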
arXiv Detail & Related papers (2020-08-05T17:52:10Z) - Learning the Hypotheses Space from data: Learning Space and U-curve Property [0.0]
This paper presents an extension of the classical PAC learning model in which learning problems are modelled not only by a Hypothesis Space $\mathcal{H}$, but also by a Learning Space $\mathbb{L}(\mathcal{H})$.
Our main contribution is a data-driven general learning algorithm to perform regularized Model Selection on $\mathbb{L}(\mathcal{H})$.
arXiv Detail & Related papers (2020-01-26T22:29:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.