Low-Degree Method Fails to Predict Robust Subspace Recovery
- URL: http://arxiv.org/abs/2603.02594v1
- Date: Tue, 03 Mar 2026 04:42:13 GMT
- Title: Low-Degree Method Fails to Predict Robust Subspace Recovery
- Authors: He Jia, Aravindan Vijayaraghavan
- Abstract summary: We show that the low-degree method and low-degree moments fail to capture algorithms based on anti-concentration. Our results challenge the universality of the low-degree method as a predictor of computational barriers.
- Score: 11.09360259927697
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The low-degree polynomial framework has been highly successful in predicting computational versus statistical gaps for high-dimensional problems in average-case analysis and machine learning. This success has led to the low-degree conjecture, which posits that this method captures the power and limitations of efficient algorithms for a wide class of high-dimensional statistical problems. We identify a natural and basic hypothesis testing problem in $\mathbb{R}^n$ which is polynomial time solvable, but for which the low-degree polynomial method fails to predict its computational tractability even up to degree $k=n^{\Omega(1)}$. Moreover, the low-degree moments match exactly up to degree $k=O(\sqrt{\log n/\log\log n})$. Our problem is a special case of the well-studied robust subspace recovery problem. The lower bounds suggest that there is no polynomial time algorithm for this problem. In contrast, we give a simple and robust polynomial time algorithm that solves the problem (and noisy variants of it), leveraging anti-concentration properties of the distribution. Our results suggest that the low-degree method and low-degree moments fail to capture algorithms based on anti-concentration, challenging their universality as a predictor of computational barriers.
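To make the problem setting concrete: in robust subspace recovery, a constant fraction of the sample lies (exactly or nearly) in a hidden low-dimensional subspace, and the task is to detect or recover that subspace despite the remaining points being arbitrary or generic. The sketch below is NOT the authors' anti-concentration-based algorithm; it is a generic RANSAC-style toy on a synthetic instance (all dimensions, fractions, and tolerances are illustrative assumptions), included only to show what an instance of the problem looks like.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance: n points in R^d, a constant fraction of which lie
# exactly in a hidden k-dimensional subspace; the rest are generic Gaussians.
d, k, n_in, n_out = 10, 4, 60, 140
hidden = np.linalg.qr(rng.standard_normal((d, k)))[0]   # orthonormal basis of the planted subspace
inliers = rng.standard_normal((n_in, k)) @ hidden.T
outliers = rng.standard_normal((n_out, d))
X = np.vstack([inliers, outliers])

def ransac_subspace(X, k, rng, trials=2000, tol=1e-6):
    """Generic RANSAC-style sketch (not the paper's algorithm):
    sample k points, take the span as a candidate subspace, and count
    points with negligible residual; keep the best candidate."""
    best_count, best_basis = 0, None
    for _ in range(trials):
        S = X[rng.choice(len(X), size=k, replace=False)]
        Q = np.linalg.qr(S.T)[0]                 # orthonormal basis of span(S)
        resid = np.linalg.norm(X - X @ Q @ Q.T, axis=1)
        count = int((resid < tol).sum())
        if count > best_count:
            best_count, best_basis = count, Q
    return best_count, best_basis

count, Q = ransac_subspace(X, k, rng)
print(count)   # on a successful run, all 60 planted inliers are found
```

The point of the paper is precisely that this kind of planted-subspace detection admits a polynomial time algorithm (via anti-concentration), even though low-degree moments suggest hardness; the brute-force sampling above only illustrates the instance, not an efficient method.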
Related papers
- Learning Intersections of Two Margin Halfspaces under Factorizable Distributions [56.51474048985742]
Learning intersections of halfspaces is a central problem in Computational Learning Theory. Even for just two halfspaces, it remains a major open question whether learning is possible in polynomial time. We introduce a novel algorithm that provably circumvents the CSQ hardness barrier.
arXiv Detail & Related papers (2025-11-13T00:28:24Z) - Low-degree lower bounds via almost orthonormal bases [47.83594448785856]
Low-degree polynomial lower bounds have emerged as a powerful paradigm for providing evidence of statistical-computational gaps across a variety of high-dimensional statistical models. In this work, we propose a more direct proof strategy.
arXiv Detail & Related papers (2025-09-11T11:07:36Z) - Low degree conjecture implies sharp computational thresholds in stochastic block model [3.7873597471903935]
We investigate implications of the (extended) low-degree conjecture, recently formalized in [MW23], in the context of the stochastic block model. We establish that no polynomial-time algorithm can weakly recover community labels below the Kesten-Stigum (KS) threshold. Our results provide evidence of a computational-to-statistical gap in learning the parameters of block models.
arXiv Detail & Related papers (2025-02-20T20:21:03Z) - Computational-Statistical Gaps in Reinforcement Learning [23.517741855454044]
We show a reduction from Unique-Sat, where we convert a CNF formula into an MDP with deterministic transitions, a constant number of actions, and low-dimensional linear optimal value functions.
This result also exhibits the first computational-statistical gap in reinforcement learning with linear function approximation.
arXiv Detail & Related papers (2022-02-11T04:48:35Z) - Lattice-Based Methods Surpass Sum-of-Squares in Clustering [98.46302040220395]
Clustering is a fundamental primitive in unsupervised learning.
Recent work has established lower bounds against the class of low-degree methods.
We show that, perhaps surprisingly, this particular clustering model does not exhibit a statistical-to-computational gap.
arXiv Detail & Related papers (2021-12-07T18:50:17Z) - A Boosting Approach to Reinforcement Learning [59.46285581748018]
We study efficient algorithms for reinforcement learning in decision processes whose complexity is independent of the number of states.
We give an efficient algorithm that is capable of improving the accuracy of such weak learning methods.
arXiv Detail & Related papers (2021-08-22T16:00:45Z) - Statistical Query Algorithms and Low-Degree Tests Are Almost Equivalent [29.684442397855197]
We study two of the most popular restricted computational models, the statistical query framework and low-degree tests, in the context of high-dimensional hypothesis testing.
Under mild conditions on the testing problem, the two classes of algorithms are essentially equivalent in power.
As corollaries, we obtain new statistical query lower bounds for sparse PCA, tensor PCA and several variants of the planted clique problem.
arXiv Detail & Related papers (2020-09-13T22:55:18Z) - Computational Barriers to Estimation from Low-Degree Polynomials [81.67886161671379]
We study the power of low-degree polynomials for the task of detecting the presence of hidden structures.
For a large class of "signal plus noise" problems, we give a user-friendly lower bound for the best possible mean squared error achievable by any low-degree polynomial estimator.
As applications, we give a tight characterization of the low-degree minimum mean squared error for the planted submatrix and planted dense subgraph problems.
arXiv Detail & Related papers (2020-08-05T17:52:10Z) - Instability, Computational Efficiency and Statistical Accuracy [101.32305022521024]
We develop a framework that yields statistical accuracy guarantees based on the interplay between the deterministic convergence rate of the algorithm at the population level, and its degree of (in)stability when applied to an empirical object based on $n$ samples.
We provide applications of our general results to several concrete classes of models, including Gaussian mixture estimation, non-linear regression models, and informative non-response models.
arXiv Detail & Related papers (2020-05-22T22:30:52Z) - High-Dimensional Robust Mean Estimation via Gradient Descent [73.61354272612752]
We show that the problem of robust mean estimation in the presence of a constant adversarial fraction can be solved by gradient descent.
Our work establishes an intriguing connection between robust statistics and non-convex optimization.
arXiv Detail & Related papers (2020-05-04T10:48:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.