Statistical-Computational Tradeoffs in Mixed Sparse Linear Regression
- URL: http://arxiv.org/abs/2303.02118v2
- Date: Thu, 6 Jul 2023 16:21:21 GMT
- Title: Statistical-Computational Tradeoffs in Mixed Sparse Linear Regression
- Authors: Gabriel Arpino and Ramji Venkataramanan
- Abstract summary: We show that the problem suffers from a $\frac{k}{SNR^2}$-to-$\frac{k^2}{SNR^2}$ statistical-to-computational gap.
We also analyze a simple thresholding algorithm which, outside of the narrow regime where the problem is hard, solves the associated mixed regression detection problem.
- Score: 20.00109111254507
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the problem of mixed sparse linear regression with two
components, where two real $k$-sparse signals $\beta_1, \beta_2$ are to be
recovered from $n$ unlabelled noisy linear measurements. The sparsity is
allowed to be sublinear in the dimension, and additive noise is assumed to be
independent Gaussian with variance $\sigma^2$. Prior work has shown that the
problem suffers from a $\frac{k}{SNR^2}$-to-$\frac{k^2}{SNR^2}$
statistical-to-computational gap, resembling other computationally challenging
high-dimensional inference problems such as Sparse PCA and Robust Sparse Mean
Estimation; here $SNR$ is the signal-to-noise ratio. We establish the existence
of a more extensive computational barrier for this problem through the method
of low-degree polynomials, but show that the problem is computationally hard
only in a very narrow symmetric parameter regime. We identify a smooth
information-computation tradeoff between the sample complexity $n$ and runtime
for any randomized algorithm in this hard regime. Via a simple reduction, this
provides novel rigorous evidence for the existence of a computational barrier
to solving exact support recovery in sparse phase retrieval with sample
complexity $n = \tilde{o}(k^2)$. Our second contribution is to analyze a simple
thresholding algorithm which, outside of the narrow regime where the problem is
hard, solves the associated mixed regression detection problem in $O(np)$ time
with the square root of the number of samples, and matches the sample complexity
required for (non-mixed) sparse linear regression; this allows the recovery
problem to be subsequently solved by state-of-the-art techniques from the dense
case. As a special case of our results, we show that this simple algorithm is
order-optimal among a large family of algorithms in solving exact signed
support recovery in sparse linear regression.
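For concreteness, a standard formalization consistent with the abstract is the observation model
$$y_i = \langle x_i, \beta_{z_i} \rangle + \epsilon_i, \qquad z_i \in \{1, 2\}, \quad \epsilon_i \sim \mathcal{N}(0, \sigma^2), \quad i = 1, \dots, n,$$
where the labels $z_i$ are unobserved and $\beta_1, \beta_2 \in \mathbb{R}^p$ are $k$-sparse.
The thresholding algorithm is not spelled out in this summary, so the sketch below is only one natural $O(np)$ detector consistent with the description, not necessarily the authors' exact procedure. It thresholds per-coordinate second-moment statistics: in the symmetric mixture ($\beta_2 = -\beta_1$) the first-order correlations $X_j^\top y$ have mean zero because the hidden labels cancel in expectation, while the second moments $\frac{1}{n}\sum_i x_{ij}^2 y_i^2$ are inflated on the support. The statistic, the median self-normalization, and the threshold `tau` are illustrative assumptions.
```python
import numpy as np

def detect_mixture(X, y, tau):
    """Hedged sketch of a simple O(np) detector for mixed sparse linear
    regression (an assumed statistic, not the paper's exact algorithm).

    X   : (n, p) design matrix, assumed i.i.d. standard Gaussian entries.
    y   : (n,) unlabelled responses.
    tau : detection threshold, assumed tuned to (k, n, p, SNR).
    """
    n, p = X.shape
    # Per-coordinate second moment (1/n) * sum_i x_ij^2 * y_i^2; one pass
    # over the matrix, hence O(np) time.
    stats = (X ** 2).T @ (y ** 2) / n
    # Under the null every coordinate has the same mean, so a large gap
    # between the top statistic and the bulk flags a planted signal.
    return bool(np.max(stats) - np.median(stats) > tau)

# Toy usage on an assumed symmetric instance (beta_2 = -beta_1).
rng = np.random.default_rng(0)
n, p, k, sigma = 20000, 500, 10, 0.5
beta = np.zeros(p)
beta[:k] = 1.0
X = rng.standard_normal((n, p))
z = rng.choice([-1.0, 1.0], size=n)              # hidden mixture labels
y_signal = z * (X @ beta) + sigma * rng.standard_normal(n)
y_null = sigma * rng.standard_normal(n)          # pure-noise responses
print(detect_mixture(X, y_signal, tau=1.0))      # expected: True
print(detect_mixture(X, y_null, tau=1.0))        # expected: False
```
Once detection succeeds, the abstract's point is that the recovery problem can be handed off to standard techniques from the dense (non-mixed) regression literature.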
Related papers
- A Sample Efficient Alternating Minimization-based Algorithm For Robust Phase Retrieval [56.67706781191521]
In this work, we study a robust phase retrieval problem in which the task is to recover an unknown signal.
Our proposed method avoids the need for computationally expensive spectral initialization, using a simple gradient step in the presence of outliers.
arXiv Detail & Related papers (2024-09-07T06:37:23Z) - Projection by Convolution: Optimal Sample Complexity for Reinforcement Learning in Continuous-Space MDPs [56.237917407785545]
We consider the problem of learning an $\varepsilon$-optimal policy in a general class of continuous-space Markov decision processes (MDPs) having smooth Bellman operators.
Key to our solution is a novel projection technique based on ideas from harmonic analysis.
Our result bridges the gap between two popular but conflicting perspectives on continuous-space MDPs.
arXiv Detail & Related papers (2024-05-10T09:58:47Z) - Computational-Statistical Gaps in Gaussian Single-Index Models [77.1473134227844]
Single-Index Models are high-dimensional regression problems with planted structure.
We show that computationally efficient algorithms, both within the Statistical Query (SQ) and the Low-Degree Polynomial (LDP) frameworks, necessarily require $\Omega(d^{k^\star/2})$ samples.
arXiv Detail & Related papers (2024-03-08T18:50:19Z) - Information-Computation Tradeoffs for Learning Margin Halfspaces with
Random Classification Noise [50.64137465792738]
We study the problem of PAC learning $\gamma$-margin halfspaces with Random Classification Noise.
We establish an information-computation tradeoff suggesting an inherent gap between the sample complexity of the problem and the sample complexity of computationally efficient algorithms.
arXiv Detail & Related papers (2023-06-28T16:33:39Z) - Feature Adaptation for Sparse Linear Regression [20.923321050404827]
Sparse linear regression is a central problem in high-dimensional statistics.
We provide an algorithm that adapts to the design matrix, tolerating a small number of approximate dependencies.
Our approach fits into a broader framework of feature adaptation for sparse linear regression.
arXiv Detail & Related papers (2023-05-26T12:53:13Z) - A Conditional Randomization Test for Sparse Logistic Regression in
High-Dimension [36.00360315353985]
CRT-logit is an algorithm that combines a variable-distillation step and a decorrelation step.
We provide a theoretical analysis of this procedure, and demonstrate its effectiveness on simulations, along with experiments on large-scale brain-imaging and genomics datasets.
arXiv Detail & Related papers (2022-05-29T09:37:16Z) - Optimal Robust Linear Regression in Nearly Linear Time [97.11565882347772]
We study the problem of high-dimensional robust linear regression where a learner is given access to $n$ samples from the generative model $Y = \langle X, w^* \rangle + \epsilon$.
We propose estimators for this problem under two settings: (i) $X$ is L4-L2 hypercontractive, $\mathbb{E}[XX^\top]$ has bounded condition number and $\epsilon$ has bounded variance, and (ii) $X$ is sub-Gaussian with identity second moment and $\epsilon$ is sub-Gaussian.
arXiv Detail & Related papers (2020-07-16T06:44:44Z) - Breaking the Sample Size Barrier in Model-Based Reinforcement Learning
with a Generative Model [50.38446482252857]
This paper is concerned with the sample efficiency of reinforcement learning, assuming access to a generative model (or simulator).
We first consider $\gamma$-discounted infinite-horizon Markov decision processes (MDPs) with state space $\mathcal{S}$ and action space $\mathcal{A}$.
We prove that a plain model-based planning algorithm suffices to achieve minimax-optimal sample complexity given any target accuracy level.
arXiv Detail & Related papers (2020-05-26T17:53:18Z) - Efficient Algorithms for Multidimensional Segmented Regression [42.046881924063044]
We study the fundamental problem of fixed design multidimensional segmented regression.
We provide the first sample and computationally efficient algorithm for this problem in any fixed dimension.
Our algorithm relies on a simple merging iterative approach, which is novel in the multidimensional setting.
arXiv Detail & Related papers (2020-03-24T19:39:34Z) - Efficient algorithms for multivariate shape-constrained convex
regression problems [9.281671380673306]
We prove that the least squares estimator is computable via solving a constrained convex quadratic programming (QP) problem with $(d+1)n$ variables and at least $n(n-1)$ linear inequality constraints.
For solving the generally very large-scale convex QP, we design two efficient algorithms: one is the symmetric Gauss-Seidel based alternating direction method of multipliers (sGS-ADMM), and the other is the proximal augmented Lagrangian method (pALM) with the subproblems solved by the semismooth Newton method (SSN).
arXiv Detail & Related papers (2020-02-26T11:18:43Z)