The Price of Sparsity: Sufficient Conditions for Sparse Recovery using Sparse and Sparsified Measurements
- URL: http://arxiv.org/abs/2509.01809v1
- Date: Mon, 01 Sep 2025 22:26:37 GMT
- Title: The Price of Sparsity: Sufficient Conditions for Sparse Recovery using Sparse and Sparsified Measurements
- Authors: Youssef Chaabouni, David Gamarnik
- Abstract summary: We consider the problem of recovering the support of a sparse signal using noisy projections. We establish sufficient conditions on the sample size for successful sparse recovery using sparse measurement matrices.
- Score: 1.2604738912025477
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the problem of recovering the support of a sparse signal using noisy projections. While extensive work has been done on the dense measurement matrix setting, the sparse setting remains less explored. In this work, we establish sufficient conditions on the sample size for successful sparse recovery using sparse measurement matrices. Bringing together our result with previously known necessary conditions, we discover that, in the regime where $ds/p \rightarrow +\infty$, sparse recovery in the sparse setting exhibits a phase transition at an information-theoretic threshold of $n_{\text{INF}}^{\text{SP}} = \Theta\left(s\log\left(p/s\right)/\log\left(ds/p\right)\right)$, where $p$ denotes the signal dimension, $s$ the number of non-zero components of the signal, and $d$ the expected number of non-zero components per row of measurement. This expression makes the price of sparsity explicit: restricting each measurement to $d$ non-zeros inflates the required sample size by a factor of $\log{s}/\log\left(ds/p\right)$, revealing a precise trade-off between sampling complexity and measurement sparsity. Additionally, we examine the effect of sparsifying an originally dense measurement matrix on sparse signal recovery. We prove in the regime of $s = \alpha p$ and $d = \psi p$ with $\alpha, \psi \in \left(0,1\right)$ and $\psi$ small that a sample of size $n^{\text{Sp-ified}}_{\text{INF}} = \Theta\left(p / \psi^2\right)$ is sufficient for recovery, subject to a certain uniform integrability conjecture, the proof of which is work in progress.
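To make the quoted threshold concrete, here is a minimal illustrative sketch (not part of the paper; the $(p, s, d)$ values are hypothetical and the constants hidden by the $\Theta(\cdot)$ are unspecified). It evaluates $s\log(p/s)/\log(ds/p)$ and the price-of-sparsity factor $\log s/\log(ds/p)$, and backs out the dense-measurement baseline implied by dividing out that factor.

```python
import math

def n_inf_sparse(p: int, s: int, d: float) -> float:
    """Order of the sufficient sample size s*log(p/s) / log(d*s/p) from the abstract.

    The Theta(.) hides constants, so returned values are illustrative orders only.
    The threshold is stated for the regime d*s/p -> +infinity, hence the assertion.
    """
    assert d * s / p > 1, "stated for the regime ds/p -> +infinity"
    return s * math.log(p / s) / math.log(d * s / p)

def price_of_sparsity(p: int, s: int, d: float) -> float:
    """Inflation factor log(s) / log(d*s/p) quoted in the abstract."""
    return math.log(s) / math.log(d * s / p)

if __name__ == "__main__":
    # Hypothetical parameter choices, purely for illustration.
    for p, s, d in [(10_000, 100, 1_000), (10_000, 100, 5_000), (100_000, 1_000, 50_000)]:
        n_sp = n_inf_sparse(p, s, d)
        factor = price_of_sparsity(p, s, d)
        n_dense = n_sp / factor  # dense baseline implied by dividing out the inflation factor
        print(f"p={p:>7}  s={s:>5}  d={d:>6}  n_SP ~ {n_sp:7.0f}  "
              f"dense baseline ~ {n_dense:7.0f}  inflation ~ {factor:.2f}")
```

In the sparsified-matrix regime quoted at the end of the abstract ($s = \alpha p$, $d = \psi p$ with $\psi$ small), the analogous order is simply $p/\psi^2$.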
Related papers
- Instance-optimal high-precision shadow tomography with few-copy measurements: A metrological approach [2.956729394666618]
We study the sample complexity of shadow tomography in the high-precision regime. We use possibly adaptive measurements that act on $O(\mathrm{polylog}(d))$ copies of the state at a time.
arXiv Detail & Related papers (2026-02-04T19:00:00Z) - High-accuracy sampling for diffusion models and log-concave distributions [70.90863485771405]
We present algorithms for diffusion model sampling that obtain $\varepsilon$-error in $\mathrm{polylog}(1/\varepsilon)$ steps. Our approach also yields the first $\mathrm{polylog}(1/\varepsilon)$-complexity sampler for general log-concave distributions.
arXiv Detail & Related papers (2026-02-01T17:05:31Z) - Spike-and-Slab Posterior Sampling in High Dimensions [11.458504242206862]
Posterior sampling with the spike-and-slab prior [MB88] is considered the theoretical gold standard method for Bayesian sparse linear regression. We give the first provable algorithms for spike-and-slab posterior sampling that apply for any SNR and use a measurement count sublinear in the problem dimension. We extend our result to spike-and-slab posterior sampling with Laplace diffuse densities, achieving similar guarantees when $\sigma = O\left(\frac{1}{k}\right)$ is bounded.
arXiv Detail & Related papers (2025-03-04T17:16:07Z) - Dimension-free Private Mean Estimation for Anisotropic Distributions [55.86374912608193]
Previous private estimators on distributions over $\mathbb{R}^d$ suffer from a curse of dimensionality.
We present an algorithm whose sample complexity has improved dependence on dimension.
arXiv Detail & Related papers (2024-11-01T17:59:53Z) - Neural network learns low-dimensional polynomials with SGD near the information-theoretic limit [75.4661041626338]
We study the problem of gradient descent learning of a single-index target function $f_*(\boldsymbol{x}) = \sigma_*\left(\langle \boldsymbol{x}, \boldsymbol{\theta} \rangle\right)$. We prove that a two-layer neural network optimized by an SGD-based algorithm learns $f_*$ with a complexity that is not governed by information exponents.
arXiv Detail & Related papers (2024-06-03T17:56:58Z) - Robust 1-bit Compressed Sensing with Iterative Hard Thresholding [18.05573480691047]
In 1-bit compressed sensing, the aim is to estimate a $k$-sparse unit vector $x \in S^{n-1}$ within an $\epsilon$ error.
In this paper, we study a noisy version where a fraction of the measurements can be flipped, potentially by an adversary.
We show that when up to a $\tau$-fraction of the sign measurements are incorrect, BIHT provides an estimate of $x$ within an $\tilde{O}(\epsilon+\tau)$ error.
arXiv Detail & Related papers (2023-10-12T03:41:32Z) - A Unified Framework for Uniform Signal Recovery in Nonlinear Generative
Compressed Sensing [68.80803866919123]
Under nonlinear measurements, most prior results are non-uniform, i.e., they hold with high probability for a fixed $\mathbf{x}^*$ rather than for all $\mathbf{x}^*$ simultaneously.
Our framework accommodates GCS with 1-bit/uniformly quantized observations and single index models as canonical examples.
We also develop a concentration inequality that produces tighter bounds for product processes whose index sets have low metric entropy.
arXiv Detail & Related papers (2023-09-25T17:54:19Z) - Distribution-Independent Regression for Generalized Linear Models with
Oblivious Corruptions [49.69852011882769]
We show the first algorithms for the problem of regression for generalized linear models (GLMs) in the presence of additive oblivious noise.
We present an algorithm that tackles this problem in its most general distribution-independent setting.
This is the first algorithmic result for GLM regression with oblivious noise which can handle more than half the samples being arbitrarily corrupted.
arXiv Detail & Related papers (2023-09-20T21:41:59Z) - Optimal Sketching Bounds for Sparse Linear Regression [116.30196615349226]
We study oblivious sketching for $k$-sparse linear regression under various loss functions such as an $\ell_p$ norm, or from a broad class of hinge-like loss functions.
We show that for sparse $\ell_2$ regression, there is an oblivious distribution over sketches with $\Theta(k\log(d/k)/\varepsilon^2)$ rows, which is tight up to a constant factor.
We also give a sketching bound of $O(\mu^2 k\log(\mu n d/\varepsilon)/\varepsilon^2)$ rows.
arXiv Detail & Related papers (2023-04-05T07:24:19Z) - Sparse Recovery with Shuffled Labels: Statistical Limits and Practical
Estimators [23.313461266708877]
We reconstruct the permutation matrix $\boldsymbol{\Pi}^{\mathrm{true}}$ and the sparse signal $\boldsymbol{\beta}^{\mathrm{true}}$ from shuffled labels.
We show that our proposed estimator can obtain the ground-truth $(\boldsymbol{\Pi}^{\mathrm{true}}, \mathrm{supp}(\boldsymbol{\beta}^{\mathrm{true}}))$ under mild conditions.
arXiv Detail & Related papers (2023-03-20T16:14:58Z) - Sparse Signal Detection in Heteroscedastic Gaussian Sequence Models:
Sharp Minimax Rates [1.0309387309011746]
We study the signal detection problem against sparse alternatives, for known sparsity $s$.
We find minimax upper and lower bounds over the minimax separation radius $\epsilon^*$ and prove that they are always matching.
Our results reveal new phase transitions regarding the behavior of $\epsilon^*$ with respect to the level of sparsity, to the $L_t$ metric, and to the heteroscedasticity profile of $\Sigma$.
arXiv Detail & Related papers (2022-11-15T23:53:39Z) - Approximate Function Evaluation via Multi-Armed Bandits [51.146684847667125]
We study the problem of estimating the value of a known smooth function $f$ at an unknown point $\boldsymbol{\mu} \in \mathbb{R}^n$, where each component $\mu_i$ can be sampled via a noisy oracle.
We design an instance-adaptive algorithm that learns to sample according to the importance of each coordinate, and with probability at least $1-\delta$ returns an $\epsilon$-accurate estimate of $f(\boldsymbol{\mu})$.
arXiv Detail & Related papers (2022-03-18T18:50:52Z) - A Law of Robustness beyond Isoperimetry [84.33752026418045]
We prove a Lipschitzness lower bound of $\Omega(\sqrt{n/p})$ on the robustness of interpolating neural network parameters on arbitrary distributions.
We then show the potential benefit of overparametrization for smooth data when $n=\mathrm{poly}(d)$.
We disprove the potential existence of an $O(1)$-Lipschitz robust interpolating function when $n=\exp(\omega(d))$.
arXiv Detail & Related papers (2022-02-23T16:10:23Z) - Just Least Squares: Binary Compressive Sampling with Low Generative
Intrinsic Dimension [12.67951378791069]
We consider recovering $n$-dimensional signals from $m$ binary measurements corrupted by noise and sign flips.
Although the binary measurement model is highly nonlinear, we propose a least squares decoder.
arXiv Detail & Related papers (2021-11-29T12:06:47Z) - Efficient Statistics for Sparse Graphical Models from Truncated Samples [19.205541380535397]
We focus on two fundamental and classical problems: (i) inference of sparse Gaussian graphical models and (ii) support recovery of sparse linear models.
For sparse linear regression, suppose samples $(\mathbf{x}, y)$ are generated where $y = \mathbf{x}^\top \Omega^* + \mathcal{N}(0,1)$ and $(\mathbf{x}, y)$ is seen only if $y$ belongs to a truncation set $S \subseteq \mathbb{R}^d$.
arXiv Detail & Related papers (2020-06-17T09:21:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.