Distributional Hardness Against Preconditioned Lasso via Erasure-Robust
Designs
- URL: http://arxiv.org/abs/2203.02824v1
- Date: Sat, 5 Mar 2022 22:16:05 GMT
- Title: Distributional Hardness Against Preconditioned Lasso via Erasure-Robust
Designs
- Authors: Jonathan A. Kelner, Frederic Koehler, Raghu Meka, Dhruv Rohatgi
- Abstract summary: We show that standard sparse random designs are with high probability robust to adversarial measurement erasures.
This is the first time that partial recoverability of arbitrary sparse signals under erasures has been studied in compressed sensing.
- Score: 22.41443027099101
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sparse linear regression with ill-conditioned Gaussian random designs is
widely believed to exhibit a statistical/computational gap, but there is
surprisingly little formal evidence for this belief, even in the form of
examples that are hard for restricted classes of algorithms. Recent work has
shown that, for certain covariance matrices, the broad class of Preconditioned
Lasso programs provably cannot succeed on polylogarithmically sparse signals
with a sublinear number of samples. However, this lower bound only shows that
for every preconditioner, there exists at least one signal that it fails to
recover successfully. This leaves open the possibility that, for example,
trying multiple different preconditioners solves every sparse linear regression
problem.
In this work, we prove a stronger lower bound that overcomes this issue. For
an appropriate covariance matrix, we construct a single signal distribution on
which any invertibly-preconditioned Lasso program fails with high probability,
unless it receives a linear number of samples.
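To make the algorithm class concrete, here is a minimal sketch of an invertibly-preconditioned Lasso program (Python with scikit-learn; the dimensions and the identity preconditioner in the demo are illustrative assumptions, and the paper's hard covariance construction is not reproduced):

```python
import numpy as np
from sklearn.linear_model import Lasso

def preconditioned_lasso(X, y, S, lam=0.01):
    """Lasso run in the preconditioned coordinates u = S^{-1} w.

    Solves  min_u  1/(2n) ||y - (X S) u||_2^2 + lam ||u||_1
    and returns w = S u.  S must be invertible so the change of
    variables loses no information; S = I recovers the plain Lasso,
    and a covariance-aware choice such as S ~ Sigma^{-1/2} is a
    typical attempt to repair an ill-conditioned design.
    """
    model = Lasso(alpha=lam, fit_intercept=False, max_iter=10_000)
    model.fit(X @ S, y)
    return S @ model.coef_

# Toy instance with an easy, well-conditioned design, purely to show
# the mechanics of the reparametrization.
rng = np.random.default_rng(0)
n, d, k = 50, 200, 5                       # samples, dimension, sparsity
w_star = np.zeros(d)
w_star[rng.choice(d, size=k, replace=False)] = 1.0
X = rng.standard_normal((n, d))
y = X @ w_star

w_hat = preconditioned_lasso(X, y, np.eye(d))   # S = I: ordinary Lasso
print("recovery error:", np.linalg.norm(w_hat - w_star))
```

The lower bound quantifies over every invertible choice of `S` at once, which is exactly what distinguishes it from the earlier per-preconditioner result.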
Surprisingly, at the heart of our lower bound is a new positive result in
compressed sensing. We show that standard sparse random designs are with high
probability robust to adversarial measurement erasures, in the sense that if
$b$ measurements are erased, then all but $O(b)$ of the coordinates of the
signal are still information-theoretically identifiable. To our knowledge, this
is the first time that partial recoverability of arbitrary sparse signals under
erasures has been studied in compressed sensing.
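The erasure statement can be illustrated with a toy simulation. This is a hedged sketch, not the paper's construction: the erasures here are random rather than adversarial, the design is dense Gaussian rather than the sparse random designs the paper analyzes, and the Lasso stands in for information-theoretic identifiability.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, d, k, b = 120, 300, 8, 30          # measurements, dimension, sparsity, erasures

w_star = np.zeros(d)
w_star[rng.choice(d, size=k, replace=False)] = rng.standard_normal(k)

X = rng.standard_normal((n, d)) / np.sqrt(n)   # dense Gaussian stand-in for a
y = X @ w_star                                  # sparse random design

keep = rng.permutation(n)[b:]         # drop b measurements (random here; the
X_e, y_e = X[keep], y[keep]           # paper handles adversarial erasures)

model = Lasso(alpha=1e-4, fit_intercept=False, max_iter=100_000)
model.fit(X_e, y_e)

wrong = int(np.sum(np.abs(model.coef_ - w_star) > 1e-2))
print(f"coordinates still misidentified after {b} erasures: {wrong}")
```

With many more erasures than the sparsity level, the surviving measurements still pin down essentially every coordinate, matching the all-but-$O(b)$ guarantee in spirit.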
Related papers
- Identifying General Mechanism Shifts in Linear Causal Representations [58.6238439611389]
We consider the linear causal representation learning setting where we observe a linear mixing of $d$ unknown latent factors.
Recent work has shown that it is possible to recover the latent factors as well as the underlying structural causal model over them.
We provide a surprising identifiability result: under some very mild standard assumptions, it is indeed possible to identify the set of shifted nodes.
arXiv Detail & Related papers (2024-10-31T15:56:50Z) - Non-convex matrix sensing: Breaking the quadratic rank barrier in the sample complexity [11.412228884390784]
We study the problem of reconstructing a low-rank matrix from a few linear measurements.
We show that factorized gradient descent with spectral initialization converges to the ground truth with a sample complexity that scales linearly, rather than quadratically, in the rank.
arXiv Detail & Related papers (2024-08-20T14:09:28Z) - Lasso with Latents: Efficient Estimation, Covariate Rescaling, and
Computational-Statistical Gaps [29.13944209460543]
We propose a natural sparse linear regression setting where strong correlations arise from unobserved latent variables.
In this setting, we analyze the problem caused by strong correlations and design a surprisingly simple fix.
The sample complexity of the resulting "rescaled Lasso" algorithm incurs (in the worst case) quadratic dependence on the sparsity of the underlying signal; a generic rescaling sketch appears after this list.
arXiv Detail & Related papers (2024-02-23T16:16:38Z) - Off-the-grid prediction and testing for linear combination of translated features [2.774897240515734]
We consider a model where a signal (discrete or continuous) is observed with an additive Gaussian noise process.
We extend previous prediction results for off-the-grid estimators by taking into account that the scale parameter may vary.
We propose a procedure to test whether the features of the observed signal belong to a given finite collection.
arXiv Detail & Related papers (2022-12-02T13:48:45Z) - WARPd: A linearly convergent first-order method for inverse problems
with approximate sharpness conditions [0.0]
Sharpness conditions directly control the recovery performance of restart schemes for first-order methods.
We provide a first-order method: Weighted, Accelerated and Restarted Primal-dual (WARPd).
Under a generic approximate sharpness condition, WARPd achieves stable linear convergence to the desired vector.
We show how WARPd compares favorably with specialized state-of-the-art methods and is ideally suited for solving large-scale problems.
arXiv Detail & Related papers (2021-10-24T13:19:41Z) - Near-optimal inference in adaptive linear regression [60.08422051718195]
Even simple methods like least squares can exhibit non-normal behavior when data is collected in an adaptive manner.
We propose a family of online debiasing estimators to correct these distributional anomalies in least squares estimation.
We demonstrate the usefulness of our theory via applications to multi-armed bandit, autoregressive time series estimation, and active learning with exploration.
arXiv Detail & Related papers (2021-07-05T21:05:11Z) - One-Bit Compressed Sensing via One-Shot Hard Thresholding [7.594050968868919]
The problem of 1-bit compressed sensing is to estimate a sparse signal from a few binary measurements.
We present a novel and concise analysis that moves away from the widely used non-constrained notion of width.
arXiv Detail & Related papers (2020-07-07T17:28:03Z) - The Generalized Lasso with Nonlinear Observations and Generative Priors [63.541900026673055]
We make the assumption of sub-Gaussian measurements, which is satisfied by a wide range of measurement models.
We show that our result can be extended to the uniform recovery guarantee under the assumption of a so-called local embedding property.
arXiv Detail & Related papers (2020-06-22T16:43:35Z) - Robust Compressed Sensing using Generative Models [98.64228459705859]
In this paper we propose an algorithm inspired by the Median-of-Means (MOM) estimator.
Our algorithm guarantees recovery for heavy-tailed data, even in the presence of outliers.
arXiv Detail & Related papers (2020-06-16T19:07:41Z) - Approximation Schemes for ReLU Regression [80.33702497406632]
We consider the fundamental problem of ReLU regression.
The goal is to output the best fitting ReLU with respect to square loss given draws from some unknown distribution.
arXiv Detail & Related papers (2020-05-26T16:26:17Z) - Sample Complexity Bounds for 1-bit Compressive Sensing and Binary Stable
Embeddings with Generative Priors [52.06292503723978]
Motivated by advances in compressive sensing with generative models, we study the problem of 1-bit compressive sensing with generative models.
We first consider noiseless 1-bit measurements, and provide sample complexity bounds for approximate recovery under i.i.d. Gaussian measurements.
We demonstrate that the Binary $\epsilon$-Stable Embedding property, which characterizes the robustness of the reconstruction to measurement errors and noise, also holds for 1-bit compressive sensing with Lipschitz continuous generative models.
arXiv Detail & Related papers (2020-02-05T09:44:10Z)
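As promised in the "Lasso with Latents" entry above, here is a generic covariate-rescaling sketch. The hypothetical helper `rescaled_lasso` uses plain column standardization for illustration; it is not the specific rescaling analyzed in that paper.

```python
import numpy as np
from sklearn.linear_model import Lasso

def rescaled_lasso(X, y, lam=0.01):
    """Generic 'rescale the covariates, then run the Lasso' template.

    Each column of X is divided by a per-coordinate scale before the
    Lasso is fit, and the estimate is mapped back through the same
    scales.  Column standardization is used here only for illustration.
    """
    scales = np.linalg.norm(X, axis=0) / np.sqrt(X.shape[0])
    scales[scales == 0] = 1.0              # guard against constant columns
    model = Lasso(alpha=lam, fit_intercept=False, max_iter=10_000)
    model.fit(X / scales, y)
    return model.coef_ / scales
```

Any such diagonal rescaling is a special case of the invertibly-preconditioned Lasso sketched with the main abstract, with $S$ the diagonal matrix of inverse scales, which is what ties that entry to the lower bound here.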