Robust 1-bit Compressed Sensing with Iterative Hard Thresholding
- URL: http://arxiv.org/abs/2310.08019v1
- Date: Thu, 12 Oct 2023 03:41:32 GMT
- Title: Robust 1-bit Compressed Sensing with Iterative Hard Thresholding
- Authors: Namiko Matsumoto, Arya Mazumdar
- Abstract summary: In 1-bit compressed sensing, the aim is to estimate a $k$-sparse unit vector $x \in S^{n-1}$ within an $\epsilon$ error.
In this paper, we study a noisy version where a fraction of the measurements can be flipped, potentially by an adversary.
We show that when up to a $\tau$-fraction of the sign measurements are incorrect, BIHT agnostically provides an estimate of $x$ within an $\tilde{O}(\epsilon+\tau)$ error.
- Score: 18.05573480691047
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In 1-bit compressed sensing, the aim is to estimate a $k$-sparse unit vector
$x\in S^{n-1}$ within an $\epsilon$ error (in $\ell_2$) from a minimal number of
linear measurements that are quantized to just their signs, i.e., from
measurements of the form $y = \mathrm{Sign}(\langle a, x\rangle).$ In this
paper, we study a noisy version where a fraction of the measurements can be
flipped, potentially by an adversary. In particular, we analyze the Binary
Iterative Hard Thresholding (BIHT) algorithm, a proximal gradient descent on a
properly defined loss function used for 1-bit compressed sensing, in this noisy
setting. It is known from recent results that, with
$\tilde{O}(\frac{k}{\epsilon})$ noiseless measurements, BIHT provides an
estimate within $\epsilon$ error. This result is optimal and universal, meaning
one set of measurements works for all sparse vectors. In this paper, we show
that BIHT also provides better results than all known methods for the noisy
setting. We show that when up to $\tau$-fraction of the sign measurements are
incorrect (adversarial error), with the same number of measurements as before,
BIHT agnostically provides an estimate of $x$ within an
$\tilde{O}(\epsilon+\tau)$ error, maintaining the universality of measurements.
This establishes stability of iterative hard thresholding in the presence of
measurement error. To obtain the result, we use the restricted approximate
invertibility of Gaussian matrices, as well as a tight analysis of the
high-dimensional geometry of the adversarially corrupted measurements.
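For intuition, here is a minimal sketch of the classical BIHT update referred to above: a (sub)gradient step that pulls $\mathrm{Sign}(Aw)$ toward the observed signs $y$, followed by hard thresholding to the $k$ largest-magnitude coordinates and a final normalization. The step size, iteration count, and toy parameters below are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def hard_threshold(v, k):
    """Keep the k largest-magnitude entries of v; zero out the rest."""
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-k:]
    out[keep] = v[keep]
    return out

def biht(A, y, k, step=1.0, n_iter=100):
    """Binary Iterative Hard Thresholding (illustrative sketch).

    A: (m, n) Gaussian measurement matrix; y: observed signs in {-1, +1},
    a fraction of which may be flipped; k: target sparsity.
    """
    m, n = A.shape
    w = np.zeros(n)
    for _ in range(n_iter):
        # (Sub)gradient step pulling Sign(A w) toward the observed signs y,
        # then projection onto k-sparse vectors by hard thresholding.
        w = hard_threshold(w + step * A.T @ (y - np.sign(A @ w)) / m, k)
    norm = np.linalg.norm(w)
    return w / norm if norm > 0 else w

# Toy run: k-sparse unit vector x, Gaussian A, tau-fraction of sign flips.
rng = np.random.default_rng(0)
n, m, k, tau = 200, 2000, 5, 0.05
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)
x /= np.linalg.norm(x)
A = rng.standard_normal((m, n))
y = np.sign(A @ x)
y[rng.choice(m, size=int(tau * m), replace=False)] *= -1  # flipped measurements
print(np.linalg.norm(x - biht(A, y, k)))  # l2 estimation error
```

On a run like this toy one, with Gaussian measurements and a $\tau$-fraction of flipped signs, the $\ell_2$ error of the returned estimate should behave like the $\tilde{O}(\epsilon+\tau)$ guarantee quoted in the abstract.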
Related papers
- Binary Iterative Hard Thresholding Converges with Optimal Number of
Measurements for 1-Bit Compressed Sensing [29.570141048369297]
We show that the BIHT algorithm converges with only $\tilde{O}(\frac{k}{\epsilon})$ measurements.
This is also an example of a gradient descent algorithm converging to the correct solution for a nonconvex problem.
arXiv Detail & Related papers (2022-07-07T16:52:50Z) - Approximate Function Evaluation via Multi-Armed Bandits [51.146684847667125]
We study the problem of estimating the value of a known smooth function $f$ at an unknown point $\boldsymbol{\mu} \in \mathbb{R}^n$, where each component $\mu_i$ can be sampled via a noisy oracle.
We design an instance-adaptive algorithm that learns to sample according to the importance of each coordinate, and with probability at least $1-\delta$ returns an $\epsilon$-accurate estimate of $f(\boldsymbol{\mu})$.
arXiv Detail & Related papers (2022-03-18T18:50:52Z) - Consistent Estimation for PCA and Sparse Regression with Oblivious
Outliers [13.244654316770815]
We develop machinery to design efficiently computable and consistent estimators.
For sparse regression, we achieve consistency for optimal sample size $n \gtrsim (k \log d)/\alpha^2$.
In the context of PCA, we attain optimal error guarantees under broad spikiness assumptions on the parameter matrix.
arXiv Detail & Related papers (2021-11-04T15:59:44Z) - Support Recovery in Universal One-bit Compressed Sensing [54.26691979520478]
One-bit compressed sensing (1bCS) is an extremely quantized signal acquisition method.
We show that it is possible to universally recover the support with a small number of false positives.
arXiv Detail & Related papers (2021-07-19T18:10:51Z) - Thresholded Lasso Bandit [70.17389393497125]
Thresholded Lasso bandit is an algorithm that estimates the vector defining the reward function as well as its sparse support.
We establish non-asymptotic regret upper bounds scaling as $\mathcal{O}(\log d + \sqrt{T})$ in general, and as $\mathcal{O}(\log d + \log T)$ under the so-called margin condition.
arXiv Detail & Related papers (2020-10-22T19:14:37Z) - Optimal Robust Linear Regression in Nearly Linear Time [97.11565882347772]
We study the problem of high-dimensional robust linear regression where a learner is given access to $n$ samples from the generative model $Y = \langle X, w^* \rangle + \epsilon$.
We propose estimators for this problem under two settings: (i) $X$ is $L_4$-$L_2$ hypercontractive, $\mathbb{E}[XX^\top]$ has bounded condition number and $\epsilon$ has bounded variance, and (ii) $X$ is sub-Gaussian with identity second moment and $\epsilon$ is sub-Gaussian.
arXiv Detail & Related papers (2020-07-16T06:44:44Z) - One-Bit Compressed Sensing via One-Shot Hard Thresholding [7.594050968868919]
The problem of 1-bit compressed sensing is to estimate a sparse signal from a few binary measurements.
We present a novel and concise analysis that moves away from the widely used non-constrained notion of width.
arXiv Detail & Related papers (2020-07-07T17:28:03Z) - Online Robust Regression via SGD on the l1 loss [19.087335681007477]
We consider the robust linear regression problem in the online setting where we have access to the data in a streaming manner.
We show in this work that stochastic gradient descent on the $\ell_1$ loss converges to the true parameter vector at a $\tilde{O}(1/((1-\eta)^2 n))$ rate which is independent of the values of the contaminated measurements (a minimal sketch of this update appears after this list).
arXiv Detail & Related papers (2020-07-01T11:38:21Z) - The Generalized Lasso with Nonlinear Observations and Generative Priors [63.541900026673055]
We make the assumption of sub-Gaussian measurements, which is satisfied by a wide range of measurement models.
We show that our result can be extended to the uniform recovery guarantee under the assumption of a so-called local embedding property.
arXiv Detail & Related papers (2020-06-22T16:43:35Z) - A Random Matrix Analysis of Random Fourier Features: Beyond the Gaussian
Kernel, a Precise Phase Transition, and the Corresponding Double Descent [85.77233010209368]
This article characterizes the exact asymptotics of random Fourier feature (RFF) regression, in the realistic setting where the number of data samples $n$, their dimension $p$, and the dimension of the feature space $N$ are all large and comparable.
This analysis also provides accurate estimates of training and test regression errors for large $n,p,N$.
arXiv Detail & Related papers (2020-06-09T02:05:40Z)
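As referenced in the "Online Robust Regression via SGD on the l1 loss" entry above, here is a minimal sketch of that streaming update. The $1/\sqrt{t}$ step-size schedule and the toy corruption model are illustrative assumptions, not the exact setup analyzed in that paper.

```python
import numpy as np

def sgd_l1(stream, dim, lr=0.1):
    """Online robust regression sketch: SGD on the l1 (absolute) loss."""
    w = np.zeros(dim)
    for t, (x, y) in enumerate(stream, start=1):
        # Subgradient of |<x, w> - y| is sign(<x, w> - y) * x.
        w -= (lr / np.sqrt(t)) * np.sign(x @ w - y) * x  # 1/sqrt(t): an assumption
    return w

# Toy stream: Gaussian covariates, an eta-fraction of corrupted responses.
rng = np.random.default_rng(1)
dim, n_samples, eta = 20, 5000, 0.1
w_true = rng.standard_normal(dim)

def toy_stream():
    for _ in range(n_samples):
        x = rng.standard_normal(dim)
        y = x @ w_true + 0.1 * rng.standard_normal()
        if rng.random() < eta:                 # oblivious outlier
            y = 100.0 * rng.standard_normal()
        yield x, y

w_hat = sgd_l1(toy_stream(), dim)
print(np.linalg.norm(w_hat - w_true))  # l2 parameter error
```

Because the update touches the residual only through its sign, a grossly corrupted response cannot move the iterate by more than a bounded step, which is the intuition behind the contamination-independent rate.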
This list is automatically generated from the titles and abstracts of the papers on this site.