Binary Iterative Hard Thresholding Converges with Optimal Number of
Measurements for 1-Bit Compressed Sensing
- URL: http://arxiv.org/abs/2207.03427v1
- Date: Thu, 7 Jul 2022 16:52:50 GMT
- Title: Binary Iterative Hard Thresholding Converges with Optimal Number of
Measurements for 1-Bit Compressed Sensing
- Authors: Namiko Matsumoto, Arya Mazumdar
- Abstract summary: We show that the BIHT algorithm converges with only $\tilde{O}(\frac{k}{\epsilon})$ measurements.
This is also an example of a gradient descent algorithm converging to the correct solution for a nonconvex problem.
- Score: 29.570141048369297
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Compressed sensing has been a very successful high-dimensional signal
acquisition and recovery technique that relies on linear operations. However,
the actual measurements of signals have to be quantized before storing or
processing. 1(One)-bit compressed sensing is a heavily quantized version of
compressed sensing, where each linear measurement of a signal is reduced to
just one bit: the sign of the measurement. Once enough of such measurements are
collected, the recovery problem in 1-bit compressed sensing aims to find the
original signal with as much accuracy as possible. The recovery problem is
related to the traditional "halfspace-learning" problem in learning theory.
For recovery of sparse vectors, a popular reconstruction method from 1-bit
measurements is the binary iterative hard thresholding (BIHT) algorithm. The
algorithm is a simple projected sub-gradient descent method, and is known to
converge well empirically, despite the nonconvexity of the problem. The
convergence property of BIHT was not theoretically justified, except with an
exorbitantly large number of measurements (i.e., a number of measurements
greater than $\max\{k^{10}, 24^{48}, k^{3.5}/\epsilon\}$, where $k$ is the
sparsity, $\epsilon$ denotes the approximation error, and even this expression
hides other factors). In this paper we show that the BIHT algorithm converges
with only $\tilde{O}(\frac{k}{\epsilon})$ measurements. Note that this
dependence on $k$ and $\epsilon$ is optimal for any recovery method in 1-bit
compressed sensing. With this result, to the best of our knowledge, BIHT is the
only practical and efficient (polynomial time) algorithm that requires the
optimal number of measurements in all parameters (both $k$ and $\epsilon$).
This is also an example of a gradient descent algorithm converging to the
correct solution for a nonconvex problem, under suitable structural conditions.
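Since the abstract characterizes BIHT as a simple projected sub-gradient descent method, a minimal sketch of that iteration may help make the setting concrete. The sketch below only illustrates the standard BIHT recipe (take a sub-gradient step toward sign consistency, hard-threshold to the $k$ largest entries, renormalize to the unit sphere); the function names, step size, iteration count, and per-iteration normalization are assumptions made for illustration, not details taken from the paper.

```python
import numpy as np

def hard_threshold(v, k):
    """Keep the k largest-magnitude entries of v and zero out the rest."""
    out = v.copy()
    out[np.argsort(np.abs(v))[:-k]] = 0.0
    return out

def biht(A, y, k, step=1.0, n_iter=100):
    """Binary iterative hard thresholding (illustrative sketch).

    A : (m, n) Gaussian measurement matrix
    y : (m,) observed signs, y = sign(A @ x) for an unknown k-sparse unit vector x
    Returns a k-sparse, unit-norm estimate of x.
    """
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(n_iter):
        # Sub-gradient step: measurements whose signs already agree with y
        # contribute nothing to the correction term.
        grad = A.T @ (np.sign(A @ x) - y)
        x = hard_threshold(x - (step / m) * grad, k)
        norm = np.linalg.norm(x)
        if norm > 0:
            x /= norm  # 1-bit measurements carry no magnitude, so stay on the unit sphere
    return x

# Toy usage: recover a 5-sparse unit vector from 1-bit Gaussian measurements.
rng = np.random.default_rng(0)
n, k, m = 200, 5, 2000
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)
x_true /= np.linalg.norm(x_true)
A = rng.standard_normal((m, n))
y = np.sign(A @ x_true)
x_hat = biht(A, y, k)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```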
Related papers
- Obtaining Lower Query Complexities through Lightweight Zeroth-Order Proximal Gradient Algorithms [65.42376001308064]
We propose two variance reduced ZO estimators for complex gradient problems.
We improve the state-of-the-art function query complexities from $\mathcal{O}\left(\min\left\{\frac{d n^{1/2}}{\epsilon^2}, \frac{d}{\epsilon^3}\right\}\right)$ to $\tilde{\mathcal{O}}\left(\frac{d}{\epsilon^2}\right)$.
arXiv Detail & Related papers (2024-10-03T15:04:01Z) - A Sample Efficient Alternating Minimization-based Algorithm For Robust Phase Retrieval [56.67706781191521]
In this work, we study a robust phase retrieval problem where the task is to recover an unknown signal.
Our proposed oracle avoids the need for computationally expensive spectral methods, using a simple gradient descent step even in the presence of outliers.
arXiv Detail & Related papers (2024-09-07T06:37:23Z) - Gradient Compressed Sensing: A Query-Efficient Gradient Estimator for High-Dimensional Zeroth-Order Optimization [48.84672493756553]
We propose a query-efficient and accurate estimator for gradients that uses only $O\big(s\log\frac{d}{s}\big)$ queries per step.
Our proposed GraCe generalizes the Indyk--Price--Woodruff (IPW) algorithm in compressed sensing from linear measurements to nonlinear functions.
arXiv Detail & Related papers (2024-05-27T03:52:53Z) - Robust 1-bit Compressed Sensing with Iterative Hard Thresholding [18.05573480691047]
In 1-bit compressed sensing, the aim is to estimate a $k$-sparse unit vector $x \in S^{n-1}$ within an $\epsilon$ error.
In this paper, we study a noisy version where a fraction of the measurements can be flipped, potentially by an adversary.
We show that when up to a $\tau$-fraction of the sign measurements are incorrect, BIHT provides an estimate of $x$ within an $\tilde{O}(\epsilon+\tau)$ error.
arXiv Detail & Related papers (2023-10-12T03:41:32Z) - Improved Support Recovery in Universal One-bit Compressed Sensing [37.5349071806395]
One-bit compressed sensing (1bCS) is an extremely quantized signal acquisition method.
The focus of this paper is support recovery, which often also computationally facilitates approximate signal recovery.
We show that the first type of recovery is possible with $\tilde{O}(k/\epsilon)$ measurements, while the latter type of recovery, which is more challenging, is possible with $\tilde{O}(\max\{k/\epsilon, k^{3/2}\})$ measurements.
arXiv Detail & Related papers (2022-10-29T17:43:14Z) - Support Recovery in Universal One-bit Compressed Sensing [54.26691979520478]
One-bit compressed sensing (1bCS) is an extremely quantized signal acquisition method.
We show that it is possible to universally recover the support with a small number of false positives.
arXiv Detail & Related papers (2021-07-19T18:10:51Z) - One-Bit Compressed Sensing via One-Shot Hard Thresholding [7.594050968868919]
The problem of 1-bit compressed sensing is to estimate a sparse signal from a few binary measurements.
We present a novel and concise analysis that moves away from the widely used non-constrained notion of width.
arXiv Detail & Related papers (2020-07-07T17:28:03Z) - Gradient Free Minimax Optimization: Variance Reduction and Faster
Convergence [120.9336529957224]
In this paper, we consider the nonconvex-strongly-concave setting of gradient-free minimax optimization problems.
We show that a novel zeroth-order variance reduced descent algorithm achieves the best known query complexity.
arXiv Detail & Related papers (2020-06-16T17:55:46Z) - Differentially Quantized Gradient Methods [53.3186247068836]
We show that Differentially Quantized Gradient Descent (DQ-GD) attains a linear contraction factor of $\max\{\sigma_{\mathrm{GD}}, \rho_n 2^{-R}\}$.
No algorithm within a certain class can converge faster than $\max\{\sigma_{\mathrm{GD}}, 2^{-R}\}$.
arXiv Detail & Related papers (2020-02-06T20:40:53Z)