Support Recovery in Universal One-bit Compressed Sensing
- URL: http://arxiv.org/abs/2107.09091v1
- Date: Mon, 19 Jul 2021 18:10:51 GMT
- Title: Support Recovery in Universal One-bit Compressed Sensing
- Authors: Arya Mazumdar, Soumyabrata Pal
- Abstract summary: One-bit compressed sensing (1bCS) is an extreme-quantized signal acquisition method.
We show that it is possible to universally recover the support with a small number of false positives.
- Score: 54.26691979520478
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One-bit compressed sensing (1bCS) is an extreme-quantized signal acquisition
method that has been widely studied in the past decade. In 1bCS, linear samples
of a high dimensional signal are quantized to only one bit per sample (sign of
the measurement). Assuming the original signal vector to be sparse, existing
results either aim to find the support of the vector, or approximate the signal
within an $\epsilon$-ball. The focus of this paper is support recovery, which
often also computationally facilitates approximate signal recovery. A universal
measurement matrix for 1bCS refers to one set of measurements that work for all
sparse signals. With universality, it is known that $\tilde{\Theta}(k^2)$ 1bCS
measurements are necessary and sufficient for support recovery (where $k$
denotes the sparsity). In this work, we show that it is possible to universally
recover the support with a small number of false positives with
$\tilde{O}(k^{3/2})$ measurements. If the dynamic range of the signal vector is
known, then with a different technique, this result can be improved to only
$\tilde{O}(k)$ measurements. Further results on support recovery are also
provided.
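As a minimal illustration of the 1bCS measurement model described in the abstract (the dimensions and the Gaussian measurement matrix below are illustrative assumptions for the sketch; the paper itself concerns universal, signal-independent matrices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: n-dimensional signal, k-sparse, m one-bit measurements.
# These values are not tied to the paper's measurement bounds.
n, k, m = 100, 5, 200

# A k-sparse signal with a random support and Gaussian nonzero entries.
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.normal(size=k)

# A random Gaussian measurement matrix (a common non-universal baseline).
A = rng.normal(size=(m, n))

# 1bCS: each linear sample is quantized to a single bit -- its sign.
y = np.sign(A @ x)
```

Support recovery then asks for the set `support` to be identified from `y` and `A` alone.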
Related papers
- Robust 1-bit Compressed Sensing with Iterative Hard Thresholding [18.05573480691047]
In 1-bit compressed sensing, the aim is to estimate a $k$-sparse unit vector $x \in S^{n-1}$ within an $\epsilon$ error.
In this paper, we study a noisy version where a fraction of the measurements can be flipped, potentially by an adversary.
We show that when up to a $\tau$-fraction of the sign measurements are incorrect, BIHT provides an estimate of $x$ within an $\tilde{O}(\epsilon+\tau)$ error.
arXiv Detail & Related papers (2023-10-12T03:41:32Z)
- Model-adapted Fourier sampling for generative compressed sensing [7.130302992490975]
We study generative compressed sensing when the measurement matrix is randomly subsampled from a unitary matrix.
We construct a model-adapted sampling strategy with an improved sample complexity of $O(kd\|\boldsymbol{\alpha}\|_2^2)$ measurements.
arXiv Detail & Related papers (2023-10-08T03:13:16Z)
- A Unified Framework for Uniform Signal Recovery in Nonlinear Generative Compressed Sensing [68.80803866919123]
Under nonlinear measurements, most prior results are non-uniform, i.e., they hold with high probability for a fixed $\mathbf{x}^*$ rather than for all $\mathbf{x}^*$ simultaneously.
Our framework accommodates GCS with 1-bit/uniformly quantized observations and single index models as canonical examples.
We also develop a concentration inequality that produces tighter bounds for product processes whose index sets have low metric entropy.
arXiv Detail & Related papers (2023-09-25T17:54:19Z)
- Improved Support Recovery in Universal One-bit Compressed Sensing [37.5349071806395]
One-bit compressed sensing (1bCS) is an extremely quantized signal acquisition method.
The focus of this paper is support recovery, which often also computationally facilitates approximate signal recovery.
We show that the first type of recovery is possible with $\tilde{O}(k/\epsilon)$ measurements, while the latter type of recovery, which is more challenging, is possible with $\tilde{O}(\max\{k/\epsilon, k^{3/2}\})$ measurements.
arXiv Detail & Related papers (2022-10-29T17:43:14Z)
- Binary Iterative Hard Thresholding Converges with Optimal Number of Measurements for 1-Bit Compressed Sensing [29.570141048369297]
We show that the BIHT algorithm converges with only $\tilde{O}(\frac{k}{\epsilon})$ measurements.
This is also an example of a gradient descent algorithm converging to the correct solution for a nonconvex problem.
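The textbook BIHT iteration can be sketched as follows (the step size, iteration count, and dimensions are illustrative assumptions, not the paper's tuning):

```python
import numpy as np

def hard_threshold(v, k):
    """Keep the k largest-magnitude entries of v and zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def biht(A, y, k, step=1.0, iters=100):
    """A minimal Binary Iterative Hard Thresholding (BIHT) sketch.

    A: (m, n) measurement matrix; y: sign measurements in {-1, +1};
    k: sparsity level. Returns a unit-norm k-sparse estimate.
    """
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        # Gradient-like correction on the mismatched signs, followed by
        # projection onto the set of k-sparse vectors.
        x = hard_threshold(x + (step / m) * A.T @ (y - np.sign(A @ x)), k)
    norm = np.linalg.norm(x)
    return x / norm if norm > 0 else x
```

Since sign measurements carry no amplitude information, the estimate is only defined up to scale, hence the final normalization to the unit sphere.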
arXiv Detail & Related papers (2022-07-07T16:52:50Z)
- Approximate Function Evaluation via Multi-Armed Bandits [51.146684847667125]
We study the problem of estimating the value of a known smooth function $f$ at an unknown point $\boldsymbol{\mu} \in \mathbb{R}^n$, where each component $\mu_i$ can be sampled via a noisy oracle.
We design an instance-adaptive algorithm that learns to sample according to the importance of each coordinate, and with probability at least $1-\delta$ returns an $\epsilon$-accurate estimate of $f(\boldsymbol{\mu})$.
arXiv Detail & Related papers (2022-03-18T18:50:52Z)
- A Precise Performance Analysis of Support Vector Regression [105.94855998235232]
We study the hard and soft support vector regression techniques applied to a set of $n$ linear measurements.
Our results are then used to optimally tune the parameters intervening in the design of hard and soft support vector regression algorithms.
arXiv Detail & Related papers (2021-05-21T14:26:28Z)
- Sparse sketches with small inversion bias [79.77110958547695]
Inversion bias arises when averaging estimates of quantities that depend on the inverse covariance.
We develop a framework for analyzing inversion bias, based on our proposed concept of an $(\epsilon,\delta)$-unbiased estimator for random matrices.
We show that when the sketching matrix $S$ is dense and has i.i.d. sub-gaussian entries, the estimator is $(\epsilon,\delta)$-unbiased for $(A^\top A)^{-1}$ with a sketch of size $m=O(d+\sqrt{d}/\epsilon)$.
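Inversion bias can be seen numerically: each sketched Gram matrix is an unbiased estimate of $A^\top A$, yet the average of its inverses systematically overshoots $(A^\top A)^{-1}$. A small demonstration with Gaussian sketches (an illustrative choice; the paper's focus is sparse sketching matrices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed data matrix A (N x d) and the exact inverse Gram matrix.
N, d, m = 200, 5, 20
A = rng.normal(size=(N, d))
true_inv = np.linalg.inv(A.T @ A)

# Average the inverse of the sketched Gram matrix over many Gaussian
# sketches S (m x N). Each (SA)^T (SA) is unbiased for A^T A, but its
# inverse is systematically inflated: the inversion bias.
trials = 500
avg_inv = np.zeros((d, d))
for _ in range(trials):
    S = rng.normal(size=(m, N)) / np.sqrt(m)  # E[S^T S] = identity
    avg_inv += np.linalg.inv((S @ A).T @ (S @ A))
avg_inv /= trials

# For Gaussian sketches the inflation factor is roughly m / (m - d - 1)
# (a standard inverse-Wishart expectation), here about 20/14.
print(np.trace(avg_inv) / np.trace(true_inv))
```

This is why naively averaging sketched inverse-covariance estimates is biased, motivating the $(\epsilon,\delta)$-unbiasedness framework above.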
arXiv Detail & Related papers (2020-11-21T01:33:15Z)
- Hadamard Wirtinger Flow for Sparse Phase Retrieval [24.17778927729799]
We consider the problem of reconstructing an $n$-dimensional $k$-sparse signal from a set of noiseless magnitude-only measurements.
Formulating the problem as an unregularized empirical risk minimization task, we study the sample complexity performance of gradient descent with Hadamard parametrization.
We numerically investigate the performance of HWF at convergence and show that, while not requiring any explicit form of regularization nor knowledge of $k$, HWF adapts to the signal sparsity and reconstructs sparse signals with fewer measurements than existing gradient based methods.
arXiv Detail & Related papers (2020-06-01T16:41:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.