Robust 1-bit Compressive Sensing with Partial Gaussian Circulant
Matrices and Generative Priors
- URL: http://arxiv.org/abs/2108.03570v1
- Date: Sun, 8 Aug 2021 05:28:06 GMT
- Title: Robust 1-bit Compressive Sensing with Partial Gaussian Circulant
Matrices and Generative Priors
- Authors: Zhaoqiang Liu, Subhroshekhar Ghosh, Jun Han, Jonathan Scarlett
- Abstract summary: We provide recovery guarantees for a correlation-based optimization algorithm for robust 1-bit compressive sensing.
We make use of a practical iterative algorithm, and perform numerical experiments on image datasets to corroborate our results.
- Score: 54.936314353063494
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In 1-bit compressive sensing, each measurement is quantized to a single bit,
namely the sign of a linear function of an unknown vector, and the goal is to
accurately recover the vector. While it is most popular to assume a standard
Gaussian sensing matrix for 1-bit compressive sensing, using structured sensing
matrices such as partial Gaussian circulant matrices is of significant
practical importance due to their faster matrix operations. In this paper, we
provide recovery guarantees for a correlation-based optimization algorithm for
robust 1-bit compressive sensing with randomly signed partial Gaussian
circulant matrices and generative models. Under suitable assumptions, we match
guarantees that were previously only known to hold for i.i.d. Gaussian matrices
that require significantly more computation. We make use of a practical
iterative algorithm, and perform numerical experiments on image datasets to
corroborate our theoretical results.
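A minimal sketch (not the authors' code) of how such 1-bit measurements can be formed, assuming the measurement model y = sign(S C D x) with a random-sign diagonal D, a Gaussian circulant C, and row subsampling S; dimensions, seed, and the sign-flip convention are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 256, 100                                # ambient dimension, number of 1-bit measurements

g = rng.standard_normal(n)                     # Gaussian vector generating the circulant matrix C
d = rng.choice([-1.0, 1.0], size=n)            # random signs (diagonal of D)
rows = rng.choice(n, size=m, replace=False)    # row subsampling: the "partial" part

def measure(x):
    # C @ (D x) in O(n log n): multiplying by a circulant matrix is a circular
    # convolution, i.e. a pointwise product in the Fourier domain. This FFT
    # trick is what makes structured sensing matrices faster than dense ones.
    Cx = np.fft.ifft(np.fft.fft(g) * np.fft.fft(d * x)).real
    return np.sign(Cx[rows])                   # keep m rows, quantize to one bit

x = rng.standard_normal(n)
x /= np.linalg.norm(x)                         # sign measurements only determine the direction
y = measure(x)                                 # y in {-1, +1}^m
```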
Related papers
- Optimal Quantization for Matrix Multiplication [35.007966885532724]
We present a universal quantizer based on nested lattices with an explicit guarantee of approximation error for any (non-random) pair of matrices $A$, $B$ in terms of only the Frobenius norms $\|A\|_F$, $\|B\|_F$ and $\|A^\top B\|_F$.
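As a toy illustration only (entrywise rounding to a scaled integer lattice, the simplest lattice quantizer, not the paper's nested-lattice construction; the step size is an arbitrary assumption):

```python
import numpy as np

def lattice_quantize(M, step):
    # Entrywise rounding to the scaled integer lattice step * Z^{m x n}.
    return step * np.round(M / step)

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 32))
B = rng.standard_normal((64, 32))
Aq, Bq = lattice_quantize(A, 0.1), lattice_quantize(B, 0.1)

# Error of the product of quantized factors, the quantity the paper bounds
# in terms of ||A||_F, ||B||_F and ||A^T B||_F.
err = np.linalg.norm(Aq.T @ Bq - A.T @ B)
```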
arXiv Detail & Related papers (2024-10-17T17:19:48Z)
- A Majorization-Minimization Gauss-Newton Method for 1-Bit Matrix Completion [15.128477070895055]
We propose a novel method for 1-bit matrix completion called Majorization-Minimization Gauss-Newton (MMGN).
Our method is based on the majorization-minimization principle, which converts the original optimization problem into a sequence of standard low-rank matrix completion problems.
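The majorization-minimization principle itself is easy to demonstrate on a simpler problem; the sketch below applies it to $\ell_1$ regression (iteratively reweighted least squares), not to 1-bit matrix completion, so it only illustrates the surrogate-then-minimize loop that MMGN builds on:

```python
import numpy as np

def irls_l1(A, b, iters=50, eps=1e-8):
    # MM for min_x ||Ax - b||_1: at the current residual r, each |r_i| is
    # majorized by a quadratic touching it at r_i, so every MM step reduces
    # to a standard weighted least-squares solve.
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(A @ x - b), eps)   # surrogate weights
        Aw = A * w[:, None]                            # W A (rows rescaled)
        x = np.linalg.solve(A.T @ Aw, Aw.T @ b)        # minimize the surrogate
    return x
```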
arXiv Detail & Related papers (2023-04-27T03:16:52Z)
- A Structured Sparse Neural Network and Its Matrix Calculations Algorithm [0.0]
We introduce a nonsymmetric tridiagonal matrix with sparse off-diagonal entries and offset sub- and super-diagonals.
For the cases where the matrix inverse does not exist, a least-squares-type pseudoinverse is provided.
Results show a significant reduction in computational cost, especially as the matrix size increases.
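A small numpy sketch of a matrix with this kind of structure, with the offset k and all entries chosen arbitrarily for illustration, together with the least-squares pseudoinverse used when the inverse does not exist:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 3    # matrix size and the offset of the extra diagonals (illustrative)

M = (np.diag(rng.standard_normal(n)) +          # main diagonal
     np.diag(rng.standard_normal(n - 1), 1) +   # super-diagonal
     np.diag(rng.standard_normal(n - 1), -1) +  # sub-diagonal (nonsymmetric)
     np.diag(rng.standard_normal(n - k), k) +   # offset super-diagonal
     np.diag(rng.standard_normal(n - k), -k))   # offset sub-diagonal

M_pinv = np.linalg.pinv(M)   # least-squares pseudoinverse when M is singular
```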
arXiv Detail & Related papers (2022-07-02T19:38:48Z)
- A Weighted Generalized Coherence Approach for Sensing Matrix Design [2.3478438171452014]
We propose generalizations of the well-known mutual coherence criterion for optimizing sensing matrices.
An algorithm is also presented to solve the optimization problems.
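For reference, the unweighted criterion being generalized can be computed in a few lines (this is the standard definition of mutual coherence, not the paper's weighted variant):

```python
import numpy as np

def mutual_coherence(A):
    # Largest absolute inner product between distinct l2-normalized columns.
    An = A / np.linalg.norm(A, axis=0, keepdims=True)
    G = np.abs(An.T @ An)
    np.fill_diagonal(G, 0.0)
    return G.max()
```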
arXiv Detail & Related papers (2021-10-06T10:44:21Z)
- Non-PSD Matrix Sketching with Applications to Regression and Optimization [56.730993511802865]
We present dimensionality reduction methods for non-PSD and "square-root" matrices.
We show how these techniques can be used for multiple downstream tasks.
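A generic Gaussian-sketching example of the kind of approximation at stake (standard subspace-embedding usage, not the paper's non-PSD method; sizes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 5000, 50, 400
A = rng.standard_normal((n, d))
S = rng.standard_normal((k, n)) / np.sqrt(k)   # Gaussian sketching matrix

# (SA)^T (SA) approximates A^T A with k << n rows; the paper extends
# guarantees of this type beyond the PSD setting.
rel_err = (np.linalg.norm((S @ A).T @ (S @ A) - A.T @ A) /
           np.linalg.norm(A.T @ A))
```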
arXiv Detail & Related papers (2021-06-16T04:07:48Z)
- A generalization of the randomized singular value decomposition [2.538209532048867]
We generalize the theory of randomized SVD to multivariate Gaussian vectors, allowing one to incorporate prior knowledge of $A$ into the algorithm.
We construct a new covariance kernel for GPs, based on weighted Jacobi polynomials, which allows us to rapidly sample the GP and control the smoothness of the randomly generated functions.
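For context, the baseline being generalized, randomized SVD with an i.i.d. Gaussian test matrix, looks like this (the standard range-finder recipe, not the paper's correlated-Gaussian variant):

```python
import numpy as np

def randomized_svd(A, k, p=5, rng=None):
    # Range finder with an i.i.d. Gaussian test matrix, then an exact SVD of
    # the small projected matrix Q^T A; p is a small oversampling parameter.
    rng = rng or np.random.default_rng()
    Omega = rng.standard_normal((A.shape[1], k + p))
    Q, _ = np.linalg.qr(A @ Omega)
    U, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return Q @ U[:, :k], s[:k], Vt[:k]
```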
arXiv Detail & Related papers (2021-05-27T10:39:37Z)
- Understanding Implicit Regularization in Over-Parameterized Single Index Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z)
- Robust Compressed Sensing using Generative Models [98.64228459705859]
In this paper we propose an algorithm inspired by the Median-of-Means (MOM).
Our algorithm guarantees recovery for heavy-tailed data, even in the presence of outliers.
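The MOM building block itself is simple; a minimal sketch of the estimator alone (not the paper's recovery algorithm):

```python
import numpy as np

def median_of_means(z, n_blocks):
    # Split the samples into blocks, average within each block, and take the
    # median of the block means: robust to heavy tails and a few outliers.
    blocks = np.array_split(np.asarray(z), n_blocks)
    return np.median([b.mean() for b in blocks])
```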
arXiv Detail & Related papers (2020-06-16T19:07:41Z)
- Sample Complexity Bounds for 1-bit Compressive Sensing and Binary Stable Embeddings with Generative Priors [52.06292503723978]
Motivated by advances in compressive sensing with generative models, we study the problem of 1-bit compressive sensing with generative models.
We first consider noiseless 1-bit measurements, and provide sample complexity bounds for approximate recovery under i.i.d. Gaussian measurements.
We demonstrate that the Binary $\epsilon$-Stable Embedding property, which characterizes the robustness of the reconstruction to measurement errors and noise, also holds for 1-bit compressive sensing with Lipschitz continuous generative models.
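A quick numerical illustration of the underlying fact for i.i.d. Gaussian measurements, namely that the sign-pattern Hamming distance tracks the angular distance, which a binary stable embedding makes uniform over the signal set (dimensions and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 100, 20000
A = rng.standard_normal((m, n))   # i.i.d. Gaussian measurement matrix

x1 = rng.standard_normal(n); x1 /= np.linalg.norm(x1)
x2 = rng.standard_normal(n); x2 /= np.linalg.norm(x2)

# For a single Gaussian row a, P(sign(a.x1) != sign(a.x2)) equals the angle
# between x1 and x2 divided by pi, so these two quantities should be close.
hamming = np.mean(np.sign(A @ x1) != np.sign(A @ x2))
angle = np.arccos(np.clip(x1 @ x2, -1.0, 1.0)) / np.pi
```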
arXiv Detail & Related papers (2020-02-05T09:44:10Z)
- Optimal Iterative Sketching with the Subsampled Randomized Hadamard Transform [64.90148466525754]
We study the performance of iterative sketching for least-squares problems.
We show that the convergence rates for Haar and randomized Hadamard matrices are identical, and asymptotically improve upon random projections.
These techniques may be applied to other algorithms that employ randomized dimension reduction.
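A compact SRHT sketch for reference (a standard construction, not the paper's code; the dense hadamard call is for clarity only and assumes the row count is a power of two):

```python
import numpy as np
from scipy.linalg import hadamard

def srht(A, m, rng=None):
    # S A = sqrt(n/m) * P H D A: random signs D, an orthonormal Hadamard
    # transform H, then uniform row subsampling P. A fast Walsh-Hadamard
    # transform would bring this to O(n log n) per column.
    rng = rng or np.random.default_rng()
    n = A.shape[0]                                 # must be a power of two
    D = rng.choice([-1.0, 1.0], size=n)
    H = hadamard(n) / np.sqrt(n)
    rows = rng.choice(n, size=m, replace=False)
    return np.sqrt(n / m) * (H @ (D[:, None] * A))[rows]
```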
arXiv Detail & Related papers (2020-02-03T16:17:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.