Compressed sensing of low-rank plus sparse matrices
- URL: http://arxiv.org/abs/2007.09457v2
- Date: Wed, 27 Apr 2022 12:55:06 GMT
- Title: Compressed sensing of low-rank plus sparse matrices
- Authors: Jared Tanner and Simon Vary
- Abstract summary: This manuscript develops similar guarantees showing that $m\times n$ matrices that can be expressed as the sum of a rank-$r$ matrix and an $s$-sparse matrix can be recovered by computationally tractable methods.
Results are shown for synthetic problems, dynamic-foreground/static-background separation, multispectral imaging, and Robust PCA.
- Score: 3.8073142980733
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Expressing a matrix as the sum of a low-rank matrix plus a sparse matrix is a
flexible model capturing global and local features in data popularized as
Robust PCA (Candes et al., 2011; Chandrasekaran et al., 2009). Compressed
sensing, matrix completion, and their variants (Eldar and Kutyniok, 2012;
Foucart and Rauhut, 2013) have established that data satisfying low complexity
models can be efficiently measured and recovered from a number of measurements
proportional to the model complexity rather than the ambient dimension. This
manuscript develops similar guarantees showing that $m\times n$ matrices that
can be expressed as the sum of a rank-$r$ matrix and an $s$-sparse matrix can be
recovered by computationally tractable methods from
$\mathcal{O}\left((r(m+n-r)+s)\log(mn/s)\right)$ linear measurements. More specifically, we
establish that the low-rank plus sparse matrix set is closed provided the
incoherence of the low-rank component is upper bounded as
$\mu<\sqrt{mn}/(r\sqrt{s})$, and subsequently, the restricted isometry
constants for the aforementioned matrices remain bounded independent of problem
size provided $p/mn$, $s/p$, and $r(m+n-r)/p$ remain fixed. Additionally, we
show that semidefinite programming and two hard thresholding gradient descent
algorithms, NIHT and NAHT, converge to the measured matrix provided the
measurement operator's RICs are sufficiently small. These results also
provably solve the convex and non-convex formulations of Robust PCA with the
asymptotically optimal fraction of corruptions $\alpha=\mathcal{O}\left(1/(\mu
r)\right)$, where $s = \alpha^2 mn$, and improve on the previously best known
guarantees by not requiring the fraction of corruptions in each column and
row to be upper bounded by $\alpha$. Numerical experiments
illustrating these results are shown for synthetic problems,
dynamic-foreground/static-background separation, and multispectral imaging.
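The abstract names NIHT and NAHT without spelling them out. Below is a minimal NumPy sketch of an NIHT-style iteration for the low-rank plus sparse model, assuming a dense Gaussian measurement matrix and a simple alternating hard-thresholding projection onto the rank-$r$ plus $s$-sparse set; the function names and the normalized stepsize rule are illustrative, not the authors' reference implementation.

```python
import numpy as np

def hard_threshold_rank(M, r):
    # Keep the best rank-r approximation via a truncated SVD.
    U, sv, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * sv[:r]) @ Vt[:r]

def hard_threshold_sparse(M, s):
    # Keep the s largest-magnitude entries of M, zero out the rest.
    out = np.zeros_like(M)
    keep = np.argsort(np.abs(M), axis=None)[-s:]
    out.flat[keep] = M.flat[keep]
    return out

def niht_lps(A, b, shape, r, s, iters=300):
    # NIHT-style recovery of X = L + S from b = A @ vec(X).
    L = np.zeros(shape)
    S = np.zeros(shape)
    for _ in range(iters):
        X = L + S
        g = (A.T @ (b - A @ X.ravel())).reshape(shape)     # negative gradient
        mu = (g * g).sum() / ((A @ g.ravel()) ** 2).sum()  # normalized stepsize
        W = X + mu * g                                     # gradient step
        L = hard_threshold_rank(W - S, r)                  # alternate the two
        S = hard_threshold_sparse(W - L, s)                # hard thresholds
    return L, S

# Tiny synthetic test: p ~ (r(m+n-r) + s) log(mn/s) Gaussian measurements.
rng = np.random.default_rng(0)
m, n, r, s = 20, 20, 2, 10
L0 = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
S0 = hard_threshold_sparse(rng.standard_normal((m, n)) * 5.0, s)
p = int((r * (m + n - r) + s) * np.log(m * n / s))
A = rng.standard_normal((p, m * n)) / np.sqrt(p)
b = A @ (L0 + S0).ravel()
Lh, Sh = niht_lps(A, b, (m, n), r, s)
print(np.linalg.norm(Lh + Sh - L0 - S0) / np.linalg.norm(L0 + S0))
```

The measurement count in the test follows the $\mathcal{O}\left((r(m+n-r)+s)\log(mn/s)\right)$ scaling from the abstract; since $p < mn$, recovery by plain least squares would be underdetermined.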
Related papers
- One-sided Matrix Completion from Two Observations Per Row [95.87811229292056]
We propose a natural algorithm that involves imputing the missing values of the matrix $X^TX$.
We evaluate our algorithm on one-sided recovery of synthetic data and low-coverage genome sequencing.
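The summary only names the imputation idea. A rough sketch of the natural first stage, assuming a boolean observation mask with two observed entries per row, might look as follows; the helper `estimate_gram` is hypothetical, and the paper's algorithm additionally imputes the column pairs that are never co-observed.

```python
import numpy as np

def estimate_gram(X_obs, mask):
    # Estimate G ~ (1/m) X^T X from partial observations: for each column
    # pair (i, j), average X[k, i] * X[k, j] over the rows k observing both.
    # Pairs never co-observed stay NaN; imputing them is the hard part.
    m, d = X_obs.shape
    G = np.full((d, d), np.nan)
    for i in range(d):
        for j in range(d):
            both = mask[:, i] & mask[:, j]
            if both.any():
                G[i, j] = (X_obs[both, i] * X_obs[both, j]).mean()
    return G
```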
arXiv Detail & Related papers (2023-06-06T22:35:16Z)
- Optimal Query Complexities for Dynamic Trace Estimation [59.032228008383484]
We consider the problem of minimizing the number of matrix-vector queries needed for accurate trace estimation in the dynamic setting where our underlying matrix is changing slowly.
We provide a novel binary tree summation procedure that simultaneously estimates all $m$ traces up to $\epsilon$ error with $\delta$ failure probability.
Our lower bounds (1) give the first tight bounds for Hutchinson's estimator in the matrix-vector product model with Frobenius norm error even in the static setting, and (2) are the first unconditional lower bounds for dynamic trace estimation.
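The binary tree summation procedure itself is not described in this summary. For orientation, here is a minimal sketch of the classical Hutchinson estimator that these query-complexity bounds concern, where the matrix is accessed only through matrix-vector products:

```python
import numpy as np

def hutchinson_trace(matvec, n, num_queries, rng):
    # tr(A) ~ (1/q) * sum_i z_i^T A z_i with Rademacher probes z_i;
    # A is accessed only through the matrix-vector product `matvec`.
    total = 0.0
    for _ in range(num_queries):
        z = rng.choice([-1.0, 1.0], size=n)
        total += z @ matvec(z)
    return total / num_queries

rng = np.random.default_rng(1)
B = rng.standard_normal((100, 100))
A = B @ B.T  # PSD test matrix
print(hutchinson_trace(lambda v: A @ v, 100, 200, rng), np.trace(A))
```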
arXiv Detail & Related papers (2022-09-30T04:15:44Z)
- On the well-spread property and its relation to linear regression [4.619541348328937]
We show that, without a well-spread design matrix, consistent recovery of the parameter vector in a robust linear regression model is information-theoretically impossible.
We show that it is possible to efficiently certify whether a given $n$-by-$d$ matrix is well-spread if the number of observations is quadratic in the ambient dimension.
arXiv Detail & Related papers (2022-06-16T11:17:44Z)
- Computationally Efficient Approximations for Matrix-based Renyi's Entropy [33.72108955447222]
Recently developed matrix-based Renyi's entropy enables the measurement of information in data.
Computing this quantity involves the trace of a PSD matrix $G$ raised to the power $\alpha$, i.e., $\mathrm{tr}(G^\alpha)$.
We present computationally efficient approximations to this new entropy functional that can reduce its complexity to significantly less than $O(n^2)$.
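For reference, the exact quantity can be computed from an eigendecomposition at $O(n^3)$ cost, the baseline these approximations undercut. A minimal sketch, assuming the usual unit-trace normalization of a kernel Gram matrix (the Gaussian kernel here is an illustrative choice):

```python
import numpy as np

def matrix_renyi_entropy(G, alpha):
    # Exact S_alpha(G) = log(tr(G^alpha)) / (1 - alpha) for PSD G,
    # normalized to unit trace; costs O(n^3) via eigendecomposition.
    G = G / np.trace(G)
    lam = np.linalg.eigvalsh(G)
    lam = lam[lam > 1e-12]  # drop numerically zero eigenvalues
    return np.log((lam ** alpha).sum()) / (1.0 - alpha)

rng = np.random.default_rng(2)
X = rng.standard_normal((50, 5))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
G = np.exp(-0.5 * sq)  # Gaussian-kernel Gram matrix
print(matrix_renyi_entropy(G, alpha=2.0))
```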
arXiv Detail & Related papers (2021-12-27T14:59:52Z)
- Non-PSD Matrix Sketching with Applications to Regression and Optimization [56.730993511802865]
We present dimensionality reduction methods for non-PSD and "square-root" matrices.
We show how these techniques can be used for multiple downstream tasks.
arXiv Detail & Related papers (2021-06-16T04:07:48Z)
- Unique sparse decomposition of low rank matrices [17.037882881652617]
We find a unique decomposition of a low-rank matrix $Y\in\mathbb{R}^{p\times n}$.
We prove that, up to a signed permutation, the decomposition $Y=AX$ with an element-wise sparse $X\in\mathbb{R}^{r\times n}$ is unique.
arXiv Detail & Related papers (2021-06-14T20:05:59Z)
- Linear-Sample Learning of Low-Rank Distributions [56.59844655107251]
We show that learning $k\times k$, rank-$r$, matrices to normalized $L_1$ distance requires $\Omega\left(\frac{kr}{\epsilon^2}\right)$ samples.
We propose an algorithm that uses $\mathcal{O}\left(\frac{kr}{\epsilon^2}\log^2\frac{1}{\epsilon}\right)$ samples, a number linear in the high dimension and nearly linear in the matrices' typically low rank.
arXiv Detail & Related papers (2020-09-30T19:10:32Z)
- Robust Low-rank Matrix Completion via an Alternating Manifold Proximal Gradient Continuation Method [47.80060761046752]
Robust low-rank matrix completion (RMC) has been studied extensively for computer vision, signal processing and machine learning applications.
This problem aims to decompose a partially observed matrix into the superposition of a low-rank matrix and a sparse matrix, where the sparse matrix captures the grossly corrupted entries of the matrix.
A widely used approach to tackle RMC is to consider a convex formulation, which minimizes the nuclear norm of the low-rank matrix (to promote low-rankness) plus the $\ell_1$ norm of the sparse matrix (to promote sparsity); a minimal sketch of this formulation follows below.
In this paper, motivated by some recent works on low-
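Here is a minimal sketch of the convex formulation described above, using alternating proximal steps: singular value thresholding for the nuclear norm and entrywise soft-thresholding for the $\ell_1$ norm, applied to a penalized version of the observation constraint. It is a simplified stand-in, not the paper's alternating manifold proximal gradient continuation method.

```python
import numpy as np

def svt(M, tau):
    # Singular value thresholding: proximal operator of tau * nuclear norm.
    U, sv, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(sv - tau, 0.0)) @ Vt

def soft(M, tau):
    # Entrywise soft-thresholding: proximal operator of tau * l1 norm.
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def rmc_convex(M, mask, lam=0.05, step=1.0, iters=500):
    # Alternating proximal gradient on
    #   min ||L||_* + lam * ||S||_1 + 0.5 * ||P_Omega(L + S - M)||_F^2,
    # where mask is the 0/1 indicator of the observed set Omega.
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(iters):
        L = svt(L - step * mask * (L + S - M), step)
        S = soft(S - step * mask * (L + S - M), step * lam)
    return L, S
```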
arXiv Detail & Related papers (2020-08-18T04:46:22Z)
- Information-Theoretic Limits for the Matrix Tensor Product [8.206394018475708]
This paper studies a high-dimensional inference problem involving the matrix tensor product of random matrices.
On the technical side, this paper introduces some new techniques for the analysis of high-dimensional matrix-preserving signals.
arXiv Detail & Related papers (2020-05-22T17:03:48Z)
- Relative Error Bound Analysis for Nuclear Norm Regularized Matrix Completion [101.83262280224729]
We develop a relative error bound for nuclear norm regularized matrix completion.
We derive a relative upper bound for recovering the best low-rank approximation of the unknown matrix.
arXiv Detail & Related papers (2015-04-26T13:12:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.