The Average-Case Time Complexity of Certifying the Restricted Isometry
Property
- URL: http://arxiv.org/abs/2005.11270v3
- Date: Thu, 22 Apr 2021 16:00:12 GMT
- Title: The Average-Case Time Complexity of Certifying the Restricted Isometry
Property
- Authors: Yunzi Ding, Dmitriy Kunisky, Alexander S. Wein, Afonso S. Bandeira
- Abstract summary: In compressed sensing, the restricted isometry property (RIP) on $M \times N$ sensing matrices guarantees efficient reconstruction of sparse vectors.
We investigate the exact average-case time complexity of certifying the RIP property for $M \times N$ matrices with i.i.d. $\mathcal{N}(0,1/M)$ entries.
- Score: 66.65353643599899
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In compressed sensing, the restricted isometry property (RIP) on $M \times N$
sensing matrices (where $M < N$) guarantees efficient reconstruction of sparse
vectors. A matrix has the $(s,\delta)$-$\mathsf{RIP}$ property if behaves as a
$\delta$-approximate isometry on $s$-sparse vectors. It is well known that an
$M\times N$ matrix with i.i.d. $\mathcal{N}(0,1/M)$ entries is
$(s,\delta)$-$\mathsf{RIP}$ with high probability as long as $s\lesssim
\delta^2 M/\log N$. On the other hand, most prior works aiming to
deterministically construct $(s,\delta)$-$\mathsf{RIP}$ matrices have failed
when $s \gg \sqrt{M}$. An alternative way to find an RIP matrix could be to
draw a random Gaussian matrix and certify that it is indeed RIP. However, there
is evidence that this certification task is computationally hard when $s \gg
\sqrt{M}$, both in the worst case and the average case.
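Concretely, the $(s,\delta)$-$\mathsf{RIP}$ condition referred to above is the standard two-sided bound
\[
(1-\delta)\,\|x\|_2^2 \;\le\; \|Ax\|_2^2 \;\le\; (1+\delta)\,\|x\|_2^2
\qquad \text{for every } s\text{-sparse } x \in \mathbb{R}^N,
\]
where $A$ is the $M \times N$ sensing matrix; this is the precise sense in which $A$ acts as a $\delta$-approximate isometry on $s$-sparse vectors.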
In this paper, we investigate the exact average-case time complexity of
certifying the RIP property for $M\times N$ matrices with i.i.d.
$\mathcal{N}(0,1/M)$ entries, in the "possible but hard" regime $\sqrt{M} \ll
s\lesssim M/\log N$. Based on analysis of the low-degree likelihood ratio, we
give rigorous evidence that subexponential runtime $N^{\tilde\Omega(s^2/M)}$ is
required, demonstrating a smooth tradeoff between the maximum tolerated
sparsity and the required computational power. This lower bound is essentially
tight, matching the runtime of an existing algorithm due to Koiran and Zouzias.
Our hardness result allows $\delta$ to take any constant value in $(0,1)$,
which captures the relevant regime for compressed sensing. This improves upon
the existing average-case hardness result of Wang, Berthet, and Plan, which is
limited to $\delta = o(1)$.
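To make the certification task concrete, here is a minimal, self-contained sketch of the naive exhaustive certifier (the function name and toy parameters below are ours for illustration; this is neither the Koiran-Zouzias algorithm nor the low-degree analysis from the paper). It draws an $M \times N$ matrix with i.i.d. $\mathcal{N}(0,1/M)$ entries and checks $(s,\delta)$-$\mathsf{RIP}$ by examining the extreme squared singular values of every $M \times s$ column submatrix, which takes $\binom{N}{s} = N^{\Theta(s)}$ time and is therefore only usable at toy sizes.

```python
import itertools

import numpy as np


def certify_rip_bruteforce(A, s, delta):
    """Illustrative brute-force (s, delta)-RIP certifier (not from the paper).

    Checks that every M x s column submatrix of A has all squared singular
    values in [1 - delta, 1 + delta], which is equivalent to A acting as a
    delta-approximate isometry on s-sparse vectors.  The loop visits all
    C(N, s) supports, so the runtime scales like N^s.
    """
    _, N = A.shape
    for support in itertools.combinations(range(N), s):
        sub = A[:, list(support)]                       # M x s submatrix
        sv = np.linalg.svd(sub, compute_uv=False)       # singular values, descending
        if sv[0] ** 2 > 1 + delta or sv[-1] ** 2 < 1 - delta:
            return False, support                       # witness of an RIP violation
    return True, None                                   # certified (s, delta)-RIP


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M, N, s, delta = 50, 60, 2, 0.5                     # toy sizes only
    A = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))  # i.i.d. N(0, 1/M) entries
    certified, witness = certify_rip_bruteforce(A, s, delta)
    print("certified (s, delta)-RIP:", certified, "| violating support:", witness)
```

For context, the paper's point is that in the regime $\sqrt{M} \ll s \lesssim M/\log N$ one can do much better than this $N^{\Theta(s)}$ enumeration, but (under the low-degree heuristic) not better than $N^{\tilde\Omega(s^2/M)}$.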
Related papers
- Optimal Sketching for Residual Error Estimation for Matrix and Vector Norms [50.15964512954274]
We study the problem of residual error estimation for matrix and vector norms using a linear sketch.
We demonstrate that this gives a substantial advantage empirically, for roughly the same sketch size and accuracy as in previous work.
We also show an $\Omega(k^{2/p} n^{1-2/p})$ lower bound for the sparse recovery problem, which is tight up to a $\mathrm{poly}(\log n)$ factor.
arXiv Detail & Related papers (2024-08-16T02:33:07Z) - Optimal Estimator for Linear Regression with Shuffled Labels [17.99906229036223]
This paper considers the task of linear regression with shuffled labels.
The observations, unknown correspondence, signal, and noise are modeled by $\mathbf{Y} \in \mathbb{R}^{n\times m}$, $\mathbf{\Pi} \in \mathbb{R}^{n\times p}$, $\mathbf{B} \in \mathbb{R}^{p\times m}$, and $\mathbf{W} \in \mathbb{R}^{n\times m}$, respectively.
arXiv Detail & Related papers (2023-10-02T16:44:47Z) - Matrix Completion in Almost-Verification Time [37.61139884826181]
We provide an algorithm which completes $\mathbf{M}$ on $99\%$ of rows and columns.
We show how to boost this partial completion guarantee to a full matrix completion algorithm.
arXiv Detail & Related papers (2023-08-07T15:24:49Z) - A Nearly-Optimal Bound for Fast Regression with $\ell_\infty$ Guarantee [16.409210914237086]
Given a matrix $A \in \mathbb{R}^{n\times d}$ and a vector $b \in \mathbb{R}^n$, we consider the regression problem with $\ell_\infty$ guarantees.
We show that in order to obtain such an $\ell_\infty$ guarantee for $\ell_2$ regression, one has to use sketching matrices that are dense.
We also develop a novel analytical framework for $\ell_\infty$-guarantee regression that utilizes the Oblivious Coordinate-wise Embedding (OCE) property.
arXiv Detail & Related papers (2023-02-01T05:22:40Z) - Optimal Query Complexities for Dynamic Trace Estimation [59.032228008383484]
We consider the problem of minimizing the number of matrix-vector queries needed for accurate trace estimation in the dynamic setting where our underlying matrix is changing slowly.
We provide a novel binary tree summation procedure that simultaneously estimates all $m$ traces up to $\epsilon$ error with $\delta$ failure probability.
Our lower bounds (1) give the first tight bounds for Hutchinson's estimator in the matrix-vector product model with Frobenius norm error even in the static setting, and (2) are the first unconditional lower bounds for dynamic trace estimation.
arXiv Detail & Related papers (2022-09-30T04:15:44Z) - Low-Rank Approximation with $1/\epsilon^{1/3}$ Matrix-Vector Products [58.05771390012827]
We study iterative methods based on Krylov subspaces for low-rank approximation under any Schatten-$p$ norm.
Our main result is an algorithm that uses only $\tilde{O}(k/\sqrt{\epsilon})$ matrix-vector products.
arXiv Detail & Related papers (2022-02-10T16:10:41Z) - Spectral properties of sample covariance matrices arising from random
matrices with independent non identically distributed columns [50.053491972003656]
It was previously shown that the functionals $\mathrm{tr}(AR(z))$, for $R(z) = (\frac{1}{n}XX^T - zI_p)^{-1}$ and $A \in \mathcal{M}_p$ deterministic, have a standard deviation of order $O(\|A\|_* / \sqrt{n})$.
Here, we bound the Frobenius norm $\|\mathbb{E}[R(z)] - \tilde{R}(z)\|_F$.
arXiv Detail & Related papers (2021-09-06T14:21:43Z) - Learning a Latent Simplex in Input-Sparsity Time [58.30321592603066]
We consider the problem of learning a latent $k$-vertex simplex $K \subset \mathbb{R}^d$, given access to $A \in \mathbb{R}^{d\times n}$.
We show that the dependence on $k$ in the running time is unnecessary given a natural assumption about the mass of the top $k$ singular values of $A$.
arXiv Detail & Related papers (2021-05-17T16:40:48Z)