Optimal Exact Matrix Completion Under new Parametrization
- URL: http://arxiv.org/abs/2002.02431v3
- Date: Sat, 5 Mar 2022 00:32:39 GMT
- Title: Optimal Exact Matrix Completion Under new Parametrization
- Authors: Ilqar Ramazanli, Barnabas Poczos
- Abstract summary: We study the problem of exact completion for an $m \times n$ matrix of rank $r$ with the adaptive sampling method.
We propose matrix completion algorithms that exactly recover the target matrix.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the problem of exact completion for an $m \times n$ matrix of
rank $r$ with the adaptive sampling method.
We introduce a relation between the exact completion problem and the sparsest
vector of the column and row spaces (which we call the \textit{sparsity-number} here).
Using this relation, we propose matrix completion algorithms that exactly
recover the target matrix. These algorithms are superior to previous works in
two important ways. First, our algorithms exactly recover matrices with
$\mu_0$-coherent column spaces with probability at least $1 - \epsilon$, using a
much smaller observation complexity than the state-of-the-art
$\mathcal{O}(\mu_0 rn \log\frac{r}{\epsilon})$. In particular, many previous
adaptive sampling methods must observe the entire matrix when the column space
is highly coherent, whereas we show that our method can still recover such
matrices from a small fraction of the entries in many scenarios. Second, we
propose an exact completion algorithm that requires minimal prior information,
provided that either the row or the column space is not highly coherent. At the
end of the paper, we provide experimental results that illustrate the strength
of the algorithms proposed here.
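The adaptive-sampling mechanism can be made concrete with a small sketch. The following Python/NumPy code is a minimal illustration of the general adaptive recipe (in the spirit of earlier adaptive completion methods, not the exact algorithm of this paper): observe a few entries of each column, test whether they are consistent with the column space recovered so far, and fully observe only the columns that reveal a new direction. The function name, the per-column budget `d`, and the residual tolerance are illustrative assumptions.

```python
import numpy as np

def adaptive_complete(M, d, tol=1e-8, rng=np.random.default_rng(0)):
    """Minimal sketch of adaptive exact completion for a low rank matrix.

    For each column, observe d entries at random rows. If they are
    inconsistent with the column space recovered so far, observe the
    whole column and enlarge the basis; otherwise reconstruct the
    column from the d observations alone. `M` is indexed only where an
    entry is treated as observed. Needs d >= rank for the test to work.
    """
    m, n = M.shape
    U = np.zeros((m, 0))                 # orthonormal basis found so far
    M_hat = np.zeros((m, n))
    for j in range(n):
        omega = rng.choice(m, size=d, replace=False)
        x = M[omega, j]                  # the d observed entries
        if U.shape[1] == 0:
            coef, res = np.zeros(0), x
        else:
            coef, *_ = np.linalg.lstsq(U[omega], x, rcond=None)
            res = x - U[omega] @ coef
        if np.linalg.norm(res) > tol * max(1.0, np.linalg.norm(x)):
            col = M[:, j]                # new direction: observe fully
            q = col - U @ (U.T @ col)    # orthogonalize against the basis
            U = np.hstack([U, q[:, None] / np.linalg.norm(q)])
            M_hat[:, j] = col
        else:
            M_hat[:, j] = U @ coef       # recovered from d entries only
    return M_hat
```

On an incoherent rank-$r$ matrix only about $r$ columns end up fully observed, for roughly $dn + rm$ observations in total; the sparsity-number analysis in the paper is what governs how small $d$ can be taken, including for coherent column spaces.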
Related papers
- Model-free Low-Rank Reinforcement Learning via Leveraged Entry-wise Matrix Estimation [48.92318828548911]
We present LoRa-PI (Low-Rank Policy Iteration), a model-free learning algorithm alternating between policy improvement and policy evaluation steps.
LoRa-PI learns an $\varepsilon$-optimal policy using $\widetilde{O}\left(\frac{S+A}{\mathrm{poly}(1-\gamma)\,\varepsilon^{2}}\right)$ samples, where $S$ (resp. $A$) denotes the number of states (resp. actions) and $\gamma$ the discount factor.
arXiv Detail & Related papers (2024-10-30T20:22:17Z)
- One-sided Matrix Completion from Two Observations Per Row [95.87811229292056]
We propose a natural algorithm that involves imputing the missing values of the matrix $X^T X$.
We evaluate our algorithm on one-sided recovery of synthetic data and low-coverage genome sequencing.
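As a toy illustration of the $X^T X$ imputation step (my own construction, not the paper's estimator), suppose each row of $X$ is observed in exactly two columns; products of co-observed entries then give estimates of Gram matrix entries, and column pairs that are never co-observed are the "missing values" left to impute:

```python
import numpy as np

def estimate_gram(obs, n_cols):
    """Toy estimate of X^T X from two observed entries per row.

    `obs` is a list of (j, k, x_j, x_k): the two observed column
    indices and values for one row. Averaged products estimate Gram
    entries; a sampling-rate rescaling is omitted for simplicity.
    """
    num = np.zeros((n_cols, n_cols))
    cnt = np.zeros((n_cols, n_cols))
    for j, k, xj, xk in obs:
        for a, b, v in [(j, k, xj * xk), (k, j, xj * xk),
                        (j, j, xj * xj), (k, k, xk * xk)]:
            num[a, b] += v
            cnt[a, b] += 1
    G = np.full((n_cols, n_cols), np.nan)
    mask = cnt > 0
    G[mask] = num[mask] / cnt[mask]
    return G  # NaNs mark the entries of X^T X still to be imputed
```

Since $X^T X$ has rank at most $\mathrm{rank}(X)$, the remaining missing entries can themselves be filled by low-rank completion, which is the imputation the summary refers to.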
arXiv Detail & Related papers (2023-06-06T22:35:16Z)
- Optimal Query Complexities for Dynamic Trace Estimation [59.032228008383484]
We consider the problem of minimizing the number of matrix-vector queries needed for accurate trace estimation in the dynamic setting where our underlying matrix is changing slowly.
We provide a novel binary tree summation procedure that simultaneously estimates all $m$ traces up to $\epsilon$ error with $\delta$ failure probability.
Our lower bounds (1) give the first tight bounds for Hutchinson's estimator in the matrix-vector product model with Frobenius norm error even in the static setting, and (2) are the first unconditional lower bounds for dynamic trace estimation.
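For context, the baseline here is Hutchinson's estimator applied from scratch at every time step; in the dynamic setting one can instead estimate the trace of the difference $A_t - A_{t-1}$, which is cheap when the matrix changes slowly, and telescope. The sketch below shows only that simple variance-reduction idea (illustrative; it does not implement the paper's binary tree summation procedure, which avoids the error accumulation of naive telescoping):

```python
import numpy as np

def hutchinson(matvec, n, k, rng):
    """Hutchinson's estimator: E[g^T A g] = tr(A) for random sign vectors g."""
    return sum(g @ matvec(g)
               for g in (rng.choice([-1.0, 1.0], size=n) for _ in range(k))) / k

def dynamic_traces(matvecs, n, k, rng=np.random.default_rng(0)):
    """Estimate tr(A_t) for a slowly changing sequence of matrices,
    given as a list of matrix-vector product callables `matvecs`,
    by applying Hutchinson to each successive difference."""
    traces = [hutchinson(matvecs[0], n, k, rng)]
    for prev, cur in zip(matvecs, matvecs[1:]):
        delta = lambda g, p=prev, c=cur: c(g) - p(g)   # matvec of A_t - A_{t-1}
        traces.append(traces[-1] + hutchinson(delta, n, k, rng))
    return traces
```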
arXiv Detail & Related papers (2022-09-30T04:15:44Z)
- Sketching Algorithms and Lower Bounds for Ridge Regression [65.0720777731368]
We give a sketching-based iterative algorithm that computes $(1+\varepsilon)$-approximate solutions for the ridge regression problem.
We also show that this algorithm can be used to give faster algorithms for kernel ridge regression.
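The simplest point of comparison is classical sketch-and-solve, sketched below under stated assumptions (the paper's algorithm is iterative and reaches $(1+\varepsilon)$ accuracy with smaller sketches; the function name and the Gaussian sketch choice are mine):

```python
import numpy as np

def sketched_ridge(A, b, lam, s, rng=np.random.default_rng(0)):
    """Sketch-and-solve ridge regression: compress the n x d least
    squares problem to s x d with a Gaussian sketch S, then solve the
    small ridge problem exactly via its normal equations."""
    n, d = A.shape
    S = rng.standard_normal((s, n)) / np.sqrt(s)   # Gaussian sketch
    SA, Sb = S @ A, S @ b
    return np.linalg.solve(SA.T @ SA + lam * np.eye(d), SA.T @ Sb)
```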
arXiv Detail & Related papers (2022-04-13T22:18:47Z)
- Nonparametric Matrix Estimation with One-Sided Covariates [5.457150493905064]
We consider the task of matrix estimation in which a dataset $X \in \mathbb{R}^{n \times m}$ is observed with sparsity $p$.
We provide an algorithm and an analysis showing that it improves upon naively estimating each row separately when the number of rows is not too small.
arXiv Detail & Related papers (2021-10-26T19:11:45Z)
- Low-rank Matrix Recovery With Unknown Correspondence [62.634051913953485]
We show that it is possible to recover $M$ via solving a nuclear norm minimization problem under a proper low-rank condition on $M$, with a provable non-asymptotic error bound for the recovery of $M$.
Experiments on simulated data, the MovieLens 100K dataset and the Yale B database show that $\text{M}^3\text{O}$ achieves state-of-the-art performance over several baselines and can recover the ground-truth correspondence with high accuracy.
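For the nuclear norm minimization component, a minimal sketch of the standard singular value thresholding (SVT) iteration is shown below; it illustrates how a low-rank $M$ is recovered from observed entries, but deliberately ignores the unknown-correspondence aspect that is this paper's actual contribution. Parameter values are placeholders:

```python
import numpy as np

def svt_complete(M_obs, mask, tau=5.0, step=1.2, iters=200):
    """Singular value thresholding for nuclear norm matrix completion.

    mask[i, j] == True where M is observed; M_obs may hold anything at
    unobserved positions since they are masked out.
    """
    Y = np.zeros_like(M_obs)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        L = (U * np.maximum(s - tau, 0.0)) @ Vt   # shrink singular values
        Y += step * mask * (M_obs - L)            # gradient step on observed set
    return L
```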
arXiv Detail & Related papers (2021-10-15T09:27:50Z)
- Unique sparse decomposition of low rank matrices [17.037882881652617]
We seek a unique decomposition of a low rank matrix $Y \in \mathbb{R}^{p \times n}$ that admits a sparse representation.
We prove that, up to an intrinsic signed permutation, the sparse-wise decomposition $Y = AX$ with $X \in \mathbb{R}^{r \times n}$ is unique.
arXiv Detail & Related papers (2021-06-14T20:05:59Z)
- Hutch++: Optimal Stochastic Trace Estimation [75.45968495410048]
We introduce a new randomized algorithm, Hutch++, which computes a $(1 \pm \epsilon)$ approximation to $\mathrm{tr}(A)$ for any positive semidefinite (PSD) $A$.
We show that it significantly outperforms Hutchinson's method in experiments.
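Hutch++ itself is compact enough to sketch. With a query budget $m$, it spends a third of the queries on a randomized range finder, computes the trace on that subspace exactly, and applies Hutchinson's estimator only to the deflated remainder. A sketch under the assumption that $A$ is accessed through a `matvec` callable:

```python
import numpy as np

def hutch_pp(matvec, n, m, rng=np.random.default_rng(0)):
    """Hutch++ trace estimator with a budget of m matvecs (m divisible by 3)."""
    k = m // 3
    S = rng.choice([-1.0, 1.0], size=(n, k))
    AS = np.column_stack([matvec(S[:, i]) for i in range(k)])
    Q, _ = np.linalg.qr(AS)                        # orthonormal range basis
    AQ = np.column_stack([matvec(Q[:, i]) for i in range(Q.shape[1])])
    t_low = np.trace(Q.T @ AQ)                     # exact trace of Q^T A Q
    G = rng.choice([-1.0, 1.0], size=(n, k))
    G = G - Q @ (Q.T @ G)                          # deflate: (I - QQ^T) g
    t_rest = sum(G[:, i] @ matvec(G[:, i]) for i in range(k)) / k
    return t_low + t_rest
```

The gain over plain Hutchinson comes from the remainder $(I - QQ^T)A(I - QQ^T)$ having a much smaller trace for PSD $A$, so its Hutchinson estimate has far lower variance.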
arXiv Detail & Related papers (2020-10-19T16:45:37Z)
- Solving the Robust Matrix Completion Problem via a System of Nonlinear Equations [28.83358353043287]
We consider the problem of robust matrix completion, which aims to recover a low rank matrix $L_*$ and a sparse matrix $S_*$ from incomplete observations of their sum $M = L_* + S_* \in \mathbb{R}^{m \times n}$.
The algorithm is highly parallelizable and suitable for large scale problems.
Numerical simulations show that the simple method works as expected and is comparable with state-of-the-art methods.
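A hedged toy version of the underlying decomposition (not the paper's nonlinear-equations solver) alternates hard thresholding for the sparse part $S$ with a rank-$r$ truncated SVD for the low rank part $L$, using only the observed entries; the threshold `zeta` and iteration count are placeholders:

```python
import numpy as np

def robust_complete(M_obs, mask, r, zeta=0.1, iters=50):
    """Toy alternating scheme for robust matrix completion.

    mask[i, j] == True where M = L + S is observed. Each iteration
    thresholds the observed residual to update S, then projects the
    S-corrected data onto the set of rank-r matrices.
    """
    L = np.zeros_like(M_obs)
    for _ in range(iters):
        R = mask * (M_obs - L)
        S = R * (np.abs(R) > zeta)                # sparse outliers
        Z = L + mask * (M_obs - S - L)            # observed entries minus S
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        L = (U[:, :r] * s[:r]) @ Vt[:r, :]        # rank-r projection
    return L, S
```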
arXiv Detail & Related papers (2020-03-24T17:28:15Z)
- Rank $2r$ iterative least squares: efficient recovery of ill-conditioned low rank matrices from few entries [4.230158563771147]
We present a new, simple and computationally efficient iterative method for low rank matrix completion.
Our algorithm, denoted R2RILS for rank $2r$ iterative least squares, has low memory requirements.
arXiv Detail & Related papers (2020-02-05T16:20:58Z)