Rank $2r$ iterative least squares: efficient recovery of ill-conditioned
low rank matrices from few entries
- URL: http://arxiv.org/abs/2002.01849v2
- Date: Wed, 28 Oct 2020 17:10:08 GMT
- Title: Rank $2r$ iterative least squares: efficient recovery of ill-conditioned
low rank matrices from few entries
- Authors: Jonathan Bauch, Boaz Nadler and Pini Zilber
- Abstract summary: We present a new, simple and computationally efficient iterative method for low rank matrix completion.
Our algorithm, denoted R2RILS for rank $2r$ iterative least squares, has low memory requirements.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a new, simple and computationally efficient iterative method for
low rank matrix completion. Our method is inspired by the class of
factorization-type iterative algorithms, but substantially differs from them in
the way the problem is cast. Precisely, given a target rank $r$, instead of
optimizing on the manifold of rank $r$ matrices, we allow our interim estimated
matrix to have a specific over-parametrized rank $2r$ structure. Our algorithm,
denoted R2RILS for rank $2r$ iterative least squares, has low memory
requirements, and at each iteration it solves a computationally cheap sparse
least-squares problem. We motivate our algorithm by its theoretical analysis
for the simplified case of a rank-1 matrix. Empirically, R2RILS is able to
recover ill-conditioned low-rank matrices from very few observations -- near
the information limit, and it is stable to additive noise.
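To make the rank $2r$ idea concrete in the simplified rank-1 case analyzed by the authors, the following is a minimal sketch (our own simplification, not the paper's exact scheme: in particular the re-normalization here is a plain SVD truncation of the interim rank-2 estimate, and the function name is ours). Given unit vectors $(u, v)$, each iteration solves a sparse least-squares problem for a companion pair $(a, b)$ so that $u b^T + a v^T$ fits the observed entries, then extracts the next $(u, v)$:

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.linalg import lsqr

def rank1_iterative_least_squares(X, mask, n_iters=25, seed=0):
    """Toy rank-1 illustration of the R2RILS idea: keep unit vectors
    (u, v), solve a cheap sparse least-squares problem for a companion
    pair (a, b), and take the next (u, v) from the over-parametrized
    rank-2 interim estimate u b^T + a v^T (here via SVD truncation,
    a simplification of the paper's update)."""
    m, n = X.shape
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(m); u /= np.linalg.norm(u)
    v = rng.standard_normal(n); v /= np.linalg.norm(v)
    rows, cols = np.nonzero(mask)          # observed entries Omega
    y = X[rows, cols]
    k = len(rows)
    for _ in range(n_iters):
        # One equation per observed entry (i, j):
        #   v_j * a_i + u_i * b_j ~= X_ij,  unknowns z = (a; b).
        data = np.concatenate([v[cols], u[rows]])
        r_idx = np.tile(np.arange(k), 2)
        c_idx = np.concatenate([rows, m + cols])
        A = coo_matrix((data, (r_idx, c_idx)), shape=(k, m + n)).tocsr()
        z = lsqr(A, y, atol=1e-14, btol=1e-14, iter_lim=5000)[0]
        a, b = z[:m], z[m:]
        # Rank-1 truncation of the rank-2 interim estimate.
        U, _, Vt = np.linalg.svd(np.outer(u, b) + np.outer(a, v),
                                 full_matrices=False)
        u, v = U[:, 0], Vt[0]
    # Least-squares rescaling of u v^T on the observed entries.
    p = u[rows] * v[cols]
    return (y @ p) / (p @ p) * np.outer(u, v)
```

Note how the memory footprint matches the abstract's claim: the iteration stores only the factors and a sparse system with two nonzeros per observed entry, never a dense $m \times n$ matrix beyond the final reconstruction.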
Related papers
- Model-free Low-Rank Reinforcement Learning via Leveraged Entry-wise Matrix Estimation [48.92318828548911]
We present LoRa-PI (Low-Rank Policy Iteration), a model-free learning algorithm alternating between policy improvement and policy evaluation steps.
LoRa-PI learns an $\varepsilon$-optimal policy using $\widetilde{O}\left(\frac{S+A}{\mathrm{poly}(1-\gamma)\varepsilon^{2}}\right)$ samples, where $S$ (resp. $A$) denotes the number of states (resp. actions) and $\gamma$ the discount factor.
arXiv Detail & Related papers (2024-10-30T20:22:17Z) - Spectral Entry-wise Matrix Estimation for Low-Rank Reinforcement
Learning [53.445068584013896]
We study matrix estimation problems arising in reinforcement learning (RL) with low-rank structure.
In low-rank bandits, the matrix to be recovered specifies the expected arm rewards, and for low-rank Markov Decision Processes (MDPs), it may for example characterize the transition kernel of the MDP.
We show that simple spectral-based matrix estimation approaches efficiently recover the singular subspaces of the matrix and exhibit nearly-minimal entry-wise error.
arXiv Detail & Related papers (2023-10-10T17:06:41Z) - Learning the Positions in CountSketch [49.57951567374372]
We consider sketching algorithms which first compress data by multiplication with a random sketch matrix, and then apply the sketch to quickly solve an optimization problem.
In this work, we propose the first learning-based algorithms that also optimize the locations of the non-zero entries.
arXiv Detail & Related papers (2023-06-11T07:28:35Z) - One-sided Matrix Completion from Two Observations Per Row [95.87811229292056]
We propose a natural algorithm that involves imputing the missing values of the matrix $X^T X$.
We evaluate our algorithm on one-sided recovery of synthetic data and low-coverage genome sequencing.
arXiv Detail & Related papers (2023-06-06T22:35:16Z) - Low Rank Matrix Completion via Robust Alternating Minimization in Nearly Linear Time [8.808780937701522]
We take a major step towards a more efficient and error-robust alternating minimization framework.
Our algorithm runs in time $\widetilde{O}(|\Omega| k)$, which is nearly linear in the time to verify the solution.
arXiv Detail & Related papers (2023-02-21T23:49:36Z) - Clustering Mixture Models in Almost-Linear Time via List-Decodable Mean
Estimation [58.24280149662003]
We study the problem of list-decodable mean estimation, where an adversary can corrupt a majority of the dataset.
We develop new algorithms for list-decodable mean estimation, achieving nearly-optimal statistical guarantees.
arXiv Detail & Related papers (2021-06-16T03:34:14Z) - Unique sparse decomposition of low rank matrices [17.037882881652617]
We find a unique decomposition of a low rank matrix $Y \in \mathbb{R}^{r \times n}$.
We prove that, up to some ambiguity, $X \in \mathbb{R}^{r \times n}$ is a sparse-wise decomposition of $Y \in \mathbb{R}^{r \times n}$.
arXiv Detail & Related papers (2021-06-14T20:05:59Z) - Hutch++: Optimal Stochastic Trace Estimation [75.45968495410048]
We introduce a new randomized algorithm, Hutch++, which computes a $(1 \pm \epsilon)$ approximation to $\mathrm{tr}(A)$ for any positive semidefinite (PSD) $A$.
We show that it significantly outperforms Hutchinson's method in experiments.
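A minimal sketch of the Hutch++ idea, under the standard three-way split of the matrix-vector query budget (our own illustration, not the paper's reference code): spend a third of the queries finding an orthonormal basis for a sketch of the top subspace, compute the trace there exactly, and run a plain Hutchinson estimate only on the deflated remainder.

```python
import numpy as np

def hutch_pp(A, num_queries=60, seed=0):
    """Hutch++-style trace estimator for a PSD matrix A, using roughly
    num_queries matrix-vector products split into thirds."""
    n = A.shape[0]
    k = max(num_queries // 3, 1)
    rng = np.random.default_rng(seed)
    S = rng.choice([-1.0, 1.0], size=(n, k))   # range-finding sketch
    G = rng.choice([-1.0, 1.0], size=(n, k))   # Hutchinson probes
    Q, _ = np.linalg.qr(A @ S)                 # orthonormal basis for range(A S)
    # Exact trace of A restricted to the captured top subspace ...
    t_top = np.trace(Q.T @ A @ Q)
    # ... plus a Hutchinson estimate on the deflated remainder,
    # whose trace equals tr(A) - tr(Q^T A Q) in expectation.
    G_perp = G - Q @ (Q.T @ G)
    t_rest = np.trace(G_perp.T @ A @ G_perp) / k
    return t_top + t_rest
```

The deflation is why this beats plain Hutchinson: the variance scales with the Frobenius norm of the remainder, which is small whenever the spectrum decays.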
arXiv Detail & Related papers (2020-10-19T16:45:37Z) - Fast algorithms for robust principal component analysis with an upper
bound on the rank [12.668565749446476]
The robust principal component analysis (RPCA) decomposes a data matrix into a low-rank part and a sparse part.
The first type of algorithm applies regularization terms on the singular values of a matrix to obtain a low-rank matrix.
The second type of algorithm represents the low-rank matrix as the product of two small matrices.
arXiv Detail & Related papers (2020-08-18T15:07:57Z) - Optimal Exact Matrix Completion Under new Parametrization [0.0]
We study the problem of exact completion for an $m \times n$ matrix of rank $r$ with the adaptive sampling method.
We propose matrix completion algorithms that exactly recover the target matrix.
arXiv Detail & Related papers (2020-02-06T18:31:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site makes no guarantees about the quality of this information and is not responsible for any consequences of its use.