Solving the Robust Matrix Completion Problem via a System of Nonlinear
Equations
- URL: http://arxiv.org/abs/2003.10992v1
- Date: Tue, 24 Mar 2020 17:28:15 GMT
- Title: Solving the Robust Matrix Completion Problem via a System of Nonlinear
Equations
- Authors: Yunfeng Cai and Ping Li
- Abstract summary: We consider the problem of robust matrix completion, which aims to recover a low-rank matrix $L_*$ and a sparse matrix $S_*$ from incomplete observations of their sum $M=L_*+S_*\in\mathbb{R}^{m\times n}$.
The algorithm is highly parallelizable and suitable for large scale problems.
Numerical simulations show that the simple method works as expected and is comparable with state-of-the-art methods.
- Score: 28.83358353043287
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of robust matrix completion, which aims to recover a
low rank matrix $L_*$ and a sparse matrix $S_*$ from incomplete observations of
their sum $M=L_*+S_*\in\mathbb{R}^{m\times n}$. Algorithmically, the robust
matrix completion problem is transformed into the problem of solving a system
of nonlinear equations, and an alternating direction method is then used to
solve the nonlinear equations. The algorithm is highly parallelizable and
suitable for large-scale problems. Theoretically, we characterize sufficient
conditions under which $L_*$ can be approximated by a low-rank approximation
of the observed $M$. Under proper assumptions, the algorithm is shown to
converge to the true solution linearly. Numerical simulations show that the
simple method works as expected and is comparable with state-of-the-art
methods.
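The recovery problem described above can be illustrated with a generic alternating scheme: estimate the low-rank part $L$ by a truncated SVD and the sparse part $S$ by hard thresholding of the residual on observed entries. This is a minimal sketch of the low-rank-plus-sparse decomposition, not the paper's own nonlinear-equation solver; the function name, threshold rule, and stopping heuristic are illustrative assumptions.

```python
import numpy as np

def robust_mc(M_obs, mask, rank, thresh, iters=50):
    """Alternately estimate low-rank L and sparse S from partial observations.

    A generic sketch (truncated SVD for L, hard thresholding for S);
    the paper's actual solver works on a system of nonlinear equations
    and is not reproduced here.
    """
    m, n = M_obs.shape
    L = np.zeros((m, n))
    S = np.zeros((m, n))
    for _ in range(iters):
        # L-step: rank-r truncated SVD, filling unobserved entries
        # with the current low-rank estimate.
        R = np.where(mask, M_obs - S, L)
        U, s, Vt = np.linalg.svd(R, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # S-step: keep only large residuals on observed entries.
        R = np.where(mask, M_obs - L, 0.0)
        S = np.where(np.abs(R) > thresh, R, 0.0)
    return L, S
```

On synthetic data with well-separated scales (sparse corruptions much larger than the threshold, incoherent low-rank part), a few iterations of this loop typically separate the two components.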
Related papers
- An approximation of the $S$ matrix for solving the Marchenko equation [0.0]
I present a new approximation of the $S$-matrix dependence on momentum $q$, formulated as a sum of a rational function and a truncated Sinc series.
This approach enables pointwise determination of the $S$ matrix with specified resolution, capturing essential features such as resonance behavior with high accuracy.
arXiv Detail & Related papers (2024-10-27T11:06:28Z) - Fine-grained Analysis and Faster Algorithms for Iteratively Solving Linear Systems [9.30306458153248]
We consider the spectral tail condition number, $\kappa_\ell$, defined as the ratio between the $\ell$th largest and the smallest singular value of the matrix representing the system.
Some of the implications of our result, and of the use of $\kappa_\ell$, include a direct improvement over a fine-grained analysis of the Conjugate Gradient method.
arXiv Detail & Related papers (2024-05-09T14:56:49Z) - Orthogonally weighted $\ell_{2,1}$ regularization for rank-aware joint
sparse recovery: algorithm and analysis [7.7001263654719985]
We propose and analyze an efficient algorithm for solving the joint sparse recovery problem using a new regularization-based method, named orthogonally weighted $\ell_{2,1}$ ($\mathit{ow}\ell_{2,1}$) regularization.
This method has applications in feature extraction, matrix column selection, and dictionary learning, and it is distinct from the commonly used $\ell_{2,1}$ regularization.
We provide a proof of the method's rank-awareness, establish the existence of solutions to the proposed optimization problem, and develop an efficient algorithm for solving it, whose convergence is analyzed.
arXiv Detail & Related papers (2023-11-21T01:52:15Z) - A quantum central path algorithm for linear optimization [5.450016817940232]
We propose a novel quantum algorithm for solving linear optimization problems by quantum-mechanical simulation of the central path.
This approach yields an algorithm for solving linear optimization problems involving $m$ constraints and $n$ variables to $\varepsilon$-optimality.
In the standard gate model (i.e., without access to quantum RAM), our algorithm can obtain highly precise solutions to LO problems using at most $\mathcal{O}\left(\sqrt{m+n}\,\textsf{nnz}(A)\,\frac{R_1}{\dots}\right)$
arXiv Detail & Related papers (2023-11-07T13:26:20Z) - One-sided Matrix Completion from Two Observations Per Row [95.87811229292056]
We propose a natural algorithm that involves imputing the missing values of the matrix $X^T X$.
We evaluate our algorithm on one-sided recovery of synthetic data and low-coverage genome sequencing.
arXiv Detail & Related papers (2023-06-06T22:35:16Z) - Quantum Algorithm for Solving a Quadratic Nonlinear System of Equations [0.22940141855172036]
The complexity of our algorithm is $O(\mathrm{polylog}(n/\epsilon))$, which provides an exponential improvement over the optimal classical algorithm in dimension $n$.
Our algorithm exponentially accelerates the solution of QNSE and has wide applications in all kinds of nonlinear problems.
arXiv Detail & Related papers (2021-12-03T00:27:16Z) - Global Convergence of Gradient Descent for Asymmetric Low-Rank Matrix
Factorization [49.090785356633695]
We study the asymmetric low-rank factorization problem, minimizing over factors $\mathbf{U}\in\mathbb{R}^{m\times d}$ and $\mathbf{V}\in\mathbb{R}^{n\times d}$.
arXiv Detail & Related papers (2021-06-27T17:25:24Z) - Non-PSD Matrix Sketching with Applications to Regression and
Optimization [56.730993511802865]
We present dimensionality reduction methods for non-PSD and "square-root" matrices.
We show how these techniques can be used for multiple downstream tasks.
arXiv Detail & Related papers (2021-06-16T04:07:48Z) - Quantum algorithms for spectral sums [50.045011844765185]
We propose new quantum algorithms for estimating spectral sums of positive semi-definite (PSD) matrices.
We show how the algorithms and techniques used in this work can be applied to three problems in spectral graph theory.
arXiv Detail & Related papers (2020-11-12T16:29:45Z) - Linear-Sample Learning of Low-Rank Distributions [56.59844655107251]
We show that learning $k\times k$, rank-$r$ matrices to normalized $L_1$ distance requires $\Omega(\frac{kr}{\epsilon^2})$ samples.
We propose an algorithm that uses $\mathcal{O}(\frac{kr}{\epsilon^2}\log^2\frac{1}{\epsilon})$ samples, a number linear in the high dimension $k$ and nearly linear in the typically low rank $r$.
arXiv Detail & Related papers (2020-09-30T19:10:32Z) - Learning nonlinear dynamical systems from a single trajectory [102.60042167341956]
We introduce algorithms for learning nonlinear dynamical systems of the form $x_{t+1}=\sigma(\Theta^\star x_t)+\varepsilon_t$.
We give an algorithm that recovers the weight matrix $\Theta^\star$ from a single trajectory with optimal sample complexity and linear running time.
arXiv Detail & Related papers (2020-04-30T10:42:48Z)
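The last recovery problem above, $x_{t+1}=\sigma(\Theta^\star x_t)+\varepsilon_t$, can be sketched with a simple moment-matching (GLMtron-style) update that ignores the derivative of the link function. This is a hedged illustration under the assumption that $\sigma$ is a known monotone link (tanh here); the function name, learning rate, and epoch count are illustrative, and the cited paper's algorithm and guarantees are more general.

```python
import numpy as np

def learn_dynamics(xs, sigma=np.tanh, lr=0.1, epochs=200):
    """Estimate Theta from one trajectory of x_{t+1} = sigma(Theta x_t) + noise.

    A minimal GLMtron-style sketch: repeatedly correct Theta by the
    correlation between prediction residuals and states, omitting sigma'
    (valid for monotone links). Not the cited paper's exact procedure.
    """
    d = xs.shape[1]
    X, Y = xs[:-1], xs[1:]                  # pairs (x_t, x_{t+1})
    Theta = np.zeros((d, d))
    for _ in range(epochs):
        resid = Y - sigma(X @ Theta.T)      # per-step prediction error
        Theta += lr * resid.T @ X / len(X)  # moment-matching update
    return Theta
```

With enough excitation from the process noise and a stable system, the update contracts toward the empirical moment-matching solution, whose distance to the true weights shrinks with trajectory length.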
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.