Fast algorithms for robust principal component analysis with an upper
bound on the rank
- URL: http://arxiv.org/abs/2008.07972v2
- Date: Thu, 3 Sep 2020 18:04:29 GMT
- Title: Fast algorithms for robust principal component analysis with an upper
bound on the rank
- Authors: Ningyu Sha and Lei Shi and Ming Yan
- Abstract summary: The robust principal component analysis (RPCA) decomposes a data matrix into a low-rank part and a sparse part.
The first type of algorithm applies regularization terms on the singular values of a matrix to obtain a low-rank matrix.
The second type of algorithm represents the low-rank matrix as the product of two small matrices.
- Score: 12.668565749446476
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The robust principal component analysis (RPCA) decomposes a data
matrix into a low-rank part and a sparse part. There are mainly two types of
algorithms for RPCA. The first type applies regularization terms to the
singular values of a matrix to obtain a low-rank matrix. However, computing
singular values can be very expensive for large matrices. The second type
represents the low-rank matrix as the product of two small matrices. These
algorithms are faster than the first type because no singular value
decomposition (SVD) is required. However, they need the rank of the low-rank
matrix, and an accurate rank estimate is necessary to obtain a reasonable
solution. In this paper, we propose algorithms that combine both types. Our
proposed algorithms require only an upper bound on the rank and perform SVD
only on small matrices. First, they are faster than the first type because
the cost of SVD on small matrices is negligible. Second, they are more robust
than the second type because an upper bound on the rank, rather than the
exact rank, is required. Furthermore, we apply the Gauss-Newton method to
increase the speed of our algorithms. Numerical experiments demonstrate the
superior performance of the proposed algorithms.
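The decomposition described in the abstract can be sketched in a few lines. The following is a generic illustration of the "low-rank plus sparse with a rank upper bound `r`" idea, not the authors' algorithm: the low-rank step here uses a truncated SVD of the full residual for clarity (the paper's point is precisely to avoid that and only decompose small matrices, with a Gauss-Newton acceleration), and the sparse step applies entry-wise soft-thresholding with a common default weight.

```python
import numpy as np

def soft_threshold(X, tau):
    """Entry-wise soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca_rank_bound(M, r, lam=None, n_iter=100):
    """Illustrative RPCA sketch: M ~ L + S with rank(L) <= r.

    Alternates between (1) projecting M - S onto matrices of rank at
    most r and (2) soft-thresholding the residual to get the sparse
    part S. A generic sketch only -- not the paper's SVD-free method.
    """
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))  # common RPCA default weight
    S = np.zeros_like(M)
    for _ in range(n_iter):
        # Low-rank step: keep at most r singular values of M - S.
        U, sig, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :r] * sig[:r]) @ Vt[:r]
        # Sparse step: soft-threshold what the low-rank part misses.
        S = soft_threshold(M - L, lam)
    return L, S
```

By construction the returned `L` never exceeds rank `r`, which is the robustness point made in the abstract: only an upper bound, not the exact rank, has to be supplied.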
Related papers
- Spectral Entry-wise Matrix Estimation for Low-Rank Reinforcement
Learning [53.445068584013896]
We study matrix estimation problems arising in reinforcement learning (RL) with low-rank structure.
In low-rank bandits, the matrix to be recovered specifies the expected arm rewards, and for low-rank Markov Decision Processes (MDPs), it may for example characterize the transition kernel of the MDP.
We show that simple spectral-based matrix estimation approaches efficiently recover the singular subspaces of the matrix and exhibit nearly-minimal entry-wise error.
arXiv Detail & Related papers (2023-10-10T17:06:41Z)
- Learning the Positions in CountSketch [49.57951567374372]
We consider sketching algorithms which first compress data by multiplication with a random sketch matrix, and then apply the sketch to quickly solve an optimization problem.
In this work, we propose the first learning-based algorithms that also optimize the locations of the non-zero entries.
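The sketch-and-solve pattern summarized above can be illustrated with a minimal CountSketch example. This is a generic sketch (random hash positions, not the learned positions proposed in the paper): each row of the data is hashed into one of `k` buckets with a random sign, and the small sketched least-squares problem is solved in place of the large one.

```python
import numpy as np

def sketched_lstsq(A, b, k, rng):
    """Approximately solve min_x ||Ax - b|| via CountSketch-and-solve.

    The k x n CountSketch matrix S is never formed: each row of A (and
    entry of b) is added, with a random sign, into one of k buckets,
    which computes S @ A in O(nnz(A)) time. The compressed problem
    min_x ||SAx - Sb|| is then solved exactly.
    """
    n = A.shape[0]
    buckets = rng.integers(0, k, size=n)          # hash position per row
    signs = rng.choice([-1.0, 1.0], size=n)       # random sign per row
    SA = np.zeros((k, A.shape[1]))
    Sb = np.zeros(k)
    np.add.at(SA, buckets, signs[:, None] * A)    # accumulate S @ A
    np.add.at(Sb, buckets, signs * b)             # accumulate S @ b
    x, *_ = np.linalg.lstsq(SA, Sb, rcond=None)
    return x
```

When `b` lies exactly in the range of `A`, the sketched problem recovers the true solution whenever `SA` has full column rank; in the noisy case, taking `k` large enough relative to the number of columns yields a provably near-optimal residual.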
arXiv Detail & Related papers (2023-06-11T07:28:35Z)
- Fundamental Machine Learning Routines as Quantum Algorithms on a Superconducting Quantum Computer [0.0]
The Harrow-Hassidim-Lloyd algorithm is intended for solving systems of linear equations on quantum devices.
We present a numerical study of the algorithm's performance when its caveats are not perfectly satisfied.
arXiv Detail & Related papers (2021-09-17T15:22:06Z)
- Sparse Factorization of Large Square Matrices [10.94053598642913]
In this paper, we propose to approximate a large square matrix with a product of sparse full-rank matrices.
In the approximation, our method needs only $N(\log N)^2$ non-zero numbers for an $N\times N$ full matrix.
We show that our method gives a better approximation when the approximated matrix is sparse and high-rank.
arXiv Detail & Related papers (2021-09-16T18:42:21Z)
- Robust 1-bit Compressive Sensing with Partial Gaussian Circulant Matrices and Generative Priors [54.936314353063494]
We provide recovery guarantees for a correlation-based optimization algorithm for robust 1-bit compressive sensing.
We make use of a practical iterative algorithm, and perform numerical experiments on image datasets to corroborate our results.
arXiv Detail & Related papers (2021-08-08T05:28:06Z)
- Provably Faster Algorithms for Bilevel Optimization [54.83583213812667]
Bilevel optimization has been widely applied in many important machine learning applications.
We propose two new algorithms for bilevel optimization.
We show that both algorithms achieve a complexity of $\mathcal{O}(\epsilon^{-1.5})$, which outperforms all existing algorithms by an order of magnitude.
arXiv Detail & Related papers (2021-06-08T21:05:30Z)
- Quantum algorithms for spectral sums [50.045011844765185]
We propose new quantum algorithms for estimating spectral sums of positive semi-definite (PSD) matrices.
We show how the algorithms and techniques used in this work can be applied to three problems in spectral graph theory.
arXiv Detail & Related papers (2020-11-12T16:29:45Z)
- Ranky: An Approach to Solve Distributed SVD on Large Sparse Matrices [0.0]
Singular Value Decomposition (SVD) is a well studied research topic in many fields and applications.
We propose Ranky, a set of methods to solve the rank problem on large and sparse matrices in a distributed manner.
arXiv Detail & Related papers (2020-09-21T11:36:28Z)
- Rank $2r$ iterative least squares: efficient recovery of ill-conditioned low rank matrices from few entries [4.230158563771147]
We present a new, simple and computationally efficient iterative method for low rank matrix completion.
Our algorithm, denoted R2RILS for rank $2r$ iterative least squares, has low memory requirements.
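As a point of reference for the summary above, the basic iterative least-squares approach to low-rank matrix completion can be sketched with generic alternating least squares (ALS). This is not R2RILS itself (which solves a lifted rank-$2r$ least-squares problem at each step); it only illustrates the low-memory setting of fitting a rank-$r$ factorization to the observed entries.

```python
import numpy as np

def als_complete(M, mask, r, n_iter=100, rng=None):
    """Generic ALS for low-rank matrix completion (illustrative only).

    Fits M ~ U @ V.T using only the entries where mask is True, by
    alternately solving a small r-variable least-squares problem for
    each row of U and each row of V. Not the R2RILS algorithm.
    """
    rng = rng or np.random.default_rng(0)
    m, n = M.shape
    U = rng.standard_normal((m, r))
    V = rng.standard_normal((n, r))
    for _ in range(n_iter):
        for i in range(m):
            obs = mask[i]                     # observed columns in row i
            U[i], *_ = np.linalg.lstsq(V[obs], M[i, obs], rcond=None)
        for j in range(n):
            obs = mask[:, j]                  # observed rows in column j
            V[j], *_ = np.linalg.lstsq(U[obs], M[obs, j], rcond=None)
    return U @ V.T
```

Each inner solve involves only an `r`-column system, so the memory footprint stays small; the price of plain ALS is sensitivity to ill-conditioning and initialization, which is exactly what R2RILS is designed to address.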
arXiv Detail & Related papers (2020-02-05T16:20:58Z)
- Fast Generalized Matrix Regression with Applications in Machine Learning [46.79740387890322]
We propose a fast generalized matrix regression algorithm (Fast GMR) which utilizes sketching technique to solve the GMR problem efficiently.
We apply the Fast GMR algorithm to the symmetric positive definite matrix approximation and single pass singular value decomposition.
arXiv Detail & Related papers (2019-12-27T07:03:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.