Robust Low-rank Matrix Completion via an Alternating Manifold Proximal
Gradient Continuation Method
- URL: http://arxiv.org/abs/2008.07740v1
- Date: Tue, 18 Aug 2020 04:46:22 GMT
- Title: Robust Low-rank Matrix Completion via an Alternating Manifold Proximal
Gradient Continuation Method
- Authors: Minhui Huang, Shiqian Ma, Lifeng Lai
- Abstract summary: Robust low-rank matrix completion (RMC) has been studied extensively for computer vision, signal processing and machine learning applications.
This problem aims to decompose a partially observed matrix into the superposition of a low-rank matrix and a sparse matrix, where the sparse matrix captures the grossly corrupted entries of the matrix.
A widely used approach to tackle RMC is to consider a convex formulation, which minimizes the nuclear norm of the low-rank matrix (to promote low-rankness) and the l1 norm of the sparse matrix (to promote sparsity).
In this paper, motivated by some recent works on low-rank matrix completion and Riemannian optimization, the problem is formulated as a nonsmooth Riemannian optimization problem over the Grassmann manifold and solved with an alternating manifold proximal gradient continuation (AManPGC) method.
- Score: 47.80060761046752
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Robust low-rank matrix completion (RMC), or robust principal component
analysis with partially observed data, has been studied extensively for
computer vision, signal processing and machine learning applications. This
problem aims to decompose a partially observed matrix into the superposition of
a low-rank matrix and a sparse matrix, where the sparse matrix captures the
grossly corrupted entries of the matrix. A widely used approach to tackle RMC
is to consider a convex formulation, which minimizes the nuclear norm of the
low-rank matrix (to promote low-rankness) and the l1 norm of the sparse matrix
(to promote sparsity). In this paper, motivated by some recent works on
low-rank matrix completion and Riemannian optimization, we formulate this
problem as a nonsmooth Riemannian optimization problem over the Grassmann manifold.
This new formulation is scalable because the low-rank matrix is factorized as the
product of two much smaller matrices. We then propose an alternating
manifold proximal gradient continuation (AManPGC) method to solve the proposed
new formulation. The convergence rate of the proposed algorithm is rigorously
analyzed. Numerical results on both synthetic data and real data on background
extraction from surveillance videos are reported to demonstrate the advantages
of the proposed new formulation and algorithm over several popular existing
approaches.
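As a rough illustration of the alternating structure the abstract describes, the sketch below alternates gradient steps on two small factors U and V (the scalable low-rank parameterization) with a soft-thresholding (l1-proximal) step on the sparse component, applied only to the observed entries. The function names, step size, penalty weight and iteration count are illustrative assumptions; this is not the authors' AManPGC algorithm, which additionally constrains the column space to the Grassmann manifold and uses a continuation strategy.

```python
# Minimal sketch (hypothetical, not the paper's AManPGC):
#   min_{U, V, S}  0.5 * || P_Omega(U @ V + S - M) ||_F^2 + lam * ||S||_1
# where U (n1 x r) and V (r x n2) parameterize the low-rank part and S is sparse.
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||.||_1 (entrywise soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def robust_mc_factorized(M, mask, rank, lam=0.1, step=0.5, iters=200):
    """M: observed matrix (unobserved entries arbitrary), mask: 0/1 array of observed entries."""
    n1, n2 = M.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((n1, rank)) / np.sqrt(rank)
    V = rng.standard_normal((rank, n2)) / np.sqrt(rank)
    S = np.zeros_like(M, dtype=float)
    for _ in range(iters):
        # residual on the observed entries only
        R = mask * (U @ V + S - M)
        # gradient steps on the two small factors
        gU, gV = R @ V.T, U.T @ R
        U -= step * gU
        V -= step * gV
        # proximal (soft-thresholding) step on the sparse component
        R = mask * (U @ V + S - M)
        S = soft_threshold(S - step * R, step * lam)
    return U @ V, S
```

The fixed step size is only for illustration; the paper's method instead performs manifold proximal gradient steps on the factorized model and gradually adjusts the problem via continuation, with a rigorously analyzed convergence rate.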
Related papers
- Spectral Entry-wise Matrix Estimation for Low-Rank Reinforcement
Learning [53.445068584013896]
We study matrix estimation problems arising in reinforcement learning (RL) with low-rank structure.
In low-rank bandits, the matrix to be recovered specifies the expected arm rewards, and for low-rank Markov Decision Processes (MDPs), it may for example characterize the transition kernel of the MDP.
We show that simple spectral-based matrix estimation approaches efficiently recover the singular subspaces of the matrix and exhibit nearly-minimal entry-wise error.
arXiv Detail & Related papers (2023-10-10T17:06:41Z) - Deep Unrolling for Nonconvex Robust Principal Component Analysis [75.32013242448151]
We design algorithms for Robust Principal Component Analysis (RPCA).
It consists in decomposing a matrix into the sum of a low-rank matrix and a sparse matrix.
arXiv Detail & Related papers (2023-07-12T03:48:26Z) - Optimal Low-Rank Matrix Completion: Semidefinite Relaxations and
Eigenvector Disjunctions [6.537257913467247]
Low-rank matrix completion seeks a matrix of minimal complexity that recovers a given set of observations as accurately as possible.
The new convex relaxations decrease the optimality gap by orders of magnitude compared to existing methods.
arXiv Detail & Related papers (2023-05-20T22:04:34Z) - Semi-Supervised Subspace Clustering via Tensor Low-Rank Representation [64.49871502193477]
We propose a novel semi-supervised subspace clustering method, which is able to simultaneously augment the initial supervisory information and construct a discriminative affinity matrix.
Comprehensive experimental results on six commonly-used benchmark datasets demonstrate the superiority of our method over state-of-the-art methods.
arXiv Detail & Related papers (2022-05-21T01:47:17Z) - Nonlinear matrix recovery using optimization on the Grassmann manifold [18.655422834567577]
We investigate the problem of recovering a partially observed high-rank matrix whose columns obey a nonlinear structure such as a union of subspaces.
We show that the alternating minimization scheme converges to a unique limit point using the Kurdyka-Łojasiewicz property.
arXiv Detail & Related papers (2021-09-13T16:13:13Z) - Solving weakly supervised regression problem using low-rank manifold
regularization [77.34726150561087]
We solve a weakly supervised regression problem.
Under "weakly" we understand that for some training points the labels are known, for some unknown, and for others uncertain due to the presence of random noise or other reasons such as lack of resources.
In the numerical section, we applied the suggested method to artificial and real datasets using Monte-Carlo modeling.
arXiv Detail & Related papers (2021-04-13T23:21:01Z) - Fast Rank Reduction for Non-negative Matrices via Mean Field Theory [5.634825161148483]
We formulate rank reduction as a mean-field approximation by modeling matrices via a log-linear model on a structured sample space.
We empirically show that our rank reduction method is faster than NMF and its popular variant, lraNMF, while achieving competitive low rank approximation error on synthetic and real-world datasets.
arXiv Detail & Related papers (2020-06-09T14:55:47Z) - Accelerating Ill-Conditioned Low-Rank Matrix Estimation via Scaled
Gradient Descent [34.0533596121548]
Low-rank matrix estimation is a canonical problem that finds numerous applications in signal processing, machine learning and imaging science.
We show that ScaledGD converges linearly at a rate independent of the condition number of the low-rank matrix.
Our analysis is also applicable to general loss functions that are restricted strongly convex and smooth over low-rank matrices.
arXiv Detail & Related papers (2020-05-18T17:17:16Z) - Optimal Iterative Sketching with the Subsampled Randomized Hadamard
Transform [64.90148466525754]
We study the performance of iterative sketching for least-squares problems.
We show that the convergence rates for Haar and randomized Hadamard matrices are identical, and asymptotically improve upon random projections.
These techniques may be applied to other algorithms that employ randomized dimension reduction.
arXiv Detail & Related papers (2020-02-03T16:17:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.