Column $\ell_{2,0}$-norm regularized factorization model of low-rank
matrix recovery and its computation
- URL: http://arxiv.org/abs/2008.10466v3
- Date: Sun, 26 Dec 2021 15:07:10 GMT
- Title: Column $\ell_{2,0}$-norm regularized factorization model of low-rank
matrix recovery and its computation
- Authors: Ting Tao, Yitian Qian and Shaohua Pan
- Abstract summary: This paper is concerned with the column $\ell_{2,0}$-regularized factorization model of low-rank matrix recovery problems and its computation.
Numerical experiments are conducted with synthetic and real data examples.
- Score: 0.9281671380673306
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper is concerned with the column $\ell_{2,0}$-regularized
factorization model of low-rank matrix recovery problems and its computation.
The column $\ell_{2,0}$-norm of the factor matrices is introduced to promote
column sparsity of the factors and thereby low-rank solutions. For this nonconvex, discontinuous
optimization problem, we develop an alternating majorization-minimization (AMM)
method with extrapolation, and a hybrid AMM in which a majorized alternating
proximal method is first used to seek an initial factor pair with fewer nonzero
columns, and the AMM with extrapolation is then employed to minimize a smooth
nonconvex loss. We provide a global convergence analysis for the proposed AMM
methods and apply them to the matrix completion problem with non-uniform
sampling schemes. Numerical experiments are conducted with synthetic and real
data examples, and comparison results with the nuclear-norm regularized
factorization model and the max-norm regularized convex model show that the
column $\ell_{2,0}$-regularized factorization model has an advantage in
offering solutions of lower error and rank in less time.
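For concreteness, one plausible reading of the model (with the smooth loss taken as the squared error on the observed index set $\Omega$; the paper's exact loss and scaling may differ) is
$$\min_{U\in\mathbb{R}^{n\times\kappa},\,V\in\mathbb{R}^{m\times\kappa}}\ \tfrac{1}{2}\big\|P_\Omega(UV^{\top}-M)\big\|_F^2+\lambda\big(\|U\|_{2,0}+\|V\|_{2,0}\big),\qquad \|U\|_{2,0}:=\#\{j:\|U_{:,j}\|_2\neq 0\}.$$
The sketch below implements this reading of AMM with extrapolation in Python; the key observation is that the proximal map of $\lambda\|\cdot\|_{2,0}$ separates over columns and reduces to hard thresholding of column norms. All names, step sizes, and the momentum rule are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def prox_col_l20(G, lam, L):
    """Prox of lam*||.||_{2,0} against the quadratic (L/2)||U - G||_F^2:
    zero every column whose l2 norm is at most sqrt(2*lam/L)."""
    U = G.copy()
    U[:, np.linalg.norm(G, axis=0) <= np.sqrt(2.0 * lam / L)] = 0.0
    return U

def amm_extrapolated(M, mask, kappa, lam=1.0, iters=200, beta=0.7, seed=0):
    """Alternating majorization-minimization with extrapolation (sketch)."""
    rng = np.random.default_rng(seed)
    n, m = M.shape
    U, V = rng.standard_normal((n, kappa)), rng.standard_normal((m, kappa))
    U_old, V_old = U.copy(), V.copy()
    for _ in range(iters):
        # U-block: gradient step at the extrapolated point, then column thresholding
        Ue = U + beta * (U - U_old)
        R = mask * (Ue @ V.T - M)                    # residual on observed entries
        L_U = max(np.linalg.norm(V.T @ V, 2), 1e-8)  # majorization (Lipschitz) constant
        U_old, U = U, prox_col_l20(Ue - (R @ V) / L_U, lam, L_U)
        # V-block: symmetric update
        Ve = V + beta * (V - V_old)
        R = mask * (U @ Ve.T - M)
        L_V = max(np.linalg.norm(U.T @ U, 2), 1e-8)
        V_old, V = V, prox_col_l20(Ve - (R.T @ U) / L_V, lam, L_V)
    return U, V
```

Columns whose norms fall below the threshold are zeroed, so the number of nonzero columns in the returned factors gives the estimated rank; this is how the column $\ell_{2,0}$ regularizer promotes low-rank solutions without fixing the rank in advance. Usage would look like `U, V = amm_extrapolated(M, mask, kappa=20, lam=0.5)` on a partially observed `M` with a 0/1 `mask`.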
Related papers
- Entropic covariance models [0.7614628596146602]
We present a general framework for linear restrictions on different transformations of the covariance matrix.
Our proposed estimation method solves a convex problem and yields an $M$-estimator.
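As a hedged illustration of the kind of restriction meant here (the matrix logarithm is one example transformation; the paper's framework covers others, and the basis below is an assumption): restrict a transformation of the covariance to a linear subspace,
$$\log \Sigma \in \mathrm{span}\{A_1,\dots,A_q\} \quad\Longleftrightarrow\quad \Sigma = \exp\Big(\sum_{i=1}^{q}\theta_i A_i\Big),$$
with symmetric basis matrices $A_i$, and estimate the coordinates $\theta$ by solving a convex problem.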
arXiv Detail & Related papers (2023-06-06T11:25:05Z) - Learning Graphical Factor Models with Riemannian Optimization [70.13748170371889]
This paper proposes a flexible algorithmic framework for graph learning under low-rank structural constraints.
The problem is expressed as penalized maximum likelihood estimation of an elliptical distribution.
We leverage geometries of positive definite matrices and positive semi-definite matrices of fixed rank that are well suited to elliptical models.
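In the Gaussian special case (Gaussian laws are elliptical; the penalty $h$ below is a placeholder, not the paper's specific choice), the penalized maximum-likelihood problem over the precision matrix $\Theta$ takes the familiar form
$$\min_{\Theta \succ 0}\ -\log\det\Theta + \mathrm{tr}(S\Theta) + \lambda\, h(\Theta),$$
where $S$ is the sample covariance; the paper optimizes such objectives over manifolds of positive definite and fixed-rank positive semi-definite matrices rather than with generic convex solvers.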
arXiv Detail & Related papers (2022-10-21T13:19:45Z) - A Novel Maximum-Entropy-Driven Technique for Low-Rank Orthogonal
Nonnegative Matrix Factorization with $\ell_0$-Norm sparsity Constraint [0.0]
In data-driven control and machine learning, a common requirement involves breaking down large matrices into smaller, low-rank factors.
This paper introduces an innovative solution to the orthogonal nonnegative matrix factorization (ONMF) problem.
The proposed method achieves comparable or improved reconstruction errors in line with the literature.
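For reference, the standard ONMF problem this entry builds on can be stated as
$$\min_{W\ge 0,\,H\ge 0}\ \|X - WH\|_F^2 \quad \text{s.t.}\quad HH^{\top} = I,$$
i.e., a nonnegative factorization with a (row-)orthogonal factor, which forces near-disjoint supports and acts as a soft clustering of the columns of $X$; the paper additionally imposes an $\ell_0$-norm sparsity constraint, whose exact placement the summary does not specify.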
arXiv Detail & Related papers (2022-10-06T04:30:59Z) - Log-based Sparse Nonnegative Matrix Factorization for Data
Representation [55.72494900138061]
Nonnegative matrix factorization (NMF) has been widely studied in recent years due to its effectiveness in representing nonnegative data with parts-based representations.
We propose a new NMF method with log-norm imposed on the factor matrices to enhance the sparseness.
A novel column-wise sparse norm, the $\ell_{2,\log}$ (pseudo) norm, is proposed to enhance the robustness of the method.
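The abstract does not define the $\ell_{2,\log}$ (pseudo) norm; a plausible log-based, column-wise form consistent with the description (an assumption, not necessarily the paper's exact definition) is
$$\|W\|_{2,\log} = \sum_{j}\log\big(1 + \|W_{:,j}\|_2\big),$$
which grows slowly in each column norm, penalizing large outlier columns less aggressively than $\ell_{2,1}$ while still promoting column-wise sparsity.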
arXiv Detail & Related papers (2022-04-22T11:38:10Z) - Solving weakly supervised regression problem using low-rank manifold
regularization [77.34726150561087]
We solve a weakly supervised regression problem.
Under "weakly" we understand that for some training points the labels are known, for some unknown, and for others uncertain due to the presence of random noise or other reasons such as lack of resources.
In the numerical section, we applied the suggested method to artificial and real datasets using Monte-Carlo modeling.
arXiv Detail & Related papers (2021-04-13T23:21:01Z) - Entropy Minimizing Matrix Factorization [102.26446204624885]
Nonnegative Matrix Factorization (NMF) is a widely-used data analysis technique, and has yielded impressive results in many real-world tasks.
In this study, an Entropy Minimizing Matrix Factorization framework (EMMF) is developed to tackle the above problem.
Considering that the outliers are usually much less than the normal samples, a new entropy loss function is established for matrix factorization.
arXiv Detail & Related papers (2021-03-24T21:08:43Z) - Robust Low-rank Matrix Completion via an Alternating Manifold Proximal
Gradient Continuation Method [47.80060761046752]
Robust low-rank matrix completion (RMC) has been studied extensively for computer vision, signal processing and machine learning applications.
This problem aims to decompose a partially observed matrix into the superposition of a low-rank matrix and a sparse matrix, where the sparse matrix captures the grossly corrupted entries of the matrix.
A widely used approach to tackle RMC is to consider a convex formulation, which minimizes the nuclear norm of the low-rank matrix (to promote low-rankness) and the l1 norm of the sparse matrix (to promote sparsity).
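Concretely, the convex formulation described above reads
$$\min_{L,\,S}\ \|L\|_* + \lambda\|S\|_1 \quad\text{s.t.}\quad P_\Omega(L+S) = P_\Omega(M),$$
where $P_\Omega$ keeps the observed entries of $M$, the nuclear norm $\|L\|_*$ (the sum of singular values) promotes low rank, and the $\ell_1$ norm promotes entrywise sparsity of the corruption matrix $S$; the exact constraint-versus-penalty form varies across works.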
In this paper, motivated by some recent works on low-...
arXiv Detail & Related papers (2020-08-18T04:46:22Z) - Understanding Implicit Regularization in Over-Parameterized Single Index
Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z) - Fast Rank Reduction for Non-negative Matrices via Mean Field Theory [5.634825161148483]
We formulate rank reduction as a mean-field approximation by modeling matrices via a log-linear model on structured sample space.
We empirically show that our rank reduction method is faster than NMF and its popular variant, lraNMF, while achieving competitive low rank approximation error on synthetic and real-world datasets.
arXiv Detail & Related papers (2020-06-09T14:55:47Z) - Efficient MCMC Sampling for Bayesian Matrix Factorization by Breaking
Posterior Symmetries [1.3858051019755282]
We propose a simple modification to the prior choice that provably breaks these symmetries and maintains/improves accuracy.
We show that using non-zero linearly independent prior means significantly lowers the autocorrelation of MCMC samples, and can also lead to lower reconstruction errors.
arXiv Detail & Related papers (2020-06-08T00:25:48Z)