Fast Rank-1 NMF for Missing Data with KL Divergence
- URL: http://arxiv.org/abs/2110.12595v1
- Date: Mon, 25 Oct 2021 02:05:35 GMT
- Title: Fast Rank-1 NMF for Missing Data with KL Divergence
- Authors: Kazu Ghalamkari, Mahito Sugiyama
- Abstract summary: A1GM minimizes the KL divergence from an input matrix to the reconstructed rank-1 matrix.
We show that A1GM is more efficient than a gradient-based method while achieving competitive reconstruction errors.
- Score: 8.020742121274417
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a fast non-gradient based method of rank-1 non-negative matrix
factorization (NMF) for missing data, called A1GM, that minimizes the KL
divergence from an input matrix to the reconstructed rank-1 matrix. Our method
is based on our new finding of an analytical closed-form solution for the best rank-1
non-negative multiple matrix factorization (NMMF), a variant of NMF. NMMF is
known to exactly solve NMF for missing data if the positions of the missing values
satisfy a certain condition, and A1GM transforms a given matrix so that the
analytical solution to NMMF can be applied. We empirically show that A1GM is
more efficient than a gradient-based method while achieving competitive reconstruction errors.
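For intuition: for a fully observed non-negative matrix, the best rank-1 approximation under the (generalized) KL divergence has a well-known closed form, namely the outer product of the row sums and column sums divided by the grand total. The NumPy sketch below illustrates only this classical baseline; A1GM's actual contribution, the closed-form best rank-1 NMMF and the transformation that handles missing entries, is not reproduced here.

```python
import numpy as np

def best_rank1_kl(X):
    """Closed-form best rank-1 approximation of a fully observed
    non-negative matrix under generalized KL divergence: the outer
    product of row sums and column sums, divided by the grand total."""
    return np.outer(X.sum(axis=1), X.sum(axis=0)) / X.sum()

def gen_kl(X, Y):
    """Generalized KL divergence D(X || Y) = sum(X*log(X/Y) - X + Y),
    with the convention 0*log(0) = 0."""
    mask = X > 0
    return (X[mask] * np.log(X[mask] / Y[mask])).sum() - X.sum() + Y.sum()

rng = np.random.default_rng(0)
X = rng.random((50, 40))
W = best_rank1_kl(X)
print(gen_kl(X, W))  # KL error of the closed-form rank-1 fit
```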
Related papers
- Sum-of-norms regularized Nonnegative Matrix Factorization [1.5484595752241124]
In this work, we propose an approximation method to estimate the rank while solving nonnegative matrix factorization (NMF).
We use sum-of-norms (SON), a group-lasso structure that encourages pairwise similarity, to reduce the rank of a factor matrix when the rank is overestimated.
SON-NMF can automatically estimate the rank from data, handle rank-deficient data matrices, and detect weak components with small energy.
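As context for the group-lasso structure mentioned above, a sum-of-norms penalty over the columns of a factor matrix can be sketched as follows; the exact objective and any pairwise weighting used in SON-NMF may differ, so treat this as an illustrative assumption.

```python
import numpy as np

def son_penalty(W):
    """Sum-of-norms (group-lasso) penalty over all pairs of columns of W:
    sum_{i<j} ||W[:, i] - W[:, j]||_2. When the penalty drives column
    pairs to coincide, the number of distinct columns -- the effective
    rank of the factor -- drops."""
    r = W.shape[1]
    return sum(np.linalg.norm(W[:, i] - W[:, j])
               for i in range(r) for j in range(i + 1, r))

rng = np.random.default_rng(0)
W = rng.random((30, 8))
print(son_penalty(W))
```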
arXiv Detail & Related papers (2024-06-30T14:16:27Z)
- Large-scale gradient-based training of Mixtures of Factor Analyzers [67.21722742907981]
This article contributes both a theoretical analysis as well as a new method for efficient high-dimensional training by gradient descent.
We prove that MFA training and inference/sampling can be performed based on precision matrices, which does not require matrix inversions after training is completed.
Beyond the theoretical analysis, we apply MFA to typical image datasets such as SVHN and MNIST, and demonstrate its ability to perform sample generation and outlier detection.
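The efficiency of a precision-matrix parameterization comes from the Woodbury identity: for a factor-analyzer covariance $\Sigma = \Lambda\Lambda^\top + D$ with diagonal $D$, the precision can be computed by solving only a small system in the number of factors. Below is a minimal NumPy sketch of this standard identity, not the authors' training code.

```python
import numpy as np

def mfa_precision(Lambda, d):
    """Precision matrix of an MFA component with covariance
    Sigma = Lambda @ Lambda.T + diag(d), computed via the Woodbury
    identity so only an l x l system is solved (l = number of factors)."""
    Dinv = 1.0 / d                                  # diagonal inverse
    A = Lambda * Dinv[:, None]                      # D^{-1} Lambda
    core = np.eye(Lambda.shape[1]) + Lambda.T @ A   # I + Lambda^T D^{-1} Lambda
    return np.diag(Dinv) - A @ np.linalg.solve(core, A.T)

rng = np.random.default_rng(0)
Lambda = rng.normal(size=(8, 2))
d = rng.random(8) + 0.5
P = mfa_precision(Lambda, d)
Sigma = Lambda @ Lambda.T + np.diag(d)
print(np.allclose(P @ Sigma, np.eye(8)))  # True: P is Sigma^{-1}
```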
arXiv Detail & Related papers (2023-08-26T06:12:33Z)
- Unitary Approximate Message Passing for Matrix Factorization [90.84906091118084]
We consider matrix factorization (MF) with certain constraints, which finds wide applications in various areas.
We develop a Bayesian approach to MF with an efficient message passing implementation, called UAMPMF.
We show that UAMPMF significantly outperforms state-of-the-art algorithms in terms of recovery accuracy, robustness and computational complexity.
arXiv Detail & Related papers (2022-07-31T12:09:32Z)
- Log-based Sparse Nonnegative Matrix Factorization for Data Representation [55.72494900138061]
Nonnegative matrix factorization (NMF) has been widely studied in recent years due to its effectiveness in representing nonnegative data with parts-based representations.
We propose a new NMF method with log-norm imposed on the factor matrices to enhance the sparseness.
A novel column-wise sparse norm, the $\ell_{2,\log}$-(pseudo) norm, is proposed to enhance the robustness of the proposed method.
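The summary does not spell out the $\ell_{2,\log}$ definition, so the following is a hedged sketch of one common log-based column-wise sparsity measure, $\sum_j \log(1 + \|X_{:,j}\|_2)$; the paper's exact (pseudo) norm may differ in offset or scaling.

```python
import numpy as np

def l2_log(X):
    """An illustrative log-based column-wise sparsity measure:
    sum_j log(1 + ||X[:, j]||_2). The slow growth of the log rewards
    concentrating energy in few columns, promoting column-wise sparsity."""
    return np.log1p(np.linalg.norm(X, axis=0)).sum()
```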
arXiv Detail & Related papers (2022-04-22T11:38:10Z)
- Co-Separable Nonnegative Matrix Factorization [20.550794776914508]
Nonnegative matrix factorization (NMF) is a popular model in the field of pattern recognition.
We refer to this NMF as a Co-Separable NMF (CoS-NMF).
An optimization model for CoS-NMF is proposed, and an alternating fast gradient method is employed to solve it.
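On my reading of co-separability, the core factor of the 3-factor model is a submatrix of the input itself, so reconstruction from selected rows and columns might look like the sketch below; the non-negativity and other constraints the paper places on the mixing factors `P1` and `P2` (illustrative names, not the paper's notation) are not reproduced.

```python
import numpy as np

def cos_nmf_reconstruct(M, rows, cols, P1, P2):
    """Co-separable reconstruction: the core factor is a submatrix of
    the input itself, M[rows, cols], mixed by non-negative factors:
        M_hat = P1 @ M[rows][:, cols] @ P2
    with P1 of shape (m, len(rows)) and P2 of shape (len(cols), n)."""
    return P1 @ M[np.ix_(rows, cols)] @ P2
```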
arXiv Detail & Related papers (2021-09-02T07:05:04Z)
- Entropy Minimizing Matrix Factorization [102.26446204624885]
Nonnegative Matrix Factorization (NMF) is a widely-used data analysis technique, and has yielded impressive results in many real-world tasks.
In this study, an Entropy Minimizing Matrix Factorization framework (EMMF) is developed to handle outliers in the data.
Considering that outliers are usually far fewer than the normal samples, a new entropy loss function is established for matrix factorization.
arXiv Detail & Related papers (2021-03-24T21:08:43Z)
- Self-supervised Symmetric Nonnegative Matrix Factorization [82.59905231819685]
Symmetric nonnegative matrix factorization (SNMF) has been demonstrated to be a powerful method for data clustering.
Inspired by ensemble clustering, which seeks better clustering results, we propose self-supervised SNMF (S$^3$NMF).
We take advantage of SNMF's sensitivity to initialization, without relying on any additional information.
arXiv Detail & Related papers (2021-03-02T12:47:40Z)
- Algorithms for Nonnegative Matrix Factorization with the Kullback-Leibler Divergence [20.671178429005973]
Kullback-Leibler (KL) divergence is one of the most widely used objective functions for nonnegative matrix factorization (NMF).
We propose three new algorithms that guarantee that the objective function does not increase.
We conduct extensive numerical experiments to provide a comprehensive picture of the performances of the KL NMF algorithms.
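For reference, the classical Lee-Seung multiplicative updates for KL NMF, the baseline such algorithms improve on, also never increase the objective; a compact NumPy version is sketched below. The paper's three new algorithms are not reproduced here.

```python
import numpy as np

def kl_nmf_mu(V, r, iters=200, eps=1e-12, seed=0):
    """Classical Lee-Seung multiplicative updates for NMF under the
    generalized KL divergence; each update is guaranteed not to
    increase the objective."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(iters):
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (H.sum(axis=1) + eps)   # row sums of H
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.sum(axis=0)[:, None] + eps)
    return W, H

V = np.abs(np.random.default_rng(1).normal(size=(40, 30)))
W, H = kl_nmf_mu(V, r=5)
```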
arXiv Detail & Related papers (2020-10-05T11:51:39Z)
- Robust Low-rank Matrix Completion via an Alternating Manifold Proximal Gradient Continuation Method [47.80060761046752]
Robust low-rank matrix completion (RMC) has been studied extensively for computer vision, signal processing and machine learning applications.
This problem aims to decompose a partially observed matrix into the superposition of a low-rank matrix and a sparse matrix, where the sparse matrix captures the grossly corrupted entries of the matrix.
A widely used approach to tackle RMC is to consider a convex formulation, which minimizes the nuclear norm of the low-rank matrix (to promote low-rankness) and the $\ell_1$ norm of the sparse matrix (to promote sparsity).
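The two building blocks of that convex formulation are standard proximal operators: singular value thresholding for the nuclear norm and entrywise soft thresholding for the $\ell_1$ norm. A minimal sketch of both follows; the paper's own method is a nonconvex manifold proximal gradient scheme, not this convex baseline.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of tau * ||.||_*
    (promotes low rank)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft_threshold(M, tau):
    """Entrywise soft thresholding: proximal operator of tau * ||.||_1
    (promotes sparsity in the outlier component)."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)
```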
arXiv Detail & Related papers (2020-08-18T04:46:22Z)
- Sparse Separable Nonnegative Matrix Factorization [22.679160149512377]
We propose a new variant of nonnegative matrix factorization (NMF) that combines separability and sparsity assumptions.
Separability requires that the columns of the first NMF factor are equal to columns of the input matrix, while sparsity requires that the columns of the second NMF factor are sparse.
We prove that, in noiseless settings and under mild assumptions, our algorithm recovers the true underlying sources.
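Under the separability assumption, the first factor is a column subset of the input itself, so reconstruction reduces to the sketch below, with the sparsity constraint on the second factor left to the optimizer; the index set `cols` and factor `H` are illustrative names, not the paper's notation.

```python
import numpy as np

def separable_reconstruct(X, cols, H):
    """Separable NMF: the first factor W is a column subset of the input,
    so X is approximated by X[:, cols] @ H; sparse separable NMF further
    requires the columns of H to be sparse."""
    return X[:, cols] @ H
```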
arXiv Detail & Related papers (2020-06-13T03:52:29Z)
- Fast Rank Reduction for Non-negative Matrices via Mean Field Theory [5.634825161148483]
We formulate rank reduction as a mean-field approximation by modeling matrices via a log-linear model on a structured sample space.
We empirically show that our rank reduction method is faster than NMF and its popular variant, lraNMF, while achieving competitive low rank approximation error on synthetic and real-world datasets.
arXiv Detail & Related papers (2020-06-09T14:55:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.