Log-based Sparse Nonnegative Matrix Factorization for Data
Representation
- URL: http://arxiv.org/abs/2204.10647v1
- Date: Fri, 22 Apr 2022 11:38:10 GMT
- Title: Log-based Sparse Nonnegative Matrix Factorization for Data
Representation
- Authors: Chong Peng, Yiqun Zhang, Yongyong Chen, Zhao Kang, Chenglizhao Chen,
Qiang Cheng
- Abstract summary: Nonnegative matrix factorization (NMF) has been widely studied in recent years due to its effectiveness in representing nonnegative data with parts-based representations.
We propose a new NMF method with log-norm imposed on the factor matrices to enhance the sparseness.
A novel column-wise sparse norm, named the $\ell_{2,\log}$-(pseudo) norm, is proposed to enhance the robustness of the proposed method.
- Score: 55.72494900138061
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nonnegative matrix factorization (NMF) has been widely studied in recent
years due to its effectiveness in representing nonnegative data with
parts-based representations. For NMF, a sparser solution implies a better
parts-based representation. However, current NMF methods do not always generate
sparse solutions. In this paper, we propose a new NMF method with a log-norm
imposed on the factor matrices to enhance the sparseness. Moreover, we propose a
novel column-wise sparse norm, named the $\ell_{2,\log}$-(pseudo) norm, to enhance
the robustness of the proposed method. The $\ell_{2,\log}$-(pseudo) norm is
invariant, continuous, and differentiable. For the $\ell_{2,\log}$-regularized
shrinkage problem, we derive a closed-form solution, which can be used for
other general problems. Efficient multiplicative updating rules are developed
for the optimization, which theoretically guarantee the convergence of the
objective value sequence. Extensive experimental results confirm the
effectiveness of the proposed method, as well as the enhanced sparseness and
robustness.
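To make the optimization concrete, here is a minimal sketch of the classic multiplicative updating rules for plain NMF (the Lee-Seung updates for the Frobenius objective). This is an illustrative baseline only: the paper's method adds log-norm regularization on the factors and derives modified update rules, which are not reproduced here; the function name and parameters are my own.

```python
import numpy as np

def nmf_multiplicative(X, r, n_iter=200, eps=1e-10, seed=0):
    """Classic Lee-Seung multiplicative updates for min ||X - WH||_F^2
    with W, H >= 0. Baseline sketch; the log-regularized rules from the
    paper would add extra terms to the numerators/denominators.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        # Elementwise multiplicative updates: ratios of nonnegative
        # quantities, so W and H stay nonnegative throughout.
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Because the updates are multiplicative, a factor entry initialized at zero stays at zero, which is one mechanism by which NMF iterates can become (and remain) sparse.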
Related papers
- A Fresh Look at Generalized Category Discovery through Non-negative Matrix Factorization [83.12938977698988]
Generalized Category Discovery (GCD) aims to classify both base and novel images using labeled base data.
Current approaches inadequately address the intrinsic optimization of the co-occurrence matrix $\bar{A}$ based on cosine similarity.
We propose a Non-Negative Generalized Category Discovery (NN-GCD) framework to address these deficiencies.
arXiv Detail & Related papers (2024-10-29T07:24:11Z) - A Novel Maximum-Entropy-Driven Technique for Low-Rank Orthogonal
Nonnegative Matrix Factorization with $\ell_0$-Norm Sparsity Constraint [0.0]
In data-driven control and machine learning, a common requirement involves breaking down large matrices into smaller, low-rank factors.
This paper introduces an innovative solution to the orthogonal nonnegative matrix factorization (ONMF) problem.
The proposed method achieves comparable or improved reconstruction errors in line with the literature.
arXiv Detail & Related papers (2022-10-06T04:30:59Z) - Gradient-Free Methods for Deterministic and Stochastic Nonsmooth
Nonconvex Optimization [94.19177623349947]
Nonsmooth nonconvex optimization problems emerge in machine learning and business decision making.
Two core challenges impede the development of efficient methods with finite-time convergence guarantees.
Two-phase versions of GFM and SGFM are also proposed and proven to achieve improved large-deviation results.
arXiv Detail & Related papers (2022-09-12T06:53:24Z) - Adaptive Weighted Nonnegative Matrix Factorization for Robust Feature
Representation [9.844796520630522]
Nonnegative matrix factorization (NMF) has been widely used for dimensionality reduction in machine learning.
Traditional NMF does not properly handle outliers, making it sensitive to noise.
This paper proposes an adaptive weighted NMF, which introduces weights to emphasize the different importance of each data point.
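The idea of per-sample weights can be written as a diagonally weighted NMF objective, $\min_{W,H \ge 0} \sum_j s_j \, \lVert x_j - W h_j \rVert_2^2$, where a small weight $s_j$ down-weights a likely outlier column. A minimal sketch of that loss follows; the function name is hypothetical, and the adaptive scheme by which the paper updates the weights is not reproduced here.

```python
import numpy as np

def weighted_nmf_loss(X, W, H, s):
    """Diagonally weighted NMF objective: sum_j s[j] * ||x_j - W h_j||^2.

    X: (m, n) data, columns are samples; W: (m, r); H: (r, n);
    s: (n,) nonnegative per-sample weights (smaller = more outlier-like).
    """
    resid = X - W @ H                      # (m, n) residual matrix
    per_sample = (resid ** 2).sum(axis=0)  # squared error per column
    return float(per_sample @ s)
```

With uniform weights ($s_j = 1$ for all $j$) this reduces to the standard Frobenius NMF loss.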
arXiv Detail & Related papers (2022-06-07T05:27:08Z) - Entropy Minimizing Matrix Factorization [102.26446204624885]
Nonnegative Matrix Factorization (NMF) is a widely-used data analysis technique, and has yielded impressive results in many real-world tasks.
In this study, an Entropy Minimizing Matrix Factorization framework (EMMF) is developed to tackle the above problem.
Considering that the outliers are usually much less than the normal samples, a new entropy loss function is established for matrix factorization.
arXiv Detail & Related papers (2021-03-24T21:08:43Z) - Self-supervised Symmetric Nonnegative Matrix Factorization [82.59905231819685]
Symmetric nonnegative matrix factorization (SNMF) has been demonstrated to be a powerful method for data clustering.
Inspired by ensemble clustering, which aims to seek better clustering results, we propose self-supervised SNMF (S$^3$NMF).
We take advantage of SNMF's sensitivity to initialization, without relying on any additional information.
arXiv Detail & Related papers (2021-03-02T12:47:40Z) - Sparse Separable Nonnegative Matrix Factorization [22.679160149512377]
We propose a new variant of nonnegative matrix factorization (NMF).
Separability requires that the columns of the first NMF factor are equal to columns of the input matrix, while sparsity requires that the columns of the second NMF factor are sparse.
We prove that, in noiseless settings and under mild assumptions, our algorithm recovers the true underlying sources.
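The separability and sparsity conditions described above can be illustrated with a small synthetic factorization; this toy construction (all names and dimensions are my own, and it is not the paper's recovery algorithm) builds an $X = WH$ where $W$'s columns literally appear among $X$'s columns and $H$ is column-sparse.

```python
import numpy as np

# Toy separable + sparse NMF instance:
# X = W H, where the first r columns of X equal W (separability),
# and every column of H has a single nonzero entry (sparsity).
rng = np.random.default_rng(0)
m, r, n = 6, 3, 10
W = rng.random((m, r))
H = np.zeros((r, n))
H[:, :r] = np.eye(r)                      # identity block: X[:, :r] == W
for j in range(r, n):
    H[rng.integers(r), j] = rng.random()  # one nonzero weight per column
X = W @ H
assert np.allclose(X[:, :r], W)           # W's columns sit inside X
```

Recovering the index set of those special columns (here, the first `r`) from a noisy `X` is precisely what separable NMF algorithms are designed to do.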
arXiv Detail & Related papers (2020-06-13T03:52:29Z) - Multi-Objective Matrix Normalization for Fine-grained Visual Recognition [153.49014114484424]
Bilinear pooling achieves great success in fine-grained visual recognition (FGVC).
Recent methods have shown that the matrix power normalization can stabilize the second-order information in bilinear features.
We propose an efficient Multi-Objective Matrix Normalization (MOMN) method that can simultaneously normalize a bilinear representation.
arXiv Detail & Related papers (2020-03-30T08:40:35Z) - Convergence to Second-Order Stationarity for Non-negative Matrix
Factorization: Provably and Concurrently [18.89597524771988]
Non-negative matrix factorization (NMF) is a fundamental non-convex optimization problem with numerous applications in Machine Learning.
This paper defines a multiplicative weight update type dynamics (Seung algorithm) that runs concurrently and provably avoids saddle points.
An important advantage is its suitability for concurrent implementation in parallel computing environments.
arXiv Detail & Related papers (2020-02-26T06:40:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.