Analyze the robustness of three NMF algorithms (Robust NMF with L1 norm, L2-1 norm NMF, L2 NMF)
- URL: http://arxiv.org/abs/2312.01357v1
- Date: Sun, 3 Dec 2023 11:39:04 GMT
- Title: Analyze the robustness of three NMF algorithms (Robust NMF with L1 norm, L2-1 norm NMF, L2 NMF)
- Authors: Cheng Zeng, Jiaqi Tian, Yixuan Xu
- Abstract summary: We investigate the noise robustness of non-negative matrix factorization (NMF) in the face of different types of noise.
Specifically, we adopt three different NMF algorithms, namely L1 NMF, L2 NMF, and L21 NMF.
In the experiment, we use a variety of evaluation indicators, including root mean square error (RMSE), accuracy (ACC), and normalized mutual information (NMI) to evaluate the performance of different NMF algorithms.
- Score: 5.708964539699851
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Non-negative matrix factorization (NMF) and its variants have been widely
employed in clustering and classification tasks (Long & Jian, 2021). However,
noise can seriously affect the results of our experiments. Our research is
dedicated to investigating the noise robustness of non-negative matrix
factorization (NMF) in the face of different types of noise. Specifically, we
adopt three different NMF algorithms, namely L1 NMF, L2 NMF, and L21 NMF, and
use the ORL and YaleB data sets to simulate a series of experiments with
salt-and-pepper noise and block-occlusion noise separately. In the experiment,
we use a variety of evaluation indicators, including root mean square error
(RMSE), accuracy (ACC), and normalized mutual information (NMI), to evaluate
the performance of different NMF algorithms in noisy environments. Through
these indicators, we quantify the resistance of NMF algorithms to noise and
gain insights into their feasibility in practical applications.
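The protocol described in the abstract (corrupt the data, factorize, score the reconstruction) can be sketched in Python. Everything below is illustrative: synthetic data stands in for the ORL/YaleB images, and only scikit-learn's Frobenius-loss (L2) NMF is run, since the L1 and L2,1 objectives compared in the paper are not stock library components; ACC and NMI would additionally need a clustering step (e.g. k-means on `W`) that is omitted here.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Synthetic nonnegative data standing in for vectorized face images
# (the paper uses the ORL and YaleB datasets, which are not bundled here).
X_clean = rng.random((64, 200))

def salt_and_pepper(X, p=0.1, rng=rng):
    """Corrupt a fraction p of entries to 0 ("pepper") or the max value ("salt")."""
    Xn = X.copy()
    mask = rng.random(X.shape) < p
    salt = rng.random(X.shape) < 0.5
    Xn[mask & salt] = X.max()
    Xn[mask & ~salt] = 0.0
    return Xn

X_noisy = salt_and_pepper(X_clean, p=0.1)

# Frobenius-loss (L2) NMF; the least noise-robust of the three variants
# studied in the paper, but the only one shipped with scikit-learn.
model = NMF(n_components=10, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X_noisy)
H = model.components_

# RMSE of the reconstruction against the *clean* data, following the
# paper's evaluation protocol.
rmse = np.sqrt(np.mean((X_clean - W @ H) ** 2))
print(f"RMSE vs clean data: {rmse:.4f}")
```

Swapping the objective (L1 or L2,1) changes only the update rules, not this surrounding harness.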
Related papers
- On the Relation between Internal Language Model and Sequence Discriminative Training for Neural Transducers [52.88268942796418]
Internal language model (ILM) subtraction has been widely applied to improve the performance of the RNN-Transducer.
We show that sequence discriminative training has a strong correlation with ILM subtraction from both theoretical and empirical points of view.
arXiv Detail & Related papers (2023-09-25T13:35:28Z)
- Contaminated Images Recovery by Implementing Non-negative Matrix Factorisation [0.0]
We theoretically examine the robustness of the traditional NMF, HCNMF, and L2,1-NMF algorithms and execute sets of experiments to demonstrate the robustness on ORL and Extended YaleB datasets.
Due to the computational cost of these approaches, our final models, such as the HCNMF and L2,1-NMF model, fail to converge within the parameters of this work.
arXiv Detail & Related papers (2022-11-08T13:50:27Z)
- Unitary Approximate Message Passing for Matrix Factorization [90.84906091118084]
We consider matrix factorization (MF) with certain constraints, which finds wide applications in various areas.
We develop a Bayesian approach to MF with an efficient message passing implementation, called UAMPMF.
We show that UAMPMF significantly outperforms state-of-the-art algorithms in terms of recovery accuracy, robustness and computational complexity.
arXiv Detail & Related papers (2022-07-31T12:09:32Z)
- Adaptive Weighted Nonnegative Matrix Factorization for Robust Feature Representation [9.844796520630522]
Nonnegative matrix factorization (NMF) has been widely used for dimensionality reduction in machine learning.
Traditional NMF does not properly handle outliers, so it is sensitive to noise.
This paper proposes an adaptive weighted NMF, which introduces weights to emphasize the different importance of each data point.
arXiv Detail & Related papers (2022-06-07T05:27:08Z)
- SymNMF-Net for The Symmetric NMF Problem [62.44067422984995]
We propose a neural network called SymNMF-Net for the Symmetric NMF problem.
We show that the inference of each block corresponds to a single iteration of the optimization.
Empirical results on real-world datasets demonstrate the superiority of our SymNMF-Net.
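For context, the symmetric NMF objective min_{H>=0} ||X - HH^T||_F^2 has a classical damped multiplicative update (Kuang et al.'s rule with beta = 0.5), sketched below on synthetic data; the unrolled SymNMF-Net architecture itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 1e-12

# Symmetric nonnegative input, e.g. a similarity matrix X = B B^T.
B = rng.random((50, 6))
X = B @ B.T

k = 6
H = rng.random((50, k))

def loss(X, H):
    return np.linalg.norm(X - H @ H.T) ** 2

# Damped multiplicative rule for min ||X - H H^T||_F^2 (beta = 0.5).
beta = 0.5
start = loss(X, H)
for _ in range(300):
    H *= 1.0 - beta + beta * (X @ H) / (H @ (H.T @ H) + eps)
end = loss(X, H)
print(start, "->", end)
```

Each iteration of such an update corresponds to one block of the unrolled network in the paper.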
arXiv Detail & Related papers (2022-05-26T08:17:39Z)
- Log-based Sparse Nonnegative Matrix Factorization for Data Representation [55.72494900138061]
Nonnegative matrix factorization (NMF) has been widely studied in recent years due to its effectiveness in representing nonnegative data with parts-based representations.
We propose a new NMF method with log-norm imposed on the factor matrices to enhance the sparseness.
A novel column-wise sparse norm, named the $\ell_{2,\log}$-(pseudo) norm, is proposed to enhance the robustness of the proposed method.
arXiv Detail & Related papers (2022-04-22T11:38:10Z)
- Analysis of the robustness of NMF algorithms [0.0]
We examine three non-negative matrix factorization techniques: L2-norm, L1-norm, and L2,1-norm.
Our aim is to establish the performance of these different approaches, and their robustness in real-world applications.
arXiv Detail & Related papers (2021-06-04T02:35:24Z)
- Feature Weighted Non-negative Matrix Factorization [92.45013716097753]
We propose Feature weighted Non-negative Matrix Factorization (FNMF) in this paper.
FNMF learns the weights of features adaptively according to their importance.
It can be solved efficiently with the suggested optimization algorithm.
arXiv Detail & Related papers (2021-03-24T21:17:17Z)
- Algorithms for Nonnegative Matrix Factorization with the Kullback-Leibler Divergence [20.671178429005973]
The Kullback-Leibler (KL) divergence is one of the most widely used objective functions for nonnegative matrix factorization (NMF).
We propose three new algorithms that guarantee the non-increasingness of the objective function.
We conduct extensive numerical experiments to provide a comprehensive picture of the performances of the KL NMF algorithms.
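For context, the classic baseline these new algorithms compete with is the Lee-Seung multiplicative update for KL NMF, which is also non-increasing in the generalized KL divergence; the paper's three new algorithms are not reproduced here, and the data below is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 1e-12

X = rng.random((40, 60)) + eps  # strictly positive synthetic data
k = 8
W = rng.random((40, k)) + eps
H = rng.random((k, 60)) + eps

def kl_div(X, Y):
    # Generalized KL divergence D(X || Y) = sum X log(X/Y) - X + Y.
    return np.sum(X * np.log(X / Y) - X + Y)

# Classic Lee-Seung multiplicative updates for the KL objective.
losses = []
for _ in range(200):
    WH = W @ H + eps
    H *= (W.T @ (X / WH)) / (W.sum(axis=0)[:, None] + eps)
    WH = W @ H + eps
    W *= ((X / WH) @ H.T) / (H.sum(axis=1)[None, :] + eps)
    losses.append(kl_div(X, W @ H + eps))
```

Tracking `losses` over iterations is the simplest way to observe the non-increasingness property the paper's summary refers to.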
arXiv Detail & Related papers (2020-10-05T11:51:39Z)
- Positive Semidefinite Matrix Factorization: A Connection with Phase Retrieval and Affine Rank Minimization [71.57324258813674]
We show that PSDMF algorithms can be designed based on phase retrieval (PR) and affine rank minimization (ARM) algorithms.
Motivated by this idea, we introduce a new family of PSDMF algorithms based on iterative hard thresholding (IHT).
arXiv Detail & Related papers (2020-07-24T06:10:19Z)
- Sparse Separable Nonnegative Matrix Factorization [22.679160149512377]
We propose a new variant of nonnegative matrix factorization (NMF).
Separability requires that the columns of the first NMF factor are equal to columns of the input matrix, while sparsity requires that the columns of the second NMF factor are sparse.
We prove that, in noiseless settings and under mild assumptions, our algorithm recovers the true underlying sources.
arXiv Detail & Related papers (2020-06-13T03:52:29Z)
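The separability condition is concrete enough to check in code: construct X so that its first r columns are exactly the columns of the first factor W, then recover those anchor columns with the successive projection algorithm (SPA), a standard separable-NMF baseline (not the sparsity-constrained algorithm proposed in the paper above).

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a separable matrix: X = W @ H with H = [I | M], so the first r
# columns of X are exactly the columns of W (the "anchor" columns).
m, r, n_extra = 20, 4, 30
W_true = rng.random((m, r))
H = np.hstack([np.eye(r), rng.dirichlet(np.ones(r), size=n_extra).T])
X = W_true @ H

def spa(X, r):
    """Successive Projection Algorithm: greedily pick the max-norm column,
    then project it out of the residual."""
    R = X.copy()
    anchors = []
    for _ in range(r):
        j = np.argmax(np.linalg.norm(R, axis=0))
        anchors.append(j)
        u = R[:, j] / np.linalg.norm(R[:, j])
        R -= np.outer(u, u @ R)  # remove the selected direction
    return sorted(anchors)

print(spa(X, r))  # in the noiseless separable case, SPA finds the anchors
```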
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.