Supervised Class-pairwise NMF for Data Representation and Classification
- URL: http://arxiv.org/abs/2209.13831v1
- Date: Wed, 28 Sep 2022 04:33:03 GMT
- Title: Supervised Class-pairwise NMF for Data Representation and Classification
- Authors: Rachid Hedjam, Abdelhamid Abdesselam, Seyed Mohammad Jafar Jalali,
Imran Khan, Samir Brahim Belhaouari
- Abstract summary: Non-negative Matrix factorization (NMF) based methods add new terms to the cost function to adapt the model to specific tasks.
The standard NMF method adopts an unsupervised approach to estimate the factorizing matrices.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Various Non-negative Matrix factorization (NMF) based methods add new terms
to the cost function to adapt the model to specific tasks, such as clustering,
or to preserve some structural properties in the reduced space (e.g., local
invariance). The added term is mainly weighted by a hyper-parameter to control
the balance of the overall formula to guide the optimization process towards
the objective. The result is a parameterized NMF method. However, NMF method
adopts unsupervised approaches to estimate the factorizing matrices. Thus, the
ability to perform prediction (e.g. classification) using the new obtained
features is not guaranteed. The objective of this work is to design an
evolutionary framework to learn the hyper-parameter of the parameterized NMF
and estimate the factorizing matrices in a supervised way to be more suitable
for classification problems. Moreover, we claim that applying NMF-based
algorithms separately to different class pairs, instead of applying them once
to the whole dataset, improves the effectiveness of the matrix factorization
process. This results in training multiple parameterized NMF algorithms with
different balancing parameter values. A cross-validation combination learning
framework is adopted and a Genetic Algorithm is used to identify the optimal
set of hyper-parameter values. The experiments we conducted on both real and
synthetic datasets demonstrated the effectiveness of the proposed approach.
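As a loose illustration of the class-pairwise idea, one NMF model can be fit per class pair rather than a single NMF on the whole dataset. This sketch is an assumption for illustration only: the authors' full method additionally adds a supervised penalty term whose weight is tuned by a Genetic Algorithm, which is omitted here, and the helper name, toy data, and scikit-learn usage are not from the paper.

```python
import numpy as np
from itertools import combinations
from sklearn.decomposition import NMF

def fit_pairwise_nmf(X, y, n_components=2, seed=0):
    """Fit one NMF model per class pair, on the samples of that pair only.

    Hypothetical helper: illustrates the pairwise decomposition structure,
    not the paper's supervised, GA-tuned objective.
    """
    models = {}
    classes = sorted({int(v) for v in y})
    for a, b in combinations(classes, 2):
        mask = (y == a) | (y == b)           # samples of this class pair
        model = NMF(n_components=n_components, init="random",
                    random_state=seed, max_iter=500)
        model.fit(X[mask])                    # learns the pair's basis (components_)
        models[(a, b)] = model
    return models

# Toy non-negative data: three well-separated classes of 10 samples each.
rng = np.random.default_rng(0)
X = np.vstack([rng.random((10, 5)),
               rng.random((10, 5)) + 2.0,
               rng.random((10, 5)) + 4.0])
y = np.repeat([0, 1, 2], 10)
models = fit_pairwise_nmf(X, y)
print(sorted(models.keys()))  # one model per class pair
```

Each pair's reduced features (`model.transform`) could then feed a per-pair classifier, with some combination rule aggregating the pairwise decisions.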
Related papers
- Exponentially Convergent Algorithms for Supervised Matrix Factorization [2.1485350418225244]
Supervised matrix factorization (SMF) is a machine learning method that combines feature extraction and classification tasks.
Our paper provides a novel framework that 'lifts' SMF as a low-rank matrix estimation problem in a combined factor space.
arXiv Detail & Related papers (2023-11-18T23:24:02Z)
- Stratified-NMF for Heterogeneous Data [8.174199227297514]
We propose a modified NMF objective, Stratified-NMF, that simultaneously learns strata-dependent statistics and a shared topics matrix.
We apply our method to three real world datasets and empirically investigate their learned features.
arXiv Detail & Related papers (2023-11-17T00:34:41Z)
- Non-Negative Matrix Factorization with Scale Data Structure Preservation [23.31865419578237]
The model described in this paper belongs to the family of non-negative matrix factorization methods designed for data representation and dimension reduction.
The idea is to add, to the NMF cost function, a penalty term to impose a scale relationship between the pairwise similarity matrices of the original and transformed data points.
The proposed clustering algorithm is compared to some existing NMF-based algorithms and to some manifold learning-based algorithms when applied to some real-life datasets.
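Schematically, penalty-augmented NMF objectives of this kind take the generic form below; the balance weight $\lambda$ and the generic penalty $\Omega$ are placeholders, not the paper's exact scale term:

```latex
\min_{W \ge 0,\, H \ge 0} \;
\underbrace{\lVert X - WH \rVert_F^2}_{\text{reconstruction}}
\;+\;
\lambda \, \underbrace{\Omega\big(S_X,\, S_{WH}\big)}_{\text{structure-preserving penalty}}
```

where $S_X$ and $S_{WH}$ denote the pairwise similarity matrices of the original and transformed data points, respectively.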
arXiv Detail & Related papers (2022-09-22T09:32:18Z)
- Unitary Approximate Message Passing for Matrix Factorization [90.84906091118084]
We consider matrix factorization (MF) with certain constraints, which finds wide applications in various areas.
We develop a Bayesian approach to MF with an efficient message passing implementation, called UAMPMF.
We show that UAMPMF significantly outperforms state-of-the-art algorithms in terms of recovery accuracy, robustness and computational complexity.
arXiv Detail & Related papers (2022-07-31T12:09:32Z)
- Adaptive Weighted Nonnegative Matrix Factorization for Robust Feature Representation [9.844796520630522]
Nonnegative matrix factorization (NMF) has been widely used for dimensionality reduction in machine learning.
Traditional NMF does not properly handle outliers, so it is sensitive to noise.
This paper proposes an adaptive weighted NMF, which introduces weights to emphasize the different importance of each data point.
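As a rough illustration of how per-entry weights can enter NMF's multiplicative updates, the sketch below uses a FIXED, user-supplied weight matrix `S`; this is an assumption for illustration, whereas the paper learns the weights adaptively.

```python
import numpy as np

def weighted_nmf(X, S, rank, n_iter=300, eps=1e-9, seed=0):
    """Multiplicative updates minimizing the weighted Frobenius error
    ||sqrt(S) * (X - WH)||_F with a fixed weight matrix S (illustrative)."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(n_iter):
        WH = W @ H
        H *= (W.T @ (S * X)) / (W.T @ (S * WH) + eps)   # update coefficients
        WH = W @ H
        W *= ((S * X) @ H.T) / ((S * WH) @ H.T + eps)   # update basis
    return W, H

rng = np.random.default_rng(1)
X = rng.random((8, 6))
S = np.ones_like(X)  # uniform weights: reduces to plain (Lee-Seung) NMF
W, H = weighted_nmf(X, S, rank=3)
err = np.linalg.norm(X - W @ H, "fro") / np.linalg.norm(X, "fro")
print(f"relative reconstruction error: {err:.3f}")
```

Down-weighting suspected outlier entries in `S` shrinks their influence on the factorization, which is the intuition behind robustness to noise.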
arXiv Detail & Related papers (2022-06-07T05:27:08Z)
- Log-based Sparse Nonnegative Matrix Factorization for Data Representation [55.72494900138061]
Nonnegative matrix factorization (NMF) has been widely studied in recent years due to its effectiveness in representing nonnegative data with parts-based representations.
We propose a new NMF method with log-norm imposed on the factor matrices to enhance the sparseness.
A novel column-wise sparse norm, named the $\ell_{2,\log}$-(pseudo) norm, is proposed to enhance the robustness of the method.
arXiv Detail & Related papers (2022-04-22T11:38:10Z)
- Feature Weighted Non-negative Matrix Factorization [92.45013716097753]
We propose the Feature weighted Non-negative Matrix Factorization (FNMF) in this paper.
FNMF learns the weights of features adaptively according to their importance.
It can be solved efficiently with the suggested optimization algorithm.
arXiv Detail & Related papers (2021-03-24T21:17:17Z)
- Self-supervised Symmetric Nonnegative Matrix Factorization [82.59905231819685]
Symmetric nonnegative matrix factorization (SNMF) has been demonstrated to be a powerful method for data clustering.
Inspired by ensemble clustering, which aims to seek better clustering results, we propose self-supervised SNMF (S$^3$NMF).
We take advantage of the sensitivity to code characteristic of SNMF, without relying on any additional information.
arXiv Detail & Related papers (2021-03-02T12:47:40Z)
- Algorithms for Nonnegative Matrix Factorization with the Kullback-Leibler Divergence [20.671178429005973]
Kullback-Leibler (KL) divergence is one of the most widely used objective functions for nonnegative matrix factorization (NMF).
We propose three new algorithms that guarantee the non-increasingness of the objective function.
We conduct extensive numerical experiments to provide a comprehensive picture of the performances of the KL NMF algorithms.
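For context, the classical Lee-Seung multiplicative updates for NMF under the KL divergence are sketched below; this is the standard baseline, not one of the paper's three new algorithms.

```python
import numpy as np

def kl_nmf(X, rank, n_iter=200, eps=1e-9, seed=0):
    """Classical Lee-Seung multiplicative updates for KL-divergence NMF."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        # H update: H <- H * (W^T (X / WH)) / (column sums of W)
        H *= (W.T @ (X / (W @ H + eps))) / (W.sum(axis=0)[:, None] + eps)
        # W update: W <- W * ((X / WH) H^T) / (row sums of H)
        W *= ((X / (W @ H + eps)) @ H.T) / (H.sum(axis=1)[None, :] + eps)
    return W, H

def kl_div(X, Y, eps=1e-9):
    """Generalized KL divergence D(X || Y) for nonnegative matrices."""
    return float(np.sum(X * np.log((X + eps) / (Y + eps)) - X + Y))

X = np.random.default_rng(2).random((10, 7)) + 0.1  # strictly positive data
W, H = kl_nmf(X, rank=3)
print(f"KL divergence after fitting: {kl_div(X, W @ H):.4f}")
```

These updates are guaranteed not to increase the KL objective at each step, which is the same monotonicity property the paper establishes for its new algorithms.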
arXiv Detail & Related papers (2020-10-05T11:51:39Z)
- Positive Semidefinite Matrix Factorization: A Connection with Phase Retrieval and Affine Rank Minimization [71.57324258813674]
We show that PSDMF algorithms can be designed based on phase retrieval (PR) and affine rank minimization (ARM) algorithms.
Motivated by this idea, we introduce a new family of PSDMF algorithms based on iterative hard thresholding (IHT)
arXiv Detail & Related papers (2020-07-24T06:10:19Z)
- Augmentation of the Reconstruction Performance of Fuzzy C-Means with an Optimized Fuzzification Factor Vector [99.19847674810079]
Fuzzy C-Means (FCM) is one of the most frequently used methods to construct information granules.
In this paper, we augment the FCM-based degranulation mechanism by introducing a vector of fuzzification factors.
Experiments completed for both synthetic and publicly available datasets show that the proposed approach outperforms the generic data reconstruction approach.
arXiv Detail & Related papers (2020-04-13T04:17:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.