Feature Weighted Non-negative Matrix Factorization
- URL: http://arxiv.org/abs/2103.13491v1
- Date: Wed, 24 Mar 2021 21:17:17 GMT
- Title: Feature Weighted Non-negative Matrix Factorization
- Authors: Mulin Chen, Maoguo Gong, and Xuelong Li
- Abstract summary: In this paper, we propose Feature weighted Non-negative Matrix Factorization (FNMF).
FNMF learns the weight of each feature adaptively according to its importance.
It can be solved efficiently with the proposed optimization algorithm.
- Score: 92.45013716097753
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Non-negative Matrix Factorization (NMF) is one of the most popular techniques
for data representation and clustering, and has been widely used in machine
learning and data analysis. NMF stacks the features of each sample into a
vector and approximates it by a linear combination of basis vectors, so that a
low-dimensional representation is obtained. However, in real-world
applications, features usually differ in importance. To exploit
the discriminative features, some methods project the samples into a subspace
with a transformation matrix, which disturbs the original feature attributes
and neglects the diversity of samples. To alleviate the above problems, we
propose the Feature weighted Non-negative Matrix Factorization (FNMF) in this
paper. The salient properties of FNMF are threefold: 1) it learns the weights
of features adaptively according to their importance; 2) it utilizes multiple
feature weighting components to preserve sample diversity; 3) it can be solved
efficiently with the proposed optimization algorithm. Experiments on synthetic
and real-world datasets demonstrate that the proposed method achieves
state-of-the-art performance.
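As a rough illustration of the general idea (not the authors' exact FNMF update rules), the sketch below pairs standard weighted multiplicative NMF updates with an assumed softmax-style re-weighting of features by their reconstruction error; the function name, the weight-update heuristic, and the temperature parameter `gamma` are illustrative assumptions rather than details from the paper.

```python
# Minimal NumPy sketch of feature-weighted NMF (illustrative, not the paper's FNMF).
import numpy as np

def feature_weighted_nmf(X, rank, n_iter=200, gamma=1.0, eps=1e-10, seed=0):
    """Factorize X (features x samples) as W @ H while learning per-feature weights a."""
    rng = np.random.default_rng(seed)
    n_feat, n_samp = X.shape
    W = rng.random((n_feat, rank)) + eps
    H = rng.random((rank, n_samp)) + eps
    a = np.full(n_feat, 1.0 / n_feat)           # non-negative feature weights, sum to 1

    for _ in range(n_iter):
        Aw = a[:, None]                         # broadcast weights over the sample axis
        # Multiplicative updates for the weighted loss sum_ij a_i * (X_ij - (WH)_ij)^2.
        WH = W @ H
        W *= ((Aw * X) @ H.T) / ((Aw * WH) @ H.T + eps)
        WH = W @ H
        H *= (W.T @ (Aw * X)) / (W.T @ (Aw * WH) + eps)
        # Assumed re-weighting heuristic: features reconstructed with lower error
        # receive larger weight; gamma controls how peaked the weights become.
        err = np.sum((X - W @ H) ** 2, axis=1)
        a = np.exp(-(err - err.min()) / gamma)  # shift by the minimum for stability
        a /= a.sum()
    return W, H, a
```

With a large `gamma` the weights stay near uniform and the sketch reduces to plain NMF; with a small `gamma` the factorization concentrates on the features it can reconstruct well, which mimics the adaptive feature weighting described in the abstract.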
Related papers
- Coseparable Nonnegative Tensor Factorization With T-CUR Decomposition [2.013220890731494]
Nonnegative Matrix Factorization (NMF) is an important unsupervised learning method to extract meaningful features from data.
In this work, we provide an alternating selection method to select the coseparable core.
The results demonstrate the efficiency of coseparable NTF when compared to coseparable NMF.
arXiv Detail & Related papers (2024-01-30T09:22:37Z) - Stratified-NMF for Heterogeneous Data [8.174199227297514]
We propose a modified NMF objective, Stratified-NMF, that simultaneously learns strata-dependent statistics and a shared topics matrix.
We apply our method to three real world datasets and empirically investigate their learned features.
arXiv Detail & Related papers (2023-11-17T00:34:41Z) - NPEFF: Non-Negative Per-Example Fisher Factorization [52.44573961263344]
We introduce a novel interpretability method called NPEFF that is readily applicable to any end-to-end differentiable model.
We demonstrate that NPEFF has interpretable tunings through experiments on language and vision models.
arXiv Detail & Related papers (2023-10-07T02:02:45Z) - Unitary Approximate Message Passing for Matrix Factorization [90.84906091118084]
We consider matrix factorization (MF) with certain constraints, which finds wide applications in various areas.
We develop a Bayesian approach to MF with an efficient message passing implementation, called UAMPMF.
We show that UAMPMF significantly outperforms state-of-the-art algorithms in terms of recovery accuracy, robustness and computational complexity.
arXiv Detail & Related papers (2022-07-31T12:09:32Z) - An Entropy Weighted Nonnegative Matrix Factorization Algorithm for Feature Representation [6.156004893556576]
We propose a new type of NMF called entropy weighted NMF (EWNMF).
EWNMF uses an optimizable weight for each attribute of each data point to emphasize its importance (one plausible form of such an objective is sketched after this list).
Experimental results on several datasets demonstrate the feasibility and effectiveness of the proposed method.
arXiv Detail & Related papers (2021-11-27T23:37:20Z) - A deep learning driven pseudospectral PCE based FFT homogenization algorithm for complex microstructures [68.8204255655161]
It is shown that the proposed method is able to predict central moments of interest while being magnitudes faster to evaluate than traditional approaches.
arXiv Detail & Related papers (2021-10-26T07:02:14Z) - Initialization for Nonnegative Matrix Factorization: a Comprehensive Review [0.0]
Non-negative matrix factorization (NMF) has become a popular method for representing meaningful data by extracting a non-negative basis from a non-negative data matrix.
Some numerical results to illustrate the performance of each method are presented.
arXiv Detail & Related papers (2021-09-08T18:49:41Z) - Co-Separable Nonnegative Matrix Factorization [20.550794776914508]
Nonnegative matrix factorization (NMF) is a popular model in the field of pattern recognition.
We refer to this NMF as Co-Separable NMF (CoS-NMF).
An optimization model for CoS-NMF is proposed, and an alternating fast gradient method is employed to solve it.
arXiv Detail & Related papers (2021-09-02T07:05:04Z) - Self-supervised Symmetric Nonnegative Matrix Factorization [82.59905231819685]
Symmetric nonnegative matrix factorization (SNMF) has been demonstrated to be a powerful method for data clustering.
Inspired by ensemble clustering, which aims to seek better clustering results, we propose self-supervised SNMF (S$^3$NMF).
We take advantage of SNMF's sensitivity to initialization, without relying on any additional information.
arXiv Detail & Related papers (2021-03-02T12:47:40Z) - Invariant Feature Coding using Tensor Product Representation [75.62232699377877]
We prove that the group-invariant feature vector contains sufficient discriminative information when learning a linear classifier.
A novel feature model that explicitly considers group actions is proposed for principal component analysis and k-means clustering.
arXiv Detail & Related papers (2019-06-05T07:15:17Z)
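For the entropy weighted NMF (EWNMF) entry above, one plausible form of a per-entry entropy-weighted objective (an assumption for exposition, not necessarily the paper's exact formulation) is

$$\min_{W,H,\Theta \ge 0}\ \sum_{i,j}\Theta_{ij}\bigl(X_{ij}-(WH)_{ij}\bigr)^{2} + \gamma\sum_{i,j}\Theta_{ij}\ln\Theta_{ij}, \qquad \text{s.t.}\ \sum_{i}\Theta_{ij}=1\ \ \forall j,$$

where $\Theta_{ij}$ weights attribute $i$ of data point $j$ and $\gamma>0$ keeps the weights from collapsing onto a single easy-to-fit attribute; with $W$ and $H$ fixed, the optimal $\Theta$ is a softmax of the negative per-entry residuals.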