Ordinal Non-negative Matrix Factorization for Recommendation
- URL: http://arxiv.org/abs/2006.01034v4
- Date: Wed, 2 Sep 2020 14:29:29 GMT
- Title: Ordinal Non-negative Matrix Factorization for Recommendation
- Authors: Olivier Gouvert, Thomas Oberlin and Cédric Févotte
- Abstract summary: We introduce a new non-negative matrix factorization (NMF) method for ordinal data, called OrdNMF.
OrdNMF is a latent factor model that generalizes Bernoulli-Poisson factorization (BePoF) and Poisson factorization (PF) applied to binarized data.
- Score: 9.431454966446076
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a new non-negative matrix factorization (NMF) method for ordinal
data, called OrdNMF. Ordinal data are categorical data which exhibit a natural
ordering between the categories. In particular, they can be found in
recommender systems, either with explicit data (such as ratings) or implicit
data (such as quantized play counts). OrdNMF is a probabilistic latent factor
model that generalizes Bernoulli-Poisson factorization (BePoF) and Poisson
factorization (PF) applied to binarized data. Contrary to these methods, OrdNMF
circumvents binarization and can exploit a more informative representation of
the data. We design an efficient variational algorithm based on a suitable
model augmentation and related to variational PF. In particular, our algorithm
preserves the scalability of PF and can be applied to huge sparse datasets. We
report recommendation experiments on explicit and implicit datasets, and show
that OrdNMF outperforms BePoF and PF applied to binarized data.
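Concretely, plain Poisson factorization models a nonnegative data matrix as V ~ Poisson(WH) with nonnegative factors W and H, and maximizing the likelihood is equivalent to KL-divergence NMF. A minimal sketch with classic multiplicative updates (this is the baseline model OrdNMF generalizes, not the authors' variational OrdNMF algorithm; all names are ours):

```python
import numpy as np

def kl_nmf(V, rank, n_iter=500, eps=1e-10, seed=0):
    """Fit V ~= W @ H under the Poisson/KL objective with multiplicative updates."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    ones = np.ones_like(V)
    for _ in range(n_iter):
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (ones @ H.T + eps)   # denominator: row sums of H
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.T @ ones + eps)   # denominator: column sums of W
    return W, H

# recover an exact rank-3 nonnegative matrix
rng = np.random.default_rng(1)
V = rng.random((12, 3)) @ rng.random((3, 10))
W, H = kl_nmf(V, rank=3)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

These updates keep the factors nonnegative by construction and monotonically decrease the KL objective, which is why they remain the standard baseline for count data.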
Related papers
- Coseparable Nonnegative Tensor Factorization With T-CUR Decomposition [2.013220890731494]
Nonnegative Matrix Factorization (NMF) is an important unsupervised learning method to extract meaningful features from data.
In this work, we provide an alternating selection method to select the coseparable core.
The results demonstrate the efficiency of coseparable NTF when compared to coseparable NMF.
arXiv Detail & Related papers (2024-01-30T09:22:37Z)
- Large-scale gradient-based training of Mixtures of Factor Analyzers [67.21722742907981]
This article contributes both a theoretical analysis as well as a new method for efficient high-dimensional training by gradient descent.
We prove that MFA training and inference/sampling can be performed based on precision matrices, which does not require matrix inversions after training is completed.
Beyond the theoretical analysis, we apply MFA to typical image datasets such as SVHN and MNIST, and demonstrate its ability to perform sample generation and outlier detection.
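The precision-matrix point can be illustrated on a plain multivariate Gaussian: once the precision matrix is available, evaluating the log-density requires no matrix inversion (a generic sketch, not the article's MFA formulation; names are ours):

```python
import numpy as np

def gaussian_logpdf_from_precision(x, mu, precision):
    """log N(x; mu, Sigma) where precision = inv(Sigma), with no inversion performed."""
    d = x - mu
    _, logdet = np.linalg.slogdet(precision)  # log|precision| = -log|Sigma|
    k = x.shape[0]
    return 0.5 * (logdet - k * np.log(2.0 * np.pi) - d @ precision @ d)

# 1-D standard normal at x = 0: log pdf is -0.5 * log(2*pi)
val = gaussian_logpdf_from_precision(np.zeros(1), np.zeros(1), np.eye(1))
```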
arXiv Detail & Related papers (2023-08-26T06:12:33Z)
- Supervised Class-pairwise NMF for Data Representation and Classification [2.7320863258816512]
Non-negative matrix factorization (NMF)-based methods add new terms to the cost function to adapt the model to specific tasks.
The standard NMF method adopts an unsupervised approach to estimate the factorizing matrices.
arXiv Detail & Related papers (2022-09-28T04:33:03Z)
- Non-Negative Matrix Factorization with Scale Data Structure Preservation [23.31865419578237]
The model described in this paper belongs to the family of non-negative matrix factorization methods designed for data representation and dimension reduction.
The idea is to add, to the NMF cost function, a penalty term to impose a scale relationship between the pairwise similarity matrices of the original and transformed data points.
The proposed clustering algorithm is compared to some existing NMF-based algorithms and to some manifold learning-based algorithms when applied to some real-life datasets.
arXiv Detail & Related papers (2022-09-22T09:32:18Z)
- Log-based Sparse Nonnegative Matrix Factorization for Data Representation [55.72494900138061]
Nonnegative matrix factorization (NMF) has been widely studied in recent years due to its effectiveness in representing nonnegative data with parts-based representations.
We propose a new NMF method with log-norm imposed on the factor matrices to enhance the sparseness.
A novel column-wise sparse norm, named the $\ell_{2,\log}$-(pseudo) norm, is proposed to enhance the robustness of the proposed method.
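The log-norm penalty is the paper's own contribution; the general mechanics of penalized sparse NMF can be sketched with the more common L1 penalty on one factor, which simply adds a constant to the denominator of the multiplicative update (our illustration, not the paper's method):

```python
import numpy as np

def sparse_nmf(V, rank, lam=0.0, n_iter=300, eps=1e-10, seed=0):
    """Frobenius NMF with an L1 penalty on H:  min ||V - WH||_F^2 + lam * sum(H).

    The penalty appears only as an extra `lam` in the denominator of the
    multiplicative update for H, shrinking small entries toward zero.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + lam + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# a larger penalty drains mass out of H (it migrates into W via scale ambiguity)
rng = np.random.default_rng(2)
V = rng.random((15, 3)) @ rng.random((3, 12))
_, H_plain = sparse_nmf(V, rank=5, lam=0.0)
_, H_sparse = sparse_nmf(V, rank=5, lam=0.5)
h_plain_mass = float(H_plain.sum())
h_sparse_mass = float(H_sparse.sum())
```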
arXiv Detail & Related papers (2022-04-22T11:38:10Z)
- An Entropy Weighted Nonnegative Matrix Factorization Algorithm for Feature Representation [6.156004893556576]
We propose a new type of NMF called entropy weighted NMF (EWNMF).
EWNMF uses an optimizable weight for each attribute of each data point to emphasize their importance.
Experimental results with several data sets demonstrate the feasibility and effectiveness of the proposed method.
arXiv Detail & Related papers (2021-11-27T23:37:20Z)
- Rank-R FNN: A Tensor-Based Learning Model for High-Order Data Classification [69.26747803963907]
Rank-R Feedforward Neural Network (FNN) is a tensor-based nonlinear learning model that imposes Canonical/Polyadic decomposition on its parameters.
First, it handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension.
We establish the universal approximation and learnability properties of Rank-R FNN, and we validate its performance on real-world hyperspectral datasets.
arXiv Detail & Related papers (2021-04-11T16:37:32Z)
- Feature Weighted Non-negative Matrix Factorization [92.45013716097753]
We propose the Feature weighted Non-negative Matrix Factorization (FNMF) in this paper.
FNMF learns the weights of features adaptively according to their importance.
It can be solved efficiently with the suggested optimization algorithm.
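FNMF learns the feature weights adaptively; as a simplified stand-in, weighted NMF with fixed per-feature weights shows where such weights enter the multiplicative updates (our sketch, not the paper's algorithm):

```python
import numpy as np

def weighted_nmf(V, q, rank, n_iter=300, eps=1e-10, seed=0):
    """Minimize the feature-weighted loss ||diag(sqrt(q)) @ (V - W @ H)||_F^2.

    q[i] is a fixed nonnegative weight for feature (row) i; a larger weight
    forces a better fit of that row.  FNMF itself learns these weights.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    Q = q[:, None]  # broadcast the per-row weights over columns
    for _ in range(n_iter):
        W *= ((Q * V) @ H.T) / ((Q * (W @ H)) @ H.T + eps)
        H *= (W.T @ (Q * V)) / (W.T @ (Q * (W @ H)) + eps)
    return W, H

# with uniform weights this reduces to ordinary Frobenius NMF
rng = np.random.default_rng(3)
V = rng.random((10, 3)) @ rng.random((3, 8))
W, H = weighted_nmf(V, q=np.ones(10), rank=3)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```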
arXiv Detail & Related papers (2021-03-24T21:17:17Z)
- Entropy Minimizing Matrix Factorization [102.26446204624885]
Nonnegative Matrix Factorization (NMF) is a widely used data analysis technique that has yielded impressive results in many real-world tasks.
In this study, an Entropy Minimizing Matrix Factorization (EMMF) framework is developed to reduce the influence of outliers.
Considering that outliers are usually far fewer than normal samples, a new entropy loss function is established for matrix factorization.
arXiv Detail & Related papers (2021-03-24T21:08:43Z)
- Self-supervised Symmetric Nonnegative Matrix Factorization [82.59905231819685]
Symmetric nonnegative matrix factorization (SNMF) has been demonstrated to be a powerful method for data clustering.
Inspired by ensemble clustering, which aims to seek better clustering results, we propose self-supervised SNMF (S$^3$NMF).
We take advantage of SNMF's characteristic sensitivity to initialization, without relying on any additional information.
arXiv Detail & Related papers (2021-03-02T12:47:40Z)
- Data embedding and prediction by sparse tropical matrix factorization [0.0]
We propose a method called Sparse Tropical Matrix Factorization (STMF) for the estimation of missing (unknown) values.
Tests on synthetic data show that the STMF approximation achieves a higher correlation than non-negative matrix factorization.
STMF is the first work to use the tropical semiring on sparse data.
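In the tropical (max-plus) semiring, addition is replaced by max and multiplication by +, so a tropical matrix product takes a max of sums instead of a sum of products (a sketch of the semiring operation itself, not of the STMF fitting procedure):

```python
import numpy as np

def tropical_matmul(A, B):
    """Max-plus product: C[i, j] = max_k (A[i, k] + B[k, j])."""
    return (A[:, :, None] + B[None, :, :]).max(axis=1)

A = np.array([[0.0, 1.0],
              [2.0, 3.0]])
B = np.array([[0.0, 10.0],
              [1.0, 0.0]])
C = tropical_matmul(A, B)  # -> [[2., 10.], [4., 12.]]
```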
arXiv Detail & Related papers (2020-12-09T18:09:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.