Deep Variational Models for Collaborative Filtering-based Recommender
Systems
- URL: http://arxiv.org/abs/2107.12677v1
- Date: Tue, 27 Jul 2021 08:59:39 GMT
- Title: Deep Variational Models for Collaborative Filtering-based Recommender
Systems
- Authors: Jesús Bobadilla, Fernando Ortega, Abraham Gutiérrez, Ángel González-Prieto
- Abstract summary: Deep learning provides accurate collaborative filtering models to improve recommender system results.
Our proposed models apply the variational concept to inject stochasticity into the latent space of the deep architecture.
Results show the superiority of the proposed approach in scenarios where the variational enrichment exceeds the injected noise effect.
- Score: 63.995130144110156
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning provides accurate collaborative filtering models to improve
recommender system results. Deep matrix factorization and its related
collaborative neural networks are the state of the art in the field; nevertheless,
both models lack the necessary stochasticity to create the robust, continuous,
and structured latent spaces that variational autoencoders exhibit. On the
other hand, data augmentation through variational autoencoders does not provide
accurate results in the collaborative filtering field due to the high sparsity
of recommender systems. Our proposed models apply the variational concept to
inject stochasticity in the latent space of the deep architecture, introducing
the variational technique in the neural collaborative filtering field. This
method does not depend on the particular model used to generate the latent
representation. In this way, this approach can be applied as a plugin to any
current and future specific models. The proposed models have been tested using
four representative open datasets, three different quality measures, and
state-of-the-art baselines. The results show the superiority of the proposed
approach in scenarios where the variational enrichment exceeds the injected
noise effect. Additionally, a framework is provided to enable the
reproducibility of the conducted experiments.
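The abstract frames the variational technique as a model-agnostic plugin over the latent representation of an existing collaborative filtering model. A minimal sketch of that idea, assuming a PyTorch deep-MF-style backbone; the class names, layer sizes, and the handling of the KL term below are illustrative assumptions, not the authors' implementation:

```python
# Hypothetical sketch: wrap a deterministic CF latent vector with a variational
# (reparameterization) layer to inject stochasticity, as described in the abstract.
import torch
import torch.nn as nn

class VariationalPlugin(nn.Module):
    """Maps a deterministic latent vector to mean/log-variance heads and
    samples via the reparameterization trick."""
    def __init__(self, latent_dim: int):
        super().__init__()
        self.mu = nn.Linear(latent_dim, latent_dim)
        self.logvar = nn.Linear(latent_dim, latent_dim)

    def forward(self, z_det: torch.Tensor):
        mu = self.mu(z_det)
        logvar = self.logvar(z_det)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)  # stochastic latent sample
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return z, kl

class DeepMFWithVariationalLatent(nn.Module):
    """Illustrative deep-MF-style backbone: user/item embeddings are combined,
    passed through the plugin, then scored by a small MLP."""
    def __init__(self, n_users: int, n_items: int, latent_dim: int = 32):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, latent_dim)
        self.item_emb = nn.Embedding(n_items, latent_dim)
        self.plugin = VariationalPlugin(latent_dim)
        self.head = nn.Sequential(nn.Linear(latent_dim, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, users, items):
        z_det = self.user_emb(users) * self.item_emb(items)  # deterministic latent
        z, kl = self.plugin(z_det)                            # variational enrichment
        return self.head(z).squeeze(-1), kl
```

In such a setup the KL term returned by the plugin would typically be added to the rating prediction loss with a small weight, mirroring the usual VAE-style training objective.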
Related papers
- Diffusion Model for Data-Driven Black-Box Optimization [54.25693582870226]
We focus on diffusion models, a powerful generative AI technology, and investigate their potential for black-box optimization.
We study two practical types of labels: 1) noisy measurements of a real-valued reward function and 2) human preference based on pairwise comparisons.
Our proposed method reformulates the design optimization problem into a conditional sampling problem, which allows us to leverage the power of diffusion models.
arXiv Detail & Related papers (2024-03-20T00:41:12Z) - Distribution-Aware Data Expansion with Diffusion Models [55.979857976023695]
We propose DistDiff, a training-free data expansion framework based on the distribution-aware diffusion model.
DistDiff consistently enhances accuracy across a diverse range of datasets compared to models trained solely on original data.
arXiv Detail & Related papers (2024-03-11T14:07:53Z) - VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space shows an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z) - Federated Variational Inference Methods for Structured Latent Variable
Models [1.0312968200748118]
Federated learning methods enable model training across distributed data sources without data leaving their original locations.
We present a general and elegant solution based on structured variational inference, widely used in Bayesian machine learning.
We also provide a communication-efficient variant analogous to the canonical FedAvg algorithm (a generic FedAvg averaging sketch appears after this list).
arXiv Detail & Related papers (2023-02-07T08:35:04Z) - Model Selection for Bayesian Autoencoders [25.619565817793422]
We propose to optimize the distributional sliced-Wasserstein distance between the output of the autoencoder and the empirical data distribution.
We turn our BAE into a generative model by fitting a flexible Dirichlet mixture model in the latent space.
We evaluate our approach qualitatively and quantitatively using a vast experimental campaign on a number of unsupervised learning tasks and show that, in small-data regimes where priors matter, our approach provides state-of-the-art results.
arXiv Detail & Related papers (2021-06-11T08:55:00Z) - Multi-output Gaussian Processes for Uncertainty-aware Recommender
Systems [3.908842679355254]
We introduce an efficient strategy for model training and inference, resulting in a model that scales to very large and sparse datasets.
Our model also provides meaningful uncertainty estimates for each prediction.
arXiv Detail & Related papers (2021-06-08T10:01:14Z) - A Twin Neural Model for Uplift [59.38563723706796]
Uplift is a particular case of conditional treatment effect modeling.
We propose a new loss function defined by leveraging a connection with the Bayesian interpretation of the relative risk.
We show our proposed method is competitive with the state of the art in simulation settings and on real data from large-scale randomized experiments.
arXiv Detail & Related papers (2021-05-11T16:02:39Z) - Cross-Modal Generative Augmentation for Visual Question Answering [34.9601948665926]
This paper introduces a generative model for data augmentation by leveraging the correlations among multiple modalities.
The proposed model is able to quantify the confidence of augmented data by its generative probability, and can be jointly updated with a downstream pipeline.
arXiv Detail & Related papers (2021-05-11T04:51:26Z) - Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z) - Learning Deep-Latent Hierarchies by Stacking Wasserstein Autoencoders [22.54887526392739]
We propose a novel approach to training models with deep-latent hierarchies based on Optimal Transport.
We show that our method enables the generative model to fully leverage its deep-latent hierarchy, avoiding the well known "latent variable collapse" issue of VAEs.
arXiv Detail & Related papers (2020-10-07T15:04:20Z)
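One of the entries above mentions a communication-efficient variant analogous to the canonical FedAvg algorithm. As a point of reference, here is a generic sketch of FedAvg-style weighted parameter averaging; this is the standard aggregation rule, not that paper's structured variational method, and all names are illustrative:

```python
# Generic FedAvg-style aggregation: average each parameter tensor across
# clients, weighted by the size of each client's local dataset.
from typing import Dict, List
import torch

def fedavg_aggregate(client_states: List[Dict[str, torch.Tensor]],
                     client_sizes: List[int]) -> Dict[str, torch.Tensor]:
    total = float(sum(client_sizes))
    global_state: Dict[str, torch.Tensor] = {}
    for name in client_states[0]:
        global_state[name] = sum(
            (size / total) * state[name].float()
            for state, size in zip(client_states, client_sizes)
        )
    return global_state
```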
This list is automatically generated from the titles and abstracts of the papers in this site.