Joint Variational Autoencoders for Recommendation with Implicit Feedback
- URL: http://arxiv.org/abs/2008.07577v1
- Date: Mon, 17 Aug 2020 19:06:31 GMT
- Title: Joint Variational Autoencoders for Recommendation with Implicit Feedback
- Authors: Bahare Askari, Jaroslaw Szlichta, Amirali Salehi-Abari
- Abstract summary: We introduce joint variational autoencoders (JoVA) for top-k recommendation with implicit feedback.
Our experiments show JoVA-Hinge outperforms a broad set of state-of-the-art collaborative filtering methods.
- Score: 7.880059199461512
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Variational Autoencoders (VAEs) have recently shown promising performance in
collaborative filtering with implicit feedback. These existing recommendation
models learn user representations to reconstruct or predict user preferences.
We introduce joint variational autoencoders (JoVA), an ensemble of two VAEs, in
which VAEs jointly learn both user and item representations and collectively
reconstruct and predict user preferences. This design allows JoVA to capture
user-user and item-item correlations simultaneously. By extending the objective
function of JoVA with a hinge-based pairwise loss function (JoVA-Hinge), we
further specialize it for top-k recommendation with implicit feedback. Our
extensive experiments on several real-world datasets show that JoVA-Hinge
outperforms a broad set of state-of-the-art collaborative filtering methods,
under a variety of commonly-used metrics. Our empirical results also confirm
that JoVA-Hinge outperforms existing methods for cold-start users with
limited training data.
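The abstract describes JoVA-Hinge as combining the joint reconstructions of a user-side and an item-side VAE with a hinge-based pairwise loss for top-k ranking. A minimal sketch of that loss is below; it is an illustration only, not the paper's exact formulation — the function names, the averaging of the two VAEs' predicted scores, and the fixed margin of 1.0 are all assumptions.

```python
def joint_scores(user_vae_scores, item_vae_scores):
    """Combine the two VAEs' predicted preference scores for one user.

    Averaging is an assumed choice of combination; the paper's actual
    joint-reconstruction rule may differ.
    """
    return [(u + v) / 2.0 for u, v in zip(user_vae_scores, item_vae_scores)]


def hinge_pairwise_loss(scores, positives, negatives, margin=1.0):
    """Hinge-based pairwise ranking loss over (positive, negative) item pairs.

    Penalizes each pair where a positive (observed) item is not scored at
    least `margin` higher than a negative (unobserved) item.
    """
    total = 0.0
    for i in positives:
        for j in negatives:
            total += max(0.0, margin - (scores[i] - scores[j]))
    return total / (len(positives) * len(negatives))


# Example: one user, item 0 observed (positive), items 1 and 2 unobserved.
s = joint_scores([0.9, 0.2, 0.1], [0.7, 0.4, 0.1])  # -> [0.8, 0.3, 0.1]
loss = hinge_pairwise_loss(s, positives=[0], negatives=[1, 2])
```

In the pairwise view, the loss is zero only once every observed item outranks every sampled unobserved item by the margin, which is what specializes the objective for top-k recommendation with implicit feedback.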
Related papers
- GUIDE-VAE: Advancing Data Generation with User Information and Pattern Dictionaries [0.0]
This paper introduces GUIDE-VAE, a novel conditional generative model that leverages user embeddings to generate user-guided data.
The proposed GUIDE-VAE was evaluated on a multi-user smart meter dataset characterized by substantial data imbalance across users.
arXiv Detail & Related papers (2024-11-06T14:11:46Z)
- GaVaMoE: Gaussian-Variational Gated Mixture of Experts for Explainable Recommendation [55.769720670731516]
GaVaMoE is a novel framework for explainable recommendation.
It generates tailored explanations for specific user types and preferences.
It exhibits robust performance in scenarios with sparse user-item interactions.
arXiv Detail & Related papers (2024-10-15T17:59:30Z)
- Personalized Federated Collaborative Filtering: A Variational AutoEncoder Approach [49.63614966954833]
Federated Collaborative Filtering (FedCF) is an emerging field focused on developing a new recommendation framework while preserving privacy.
This paper proposes a novel personalized FedCF method that encodes users' personalized information into a latent variable and a neural model simultaneously.
To effectively train the proposed framework, we model the problem as a specialized Variational AutoEncoder (VAE) task by integrating user interaction vector reconstruction with missing value prediction.
arXiv Detail & Related papers (2024-08-16T05:49:14Z)
- A Large Language Model Enhanced Sequential Recommender for Joint Video and Comment Recommendation [77.42486522565295]
We propose a novel recommendation approach called LSVCR to jointly conduct personalized video and comment recommendation.
Our approach consists of two key components: a sequential recommendation (SR) model and a supplemental large language model (LLM) recommender.
In particular, we achieve a significant overall gain of 4.13% in comment watch time.
arXiv Detail & Related papers (2024-03-20T13:14:29Z)
- Neural Graph Collaborative Filtering Using Variational Inference [19.80976833118502]
We introduce graph variational embedding collaborative filtering (GVECF) as a novel framework to incorporate representations learned through a variational graph auto-encoder.
Our proposed method achieves up to a 13.78% improvement in recall on the test data.
arXiv Detail & Related papers (2023-11-20T15:01:33Z)
- ContrastVAE: Contrastive Variational AutoEncoder for Sequential Recommendation [58.02630582309427]
We propose to incorporate contrastive learning into the framework of Variational AutoEncoders.
We introduce ContrastELBO, a novel training objective that extends the conventional single-view ELBO to the two-view case.
We also propose ContrastVAE, a two-branched VAE model with contrastive regularization as an embodiment of ContrastELBO for sequential recommendation.
arXiv Detail & Related papers (2022-08-27T03:35:00Z) - Probabilistic and Variational Recommendation Denoising [56.879165033014026]
Learning from implicit feedback is one of the most common cases in the application of recommender systems.
We propose probabilistic and variational recommendation denoising for implicit feedback.
We employ the proposed DPI and DVAE on four state-of-the-art recommendation models and conduct experiments on three datasets.
arXiv Detail & Related papers (2021-05-20T08:59:44Z) - Set2setRank: Collaborative Set to Set Ranking for Implicit Feedback
based Recommendation [59.183016033308014]
In this paper, we explore the unique characteristics of implicit feedback and propose the Set2setRank framework for recommendation.
Our proposed framework is model-agnostic and can be easily applied to most recommendation prediction approaches.
arXiv Detail & Related papers (2021-05-16T08:06:22Z) - Dual-embedding based Neural Collaborative Filtering for Recommender
Systems [0.7949579654743338]
We propose a general collaborative filtering framework named DNCF, short for Dual-embedding based Neural Collaborative Filtering.
In addition to learning the primitive embedding for a user (an item), we introduce an additional embedding from the perspective of the interacted items (users) to augment the user (item) representation.
arXiv Detail & Related papers (2021-02-04T11:32:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.