ContrastVAE: Contrastive Variational AutoEncoder for Sequential
Recommendation
- URL: http://arxiv.org/abs/2209.00456v1
- Date: Sat, 27 Aug 2022 03:35:00 GMT
- Title: ContrastVAE: Contrastive Variational AutoEncoder for Sequential
Recommendation
- Authors: Yu Wang, Hengrui Zhang, Zhiwei Liu, Liangwei Yang, Philip S. Yu
- Abstract summary: We propose to incorporate contrastive learning into the framework of Variational AutoEncoders.
We introduce ContrastELBO, a novel training objective that extends the conventional single-view ELBO to the two-view case.
We also propose ContrastVAE, a two-branched VAE model with contrastive regularization as an embodiment of ContrastELBO for sequential recommendation.
- Score: 58.02630582309427
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Aiming at exploiting the rich information in user behaviour sequences,
sequential recommendation has been widely adopted in real-world recommender
systems. However, current methods suffer from the following issues: 1) sparsity
of user-item interactions, 2) uncertainty of sequential records, 3) long-tail
items. In this paper, we propose to incorporate contrastive learning into the
framework of Variational AutoEncoders to address these challenges
simultaneously. Firstly, we introduce ContrastELBO, a novel training objective
that extends the conventional single-view ELBO to the two-view case and
theoretically builds a connection between VAE and contrastive learning from a
two-view perspective. Then we propose Contrastive Variational AutoEncoder
(ContrastVAE in short), a two-branched VAE model with contrastive
regularization as an embodiment of ContrastELBO for sequential recommendation.
We further introduce two simple yet effective augmentation strategies named
model augmentation and variational augmentation to create a second view of a
sequence, thus making contrastive learning possible. Experiments on four
benchmark datasets demonstrate the effectiveness of ContrastVAE and the
proposed augmentation methods. Code is available at
https://github.com/YuWang-1024/ContrastVAE
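The two-view objective described above can be sketched in code. This is a minimal illustrative sketch, not the authors' implementation: it assumes diagonal-Gaussian posteriors per branch, uses an InfoNCE-style term as the contrastive regularizer, and all function names, the temperature, and the weight `lam` are assumptions for illustration.

```python
import math

def kl_to_standard_normal(mu, log_var):
    # KL( N(mu, diag(exp(log_var))) || N(0, I) ) for one sequence,
    # summed over latent dimensions (closed form for diagonal Gaussians).
    return 0.5 * sum(math.exp(lv) + m * m - 1.0 - lv
                     for m, lv in zip(mu, log_var))

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(z1, z2, temperature=0.5):
    # Contrastive regularizer: the two latent views of the same sequence
    # (z1[i], z2[i]) are a positive pair; other sequences in the batch
    # serve as negatives. Log-sum-exp is stabilized with the max trick.
    loss = 0.0
    for i in range(len(z1)):
        logits = [cosine(z1[i], z2[j]) / temperature for j in range(len(z2))]
        m = max(logits)
        log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
        loss += -(logits[i] - log_denom)
    return loss / len(z1)

def contrast_elbo_loss(recon1, recon2, mu1, lv1, mu2, lv2, z1, z2, lam=0.1):
    # Two-view objective: reconstruction + KL for each branch, plus a
    # weighted contrastive term between the two batches of latent views.
    kl = sum(kl_to_standard_normal(m, l) for m, l in zip(mu1, lv1))
    kl += sum(kl_to_standard_normal(m, l) for m, l in zip(mu2, lv2))
    return recon1 + recon2 + kl + lam * info_nce(z1, z2)
```

In this reading, variational augmentation amounts to obtaining the second latent view by perturbing the reparameterized sample, while model augmentation would obtain it from a second stochastic pass through the encoder; the loss above treats both cases identically.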
Related papers
- Enhancing Graph Contrastive Learning with Reliable and Informative Augmentation for Recommendation [84.45144851024257]
CoGCL aims to enhance graph contrastive learning by constructing contrastive views with stronger collaborative information via discrete codes.
We introduce a multi-level vector quantizer in an end-to-end manner to quantize user and item representations into discrete codes.
For neighborhood structure, we propose virtual neighbor augmentation by treating discrete codes as virtual neighbors.
Regarding semantic relevance, we identify similar users/items based on shared discrete codes and interaction targets to generate the semantically relevant view.
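As a generic illustration of the discrete-code idea (not CoGCL's actual multi-level, end-to-end-trained quantizer), a nearest-neighbor quantizer and shared-code grouping can be sketched as follows; both function names are illustrative assumptions.

```python
def quantize(vec, codebook):
    # Nearest-neighbor vector quantization: map a continuous embedding to
    # the index of its closest codebook entry (squared Euclidean distance).
    best, best_d = 0, float("inf")
    for idx, code in enumerate(codebook):
        d = sum((a - b) ** 2 for a, b in zip(vec, code))
        if d < best_d:
            best, best_d = idx, d
    return best

def shared_code_neighbors(embeddings, codebook):
    # Group users/items that map to the same discrete code; members of a
    # group can then serve as "virtual neighbors" of one another.
    groups = {}
    for uid, vec in embeddings.items():
        groups.setdefault(quantize(vec, codebook), []).append(uid)
    return groups
```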
arXiv Detail & Related papers (2024-09-09T14:04:17Z)
- End-to-End Learnable Item Tokenization for Generative Recommendation [51.82768744368208]
We propose ETEGRec, a novel End-To-End Generative Recommender by seamlessly integrating item tokenization and generative recommendation.
Our framework is developed based on the dual encoder-decoder architecture, which consists of an item tokenizer and a generative recommender.
arXiv Detail & Related papers (2024-09-09T12:11:53Z)
- Diffusion-based Contrastive Learning for Sequential Recommendation [6.3482831836623355]
We propose a Context-aware Diffusion-based Contrastive Learning for Sequential Recommendation, named CaDiRec.
CaDiRec employs a context-aware diffusion model to generate alternative items for the given positions within a sequence.
We train the entire framework in an end-to-end manner, with shared item embeddings between the diffusion model and the recommendation model.
arXiv Detail & Related papers (2024-05-15T14:20:37Z)
- Towards Universal Sequence Representation Learning for Recommender Systems [98.02154164251846]
We present a novel universal sequence representation learning approach, named UniSRec.
The proposed approach utilizes the associated description text of items to learn transferable representations across different recommendation scenarios.
Our approach can be effectively transferred to new recommendation domains or platforms in a parameter-efficient way.
arXiv Detail & Related papers (2022-06-13T07:21:56Z)
- Improving Contrastive Learning with Model Augmentation [123.05700988581806]
Sequential recommendation aims at predicting the next items in user behavior, which can be solved by characterizing item relationships in sequences.
Due to the data sparsity and noise issues in sequences, a new self-supervised learning (SSL) paradigm is proposed to improve the performance.
arXiv Detail & Related papers (2022-03-25T06:12:58Z)
- Contrastively Disentangled Sequential Variational Autoencoder [20.75922928324671]
We propose a novel sequence representation learning method, named Contrastively Disentangled Sequential Variational Autoencoder (C-DSVAE)
We use a novel evidence lower bound which maximizes the mutual information between the input and the latent factors, while penalizing the mutual information between the static and dynamic factors.
Our experiments show that C-DSVAE significantly outperforms the previous state-of-the-art methods on multiple metrics.
arXiv Detail & Related papers (2021-10-22T23:00:32Z)
- Contrastive Self-supervised Sequential Recommendation with Robust Augmentation [101.25762166231904]
Sequential recommendation describes a set of techniques to model dynamic user behavior in order to predict future interactions in sequential user data.
Old and new issues remain, including data sparsity and noisy data.
We propose Contrastive Self-Supervised Learning for sequential Recommendation (CoSeRec)
arXiv Detail & Related papers (2021-08-14T07:15:25Z)
- Adversarial and Contrastive Variational Autoencoder for Sequential Recommendation [25.37244686572865]
We propose a novel method called Adversarial and Contrastive Variational Autoencoder (ACVAE) for sequential recommendation.
We first introduce the adversarial training for sequence generation under the Adversarial Variational Bayes framework, which enables our model to generate high-quality latent variables.
Besides, when encoding the sequence, we apply a recurrent and convolutional structure to capture global and local relationships in the sequence.
arXiv Detail & Related papers (2021-03-19T09:01:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.