Contrastive Self-supervised Sequential Recommendation with Robust
Augmentation
- URL: http://arxiv.org/abs/2108.06479v1
- Date: Sat, 14 Aug 2021 07:15:25 GMT
- Title: Contrastive Self-supervised Sequential Recommendation with Robust
Augmentation
- Authors: Zhiwei Liu, Yongjun Chen, Jia Li, Philip S. Yu, Julian McAuley,
Caiming Xiong
- Abstract summary: Sequential Recommendation describes a set of techniques to model dynamic user behavior in order to predict future interactions in sequential user data.
Old and new issues remain, including data sparsity and noisy data.
We propose Contrastive Self-Supervised Learning for sequential Recommendation (CoSeRec).
- Score: 101.25762166231904
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sequential Recommendation describes a set of techniques to model dynamic user
behavior in order to predict future interactions in sequential user data. At
their core, such approaches model transition probabilities between items in a
sequence, whether through Markov chains, recurrent networks, or more recently,
Transformers. However, both old and new issues remain, including data sparsity
and noisy data; such issues can impair performance, especially in complex,
parameter-hungry models. In this paper, we investigate the application of
contrastive Self-Supervised Learning (SSL) to sequential recommendation, as
a way to alleviate some of these issues. Contrastive SSL constructs
augmentations from unlabelled instances, where agreements among positive pairs
are maximized. It is challenging to devise a contrastive SSL framework for
sequential recommendation due to the discrete nature of items, correlations among
items, and the skewness of length distributions. To this end, we propose a novel
framework, Contrastive Self-supervised Learning for sequential Recommendation
(CoSeRec). We introduce two informative augmentation operators leveraging item
correlations to create high-quality views for contrastive learning.
Experimental results on three real-world datasets demonstrate the effectiveness
of the proposed method in improving model performance and robustness
against sparse and noisy data. Our implementation is available online at
https://github.com/YChen1993/CoSeRec
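The paper's exact operators are not reproduced here, but the recipe the abstract describes (augment each sequence twice, encode both views, maximize their agreement) can be sketched as follows. The substitute/insert operators driven by a precomputed item-correlation table and the InfoNCE loss below are illustrative assumptions, not CoSeRec's actual implementation:

    import random
    import torch
    import torch.nn.functional as F

    def substitute(seq, correlated, rate=0.2):
        # Hypothetical "informative" operator: replace a few items with
        # correlated ones, keeping the augmented view semantically close.
        seq = list(seq)
        for i in random.sample(range(len(seq)), k=max(1, int(rate * len(seq)))):
            if seq[i] in correlated:
                seq[i] = random.choice(correlated[seq[i]])
        return seq

    def insert(seq, correlated, rate=0.2):
        # Hypothetical operator: insert items correlated with existing ones,
        # lengthening short sequences without breaking their semantics.
        out = []
        for item in seq:
            out.append(item)
            if item in correlated and random.random() < rate:
                out.append(random.choice(correlated[item]))
        return out

    def info_nce(z1, z2, temperature=0.1):
        # Agreement among positive pairs is maximized: the two views of the
        # same sequence sit on the diagonal of the similarity matrix.
        z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
        logits = z1 @ z2.t() / temperature          # (B, B) similarities
        labels = torch.arange(z1.size(0), device=z1.device)
        return F.cross_entropy(logits, labels)

In training, this contrastive term would presumably be added to the usual next-item prediction loss, with each augmented view encoded by the sequential model (e.g. a Transformer).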
Related papers
- Graph Masked Autoencoder for Sequential Recommendation [10.319298705782058]
We propose a Graph Masked AutoEncoder-enhanced sequential Recommender system (MAERec) that adaptively and dynamically distills global item transitional information for self-supervised augmentation.
Our method significantly outperforms state-of-the-art baselines and learns representations that are more robust to data noise and sparsity.
arXiv Detail & Related papers (2023-05-08T10:57:56Z)
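The summary gives the idea but not the mechanics; a minimal sketch of masked autoencoding on an item transition graph might look like the following, where the encoder/decoder GNN modules and the uniform edge masking are assumptions (MAERec's masking is described as adaptive):

    import torch
    import torch.nn.functional as F

    def graph_mae_loss(item_emb, edge_index, encoder, decoder, mask_rate=0.3):
        # Hide a random subset of transition edges, encode the visible graph,
        # then score the held-out edges against sampled negatives.
        n_edges = edge_index.size(1)
        perm = torch.randperm(n_edges)
        n_mask = int(mask_rate * n_edges)
        masked = edge_index[:, perm[:n_mask]]
        visible = edge_index[:, perm[n_mask:]]
        h = decoder(encoder(item_emb, visible))      # node representations
        src, dst = masked[0], masked[1]
        neg_dst = torch.randint(0, item_emb.size(0), (n_mask,))
        pos = (h[src] * h[dst]).sum(-1)              # reconstruct real edges
        neg = (h[src] * h[neg_dst]).sum(-1)          # reject corrupted edges
        return F.binary_cross_entropy_with_logits(
            torch.cat([pos, neg]),
            torch.cat([torch.ones_like(pos), torch.zeros_like(neg)]))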
- Conditional Denoising Diffusion for Sequential Recommendation [62.127862728308045]
Two prominent generative models, Generative Adversarial Networks (GANs) and Variational AutoEncoders (VAEs), have been applied to sequential recommendation.
GANs suffer from unstable optimization, while VAEs are prone to posterior collapse and over-smoothed generations.
We present a conditional denoising diffusion model, which includes a sequence encoder, a cross-attentive denoising decoder, and a step-wise diffuser.
arXiv Detail & Related papers (2023-04-22T15:32:59Z)
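As a rough sketch of one step-wise denoising training step (the eps_model with cross-attentive conditioning on the sequence encoding and the alpha_bar cumulative noise schedule are assumptions; the paper's exact parameterization may differ):

    import torch
    import torch.nn.functional as F

    def diffusion_step_loss(x0, cond, eps_model, alpha_bar):
        # Forward-diffuse the target item embedding x0, then train eps_model
        # to recover the injected noise given the sequence encoding `cond`.
        t = torch.randint(0, alpha_bar.size(0), (x0.size(0),), device=x0.device)
        a = alpha_bar[t].unsqueeze(-1)               # cumulative schedule at t
        eps = torch.randn_like(x0)
        xt = a.sqrt() * x0 + (1.0 - a).sqrt() * eps  # sample q(x_t | x_0)
        return F.mse_loss(eps_model(xt, t, cond), eps)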
- GUESR: A Global Unsupervised Data-Enhancement with Bucket-Cluster Sampling for Sequential Recommendation [58.6450834556133]
We propose graph contrastive learning to enhance item representations with complex associations from the global view.
We extend the CapsNet module with a carefully designed target-attention mechanism to derive users' dynamic preferences.
Our proposed GUESR not only achieves significant improvements but can also be regarded as a general enhancement strategy.
arXiv Detail & Related papers (2023-03-01T05:46:36Z)
- Improving Contrastive Learning with Model Augmentation [123.05700988581806]
Sequential recommendation aims to predict the next items in user behavior, which can be addressed by characterizing item relationships in sequences.
Due to data sparsity and noise issues in sequences, a new self-supervised learning (SSL) paradigm is proposed to improve performance.
arXiv Detail & Related papers (2022-03-25T06:12:58Z)
- Sequential Recommendation via Stochastic Self-Attention [68.52192964559829]
Transformer-based approaches embed items as vectors and use dot-product self-attention to measure the relationship between items.
We propose a novel STOchastic Self-Attention (STOSA) model to overcome these issues.
We devise a novel Wasserstein self-attention module to characterize item-item position-wise relationships in sequences.
arXiv Detail & Related papers (2022-01-16T12:38:45Z)
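Treating items as Gaussian distributions rather than points, a minimal sketch of attention scored by negative squared 2-Wasserstein distance between diagonal Gaussians (STOSA's exact parameterization, masking, and scaling are not reproduced here):

    import torch

    def wasserstein2_sq(mu_q, sig_q, mu_k, sig_k):
        # Squared 2-Wasserstein distance between diagonal Gaussians:
        # W2^2 = ||mu1 - mu2||^2 + ||sigma1 - sigma2||^2.
        # mu_*, sig_*: (B, L, D) means and positive std-devs per position.
        d_mu = ((mu_q.unsqueeze(2) - mu_k.unsqueeze(1)) ** 2).sum(-1)
        d_sig = ((sig_q.unsqueeze(2) - sig_k.unsqueeze(1)) ** 2).sum(-1)
        return d_mu + d_sig                          # (B, L, L) distances

    def stochastic_attention(mu_q, sig_q, mu_k, sig_k, mu_v, sig_v):
        # Closer distributions attend more; uncertainty is propagated along
        # with the mean (a sketch of the idea, not STOSA's exact module).
        attn = torch.softmax(-wasserstein2_sq(mu_q, sig_q, mu_k, sig_k), dim=-1)
        return attn @ mu_v, attn @ sig_v

Unlike a dot product, this distance satisfies the triangle inequality, a property that point-based similarity scores lack.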
- Adversarial and Contrastive Variational Autoencoder for Sequential Recommendation [25.37244686572865]
We propose a novel method called Adversarial and Contrastive Variational Autoencoder (ACVAE) for sequential recommendation.
We first introduce adversarial training for sequence generation under the Adversarial Variational Bayes framework, which enables our model to generate high-quality latent variables.
In addition, when encoding the sequence, we apply a recurrent and convolutional structure to capture global and local relationships in the sequence.
arXiv Detail & Related papers (2021-03-19T09:01:14Z)
- S^3-Rec: Self-Supervised Learning for Sequential Recommendation with Mutual Information Maximization [104.87483578308526]
We propose the model S^3-Rec, which stands for Self-Supervised learning for Sequential Recommendation.
For our task, we devise four auxiliary self-supervised objectives to learn the correlations among attribute, item, subsequence, and sequence.
Extensive experiments conducted on six real-world datasets demonstrate the superiority of our proposed method over existing state-of-the-art methods.
arXiv Detail & Related papers (2020-08-18T11:44:10Z)
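A minimal sketch of one such objective in the mutual-information-maximization style, here a bilinear critic tying a masked sequence state to its item's attribute embedding (the bilinear form and single negative sample are assumptions about the general recipe, not S^3-Rec's exact losses):

    import torch
    import torch.nn.functional as F

    def mim_loss(state, pos_attr, neg_attr, W):
        # Noise-contrastive MI bound with a bilinear critic f(s, a) = s^T W a:
        # pull each masked-position state toward its true attribute embedding
        # and away from a sampled negative. Shapes: (B, D); W is (D, D).
        pos = (state @ W * pos_attr).sum(-1)
        neg = (state @ W * neg_attr).sum(-1)
        return F.binary_cross_entropy_with_logits(
            torch.cat([pos, neg]),
            torch.cat([torch.ones_like(pos), torch.zeros_like(neg)]))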