Improving Contrastive Learning with Model Augmentation
- URL: http://arxiv.org/abs/2203.15508v1
- Date: Fri, 25 Mar 2022 06:12:58 GMT
- Authors: Zhiwei Liu, Yongjun Chen, Jia Li, Man Luo, Philip S. Yu, Caiming Xiong
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sequential recommendation aims at predicting the next items in user
behavior, which can be addressed by characterizing item relationships in
sequences. Because of data sparsity and noise in sequences, a new
self-supervised learning (SSL) paradigm is proposed to improve performance,
which employs contrastive learning between positive and negative views of
sequences.
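The contrastive objective described above is commonly instantiated as an InfoNCE loss that pulls two views of the same sequence together while pushing views of other sequences apart. The following is a generic toy sketch in numpy, not the paper's implementation; the function name `info_nce` and all hyperparameters (e.g. `temperature=0.1`) are illustrative assumptions.

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """Generic InfoNCE contrastive loss for a single anchor embedding.

    anchor, positive: 1-D embeddings (two views of the same sequence).
    negatives: 2-D array, one row per view of an unrelated sequence.
    """
    def cos(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / temperature
    logits -= logits.max()                    # numerical stability
    # Softmax cross-entropy with the positive pair as the target class.
    log_probs = logits - np.log(np.exp(logits).sum())
    return -log_probs[0]

rng = np.random.default_rng(0)
anchor = rng.normal(size=64)
positive = anchor + 0.05 * rng.normal(size=64)  # slightly perturbed view
negatives = rng.normal(size=(8, 64))            # unrelated sequences
loss = info_nce(anchor, positive, negatives)
print(loss)  # small, since the positive view is nearly identical
```

The loss is low when the two positive views encode to nearby vectors and high when negatives are as similar to the anchor as the positive is.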
However, existing methods all construct views by applying augmentation at the
data level. We argue that 1) optimal data augmentation methods are hard to
devise, 2) data augmentation can destroy sequential correlations, and 3) data
augmentation fails to incorporate comprehensive self-supervised signals.
Therefore, we investigate the possibility of model augmentation to construct
view pairs. We propose three levels of model augmentation methods: neuron
masking, layer dropping, and encoder complementing.
This work opens up a novel direction in constructing views for contrastive
SSL. Experiments verify the efficacy of model augmentation for SSL in
sequential recommendation. Code is
available\footnote{\url{https://github.com/salesforce/SRMA}}.
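To make the three augmentation levels concrete, here is a toy numpy sketch of how views might be generated inside the model rather than from the data. This is not the authors' code (see their repository above); the encoder, the function `encode`, and the parameters `neuron_mask_p` / `layer_drop_p` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stacked encoder: each "layer" is a weight matrix applied with tanh.
layers = [rng.normal(scale=0.1, size=(32, 32)) for _ in range(4)]

def encode(x, neuron_mask_p=0.0, layer_drop_p=0.0):
    """Forward pass with two in-model augmentations.

    neuron_mask_p: probability of zeroing each hidden unit (neuron masking).
    layer_drop_p:  probability of skipping each layer (layer dropping).
    """
    h = x
    for w in layers:
        if rng.random() < layer_drop_p:
            continue                          # layer dropping: skip this layer
        h = np.tanh(h @ w)
        mask = rng.random(h.shape) >= neuron_mask_p
        h = h * mask                          # neuron masking: zero some units
    return h

x = rng.normal(size=32)                       # embedded user sequence (toy)
# Two stochastic passes through the SAME model yield a positive view pair.
view_a = encode(x, neuron_mask_p=0.2, layer_drop_p=0.25)
view_b = encode(x, neuron_mask_p=0.2, layer_drop_p=0.25)

# Encoder complementing: a second, differently parameterized encoder
# produces an additional positive view of the same input.
aux_layers = [rng.normal(scale=0.1, size=(32, 32)) for _ in range(2)]
view_c = x
for w in aux_layers:
    view_c = np.tanh(view_c @ w)
```

Because the randomness lives in the model's forward pass, the input sequence itself is never corrupted, which is the contrast with data-level augmentation drawn in the abstract.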
Related papers
- Combining Denoising Autoencoders with Contrastive Learning to fine-tune Transformer Models [0.0]
This work proposes a three-phase technique to adjust a base model for a classification task.
We adapt the model's signal to the data distribution by performing further training with a Denoising Autoencoder (DAE).
In addition, we introduce a new data augmentation approach for Supervised Contrastive Learning to correct for unbalanced datasets.
arXiv Detail & Related papers (2024-05-23T11:08:35Z)
- Graph Masked Autoencoder for Sequential Recommendation [10.319298705782058]
We propose a Graph Masked AutoEncoder-enhanced sequential Recommender system (MAERec) that adaptively and dynamically distills global item transitional information for self-supervised augmentation.
Our method significantly outperforms state-of-the-art baseline models and can learn more accurate representations against data noise and sparsity.
arXiv Detail & Related papers (2023-05-08T10:57:56Z)
- GUESR: A Global Unsupervised Data-Enhancement with Bucket-Cluster Sampling for Sequential Recommendation [58.6450834556133]
We propose graph contrastive learning to enhance item representations with complex associations from the global view.
We extend the CapsNet module with the elaborately introduced target-attention mechanism to derive users' dynamic preferences.
Our proposed GUESR could not only achieve significant improvements but also could be regarded as a general enhancement strategy.
arXiv Detail & Related papers (2023-03-01T05:46:36Z)
- Mutual Exclusivity Training and Primitive Augmentation to Induce Compositionality [84.94877848357896]
Recent datasets expose the lack of systematic generalization ability in standard sequence-to-sequence models.
We analyze this behavior of seq2seq models and identify two contributing factors: a lack of mutual exclusivity bias and the tendency to memorize whole examples.
We show substantial empirical improvements using standard sequence-to-sequence models on two widely-used compositionality datasets.
arXiv Detail & Related papers (2022-11-28T17:36:41Z)
- ContrastVAE: Contrastive Variational AutoEncoder for Sequential Recommendation [58.02630582309427]
We propose to incorporate contrastive learning into the framework of Variational AutoEncoders.
We introduce ContrastELBO, a novel training objective that extends the conventional single-view ELBO to the two-view case.
We also propose ContrastVAE, a two-branched VAE model with contrastive regularization as an embodiment of ContrastELBO for sequential recommendation.
arXiv Detail & Related papers (2022-08-27T03:35:00Z)
- Learnable Model Augmentation Self-Supervised Learning for Sequential Recommendation [36.81597777126902]
We propose Learnable Model Augmentation self-supervised learning for sequential Recommendation (LMA4Rec).
LMA4Rec first takes model augmentation as a supplementary method for data augmentation to generate views.
Next, self-supervised learning is used between the contrastive views to extract self-supervised signals from an original sequence.
arXiv Detail & Related papers (2022-04-21T14:30:56Z)
- Contrastive Self-supervised Sequential Recommendation with Robust Augmentation [101.25762166231904]
Sequential Recommendation describes a set of techniques to model dynamic user behavior in order to predict future interactions in sequential user data.
Old and new issues remain, including data sparsity and noisy data.
We propose Contrastive Self-Supervised Learning for sequential Recommendation (CoSeRec).
arXiv Detail & Related papers (2021-08-14T07:15:25Z)
- S^3-Rec: Self-Supervised Learning for Sequential Recommendation with Mutual Information Maximization [104.87483578308526]
We propose the model S3-Rec, which stands for Self-Supervised learning for Sequential Recommendation.
For our task, we devise four auxiliary self-supervised objectives to learn the correlations among attributes, items, subsequences, and sequences.
Extensive experiments conducted on six real-world datasets demonstrate the superiority of our proposed method over existing state-of-the-art methods.
arXiv Detail & Related papers (2020-08-18T11:44:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.