Multi-Sample based Contrastive Loss for Top-k Recommendation
- URL: http://arxiv.org/abs/2109.00217v1
- Date: Wed, 1 Sep 2021 07:32:13 GMT
- Title: Multi-Sample based Contrastive Loss for Top-k Recommendation
- Authors: Hao Tang, Guoshuai Zhao, Yuxia Wu, Xueming Qian
- Abstract summary: The Contrastive Loss (CL) is the key component of contrastive learning, which has received increasing attention recently.
We propose a new data augmentation method by using multiple positive items (or samples) simultaneously with the CL loss function.
- Score: 33.02297142668278
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Top-k recommendation is a fundamental task in recommender systems
and is generally learned by comparing positive and negative pairs. The
Contrastive Loss (CL) is the key component of contrastive learning, which has
received increasing attention recently, and we find it well suited for top-k
recommendation. However, CL treats positive and negative samples as equally
important, which is problematic. On the one hand, CL faces an imbalance between
one positive sample and many negative samples. On the other hand, positive
items are so few in sparser datasets that their importance should be
emphasized. Moreover, the sparse positive items are still not sufficiently
exploited in recommendation. We therefore propose a new data augmentation
method that uses multiple positive items (or samples) simultaneously with the
CL loss function, and on this basis we propose a Multi-Sample based Contrastive
Loss (MSCL) function that addresses both problems by rebalancing the importance
of positive and negative samples and by data augmentation. Built on the graph
convolutional network (GCN) method, experimental results demonstrate the
state-of-the-art performance of MSCL. The proposed MSCL is simple and can be
applied to many methods. We will release our code on GitHub upon acceptance.
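The abstract states the two ideas behind MSCL (rebalancing positives vs. negatives, and reusing multiple positive items per user) only in words. The following minimal PyTorch sketch illustrates that general recipe under our own assumptions; it is not the authors' released implementation, and the function name, tensor shapes, `temperature`, and `neg_weight` are all illustrative choices rather than the paper's exact formulation.

```python
import math

import torch
import torch.nn.functional as F


def multi_positive_contrastive_loss(user_emb, pos_emb, neg_emb,
                                    temperature=0.1, neg_weight=0.1):
    """Illustrative multi-positive contrastive loss.

    user_emb: (B, d); pos_emb: (B, P, d); neg_emb: (B, N, d).
    """
    u = F.normalize(user_emb, dim=-1).unsqueeze(1)   # (B, 1, d)
    p = F.normalize(pos_emb, dim=-1)                 # (B, P, d)
    n = F.normalize(neg_emb, dim=-1)                 # (B, N, d)

    pos_scores = (u * p).sum(-1) / temperature       # (B, P) scaled cosine similarities
    neg_scores = (u * n).sum(-1) / temperature       # (B, N)

    # Down-weight the many negatives in the softmax denominator
    # (log(w * sum_j exp(s_j)) = logsumexp(s) + log(w)); one simple way to ease
    # the 1-positive-vs-N-negatives imbalance described in the abstract.
    neg_term = torch.logsumexp(neg_scores, dim=-1, keepdim=True) + math.log(neg_weight)

    # InfoNCE-style term for each of the P positive items against the shared
    # negative set; averaging over P reuses every observed positive sample.
    denom = torch.logaddexp(pos_scores, neg_term)    # (B, P)
    return (denom - pos_scores).mean()
```

Averaging the per-positive terms treats each observed interaction as an additional positive view (the "data augmentation" role of multiple positives), while `neg_weight` is one possible way to rebalance positives against negatives; the weighting actually used in MSCL may differ.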
Related papers
- SimCE: Simplifying Cross-Entropy Loss for Collaborative Filtering [47.81610130269399]
We propose a Sampled Softmax Cross-Entropy loss (SSM) that compares one positive sample with multiple negative samples, leading to better performance.
We also introduce a Simplified Sampled Softmax Cross-Entropy Loss (SimCE), which simplifies SSM using its upper bound.
Our validation on 12 benchmark datasets, using both MF and LightGCN backbones, shows that SimCE significantly outperforms both BPR and SSM.
arXiv Detail & Related papers (2024-06-23T17:24:07Z) - Multi-Margin Cosine Loss: Proposal and Application in Recommender Systems [0.0]
Collaborative filtering-based deep learning techniques have regained popularity due to their straightforward nature.
These systems consist of three main components: an interaction module, a loss function, and a negative sampling strategy.
The proposed Multi-Margin Cosine Loss (MMCL) addresses these challenges by introducing multiple margins and varying weights for negative samples.
arXiv Detail & Related papers (2024-05-07T18:58:32Z) - Decoupled Contrastive Learning for Long-Tailed Recognition [58.255966442426484]
Supervised Contrastive Loss (SCL) is popular in visual representation learning.
In the scenario of long-tailed recognition, where the number of samples in each class is imbalanced, treating the two types of positive samples equally leads to biased optimization of the intra-category distance.
We propose a patch-based self distillation to transfer knowledge from head to tail classes to relieve the under-representation of tail classes.
arXiv Detail & Related papers (2024-03-10T09:46:28Z) - Contrastive Learning with Negative Sampling Correction [52.990001829393506]
We propose a novel contrastive learning method named Positive-Unlabeled Contrastive Learning (PUCL).
PUCL treats the generated negative samples as unlabeled samples and uses information from positive samples to correct bias in contrastive loss.
PUCL can be applied to general contrastive learning problems and outperforms state-of-the-art methods on various image and graph classification tasks.
arXiv Detail & Related papers (2024-01-13T11:18:18Z) - Supervised Advantage Actor-Critic for Recommender Systems [76.7066594130961]
We propose a negative sampling strategy for training the RL component and combine it with supervised sequential learning.
Based on sampled (negative) actions (items), we can calculate the "advantage" of a positive action over the average case.
We instantiate SNQN and SA2C with four state-of-the-art sequential recommendation models and conduct experiments on two real-world datasets.
arXiv Detail & Related papers (2021-11-05T12:51:15Z) - Debiased Graph Contrastive Learning [27.560217866753938]
We propose a novel and effective method to estimate the probability of whether each negative sample is a true negative.
Debiased Graph Contrastive Learning (DGCL) outperforms or matches previous unsupervised state-of-the-art results on several benchmarks.
arXiv Detail & Related papers (2021-10-05T13:15:59Z) - Contrastive Attraction and Contrastive Repulsion for Representation
Learning [131.72147978462348]
Contrastive learning (CL) methods learn data representations in a self-supervision manner, where the encoder contrasts each positive sample over multiple negative samples.
Recent CL methods have achieved promising results when pretrained on large-scale datasets, such as ImageNet.
We propose a doubly CL strategy that separately compares positive and negative samples within their own groups, and then proceeds with a contrast between positive and negative groups.
arXiv Detail & Related papers (2021-05-08T17:25:08Z) - Doubly Contrastive Deep Clustering [135.7001508427597]
We present a novel Doubly Contrastive Deep Clustering (DCDC) framework, which constructs contrastive loss over both sample and class views.
Specifically, for the sample view, we set the class distribution of the original sample and its augmented version as positive sample pairs.
For the class view, we build the positive and negative pairs from the sample distribution of the class.
In this way, the two contrastive losses successfully constrain the clustering results of mini-batch samples at both the sample and class levels.
arXiv Detail & Related papers (2021-03-09T15:15:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.