Self Contrastive Learning for Session-based Recommendation
- URL: http://arxiv.org/abs/2306.01266v2
- Date: Wed, 20 Dec 2023 17:01:04 GMT
- Title: Self Contrastive Learning for Session-based Recommendation
- Authors: Zhengxiang Shi, Xi Wang, Aldo Lipani
- Abstract summary: Self-Contrastive Learning (SCL) is formulated as an objective function that directly promotes a uniform distribution among item representations.
SCL consistently improves the performance of state-of-the-art models with statistical significance.
- Score: 16.69827431125858
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Session-based recommendation, which aims to predict the next item of
interest to a user based on an existing sequence of item interactions, has attracted
growing applications of Contrastive Learning (CL) with improved user and item
representations. However, these contrastive objectives: (1) serve a similar
role as the cross-entropy loss while ignoring the item representation space
optimisation; and (2) commonly require complicated modelling, including complex
positive/negative sample constructions and extra data augmentation. In this
work, we introduce Self-Contrastive Learning (SCL), which simplifies the
application of CL and enhances the performance of state-of-the-art CL-based
recommendation techniques. Specifically, SCL is formulated as an objective
function that directly promotes a uniform distribution among item
representations and efficiently replaces all the existing contrastive objective
components of state-of-the-art models. Unlike previous works, SCL eliminates
the need for any positive/negative sample construction or data augmentation,
leading to enhanced interpretability of the item representation space and
facilitating its extensibility to existing recommender systems. Through
experiments on three benchmark datasets, we demonstrate that SCL consistently
improves the performance of state-of-the-art models with statistical
significance. Notably, our experiments show that SCL improves the performance
of two best-performing models by 8.2% and 9.5% in P@10 (Precision) and 9.9% and
11.2% in MRR@10 (Mean Reciprocal Rank) on average across different benchmarks.
Additionally, our analysis elucidates the improvement in terms of alignment and
uniformity of representations, as well as the effectiveness of SCL with a low
computational cost.
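The abstract describes SCL only at the level of its goal: an objective that pushes item representations toward a uniform distribution and replaces the contrastive terms of existing models. Below is a minimal sketch of one way such a uniformity-promoting term could look; the log-sum-exp form, the temperature, and the use of the full item embedding table are illustrative assumptions rather than the paper's exact formulation.

```python
# Hypothetical uniformity-style self-contrastive term over item embeddings.
# The exact SCL formulation is not given in the abstract; the log-sum-exp form,
# the temperature, and the use of the whole embedding table are assumptions.
import torch
import torch.nn.functional as F


def uniformity_loss(item_emb: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """Penalise clustered item representations on the unit hypersphere."""
    z = F.normalize(item_emb, dim=-1)                 # (num_items, dim), unit norm
    sim = (z @ z.t()) / temperature                   # pairwise cosine similarities
    mask = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))        # drop self-similarities
    # A large log-sum-exp means many items sit close together; minimising it
    # spreads the representations out, i.e. promotes uniformity.
    return torch.logsumexp(sim, dim=-1).mean()


# Toy usage: add the term to the usual next-item cross-entropy loss,
# e.g. total_loss = ce_loss + alpha * uniformity_loss(item_embedding.weight).
item_emb = torch.randn(1000, 64, requires_grad=True)
uniformity_loss(item_emb).backward()
```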
Related papers
- Contrastive Learning Via Equivariant Representation [19.112460889771423]
We propose CLeVER, a novel equivariant contrastive learning framework compatible with augmentation strategies of arbitrary complexity.
Experimental results demonstrate that CLeVER effectively extracts and incorporates equivariant information from practical natural images.
arXiv Detail & Related papers (2024-06-01T01:53:51Z)
- Decoupled Contrastive Learning for Long-Tailed Recognition [58.255966442426484]
Supervised Contrastive Loss (SCL) is popular in visual representation learning.
In the scenario of long-tailed recognition, where the number of samples in each class is imbalanced, treating the two types of positive samples equally leads to biased optimization of intra-category distances.
We propose a patch-based self distillation to transfer knowledge from head to tail classes to relieve the under-representation of tail classes.
arXiv Detail & Related papers (2024-03-10T09:46:28Z)
- RecDCL: Dual Contrastive Learning for Recommendation [65.6236784430981]
We propose a dual contrastive learning recommendation framework -- RecDCL.
In RecDCL, the FCL objective is designed to eliminate redundant solutions on user-item positive pairs.
The BCL objective is utilized to generate contrastive embeddings on output vectors for enhancing the robustness of the representations.
arXiv Detail & Related papers (2024-01-28T11:51:09Z)
- In-context Learning and Gradient Descent Revisited [3.085927389171139]
We show that even untrained models achieve comparable ICL-GD similarity scores despite not exhibiting ICL.
Next, we explore a major discrepancy in the flow of information throughout the model between ICL and GD, which we term Layer Causality.
We propose a simple GD-based optimization procedure that respects layer causality, and show it improves similarity scores significantly.
arXiv Detail & Related papers (2023-11-13T21:42:38Z)
- What Constitutes Good Contrastive Learning in Time-Series Forecasting? [10.44543726728613]
Self-supervised contrastive learning (SSCL) has demonstrated remarkable improvements in representation learning across various domains.
This paper aims to conduct a comprehensive analysis of the effectiveness of various SSCL algorithms, learning strategies, model architectures, and their interplay.
We demonstrate that the end-to-end training of a Transformer model using the Mean Squared Error (MSE) loss and SSCL emerges as the most effective approach in time series forecasting.
arXiv Detail & Related papers (2023-06-21T08:05:05Z)
- ScoreCL: Augmentation-Adaptive Contrastive Learning via Score-Matching Function [14.857965612960475]
Self-supervised contrastive learning (CL) has achieved state-of-the-art performance in representation learning.
We show the generality of our method, referred to as ScoreCL, by consistently improving various CL methods.
arXiv Detail & Related papers (2023-06-07T05:59:20Z)
- PoseRAC: Pose Saliency Transformer for Repetitive Action Counting [56.34379680390869]
We introduce Pose Saliency Representation, which efficiently represents each action using only two salient poses instead of redundant frames.
We also introduce PoseRAC, which is based on this representation and achieves state-of-the-art performance.
Our lightweight model is highly efficient, requiring only 20 minutes for training on a GPU, and infers nearly 10x faster compared to previous methods.
arXiv Detail & Related papers (2023-03-15T08:51:17Z)
- Using Representation Expressiveness and Learnability to Evaluate Self-Supervised Learning Methods [61.49061000562676]
We introduce Cluster Learnability (CL) to assess learnability.
CL is measured in terms of the performance of a KNN trained to predict labels obtained by clustering the representations with K-means.
We find that CL better correlates with in-distribution model performance than other competing recent evaluation schemes.
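The Cluster Learnability description above is concrete enough to sketch: cluster the representations with K-means, then measure how well a KNN can recover those cluster labels on held-out points. The split ratio and the choices of n_clusters and n_neighbors below are assumptions for illustration, not necessarily the paper's protocol.

```python
# Illustrative sketch of the Cluster Learnability (CL) measure described above.
# Hyper-parameters (n_clusters, n_neighbors, split ratio) are assumed values.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier


def cluster_learnability(reps: np.ndarray, n_clusters: int = 10, n_neighbors: int = 5) -> float:
    # Pseudo-labels from K-means clustering of the representations.
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(reps)
    # A KNN is trained to predict those labels; its held-out accuracy is the score.
    x_tr, x_te, y_tr, y_te = train_test_split(reps, labels, test_size=0.5, random_state=0)
    return KNeighborsClassifier(n_neighbors=n_neighbors).fit(x_tr, y_tr).score(x_te, y_te)


print(cluster_learnability(np.random.randn(500, 32)))   # toy representations
```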
arXiv Detail & Related papers (2022-06-02T19:05:13Z)
- Semi-supervised Contrastive Learning with Similarity Co-calibration [72.38187308270135]
We propose a novel training strategy, termed Semi-supervised Contrastive Learning (SsCL).
SsCL combines the well-known contrastive loss from self-supervised learning with the cross-entropy loss used in semi-supervised learning.
We show that SsCL produces more discriminative representations and is beneficial to few-shot learning.
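The SsCL entry states only that a self-supervised contrastive loss is combined with a supervised cross-entropy loss; the sketch below shows one minimal way to combine the two terms. The InfoNCE form, the weighting alpha, and the temperature are assumptions, and the similarity co-calibration mechanism itself is not reproduced.

```python
# Minimal sketch of combining a contrastive (InfoNCE-style) loss on two augmented
# views with a supervised cross-entropy loss, as described for SsCL above.
# The InfoNCE form, temperature, and weighting are assumptions; the paper's
# similarity co-calibration step is not shown.
import torch
import torch.nn.functional as F


def info_nce(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = (z1 @ z2.t()) / temperature              # (B, B) view-to-view similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)           # matched views are the positives


def combined_loss(z1, z2, logits_labelled, labels, alpha: float = 1.0) -> torch.Tensor:
    # Supervised cross-entropy on labelled data + weighted contrastive term.
    return F.cross_entropy(logits_labelled, labels) + alpha * info_nce(z1, z2)


# Toy usage with random tensors standing in for encoder/classifier outputs.
B, D, C = 8, 16, 4
loss = combined_loss(torch.randn(B, D), torch.randn(B, D),
                     torch.randn(B, C), torch.randint(0, C, (B,)))
print(float(loss))
```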
arXiv Detail & Related papers (2021-05-16T09:13:56Z)
- Understanding and Achieving Efficient Robustness with Adversarial Contrastive Learning [34.97017489872795]
The Adversarial Supervised Contrastive Learning (ASCL) approach outperforms the state-of-the-art defenses by 2.6% in terms of robust accuracy.
ASCL with the proposed selection strategy gains a further 1.4% improvement while using only 42.8% of the positives and 6.3% of the negatives, compared with ASCL without a selection strategy.
arXiv Detail & Related papers (2021-01-25T11:57:52Z)
- Contrastive Clustering [57.71729650297379]
We propose Contrastive Clustering (CC), which explicitly performs instance- and cluster-level contrastive learning.
In particular, CC achieves an NMI of 0.705 (0.431) on the CIFAR-10 (CIFAR-100) dataset, which is an up to 19% (39%) performance improvement compared with the best baseline.
arXiv Detail & Related papers (2020-09-21T08:54:40Z)