SSLRec: A Self-Supervised Learning Framework for Recommendation
- URL: http://arxiv.org/abs/2308.05697v3
- Date: Tue, 30 Jan 2024 16:30:43 GMT
- Title: SSLRec: A Self-Supervised Learning Framework for Recommendation
- Authors: Xubin Ren, Lianghao Xia, Yuhao Yang, Wei Wei, Tianle Wang, Xuheng Cai
and Chao Huang
- Abstract summary: SSLRec is a novel benchmark platform that provides a standardized, flexible, and comprehensive framework for evaluating various SSL-enhanced recommenders.
Our SSLRec platform covers a comprehensive set of state-of-the-art SSL-enhanced recommendation models across different scenarios.
- Score: 22.001376300511577
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-supervised learning (SSL) has gained significant interest in recent
years as a solution to address the challenges posed by sparse and noisy data in
recommender systems. Despite the growing number of SSL algorithms designed to
provide state-of-the-art performance in various recommendation scenarios (e.g.,
graph collaborative filtering, sequential recommendation, social
recommendation, KG-enhanced recommendation), there is still a lack of unified
frameworks that integrate recommendation algorithms across different domains.
Such a framework could serve as the cornerstone for self-supervised
recommendation algorithms, unifying the validation of existing methods and
driving the design of new ones. To address this gap, we introduce SSLRec, a
novel benchmark platform that provides a standardized, flexible, and
comprehensive framework for evaluating various SSL-enhanced recommenders. The
SSLRec framework features a modular architecture that allows users to easily
evaluate state-of-the-art models and a complete set of data augmentation and
self-supervised toolkits to help create SSL recommendation models for specific
needs. Furthermore, SSLRec simplifies the process of training and evaluating
different recommendation models with consistent and fair settings. Our SSLRec
platform covers a comprehensive set of state-of-the-art SSL-enhanced
recommendation models across different scenarios, enabling researchers to
evaluate these cutting-edge models and drive further innovation in the field.
Our implemented SSLRec framework is available at the source code repository
https://github.com/HKUDS/SSLRec.
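To make the modular design concrete, the sketch below shows how a registry-plus-config pattern of this kind can keep model construction and the train/evaluate loop separate while holding settings identical across models. It is a minimal, hypothetical illustration in Python; the class and function names are invented here and are not SSLRec's actual API.

```python
# Minimal, hypothetical sketch of a modular SSL-recommendation benchmark
# (illustrative only; not SSLRec's actual API). Models register under a name,
# and one config drives training and evaluation so that every model is
# compared under identical, fair settings.

MODEL_REGISTRY = {}

def register_model(name):
    """Class decorator that adds a recommender to the shared registry."""
    def wrap(cls):
        MODEL_REGISTRY[name] = cls
        return cls
    return wrap

@register_model("toy_cf")
class ToyCFModel:
    """Stand-in recommender; real models would expose the same interface."""
    def __init__(self, config):
        self.config = config

    def train_one_epoch(self, data):
        # Real models would return (recommendation_loss, self_supervised_loss)
        # computed from the (possibly augmented) training data.
        return 0.0, 0.0

    def evaluate(self, data):
        # Real models would report ranking metrics on a held-out split.
        return {"Recall@20": 0.0, "NDCG@20": 0.0}

def run(config, data=None):
    """Uniform train/evaluate loop applied to whichever model the config names."""
    model = MODEL_REGISTRY[config["model"]](config)
    for _ in range(config["epochs"]):
        rec_loss, ssl_loss = model.train_one_epoch(data)
        joint_loss = rec_loss + config["ssl_reg"] * ssl_loss  # joint objective
    return model.evaluate(data)

if __name__ == "__main__":
    print(run({"model": "toy_cf", "epochs": 3, "ssl_reg": 0.1}))
```

In SSLRec itself, per-scenario data handlers and the augmentation/self-supervision toolkits described above would be selected through the same kind of configuration.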
Related papers
- SuperCM: Revisiting Clustering for Semi-Supervised Learning [12.324453023412142]
In this work, we propose a novel approach that explicitly incorporates the underlying clustering assumption in semi-supervised learning (SSL). Leveraging annotated data to guide the cluster centroids results in a simple end-to-end trainable deep SSL approach. We demonstrate that the proposed model improves performance over the supervised-only baseline and show that our framework can be used in conjunction with other SSL methods to further boost their performance.
arXiv Detail & Related papers (2025-06-30T13:17:08Z) - Real-Time Personalization for LLM-based Recommendation with Customized In-Context Learning [57.28766250993726]
This work explores adapting to dynamic user interests without any model updates.
Existing Large Language Model (LLM)-based recommenders often lose the in-context learning ability during recommendation tuning.
We propose RecICL, which customizes recommendation-specific in-context learning for real-time recommendations.
arXiv Detail & Related papers (2024-10-30T15:48:36Z) - Erasing the Bias: Fine-Tuning Foundation Models for Semi-Supervised Learning [4.137391543972184]
Semi-supervised learning (SSL) has witnessed remarkable progress, resulting in numerous method variations.
In this paper, we present a novel SSL approach named FineSSL that addresses the limitations of existing SSL methods by adapting pre-trained foundation models.
We demonstrate that FineSSL sets a new state of the art for SSL on multiple benchmark datasets, reduces the training cost by more than a factor of six, and can seamlessly integrate various fine-tuning and modern SSL algorithms.
arXiv Detail & Related papers (2024-05-20T03:33:12Z) - Reinforcement Learning-Guided Semi-Supervised Learning [20.599506122857328]
We propose a novel Reinforcement Learning Guided SSL method, RLGSSL, that formulates SSL as a one-armed bandit problem.
RLGSSL incorporates a carefully designed reward function that balances the use of labeled and unlabeled data to enhance generalization performance.
We demonstrate the effectiveness of RLGSSL through extensive experiments on several benchmark datasets and show that our approach achieves consistently superior performance compared to state-of-the-art SSL methods.
arXiv Detail & Related papers (2024-05-02T21:52:24Z) - A Comprehensive Survey on Self-Supervised Learning for Recommendation [19.916057705072177]
We provide a review of self-supervised learning frameworks designed for recommender systems, encompassing a thorough analysis of over 170 papers.
We elaborate on the different self-supervised learning paradigms, namely contrastive learning, generative learning, and adversarial learning, to present technical details of how SSL enhances recommender systems in various contexts (a minimal contrastive-loss sketch appears after this list).
arXiv Detail & Related papers (2024-04-04T10:45:23Z) - A Survey on Large Language Models for Recommendation [77.91673633328148]
Large Language Models (LLMs) have emerged as powerful tools in the field of Natural Language Processing (NLP).
This survey presents a taxonomy that categorizes these models into two major paradigms, namely Discriminative LLMs for Recommendation (DLLM4Rec) and Generative LLMs for Recommendation (GLLM4Rec).
arXiv Detail & Related papers (2023-05-31T13:51:26Z) - Improving Self-Supervised Learning by Characterizing Idealized Representations [155.1457170539049]
We prove necessary and sufficient conditions for representations to perform well on any task invariant to the given data augmentations.
For contrastive learning, our framework prescribes simple but significant improvements to previous methods.
For non-contrastive learning, we use our framework to derive a simple and novel objective.
arXiv Detail & Related papers (2022-09-13T18:01:03Z) - Unseen Classes at a Later Time? No Problem [17.254973125515402]
We propose a new Online-CGZSL (continual generalized zero-shot learning) setting that is more practical and flexible.
We introduce a unified feature-generative framework for CGZSL that leverages bi-directional incremental alignment to dynamically adapt to the addition of new classes, with or without labeled data, that arrive over time in any of these CGZSL settings.
arXiv Detail & Related papers (2022-03-30T17:52:16Z) - Contrastive Self-supervised Sequential Recommendation with Robust Augmentation [101.25762166231904]
Sequential Recommendation describes a set of techniques to model dynamic user behavior in order to predict future interactions in sequential user data.
Old and new issues remain, including data sparsity and noisy data.
We propose Contrastive Self-Supervised Learning for sequential Recommendation (CoSeRec).
arXiv Detail & Related papers (2021-08-14T07:15:25Z) - Self-Supervised Learning of Graph Neural Networks: A Unified Review [50.71341657322391]
Self-supervised learning is emerging as a new paradigm for making use of large amounts of unlabeled samples.
We provide a unified review of different ways of training graph neural networks (GNNs) using SSL.
Our treatment of SSL methods for GNNs sheds light on the similarities and differences of various methods, setting the stage for developing new methods and algorithms.
arXiv Detail & Related papers (2021-02-22T03:43:45Z) - SemiNLL: A Framework of Noisy-Label Learning by Semi-Supervised Learning [58.26384597768118]
SemiNLL is a versatile framework that combines sample selection (SS) strategies and semi-supervised learning (SSL) models in an end-to-end manner.
Our framework can absorb various SS strategies and SSL backbones, utilizing their power to achieve promising performance.
arXiv Detail & Related papers (2020-12-02T01:49:47Z)
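Several entries above (notably the SSL-for-recommendation survey and CoSeRec) center on contrastive self-supervision, where two augmented views of the same user, item, or interaction sequence are pulled together while the other examples in the batch act as negatives. The sketch below shows a generic InfoNCE-style loss of that kind; it is illustrative only, not code from any listed paper, and all names and shapes are hypothetical.

```python
# Generic InfoNCE-style contrastive loss between two augmented views of the
# same batch of embeddings (illustrative only; not any listed paper's code).
import numpy as np

def info_nce(view1: np.ndarray, view2: np.ndarray, temperature: float = 0.2) -> float:
    """view1, view2: (batch, dim) embeddings of the same users/items/sequences
    under two different augmentations; matching rows are positive pairs."""
    # L2-normalize so dot products become cosine similarities.
    v1 = view1 / np.linalg.norm(view1, axis=1, keepdims=True)
    v2 = view2 / np.linalg.norm(view2, axis=1, keepdims=True)
    logits = v1 @ v2.T / temperature              # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positive pairs sit on the diagonal; every other entry in a row is a negative.
    return float(-np.mean(np.diag(log_prob)))

rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 16))
print(info_nce(emb, emb + 0.1 * rng.normal(size=(8, 16))))  # mild perturbation -> low loss
```

Generative and adversarial SSL paradigms swap this objective for a reconstruction loss or a discriminator, respectively, while the surrounding training loop stays largely the same.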