Towards Universal Sequence Representation Learning for Recommender
Systems
- URL: http://arxiv.org/abs/2206.05941v1
- Date: Mon, 13 Jun 2022 07:21:56 GMT
- Title: Towards Universal Sequence Representation Learning for Recommender
Systems
- Authors: Yupeng Hou, Shanlei Mu, Wayne Xin Zhao, Yaliang Li, Bolin Ding,
Ji-Rong Wen
- Abstract summary: We present a novel universal sequence representation learning approach, named UniSRec.
The proposed approach utilizes the associated description text of items to learn transferable representations across different recommendation scenarios.
Our approach can be effectively transferred to new recommendation domains or platforms in a parameter-efficient way.
- Score: 98.02154164251846
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In order to develop effective sequential recommenders, a series of
sequence representation learning (SRL) methods have been proposed to model
historical user behaviors. Most existing SRL methods rely on explicit item IDs
to build sequence models that capture user preference. Though effective to
some extent, these methods are difficult to transfer to new recommendation
scenarios, because they are tied to explicit item-ID modeling. To tackle this
issue, we present a novel universal sequence representation learning approach,
named UniSRec. The proposed approach utilizes the associated description text
of items to learn transferable representations across different recommendation
scenarios. For learning universal item representations, we design a
lightweight item encoding architecture based on parametric whitening and a
mixture-of-experts enhanced adaptor. For learning universal sequence
representations, we introduce two contrastive pre-training tasks that sample
multi-domain negatives. With the pre-trained universal sequence representation
model, our approach can be effectively transferred to new recommendation
domains or platforms in a parameter-efficient way, under either inductive or
transductive settings. Extensive experiments conducted on real-world datasets
demonstrate the effectiveness of the proposed approach. Notably, our approach
also improves performance in a cross-platform setting, showing the strong
transferability of the proposed universal SRL method. The code and pre-trained
model are available at: https://github.com/RUCAIBox/UniSRec.
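To make the two components above concrete, here is a minimal PyTorch sketch. It is an illustration under assumptions, not the authors' implementation (that lives in the linked repository): the text-embedding dimension (768, BERT-style), the item dimension, the number of experts, and all module and function names are chosen for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ParametricWhitening(nn.Module):
    """Learnable shift + linear projection of frozen item-text embeddings.
    Sketch of the 'parametric whitening' idea: both the centering vector
    and the projection are learned end to end rather than fixed."""
    def __init__(self, text_dim=768, item_dim=300, dropout=0.1):
        super().__init__()
        self.bias = nn.Parameter(torch.zeros(text_dim))
        self.proj = nn.Linear(text_dim, item_dim, bias=False)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x):
        return self.dropout(self.proj(x - self.bias))

class MoEAdaptor(nn.Module):
    """Mixture-of-experts enhanced adaptor: several whitening experts
    combined by a softmax gate, so items from different domains can be
    routed to different transformations."""
    def __init__(self, text_dim=768, item_dim=300, n_experts=8):
        super().__init__()
        self.experts = nn.ModuleList(
            [ParametricWhitening(text_dim, item_dim) for _ in range(n_experts)]
        )
        self.gate = nn.Linear(text_dim, n_experts)

    def forward(self, x):
        # x: (B, text_dim) frozen embeddings from a text encoder such as BERT
        weights = F.softmax(self.gate(x), dim=-1)            # (B, n_experts)
        outs = torch.stack([e(x) for e in self.experts], 1)  # (B, n_experts, item_dim)
        return (weights.unsqueeze(-1) * outs).sum(dim=1)     # (B, item_dim)

def contrastive_pretrain_loss(seq_repr, pos_repr, neg_repr, tau=0.07):
    """InfoNCE-style loss with explicitly sampled multi-domain negatives.
    seq_repr: (B, d) sequence representations
    pos_repr: (B, d) representations of the ground-truth next items
    neg_repr: (B, K, d) negatives drawn from multiple domains (assumption:
              cross-domain in-batch negatives would serve the same role)."""
    seq = F.normalize(seq_repr, dim=-1)
    pos = F.normalize(pos_repr, dim=-1)
    neg = F.normalize(neg_repr, dim=-1)
    pos_logit = (seq * pos).sum(-1, keepdim=True) / tau      # (B, 1)
    neg_logit = torch.einsum("bd,bkd->bk", seq, neg) / tau   # (B, K)
    logits = torch.cat([pos_logit, neg_logit], dim=1)
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)                   # positive sits at index 0
```

In the paper's setup, a Transformer sequence encoder runs over the adapted item embeddings; one natural reading of "parameter-efficient" transfer is to keep that pre-trained encoder frozen on a new domain or platform and fine-tune only the small adaptor.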
Related papers
- Sparse Orthogonal Parameters Tuning for Continual Learning [34.462967722928724]
Continual learning methods based on pre-trained models (PTMs), which adapt to successive downstream tasks without catastrophic forgetting, have recently gained attention.
We propose a novel yet effective method called SoTU (Sparse Orthogonal Parameters TUning).
arXiv Detail & Related papers (2024-11-05T05:19:09Z)
- Improving generalization in large language models by learning prefix subspaces [5.911540700785975]
This article focuses on fine-tuning large language models (LLMs) in the scarce-data regime (also known as the "few-shot" learning setting).
We propose a method to increase the generalization capabilities of LLMs based on neural network subspaces.
arXiv Detail & Related papers (2023-10-24T12:44:09Z)
- MISSRec: Pre-training and Transferring Multi-modal Interest-aware Sequence Representation for Recommendation [61.45986275328629]
We propose MISSRec, a multi-modal pre-training and transfer learning framework for sequential recommendation.
On the user side, we design a Transformer-based encoder-decoder model, where the contextual encoder learns to capture the sequence-level multi-modal user interests.
On the candidate item side, we adopt a dynamic fusion module to produce user-adaptive item representations (a minimal sketch follows this entry).
arXiv Detail & Related papers (2023-08-22T04:06:56Z)
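A "dynamic fusion module" of this kind suggests a gate, conditioned on the user, that reweights modality-specific item embeddings. Below is a minimal sketch of that idea, assuming two modalities (text and image); the names are invented and the paper's actual module may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicFusion(nn.Module):
    """User-adaptive fusion: a gate conditioned on the user's sequence
    representation weights each modality's item embedding."""
    def __init__(self, dim, n_modalities=2):
        super().__init__()
        self.gate = nn.Linear(dim, n_modalities)

    def forward(self, user_repr, modal_embs):
        # user_repr: (B, dim); modal_embs: (B, M, dim), e.g. text and image
        w = F.softmax(self.gate(user_repr), dim=-1)    # (B, M) per-modality weights
        return (w.unsqueeze(-1) * modal_embs).sum(1)   # (B, dim) fused item view
```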
- A Model-Agnostic Framework for Recommendation via Interest-aware Item Embeddings [4.989653738257287]
The Interest-aware Capsule network (IaCN) is a model-agnostic framework that directly learns interest-oriented item representations.
IaCN serves as an auxiliary task, enabling the joint learning of both item-based and interest-based representations.
We evaluate the proposed approach on benchmark datasets, exploring various scenarios involving different deep neural networks.
arXiv Detail & Related papers (2023-08-17T22:40:59Z)
- Fisher-Weighted Merge of Contrastive Learning Models in Sequential Recommendation [0.0]
We are the first to apply the Fisher-Merging method to sequential recommendation, addressing and resolving the practical challenges involved.
We demonstrate the effectiveness of the proposed methods, highlighting their potential to advance the state of the art in sequential learning and recommendation systems (a sketch of Fisher-weighted merging follows this entry).
arXiv Detail & Related papers (2023-07-05T05:58:56Z)
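Fisher-Merging (in the sense of Matena and Raffel's Fisher-weighted averaging) averages checkpoints coordinate-wise, weighting each parameter by a diagonal Fisher estimate. A minimal sketch follows; how the cited paper adapts this to sequential recommenders may differ, and both function names are invented.

```python
import torch

def diagonal_fisher(model, data_loader, loss_fn):
    """Estimate a diagonal Fisher per parameter as the mean of squared
    gradients over a stream of held-out batches."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    n_batches = 0
    for x, y in data_loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
        n_batches += 1
    return {n: f / max(n_batches, 1) for n, f in fisher.items()}

def fisher_merge(state_dicts, fishers, eps=1e-8):
    """Merge k checkpoints coordinate-wise:
    theta* = sum_i F_i * theta_i / sum_i F_i (element-wise)."""
    merged = dict(state_dicts[0])  # buffers etc. default to the first checkpoint
    for name in fishers[0]:
        num = sum(f[name] * sd[name] for sd, f in zip(state_dicts, fishers))
        den = sum(f[name] for f in fishers) + eps
        merged[name] = num / den
    return merged
```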
- Contrastive Self-supervised Sequential Recommendation with Robust Augmentation [101.25762166231904]
Sequential recommendation describes a set of techniques that model dynamic user behavior in order to predict future interactions in sequential user data.
Old and new issues remain, including data sparsity and noisy data.
We propose Contrastive Self-Supervised Learning for sequential Recommendation (CoSeRec); typical sequence augmentations are sketched after this entry.
arXiv Detail & Related papers (2021-08-14T07:15:25Z)
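Contrastive sequence recommenders typically build two augmented views of the same interaction sequence and pull their representations together. Below is a sketch of the common crop/mask/reorder operations; CoSeRec's "robust" additions (e.g. similarity-based item substitution) are not reproduced here.

```python
import random

def crop(seq, ratio=0.6):
    """Keep a random contiguous window of the interaction sequence."""
    n = max(1, int(len(seq) * ratio))
    start = random.randint(0, len(seq) - n)
    return seq[start:start + n]

def mask(seq, ratio=0.3, mask_token=0):
    """Replace a random subset of items with a mask token."""
    idx = set(random.sample(range(len(seq)), int(len(seq) * ratio)))
    return [mask_token if i in idx else item for i, item in enumerate(seq)]

def reorder(seq, ratio=0.3):
    """Shuffle a random contiguous sub-segment."""
    n = max(1, int(len(seq) * ratio))
    start = random.randint(0, len(seq) - n)
    segment = seq[start:start + n]
    random.shuffle(segment)
    return seq[:start] + segment + seq[start + n:]
```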
- S^3-Rec: Self-Supervised Learning for Sequential Recommendation with Mutual Information Maximization [104.87483578308526]
We propose the model S^3-Rec, which stands for Self-Supervised learning for Sequential Recommendation.
For our task, we devise four auxiliary self-supervised objectives to learn the correlations among attributes, items, subsequences, and sequences (one way to instantiate such an objective is sketched below).
Extensive experiments conducted on six real-world datasets demonstrate the superiority of our proposed method over existing state-of-the-art methods.
arXiv Detail & Related papers (2020-08-18T11:44:10Z)
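One standard way to instantiate a mutual-information objective, e.g. between an item and the subsequence containing it, is an InfoNCE bound with a bilinear critic scored against in-batch negatives. A minimal sketch with invented names; S^3-Rec's four objectives share this shape but differ in what they pair.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BilinearMI(nn.Module):
    """Bilinear critic f(a, b) = a^T W b scoring positive pairs against
    in-batch negatives (an InfoNCE-style MI lower bound)."""
    def __init__(self, dim_a, dim_b):
        super().__init__()
        self.W = nn.Linear(dim_a, dim_b, bias=False)

    def forward(self, a, b):
        # a: (B, dim_a), b: (B, dim_b); score every pair in the batch
        scores = self.W(a) @ b.t()                         # (B, B)
        labels = torch.arange(a.size(0), device=a.device)  # diagonal = positives
        return F.cross_entropy(scores, labels)
```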
- Self-Supervised Reinforcement Learning for Recommender Systems [77.38665506495553]
We propose self-supervised reinforcement learning for sequential recommendation tasks.
Our approach augments standard recommendation models with two output layers: one for self-supervised learning and the other for RL.
Based on this approach, we propose two frameworks, namely Self-Supervised Q-learning (SQN) and Self-Supervised Actor-Critic (SAC); a minimal SQN-style sketch follows this entry.
arXiv Detail & Related papers (2020-06-10T11:18:57Z)
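The two-head design lends itself to a compact sketch: a shared sequence encoder feeds a supervised next-item head and a Q-value head trained with a one-step TD error. Names, reward, and discount below are illustrative; a practical implementation would use a separate target network rather than the no_grad re-evaluation shown here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoHead(nn.Module):
    """Two output layers on a shared state: supervised next-item logits
    and Q-values over the item catalogue."""
    def __init__(self, hidden_dim, n_items):
        super().__init__()
        self.supervised = nn.Linear(hidden_dim, n_items)
        self.q_head = nn.Linear(hidden_dim, n_items)

    def forward(self, state):
        return self.supervised(state), self.q_head(state)

def sqn_loss(heads, state, next_state, action, reward, gamma=0.5):
    """Cross-entropy on the observed next item plus a one-step TD error
    on the Q head."""
    logits, q = heads(state)
    ce = F.cross_entropy(logits, action)
    with torch.no_grad():
        _, q_next = heads(next_state)
        target = reward + gamma * q_next.max(dim=1).values
    td = F.mse_loss(q.gather(1, action.unsqueeze(1)).squeeze(1), target)
    return ce + td
```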
- Sequential Recommendation with Self-Attentive Multi-Adversarial Network [101.25533520688654]
We present a Multi-Factor Generative Adversarial Network (MFGAN) for explicitly modeling the effect of context information on sequential recommendation.
Our framework is flexible enough to incorporate multiple kinds of factor information, and it can trace how each factor contributes to the recommendation decision over time (per-factor discriminators are sketched after this entry).
arXiv Detail & Related papers (2020-05-21T12:28:59Z)
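Factor-wise tracing suggests one discriminator per factor, whose scores serve both as a training signal for the generator and as an interpretable per-step attribution. A deliberately thin sketch; real discriminators would be sequence models, and all names here are invented.

```python
import torch
import torch.nn as nn

class FactorDiscriminators(nn.Module):
    """One discriminator per factor; per-step scores can be inspected to
    see how each factor contributes to the recommendation over time."""
    def __init__(self, dim, n_factors):
        super().__init__()
        self.discs = nn.ModuleList([nn.Linear(dim, 1) for _ in range(n_factors)])

    def forward(self, seq_repr):
        # seq_repr: (B, T, dim) -> per-factor scores (B, T, n_factors)
        return torch.cat([d(seq_repr) for d in self.discs], dim=-1)
```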
This list is automatically generated from the titles and abstracts of the papers on this site.