End-to-End Learnable Item Tokenization for Generative Recommendation
- URL: http://arxiv.org/abs/2409.05546v1
- Date: Mon, 09 Sep 2024 12:11:53 GMT
- Title: End-to-End Learnable Item Tokenization for Generative Recommendation
- Authors: Enze Liu, Bowen Zheng, Cheng Ling, Lantao Hu, Han Li, Wayne Xin Zhao
- Abstract summary: We propose ETEGRec, a novel End-To-End Generative Recommender that seamlessly integrates item tokenization and generative recommendation.
Our framework is built on a dual encoder-decoder architecture consisting of an item tokenizer and a generative recommender.
- Score: 51.82768744368208
- Abstract: Recently, generative recommendation has emerged as a promising new paradigm that directly generates item identifiers for recommendation. However, a key challenge lies in how to effectively construct item identifiers that are suitable for recommender systems. Existing methods typically decouple item tokenization from subsequent generative recommendation training, likely resulting in suboptimal performance. To address this limitation, we propose ETEGRec, a novel End-To-End Generative Recommender that seamlessly integrates item tokenization and generative recommendation. Our framework is built on a dual encoder-decoder architecture, which consists of an item tokenizer and a generative recommender. To achieve mutual enhancement between the two components, we propose a recommendation-oriented alignment approach with two specific optimization objectives: sequence-item alignment and preference-semantic alignment. These two alignment objectives effectively couple the learning of the item tokenizer and the generative recommender, thereby fostering mutual enhancement between them. Finally, we further devise an alternating optimization method to facilitate stable and effective end-to-end learning of the entire framework. Extensive experiments demonstrate the effectiveness of our proposed framework compared to a series of traditional sequential recommendation models and generative recommendation baselines.
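As a rough illustration of how such an alternating schedule could be wired together, the sketch below couples a toy item tokenizer and a toy generative recommender with two alignment-style losses in PyTorch. The module definitions, the MSE/cosine loss forms, and the per-step alternation are assumptions made purely for illustration; they are not the paper's actual architecture or objectives.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ItemTokenizer(nn.Module):
    """Toy stand-in for the item tokenizer: encodes item features and reconstructs them."""
    def __init__(self, dim=64):
        super().__init__()
        self.enc = nn.Linear(dim, dim)
        self.dec = nn.Linear(dim, dim)

    def forward(self, item_feats):
        code = self.enc(item_feats)      # latent "semantic" code for the item
        recon = self.dec(code)           # reconstruction of the input features
        return code, recon

class GenerativeRecommender(nn.Module):
    """Toy stand-in for the recommender: encodes a sequence and predicts the next item's code."""
    def __init__(self, dim=64):
        super().__init__()
        self.seq_enc = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, dim)

    def forward(self, seq_feats):
        _, h = self.seq_enc(seq_feats)
        seq_repr = h.squeeze(0)          # user-preference representation
        return seq_repr, self.head(seq_repr)

tokenizer, recommender = ItemTokenizer(), GenerativeRecommender()
opt_tok = torch.optim.Adam(tokenizer.parameters(), lr=1e-3)
opt_rec = torch.optim.Adam(recommender.parameters(), lr=1e-3)

for step in range(100):
    seq_feats = torch.randn(8, 10, 64)   # fake interaction sequences (batch, len, dim)
    next_feats = torch.randn(8, 64)      # fake features of the ground-truth next item
    item_code, recon = tokenizer(next_feats)
    seq_repr, pred_code = recommender(seq_feats)

    # Sequence-item alignment: the recommender's prediction should match the item's code.
    seq_item = F.mse_loss(pred_code, item_code)
    # Preference-semantic alignment: the sequence representation should agree with item semantics.
    pref_sem = 1.0 - F.cosine_similarity(seq_repr, item_code, dim=-1).mean()

    opt_tok.zero_grad()
    opt_rec.zero_grad()
    if step % 2 == 0:                    # alternate: update the recommender ...
        (seq_item + pref_sem).backward()
        opt_rec.step()
    else:                                # ... then the tokenizer (plus its reconstruction loss)
        (F.mse_loss(recon, next_feats) + seq_item + pref_sem).backward()
        opt_tok.step()
```

In the actual framework the two components would be far richer encoder-decoder models and the alternation would operate over full optimization objectives rather than single gradient steps; the sketch only shows how alignment terms can couple the training of the two components.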
Related papers
- EAGER: Two-Stream Generative Recommender with Behavior-Semantic Collaboration [63.112790050749695]
We introduce EAGER, a novel generative recommendation framework that seamlessly integrates both behavioral and semantic information.
We validate the effectiveness of EAGER on four public benchmarks, demonstrating its superior performance compared to existing methods.
arXiv Detail & Related papers (2024-06-20T06:21:56Z)
- Diffusion-based Contrastive Learning for Sequential Recommendation [6.3482831836623355]
We propose a Context-aware Diffusion-based Contrastive Learning method for Sequential Recommendation, named CaDiRec.
CaDiRec employs a context-aware diffusion model to generate alternative items for the given positions within a sequence.
We train the entire framework in an end-to-end manner, with shared item embeddings between the diffusion model and the recommendation model.
arXiv Detail & Related papers (2024-05-15T14:20:37Z)
- Fisher-Weighted Merge of Contrastive Learning Models in Sequential Recommendation [0.0]
We are the first to apply the Fisher-Merging method to Sequential Recommendation, addressing and resolving practical challenges associated with it.
We demonstrate the effectiveness of our proposed methods, highlighting their potential to advance the state of the art in sequential learning and recommendation systems (a generic sketch of Fisher-weighted parameter merging appears after this list).
arXiv Detail & Related papers (2023-07-05T05:58:56Z)
- Dual Personalization on Federated Recommendation [50.4115315992418]
Federated recommendation is a new Internet service architecture that aims to provide privacy-preserving recommendation services in federated settings.
This paper proposes a novel Personalized Federated Recommendation (PFedRec) framework to learn many user-specific lightweight models.
We also propose a new dual personalization mechanism to effectively learn fine-grained personalization on both users and items.
arXiv Detail & Related papers (2023-01-16T05:26:07Z)
- ContrastVAE: Contrastive Variational AutoEncoder for Sequential Recommendation [58.02630582309427]
We propose to incorporate contrastive learning into the framework of Variational AutoEncoders.
We introduce ContrastELBO, a novel training objective that extends the conventional single-view ELBO to the two-view case.
We also propose ContrastVAE, a two-branched VAE model with contrastive regularization, as an embodiment of ContrastELBO for sequential recommendation (see the loss sketch after this list).
arXiv Detail & Related papers (2022-08-27T03:35:00Z)
- Towards Universal Sequence Representation Learning for Recommender Systems [98.02154164251846]
We present a novel universal sequence representation learning approach, named UniSRec.
The proposed approach utilizes the associated description text of items to learn transferable representations across different recommendation scenarios.
Our approach can be effectively transferred to new recommendation domains or platforms in a parameter-efficient way.
arXiv Detail & Related papers (2022-06-13T07:21:56Z)
- D2RLIR: an improved and diversified ranking function in interactive recommendation systems based on deep reinforcement learning [0.3058685580689604]
This paper proposes a deep reinforcement learning-based recommendation system built on an Actor-Critic architecture.
The proposed model is able to generate a diverse yet relevant recommendation list based on the user's preferences.
arXiv Detail & Related papers (2021-10-28T13:11:29Z)
- Sequential Recommendation with Self-Attentive Multi-Adversarial Network [101.25533520688654]
We present a Multi-Factor Generative Adversarial Network (MFGAN) for explicitly modeling the effect of context information on sequential recommendation.
Our framework can flexibly incorporate multiple kinds of factor information and is able to trace how each factor contributes to the recommendation decision over time.
arXiv Detail & Related papers (2020-05-21T12:28:59Z)
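The Fisher-weighted merging entry above refers to a general technique whose core step is a per-parameter weighted average of several models, using diagonal Fisher estimates as weights. Below is a generic sketch of that idea (the function names and the squared-gradient Fisher estimate are my own); how the cited paper adapts it to contrastive sequential recommenders is not described in the abstract.

```python
import torch

def diagonal_fisher(model, data_loader, loss_fn):
    """Estimate a diagonal Fisher as the average squared gradient of each parameter."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    n_batches = 0
    for inputs, targets in data_loader:
        model.zero_grad()
        loss_fn(model(inputs), targets).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
        n_batches += 1
    return {n: f / max(n_batches, 1) for n, f in fisher.items()}

def fisher_merge(state_dicts, fishers, eps=1e-8):
    """Parameter-wise weighted average: theta* = sum_i(F_i * theta_i) / sum_i(F_i)."""
    merged = {}
    for name in fishers[0]:  # merge only parameters that have Fisher estimates
        num = sum(f[name] * sd[name] for sd, f in zip(state_dicts, fishers))
        den = sum(f[name] for f in fishers) + eps
        merged[name] = num / den
    return merged
```

Here each `fishers[i]` would be obtained by running `diagonal_fisher` on the i-th model with its own data, and the merged dictionary can then be loaded into a fresh model instance.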
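For the ContrastVAE entry, the abstract describes extending the single-view ELBO to a two-view objective with contrastive regularization. The loss sketch below is one plausible reading of that idea: a reconstruction-plus-KL term per view and an InfoNCE term pulling the two views' latents together. The exact ContrastELBO formulation in the paper may differ.

```python
import torch
import torch.nn.functional as F

def elbo(recon_logits, targets, mu, logvar):
    """Single-view ELBO: next-item reconstruction term plus KL to a unit Gaussian."""
    recon = F.cross_entropy(recon_logits.transpose(1, 2), targets)  # (B, T, V) logits, (B, T) targets
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

def info_nce(z1, z2, temperature=0.1):
    """Contrastive term pulling the two views' latents together (in-batch negatives)."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

def contrast_elbo(view1, view2, lam=0.5):
    """Two-view objective: ELBO of each augmented view plus a contrastive regularizer."""
    l1 = elbo(view1["logits"], view1["targets"], view1["mu"], view1["logvar"])
    l2 = elbo(view2["logits"], view2["targets"], view2["mu"], view2["logvar"])
    return l1 + l2 + lam * info_nce(view1["z"], view2["z"])
```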