E4SRec: An Elegant Effective Efficient Extensible Solution of Large
Language Models for Sequential Recommendation
- URL: http://arxiv.org/abs/2312.02443v1
- Date: Tue, 5 Dec 2023 02:50:18 GMT
- Title: E4SRec: An Elegant Effective Efficient Extensible Solution of Large
Language Models for Sequential Recommendation
- Authors: Xinhang Li, Chong Chen, Xiangyu Zhao, Yong Zhang, Chunxiao Xing
- Abstract summary: We introduce an Elegant Effective Efficient Extensible solution of Large Language Models for Sequential Recommendation (E4SRec).
E4SRec seamlessly integrates Large Language Models with traditional recommender systems that exclusively utilize IDs to represent items.
- Score: 30.16954700102393
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The recent advancements in Large Language Models (LLMs) have sparked interest
in harnessing their potential within recommender systems. Since LLMs are
designed for natural language tasks, existing recommendation approaches have
predominantly transformed recommendation tasks into open-domain natural
language generation tasks. However, this approach necessitates items to possess
rich semantic information, often generates out-of-range results, and suffers
from notably low efficiency and limited extensibility. Furthermore, practical
ID-based recommendation strategies, reliant on a huge number of unique
identities (IDs) to represent users and items, have gained prominence in
real-world recommender systems due to their effectiveness and efficiency.
Nevertheless, the incapacity of LLMs to model IDs presents a formidable
challenge when seeking to leverage LLMs for personalized recommendations. In
this paper, we introduce an Elegant Effective Efficient Extensible solution for
large language models for Sequential Recommendation (E4SRec), which seamlessly
integrates LLMs with traditional recommender systems that exclusively utilize
IDs to represent items. Specifically, E4SRec takes ID sequences as inputs,
ensuring that the generated outputs fall within the candidate lists.
Furthermore, E4SRec possesses the capability to generate the entire ranking
list in a single forward process, and demands only a minimal set of pluggable
parameters, which are trained for each dataset while keeping the entire LLM
frozen. We substantiate the effectiveness, efficiency, and extensibility of our
proposed E4SRec through comprehensive experiments conducted on four widely-used
real-world datasets. The implementation code is accessible at
https://github.com/HestiaSky/E4SRec/.
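The abstract's efficiency and extensibility claims follow from the architecture it sketches: a frozen LLM backbone plus a small set of dataset-specific, pluggable parameters that map item IDs into the LLM's input space and score every candidate item in a single forward pass. The snippet below is a minimal sketch of that idea, not the authors' implementation; the module names, the Hugging Face-style backbone interface, and the use of pretrained ID embeddings are illustrative assumptions.

```python
# Minimal sketch (assumptions: a frozen decoder-only LLM backbone exposing a
# Hugging Face-style `inputs_embeds`/`last_hidden_state` interface, pretrained
# item ID embeddings, and a linear prediction head; names and shapes are
# illustrative, not the paper's exact implementation).
import torch
import torch.nn as nn

class E4SRecSketch(nn.Module):
    def __init__(self, llm: nn.Module, id_embeddings: torch.Tensor, llm_dim: int):
        super().__init__()
        self.llm = llm                      # shared LLM backbone, kept frozen
        for p in self.llm.parameters():
            p.requires_grad = False
        num_items, id_dim = id_embeddings.shape
        # Pluggable, dataset-specific parameters: item ID embeddings, an input
        # projection into the LLM space, and an output prediction head.
        self.item_emb = nn.Embedding.from_pretrained(id_embeddings, freeze=False)
        self.in_proj = nn.Linear(id_dim, llm_dim)
        self.out_head = nn.Linear(llm_dim, num_items)

    def forward(self, item_id_seq: torch.LongTensor) -> torch.Tensor:
        # item_id_seq: (batch, seq_len) of item IDs from the user's history.
        x = self.in_proj(self.item_emb(item_id_seq))        # (B, L, llm_dim)
        h = self.llm(inputs_embeds=x).last_hidden_state     # (B, L, llm_dim)
        # One forward pass scores every candidate item, so the output always
        # falls within the candidate list by construction.
        return self.out_head(h[:, -1, :])                   # (B, num_items)
```

Under these assumptions, only item_emb, in_proj, and out_head are trained per dataset while the LLM weights stay frozen and shared; supporting a new dataset then amounts to swapping in a new set of pluggable modules, and the classification-style head keeps predictions inside the candidate item set.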
Related papers
- GOT4Rec: Graph of Thoughts for Sequential Recommendation [25.03964361177406]
We propose GOT4Rec, a sequential recommendation method that utilizes the graph of thoughts (GoT) prompting strategy.
We identify and utilize three key types of information within user history sequences: short-term interests, long-term interests and collaborative information from other users.
Extensive experiments on real-world datasets demonstrate the effectiveness of GOT4Rec, indicating that it outperforms existing state-of-the-art baselines.
arXiv Detail & Related papers (2024-11-22T13:24:01Z)
- Enhancing ID-based Recommendation with Large Language Models [47.14302346325941]
We introduce a pioneering approach called "LLM for ID-based Recommendation" (LLM4IDRec).
This innovative approach integrates the capabilities of LLMs while exclusively relying on ID data, thus diverging from the previous reliance on textual data.
We evaluate the effectiveness of our LLM4IDRec approach using three widely-used datasets.
arXiv Detail & Related papers (2024-11-04T12:43:12Z)
- Laser: Parameter-Efficient LLM Bi-Tuning for Sequential Recommendation with Collaborative Information [76.62949982303532]
We propose a parameter-efficient Large Language Model Bi-Tuning framework for sequential recommendation with collaborative information (Laser).
In our Laser, the prefix is utilized to incorporate user-item collaborative information and adapt the LLM to the recommendation task, while the suffix converts the output embeddings of the LLM from the language space to the recommendation space for the follow-up item recommendation.
M-Former is a lightweight MoE-based querying transformer that uses a set of query experts to integrate diverse user-specific collaborative information encoded by frozen ID-based sequential recommender systems.
arXiv Detail & Related papers (2024-09-03T04:55:03Z)
- TokenRec: Learning to Tokenize ID for LLM-based Generative Recommendation [16.93374578679005]
TokenRec is a novel tokenization and retrieval framework for large language model (LLM)-based recommender systems (RecSys).
Our strategy, Masked Vector-Quantized (MQ) Tokenizer, quantizes the masked user/item representations learned from collaborative filtering into discrete tokens.
Our generative retrieval paradigm is designed to efficiently recommend top-$K$ items for users, eliminating the need for auto-regressive decoding and beam search.
arXiv Detail & Related papers (2024-06-15T00:07:44Z)
- One Token Can Help! Learning Scalable and Pluggable Virtual Tokens for Retrieval-Augmented Large Language Models [67.49462724595445]
Retrieval-augmented generation (RAG) is a promising way to improve large language models (LLMs).
We propose a novel method that involves learning scalable and pluggable virtual tokens for RAG.
arXiv Detail & Related papers (2024-05-30T03:44:54Z)
- RA-Rec: An Efficient ID Representation Alignment Framework for LLM-based Recommendation [9.606111709136675]
We present RA-Rec, an efficient ID representation framework for LLM-based recommendation.
RA-Rec substantially outperforms current state-of-the-art methods, achieving up to 3.0% absolute HitRate@100 improvements.
arXiv Detail & Related papers (2024-02-07T02:14:58Z)
- Unlocking the Potential of Large Language Models for Explainable Recommendations [55.29843710657637]
It remains uncertain what impact replacing the explanation generator with the recently emerging large language models (LLMs) would have.
In this study, we propose LLMXRec, a simple yet effective two-stage explainable recommendation framework.
By adopting several key fine-tuning techniques, controllable and fluent explanations can be well generated.
arXiv Detail & Related papers (2023-12-25T09:09:54Z)
- Query-Dependent Prompt Evaluation and Optimization with Offline Inverse RL [62.824464372594576]
We aim to enhance arithmetic reasoning ability of Large Language Models (LLMs) through zero-shot prompt optimization.
We identify a previously overlooked objective of query dependency in such optimization.
We introduce Prompt-OIRL, which harnesses offline inverse reinforcement learning to draw insights from offline prompting demonstration data.
arXiv Detail & Related papers (2023-09-13T01:12:52Z)
- Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations [53.76682562935373]
We introduce an efficient framework called InteRecAgent, which employs LLMs as the brain and recommender models as tools.
InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs.
arXiv Detail & Related papers (2023-08-31T07:36:44Z)
- PALR: Personalization Aware LLMs for Recommendation [7.407353565043918]
PALR aims to combine user history behaviors (such as clicks, purchases, ratings, etc.) with large language models (LLMs) to generate user preferred items.
Our solution outperforms state-of-the-art models on various sequential recommendation tasks.
arXiv Detail & Related papers (2023-05-12T17:21:33Z)