Reindex-Then-Adapt: Improving Large Language Models for Conversational Recommendation
- URL: http://arxiv.org/abs/2405.12119v1
- Date: Mon, 20 May 2024 15:37:55 GMT
- Title: Reindex-Then-Adapt: Improving Large Language Models for Conversational Recommendation
- Authors: Zhankui He, Zhouhang Xie, Harald Steck, Dawen Liang, Rahul Jha, Nathan Kallus, Julian McAuley
- Abstract summary: Large language models (LLMs) are revolutionizing conversational recommender systems.
We propose a Reindex-Then-Adapt (RTA) framework, which converts multi-token item titles into single tokens within LLMs.
Our framework demonstrates improved accuracy metrics across three different conversational recommendation datasets.
- Score: 50.19602159938368
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large language models (LLMs) are revolutionizing conversational recommender systems by adeptly indexing item content, understanding complex conversational contexts, and generating relevant item titles. However, controlling the distribution of recommended items remains a challenge, leading to suboptimal performance when rapidly changing data distributions, such as item popularity, on the target conversational recommendation platform are not captured. In conversational recommendation, LLMs recommend items by autoregressively generating their titles as multiple tokens, which makes it difficult to obtain and control the recommendation distribution over all items. We therefore propose a Reindex-Then-Adapt (RTA) framework, which converts multi-token item titles into single tokens within LLMs and then adjusts the probability distributions over these single-token item titles accordingly. The RTA framework marries the benefits of both LLMs and traditional recommender systems (RecSys): it understands complex queries as LLMs do, while efficiently controlling the recommended item distributions as traditional RecSys do. Our framework demonstrates improved accuracy metrics across three conversational recommendation datasets and two adaptation settings.
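To make the two steps concrete, here is a minimal Python sketch assuming a toy item vocabulary and a hypothetical popularity prior; the names `popularity_logits` and `alpha` are illustrative and are not the paper's actual adaptation module:

```python
# Minimal sketch of the Reindex-Then-Adapt idea over a toy LLM head.
import math

# Step 1 (Reindex): each multi-token title is assigned one new token id,
# so one forward pass yields a full distribution over items.
ITEM_TITLES = ["The Dark Knight", "Inception", "Interstellar"]
item_token_ids = {title: i for i, title in enumerate(ITEM_TITLES)}

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Pretend logits over the single-token item vocabulary from the LLM head.
llm_item_logits = [2.1, 0.3, 1.5]

# Step 2 (Adapt): adjust the distribution with platform-side signals,
# e.g. a popularity prior, the way a traditional RecSys would.
popularity_logits = [0.0, 1.8, 0.2]  # hypothetical recent-popularity scores
alpha = 0.5                           # hypothetical interpolation weight

adapted = [l + alpha * p for l, p in zip(llm_item_logits, popularity_logits)]
probs = softmax(adapted)
for title, p in sorted(zip(ITEM_TITLES, probs), key=lambda t: -t[1]):
    print(f"{title}: {p:.3f}")
```

The point of the reindexing step is that the whole item distribution becomes available in a single forward pass, so it can be reweighted like any traditional RecSys score.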
Related papers
- Self-Calibrated Listwise Reranking with Large Language Models [137.6557607279876]
Large language models (LLMs) have been employed in reranking tasks through a sequence-to-sequence approach.
This reranking paradigm requires a sliding window strategy to iteratively handle larger candidate sets.
We propose a novel self-calibrated listwise reranking method, which aims to leverage LLMs to produce global relevance scores for ranking.
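As a rough illustration of why globally comparable scores remove the sliding window, here is a hedged Python sketch; `llm_relevance_score` is a hypothetical stand-in for the LLM call, not the paper's calibration procedure:

```python
# Hedged sketch of listwise reranking with globally comparable scores.
def llm_relevance_score(query: str, passage: str) -> float:
    # Placeholder: a real system would prompt an LLM for a relevance
    # judgment calibrated against the listwise view.
    return float(len(set(query.lower().split()) & set(passage.lower().split())))

def rerank(query: str, candidates: list[str], top_k: int = 3) -> list[str]:
    # Because every candidate gets a global score, large candidate sets
    # can be scored in independent batches -- no sliding window needed.
    scored = [(llm_relevance_score(query, c), c) for c in candidates]
    scored.sort(key=lambda t: -t[0])
    return [c for _, c in scored[:top_k]]

print(rerank("sci-fi space movie",
             ["Interstellar space epic", "Romantic comedy", "Space sci-fi thriller"]))
```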
arXiv Detail & Related papers (2024-11-07T10:31:31Z)
- STORE: Streamlining Semantic Tokenization and Generative Recommendation with A Single LLM [59.08493154172207]
We propose a unified framework to streamline the semantic tokenization and generative recommendation process.
We formulate semantic tokenization as a text-to-token task and generative recommendation as a token-to-token task, supplemented by a token-to-text reconstruction task and a text-to-token auxiliary task.
All these tasks are framed in a generative manner and trained using a single large language model (LLM) backbone.
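For intuition, here is a hedged sketch of how such tasks might be serialized for one generative backbone; the task prefixes and the `<a_*>`/`<b_*>` token format are illustrative assumptions, not the paper's exact scheme:

```python
# Illustrative serialization of STORE-style tasks as (input, target) pairs
# for a single seq2seq backbone.
def text_to_token(item_text: str, semantic_tokens: list[str]) -> tuple[str, str]:
    # Semantic tokenization: item text in, discrete semantic tokens out.
    return f"tokenize: {item_text}", " ".join(semantic_tokens)

def token_to_token(history: list[str], next_item: str) -> tuple[str, str]:
    # Generative recommendation: token history in, next item's tokens out.
    return f"recommend: {' '.join(history)}", next_item

def token_to_text(semantic_tokens: list[str], item_text: str) -> tuple[str, str]:
    # Auxiliary reconstruction keeps tokens grounded in item content.
    return f"describe: {' '.join(semantic_tokens)}", item_text

examples = [
    text_to_token("Wireless noise-cancelling headphones", ["<a_12>", "<b_7>"]),
    token_to_token(["<a_12> <b_7>", "<a_3> <b_9>"], "<a_5> <b_1>"),
    token_to_text(["<a_12>", "<b_7>"], "Wireless noise-cancelling headphones"),
]
for src, tgt in examples:
    print(f"{src!r} -> {tgt!r}")
```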
arXiv Detail & Related papers (2024-09-11T13:49:48Z)
- Laser: Parameter-Efficient LLM Bi-Tuning for Sequential Recommendation with Collaborative Information [76.62949982303532]
We propose a parameter-efficient Large Language Model Bi-Tuning framework for sequential recommendation with collaborative information (Laser).
In our Laser, the prefix is utilized to incorporate user-item collaborative information and adapt the LLM to the recommendation task, while the suffix converts the output embeddings of the LLM from the language space to the recommendation space for the follow-up item recommendation.
M-Former is a lightweight MoE-based querying transformer that uses a set of query experts to integrate diverse user-specific collaborative information encoded by frozen ID-based sequential recommender systems.
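A minimal PyTorch sketch of the bi-tuning shape follows, assuming a small stand-in frozen backbone; the real Laser uses an actual LLM and the M-Former to build the prefix:

```python
# Trainable "virtual token" prefix plus a trainable suffix projection
# wrapped around a frozen backbone (a stand-in for the LLM).
import torch
import torch.nn as nn

class BiTunedRecommender(nn.Module):
    def __init__(self, d_model=64, n_prefix=4, n_items=1000):
        super().__init__()
        # Trainable prefix: injects user-item collaborative information.
        self.prefix = nn.Parameter(torch.randn(n_prefix, d_model) * 0.02)
        # Frozen stand-in for the LLM backbone.
        self.backbone = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2,
        )
        for p in self.backbone.parameters():
            p.requires_grad = False
        # Trainable suffix: maps language-space outputs to item scores.
        self.suffix = nn.Linear(d_model, n_items)

    def forward(self, token_embs):  # (batch, seq, d_model)
        prefix = self.prefix.unsqueeze(0).expand(token_embs.size(0), -1, -1)
        h = self.backbone(torch.cat([prefix, token_embs], dim=1))
        return self.suffix(h[:, -1])  # scores over the item catalog

model = BiTunedRecommender()
scores = model(torch.randn(2, 10, 64))
print(scores.shape)  # torch.Size([2, 1000])
```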
arXiv Detail & Related papers (2024-09-03T04:55:03Z)
- Taxonomy-Guided Zero-Shot Recommendations with LLMs [45.81618062939684]
Large language models (LLMs) have shown promise in recommender systems (RecSys).
We propose a novel method using a taxonomy dictionary to improve the clarity and structure of item information.
TaxRec significantly enhances recommendation quality compared to traditional zero-shot approaches.
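As a small sketch of the general idea of taxonomy-guided prompting, the taxonomy fields below are invented for illustration and are not TaxRec's actual dictionary:

```python
# Items are rewritten via a taxonomy dictionary so the LLM sees clean,
# structured attributes instead of raw, noisy item text.
TAXONOMY = {
    "Inception": {"genre": "Sci-Fi/Thriller", "theme": "dreams", "era": "2010s"},
    "The Notebook": {"genre": "Romance/Drama", "theme": "memory", "era": "2000s"},
}

def describe(item: str) -> str:
    attrs = "; ".join(f"{k}: {v}" for k, v in TAXONOMY[item].items())
    return f"{item} ({attrs})"

def build_prompt(liked: list[str], candidates: list[str]) -> str:
    return (
        "The user liked:\n"
        + "\n".join(f"- {describe(i)}" for i in liked)
        + "\nRank these candidates:\n"
        + "\n".join(f"- {describe(c)}" for c in candidates)
    )

print(build_prompt(["Inception"], ["The Notebook"]))
```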
arXiv Detail & Related papers (2024-06-20T07:06:58Z)
- TokenRec: Learning to Tokenize ID for LLM-based Generative Recommendation [16.93374578679005]
TokenRec is a novel tokenization and retrieval framework for large language model (LLM)-based recommender systems (RecSys).
Our strategy, Masked Vector-Quantized (MQ) Tokenizer, quantizes the masked user/item representations learned from collaborative filtering into discrete tokens.
Our generative retrieval paradigm is designed to efficiently recommend top-$K$ items for users while eliminating the need for auto-regressive decoding and beam search.
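For intuition, here is a toy vector-quantization sketch in NumPy; the two-level residual quantization is an assumption for illustration, and the masking step of the real MQ tokenizer is omitted:

```python
# A collaborative-filtering embedding is mapped to the ids of its nearest
# codebook entries, yielding discrete tokens an LLM can consume.
import numpy as np

rng = np.random.default_rng(0)
codebooks = [rng.normal(size=(256, 32)) for _ in range(2)]  # 2 hypothetical levels

def quantize(cf_embedding: np.ndarray) -> list[int]:
    tokens, residual = [], cf_embedding
    for book in codebooks:
        idx = int(np.argmin(np.linalg.norm(book - residual, axis=1)))
        tokens.append(idx)
        residual = residual - book[idx]  # residual quantization (assumed)
    return tokens

item_embedding = rng.normal(size=32)  # e.g., from matrix factorization
print(quantize(item_embedding))       # e.g., [17, 203]
```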
arXiv Detail & Related papers (2024-06-15T00:07:44Z)
- Improving LLMs for Recommendation with Out-Of-Vocabulary Tokens [51.584024345378005]
We show how to effectively tokenize users and items in large language model (LLM)-based recommender systems.
We emphasize the role of out-of-vocabulary (OOV) tokens in addition to the in-vocabulary ones.
Our proposed framework outperforms existing state-of-the-art methods across various downstream recommendation tasks.
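A minimal sketch of wiring OOV user/item tokens into an LLM with the standard Hugging Face tokenizer APIs follows; how the new embeddings are initialized and trained, which is the paper's actual contribution, is not shown:

```python
# Register out-of-vocabulary user/item tokens so each becomes a single id.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Hypothetical OOV tokens: one per user/item instead of spelled-out titles.
new_tokens = ["<user_42>", "<item_1001>", "<item_1002>"]
num_added = tokenizer.add_tokens(new_tokens)
model.resize_token_embeddings(len(tokenizer))  # new embedding rows start untrained

ids = tokenizer("<user_42> watched <item_1001>")["input_ids"]
print(num_added, ids)  # each OOV token is now a single id
```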
arXiv Detail & Related papers (2024-06-12T17:59:05Z)
- LlamaRec: Two-Stage Recommendation using Large Language Models for Ranking [10.671747198171136]
We propose a two-stage framework using large language models for ranking-based recommendation (LlamaRec).
In particular, we use small-scale sequential recommenders to retrieve candidates based on the user interaction history.
LlamaRec consistently achieves superior performance across datasets in both recommendation accuracy and efficiency.
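A hedged sketch of the two-stage shape follows; both stages are crude stand-ins here, whereas LlamaRec uses a small sequential recommender for retrieval and scores candidates via a verbalizer over LLM output logits:

```python
# Stage 1 prunes the catalog cheaply; stage 2 (an "LLM ranker", stubbed)
# orders the short list.
def retrieve_candidates(history: list[str], catalog: list[str], k: int = 5) -> list[str]:
    # Stand-in for a small sequential recommender: crude token overlap.
    hist_words = {w for h in history for w in h.lower().split()}
    def score(item: str) -> int:
        return len(set(item.lower().split()) & hist_words)
    return sorted((c for c in catalog if c not in history), key=score, reverse=True)[:k]

def llm_rank(history: list[str], candidates: list[str]) -> list[str]:
    # Placeholder for a single LLM forward pass that ranks candidates.
    return sorted(candidates)  # hypothetical ordering

catalog = ["Space Opera II", "Romantic Getaway", "Space Heist", "Cooking 101"]
history = ["Space Opera"]
print(llm_rank(history, retrieve_candidates(history, catalog, k=2)))
```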
arXiv Detail & Related papers (2023-10-25T06:23:48Z)
- How to Index Item IDs for Recommendation Foundation Models [49.425959632372425]
Recommendation foundation models utilize large language models (LLMs) for recommendation by converting recommendation tasks into natural language tasks.
To avoid generating excessively long text and hallucinated recommendations, creating LLM-compatible item IDs is essential.
We propose four simple yet effective solutions, including sequential indexing, collaborative indexing, semantic (content-based) indexing, and hybrid indexing.
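As a concrete example of the simplest of the four strategies, here is a sketch of sequential indexing; the id offset and interaction data are hypothetical, and collaborative, semantic, and hybrid indexing are not shown:

```python
# Sequential indexing: items receive consecutive integer ids in order of
# first appearance in interaction data, so items consumed together tend to
# get nearby (and thus token-overlapping) ids.
interactions = [  # hypothetical user -> chronological item sequences
    ["shoes", "socks", "laces"],
    ["socks", "laces", "insoles"],
]

item_id: dict[str, int] = {}
for seq in interactions:
    for item in seq:
        if item not in item_id:
            item_id[item] = 1000 + len(item_id)  # offset avoids clashing ids

print(item_id)  # {'shoes': 1000, 'socks': 1001, 'laces': 1002, 'insoles': 1003}
```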
arXiv Detail & Related papers (2023-05-11T05:02:37Z)