Keyword-driven Retrieval-Augmented Large Language Models for Cold-start User Recommendations
- URL: http://arxiv.org/abs/2405.19612v2
- Date: Sun, 08 Sep 2024 20:32:47 GMT
- Title: Keyword-driven Retrieval-Augmented Large Language Models for Cold-start User Recommendations
- Authors: Hai-Dang Kieu, Minh Duc Nguyen, Thanh-Son Nguyen, Dung D. Le
- Abstract summary: We introduce KALM4Rec, a framework to address the problem of cold-start user restaurant recommendations.
KALM4Rec operates in two main stages: candidates retrieval and LLM-based candidates re-ranking.
Our evaluation, using a Yelp restaurant dataset with user reviews from three English-speaking cities, shows that our proposed framework significantly improves recommendation quality.
- Score: 5.374800961359305
- Abstract: Recent advancements in Large Language Models (LLMs) have shown significant potential in enhancing recommender systems. However, addressing the cold-start recommendation problem, where users lack historical data, remains a considerable challenge. In this paper, we introduce KALM4Rec (Keyword-driven Retrieval-Augmented Large Language Models for Cold-start User Recommendations), a novel framework specifically designed to tackle this problem by requiring only a few input keywords from users in a practical scenario of cold-start user restaurant recommendations. KALM4Rec operates in two main stages: candidates retrieval and LLM-based candidates re-ranking. In the first stage, keyword-driven retrieval models are used to identify potential candidates, addressing LLMs' limitations in processing extensive tokens and reducing the risk of generating misleading information. In the second stage, we employ LLMs with various prompting strategies, including zero-shot and few-shot techniques, to re-rank these candidates by integrating multiple examples directly into the LLM prompts. Our evaluation, using a Yelp restaurant dataset with user reviews from three English-speaking cities, shows that our proposed framework significantly improves recommendation quality. Specifically, the integration of in-context instructions with LLMs for re-ranking markedly enhances the performance of the cold-start user recommender system.
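The two-stage design lends itself to a compact sketch. The Python below is illustrative only: the TF-IDF retriever and all function names (retrieve_candidates, build_rerank_prompt) are assumptions, since the paper evaluates several keyword-driven retrievers and prompting strategies rather than prescribing one implementation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def retrieve_candidates(user_keywords, restaurant_docs, k=20):
    """Stage 1: keyword-driven retrieval. Scores each restaurant's review
    text against the cold-start user's keywords and keeps the top k,
    limiting how many tokens reach the LLM. TF-IDF stands in for the
    paper's retrieval models here."""
    vectorizer = TfidfVectorizer()
    doc_matrix = vectorizer.fit_transform(restaurant_docs.values())
    query_vec = vectorizer.transform([" ".join(user_keywords)])
    scores = cosine_similarity(query_vec, doc_matrix)[0]
    ranked = sorted(zip(restaurant_docs, scores), key=lambda p: p[1], reverse=True)
    return [name for name, _ in ranked[:k]]

def build_rerank_prompt(user_keywords, candidates, examples=()):
    """Stage 2: assemble a zero-shot (no examples) or few-shot prompt that
    asks the LLM to re-rank the retrieved candidates; `examples` are
    in-context demonstrations integrated directly into the prompt."""
    shots = "\n\n".join(examples) + "\n\n" if examples else ""
    return (
        shots
        + f"A user is interested in: {', '.join(user_keywords)}.\n"
        "Rank these restaurants from best to worst match:\n"
        + "\n".join(f"- {c}" for c in candidates)
    )
```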
Related papers
- Towards a Unified Paradigm: Integrating Recommendation Systems as a New Language in Large Models [33.02146794292383]
We introduce a new concept, "Integrating Recommendation Systems as a New Language in Large Models" (RSLLM).
RSLLM uses a unique prompting method that combines ID-based item embeddings from conventional recommendation models with textual item features.
It treats users' sequential behaviors as a distinct language and aligns the ID embeddings with the LLM's input space using a projector.
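A minimal sketch of that alignment step, assuming the projector is a single linear layer (the summary does not specify its architecture):

```python
import torch
import torch.nn as nn

class IDProjector(nn.Module):
    """Maps ID-based item embeddings from a conventional recommender
    (id_dim) into the LLM's input embedding space (llm_dim), so the user's
    sequential behavior becomes a sequence of soft tokens the LLM can
    consume alongside text tokens. A single linear layer is an assumption."""
    def __init__(self, id_dim: int, llm_dim: int):
        super().__init__()
        self.proj = nn.Linear(id_dim, llm_dim)

    def forward(self, id_embeddings: torch.Tensor) -> torch.Tensor:
        # (batch, seq_len, id_dim) -> (batch, seq_len, llm_dim)
        return self.proj(id_embeddings)
```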
arXiv Detail & Related papers (2024-12-22T09:08:46Z)
- Large Language Models meet Collaborative Filtering: An Efficient All-round LLM-based Recommender System [19.8986219047121]
Collaborative filtering recommender systems (CF-RecSys) have shown successful results in enhancing the user experience on social media and e-commerce platforms.
Recent strategies have focused on leveraging modality information of user/items based on pre-trained modality encoders and Large Language Models.
We propose an efficient All-round LLM-based Recommender system, called A-LLMRec, that excels not only in the cold scenario but also in the warm scenario.
arXiv Detail & Related papers (2024-04-17T13:03:07Z)
- LLMTreeRec: Unleashing the Power of Large Language Models for Cold-Start Recommendations [67.57808826577678]
Large Language Models (LLMs) can model recommendation tasks as language analysis tasks and provide zero-shot results based on their vast open-world knowledge.
However, the large scale of the item corpus poses a challenge to LLMs, causing substantial token consumption that makes them impractical to deploy in real-world recommendation systems.
We introduce a tree-based LLM recommendation framework LLMTreeRec, which structures all items into an item tree to improve the efficiency of LLM's item retrieval.
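One plausible reading of the tree-based retrieval, with hypothetical names (TreeNode, llm_choose); the actual traversal policy is defined in the paper:

```python
from dataclasses import dataclass, field

@dataclass
class TreeNode:
    children: list = field(default_factory=list)  # empty on leaves
    item: str | None = None                       # set on leaves only

def llm_tree_retrieve(llm_choose, node, beam=1):
    """Descend the item tree, asking the LLM to pick promising child
    branches at each level instead of scoring the whole corpus; token
    cost then scales with tree depth rather than corpus size.
    `llm_choose(children)` is a hypothetical callable returning the
    children in the LLM's order of preference."""
    if not node.children:  # leaf: an actual item
        return [node.item]
    results = []
    for child in llm_choose(node.children)[:beam]:
        results.extend(llm_tree_retrieve(llm_choose, child, beam))
    return results
```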
arXiv Detail & Related papers (2024-03-31T14:41:49Z)
- CoLLM: Integrating Collaborative Embeddings into Large Language Models for Recommendation [60.2700801392527]
We introduce CoLLM, an innovative LLMRec methodology that seamlessly incorporates collaborative information into LLMs for recommendation.
CoLLM captures collaborative information through an external traditional model and maps it to the input token embedding space of LLM.
Extensive experiments validate that CoLLM adeptly integrates collaborative information into LLMs, resulting in enhanced recommendation performance.
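A minimal sketch of the injection step, assuming the mapped collaborative information occupies a single soft token at a placeholder position (the single-token treatment and names are assumptions):

```python
import torch

def splice_collab_embedding(text_embeds, collab_vec, position):
    """Insert a collaborative embedding, already mapped into the LLM's
    token embedding space, as one soft token at a placeholder position.
    text_embeds: (seq_len, llm_dim); collab_vec: (llm_dim,)."""
    return torch.cat(
        [text_embeds[:position], collab_vec.unsqueeze(0), text_embeds[position:]],
        dim=0,
    )
```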
arXiv Detail & Related papers (2023-10-30T12:25:00Z)
- LlamaRec: Two-Stage Recommendation using Large Language Models for Ranking [10.671747198171136]
We propose a two-stage framework using large language models for ranking-based recommendation (LlamaRec).
In particular, we use small-scale sequential recommenders to retrieve candidates based on the user interaction history.
LlamaRec consistently achieves superior results in both recommendation performance and efficiency across datasets.
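LlamaRec ranks with a verbalizer-style approach over output logits; a simplified sketch of that idea, with hypothetical argument names:

```python
import torch

def rank_by_verbalizer(next_token_logits, index_token_ids, candidates):
    """Rank candidates from the logits of their index tokens (A, B, C, ...)
    after a prompt that lists each candidate under an index letter. One
    forward pass replaces autoregressive generation; details are simplified."""
    scores = next_token_logits[index_token_ids]   # one logit per candidate
    order = torch.argsort(scores, descending=True)
    return [candidates[i] for i in order.tolist()]
```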
arXiv Detail & Related papers (2023-10-25T06:23:48Z)
- ReLLa: Retrieval-enhanced Large Language Models for Lifelong Sequential Behavior Comprehension in Recommendation [43.270424225285105]
We focus on adapting and empowering a pure large language model for zero-shot and few-shot recommendation tasks.
We propose Retrieval-enhanced Large Language models (ReLLa) for recommendation tasks in both zero-shot and few-shot settings.
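A minimal sketch of the retrieval idea, assuming cosine similarity over precomputed behavior embeddings (the scoring and names are assumptions):

```python
import numpy as np

def retrieve_relevant_behaviors(target_vec, history_vecs, history_texts, k=10):
    """Select the k historical behaviors most semantically relevant to the
    target item, rather than the k most recent, so a lifelong history fits
    in the prompt."""
    sims = history_vecs @ target_vec / (
        np.linalg.norm(history_vecs, axis=1) * np.linalg.norm(target_vec) + 1e-9
    )
    top = np.argsort(-sims)[:k]
    return [history_texts[i] for i in top]
```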
arXiv Detail & Related papers (2023-08-22T02:25:04Z)
- Query Rewriting for Retrieval-Augmented Large Language Models [139.242907155883]
Large Language Models (LLMs) serve as powerful, black-box readers in the retrieve-then-read pipeline.
This work introduces a new framework, Rewrite-Retrieve-Read, which replaces the previous retrieve-then-read pipeline for retrieval-augmented LLMs.
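The pipeline order is simple enough to sketch; `llm` and `search` below are hypothetical callables:

```python
def rewrite_retrieve_read(query, llm, search):
    """Rewrite-Retrieve-Read in order: an LLM first rewrites the query for
    the retriever, documents are retrieved, and the LLM then reads them to
    answer."""
    rewritten = llm(f"Rewrite this query for a search engine: {query}")
    context = "\n".join(search(rewritten))
    return llm(f"Answer using the context.\nContext:\n{context}\nQuestion: {query}")
```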
arXiv Detail & Related papers (2023-05-23T17:27:50Z)
- Large Language Models are Zero-Shot Rankers for Recommender Systems [76.02500186203929]
This work aims to investigate the capacity of large language models (LLMs) to act as the ranking model for recommender systems.
We show that LLMs have promising zero-shot ranking abilities but struggle to perceive the order of historical interactions.
We demonstrate that these issues can be alleviated using specially designed prompting and bootstrapping strategies.
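One such bootstrapping strategy can be sketched as rank aggregation over shuffled candidate orders; the aggregation rule and names below are assumptions:

```python
import random
from collections import defaultdict

def bootstrapped_rank(llm_rank, candidates, rounds=3, seed=0):
    """Rank the same candidate set several times under shuffled orders and
    aggregate by average position, mitigating the LLM's sensitivity to
    candidate order. `llm_rank` is a hypothetical callable returning a
    ranked list."""
    rng = random.Random(seed)
    totals = defaultdict(float)
    for _ in range(rounds):
        shuffled = candidates[:]
        rng.shuffle(shuffled)
        for position, item in enumerate(llm_rank(shuffled)):
            totals[item] += position
    return sorted(candidates, key=lambda c: totals[c])  # lower avg position = better
```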
arXiv Detail & Related papers (2023-05-15T17:57:39Z)
- Recommendation as Instruction Following: A Large Language Model Empowered Recommendation Approach [83.62750225073341]
We consider recommendation as instruction following by large language models (LLMs).
We first design a general instruction format for describing the preference, intention, task form and context of a user in natural language.
Then we manually design 39 instruction templates and automatically generate a large amount of user-personalized instruction data.
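For illustration, a hypothetical template in the spirit of those 39, with preference, intention, task form, and context slotted into natural language (the exact wording is an assumption):

```python
# A hypothetical instruction template; the real templates are manually
# designed in the paper and their wording is not reproduced here.
TEMPLATE = (
    "The user prefers {preference}. They currently intend to {intention}. "
    "Task: {task_form}. Context: {context}. Recommend suitable items."
)

example = TEMPLATE.format(
    preference="independent coffee shops with quiet seating",
    intention="find a place to work on a weekday morning",
    task_form="produce a ranked list of 5 items",
    context="downtown area, no prior interaction history",
)
```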
arXiv Detail & Related papers (2023-05-11T17:39:07Z)
- Learning to Learn a Cold-start Sequential Recommender [70.5692886883067]
Cold-start recommendation is a pressing problem in contemporary online applications.
We propose a meta-learning based cold-start sequential recommendation framework called metaCSR.
metaCSR is able to learn common patterns from regular users' behaviors.
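For illustration only, a generic MAML-style inner/outer step sketches the meta-learning idea; metaCSR's actual initialization and adaptation components are more involved than this:

```python
import torch
from torch.func import functional_call

def maml_step(model, loss_fn, x_support, y_support, x_query, y_query, inner_lr=0.01):
    """Generic meta-learning step: adapt to a user's support interactions,
    then evaluate the adapted parameters on held-out query data."""
    params = dict(model.named_parameters())
    # Inner loop: one gradient step on the support set.
    inner_loss = loss_fn(model(x_support), y_support)
    grads = torch.autograd.grad(inner_loss, params.values(), create_graph=True)
    adapted = {name: p - inner_lr * g for (name, p), g in zip(params.items(), grads)}
    # Outer loop: backpropagating this loss updates the shared meta-initialization.
    outer_loss = loss_fn(functional_call(model, adapted, (x_query,)), y_query)
    return outer_loss
```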
arXiv Detail & Related papers (2021-10-18T08:11:24Z)