Large Language Model Simulator for Cold-Start Recommendation
- URL: http://arxiv.org/abs/2402.09176v2
- Date: Wed, 25 Dec 2024 12:38:33 GMT
- Title: Large Language Model Simulator for Cold-Start Recommendation
- Authors: Feiran Huang, Yuanchen Bei, Zhenghang Yang, Junyi Jiang, Hao Chen, Qijie Shen, Senzhang Wang, Fakhri Karray, Philip S. Yu
- Abstract summary: Cold items rely solely on content features, limiting their recommendation performance and impacting user experience and revenue.
Current models generate synthetic behavioral embeddings from content features but fail to address the core issue: the absence of historical behavior data.
We introduce the LLM Simulator framework, which leverages large language models to simulate user interactions for cold items.
- Score: 45.34030399042562
- Abstract: Recommending cold items remains a significant challenge in billion-scale online recommendation systems. While warm items benefit from historical user behaviors, cold items rely solely on content features, limiting their recommendation performance and impacting user experience and revenue. Current models generate synthetic behavioral embeddings from content features but fail to address the core issue: the absence of historical behavior data. To tackle this, we introduce the LLM Simulator framework, which leverages large language models to simulate user interactions for cold items, fundamentally addressing the cold-start problem. However, simply using an LLM to traverse all users would introduce significant complexity in billion-scale systems. To manage the computational complexity, we propose a coupled funnel ColdLLM framework for online recommendation. ColdLLM efficiently reduces the number of candidate users from billions to hundreds using a trained coupled filter, allowing the LLM to operate efficiently and effectively on the filtered set. Extensive experiments show that ColdLLM significantly surpasses baselines in cold-start recommendation on both Recall and NDCG metrics. A two-week A/B test further validates that ColdLLM effectively increases GMV during the cold-start period.
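The two-stage funnel the abstract describes can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the similarity-based filter, the prompt wording, and all function names are hypothetical stand-ins for the trained coupled filter and the LLM simulator.

```python
import heapq

def dot(a, b):
    """Plain dot product between two embedding vectors."""
    return sum(x * y for x, y in zip(a, b))

def coupled_filter(cold_item_emb, user_embs, k=200):
    """Stage 1 (illustrative): shrink the candidate set from all users to
    the top-k by similarity between the cold item's content embedding and
    each user's preference embedding. The paper's filter is a trained
    model; this similarity ranking is a stand-in."""
    scored = ((dot(cold_item_emb, emb), uid) for uid, emb in user_embs.items())
    return [uid for _, uid in heapq.nlargest(k, scored)]

def simulate_interactions(llm, item_desc, candidates, profiles):
    """Stage 2 (illustrative): the LLM judges each filtered candidate,
    yielding simulated (user, item) interactions for the cold item."""
    interactions = []
    for uid in candidates:
        prompt = (f"User profile: {profiles[uid]}\n"
                  f"Item: {item_desc}\n"
                  "Would this user interact with this item? Answer yes or no.")
        if llm(prompt).strip().lower().startswith("yes"):
            interactions.append(uid)
    return interactions
```

Because the LLM only sees the hundreds of survivors of stage 1, its per-item cost is independent of the billion-scale user base.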
Related papers
- Online Item Cold-Start Recommendation with Popularity-Aware Meta-Learning [14.83192161148111]
We propose a model-agnostic recommendation algorithm called Popularity-Aware Meta-learning (PAM) to address the item cold-start problem.
PAM divides incoming data into different meta-learning tasks by predefined item popularity thresholds.
This task-fixing design significantly reduces additional computation and storage costs compared to offline methods.
arXiv Detail & Related papers (2024-11-18T01:30:34Z) - Graph Neural Patching for Cold-Start Recommendations [16.08395433358279]
We introduce Graph Neural Patching for Cold-Start Recommendations (GNP).
GNP is a customized GNN framework with dual functionalities: GWarmer for modeling collaborative signals among existing warm users/items, and Patching Networks for simulating and enhancing GWarmer's performance on cold-start recommendations.
Extensive experiments on three benchmark datasets confirm GNP's superiority in recommending both warm and cold users/items.
arXiv Detail & Related papers (2024-10-18T07:44:12Z) - Keyword-driven Retrieval-Augmented Large Language Models for Cold-start User Recommendations [5.374800961359305]
We introduce KALM4Rec, a framework to address the problem of cold-start user restaurant recommendations.
KALM4Rec operates in two main stages: candidates retrieval and LLM-based candidates re-ranking.
Our evaluation, using a Yelp restaurant dataset with user reviews from three English-speaking cities, shows that our proposed framework significantly improves recommendation quality.
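The two stages KALM4Rec describes, candidate retrieval followed by LLM-based re-ranking, can be sketched like this. The keyword-overlap scoring, prompt, and function names are illustrative assumptions, not the paper's actual pipeline.

```python
def retrieve_candidates(user_keywords, restaurants, k=5):
    """Stage 1 (illustrative): score each restaurant by keyword overlap
    with the cold-start user's stated keywords and keep the top-k."""
    def overlap(r):
        return len(set(user_keywords) & set(r["keywords"]))
    return sorted(restaurants, key=overlap, reverse=True)[:k]

def rerank_with_llm(llm, user_keywords, candidates):
    """Stage 2 (illustrative): ask the LLM to re-order the retrieved
    candidates; the reply is parsed as a comma-separated name list."""
    names = ", ".join(c["name"] for c in candidates)
    order = llm(f"User likes: {user_keywords}. Rank these: {names}")
    by_name = {c["name"]: c for c in candidates}
    return [by_name[n] for n in order.split(", ") if n in by_name]
```

Stage 1 keeps the LLM's input small; stage 2 spends the LLM's reasoning only on the shortlist.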
arXiv Detail & Related papers (2024-05-30T02:00:03Z) - LLMTreeRec: Unleashing the Power of Large Language Models for Cold-Start Recommendations [67.57808826577678]
Large Language Models (LLMs) can model recommendation tasks as language analysis tasks and provide zero-shot results based on their vast open-world knowledge.
However, the large scale of the item corpus poses a challenge to LLMs, leading to substantial token consumption that makes them impractical to deploy in real-world recommendation systems.
We introduce a tree-based LLM recommendation framework LLMTreeRec, which structures all items into an item tree to improve the efficiency of LLM's item retrieval.
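The tree-based retrieval idea can be sketched as a simple descent: at each internal node the LLM picks one child, so prompt tokens grow with tree depth rather than with the size of the item corpus. The `Node` layout and prompt format below are assumptions for illustration, not LLMTreeRec's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    summary: str
    children: list = field(default_factory=list)
    items: list = field(default_factory=list)

def tree_retrieve(llm, root, query):
    """Descend the item tree: at each internal node, show the LLM the
    child summaries and follow the branch it picks by index. A leaf's
    items are the retrieval result."""
    node = root
    while node.children:
        options = "\n".join(f"{i}: {c.summary}"
                            for i, c in enumerate(node.children))
        reply = llm(f"Query: {query}\nChoose one branch by number:\n{options}")
        node = node.children[int(reply.strip())]
    return node.items
```

For a balanced tree over N items with branching factor B, the LLM reads only about log_B(N) short option lists per query instead of all N item descriptions.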
arXiv Detail & Related papers (2024-03-31T14:41:49Z) - CoLLM: Integrating Collaborative Embeddings into Large Language Models for Recommendation [60.2700801392527]
We introduce CoLLM, an innovative LLMRec methodology that seamlessly incorporates collaborative information into LLMs for recommendation.
CoLLM captures collaborative information through an external traditional model and maps it into the input token embedding space of the LLM.
Extensive experiments validate that CoLLM adeptly integrates collaborative information into LLMs, resulting in enhanced recommendation performance.
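The mapping CoLLM describes, from a collaborative embedding into the LLM's token-embedding space, can be sketched with a plain linear projection prepended as a soft token. The linear map is an illustrative stand-in for the paper's learned mapping module; dimensions and names are assumptions.

```python
def project(W, b, collab_emb):
    """Linear map out = W @ e + b, taking a d_collab-dim collaborative
    embedding to a d_token-dim vector in the token-embedding space."""
    return [sum(w * x for w, x in zip(row, collab_emb)) + bi
            for row, bi in zip(W, b)]

def with_collab_token(token_embs, W, b, collab_emb):
    """Prepend the projected collaborative vector as an extra 'soft
    token', so the LLM consumes collaborative signal alongside the
    ordinary token embeddings of the prompt."""
    return [project(W, b, collab_emb)] + token_embs
```

In practice the projection would be trained end-to-end with the recommendation objective; only the projection's inputs come from the frozen external collaborative model.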
arXiv Detail & Related papers (2023-10-30T12:25:00Z) - GPatch: Patching Graph Neural Networks for Cold-Start Recommendations [20.326139541161194]
Cold start is an essential and persistent problem in recommender systems.
State-of-the-art solutions rely on training hybrid models for both cold-start and existing users/items.
We propose a tailored GNN-based framework (GPatch) that contains two separate but correlated components.
arXiv Detail & Related papers (2022-09-25T13:16:39Z) - Learning to Learn a Cold-start Sequential Recommender [70.5692886883067]
The cold-start recommendation is an urgent problem in contemporary online applications.
We propose a meta-learning based cold-start sequential recommendation framework called metaCSR.
metaCSR holds the ability to learn the common patterns from regular users' behaviors.
arXiv Detail & Related papers (2021-10-18T08:11:24Z) - Privileged Graph Distillation for Cold Start Recommendation [57.918041397089254]
The cold start problem in recommender systems requires recommending to new users (items) based on attributes without any historical interaction records.
We propose a privileged graph distillation model (PGD).
Our proposed model is generally applicable to different cold start scenarios with new user, new item, or new user-new item.
arXiv Detail & Related papers (2021-05-31T14:05:27Z) - Cold-start Sequential Recommendation via Meta Learner [10.491428090228768]
We propose a Meta-learning-based Cold-Start Sequential Recommendation Framework, namely Mecos, to mitigate the item cold-start problem in sequential recommendation.
Mecos effectively extracts user preference from limited interactions and learns to match the target cold-start item with the potential user.
arXiv Detail & Related papers (2020-12-10T05:23:13Z) - Seamlessly Unifying Attributes and Items: Conversational Recommendation for Cold-Start Users [111.28351584726092]
We consider the conversational recommendation for cold-start users, where a system can both ask the attributes from and recommend items to a user interactively.
Our Conversational Thompson Sampling (ConTS) model holistically solves all questions in conversational recommendation by choosing the arm with the maximal reward to play.
arXiv Detail & Related papers (2020-05-23T08:56:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.