LLM Reasoning for Cold-Start Item Recommendation
- URL: http://arxiv.org/abs/2511.18261v1
- Date: Sun, 23 Nov 2025 03:22:53 GMT
- Title: LLM Reasoning for Cold-Start Item Recommendation
- Authors: Shijun Li, Yu Wang, Jin Wang, Ying Li, Joydeep Ghosh, Anne Cocos,
- Abstract summary: Large Language Models (LLMs) have shown significant potential for improving recommendation systems. We propose novel reasoning strategies designed for cold-start item recommendations within the Netflix domain. Our method utilizes the advanced reasoning capabilities of LLMs to effectively infer user preferences.
- Score: 11.180516000970528
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large Language Models (LLMs) have shown significant potential for improving recommendation systems through their inherent reasoning capabilities and extensive knowledge base. Yet, existing studies predominantly address warm-start scenarios with abundant user-item interaction data, leaving the more challenging cold-start scenarios, where sparse interactions hinder traditional collaborative filtering methods, underexplored. To address this limitation, we propose novel reasoning strategies designed for cold-start item recommendations within the Netflix domain. Our method utilizes the advanced reasoning capabilities of LLMs to effectively infer user preferences, particularly for newly introduced or rarely interacted items. We systematically evaluate supervised fine-tuning, reinforcement learning-based fine-tuning, and hybrid approaches that combine both methods to optimize recommendation performance. Extensive experiments on real-world data demonstrate significant improvements in both methodological efficacy and practical performance in cold-start recommendation contexts. Remarkably, our reasoning-based fine-tuned models outperform Netflix's production ranking model by up to 8% in certain cases.
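For orientation, the sketch below illustrates the kind of two-stage flow the abstract describes: a supervised fine-tuning pass on labeled preference data followed by an RL-style (REINFORCE) refinement. It is a minimal, self-contained toy in PyTorch, not the paper's implementation; the tiny scorer stands in for the LLM, the synthetic tensors stand in for real user/item data, and the 0/1 reward stands in for whatever ranking or engagement reward the authors actually optimize.

```python
# Minimal sketch of an SFT -> RL hybrid fine-tuning flow (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

N_USERS, N_ITEMS, DIM = 64, 20, 16                  # toy sizes, not from the paper
user_feats = torch.randn(N_USERS, DIM)              # stand-in for user preference context
item_feats = torch.randn(N_ITEMS, DIM)              # stand-in for cold-start item descriptions
positives = torch.randint(0, N_ITEMS, (N_USERS,))   # one "liked" item per user (synthetic)

class Scorer(nn.Module):
    """Tiny stand-in for an LLM-based ranker: scores every item for a user."""
    def __init__(self, dim):
        super().__init__()
        self.user_proj = nn.Linear(dim, dim)
        self.item_proj = nn.Linear(dim, dim)

    def forward(self, users, items):
        # (batch, dim) x (n_items, dim) -> (batch, n_items) logits
        return self.user_proj(users) @ self.item_proj(items).T

model = Scorer(DIM)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

# Phase 1: supervised fine-tuning on labeled preference data (cross-entropy over items).
for _ in range(200):
    logits = model(user_feats, item_feats)
    loss = F.cross_entropy(logits, positives)
    opt.zero_grad(); loss.backward(); opt.step()

# Phase 2: RL-style refinement (REINFORCE). The reward is a placeholder:
# 1 if the sampled item matches the held-out positive, else 0.
for _ in range(200):
    logits = model(user_feats, item_feats)
    dist = torch.distributions.Categorical(logits=logits)
    actions = dist.sample()                          # recommend one item per user
    reward = (actions == positives).float()          # hypothetical reward signal
    loss = -(dist.log_prob(actions) * (reward - reward.mean())).mean()
    opt.zero_grad(); loss.backward(); opt.step()

hit1 = (model(user_feats, item_feats).argmax(dim=1) == positives).float().mean()
print(f"toy Hit@1 after SFT + RL: {hit1:.2f}")
```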
Related papers
- Tree of Preferences for Diversified Recommendation [54.183647833064136]
We study diversified recommendation from a data-bias perspective. Inspired by the outstanding performance of large language models (LLMs) in zero-shot inference leveraging world knowledge, we propose a novel approach.
arXiv Detail & Related papers (2025-12-24T04:13:17Z) - Think before Recommendation: Autonomous Reasoning-enhanced Recommender [25.883091131835172]
RecZero is a reinforcement learning-based recommendation paradigm that abandons the traditional multi-model and multi-stage distillation approach. The paper explores a hybrid paradigm, RecOne, which combines supervised fine-tuning with RL, initializing the model with cold-start reasoning samples and further optimizing it with RL.
arXiv Detail & Related papers (2025-10-27T07:26:32Z) - DeepRec: Towards a Deep Dive Into the Item Space with Large Language Model Based Recommendation [83.21140655248624]
Large language models (LLMs) have been introduced into recommender systems (RSs). We propose DeepRec, a novel LLM-based RS that enables autonomous multi-turn interactions between LLMs and TRMs for deep exploration of the item space. Experiments on public datasets demonstrate that DeepRec significantly outperforms both traditional and LLM-based baselines.
arXiv Detail & Related papers (2025-05-22T15:49:38Z) - From Reviews to Dialogues: Active Synthesis for Zero-Shot LLM-based Conversational Recommender System [49.57258257916805]
Large Language Models (LLMs) demonstrate strong zero-shot recommendation capabilities. Practical applications often favor smaller, internally managed recommender models due to scalability, interpretability, and data privacy constraints. We propose an active data augmentation framework that synthesizes conversational training data by leveraging black-box LLMs guided by active learning techniques.
arXiv Detail & Related papers (2025-04-21T23:05:47Z) - LLMInit: A Free Lunch from Large Language Models for Selective Initialization of Recommendation [34.227734210743904]
Collaborative filtering models have shown strong performance in capturing user-item interactions for recommendation systems. The emergence of large language models (LLMs) like GPT and LLaMA presents new possibilities for enhancing recommendation performance.
arXiv Detail & Related papers (2025-03-03T18:41:59Z) - Enhanced Recommendation Combining Collaborative Filtering and Large Language Models [0.0]
Large Language Models (LLMs) provide a new breakthrough for recommendation systems. This paper proposes an enhanced recommendation method that combines collaborative filtering and LLMs. The results show that the hybrid model based on collaborative filtering and LLMs significantly improves precision, recall, and user satisfaction.
arXiv Detail & Related papers (2024-12-25T00:23:53Z) - SELF: Surrogate-light Feature Selection with Large Language Models in Deep Recommender Systems [51.09233156090496]
SELF is a SurrogatE-Light Feature selection method for deep recommender systems. It integrates semantic reasoning from Large Language Models with task-specific learning from surrogate models. Comprehensive experiments on three public datasets from real-world recommender platforms validate the effectiveness of SELF.
arXiv Detail & Related papers (2024-12-11T16:28:18Z) - STAR: A Simple Training-free Approach for Recommendations using Large Language Models [36.18841135511487]
Current state-of-the-art methods rely on fine-tuning large language models (LLMs) to achieve optimal results. We propose a framework that utilizes LLMs and can be applied to various recommendation tasks without the need for fine-tuning. Our method achieves Hits@10 improvements of +23.8% on Beauty, +37.5% on Toys & Games, and -1.8% on Sports & Outdoors (see the Hits@10 sketch after this list).
arXiv Detail & Related papers (2024-10-21T19:34:40Z) - LEARN: Knowledge Adaptation from Large Language Model to Recommendation for Practical Industrial Application [54.984348122105516]
We propose an Llm-driven knowlEdge Adaptive RecommeNdation (LEARN) framework that synergizes open-world knowledge with collaborative knowledge.
arXiv Detail & Related papers (2024-05-07T04:00:30Z) - Large Language Models meet Collaborative Filtering: An Efficient All-round LLM-based Recommender System [19.8986219047121]
Collaborative filtering recommender systems (CF-RecSys) have shown successful results in enhancing the user experience on social media and e-commerce platforms.
Recent strategies have focused on leveraging modality information of users/items based on pre-trained modality encoders and Large Language Models.
We propose an efficient All-round LLM-based Recommender system, called A-LLMRec, that excels not only in the cold scenario but also in the warm scenario.
arXiv Detail & Related papers (2024-04-17T13:03:07Z) - Unlocking the Potential of User Feedback: Leveraging Large Language Model as User Simulator to Enhance Dialogue System [65.93577256431125]
We propose an alternative approach called User-Guided Response Optimization (UGRO) that pairs an LLM with a smaller task-oriented dialogue (TOD) model.
This approach uses the LLM as an annotation-free user simulator to assess dialogue responses, combining it with smaller fine-tuned end-to-end TOD models.
Our approach outperforms previous state-of-the-art (SOTA) results.
arXiv Detail & Related papers (2023-06-16T13:04:56Z) - Learning to Learn a Cold-start Sequential Recommender [70.5692886883067]
Cold-start recommendation is an urgent problem in contemporary online applications.
We propose a meta-learning-based cold-start sequential recommendation framework called metaCSR.
metaCSR is able to learn common patterns from regular users' behaviors.
arXiv Detail & Related papers (2021-10-18T08:11:24Z)
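Since several of the papers above report Hits@10 (e.g., STAR's relative improvements on Beauty and Toys & Games), the short sketch below shows how that metric is typically computed: for each user, check whether the held-out positive item appears among the top-10 ranked candidates. The scores and ground-truth items here are synthetic placeholders, not data from any of the papers listed.

```python
# Minimal Hits@10 sketch: fraction of users whose held-out item lands in the top-10.
import numpy as np

def hits_at_k(scores: np.ndarray, positives: np.ndarray, k: int = 10) -> float:
    """scores: (n_users, n_items) ranking scores; positives: (n_users,) held-out item ids."""
    top_k = np.argsort(-scores, axis=1)[:, :k]         # indices of the k highest-scored items
    hits = (top_k == positives[:, None]).any(axis=1)   # did the true item make the cut?
    return float(hits.mean())

rng = np.random.default_rng(0)
scores = rng.normal(size=(1000, 500))                  # fake model scores
positives = rng.integers(0, 500, size=1000)            # fake held-out items
print(f"Hits@10 = {hits_at_k(scores, positives):.3f}") # ~0.02 for random scores (10/500)
```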