Are Large Language Models Really Effective for Training-Free Cold-Start Recommendation?
- URL: http://arxiv.org/abs/2512.13001v1
- Date: Mon, 15 Dec 2025 05:47:07 GMT
- Title: Are Large Language Models Really Effective for Training-Free Cold-Start Recommendation?
- Authors: Genki Kusano, Kenya Abe, Kunihiro Takeoka
- Abstract summary: This study focuses on training-free recommendation, where no task-specific training is performed. Large language models (LLMs) have recently been explored as a promising solution, and numerous methods have been proposed. We present the first controlled experiments that systematically evaluate LLMs and text embedding models (TEMs) in the same setting.
- Score: 3.446483216812751
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recommender systems usually rely on large-scale interaction data to learn from users' past behaviors and make accurate predictions. However, real-world applications often face situations where no training data is available, such as when launching new services or handling entirely new users. In such cases, conventional approaches cannot be applied. This study focuses on training-free recommendation, where no task-specific training is performed, and particularly on *training-free cold-start recommendation* (TFCSR), the more challenging case where the target user has no interactions. Large language models (LLMs) have recently been explored as a promising solution, and numerous methods have been proposed. As text embedding models (TEMs) have grown more capable, they are increasingly recognized as applicable to training-free recommendation, but no prior work has directly compared LLMs and TEMs under identical conditions. We present the first controlled experiments that systematically evaluate these two approaches in the same setting. The results show that TEMs outperform LLM rerankers, and this trend holds not only in cold-start settings but also in warm-start settings with rich interactions. These findings indicate that direct LLM ranking is not the only viable option, contrary to common belief, and that TEM-based approaches provide a stronger and more scalable basis for training-free recommendation.
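To make the contrast concrete, below is a minimal sketch of the two training-free pipelines the abstract compares: TEM-based cosine-similarity ranking versus LLM reranking. The embedding model, item texts, user profile, and prompt wording are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the two training-free pipelines (illustrative, not the
# authors' exact setup): TEM similarity ranking vs. LLM reranking.
from sentence_transformers import SentenceTransformer
import numpy as np

# --- TEM-based ranking: embed texts once, rank by cosine similarity ---
tem = SentenceTransformer("all-MiniLM-L6-v2")  # any TEM; model choice is an assumption

item_texts = [
    "Wireless noise-cancelling headphones",
    "Stainless steel chef's knife",
    "Trail running shoes for rocky terrain",
]
# Cold start: the target user has no interactions, so the query text must
# come from profile/metadata rather than a behavior history.
user_text = "New user who signed up for hiking and outdoor sports content"

item_emb = tem.encode(item_texts, normalize_embeddings=True)
user_emb = tem.encode([user_text], normalize_embeddings=True)
scores = (item_emb @ user_emb.T).ravel()  # dot product = cosine on normalized vectors
tem_ranking = np.argsort(-scores)         # candidate indices, best first

# --- LLM reranking: ask a chat model to order the same candidates ---
# (Prompt construction only; any chat-completion API could be plugged in.)
prompt = (
    f"User profile: {user_text}\n"
    "Rank these items from most to least relevant, by number:\n"
    + "\n".join(f"{i}. {t}" for i, t in enumerate(item_texts))
)
# llm_ranking = parse_ranking(llm.complete(prompt))  # hypothetical helper
```

Under this framing, the paper's result is that the first, cheaper pipeline tends to win: TEM similarity ranking outperformed LLM reranking in both cold-start and warm-start conditions.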
Related papers
- LLM Reasoning for Cold-Start Item Recommendation [11.180516000970528]
Large Language Models (LLMs) have shown significant potential for improving recommendation systems. We propose novel reasoning strategies designed for cold-start item recommendations within the Netflix domain. Our method utilizes the advanced reasoning capabilities of LLMs to effectively infer user preferences.
arXiv Detail & Related papers (2025-11-23T03:22:53Z) - Towards a Real-World Aligned Benchmark for Unlearning in Recommender Systems [49.766845975588275]
We propose a set of design desiderata and research questions to guide the development of a more realistic benchmark for unlearning in recommender systems. We argue for an unlearning setup that reflects the sequential, time-sensitive nature of real-world deletion requests. We present a preliminary experiment in a next-basket recommendation setting based on our proposed desiderata and find that unlearning also works for sequential recommendation models.
arXiv Detail & Related papers (2025-08-23T16:05:40Z) - Search-Based Credit Assignment for Offline Preference-Based Reinforcement Learning [83.64755389431971]
We introduce a Search-Based Preference Weighting (SPW) scheme to unify two feedback sources. For each transition in a preference-labeled trajectory, SPW searches for the most similar state-action pairs from expert demonstrations. These weights are then used to guide standard preference learning, enabling more accurate credit assignment.
arXiv Detail & Related papers (2025-08-21T07:41:45Z) - From Demonstrations to Rewards: Alignment Without Explicit Human Preferences [55.988923803469305]
In this paper, we propose a fresh perspective on learning alignment based on inverse reinforcement learning principles. Instead of relying on large preference datasets, we directly learn the reward model from demonstration data.
arXiv Detail & Related papers (2025-03-15T20:53:46Z) - AutoElicit: Using Large Language Models for Expert Prior Elicitation in Predictive Modelling [53.54623137152208]
We introduce AutoElicit to extract knowledge from large language models and construct priors for predictive models. We show these priors are informative and can be refined using natural language. We find that AutoElicit yields priors that can substantially reduce error over uninformative priors, using fewer labels, and consistently outperform in-context learning.
arXiv Detail & Related papers (2024-11-26T10:13:39Z) - Real-Time Personalization for LLM-based Recommendation with Customized In-Context Learning [57.28766250993726]
This work explores adapting to dynamic user interests without any model updates.
Existing Large Language Model (LLM)-based recommenders often lose the in-context learning ability during recommendation tuning.
We propose RecICL, which customizes recommendation-specific in-context learning for real-time recommendations.
arXiv Detail & Related papers (2024-10-30T15:48:36Z) - STAR: A Simple Training-free Approach for Recommendations using Large Language Models [36.18841135511487]
Current state-of-the-art methods rely on fine-tuning large language models (LLMs) to achieve optimal results. We propose a framework that utilizes LLMs and can be applied to various recommendation tasks without the need for fine-tuning. Our method achieves Hits@10 performance of +23.8% on Beauty, +37.5% on Toys & Games, and -1.8% on Sports & Outdoors.
arXiv Detail & Related papers (2024-10-21T19:34:40Z) - A Social-aware Gaussian Pre-trained Model for Effective Cold-start Recommendation [25.850274659792305]
We propose a novel recommendation model, the Social-aware Gaussian Pre-trained model (SGP), which encodes user social relations and interaction data at the pre-training stage in a Graph Neural Network (GNN).
Our experiments on three public datasets show that, in comparison to 16 competitive baselines, our SGP model significantly outperforms the best baseline by up to 7.7% in terms of NDCG@10.
In addition, we show that SGP effectively alleviates the cold-start problem, especially when users newly register to the system through their friends' suggestions.
arXiv Detail & Related papers (2023-11-27T13:04:33Z) - Is Meta-Learning the Right Approach for the Cold-Start Problem in Recommender Systems? [5.804718528857615]
We show that it is possible to obtain similar, or higher, performance on commonly used benchmarks for the cold-start problem without using meta-learning techniques.
We further show that an extremely simple modular approach using common representation learning techniques can perform comparably to meta-learning techniques specifically designed for the cold-start setting.
arXiv Detail & Related papers (2023-08-16T13:24:47Z) - Could Small Language Models Serve as Recommenders? Towards Data-centric Cold-start Recommendations [38.91330250981614]
We present PromptRec, a simple but effective approach based on in-context learning of language models.
We propose to enhance small language models for recommender systems with a data-centric pipeline.
To the best of our knowledge, this is the first study to tackle the system cold-start recommendation problem.
arXiv Detail & Related papers (2023-06-29T18:50:12Z) - Meta-Learning with Adaptive Weighted Loss for Imbalanced Cold-Start Recommendation [4.379304291229695]
We propose a novel sequential recommendation framework based on gradient-based meta-learning.
Our work is the first to tackle the impact of imbalanced ratings in cold-start sequential recommendation scenarios.
arXiv Detail & Related papers (2023-02-28T15:18:42Z) - Effective and Efficient Training for Sequential Recommendation using Recency Sampling [91.02268704681124]
We propose a novel Recency-based Sampling of Sequences training objective.
We show that the models enhanced with our method can achieve performance exceeding or very close to that of the state-of-the-art BERT4Rec.
arXiv Detail & Related papers (2022-07-06T13:06:31Z) - Learning to Learn a Cold-start Sequential Recommender [70.5692886883067]
The cold-start recommendation is an urgent problem in contemporary online applications.
We propose a meta-learning based cold-start sequential recommendation framework called metaCSR.
metaCSR holds the ability to learn the common patterns from regular users' behaviors.
arXiv Detail & Related papers (2021-10-18T08:11:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences of its use.