Prompt Tuning for Item Cold-start Recommendation
- URL: http://arxiv.org/abs/2412.18082v1
- Date: Tue, 24 Dec 2024 01:38:19 GMT
- Title: Prompt Tuning for Item Cold-start Recommendation
- Authors: Yuezihan Jiang, Gaode Chen, Wenhan Zhang, Jingchi Wang, Yinjie Jiang, Qi Zhang, Jingjian Lin, Peng Jiang, Kaigui Bian
- Abstract summary: The item cold-start problem is crucial for online recommender systems, as the success of the cold-start phase determines whether items can transition into popular ones.
Prompt learning, a powerful technique used in natural language processing (NLP) to address zero- or few-shot problems, has been adapted for recommender systems to tackle similar challenges.
We propose to leverage high-value positive feedback, termed pinnacle feedback, as prompt information to resolve both problems simultaneously.
- Score: 21.073232866618554
- License:
- Abstract: The item cold-start problem is crucial for online recommender systems, as the success of the cold-start phase determines whether items can transition into popular ones. Prompt learning, a powerful technique used in natural language processing (NLP) to address zero- or few-shot problems, has been adapted for recommender systems to tackle similar challenges. However, existing methods typically rely on content-based properties or text descriptions for prompting, which we argue may be suboptimal for cold-start recommendation due to 1) semantic gaps with recommender tasks, and 2) model bias caused by warm-up items, which contribute most of the positive feedback to the model; this bias is at the core of the cold-start problem and hinders recommendation quality on cold-start items. We propose to leverage high-value positive feedback, termed pinnacle feedback, as prompt information to resolve both problems simultaneously. We experimentally show that, compared with the content descriptions used in existing works, positive feedback is more suitable as prompt information because it bridges the semantic gaps. In addition, we propose item-wise personalized prompt networks to encode pinnacle feedback and relieve the model bias caused by the positive-feedback dominance problem. Extensive experiments on four real-world datasets demonstrate the superiority of our model over state-of-the-art methods. Moreover, PROMO has been successfully deployed on a popular billion-user-scale commercial short-video sharing platform, achieving remarkable performance gains across various commercial metrics in cold-start scenarios.
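The abstract describes the mechanism only at a high level: pinnacle feedback (high-value positive interactions) is encoded by an item-wise personalized prompt network and used as the prompt for a cold-start item in place of content or text descriptions. Below is a minimal PyTorch sketch of what such a prompt network could look like; the module name, the mean-pooling of pinnacle-feedback user embeddings, the number of prompt tokens, and all tensor shapes are illustrative assumptions, not the architecture actually used by PROMO.

```python
import torch
import torch.nn as nn


class PinnacleFeedbackPromptNet(nn.Module):
    """Hypothetical item-wise prompt network (illustrative only).

    Pools the embeddings of users who gave high-value ("pinnacle")
    positive feedback to a cold-start item and maps them to a small
    set of soft prompt vectors prepended to the item representation.
    """

    def __init__(self, emb_dim: int = 64, n_prompts: int = 4):
        super().__init__()
        self.emb_dim = emb_dim
        self.n_prompts = n_prompts
        # Small MLP that turns pooled pinnacle feedback into prompt tokens.
        self.prompt_proj = nn.Sequential(
            nn.Linear(emb_dim, emb_dim),
            nn.ReLU(),
            nn.Linear(emb_dim, n_prompts * emb_dim),
        )

    def forward(
        self,
        item_emb: torch.Tensor,            # [B, D] cold-start item embeddings
        pinnacle_user_embs: torch.Tensor,  # [B, K, D] users with pinnacle feedback
        pinnacle_mask: torch.Tensor,       # [B, K] 1 = real feedback, 0 = padding
    ) -> torch.Tensor:
        mask = pinnacle_mask.unsqueeze(-1)                                        # [B, K, 1]
        pooled = (pinnacle_user_embs * mask).sum(1) / mask.sum(1).clamp(min=1.0)  # [B, D]
        prompts = self.prompt_proj(pooled).view(-1, self.n_prompts, self.emb_dim) # [B, P, D]
        # Prepend feedback-derived prompt tokens to the item token, in place
        # of the content/text prompts used by prior prompt-based recommenders.
        return torch.cat([prompts, item_emb.unsqueeze(1)], dim=1)                 # [B, P+1, D]


# Usage sketch with random tensors standing in for learned embeddings.
net = PinnacleFeedbackPromptNet(emb_dim=64, n_prompts=4)
item = torch.randn(8, 64)
feedback_users = torch.randn(8, 10, 64)
mask = torch.ones(8, 10)
prompted_item = net(item, feedback_users, mask)  # shape: [8, 5, 64]
```

In a full system the prompt-augmented sequence would be fed to the recommender backbone; how PROMO actually combines and trains the prompts is specified in the paper, not in this sketch.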
Related papers
- Cold-Start Recommendation towards the Era of Large Language Models (LLMs): A Comprehensive Survey and Roadmap [78.26201062505814]
Cold-start problem is one of the long-standing challenges in recommender systems.
Due to the diversification of internet platforms and the exponential growth of users and items, the importance of cold-start recommendation (CSR) is becoming increasingly evident.
This paper provides a comprehensive review and discussion on the roadmap, related literature, and future directions of CSR.
arXiv Detail & Related papers (2025-01-03T18:51:18Z)
- Online Item Cold-Start Recommendation with Popularity-Aware Meta-Learning [14.83192161148111]
We propose a model-agnostic recommendation algorithm called Popularity-Aware Meta-learning (PAM) to address the item cold-start problem.
PAM divides incoming data into different meta-learning tasks by predefined item popularity thresholds.
This task-fixing design significantly reduces additional computation and storage costs compared to offline methods.
arXiv Detail & Related papers (2024-11-18T01:30:34Z)
- Language-Model Prior Overcomes Cold-Start Items [14.370472820496802]
The growth of recommender systems (RecSys) is driven by digitization and the need for personalized content in areas such as e-commerce and video streaming.
Existing solutions for the cold-start problem, such as content-based recommenders and hybrid methods, leverage item metadata to determine item similarities.
This paper introduces a novel approach for cold-start item recommendation that utilizes the language model (LM) to estimate item similarities.
arXiv Detail & Related papers (2024-11-13T22:45:52Z)
- Beyond Thumbs Up/Down: Untangling Challenges of Fine-Grained Feedback for Text-to-Image Generation [67.88747330066049]
Fine-grained feedback captures nuanced distinctions in image quality and prompt-alignment.
We show that its superiority over coarse-grained feedback is not automatic.
We identify key challenges in eliciting and utilizing fine-grained feedback.
arXiv Detail & Related papers (2024-06-24T17:19:34Z)
- A First Look at Selection Bias in Preference Elicitation for Recommendation [64.44255178199846]
We study the effect of selection bias in preference elicitation on the resulting recommendations.
A big hurdle is the lack of any publicly available dataset that has preference elicitation interactions.
We propose a simulation of a topic-based preference elicitation process.
arXiv Detail & Related papers (2024-05-01T14:56:56Z)
- Cold & Warm Net: Addressing Cold-Start Users in Recommender Systems [10.133475523630139]
Cold-start recommendation is one of the major challenges faced by recommender systems (RS).
In this paper, we propose Cold & Warm Net based on expert models that are responsible for modeling cold-start and warm-up users, respectively.
The proposed model has also been deployed on an industrial short video platform and achieves a significant increase in app dwell time and user retention rate.
arXiv Detail & Related papers (2023-09-27T13:31:43Z)
- Could Small Language Models Serve as Recommenders? Towards Data-centric Cold-start Recommendations [38.91330250981614]
We present PromptRec, a simple but effective approach based on in-context learning of language models.
We propose to enhance small language models for recommender systems with a data-centric pipeline.
To the best of our knowledge, this is the first study to tackle the system cold-start recommendation problem.
arXiv Detail & Related papers (2023-06-29T18:50:12Z)
- Learning to Learn a Cold-start Sequential Recommender [70.5692886883067]
The cold-start recommendation is an urgent problem in contemporary online applications.
We propose a meta-learning based cold-start sequential recommendation framework called metaCSR.
metaCSR is able to learn common patterns from regular users' behaviors.
arXiv Detail & Related papers (2021-10-18T08:11:24Z)
- Cold-start Sequential Recommendation via Meta Learner [10.491428090228768]
We propose a Meta-learning-based Cold-Start Sequential Recommendation Framework, namely Mecos, to mitigate the item cold-start problem in sequential recommendation.
Mecos effectively extracts user preference from limited interactions and learns to match the target cold-start item with the potential user.
arXiv Detail & Related papers (2020-12-10T05:23:13Z)
- Addressing the Cold-Start Problem in Outfit Recommendation Using Visual Preference Modelling [51.147871738838305]
This paper attempts to address the cold-start problem for new users by leveraging a novel visual preference modelling approach.
We demonstrate the use of our approach with feature-weighted clustering to personalise occasion-oriented outfit recommendation.
arXiv Detail & Related papers (2020-08-04T10:07:09Z)
- Seamlessly Unifying Attributes and Items: Conversational Recommendation for Cold-Start Users [111.28351584726092]
We consider the conversational recommendation for cold-start users, where a system can both ask the attributes from and recommend items to a user interactively.
Our Conversational Thompson Sampling (ConTS) model holistically solves all questions in conversational recommendation by choosing the arm with the maximal reward to play.
arXiv Detail & Related papers (2020-05-23T08:56:37Z)