Could Small Language Models Serve as Recommenders? Towards Data-centric
Cold-start Recommendations
- URL: http://arxiv.org/abs/2306.17256v5
- Date: Mon, 4 Mar 2024 21:14:01 GMT
- Title: Could Small Language Models Serve as Recommenders? Towards Data-centric
Cold-start Recommendations
- Authors: Xuansheng Wu, Huachi Zhou, Yucheng Shi, Wenlin Yao, Xiao Huang,
Ninghao Liu
- Abstract summary: We present PromptRec, a simple but effective approach based on in-context learning of language models.
We propose to enhance small language models for recommender systems with a data-centric pipeline.
To the best of our knowledge, this is the first study to tackle the system cold-start recommendation problem.
- Score: 38.91330250981614
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recommender systems help users find relevant items based on their past
behavior. Personalized recommendation becomes challenging in the absence of
historical user-item interactions, a practical problem for startups known as
the system cold-start recommendation. While existing research addresses
cold-start issues for either users or items, we still lack solutions for system
cold-start scenarios. To tackle the problem, we propose PromptRec, a simple but
effective approach based on in-context learning of language models, where we
transform the recommendation task into the sentiment analysis task on natural
language containing user and item profiles. However, this naive approach
heavily relies on the strong in-context learning ability that emerges in large
language models, which can incur significant latency for online
recommendation. To address this challenge, we propose to enhance small language
models for recommender systems with a data-centric pipeline, which consists of:
(1) constructing a refined corpus for model pre-training; (2) constructing a
decomposed prompt template via prompt pre-training. They correspond to the
development of training data and inference data, respectively. The pipeline is
supported by a theoretical framework that formalizes the connection between
in-context recommendation and language modeling. To evaluate our approach, we
introduce a cold-start recommendation benchmark, and the results demonstrate
that the enhanced small language models can achieve comparable cold-start
recommendation performance to that of large models with only $17\%$ of the
inference time. To the best of our knowledge, this is the first study to tackle
the system cold-start recommendation problem. We believe our findings will
provide valuable insights for future work. The benchmark and implementations
are available at https://github.com/JacksonWuxs/PromptRec.
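The core idea above, casting recommendation as sentiment analysis over a verbalized user/item profile scored by a language model, can be sketched as follows. The template wording, function names, and the toy stand-in LM are illustrative assumptions, not the authors' exact design:

```python
# Sketch of in-context recommendation as sentiment analysis.
# The prompt template and the toy language model are assumptions for
# illustration; PromptRec's actual templates and models differ.

def build_prompt(user_profile, item_profile):
    """Verbalize a (user, item) pair into a natural-language prompt."""
    user_text = ", ".join(f"{k}: {v}" for k, v in user_profile.items())
    item_text = ", ".join(f"{k}: {v}" for k, v in item_profile.items())
    return (f"A user ({user_text}) is considering an item ({item_text}). "
            "Overall, the user feels the item is")

def preference_score(next_word_prob, user_profile, item_profile):
    """Relative preference from the LM's probability of 'good' vs. 'bad'
    as the next word; `next_word_prob(prompt, word)` can be any LM."""
    prompt = build_prompt(user_profile, item_profile)
    p_good = next_word_prob(prompt, "good")
    p_bad = next_word_prob(prompt, "bad")
    return p_good / (p_good + p_bad)

def toy_lm(prompt, word):
    # Stand-in so the sketch runs offline: slight fixed bias toward "good".
    return 0.6 if word == "good" else 0.4

user = {"age": "25", "occupation": "student"}
item = {"title": "wireless earbuds", "category": "electronics"}
score = preference_score(toy_lm, user, item)  # 0.6 with the toy LM
```

With a real small LM, `next_word_prob` would read the probabilities of the verbalizer words from the model's output distribution; the data-centric pipeline then refines the pre-training corpus and the prompt template around exactly this scoring step.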
Related papers
- Language-Model Prior Overcomes Cold-Start Items [14.370472820496802]
The growth of recommender systems (RecSys) is driven by digitization and the need for personalized content in areas such as e-commerce and video streaming.
Existing solutions for the cold-start problem, such as content-based recommenders and hybrid methods, leverage item metadata to determine item similarities.
This paper introduces a novel approach for cold-start item recommendation that utilizes the language model (LM) to estimate item similarities.
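A minimal way to realize LM-estimated item similarity for cold-start items is a nearest-neighbor lookup over embedded metadata. In this sketch a bag-of-words vector stands in for a real LM embedding so the example runs offline; the paper's actual model and similarity measure may differ:

```python
import math
from collections import Counter

# Sketch: estimate cold-start item similarity from text metadata.
# Counter-based "embeddings" are an offline stand-in; in practice the
# descriptions would be embedded with a language model.

def embed(text):
    """Toy embedding: word-count vector of the item description."""
    return Counter(text.lower().split())

def cosine(u, v):
    dot = sum(u[w] * v[w] for w in u)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

cold_item = "wireless noise cancelling headphones"
catalog = [
    "bluetooth wireless headphones",
    "stainless steel water bottle",
]
sims = {t: cosine(embed(cold_item), embed(t)) for t in catalog}
best = max(sims, key=sims.get)  # the overlapping headphones entry
```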
arXiv Detail & Related papers (2024-11-13T22:45:52Z)
- Pre-trained Language Model and Knowledge Distillation for Lightweight Sequential Recommendation [51.25461871988366]
We propose a sequential recommendation algorithm based on a pre-trained language model and knowledge distillation.
The proposed algorithm enhances recommendation accuracy and provides timely recommendation services.
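The generic knowledge-distillation objective behind such lightweight models trains a small student to match a large teacher's softened output distribution. The temperature and loss form below are the standard KD recipe, not necessarily this paper's exact one:

```python
import math

# Sketch of standard knowledge distillation: cross-entropy of the student
# against the teacher's temperature-softened distribution over candidates.

def softmax(logits, temperature=1.0):
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of the student against softened teacher targets."""
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return -sum(ti * math.log(si) for ti, si in zip(t, s))

teacher = [3.0, 1.0, 0.2]   # large LM's scores over candidate items
student = [2.5, 1.2, 0.3]   # lightweight model's scores
loss = distillation_loss(teacher, student)
```

The loss is minimized exactly when the student reproduces the teacher's distribution, which is what lets the small model inherit the teacher's ranking behavior at a fraction of the inference cost.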
arXiv Detail & Related papers (2024-09-23T08:39:07Z)
- Keyword-driven Retrieval-Augmented Large Language Models for Cold-start User Recommendations [5.374800961359305]
We introduce KALM4Rec, a framework to address the problem of cold-start user restaurant recommendations.
KALM4Rec operates in two main stages: candidate retrieval and LLM-based candidate re-ranking.
Our evaluation, using a Yelp restaurant dataset with user reviews from three English-speaking cities, shows that our proposed framework significantly improves recommendation quality.
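The two-stage pipeline can be sketched as keyword-driven retrieval followed by re-ranking of the shortlist. The scoring functions below are illustrative stand-ins (the re-ranker is a stub where an LLM call would go), not the paper's models:

```python
# Sketch of a KALM4Rec-style pipeline: (1) keyword-driven candidate
# retrieval, (2) re-ranking of the shortlist by a (stubbed) LLM.

def retrieve(user_keywords, restaurants, k=3):
    """Stage 1: shortlist candidates by keyword overlap with their reviews."""
    def overlap(name):
        return len(user_keywords & restaurants[name])
    return sorted(restaurants, key=overlap, reverse=True)[:k]

def rerank(user_keywords, candidates, restaurants, llm_score):
    """Stage 2: an LLM (stubbed here) re-ranks the retrieved shortlist."""
    return sorted(candidates,
                  key=lambda name: llm_score(user_keywords, restaurants[name]),
                  reverse=True)

restaurants = {
    "noodle_house": {"ramen", "broth", "quick"},
    "taco_stand": {"tacos", "salsa", "cheap"},
    "sushi_bar": {"sushi", "fresh", "ramen"},
}
user = {"ramen", "fresh"}
shortlist = retrieve(user, restaurants, k=2)
stub_llm = lambda kw, tags: len(kw & tags)  # stand-in for an LLM judgment
ranking = rerank(user, shortlist, restaurants, stub_llm)
```

Restricting the expensive LLM to the short second stage is what keeps the approach practical: the cheap keyword stage prunes the catalog, so only a handful of candidates ever reach the model.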
arXiv Detail & Related papers (2024-05-30T02:00:03Z)
- Large Language Model Augmented Narrative Driven Recommendations [51.77271767160573]
Narrative-driven recommendation (NDR) presents an information access problem where users solicit recommendations with verbose descriptions of their preferences and context.
NDR lacks abundant training data for models, and current platforms commonly do not support these requests.
We use large language models (LLMs) for data augmentation to train NDR models.
arXiv Detail & Related papers (2023-06-04T03:46:45Z)
- Recommendation as Instruction Following: A Large Language Model Empowered Recommendation Approach [83.62750225073341]
We consider recommendation as instruction following by large language models (LLMs).
We first design a general instruction format for describing the preference, intention, task form and context of a user in natural language.
Then we manually design 39 instruction templates and automatically generate a large amount of user-personalized instruction data.
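A general instruction format of this kind can be sketched as a template over the four fields named above (preference, intention, task form, context), instantiated per user. The wording is an assumption; the paper hand-designs 39 such templates:

```python
# Sketch of an instruction template for recommendation-as-instruction-
# following. The phrasing is illustrative, not one of the paper's 39
# hand-crafted templates.

TEMPLATE = ("The user prefers {preference}. The user currently wants "
            "{intention}. Task: {task_form}. Context: {context}. "
            "Recommend suitable items.")

def make_instruction(preference, intention, task_form, context):
    """Fill the general format with one user's personalized fields."""
    return TEMPLATE.format(preference=preference, intention=intention,
                           task_form=task_form, context=context)

example = make_instruction(
    preference="science-fiction novels",
    intention="a gift for a friend",
    task_form="ranking a candidate list",
    context="shopping on a mobile app",
)
```

Generating many filled-in instances of such templates is what turns user histories into the large instruction-tuning corpus the summary describes.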
arXiv Detail & Related papers (2023-05-11T17:39:07Z)
- Chain of Hindsight Aligns Language Models with Feedback [62.68665658130472]
We propose a novel technique, Chain of Hindsight, that is easy to optimize and can learn from any form of feedback, regardless of its polarity.
We convert all types of feedback into sequences of sentences, which are then used to fine-tune the model.
By doing so, the model is trained to generate outputs based on feedback, while learning to identify and correct negative attributes or errors.
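The feedback-to-sequence conversion can be sketched as pairing a preferred and a dispreferred output for the same prompt in a single training string. The phrasing below is an assumption, not the paper's exact templates:

```python
# Sketch of Chain-of-Hindsight-style data construction: feedback of either
# polarity is verbalized into one sequence so the model learns to contrast
# good and bad outputs. Template wording is illustrative.

def to_training_sequence(prompt, good_output, bad_output):
    """Pair a good and a bad output for the same prompt in one sequence."""
    return (f"{prompt} A good response: {good_output} "
            f"A bad response: {bad_output}")

seq = to_training_sequence(
    prompt="Summarize the article.",
    good_output="A concise, faithful summary.",
    bad_output="An off-topic rambling reply.",
)
# A model fine-tuned on such sequences can then be prompted with
# "A good response:" at inference time to elicit the preferred behavior.
```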
arXiv Detail & Related papers (2023-02-06T10:28:16Z)
- GPatch: Patching Graph Neural Networks for Cold-Start Recommendations [20.326139541161194]
Cold start is an essential and persistent problem in recommender systems.
State-of-the-art solutions rely on training hybrid models for both cold-start and existing users/items.
We propose a tailored GNN-based framework (GPatch) that contains two separate but correlated components.
arXiv Detail & Related papers (2022-09-25T13:16:39Z)
- Learning to Learn a Cold-start Sequential Recommender [70.5692886883067]
Cold-start recommendation is a pressing problem in contemporary online applications.
We propose a meta-learning based cold-start sequential recommendation framework called metaCSR.
metaCSR holds the ability to learn the common patterns from regular users' behaviors.
arXiv Detail & Related papers (2021-10-18T08:11:24Z)
- Cold-start Sequential Recommendation via Meta Learner [10.491428090228768]
We propose a Meta-learning-based Cold-Start Sequential Recommendation Framework, namely Mecos, to mitigate the item cold-start problem in sequential recommendation.
Mecos effectively extracts user preference from limited interactions and learns to match the target cold-start item with the potential user.
arXiv Detail & Related papers (2020-12-10T05:23:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.