Cold-start Sequential Recommendation via Meta Learner
- URL: http://arxiv.org/abs/2012.05462v1
- Date: Thu, 10 Dec 2020 05:23:13 GMT
- Title: Cold-start Sequential Recommendation via Meta Learner
- Authors: Yujia Zheng, Siyi Liu, Zekun Li, Shu Wu
- Abstract summary: We propose a Meta-learning-based Cold-Start Sequential Recommendation Framework, namely Mecos, to mitigate the item cold-start problem in sequential recommendation.
Mecos effectively extracts user preference from limited interactions and learns to match the target cold-start item with the potential user.
- Score: 10.491428090228768
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper explores meta-learning in sequential recommendation to alleviate
the item cold-start problem. Sequential recommendation aims to capture users'
dynamic preferences based on historical behavior sequences and acts as a key
component of most online recommendation scenarios. However, most previous
methods have trouble recommending cold-start items, which are prevalent in
those scenarios. As there is generally no side information in the setting of
sequential recommendation task, previous cold-start methods could not be
applied when only user-item interactions are available. Thus, we propose a
Meta-learning-based Cold-Start Sequential Recommendation Framework, namely
Mecos, to mitigate the item cold-start problem in sequential recommendation.
This task is non-trivial as it targets an important problem in a novel and
challenging context. Mecos effectively extracts user preference from limited
interactions and learns to match the target cold-start item with the potential
user. Besides, our framework can be painlessly integrated with neural
network-based models. Extensive experiments conducted on three real-world
datasets verify the superiority of Mecos, with the average improvement up to
99%, 91%, and 70% in HR@10 over state-of-the-art baseline methods.
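As a rough illustration of the matching idea in the abstract — extracting preference from a handful of interactions and matching a cold-start item to potential users — the sketch below pools the preference vectors of the few users who interacted with a new item and ranks candidate users by cosine similarity. The function names, embedding dimension, and the mean-pooling/cosine choices are illustrative assumptions, not Mecos's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 16  # hypothetical embedding size

def aggregate_support(support_user_vecs):
    """Pool the preference vectors of the few users who interacted
    with a cold-start item into one item representation."""
    return support_user_vecs.mean(axis=0)

def match_scores(item_repr, candidate_user_vecs):
    """Cosine similarity between the pooled item representation
    and each candidate user's preference vector."""
    a = item_repr / np.linalg.norm(item_repr)
    b = candidate_user_vecs / np.linalg.norm(
        candidate_user_vecs, axis=1, keepdims=True)
    return b @ a

# One few-shot "episode": K=3 observed interactions for a new item,
# then rank 5 candidate users for it.
support = rng.normal(size=(3, DIM))
candidates = rng.normal(size=(5, DIM))
scores = match_scores(aggregate_support(support), candidates)
ranking = np.argsort(-scores)  # best-matching candidate users first
print(ranking)
```

In the actual framework this matching function is meta-learned across many such episodes rather than fixed to cosine similarity.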
Related papers
- Online Item Cold-Start Recommendation with Popularity-Aware Meta-Learning [14.83192161148111]
We propose a model-agnostic recommendation algorithm called Popularity-Aware Meta-learning (PAM) to address the item cold-start problem.
PAM divides incoming data into different meta-learning tasks by predefined item popularity thresholds.
This task-fixing design significantly reduces additional computation and storage costs compared to offline methods.
arXiv Detail & Related papers (2024-11-18T01:30:34Z) - Language-Model Prior Overcomes Cold-Start Items [14.370472820496802]
The growth of recommender systems (RecSys) is driven by digitization and the need for personalized content in areas such as e-commerce and video streaming.
Existing solutions for the cold-start problem, such as content-based recommenders and hybrid methods, leverage item metadata to determine item similarities.
This paper introduces a novel approach for cold-start item recommendation that utilizes the language model (LM) to estimate item similarities.
arXiv Detail & Related papers (2024-11-13T22:45:52Z) - Could Small Language Models Serve as Recommenders? Towards Data-centric Cold-start Recommendations [38.91330250981614]
We present PromptRec, a simple but effective approach based on in-context learning of language models.
We propose to enhance small language models for recommender systems with a data-centric pipeline.
To the best of our knowledge, this is the first study to tackle the system cold-start recommendation problem.
arXiv Detail & Related papers (2023-06-29T18:50:12Z) - Meta-Learning with Adaptive Weighted Loss for Imbalanced Cold-Start Recommendation [4.379304291229695]
We propose a novel sequential recommendation framework based on gradient-based meta-learning.
Our work is the first to tackle the impact of imbalanced ratings in cold-start sequential recommendation scenarios.
arXiv Detail & Related papers (2023-02-28T15:18:42Z) - Diverse Preference Augmentation with Multiple Domains for Cold-start Recommendations [92.47380209981348]
We propose a Diverse Preference Augmentation framework with multiple source domains based on meta-learning.
We generate diverse ratings in a new domain of interest to mitigate overfitting in the case of sparse interactions.
These ratings are introduced into the meta-training procedure to learn a preference meta-learner, which produces good generalization ability.
arXiv Detail & Related papers (2022-04-01T10:10:50Z) - Sequential Recommendation via Stochastic Self-Attention [68.52192964559829]
Transformer-based approaches embed items as vectors and use dot-product self-attention to measure the relationship between items.
We propose a novel STOchastic Self-Attention (STOSA) model to overcome these issues.
We devise a novel Wasserstein Self-Attention module to characterize item-item position-wise relationships in sequences.
arXiv Detail & Related papers (2022-01-16T12:38:45Z) - Learning to Learn a Cold-start Sequential Recommender [70.5692886883067]
The cold-start recommendation is an urgent problem in contemporary online applications.
We propose a meta-learning based cold-start sequential recommendation framework called metaCSR.
metaCSR can learn the common patterns from regular users' behaviors.
arXiv Detail & Related papers (2021-10-18T08:11:24Z) - Privileged Graph Distillation for Cold Start Recommendation [57.918041397089254]
The cold start problem in recommender systems requires recommending to new users (items) based on attributes without any historical interaction records.
We propose a privileged graph distillation model (PGD).
Our proposed model is generally applicable to different cold start scenarios with new user, new item, or new user-new item.
arXiv Detail & Related papers (2021-05-31T14:05:27Z) - Seamlessly Unifying Attributes and Items: Conversational Recommendation for Cold-Start Users [111.28351584726092]
We consider the conversational recommendation for cold-start users, where a system can both ask the attributes from and recommend items to a user interactively.
Our Conversational Thompson Sampling (ConTS) model holistically solves all questions in conversational recommendation by choosing the arm with the maximal reward to play.
arXiv Detail & Related papers (2020-05-23T08:56:37Z) - Joint Training Capsule Network for Cold Start Recommendation [64.35879555545749]
This paper proposes a novel neural network, joint training capsule network (JTCN) for the cold start recommendation task.
An attentive capsule layer is proposed to aggregate high-level user preference from the low-level interaction history.
Experiments on two publicly available datasets demonstrate the effectiveness of the proposed model.
arXiv Detail & Related papers (2020-05-23T04:27:38Z)
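The STOSA entry above embeds items as distributions and compares them with a Wasserstein Self-Attention module rather than dot products. For Gaussians with diagonal covariance, the 2-Wasserstein distance has a simple closed form, sketched below; the variable names and toy embeddings are illustrative assumptions, not values from the paper.

```python
import numpy as np

def w2_distance(mu1, sigma1, mu2, sigma2):
    """2-Wasserstein distance between two diagonal Gaussians.
    With diagonal covariances it reduces to
    sqrt(||mu1 - mu2||^2 + ||sigma1 - sigma2||^2)."""
    return np.sqrt(np.sum((mu1 - mu2) ** 2) + np.sum((sigma1 - sigma2) ** 2))

# Two hypothetical stochastic item embeddings (mean + per-dimension std).
mu_a, sd_a = np.array([0.0, 1.0]), np.array([0.5, 0.5])
mu_b, sd_b = np.array([3.0, 5.0]), np.array([0.5, 0.5])
dist = w2_distance(mu_a, sd_a, mu_b, sd_b)
print(dist)  # 5.0: the stds are equal, so it is just ||mu_a - mu_b||
```

Unlike a dot product, this distance also grows with uncertainty mismatch, which is what lets a distributional embedding express how confident the model is about an item.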
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.