Personalized Adaptive Meta Learning for Cold-start User Preference
Prediction
- URL: http://arxiv.org/abs/2012.11842v1
- Date: Tue, 22 Dec 2020 05:48:08 GMT
- Title: Personalized Adaptive Meta Learning for Cold-start User Preference
Prediction
- Authors: Runsheng Yu, Yu Gong, Xu He, Bo An, Yu Zhu, Qingwen Liu, Wenwu Ou
- Abstract summary: A common challenge in personalized user preference prediction is the cold-start problem.
We propose a novel personalized adaptive meta learning approach to consider both the major and the minor users.
Our method outperforms the state-of-the-art methods dramatically for both the minor and major users.
- Score: 46.65783845757707
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A common challenge in personalized user preference prediction is the
cold-start problem. Due to the lack of user-item interactions, directly
learning from the new users' log data causes a serious over-fitting problem.
Recently, many existing studies regard the cold-start personalized preference
prediction as a few-shot learning problem, where each user is the task and
recommended items are the classes, and the gradient-based meta learning method
(MAML) is leveraged to address this challenge. However, in real-world
applications, users are not uniformly distributed (i.e., different users may
have different browsing histories, recommended items, and user profiles); we
define the major users as those in groups where many users share similar user
information, and the remaining users as the minor users. Existing MAML
approaches tend to fit the major users and ignore the minor users. To address
this cold-start task-overfitting problem, we propose a novel
personalized adaptive meta learning approach to consider both the major and the
minor users with three key contributions: 1) We are the first to present a
personalized adaptive learning rate meta-learning approach to improve the
performance of MAML by focusing on both the major and minor users. 2) To
provide better personalized learning rates for each user, we introduce a
similarity-based method to find similar users as a reference and a tree-based
method to store users' features for fast search. 3) To reduce memory usage, we
design a memory-agnostic regularizer that further reduces the space complexity
to constant while maintaining the performance. Experiments on MovieLens,
BookCrossing, and real-world production datasets reveal that our method
outperforms the state-of-the-art methods dramatically for both the minor and
major users.
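To make the first two contributions concrete, below is a minimal, hypothetical sketch (not the authors' implementation): a MAML-style inner update whose learning rate is personalized per user, scaled by how many similar users a KD-tree over user-profile vectors returns, so that minor users (few neighbors) take larger adaptation steps. All names, shapes, and the specific scaling rule are illustrative assumptions.

```python
# Minimal, hypothetical sketch of a per-user adaptive learning rate for a MAML-style
# inner loop; the scaling rule and all names below are assumptions, not the paper's.
import numpy as np
from scipy.spatial import cKDTree


def personalized_lr(profile, tree, base_lr=0.01, radius=1.0, max_scale=3.0):
    """Scale the inner-loop step by user rarity: users with few neighbors
    (minor users) get a larger step so they can adapt from less data."""
    n_neighbors = len(tree.query_ball_point(profile, r=radius))
    scale = min(max_scale, 1.0 + 1.0 / (1.0 + n_neighbors))
    return base_lr * scale


def inner_update(theta, grads, lr):
    """One standard MAML inner-loop gradient step on the user's support set."""
    return {name: theta[name] - lr * grads[name] for name in theta}


# Toy usage: index previously seen users' profile vectors for fast similarity
# search (the paper's tree-based feature store is abstracted here as a KD-tree).
rng = np.random.default_rng(0)
seen_profiles = rng.normal(size=(1000, 8))   # hypothetical user-profile embeddings
tree = cKDTree(seen_profiles)

new_user_profile = rng.normal(size=8)        # a cold-start user
theta = {"w": rng.normal(size=(8, 4))}       # meta-learned initialization (toy)
grads = {"w": rng.normal(size=(8, 4))}       # gradient from the user's few support interactions

lr = personalized_lr(new_user_profile, tree)
theta_user = inner_update(theta, grads, lr)
print(f"adapted with personalized learning rate {lr:.4f}")
```

In the paper the personalized learning rates come from a learned, similarity-aware component rather than a fixed heuristic; the sketch only illustrates where the personalization enters the inner loop and how a tree index enables fast similar-user lookup.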
Related papers
- Improved Diversity-Promoting Collaborative Metric Learning for Recommendation [127.08043409083687]
Collaborative Metric Learning (CML) has recently emerged as a popular method in recommendation systems.
This paper focuses on a challenging scenario where a user has multiple categories of interests.
We propose a novel method called Diversity-Promoting Collaborative Metric Learning (DPCML).
arXiv Detail & Related papers (2024-09-02T07:44:48Z)
- User Inference Attacks on Large Language Models [26.616016510555088]
Fine-tuning is a common and effective method for tailoring large language models (LLMs) to specialized tasks and applications.
We study the privacy implications of fine-tuning LLMs on user data.
arXiv Detail & Related papers (2023-10-13T17:24:52Z)
- Explainable Active Learning for Preference Elicitation [0.0]
We employ Active Learning (AL) to address this problem, with the objective of maximizing information acquisition with minimal user effort.
AL selects informative data from a large unlabeled set and queries an oracle to label them.
It harvests user feedback (given for the system's explanations on the presented items) over informative samples to update an underlying machine learning (ML) model.
arXiv Detail & Related papers (2023-09-01T09:22:33Z)
- Meta-Learning with Adaptive Weighted Loss for Imbalanced Cold-Start Recommendation [4.379304291229695]
We propose a novel sequential recommendation framework based on gradient-based meta-learning.
Our work is the first to tackle the impact of imbalanced ratings in cold-start sequential recommendation scenarios.
arXiv Detail & Related papers (2023-02-28T15:18:42Z)
- RESUS: Warm-Up Cold Users via Meta-Learning Residual User Preferences in CTR Prediction [14.807495564177252]
Click-Through Rate (CTR) prediction on cold users is a challenging task in recommender systems.
We propose a novel and efficient approach named RESUS, which decouples the learning of global preference knowledge contributed by collective users from the learning of residual preferences for individual users.
Our approach is efficient and effective in improving CTR prediction accuracy on cold users, compared with various state-of-the-art methods.
arXiv Detail & Related papers (2022-10-28T11:57:58Z)
- The Minority Matters: A Diversity-Promoting Collaborative Metric Learning Algorithm [154.47590401735323]
Collaborative Metric Learning (CML) has recently emerged as a popular method in recommendation systems.
This paper focuses on a challenging scenario where a user has multiple categories of interests.
We propose a novel method called Diversity-Promoting Collaborative Metric Learning (DPCML).
arXiv Detail & Related papers (2022-09-30T08:02:18Z)
- Diverse Preference Augmentation with Multiple Domains for Cold-start Recommendations [92.47380209981348]
We propose a Diverse Preference Augmentation framework with multiple source domains based on meta-learning.
We generate diverse ratings in a new domain of interest to handle overfitting in the case of sparse interactions.
These ratings are introduced into the meta-training procedure to learn a preference meta-learner, which produces good generalization ability.
arXiv Detail & Related papers (2022-04-01T10:10:50Z)
- Low-Cost Algorithmic Recourse for Users With Uncertain Cost Functions [74.00030431081751]
We formalize the notion of user-specific cost functions and introduce a new method for identifying actionable recourses for users.
Our method satisfies up to 25.89 percentage points more users compared to strong baseline methods.
arXiv Detail & Related papers (2021-11-01T19:49:35Z)
- Learning to Learn a Cold-start Sequential Recommender [70.5692886883067]
Cold-start recommendation is an urgent problem in contemporary online applications.
We propose a meta-learning based cold-start sequential recommendation framework called metaCSR.
metaCSR learns common patterns from regular users' behaviors.
arXiv Detail & Related papers (2021-10-18T08:11:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.