MAMO: Memory-Augmented Meta-Optimization for Cold-start Recommendation
- URL: http://arxiv.org/abs/2007.03183v1
- Date: Tue, 7 Jul 2020 03:25:15 GMT
- Title: MAMO: Memory-Augmented Meta-Optimization for Cold-start Recommendation
- Authors: Manqing Dong and Feng Yuan and Lina Yao and Xiwei Xu and Liming Zhu
- Abstract summary: A common challenge for most recommender systems is the cold-start problem.
In this paper, we design two memory matrices that can store task-specific memories and feature-specific memories.
We adopt a meta-optimization approach for optimizing the proposed method.
- Score: 46.0605442943949
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: A common challenge for most current recommender systems is the cold-start
problem. Due to the lack of user-item interactions, fine-tuned recommender
systems are unable to handle new users or new items. Recently, some works have
introduced the meta-optimization idea into recommendation scenarios, i.e.,
predicting a user's preferences from only a few previously interacted items.
The core idea is to learn a globally shared parameter initialization for all
users and then learn local parameters for each user separately. However, most
meta-learning based recommendation approaches adopt model-agnostic
meta-learning for parameter initialization, where the globally shared
parameters may lead the model into a local optimum for some users. In this
paper, we design two memory matrices that store task-specific memories and
feature-specific memories. Specifically, the feature-specific memories are used
to guide the model with personalized parameter initialization, while the
task-specific memories are used to guide the model in quickly predicting user
preferences. We adopt a meta-optimization approach to optimize the proposed
method. We test the model on two widely used recommendation datasets and
consider four cold-start situations. The experimental results show the
effectiveness of the proposed method.
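To make the idea concrete, below is a minimal, illustrative sketch (not the authors' released code) of MAML-style meta-optimization in which a feature-keyed memory matrix biases the globally shared initialization toward each user before local adaptation. All names (`predict`, `personalized_init`, `meta_train_step`), shapes, and the attention scheme are assumptions for illustration; the task-specific memory that the paper uses to guide preference prediction is omitted here.

```python
import torch
import torch.nn.functional as F

def predict(theta, user_x, item_x):
    """Linear preference score from concatenated user/item features."""
    return torch.cat([user_x, item_x], dim=-1) @ theta

def personalized_init(theta, mem_keys, mem_values, profile):
    """Feature-specific memory (sketch): attend over profile prototypes to produce
    a per-user bias that personalizes the globally shared initialization."""
    attn = torch.softmax(profile @ mem_keys.t(), dim=-1)   # (n_slots,)
    return theta + attn @ mem_values                        # theta_u = theta + b_u

def meta_train_step(theta, mem_keys, mem_values, tasks, inner_lr=0.05, outer_lr=0.01):
    """MAML-style outer update: personalize the init for each user (task), adapt it
    on the support set, and use the query-set loss to update the shared parameters
    and the feature memory."""
    meta_params = (theta, mem_keys, mem_values)
    grads = [torch.zeros_like(p) for p in meta_params]
    for profile, (s_u, s_i, s_y), (q_u, q_i, q_y) in tasks:
        theta_u = personalized_init(theta, mem_keys, mem_values, profile)
        # inner loop: one gradient step of local adaptation on the support set
        support_loss = F.mse_loss(predict(theta_u, s_u, s_i), s_y)
        g = torch.autograd.grad(support_loss, theta_u, create_graph=True)[0]
        theta_u = theta_u - inner_lr * g
        # outer loop: query-set loss drives the meta update
        query_loss = F.mse_loss(predict(theta_u, q_u, q_i), q_y)
        for acc, p in zip(grads, meta_params):
            acc += torch.autograd.grad(query_loss, p, retain_graph=True)[0]
    return [(p - outer_lr * g / len(tasks)).detach().requires_grad_(True)
            for p, g in zip(meta_params, grads)]

# Toy usage: 4 cold-start users, 8-dim user/item features, 5 support/query items each.
dim, n_slots = 8, 4
theta = torch.randn(2 * dim, requires_grad=True)                 # globally shared init
mem_keys = torch.randn(n_slots, dim, requires_grad=True)         # profile prototypes
mem_values = torch.zeros(n_slots, 2 * dim, requires_grad=True)   # per-slot init bias
tasks = [(torch.randn(dim),
          (torch.randn(5, dim), torch.randn(5, dim), torch.randn(5)),
          (torch.randn(5, dim), torch.randn(5, dim), torch.randn(5)))
         for _ in range(4)]
theta, mem_keys, mem_values = meta_train_step(theta, mem_keys, mem_values, tasks)
```

This mirrors only the general recipe described in the abstract (shared initialization plus memory-driven personalization), not the exact MAMO architecture.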
Related papers
- Context-aware adaptive personalised recommendation: a meta-hybrid [0.41436032949434404]
We propose a meta-hybrid recommender that uses machine learning to predict an optimal algorithm.
Based on the proposed model, it is possible to predict which recommender will provide the most precise recommendations to a user.
arXiv Detail & Related papers (2024-10-17T09:24:40Z)
- An incremental preference elicitation-based approach to learning potentially non-monotonic preferences in multi-criteria sorting [53.36437745983783]
We first construct a max-margin optimization-based model to capture potentially non-monotonic preferences.
We devise information amount measurement methods and question selection strategies to pinpoint the most informative alternative in each iteration.
Two incremental preference elicitation-based algorithms are developed to learn potentially non-monotonic preferences.
arXiv Detail & Related papers (2024-09-04T14:36:20Z)
- Laser: Parameter-Efficient LLM Bi-Tuning for Sequential Recommendation with Collaborative Information [76.62949982303532]
We propose a parameter-efficient Large Language Model Bi-Tuning framework for sequential recommendation with collaborative information (Laser).
In our Laser, the prefix is utilized to incorporate user-item collaborative information and adapt the LLM to the recommendation task, while the suffix converts the output embeddings of the LLM from the language space to the recommendation space for the follow-up item recommendation.
M-Former is a lightweight MoE-based querying transformer that uses a set of query experts to integrate diverse user-specific collaborative information encoded by frozen ID-based sequential recommender systems.
arXiv Detail & Related papers (2024-09-03T04:55:03Z)
- Explainable Active Learning for Preference Elicitation [0.0]
We employ Active Learning (AL) to address this problem, with the objective of maximizing information acquisition with minimal user effort.
AL operates by selecting informative data from a large unlabeled set and querying an oracle to label them.
It harvests user feedback (given for the system's explanations on the presented items) over informative samples to update an underlying machine learning (ML) model.
arXiv Detail & Related papers (2023-09-01T09:22:33Z)
- Latent User Intent Modeling for Sequential Recommenders [92.66888409973495]
Sequential recommender models learn to predict the next items a user is likely to interact with based on his/her interaction history on the platform.
Most sequential recommenders however lack a higher-level understanding of user intents, which often drive user behaviors online.
Intent modeling is thus critical for understanding users and optimizing long-term user experience.
arXiv Detail & Related papers (2022-11-17T19:00:24Z)
- Learning to Learn a Cold-start Sequential Recommender [70.5692886883067]
Cold-start recommendation is an urgent problem in contemporary online applications.
We propose a meta-learning based cold-start sequential recommendation framework called metaCSR.
metaCSR is able to learn common patterns from regular users' behaviors.
arXiv Detail & Related papers (2021-10-18T08:11:24Z)
- Personalized Adaptive Meta Learning for Cold-start User Preference Prediction [46.65783845757707]
A common challenge in personalized user preference prediction is the cold-start problem.
We propose a novel personalized adaptive meta learning approach to consider both the major and the minor users.
Our method dramatically outperforms state-of-the-art methods for both the minor and major users.
arXiv Detail & Related papers (2020-12-22T05:48:08Z)
- MetaSelector: Meta-Learning for Recommendation with User-Level Adaptive Model Selection [110.87712780017819]
We propose a meta-learning framework to facilitate user-level adaptive model selection in recommender systems.
We conduct experiments on two public datasets and a real-world production dataset.
arXiv Detail & Related papers (2020-01-22T16:05:01Z)