Transferable Sequential Recommendation via Vector Quantized Meta Learning
- URL: http://arxiv.org/abs/2411.01785v1
- Date: Mon, 04 Nov 2024 04:16:11 GMT
- Title: Transferable Sequential Recommendation via Vector Quantized Meta Learning
- Authors: Zhenrui Yue, Huimin Zeng, Yang Zhang, Julian McAuley, Dong Wang
- Abstract summary: We propose a vector quantized meta learning framework for transferable sequential recommenders (MetaRec).
Our approach leverages user-item interactions from multiple source domains to improve the target domain performance.
To validate the effectiveness of our approach, we perform extensive experiments on benchmark datasets.
- Score: 29.49708205199897
- Abstract: While sequential recommendation achieves significant progress on capturing user-item transition patterns, transferring such large-scale recommender systems remains challenging due to the disjoint user and item groups across domains. In this paper, we propose a vector quantized meta learning framework for transferable sequential recommenders (MetaRec). Without requiring additional modalities or shared information across domains, our approach leverages user-item interactions from multiple source domains to improve the target domain performance. To solve the input heterogeneity issue, we adopt vector quantization that maps item embeddings from heterogeneous input spaces to a shared feature space. Moreover, our meta transfer paradigm exploits limited target data to guide the transfer of source domain knowledge to the target domain (i.e., learn to transfer). In addition, MetaRec adaptively transfers from multiple source tasks by rescaling meta gradients based on the source-target domain similarity, enabling selective learning that improves recommendation performance. To validate the effectiveness of our approach, we perform extensive experiments on benchmark datasets, where MetaRec consistently outperforms baseline methods by a considerable margin.
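The abstract describes two concrete mechanisms: quantizing heterogeneous item embeddings into a shared codebook, and rescaling meta gradients by source-target similarity. The sketch below illustrates both under stated assumptions; the codebook size, embedding dimension, cosine rescaling rule, and all function names are illustrative, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

# Shared codebook that all domains quantize into (size and dim are assumptions).
codebook = torch.randn(256, 64, requires_grad=True)  # 256 codes of dimension 64

def quantize(item_emb):
    """Map heterogeneous item embeddings onto their nearest shared codes."""
    dists = torch.cdist(item_emb, codebook)            # (n_items, n_codes)
    codes = dists.argmin(dim=-1)
    quantized = codebook[codes]
    # Straight-through estimator: copy gradients past the non-differentiable argmin.
    return item_emb + (quantized - item_emb).detach()

def rescaled_meta_update(params, source_grads, target_grad, lr=1e-3):
    """Scale each source domain's meta gradient by its cosine similarity
    to the target gradient, so dissimilar domains contribute less."""
    for g_src in source_grads:
        sim = F.cosine_similarity(g_src, target_grad, dim=0).clamp(min=0.0)
        params = params - lr * sim * g_src
    return params
```

The straight-through trick is standard for vector quantization; the non-negative cosine weighting is one plausible realization of "rescaling meta gradients based on the source-target domain similarity".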
Related papers
- Diverse Preference Augmentation with Multiple Domains for Cold-start Recommendations [92.47380209981348]
We propose a Diverse Preference Augmentation framework with multiple source domains based on meta-learning.
We generate diverse ratings in a new domain of interest to mitigate overfitting in the case of sparse interactions.
These ratings are introduced into the meta-training procedure to learn a preference meta-learner, which yields good generalization ability.
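A rough sketch of the meta-training loop this summary suggests, assuming a MAML-style inner/outer update and a hypothetical `augment_ratings` generator for the diverse synthetic ratings; the paper's actual procedure may differ:

```python
import torch

def meta_train(model, tasks, augment_ratings, inner_lr=0.01, outer_lr=0.001):
    """MAML-style sketch: each task yields (support, query) rating batches.
    `augment_ratings` is a hypothetical generator of diverse synthetic ratings,
    and `model.loss(batch, weights=...)` is an assumed functional forward pass."""
    opt = torch.optim.Adam(model.parameters(), lr=outer_lr)
    for task in tasks:
        support, query = task.sample()
        support = support + augment_ratings(support)   # inject diverse ratings
        inner_loss = model.loss(support)
        grads = torch.autograd.grad(inner_loss, list(model.parameters()),
                                    create_graph=True)
        fast = [p - inner_lr * g for p, g in zip(model.parameters(), grads)]
        outer_loss = model.loss(query, weights=fast)   # meta-test on query set
        opt.zero_grad()
        outer_loss.backward()
        opt.step()
```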
arXiv Detail & Related papers (2022-04-01T10:10:50Z)
- Transfer of codebook latent factors for cross-domain recommendation with non-overlapping data [5.145146101802871]
Data sparsity is one of the major drawbacks of the collaborative filtering technique.
In this paper, we propose a novel transfer learning approach for cross-domain recommendation.
arXiv Detail & Related papers (2022-03-26T05:26:39Z)
- A Hinge-Loss based Codebook Transfer for Cross-Domain Recommendation with Nonoverlapping Data [4.117391333767584]
As the item space grows and the number of items rated by each user becomes very small, issues like sparsity arise.
To mitigate the sparsity problem, transfer learning techniques are used, wherein data from a dense domain (source) is leveraged to predict the missing entries in the sparse domain (target).
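Both codebook-transfer entries above share the same core idea: compress the dense source rating matrix into cluster-level rating patterns, then expand those patterns in the sparse target domain. A minimal numpy sketch under simplifying assumptions (hard cluster memberships given up front; the papers' hinge loss and alternating optimization are omitted):

```python
import numpy as np

def build_codebook(R_src, U, V):
    """Compress a dense source rating matrix into cluster-level patterns.
    U: (n_users,) user-cluster ids; V: (n_items,) item-cluster ids
    (cluster assignments are assumed given, e.g. from co-clustering)."""
    B = np.zeros((U.max() + 1, V.max() + 1))
    for i in range(B.shape[0]):
        for j in range(B.shape[1]):
            cell = R_src[np.ix_(U == i, V == j)]
            B[i, j] = cell.mean() if cell.size else 0.0
    return B

def fill_target(R_tgt, observed, B, U_t, V_t):
    """Predict missing target entries (observed == False) by expanding the
    transferred codebook through the target's cluster memberships."""
    pred = B[np.ix_(U_t, V_t)]        # (n_users_t, n_items_t)
    return np.where(observed, R_tgt, pred)
```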
arXiv Detail & Related papers (2021-08-02T17:17:36Z)
- Latent-Optimized Adversarial Neural Transfer for Sarcasm Detection [50.29565896287595]
We apply transfer learning to exploit common datasets for sarcasm detection.
We propose a generalized latent optimization strategy that allows different losses to accommodate each other.
In particular, we achieve 10.02% absolute performance gain over the previous state of the art on the iSarcasm dataset.
arXiv Detail & Related papers (2021-04-19T13:07:52Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance-affinity-based criterion for source-to-target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
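The multi-sample contrastive loss mentioned above can be sketched as an InfoNCE-style objective; the positive-pair mask below is a stand-in for ILA-DA's affinity-based mining of similar and dissimilar samples, and the exact formulation in the paper may differ:

```python
import torch
import torch.nn.functional as F

def multi_sample_contrastive(src_feat, tgt_feat, pos_mask, temperature=0.1):
    """InfoNCE-style loss over source-target pairs. `pos_mask[i, j]` marks
    target j as similar to source i (a stand-in for affinity-based mining)."""
    sim = F.cosine_similarity(src_feat.unsqueeze(1), tgt_feat.unsqueeze(0), dim=-1)
    logits = sim / temperature                     # (n_src, n_tgt)
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos = pos_mask.float()
    # Average log-likelihood over each source sample's positive targets.
    loss = -(log_prob * pos).sum(dim=1) / pos.sum(dim=1).clamp(min=1.0)
    return loss.mean()
```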
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- MetaAlign: Coordinating Domain Alignment and Classification for Unsupervised Domain Adaptation [84.90801699807426]
This paper proposes an effective meta-optimization based strategy dubbed MetaAlign.
We treat the domain alignment objective and the classification objective as the meta-train and meta-test tasks in a meta-learning scheme.
Experimental results demonstrate the effectiveness of our proposed method on top of various alignment-based baseline approaches.
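One way to read the meta-train/meta-test coupling is a virtual gradient step on the alignment loss followed by evaluating classification at the adapted weights. A hedged sketch, assuming a functional `weights=` forward interface that is not part of the paper:

```python
import torch

def metaalign_step(model, align_loss_fn, cls_loss_fn, batch, inner_lr=0.01):
    """Treat domain alignment as meta-train and classification as meta-test:
    take a virtual step on the alignment loss, then evaluate classification
    at the adapted weights so the two objectives are coordinated.
    Assumes `cls_loss_fn` accepts a `weights=` functional forward pass."""
    params = list(model.parameters())
    align_loss = align_loss_fn(model, batch)
    grads = torch.autograd.grad(align_loss, params, create_graph=True)
    fast = [p - inner_lr * g for p, g in zip(params, grads)]   # virtual step
    cls_loss = cls_loss_fn(model, batch, weights=fast)          # meta-test
    (align_loss + cls_loss).backward()  # gradient coordinates both objectives
    return align_loss.item(), cls_loss.item()
```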
arXiv Detail & Related papers (2021-03-25T03:16:05Z)
- Your Classifier can Secretly Suffice Multi-Source Domain Adaptation [72.47706604261992]
Multi-Source Domain Adaptation (MSDA) deals with the transfer of task knowledge from multiple labeled source domains to an unlabeled target domain.
We present a different perspective on MSDA, wherein deep models are observed to implicitly align the domains under label supervision.
arXiv Detail & Related papers (2021-03-20T12:44:13Z)
- Learning to Combine: Knowledge Aggregation for Multi-Source Domain Adaptation [56.694330303488435]
We propose a Learning to Combine for Multi-Source Domain Adaptation (LtC-MSDA) framework.
In a nutshell, a knowledge graph is constructed on the prototypes of various domains to realize information propagation among semantically adjacent representations.
Our approach outperforms existing methods by a remarkable margin.
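A minimal sketch of graph-based propagation over domain prototypes; the similarity-softmax adjacency and mixing coefficient below are illustrative assumptions, not LtC-MSDA's learned knowledge graph:

```python
import torch

def propagate_prototypes(protos, n_steps=2):
    """protos: (n_domains * n_classes, d) class prototypes from all domains.
    Builds a soft adjacency from prototype similarity and propagates features
    among semantically adjacent nodes."""
    for _ in range(n_steps):
        adj = torch.softmax(protos @ protos.t(), dim=-1)   # row-normalized graph
        protos = 0.5 * protos + 0.5 * adj @ protos         # residual propagation
    return protos
```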
arXiv Detail & Related papers (2020-07-17T07:52:44Z)