Transfer Learning via Contextual Invariants for One-to-Many Cross-Domain Recommendation
- URL: http://arxiv.org/abs/2005.10473v1
- Date: Thu, 21 May 2020 05:51:15 GMT
- Title: Transfer Learning via Contextual Invariants for One-to-Many Cross-Domain Recommendation
- Authors: Adit Krishnan, Mahashweta Das, Mangesh Bendre, Hao Yang, Hari Sundaram
- Abstract summary: We develop scalable neural layer-transfer approaches for cross-domain learning.
Our key intuition is to guide neural collaborative filtering with domain-invariant components shared across the dense and sparse domains.
We show the effectiveness and scalability of our approach on two public datasets and a massive transaction dataset from Visa.
- Score: 19.106717948585445
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rapid proliferation of new users and items on the social web has
aggravated the gray-sheep user/long-tail item challenge in recommender systems.
Historically, cross-domain co-clustering methods have successfully leveraged
shared users and items across dense and sparse domains to improve inference
quality. However, they rely on shared rating data and cannot scale to multiple
sparse target domains (i.e., the one-to-many transfer setting). This, combined
with the increasing adoption of neural recommender architectures, motivates us
to develop scalable neural layer-transfer approaches for cross-domain learning.
Our key intuition is to guide neural collaborative filtering with
domain-invariant components shared across the dense and sparse domains,
improving the user and item representations learned in the sparse domains. We
leverage contextual invariances across domains to develop these shared modules,
and demonstrate that with user-item interaction context, we can learn-to-learn
informative representation spaces even with sparse interaction data. We show
the effectiveness and scalability of our approach on two public datasets and a
massive transaction dataset from Visa, a global payments technology company
(19% Item Recall, 3x faster vs. training separate models for each domain). Our
approach is applicable to both implicit and explicit feedback settings.
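To make the transfer recipe concrete, here is a minimal PyTorch sketch of the layer-transfer idea in the abstract. All module names, dimensions, and the training split are illustrative assumptions rather than the authors' released code: a context-conditioned scoring module is trained on the dense source domain, then frozen and shared with each sparse target domain, where only domain-specific user and item embeddings are fit.

```python
import torch
import torch.nn as nn

class ContextModule(nn.Module):
    """Domain-invariant layers conditioned on interaction context.

    Pretrained on the dense source domain, then frozen and shared across
    all sparse target domains (the one-to-many transfer setting).
    """
    def __init__(self, ctx_dim: int, emb_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(ctx_dim + 2 * emb_dim, emb_dim),
            nn.ReLU(),
            nn.Linear(emb_dim, 1),
        )

    def forward(self, user_e, item_e, ctx):
        return self.net(torch.cat([user_e, item_e, ctx], dim=-1)).squeeze(-1)

class DomainRecommender(nn.Module):
    """Domain-specific embeddings plus the shared context module."""
    def __init__(self, n_users, n_items, emb_dim, shared: ContextModule):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, emb_dim)
        self.item_emb = nn.Embedding(n_items, emb_dim)
        self.shared = shared  # transferred layers, reused across domains

    def forward(self, users, items, ctx):
        return self.shared(self.user_emb(users), self.item_emb(items), ctx)

shared = ContextModule(ctx_dim=8, emb_dim=32)
source = DomainRecommender(10_000, 5_000, 32, shared)
# ... train `source` end-to-end on dense-domain interactions ...
for p in shared.parameters():
    p.requires_grad = False       # layer transfer: context layers stay fixed
target = DomainRecommender(2_000, 800, 32, shared)
# ... fit only target.user_emb / target.item_emb on the sparse domain ...
scores = target(torch.randint(0, 2_000, (4,)),
                torch.randint(0, 800, (4,)),
                torch.randn(4, 8))  # (4,) predicted affinities
```

Freezing the shared module is what makes the setting one-to-many: each new sparse domain adds only its own embedding tables, rather than a full model.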
Related papers
- Mixed Attention Network for Cross-domain Sequential Recommendation [63.983590953727386]
We propose a Mixed Attention Network (MAN) with local and global attention modules to extract domain-specific and cross-domain information.
Experimental results on two real-world datasets demonstrate the superiority of our proposed model.
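A rough sketch of the local/global attention split this entry describes, assuming standard multi-head attention; the module names, shapes, and fusion layer are hypothetical simplifications of MAN's actual architecture.

```python
import torch
import torch.nn as nn

class MixedAttentionSketch(nn.Module):
    """Illustrative local (per-domain) + global (cross-domain) attention."""
    def __init__(self, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.local_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.global_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mix = nn.Linear(2 * d_model, d_model)

    def forward(self, seq_a, seq_b):
        # Local: domain-specific patterns within one behavior sequence.
        local_a, _ = self.local_attn(seq_a, seq_a, seq_a)
        # Global: cross-domain patterns over the merged sequence.
        merged = torch.cat([seq_a, seq_b], dim=1)
        global_a, _ = self.global_attn(seq_a, merged, merged)
        return self.mix(torch.cat([local_a, global_a], dim=-1))

x_a = torch.randn(8, 20, 64)  # domain-A item-embedding sequence
x_b = torch.randn(8, 15, 64)  # domain-B item-embedding sequence
out = MixedAttentionSketch()(x_a, x_b)  # -> (8, 20, 64)
```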
arXiv Detail & Related papers (2023-11-14T16:07:16Z)
- Self-Supervised Interest Transfer Network via Prototypical Contrastive Learning for Recommendation [32.565226710636615]
Cross-domain recommendation has attracted increasing attention from industry and academia recently.
We propose a cross-domain recommendation method, the Self-supervised Interest Transfer Network (SITN).
We perform two levels of cross-domain contrastive learning: 1) instance-to-instance contrastive learning and 2) instance-to-cluster contrastive learning.
We conducted extensive experiments on a public dataset and a large-scale industrial dataset collected from one of the world's leading e-commerce corporations.
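A minimal sketch of the two contrastive levels named above, assuming an InfoNCE-style objective; the tensor names, prototype count, and assignment rule are illustrative, not SITN's exact formulation.

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, temperature: float = 0.1):
    """Standard InfoNCE loss; row i of `positive` is row i's positive pair."""
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    logits = a @ p.t() / temperature      # (B, B) cosine-similarity logits
    labels = torch.arange(a.size(0))      # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

# Level 1 -- instance-to-instance: a user's interest vector encoded from the
# source domain and from the target domain form a positive pair; other
# in-batch users act as negatives.
u_src = torch.randn(64, 128)              # source-domain user encodings
u_tgt = torch.randn(64, 128)              # target-domain user encodings
loss_inst = info_nce(u_src, u_tgt)

# Level 2 -- instance-to-cluster: contrast each instance against prototype
# centroids; the positive is the cluster its cross-domain counterpart joins.
protos = F.normalize(torch.randn(32, 128), dim=-1)   # 32 prototypes
assign = (F.normalize(u_tgt, dim=-1) @ protos.t()).argmax(dim=-1)
logits = F.normalize(u_src, dim=-1) @ protos.t() / 0.1
loss_clus = F.cross_entropy(logits, assign)

loss = loss_inst + loss_clus              # the two levels, jointly optimized
```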
arXiv Detail & Related papers (2023-02-28T09:30:24Z)
- Exploiting Graph Structured Cross-Domain Representation for Multi-Domain Recommendation [71.45854187886088]
Multi-domain recommender systems benefit from cross-domain representation learning and positive knowledge transfer.
We use temporal intra- and inter-domain interactions as contextual information in our method, MAGRec.
We perform experiments on publicly available datasets in different scenarios where MAGRec consistently outperforms state-of-the-art methods.
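A toy sketch of treating temporal intra- and inter-domain interactions as graph context, assuming a simple mean-aggregation step; the adjacency construction and module are hypothetical stand-ins for MAGRec's graph encoder.

```python
import torch
import torch.nn as nn

class TemporalGraphContext(nn.Module):
    """Illustrative one-hop aggregation over a user's interaction graph.

    Nodes are recently interacted items from all domains; edges link
    temporally adjacent interactions (intra- and inter-domain).
    """
    def __init__(self, d: int = 64):
        super().__init__()
        self.msg = nn.Linear(d, d)

    def forward(self, item_feats, adj):
        # adj: (N, N) with 1 where two interactions are temporally adjacent.
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1)
        neigh = adj @ self.msg(item_feats) / deg  # mean message passing
        return torch.relu(item_feats + neigh)     # residual update

# Toy example: 6 interactions across two domains, ordered in time.
feats = torch.randn(6, 64)
adj = torch.zeros(6, 6)
for t in range(5):                          # chain consecutive interactions,
    adj[t, t + 1] = adj[t + 1, t] = 1.0     # regardless of their domain
ctx = TemporalGraphContext()(feats, adj)    # -> (6, 64) contextual states
```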
arXiv Detail & Related papers (2023-02-12T19:51:32Z)
- Diverse Preference Augmentation with Multiple Domains for Cold-start Recommendations [92.47380209981348]
We propose a Diverse Preference Augmentation framework with multiple source domains based on meta-learning.
We generate diverse ratings in a new domain of interest to mitigate overfitting in the case of sparse interactions.
These ratings are introduced into the meta-training procedure to learn a preference meta-learner, which yields good generalization ability.
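An illustrative sketch of the augmentation-plus-meta-learning loop, assuming per-source-domain encoders that synthesize ratings and a Reptile-style inner/outer update; all components here are hypothetical simplifications, not the paper's exact procedure.

```python
import torch
import torch.nn as nn

# Hypothetical pieces: one encoder per source domain produces a diverse
# preference view of the user, and a decoder turns each view into a
# synthetic rating for a target-domain item. The synthetic ratings then
# enlarge the support set used for meta-training.
encoders = nn.ModuleList(nn.Linear(32, 16) for _ in range(3))  # 3 source domains
decoder = nn.Linear(16, 1)

def augment(user_hist, item_feat):
    """One synthetic rating per source-domain preference view."""
    return [decoder(torch.tanh(enc(user_hist)) * item_feat) for enc in encoders]

user_hist = torch.randn(32)                       # user's cross-domain history
item_feats = [torch.randn(16) for _ in range(2)]
real = [(f, torch.randn(1)) for f in item_feats]              # few real ratings
fake = [(f, r.detach()) for f in item_feats for r in augment(user_hist, f)]
support_set = real + fake                         # diversely augmented support

# Reptile-style meta-step for this user-task: adapt a copy on the support
# set, then nudge the meta-parameters toward the adapted parameters.
meta_model = nn.Linear(16, 1)
fast = nn.Linear(16, 1)
fast.load_state_dict(meta_model.state_dict())
opt = torch.optim.SGD(fast.parameters(), lr=1e-2)
for x, y in support_set:
    opt.zero_grad()
    loss = (fast(x) - y).pow(2).mean()
    loss.backward()
    opt.step()
with torch.no_grad():
    for p_meta, p_fast in zip(meta_model.parameters(), fast.parameters()):
        p_meta += 0.1 * (p_fast - p_meta)         # outer (meta) update
```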
arXiv Detail & Related papers (2022-04-01T10:10:50Z)
- Adaptive Methods for Aggregated Domain Generalization [26.215904177457997]
In many settings, privacy concerns prohibit obtaining domain labels for the training data samples.
We propose a domain-adaptive approach to this problem, which operates in two steps.
Our approach achieves state-of-the-art performance on a variety of domain generalization benchmarks without using domain labels.
arXiv Detail & Related papers (2021-12-09T08:57:01Z)
- Dual Metric Learning for Effective and Efficient Cross-Domain Recommendations [85.6250759280292]
Cross-domain recommender systems have become increasingly valuable for helping consumers identify useful items across applications.
Existing cross-domain models typically require a large number of overlapping users, which can be difficult to obtain in some applications.
We propose a novel cross-domain recommendation model based on dual learning that transfers information between two related domains in an iterative manner.
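A minimal sketch of iterative dual transfer under the assumption of a small overlap set: two mapping functions translate user representations between domains and are trained in alternating directions with a consistency term. Names and losses are illustrative, not the paper's dual metric formulation.

```python
import torch
import torch.nn as nn

# Two frozen domain encoders and a pair of cross-domain mapping functions.
# Training alternates direction each round so information flows both ways,
# with a cycle term keeping the two mappings mutually consistent.
enc_a, enc_b = nn.Linear(64, 32), nn.Linear(64, 32)
a_to_b, b_to_a = nn.Linear(32, 32), nn.Linear(32, 32)
opt = torch.optim.Adam([*a_to_b.parameters(), *b_to_a.parameters()], lr=1e-3)

overlap = torch.randn(16, 64)              # the few shared (overlapping) users
with torch.no_grad():                      # encoders stay fixed in this sketch
    za, zb = enc_a(overlap), enc_b(overlap)

for step in range(200):
    if step % 2 == 0:                      # A -> B transfer this round
        loss = (a_to_b(za) - zb).pow(2).mean()
    else:                                  # B -> A transfer next round
        loss = (b_to_a(zb) - za).pow(2).mean()
    loss = loss + (b_to_a(a_to_b(za)) - za).pow(2).mean()  # cycle consistency
    opt.zero_grad()
    loss.backward()
    opt.step()
```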
arXiv Detail & Related papers (2021-04-17T09:18:59Z)
- Learning to Combine: Knowledge Aggregation for Multi-Source Domain Adaptation [56.694330303488435]
We propose a Learning to Combine for Multi-Source Domain Adaptation (LtC-MSDA) framework.
In a nutshell, a knowledge graph is constructed over the prototypes of the various domains to enable information propagation among semantically adjacent representations.
Our approach outperforms existing methods by a remarkable margin.
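A compact sketch of propagation over domain prototypes, assuming similarity-weighted message passing; the adjacency and classification rule are illustrative, not LtC-MSDA's exact graph construction.

```python
import torch
import torch.nn.functional as F

# Nodes are per-(domain, class) prototypes; edges weight semantic
# similarity, and one propagation step mixes information among
# semantically adjacent prototypes.
n_domains, n_classes, d = 4, 10, 64
protos = torch.randn(n_domains * n_classes, d)  # one node per (domain, class)

sim = F.normalize(protos, dim=-1) @ F.normalize(protos, dim=-1).t()
adj = torch.softmax(sim / 0.1, dim=-1)          # soft adjacency weights
propagated = adj @ protos                       # one message-passing step

# A query sample can then be classified against the propagated prototypes.
query = torch.randn(1, d)
scores = F.normalize(query, dim=-1) @ F.normalize(propagated, dim=-1).t()
pred_class = scores.argmax(dim=-1) % n_classes  # fold domains back to classes
```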
arXiv Detail & Related papers (2020-07-17T07:52:44Z)
- Dynamic Fusion Network for Multi-Domain End-to-end Task-Oriented Dialog [70.79442700890843]
We propose a novel Dynamic Fusion Network (DF-Net) that automatically exploits the relevance between the target domain and each source domain.
With little training data, we show its transferability by outperforming the prior best model by 13.9% on average.
arXiv Detail & Related papers (2020-04-23T08:17:22Z)
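A small sketch of the dynamic-fusion idea, assuming a mixture-of-experts gate that scores each source domain's relevance per input; module names and shapes are hypothetical, not DF-Net's dialog architecture.

```python
import torch
import torch.nn as nn

class DynamicFusionSketch(nn.Module):
    """Illustrative gate that fuses per-domain experts by learned relevance."""
    def __init__(self, n_domains: int = 4, d: int = 64):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(d, d) for _ in range(n_domains))
        self.gate = nn.Linear(d, n_domains)  # scores each domain's relevance

    def forward(self, x):
        w = torch.softmax(self.gate(x), dim=-1)                   # (B, D)
        outs = torch.stack([e(x) for e in self.experts], dim=1)   # (B, D, d)
        return (w.unsqueeze(-1) * outs).sum(dim=1)  # relevance-weighted mix

fused = DynamicFusionSketch()(torch.randn(8, 64))  # -> (8, 64)
```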