Self-Supervised Interest Transfer Network via Prototypical Contrastive
Learning for Recommendation
- URL: http://arxiv.org/abs/2302.14438v1
- Date: Tue, 28 Feb 2023 09:30:24 GMT
- Title: Self-Supervised Interest Transfer Network via Prototypical Contrastive
Learning for Recommendation
- Authors: Guoqiang Sun, Yibin Shen, Sijin Zhou, Xiang Chen, Hongyan Liu,
Chunming Wu, Chenyi Lei, Xianhui Wei, Fei Fang
- Abstract summary: Cross-domain recommendation has attracted increasing attention from industry and academia recently.
We propose a cross-domain recommendation method: Self-supervised Interest Transfer Network (SITN)
We perform two levels of cross-domain contrastive learning: 1) instance-to-instance contrastive learning, 2) instance-to-cluster contrastive learning.
We conducted extensive experiments on a public dataset and a large-scale industrial dataset collected from one of the world's leading e-commerce corporations.
- Score: 32.565226710636615
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cross-domain recommendation has attracted increasing attention from industry
and academia recently. However, most existing methods do not exploit the
interest invariance between domains, which would yield sub-optimal solutions.
In this paper, we propose a cross-domain recommendation method: Self-supervised
Interest Transfer Network (SITN), which can effectively transfer invariant
knowledge between domains via prototypical contrastive learning. Specifically,
we perform two levels of cross-domain contrastive learning: 1)
instance-to-instance contrastive learning, 2) instance-to-cluster contrastive
learning. In addition, we take into account users' multi-granularity and
multi-view interests. With this paradigm, SITN can explicitly learn the
invariant knowledge of interest clusters between domains and accurately capture
users' intents and preferences. We conducted extensive experiments on a public
dataset and a large-scale industrial dataset collected from one of the world's
leading e-commerce corporations. The experimental results indicate that SITN
achieves significant improvements over state-of-the-art recommendation methods.
Additionally, SITN has been deployed on a micro-video recommendation platform,
and the online A/B testing results further demonstrate its practical value.
Supplement is available at: https://github.com/fanqieCoffee/SITN-Supplement.
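The two levels of cross-domain contrastive learning described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the function names, the temperature value, and the assumption that cluster prototypes and assignments come from a separate clustering step (e.g. k-means over domain-B embeddings) are all hypothetical.

```python
import numpy as np

def _normalize(x):
    # L2-normalize each row so dot products become cosine similarities
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def _cross_entropy(logits, targets):
    # Numerically stable softmax cross-entropy, averaged over the batch
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(targets)), targets].mean()

def instance_to_instance_loss(z_a, z_b, temperature=0.1):
    """InfoNCE between aligned user representations from two domains.

    z_a, z_b: (batch, dim) embeddings of the same users in domains A and B.
    Row i of z_a is the positive for row i of z_b; all other rows in the
    batch serve as in-batch negatives.
    """
    logits = _normalize(z_a) @ _normalize(z_b).T / temperature
    targets = np.arange(z_a.shape[0])  # positives lie on the diagonal
    return _cross_entropy(logits, targets)

def instance_to_cluster_loss(z_a, prototypes_b, assignments, temperature=0.1):
    """Prototypical contrastive term: pull a domain-A instance toward the
    domain-B interest cluster (prototype) it is assigned to.

    prototypes_b: (num_clusters, dim) cluster centroids from domain B.
    assignments:  (batch,) index of the positive prototype per instance.
    """
    logits = _normalize(z_a) @ _normalize(prototypes_b).T / temperature
    return _cross_entropy(logits, assignments)
```

In this reading, the instance-to-instance term transfers fine-grained preferences between aligned users, while the instance-to-cluster term encourages agreement with interest clusters, which is where the "invariant knowledge of interest clusters between domains" would be learned.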
Related papers
- EXIT: An EXplicit Interest Transfer Framework for Cross-Domain Recommendation [20.402006751823322]
Cross-domain recommendation has attracted substantial interest in industrial applications such as Meituan.
We propose a simple and effective EXplicit Interest Transfer framework named EXIT to address the stated challenge.
arXiv Detail & Related papers (2024-07-29T15:52:09Z)
- Investigating the potential of Sparse Mixtures-of-Experts for multi-domain neural machine translation [59.41178047749177]
We focus on multi-domain Neural Machine Translation, with the goal of developing efficient models which can handle data from various domains seen during training and are robust to domains unseen during training.
We hypothesize that Sparse Mixture-of-Experts (SMoE) models are a good fit for this task, as they enable efficient model scaling.
We conduct a series of experiments aimed at validating the utility of SMoE for the multi-domain scenario, and find that a straightforward width scaling of Transformer is a simpler and surprisingly more efficient approach in practice, and reaches the same performance level as SMoE.
arXiv Detail & Related papers (2024-07-01T09:45:22Z)
- Exploiting Graph Structured Cross-Domain Representation for Multi-Domain Recommendation [71.45854187886088]
Multi-domain recommender systems benefit from cross-domain representation learning and positive knowledge transfer.
We use temporal intra- and inter-domain interactions as contextual information for our method called MAGRec.
We perform experiments on publicly available datasets in different scenarios where MAGRec consistently outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-02-12T19:51:32Z)
- Reinforcement Learning-enhanced Shared-account Cross-domain Sequential Recommendation [38.70844108264403]
Shared-account Cross-domain Sequential Recommendation (SCSR) is an emerging yet challenging task.
We propose a reinforcement learning-based solution, namely RL-ISN, which consists of a basic cross-domain recommender and a reinforcement learning-based domain filter.
To evaluate the performance of our solution, we conduct extensive experiments on two real-world datasets.
arXiv Detail & Related papers (2022-06-16T11:06:32Z)
- Variational Attention: Propagating Domain-Specific Knowledge for Multi-Domain Learning in Crowd Counting [75.80116276369694]
In crowd counting, collecting a new large-scale dataset is perceived as intractable due to the laborious labelling involved.
We resort to the multi-domain joint learning and propose a simple but effective Domain-specific Knowledge Propagating Network (DKPNet)
It is mainly achieved by proposing the novel Variational Attention(VA) technique for explicitly modeling the attention distributions for different domains.
arXiv Detail & Related papers (2021-08-18T08:06:37Z)
- Dual Attentive Sequential Learning for Cross-Domain Click-Through Rate Prediction [76.98616102965023]
Cross-domain recommender systems constitute a powerful method to tackle the cold-start and sparsity problems.
We propose a novel approach to cross-domain sequential recommendations based on the dual learning mechanism.
arXiv Detail & Related papers (2021-06-05T01:21:21Z)
- Dual Metric Learning for Effective and Efficient Cross-Domain Recommendations [85.6250759280292]
Cross-domain recommender systems have become increasingly valuable for helping consumers identify useful items across different applications.
Existing cross-domain models typically require a large number of overlapping users, which can be difficult to obtain in some applications.
We propose a novel cross-domain recommendation model based on dual learning that transfers information between two related domains in an iterative manner.
arXiv Detail & Related papers (2021-04-17T09:18:59Z)
- Transfer Learning via Contextual Invariants for One-to-Many Cross-Domain Recommendation [19.106717948585445]
We develop scalable neural layer-transfer approaches for cross-domain learning.
Our key intuition is to guide neural collaborative filtering with domain-invariant components shared across the dense and sparse domains.
We show the effectiveness and scalability of our approach on two public datasets and a massive transaction dataset from Visa.
arXiv Detail & Related papers (2020-05-21T05:51:15Z)
- Mind the Gap: Enlarging the Domain Gap in Open Set Domain Adaptation [65.38975706997088]
Open set domain adaptation (OSDA) assumes the presence of unknown classes in the target domain.
We show that existing state-of-the-art methods suffer a considerable performance drop in the presence of larger domain gaps.
We propose a novel framework to specifically address the larger domain gaps.
arXiv Detail & Related papers (2020-03-08T14:20:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.