A Collaborative Transfer Learning Framework for Cross-domain
Recommendation
- URL: http://arxiv.org/abs/2306.16425v1
- Date: Mon, 26 Jun 2023 09:43:58 GMT
- Title: A Collaborative Transfer Learning Framework for Cross-domain
Recommendation
- Authors: Wei Zhang, Pengye Zhang, Bo Zhang, Xingxing Wang, Dong Wang
- Abstract summary: In recommendation systems, there are multiple business domains to meet the diverse interests and needs of users.
We propose the Collaborative Cross-Domain Transfer Learning Framework (CCTL) to overcome these challenges.
CCTL evaluates the information gain of the source domain on the target domain using a symmetric companion network.
- Score: 12.880177078884927
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recommendation systems, there are multiple business domains to meet
the diverse interests and needs of users, and the click-through rate (CTR) of
each domain can differ substantially, which creates a demand for CTR prediction
models tailored to each business domain. The common industry solution is to use
either a domain-specific model for each domain or transfer learning techniques. The
disadvantage of the former is that data from other domains is not utilized
by a single-domain model, while the latter leverages all the data from different
domains; however, the fine-tuned model of transfer learning may become trapped in a
local optimum of the source domain, making it difficult to fit the target
domain. Meanwhile, significant differences in data quantity and feature schemas
between domains, known as domain shift, may lead to negative transfer
during the transfer process. To overcome these challenges, we propose the
Collaborative Cross-Domain Transfer Learning Framework (CCTL). CCTL evaluates
the information gain of the source domain on the target domain using a
symmetric companion network and adjusts the information transfer weight of each
source-domain sample using an information flow network. This approach enables
full utilization of other domains' data while avoiding negative transfer.
Additionally, a representation enhancement network is used as an auxiliary task
to preserve domain-specific features. In comprehensive experiments on both public
and real-world industrial datasets, CCTL achieved state-of-the-art (SOTA) scores
on offline metrics. The CCTL algorithm has also been deployed in Meituan,
bringing a 4.37% CTR lift and a 5.43% GMV lift, which is significant for the business.
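Below is a minimal, illustrative PyTorch sketch of the training setup suggested by the abstract: two symmetric CTR towers act as companion networks for the target and source domains, and an information flow network produces a per-sample weight that scales the source-domain loss to limit negative transfer. All module names, dimensions, and the exact way the weight enters the loss are assumptions made for illustration; they are not the authors' implementation.

```python
# Illustrative sketch only, assuming a simple weighted-loss formulation of CCTL.
import torch
import torch.nn as nn


class MLPTower(nn.Module):
    """Simple feed-forward CTR tower, used for both symmetric companion networks."""

    def __init__(self, in_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)  # one CTR logit per sample


class InformationFlowNetwork(nn.Module):
    """Produces a per-sample transfer weight in (0, 1) for source-domain data."""

    def __init__(self, in_dim: int, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)


def cctl_style_loss(target_tower: MLPTower,
                    companion_tower: MLPTower,
                    flow_net: InformationFlowNetwork,
                    x_tgt: torch.Tensor, y_tgt: torch.Tensor,
                    x_src: torch.Tensor, y_src: torch.Tensor) -> torch.Tensor:
    """Target-domain CTR loss plus source-domain loss re-weighted per sample.

    The per-sample weight stands in for the 'information gain' signal described
    in the abstract; the paper's actual formulation may differ.
    """
    bce = nn.BCEWithLogitsLoss(reduction="none")

    # Ordinary CTR objective on the target domain.
    loss_tgt = bce(target_tower(x_tgt), y_tgt).mean()

    # Source-domain objective, down-weighted per sample to avoid negative transfer.
    w = flow_net(x_src)
    loss_src = (w * bce(companion_tower(x_src), y_src)).mean()

    return loss_tgt + loss_src


if __name__ == "__main__":
    dim = 16
    target_tower, companion_tower = MLPTower(dim), MLPTower(dim)
    flow_net = InformationFlowNetwork(dim)
    x_tgt, y_tgt = torch.randn(8, dim), torch.randint(0, 2, (8,)).float()
    x_src, y_src = torch.randn(8, dim), torch.randint(0, 2, (8,)).float()
    print(cctl_style_loss(target_tower, companion_tower, flow_net,
                          x_tgt, y_tgt, x_src, y_src).item())
```

The representation enhancement auxiliary task mentioned in the abstract is omitted here for brevity; in practice it would add a further loss term encouraging domain-specific features to be preserved.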
Related papers
- Heterogeneous Graph-based Framework with Disentangled Representations Learning for Multi-target Cross Domain Recommendation [7.247438542823219]
Cross-Domain Recommendation (CDR) is a critical solution to the data sparsity problem in recommendation systems.
We present HGDR, an end-to-end heterogeneous network architecture where graph convolutional layers are applied to model relations among different domains.
Experiments on real-world datasets and online A/B tests prove that our proposed model can transmit information among domains effectively and reach the SOTA performance.
arXiv Detail & Related papers (2024-07-01T02:27:54Z) - DomainVerse: A Benchmark Towards Real-World Distribution Shifts For
Tuning-Free Adaptive Domain Generalization [27.099706316752254]
We establish DomainVerse, a novel dataset for Adaptive Domain Generalization (ADG).
Benefiting from the introduced hierarchical definition of domain shifts, DomainVerse consists of about 0.5 million images from 390 fine-grained realistic domains.
We propose two methods called Domain CLIP and Domain++ CLIP for tuning-free adaptive domain generalization.
arXiv Detail & Related papers (2024-03-05T07:10:25Z) - Virtual Classification: Modulating Domain-Specific Knowledge for
Multidomain Crowd Counting [67.38137379297717]
Multidomain crowd counting aims to learn a general model for multiple diverse datasets.
Deep networks prefer modeling distributions of the dominant domains instead of all domains, which is known as domain bias.
We propose a Modulating Domain-specific Knowledge Network (MDKNet) to handle the domain bias issue in multidomain crowd counting.
arXiv Detail & Related papers (2024-02-06T06:49:04Z) - DAOT: Domain-Agnostically Aligned Optimal Transport for Domain-Adaptive
Crowd Counting [35.83485358725357]
Domain adaptation is commonly employed in crowd counting to bridge the domain gaps between different datasets.
Existing domain adaptation methods tend to focus on inter-dataset differences while overlooking intra-dataset differences within the same dataset.
We propose a Domain-agnostically Aligned Optimal Transport (DAOT) strategy that aligns domain-agnostic factors between domains.
arXiv Detail & Related papers (2023-08-10T02:59:40Z) - Improving Fake News Detection of Influential Domain via Domain- and
Instance-Level Transfer [16.886024206337257]
We propose a Domain- and Instance-level Transfer Framework for Fake News Detection (DITFEND).
DITFEND could improve the performance of specific target domains.
Online experiments show that it brings additional improvements over the base models in a real-world scenario.
arXiv Detail & Related papers (2022-09-19T10:21:13Z) - Curriculum CycleGAN for Textual Sentiment Domain Adaptation with
Multiple Sources [68.31273535702256]
We propose a novel instance-level MDA framework, named curriculum cycle-consistent generative adversarial network (C-CycleGAN).
C-CycleGAN consists of three components: (1) pre-trained text encoder which encodes textual input from different domains into a continuous representation space, (2) intermediate domain generator with curriculum instance-level adaptation which bridges the gap across source and target domains, and (3) task classifier trained on the intermediate domain for final sentiment classification.
We conduct extensive experiments on three benchmark datasets and achieve substantial gains over state-of-the-art DA approaches.
arXiv Detail & Related papers (2020-11-17T14:50:55Z) - Domain2Vec: Domain Embedding for Unsupervised Domain Adaptation [56.94873619509414]
Conventional unsupervised domain adaptation studies the knowledge transfer between a limited number of domains.
We propose a novel Domain2Vec model to provide vectorial representations of visual domains based on joint learning of feature disentanglement and Gram matrix.
We demonstrate that our embedding is capable of predicting domain similarities that match our intuition about visual relations between different domains.
arXiv Detail & Related papers (2020-07-17T22:05:09Z) - Multi-Domain Spoken Language Understanding Using Domain- and Task-Aware
Parameterization [78.93669377251396]
Spoken language understanding has been addressed as a supervised learning problem, where a set of training data is available for each domain.
One existing approach solves the problem by conducting multi-domain learning, using shared parameters for joint training across domains.
We propose to improve the parameterization of this method by using domain-specific and task-specific model parameters.
arXiv Detail & Related papers (2020-04-30T15:15:40Z) - Mutual Learning Network for Multi-Source Domain Adaptation [73.25974539191553]
We propose a novel multi-source domain adaptation method, Mutual Learning Network for Multiple Source Domain Adaptation (ML-MSDA).
Under the framework of mutual learning, the proposed method pairs the target domain with each single source domain to train a conditional adversarial domain adaptation network as a branch network.
The proposed method outperforms the comparison methods and achieves the state-of-the-art performance.
arXiv Detail & Related papers (2020-03-29T04:31:43Z) - MADAN: Multi-source Adversarial Domain Aggregation Network for Domain
Adaptation [58.38749495295393]
Domain adaptation aims to learn a transferable model to bridge the domain shift between one labeled source domain and another sparsely labeled or unlabeled target domain.
Recent multi-source domain adaptation (MDA) methods do not consider the pixel-level alignment between sources and target.
We propose a novel MDA framework to address these challenges.
arXiv Detail & Related papers (2020-02-19T21:22:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.