One for Dozens: Adaptive REcommendation for All Domains with Counterfactual Augmentation
- URL: http://arxiv.org/abs/2412.11905v2
- Date: Wed, 18 Dec 2024 02:41:21 GMT
- Title: One for Dozens: Adaptive REcommendation for All Domains with Counterfactual Augmentation
- Authors: Huishi Luo, Yiwen Chen, Yiqing Wu, Fuzhen Zhuang, Deqing Wang,
- Abstract summary: Multi-domain recommendation (MDR) aims to enhance recommendation performance across various domains.
Traditional MDR algorithms typically focus on fewer than five domains.
We propose Adaptive REcommendation for All Domains with counterfactual augmentation (AREAD).
- Score: 32.945861240561
- Abstract: Multi-domain recommendation (MDR) aims to enhance recommendation performance across various domains. However, real-world recommender systems in online platforms often need to handle dozens or even hundreds of domains, far exceeding the capabilities of traditional MDR algorithms, which typically focus on fewer than five domains. Key challenges include a substantial increase in parameter count, high maintenance costs, and intricate knowledge transfer patterns across domains. Furthermore, minor domains often suffer from data sparsity, leading to inadequate training in classical methods. To address these issues, we propose Adaptive REcommendation for All Domains with counterfactual augmentation (AREAD). AREAD employs a hierarchical structure with a limited number of expert networks at several layers, to effectively capture domain knowledge at different granularities. To adaptively capture the knowledge transfer pattern across domains, we generate and iteratively prune a hierarchical expert network selection mask for each domain during training. Additionally, counterfactual assumptions are used to augment data in minor domains, supporting their iterative mask pruning. Our experiments on two public datasets, each encompassing over twenty domains, demonstrate AREAD's effectiveness, especially in data-sparse domains. Source code is available at https://github.com/Chrissie-Law/AREAD-Multi-Domain-Recommendation.
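The abstract describes the core mechanism at a high level. Below is a minimal, hypothetical sketch of that idea: a hierarchical mixture of experts with one binary expert-selection mask per domain, iteratively pruned during training. It is not the authors' implementation (see the linked repository for that); the class name, layer sizes, and the utility-based pruning criterion are all assumptions.

```python
# Minimal, hypothetical sketch of AREAD's core idea: a hierarchical
# expert network with one binary expert-selection mask per domain and
# layer, iteratively pruned during training. Names, shapes, and the
# pruning criterion are assumptions, not the authors' code.
import torch
import torch.nn as nn


class HierarchicalExperts(nn.Module):
    def __init__(self, num_domains, input_dim, hidden_dim,
                 experts_per_layer=(8, 4, 2)):
        super().__init__()
        self.layers = nn.ModuleList()
        dim = input_dim
        for n_experts in experts_per_layer:
            # Coarser granularity (fewer experts) at deeper layers.
            self.layers.append(nn.ModuleList(
                nn.Sequential(nn.Linear(dim, hidden_dim), nn.ReLU())
                for _ in range(n_experts)))
            dim = hidden_dim
        # One binary mask per (domain, layer); all experts open at first.
        self.masks = [[torch.ones(n) for n in experts_per_layer]
                      for _ in range(num_domains)]
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, x, domain_id):
        for experts, mask in zip(self.layers, self.masks[domain_id]):
            outs = torch.stack([e(x) for e in experts])  # (E, B, H)
            w = mask / mask.sum().clamp(min=1.0)         # average the kept experts
            x = (w.view(-1, 1, 1) * outs).sum(dim=0)     # (B, H)
        return torch.sigmoid(self.head(x)).squeeze(-1)   # CTR-style score

    @torch.no_grad()
    def prune_mask(self, domain_id, utilities, keep_fraction=0.5):
        """One pruning step: per layer, keep only the top experts for this
        domain by some utility score and close the rest."""
        for mask, util in zip(self.masks[domain_id], utilities):
            k = max(1, int(keep_fraction * int(mask.sum().item())))
            keep = torch.topk(util * mask, k).indices
            mask.zero_()
            mask[keep] = 1.0
```

In AREAD itself, counterfactual augmentation of minor domains supports this iterative mask pruning; the `utilities` argument above is a stand-in for whatever criterion actually drives that selection.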
Related papers
- Large-Scale Multi-Domain Recommendation: an Automatic Domain Feature Extraction and Personalized Integration Framework [30.46152832695426]
We propose an Automatic Domain Feature Extraction and Personalized Integration (DFEI) framework for the large-scale multi-domain recommendation.
The framework automatically transforms the behavior of each individual user into an aggregation of all user behaviors within the domain, which serves as the domain features (see the sketch below).
Experimental results on both public and industrial datasets, consisting of over 20 domains, clearly demonstrate that the proposed framework achieves significantly better performance compared with SOTA baselines.
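As a rough illustration of the aggregation idea in this entry (not the DFEI code; the mean pooling and tensor shapes are assumptions), the domain feature can be read as a pooled summary of all user behaviors within the domain:

```python
# Illustrative sketch of the DFEI aggregation idea: pool the behavior
# embeddings of every user in a domain into one domain-feature vector.
# Mean pooling and the shapes are assumptions, not the paper's design.
import torch


def domain_feature(user_behavior_emb: torch.Tensor) -> torch.Tensor:
    """(num_users_in_domain, emb_dim) -> (emb_dim,) domain feature."""
    return user_behavior_emb.mean(dim=0)


# A sample from domain d can then be enriched with its domain feature:
#   x = torch.cat([user_emb, item_emb, domain_feature(domain_embs[d])])
```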
arXiv Detail & Related papers (2024-04-12T09:57:17Z)
- Role Prompting Guided Domain Adaptation with General Capability Preserve for Large Language Models [55.51408151807268]
When tailored to specific domains, Large Language Models (LLMs) tend to experience catastrophic forgetting.
Conversely, crafting a versatile model for multiple domains simultaneously often results in a decline in overall performance.
We present the RolE Prompting Guided Multi-Domain Adaptation (REGA) strategy.
arXiv Detail & Related papers (2024-03-05T08:22:41Z)
- Virtual Classification: Modulating Domain-Specific Knowledge for Multidomain Crowd Counting [67.38137379297717]
Multidomain crowd counting aims to learn a general model for multiple diverse datasets.
Deep networks prefer modeling distributions of the dominant domains instead of all domains, which is known as domain bias.
We propose a Modulating Domain-specific Knowledge Network (MDKNet) to handle the domain bias issue in multidomain crowd counting.
arXiv Detail & Related papers (2024-02-06T06:49:04Z)
- Adapting Self-Supervised Representations to Multi-Domain Setups [47.03992469282679]
Current state-of-the-art self-supervised approaches are effective when trained on individual domains but show limited generalization to unseen domains.
We propose a general-purpose, lightweight Domain Disentanglement Module that can be plugged into any self-supervised encoder.
arXiv Detail & Related papers (2023-09-07T20:05:39Z)
- Domain Generalization for Domain-Linked Classes [8.738092015092207]
In the real world, classes may often be domain-linked, i.e., expressed only in a specific domain.
We propose a Fair and cONtrastive feature-space regularization algorithm for Domain-linked DG, FOND.
arXiv Detail & Related papers (2023-06-01T16:39:50Z)
- META: Mimicking Embedding via oThers' Aggregation for Generalizable Person Re-identification [68.39849081353704]
Domain generalizable (DG) person re-identification (ReID) aims to test across unseen domains without access to the target domain data at training time.
This paper presents a new approach called Mimicking Embedding via oThers' Aggregation (META) for DG ReID.
arXiv Detail & Related papers (2021-12-16T08:06:50Z)
- Domain Consistency Regularization for Unsupervised Multi-source Domain Adaptive Classification [57.92800886719651]
Deep learning-based multi-source unsupervised domain adaptation (MUDA) has been actively studied in recent years.
Domain shift in MUDA exists not only between the source and target domains but also among multiple source domains.
We propose an end-to-end trainable network that exploits domain Consistency Regularization for unsupervised Multi-source domain Adaptive classification.
arXiv Detail & Related papers (2021-06-16T07:29:27Z)
- Open Domain Generalization with Domain-Augmented Meta-Learning [83.59952915761141]
We study the novel and practical problem of Open Domain Generalization (OpenDG).
We propose a Domain-Augmented Meta-Learning framework to learn open-domain generalizable representations.
Experiment results on various multi-domain datasets demonstrate that the proposed Domain-Augmented Meta-Learning (DAML) outperforms prior methods for unseen domain recognition.
arXiv Detail & Related papers (2021-04-08T09:12:24Z)
- Multi-Domain Spoken Language Understanding Using Domain- and Task-Aware Parameterization [78.93669377251396]
Spoken language understanding has been addressed as a supervised learning problem, where a set of training data is available for each domain.
One existing approach solves the problem by conducting multi-domain learning, using shared parameters for joint training across domains.
We propose to improve the parameterization of this method by using domain-specific and task-specific model parameters (see the sketch after this entry).
arXiv Detail & Related papers (2020-04-30T15:15:40Z)
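The domain- and task-aware parameterization in the last entry can be sketched as a shared encoder combined with small domain-specific and task-specific parameter sets. The module layout below is an illustrative assumption, not the paper's model:

```python
# Hedged sketch of domain- and task-aware parameterization for SLU:
# a shared embedding/encoder plus per-domain adapters and per-task heads.
# All names and sizes are illustrative assumptions.
import torch
import torch.nn as nn


class DomainTaskSLU(nn.Module):
    def __init__(self, vocab_size, num_domains, num_tasks,
                 emb_dim=64, hidden=128, num_labels=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)             # shared
        self.encoder = nn.GRU(emb_dim, hidden, batch_first=True)   # shared
        self.domain_adapters = nn.ModuleList(                      # per domain
            nn.Linear(hidden, hidden) for _ in range(num_domains))
        self.task_heads = nn.ModuleList(                           # per task
            nn.Linear(hidden, num_labels) for _ in range(num_tasks))

    def forward(self, token_ids, domain_id, task_id):
        _, h = self.encoder(self.embed(token_ids))   # h: (1, B, hidden)
        h = self.domain_adapters[domain_id](h.squeeze(0))
        return self.task_heads[task_id](h)           # logits for this task
```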
This list is automatically generated from the titles and abstracts of the papers on this site.