Retrievable Domain-Sensitive Feature Memory for Multi-Domain Recommendation
- URL: http://arxiv.org/abs/2405.12892v1
- Date: Tue, 21 May 2024 16:02:06 GMT
- Title: Retrievable Domain-Sensitive Feature Memory for Multi-Domain Recommendation
- Authors: Yuang Zhao, Zhaocheng Du, Qinglin Jia, Linxuan Zhang, Zhenhua Dong, Ruiming Tang
- Abstract summary: This paper focuses on features with significant differences across various domains in both distributions and effects on model predictions.
We propose a domain-sensitive feature attribution method to identify features that best reflect domain distinctions from the feature set.
We design a memory architecture that extracts domain-specific information from domain-sensitive features for the model to retrieve and integrate.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the increase in the business scale and number of domains in online advertising, multi-domain ad recommendation has become a mainstream solution in the industry. The core of multi-domain recommendation is effectively modeling the commonalities and distinctions among domains. Existing works are dedicated to designing model architectures for implicit multi-domain modeling while overlooking an in-depth investigation from a more fundamental perspective of feature distributions. This paper focuses on features with significant differences across various domains in both distributions and effects on model predictions. We refer to these features as domain-sensitive features, which serve as carriers of domain distinctions and are crucial for multi-domain modeling. Experiments demonstrate that existing multi-domain modeling methods may neglect domain-sensitive features, indicating insufficient learning of domain distinctions. To avoid this neglect, we propose a domain-sensitive feature attribution method to identify features that best reflect domain distinctions from the feature set. Further, we design a memory architecture that extracts domain-specific information from domain-sensitive features for the model to retrieve and integrate, thereby enhancing the awareness of domain distinctions. Extensive offline and online experiments demonstrate the superiority of our method in capturing domain distinctions and improving multi-domain recommendation performance.
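The abstract does not detail how domain-sensitive features are attributed, but it defines them as features whose value distributions differ significantly across domains. As an illustrative sketch only (not the authors' method), one could rank features by the average pairwise Jensen-Shannon divergence of their per-domain value distributions; all function and feature names below are hypothetical.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log(a / b)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def rank_domain_sensitive_features(domain_hists):
    """domain_hists: {feature_name: {domain_id: value histogram}}.
    Scores each feature by the mean pairwise JS divergence of its
    value distribution across domains; higher = more domain-sensitive."""
    scores = {}
    for feat, hists in domain_hists.items():
        doms = list(hists.values())
        pairs = [(i, j) for i in range(len(doms)) for j in range(i + 1, len(doms))]
        scores[feat] = float(np.mean([js_divergence(doms[i], doms[j]) for i, j in pairs]))
    # Sort descending: top-ranked features are candidates for domain-sensitive features.
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Toy example: "ad_category" is skewed differently in each domain,
# while "user_age" is identically distributed, so it scores ~0.
hists = {
    "ad_category": {"d1": [0.7, 0.2, 0.1], "d2": [0.1, 0.2, 0.7]},
    "user_age":    {"d1": [0.3, 0.4, 0.3], "d2": [0.3, 0.4, 0.3]},
}
ranked = rank_domain_sensitive_features(hists)
```

Note that the paper's actual attribution method also accounts for a feature's effect on model predictions, which a purely distributional score like this one does not capture.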
Related papers
- Domain-Aware Fine-Tuning of Foundation Models [18.336887359257087]
Foundation models (FMs) have revolutionized computer vision, enabling effective learning across different domains.
This paper investigates the zero-shot domain adaptation potential of FMs by comparing different backbone architectures.
We introduce novel domain-aware components that leverage domain-related textual embeddings.
arXiv Detail & Related papers (2024-07-03T20:10:55Z) - Large-Scale Multi-Domain Recommendation: an Automatic Domain Feature Extraction and Personalized Integration Framework [30.46152832695426]
We propose an Automatic Domain Feature Extraction and Personalized Integration (DFEI) framework for the large-scale multi-domain recommendation.
The framework automatically transforms the behavior of each individual user into an aggregation of all user behaviors within the domain, which serves as the domain features.
Experimental results on both public and industrial datasets, consisting of over 20 domains, clearly demonstrate that the proposed framework achieves significantly better performance compared with SOTA baselines.
arXiv Detail & Related papers (2024-04-12T09:57:17Z) - DIGIC: Domain Generalizable Imitation Learning by Causal Discovery [69.13526582209165]
Causality has been combined with machine learning to produce robust representations for domain generalization.
We make a different attempt by leveraging the demonstration data distribution to discover causal features for a domain generalizable policy.
We design a novel framework, called DIGIC, to identify the causal features by finding the direct cause of the expert action from the demonstration data distribution.
arXiv Detail & Related papers (2024-02-29T07:09:01Z) - MetaDefa: Meta-learning based on Domain Enhancement and Feature Alignment for Single Domain Generalization [12.095382249996032]
A novel meta-learning method based on domain enhancement and feature alignment (MetaDefa) is proposed to improve the model generalization performance.
In this paper, domain-invariant features are fully explored by focusing on similar target regions between the feature spaces of the source and augmented domains.
Extensive experiments on two publicly available datasets show that MetaDefa has significant generalization performance advantages in unknown multiple target domains.
arXiv Detail & Related papers (2023-11-27T15:13:02Z) - Aggregation of Disentanglement: Reconsidering Domain Variations in Domain Generalization [9.577254317971933]
We argue that domain variations also contain useful information, i.e., classification-aware information, for downstream tasks.
We propose a novel paradigm called Domain Disentanglement Network (DDN) to disentangle the domain expert features from the source domain images.
We also propound a new contrastive learning method to guide the domain expert features to form a more balanced and separable feature space.
arXiv Detail & Related papers (2023-02-05T09:48:57Z) - Label Distribution Learning for Generalizable Multi-source Person
Re-identification [48.77206888171507]
Person re-identification (Re-ID) is a critical technique in video surveillance systems.
It is difficult to directly apply a supervised Re-ID model to arbitrary unseen domains.
We propose a novel label distribution learning (LDL) method to address the generalizable multi-source person Re-ID task.
arXiv Detail & Related papers (2022-04-12T15:59:10Z) - Self-Adversarial Disentangling for Specific Domain Adaptation [52.1935168534351]
Domain adaptation aims to bridge the domain shifts between the source and target domains.
Recent methods typically do not consider explicit prior knowledge on a specific dimension.
arXiv Detail & Related papers (2021-08-08T02:36:45Z) - AFAN: Augmented Feature Alignment Network for Cross-Domain Object
Detection [90.18752912204778]
Unsupervised domain adaptation for object detection is a challenging problem with many real-world applications.
We propose a novel augmented feature alignment network (AFAN) which integrates intermediate domain image generation and domain-adversarial training.
Our approach significantly outperforms the state-of-the-art methods on standard benchmarks for both similar and dissimilar domain adaptations.
arXiv Detail & Related papers (2021-06-10T05:01:20Z) - Adaptively-Accumulated Knowledge Transfer for Partial Domain Adaptation [66.74638960925854]
Partial domain adaptation (PDA) deals with a realistic and challenging problem in which the source domain label space subsumes the target domain label space.
We propose an Adaptively-Accumulated Knowledge Transfer framework (A$^2$KT) to align the relevant categories across the two domains.
arXiv Detail & Related papers (2020-08-27T00:53:43Z) - Domain Conditioned Adaptation Network [90.63261870610211]
We propose a Domain Conditioned Adaptation Network (DCAN) to excite distinct convolutional channels with a domain conditioned channel attention mechanism.
This is the first work to explore the domain-wise convolutional channel activation for deep DA networks.
arXiv Detail & Related papers (2020-05-14T04:23:24Z) - Unsupervised Domain Adaptation with Progressive Domain Augmentation [34.887690018011675]
We propose a novel unsupervised domain adaptation method based on progressive domain augmentation.
The proposed method generates virtual intermediate domains via domain augmentation, progressively augmenting the source domain and bridging the source-target domain divergence.
We conduct experiments on multiple domain adaptation tasks, and the results show that the proposed method achieves state-of-the-art performance.
arXiv Detail & Related papers (2020-04-03T18:45:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.