Towards Robust Cross-Domain Recommendation with Joint Identifiability of User Preference
- URL: http://arxiv.org/abs/2411.17361v1
- Date: Tue, 26 Nov 2024 12:08:06 GMT
- Title: Towards Robust Cross-Domain Recommendation with Joint Identifiability of User Preference
- Authors: Jing Du, Zesheng Ye, Bin Guo, Zhiwen Yu, Jia Wu, Jian Yang, Michael Sheng, Lina Yao
- Abstract summary: Cross-domain recommendation (CDR) studies assume that disentangled domain-shared and domain-specific user representations can mitigate domain gaps and facilitate effective knowledge transfer.
We instead propose to model joint identifiability, which establishes a unique correspondence of user representations across domains.
We show that our method consistently surpasses the state of the art, even with weakly correlated tasks.
- Score: 31.22912313591263
- Abstract: Recent cross-domain recommendation (CDR) studies assume that disentangled domain-shared and domain-specific user representations can mitigate domain gaps and facilitate effective knowledge transfer. However, achieving perfect disentanglement is challenging in practice, because user behaviors in CDR are highly complex, and the true underlying user preferences cannot be fully captured through observed user-item interactions alone. Given this impracticability, we instead propose to model joint identifiability, which establishes unique correspondence of user representations across domains, ensuring consistent preference modeling even when user behaviors exhibit shifts in different domains. To achieve this, we introduce a hierarchical user preference modeling framework that organizes user representations by the neural network encoder's depth, allowing separate treatment of shallow and deeper subspaces. In the shallow subspace, our framework models the interest centroids for each user within each domain, probabilistically determining the users' interest belongings and selectively aligning these centroids across domains to ensure fine-grained consistency in domain-irrelevant features. For deeper subspace representations, we enforce joint identifiability by decomposing them into a shared cross-domain stable component and domain-variant components, linked by a bijective transformation for unique correspondence. Empirical studies on real-world CDR tasks with varying domain correlations demonstrate that our method consistently surpasses the state of the art, even on weakly correlated tasks, highlighting the importance of joint identifiability in achieving robust CDR.
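The abstract describes the architecture concretely enough to sketch. The following is a minimal, hypothetical PyTorch illustration, not the authors' implementation: the module names, dimensions, and the use of a plain square matrix for the bijective transformation are all assumptions made for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalUserEncoder(nn.Module):
    """Illustrative sketch of the paper's two ingredients: per-domain interest
    centroids in a shallow subspace, and a deep representation split into a
    cross-domain stable part and a domain-variant part linked bijectively."""
    def __init__(self, dim=64, n_centroids=8):
        super().__init__()
        self.shallow = nn.Linear(dim, dim)                     # shallow subspace
        self.deep = nn.Sequential(nn.ReLU(), nn.Linear(dim, dim))
        # interest centroids for each domain, living in the shallow subspace
        self.centroids = nn.ParameterDict({
            d: nn.Parameter(torch.randn(n_centroids, dim)) for d in ("A", "B")
        })
        # stand-in for the bijective transformation; a real implementation
        # would use an invertible flow layer to guarantee bijectivity
        self.bijection = nn.Parameter(torch.eye(dim // 2))

    def forward(self, x, domain):
        h = self.shallow(x)                                    # (B, dim)
        # probabilistic interest belonging: softmax over negative distances
        assign = F.softmax(-torch.cdist(h, self.centroids[domain]), dim=-1)
        z = self.deep(h)
        stable, variant = z.chunk(2, dim=-1)                   # stable vs. variant
        if domain == "A":                                      # map A's variant part
            variant = variant @ self.bijection                 # into B's coordinates
        return assign, stable, variant
```

Selective centroid alignment (e.g., an MSE term between matched centroids, weighted by the soft assignments) would then supply the fine-grained cross-domain consistency the abstract refers to.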
Related papers
- Cross-Domain Sequential Recommendation via Neural Process [9.01082886458853]
Cross-Domain Sequential Recommendation (CDSR) is a hot topic in sequence-based user interest modeling.
We show how to unleash the potential of non-overlapped users' behaviors to empower CDSR.
arXiv Detail & Related papers (2024-10-17T14:22:57Z)
- Cross-domain Transfer of Valence Preferences via a Meta-optimization Approach [17.545983294377958]
CVPM formalizes cross-domain interest transfer as a hybrid architecture of meta-learning and self-supervised learning.
With deep insights into user preferences, we employ differentiated encoders to learn their distributions.
In particular, we treat each user's mapping as two parts: a common transformation and a personalized bias, where the network that generates the personalized bias is produced by a meta-learner (a toy sketch follows this entry).
arXiv Detail & Related papers (2024-06-24T10:02:24Z)
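A toy sketch of the mapping split just described, assuming PyTorch and treating the meta-learner as a small hypernetwork over a user profile vector; names and shapes are illustrative, not CVPM's actual design.

```python
import torch
import torch.nn as nn

class ValencePreferenceMapper(nn.Module):
    """Hypothetical split mapping: a transformation shared by all users
    plus a per-user bias emitted by a meta-learner."""
    def __init__(self, dim=32):
        super().__init__()
        self.common = nn.Linear(dim, dim, bias=False)  # common transformation
        # meta-learner: maps a user's profile to that user's personalized bias
        self.meta = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, user_emb, user_profile):
        bias = self.meta(user_profile)           # personalized bias, one per user
        return self.common(user_emb) + bias      # mapped target-domain embedding
```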
- Joint Identifiability of Cross-Domain Recommendation via Hierarchical Subspace Disentanglement [19.29182848154183]
Cross-Domain Recommendation (CDR) seeks to enable effective knowledge transfer across domains.
While CDR methods describe user representations as a joint distribution over two domains, they fail to account for its joint identifiability.
We propose a hierarchical subspace disentanglement approach to explore the joint identifiability of the cross-domain joint distribution.
arXiv Detail & Related papers (2024-04-06T03:11:31Z)
- Robust Unsupervised Domain Adaptation by Retaining Confident Entropy via Edge Concatenation [7.953644697658355]
Unsupervised domain adaptation can mitigate the need for extensive pixel-level annotations to train semantic segmentation networks.
We introduce a novel approach to domain adaptation, leveraging the synergy of internal and external information within entropy-based adversarial networks (a toy entropy map follows this entry).
We devise a probability-sharing network that integrates diverse information for more effective segmentation.
arXiv Detail & Related papers (2023-10-11T02:50:16Z)
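The entropy-based ingredient of such adversarial UDA methods fits in a few lines. This is a generic sketch of per-pixel prediction entropy, not the paper's edge-concatenation or probability-sharing network:

```python
import torch
import torch.nn.functional as F

def entropy_map(logits):
    """Per-pixel prediction entropy for segmentation logits of shape (B, C, H, W).
    Entropy-based UDA penalizes high-entropy target predictions, often by
    feeding these maps to an adversarial discriminator."""
    p = F.softmax(logits, dim=1)
    return -(p * torch.log(p + 1e-8)).sum(dim=1)   # (B, H, W)

# toy usage: minimize mean entropy on unlabeled target images (19 classes)
target_logits = torch.randn(2, 19, 64, 64, requires_grad=True)
entropy_map(target_logits).mean().backward()
```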
- Cross-domain recommendation via user interest alignment [20.387327479445773]
Cross-domain recommendation aims to leverage knowledge from multiple domains to alleviate the data sparsity and cold-start problems in traditional recommender systems.
The general practice of this approach is to train user embeddings in each domain separately and then aggregate them in a plain manner.
We propose a novel cross-domain recommendation framework, namely COAST, to improve recommendation performance on dual domains.
arXiv Detail & Related papers (2023-01-26T23:54:41Z)
- AFAN: Augmented Feature Alignment Network for Cross-Domain Object Detection [90.18752912204778]
Unsupervised domain adaptation for object detection is a challenging problem with many real-world applications.
We propose a novel augmented feature alignment network (AFAN) which integrates intermediate domain image generation and domain-adversarial training (a gradient-reversal sketch follows this entry).
Our approach significantly outperforms state-of-the-art methods on standard benchmarks for both similar and dissimilar domain adaptations.
arXiv Detail & Related papers (2021-06-10T05:01:20Z)
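Domain-adversarial training is conventionally implemented with a gradient reversal layer (the DANN trick). The sketch below shows that generic mechanism only; AFAN's intermediate-domain image generation is not reproduced here.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, negated (scaled) gradient in the backward
    pass, so the feature encoder learns to fool the domain classifier."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# features pass unchanged to a (stand-in) domain classifier, while the
# encoder receives reversed gradients, pushing it toward domain invariance
feats = torch.randn(4, 128, requires_grad=True)
grad_reverse(feats).sum().backward()
print(feats.grad[0, 0])   # -1.0: the reversed gradient reached the features
```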
- Generalizable Representation Learning for Mixture Domain Face Anti-Spoofing [53.82826073959756]
Face anti-spoofing based on domain generalization (DG) has drawn growing attention due to its robustness to unseen scenarios.
We propose domain dynamic adjustment meta-learning (D2AM), which works without using domain labels.
arXiv Detail & Related papers (2021-05-06T06:04:59Z)
- Joint Disentangling and Adaptation for Cross-Domain Person Re-Identification [88.79480792084995]
We propose a joint learning framework that disentangles id-related/unrelated features and enforces adaptation to work on the id-related feature space exclusively.
Our model involves a disentangling module that encodes cross-domain images into a shared appearance space and two separate structure spaces, and an adaptation module that performs adversarial alignment and self-training on the shared appearance space (a minimal encoder sketch follows this entry).
arXiv Detail & Related papers (2020-07-20T17:57:02Z)
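A minimal sketch of that disentangling module, assuming one shared appearance head and per-domain structure heads; the layer choices are illustrative, not the paper's backbone:

```python
import torch
import torch.nn as nn

class DisentangleEncoder(nn.Module):
    """Shared appearance (id-related) space plus per-domain structure
    (id-unrelated) spaces; adaptation would act on appearance codes only."""
    def __init__(self, in_dim=256, app_dim=128, struct_dim=64):
        super().__init__()
        self.appearance = nn.Linear(in_dim, app_dim)   # shared across domains
        self.structure = nn.ModuleDict({
            "source": nn.Linear(in_dim, struct_dim),
            "target": nn.Linear(in_dim, struct_dim),
        })

    def forward(self, x, domain):
        return self.appearance(x), self.structure[domain](x)
```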
- Domain Conditioned Adaptation Network [90.63261870610211]
We propose a Domain Conditioned Adaptation Network (DCAN) to excite distinct convolutional channels with a domain conditioned channel attention mechanism.
This is the first work to explore domain-wise convolutional channel activation for deep DA networks (an attention sketch follows this entry).
arXiv Detail & Related papers (2020-05-14T04:23:24Z)
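One plausible reading of domain-conditioned channel attention is an SE-style squeeze-and-excitation block with a separate excitation branch per domain; the sketch below follows that reading and is an assumption, not DCAN's exact design:

```python
import torch
import torch.nn as nn

class DomainConditionedAttention(nn.Module):
    """SE-style channel attention with one excitation branch per domain,
    so source and target images can excite different channels."""
    def __init__(self, channels=64, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.excite = nn.ModuleDict({
            d: nn.Sequential(
                nn.Linear(channels, channels // reduction), nn.ReLU(),
                nn.Linear(channels // reduction, channels), nn.Sigmoid())
            for d in ("source", "target")
        })

    def forward(self, x, domain):               # x: (B, C, H, W)
        w = self.pool(x).flatten(1)             # squeeze: (B, C)
        w = self.excite[domain](w)              # domain-conditioned excitation
        return x * w[:, :, None, None]          # reweight channels
```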
- Cross-domain Detection via Graph-induced Prototype Alignment [114.8952035552862]
We propose a Graph-induced Prototype Alignment (GPA) framework to seek for category-level domain alignment.
In addition, to alleviate the negative effect of class imbalance on domain adaptation, we design a Class-reweighted Contrastive Loss.
Our approach outperforms existing methods by a remarkable margin (a simplified prototype-alignment loss follows this entry).
arXiv Detail & Related papers (2020-03-28T17:46:55Z)
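Category-level alignment can be illustrated with plain class-mean prototypes. GPA actually aggregates region proposals through a graph to build its prototypes, so the version below is a deliberate simplification:

```python
import torch
import torch.nn.functional as F

def prototype_alignment_loss(src_feats, src_labels, tgt_feats, tgt_labels, n_classes):
    """Pull per-class mean features (prototypes) of the two domains together.
    Labels on the target side would come from pseudo-labels in practice."""
    loss = src_feats.new_zeros(())
    for c in range(n_classes):
        s = src_feats[src_labels == c]
        t = tgt_feats[tgt_labels == c]
        if len(s) and len(t):                  # skip classes absent in a batch
            loss = loss + F.mse_loss(s.mean(0), t.mean(0))
    return loss
```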
- Bi-Directional Generation for Unsupervised Domain Adaptation [61.73001005378002]
Unsupervised domain adaptation supports an unlabeled target domain by relying on well-established source-domain information.
Conventional methods that forcefully reduce the domain discrepancy in the latent space can destroy the intrinsic data structure.
We propose a Bi-Directional Generation domain adaptation model with consistent classifiers interpolating two intermediate domains to bridge source and target domains.
arXiv Detail & Related papers (2020-02-12T09:45:39Z)