ABXI: Invariant Interest Adaptation for Task-Guided Cross-Domain Sequential Recommendation
- URL: http://arxiv.org/abs/2501.15118v2
- Date: Thu, 13 Feb 2025 17:27:18 GMT
- Title: ABXI: Invariant Interest Adaptation for Task-Guided Cross-Domain Sequential Recommendation
- Authors: Qingtian Bian, Marcus Vinícius de Carvalho, Tieying Li, Jiaxing Xu, Hui Fang, Yiping Ke
- Abstract summary: Cross-Domain Sequential Recommendation (CDSR) has recently gained attention for countering data sparsity by transferring knowledge across domains.
One key challenge is to correctly extract the knowledge shared across domain-specific and cross-domain sequences and to transfer it appropriately.
We propose the A-B-Cross-to-Invariant Learning Recommender (ABXI) to address these challenges.
- Score: 6.234890828342688
- Abstract: Cross-Domain Sequential Recommendation (CDSR) has recently gained attention for countering data sparsity by transferring knowledge across domains. A common approach merges domain-specific sequences into cross-domain sequences, serving as bridges to connect domains. One key challenge is to correctly extract the shared knowledge among these sequences and appropriately transfer it. Most existing works directly transfer unfiltered cross-domain knowledge rather than extracting domain-invariant components and adaptively integrating them into domain-specific modeling. Another challenge lies in aligning the domain-specific and cross-domain sequences. Existing methods align these sequences based on timestamps, but this approach can cause prediction mismatches when the current tokens and their targets belong to different domains. In such cases, the domain-specific knowledge carried by the current tokens may degrade performance. To address these challenges, we propose the A-B-Cross-to-Invariant Learning Recommender (ABXI). Specifically, leveraging LoRA's effectiveness for efficient adaptation, ABXI incorporates two types of LoRAs to facilitate knowledge adaptation. First, all sequences are processed through a shared encoder that employs a domain LoRA for each sequence, thereby preserving unique domain characteristics. Next, we introduce an invariant projector that extracts domain-invariant interests from cross-domain representations, utilizing an invariant LoRA to adapt these interests into modeling each specific domain. In addition, to avoid prediction mismatches, all domain-specific sequences are aligned to match the domains of the cross-domain ground truths. Experimental results on three datasets demonstrate that our approach outperforms other CDSR counterparts by a large margin. The code is available at https://github.com/DiMarzioBian/ABXI.
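The following is a minimal PyTorch sketch of the two-LoRA scheme described in the abstract: a shared encoder with a per-sequence domain LoRA, plus an invariant projector whose invariant LoRAs adapt cross-domain interests into each domain. All module names, dimensions, the rank, and the additive fusion are illustrative assumptions; the authors' actual implementation is in the linked repository.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen shared projection plus a selectable low-rank (LoRA) update."""
    def __init__(self, dim: int, rank: int, n_adapters: int):
        super().__init__()
        self.shared = nn.Linear(dim, dim)            # shared across all sequences
        for p in self.shared.parameters():
            p.requires_grad_(False)                  # only the LoRAs are trained
        self.A = nn.Parameter(torch.randn(n_adapters, dim, rank) * 0.01)
        self.B = nn.Parameter(torch.zeros(n_adapters, rank, dim))

    def forward(self, x: torch.Tensor, adapter: int) -> torch.Tensor:
        # y = x W + x A_d B_d : shared weight plus the chosen low-rank update
        return self.shared(x) + x @ self.A[adapter] @ self.B[adapter]

class ABXISketch(nn.Module):
    def __init__(self, dim: int = 64, rank: int = 8):
        super().__init__()
        # shared encoder with three LoRAs: domain A, domain B, and the
        # merged cross-domain ("X") sequence
        self.encoder = LoRALinear(dim, rank, n_adapters=3)
        # invariant projector: extracts domain-invariant interests from the
        # cross-domain representation; its invariant LoRAs adapt those
        # interests into each specific domain
        self.projector = LoRALinear(dim, rank, n_adapters=2)

    def forward(self, seq_a, seq_b, seq_x):
        h_a = self.encoder(seq_a, adapter=0)         # domain-A encoding
        h_b = self.encoder(seq_b, adapter=1)         # domain-B encoding
        h_x = self.encoder(seq_x, adapter=2)         # cross-domain encoding
        inv_a = self.projector(h_x, adapter=0)       # invariant -> domain A
        inv_b = self.projector(h_x, adapter=1)       # invariant -> domain B
        return h_a + inv_a, h_b + inv_b              # toy additive fusion

# sequences assumed pre-aligned to one length, mirroring the paper's
# alignment of domain-specific sequences to the cross-domain ground truths
model = ABXISketch()
seq_a = torch.randn(4, 10, 64)                       # (batch, length, dim)
seq_b = torch.randn(4, 10, 64)
seq_x = torch.randn(4, 10, 64)
out_a, out_b = model(seq_a, seq_b, seq_x)            # per-domain predictions
```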
Related papers
- A Zero-Shot Generalization Framework for LLM-Driven Cross-Domain Sequential Recommendation [5.512301280728178]
Zero-shot cross-domain sequential recommendation (ZCDSR) enables predictions in unseen domains without the need for additional training or fine-tuning.
Recent advancements in large language models (LLMs) have greatly improved ZCDSR by leveraging rich pretrained representations to facilitate cross-domain knowledge transfer.
We propose a novel framework designed to enhance LLM-based ZCDSR by improving cross-domain alignment at both the item and sequential levels.
arXiv Detail & Related papers (2025-01-31T15:43:21Z)
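As a rough illustration of the item-level alignment mentioned in the entry above, the toy loss below pulls the statistics of LLM-derived item embeddings from two domains together. The moment-matching objective is an assumption for illustration, not the paper's exact loss.

```python
# Toy item-level cross-domain alignment on (frozen) LLM item embeddings:
# penalize the gap between the two domains' embedding statistics.
import torch

def item_level_alignment(src_items: torch.Tensor, tgt_items: torch.Tensor):
    # src_items, tgt_items: (n_items, dim) LLM-derived item embeddings
    mean_gap = (src_items.mean(0) - tgt_items.mean(0)).pow(2).sum()
    var_gap = (src_items.var(0) - tgt_items.var(0)).pow(2).sum()
    return mean_gap + var_gap

src = torch.randn(100, 32)
tgt = torch.randn(80, 32) + 0.5        # shifted, i.e. a different domain
loss = item_level_alignment(src, tgt)  # scalar to minimize during training
```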
- Mixed Attention Network for Cross-domain Sequential Recommendation [63.983590953727386]
We propose a Mixed Attention Network (MAN) with local and global attention modules to extract the domain-specific and cross-domain information.
Experimental results on two real-world datasets demonstrate the superiority of our proposed model.
arXiv Detail & Related papers (2023-11-14T16:07:16Z)
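A minimal sketch of the local/global attention split described in the MAN entry above: local attention runs within each domain's own sequence, while global attention runs over the merged cross-domain sequence. The wiring and hyperparameters are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

dim, heads = 64, 4
local_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
global_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

seq_a = torch.randn(2, 10, dim)                    # domain-A sequence
seq_b = torch.randn(2, 12, dim)                    # domain-B sequence
merged = torch.cat([seq_a, seq_b], dim=1)          # cross-domain sequence

local_a, _ = local_attn(seq_a, seq_a, seq_a)       # domain-specific interest
local_b, _ = local_attn(seq_b, seq_b, seq_b)
global_x, _ = global_attn(merged, merged, merged)  # cross-domain interest
```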
- FedDCSR: Federated Cross-domain Sequential Recommendation via Disentangled Representation Learning [17.497009723665116]
We propose FedDCSR, a novel cross-domain sequential recommendation framework via disentangled representation learning.
We introduce an approach called inter-intra domain sequence representation disentanglement (SRD) to disentangle user sequence features into domain-shared and domain-exclusive features.
In addition, we design an intra domain contrastive infomax (CIM) strategy to learn richer domain-exclusive features of users by performing data augmentation on user sequences.
arXiv Detail & Related papers (2023-09-15T14:23:20Z)
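A hedged sketch of the SRD/CIM ideas in the FedDCSR entry above: a user representation is split into domain-shared and domain-exclusive parts, and an InfoNCE-style loss is applied between two augmented views of the exclusive part. The split, the item-dropout augmentation, and the temperature are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def augment(seq_emb: torch.Tensor, drop: float = 0.2) -> torch.Tensor:
    mask = (torch.rand_like(seq_emb[..., :1]) > drop).float()
    return seq_emb * mask                      # randomly drop item positions

def info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.2):
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.T / tau                   # (batch, batch) similarities
    labels = torch.arange(z1.size(0))          # positives on the diagonal
    return F.cross_entropy(logits, labels)

seq = torch.randn(8, 20, 64)                       # batch of user sequences
shared, exclusive = seq[..., :32], seq[..., 32:]   # toy disentanglement split
v1 = augment(exclusive).mean(dim=1)                # view 1, pooled over items
v2 = augment(exclusive).mean(dim=1)                # view 2
cim_loss = info_nce(v1, v2)          # encourages richer exclusive features
```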
- One Model for All Domains: Collaborative Domain-Prefix Tuning for Cross-Domain NER [92.79085995361098]
Cross-domain NER is a challenging task that addresses the low-resource problem in practical scenarios.
Previous solutions mainly obtain an NER model from pre-trained language models (PLMs) trained on data from a rich-resource domain and then adapt it to the target domain.
We introduce Collaborative Domain-Prefix Tuning for cross-domain NER based on text-to-text generative PLMs.
arXiv Detail & Related papers (2023-01-25T05:16:43Z)
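A generic sketch of the prefix-tuning mechanism behind the entry above: trainable per-domain prefix vectors are prepended to the frozen PLM's input embeddings, so only the prefix is updated for a new domain. The stand-in encoder and sizes are assumptions, not the paper's collaborative scheme.

```python
import torch
import torch.nn as nn

dim, prefix_len = 64, 8
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), num_layers=2)
for p in encoder.parameters():
    p.requires_grad_(False)                    # the PLM stays frozen

domain_prefix = nn.Parameter(torch.randn(prefix_len, dim) * 0.02)

tokens = torch.randn(2, 16, dim)               # embedded input tokens
prefixed = torch.cat([domain_prefix.expand(2, -1, -1), tokens], dim=1)
hidden = encoder(prefixed)                     # only the prefix is trainable
```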
- Adaptive Methods for Aggregated Domain Generalization [26.215904177457997]
In many settings, privacy concerns prohibit obtaining domain labels for the training data samples.
We propose a domain-adaptive approach to this problem, which operates in two steps.
Our approach achieves state-of-the-art performance on a variety of domain generalization benchmarks without using domain labels.
arXiv Detail & Related papers (2021-12-09T08:57:01Z)
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
arXiv Detail & Related papers (2021-06-10T06:32:30Z)
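A sketch of the general cross-domain contrastive recipe named in the entry above: same-(pseudo-)class source/target features form positive pairs, and all other target features are negatives. The pseudo-labels and temperature are assumptions about the standard recipe, not this paper's exact objective.

```python
import torch
import torch.nn.functional as F

def cross_domain_contrastive(src_f, src_y, tgt_f, tgt_pseudo_y, tau=0.1):
    src_f = F.normalize(src_f, dim=-1)
    tgt_f = F.normalize(tgt_f, dim=-1)
    logits = src_f @ tgt_f.T / tau                  # (n_src, n_tgt) scores
    pos = (src_y[:, None] == tgt_pseudo_y[None, :]).float()
    log_prob = logits.log_softmax(dim=1)
    # average log-likelihood of same-class cross-domain pairs
    return -(log_prob * pos).sum(1).div(pos.sum(1).clamp(min=1)).mean()

src = torch.randn(16, 64); src_y = torch.randint(0, 4, (16,))
tgt = torch.randn(24, 64); tgt_y = torch.randint(0, 4, (24,))  # pseudo-labels
loss = cross_domain_contrastive(src, src_y, tgt, tgt_y)
```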
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
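A toy sketch of the affinity-based pair mining step from the ILA-DA entry above: target samples are ranked by cosine affinity to each source sample, and the top-k / bottom-k serve as similar and dissimilar candidates for a multi-sample contrastive loss. The value of k and the affinity measure are assumptions.

```python
import torch
import torch.nn.functional as F

def mine_pairs(src_f: torch.Tensor, tgt_f: torch.Tensor, k: int = 3):
    # cosine affinity between every source and target sample
    affinity = F.normalize(src_f, dim=-1) @ F.normalize(tgt_f, dim=-1).T
    similar = affinity.topk(k, dim=1).indices          # (n_src, k) positives
    dissimilar = (-affinity).topk(k, dim=1).indices    # (n_src, k) negatives
    return similar, dissimilar

src = torch.randn(16, 64)
tgt = torch.randn(24, 64)
pos_idx, neg_idx = mine_pairs(src, tgt)   # feed into a contrastive objective
```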
- Domain Conditioned Adaptation Network [90.63261870610211]
We propose a Domain Conditioned Adaptation Network (DCAN) to excite distinct convolutional channels with a domain conditioned channel attention mechanism.
This is the first work to explore domain-wise convolutional channel activation for deep DA networks.
arXiv Detail & Related papers (2020-05-14T04:23:24Z)
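A minimal sketch of a domain-conditioned channel attention gate in the spirit of the DCAN entry above: a squeeze-and-excitation-style gate whose excitation branch is selected per domain, letting source and target excite different convolutional channels. Shapes and the reduction ratio are assumptions.

```python
import torch
import torch.nn as nn

class DomainConditionedSE(nn.Module):
    def __init__(self, channels: int = 32, reduction: int = 4, n_domains: int = 2):
        super().__init__()
        self.gates = nn.ModuleList(
            nn.Sequential(nn.Linear(channels, channels // reduction),
                          nn.ReLU(),
                          nn.Linear(channels // reduction, channels),
                          nn.Sigmoid())
            for _ in range(n_domains))

    def forward(self, x: torch.Tensor, domain: int) -> torch.Tensor:
        s = x.mean(dim=(2, 3))                 # squeeze: global average pool
        g = self.gates[domain](s)              # excite with the domain's gate
        return x * g[:, :, None, None]         # re-weight channels

se = DomainConditionedSE()
feat = torch.randn(2, 32, 8, 8)                # conv feature map
src_out = se(feat, domain=0)                   # source-domain activation
tgt_out = se(feat, domain=1)                   # target-domain activation
```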
- Mind the Gap: Enlarging the Domain Gap in Open Set Domain Adaptation [65.38975706997088]
Open set domain adaptation (OSDA) assumes the presence of unknown classes in the target domain.
We show that existing state-of-the-art methods suffer a considerable performance drop in the presence of larger domain gaps.
We propose a novel framework to specifically address the larger domain gaps.
arXiv Detail & Related papers (2020-03-08T14:20:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.