LLM-EDT: Large Language Model Enhanced Cross-domain Sequential Recommendation with Dual-phase Training
- URL: http://arxiv.org/abs/2511.19931v1
- Date: Tue, 25 Nov 2025 05:18:04 GMT
- Title: LLM-EDT: Large Language Model Enhanced Cross-domain Sequential Recommendation with Dual-phase Training
- Authors: Ziwei Liu, Qidong Liu, Wanyu Wang, Yejing Wang, Tong Xu, Wei Huang, Chong Chen, Peng Chuan, Xiangyu Zhao
- Abstract summary: Cross-domain Sequential Recommendation (CDSR) has been proposed to enrich user-item interactions by incorporating information from various domains. Despite current progress, the imbalance issue and the transition issue hinder further development of CDSR. We propose an LLM-Enhanced Cross-domain Sequential Recommendation framework with Dual-phase Training (LLM-EDT).
- Score: 53.539682966282534
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cross-domain Sequential Recommendation (CDSR) has been proposed to enrich user-item interactions by incorporating information from various domains. Despite current progress, the imbalance issue and the transition issue hinder further development of CDSR. The former describes the phenomenon where interactions in one domain dominate the entire behavior sequence, making it difficult to capture the domain-specific features of the other domain. The latter refers to the difficulty of capturing users' cross-domain preferences within the mixed interaction sequence, resulting in poor next-item prediction performance for specific domains. With world knowledge and powerful reasoning ability, Large Language Models (LLMs) partially alleviate these issues by acting as a generator and an encoder. However, current LLM-enhanced CDSR methods remain underexplored and fail to address the irrelevant-noise and rough-profiling problems. To tackle these challenges, we propose an LLM-Enhanced Cross-domain Sequential Recommendation framework with Dual-phase Training (LLM-EDT). To address the imbalance issue while introducing less irrelevant noise, we first propose a transferable item augmenter that adaptively generates plausible cross-domain behaviors for users. Then, to alleviate the transition issue, we introduce a dual-phase training strategy that grounds the domain-specific thread in a domain-shared background. As for the rough-profiling problem, we devise a domain-aware profiling module that summarizes the user's preference in each domain and adaptively aggregates the summaries into a comprehensive user profile. Experiments on three public datasets validate the effectiveness of the proposed LLM-EDT. To facilitate reproducibility, we have released the detailed code online at https://anonymous.4open.science/r/LLM-EDT-583F.
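The abstract describes a domain-aware profiling module that summarizes the user's preference in each domain and adaptively aggregates the summaries. As a rough illustration of that aggregation idea only, here is a minimal plain-Python sketch; the function names, the mean-pooling summary, and the interaction-count weighting are illustrative assumptions, not the paper's actual design:

```python
# Illustrative sketch (assumed design, not LLM-EDT's implementation):
# summarize a user's preference per domain by mean-pooling item vectors,
# then aggregate the per-domain profiles, weighting each domain by its
# share of the user's total interactions.

def domain_profile(interactions):
    """Mean-pool the item vectors of one domain's interactions."""
    dim = len(interactions[0])
    totals = [0.0] * dim
    for vec in interactions:
        for i, v in enumerate(vec):
            totals[i] += v
    return [t / len(interactions) for t in totals]

def aggregate_profiles(domain_interactions):
    """Combine per-domain profiles into one user profile, weighting each
    domain by its fraction of the user's total interactions."""
    total = sum(len(seq) for seq in domain_interactions.values())
    dim = len(next(iter(domain_interactions.values()))[0])
    profile = [0.0] * dim
    for seq in domain_interactions.values():
        weight = len(seq) / total
        summary = domain_profile(seq)
        for i in range(dim):
            profile[i] += weight * summary[i]
    return profile

user = {
    "books":  [[1.0, 0.0], [1.0, 0.0], [1.0, 0.0]],  # dominant domain
    "movies": [[0.0, 1.0]],                           # sparse domain
}
print(aggregate_profiles(user))  # books dominate: [0.75, 0.25]
```

Note how the dominant domain's profile outweighs the sparse one, which is exactly the imbalance phenomenon the paper's augmenter is meant to counteract.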
Related papers
- FeDecider: An LLM-Based Framework for Federated Cross-Domain Recommendation [75.50721642765994]
Large language model (LLM)-based recommendation models have demonstrated impressive performance. We propose FeDecider, an LLM-based framework for federated cross-domain recommendation. Extensive experiments across diverse datasets validate the effectiveness of the proposed FeDecider.
arXiv Detail & Related papers (2026-02-17T21:42:28Z) - Gaussian Mixture Flow Matching with Domain Alignment for Multi-Domain Sequential Recommendation [13.331414627413674]
We propose GMFlowRec, an efficient generative framework for MDSR that models domain-aware transition trajectories. Experiments on JD and Amazon datasets demonstrate that GMFlowRec achieves state-of-the-art performance with up to 44% improvement in NDCG@5.
arXiv Detail & Related papers (2025-10-23T22:11:26Z) - Beyond Negative Transfer: Disentangled Preference-Guided Diffusion for Cross-Domain Sequential Recommendation [13.331414627413674]
DPG-Diff is a novel Disentangled Preference-Guided Diffusion model. It decomposes user preferences into domain-invariant and domain-specific components, which jointly guide the reverse diffusion process. It consistently outperforms state-of-the-art baselines across multiple metrics.
arXiv Detail & Related papers (2025-08-30T06:56:56Z) - Bridge the Domains: Large Language Models Enhanced Cross-domain Sequential Recommendation [30.116213884571803]
Cross-domain Sequential Recommendation (CDSR) aims to extract user preferences from historical interactions across various domains. Existing CDSR methods rely on users who have interactions in all domains to learn cross-domain item relationships. With powerful representation and reasoning abilities, Large Language Models (LLMs) are promising for addressing these two problems.
arXiv Detail & Related papers (2025-04-25T14:30:25Z) - Let Synthetic Data Shine: Domain Reassembly and Soft-Fusion for Single Domain Generalization [68.41367635546183]
Single Domain Generalization aims to train models with consistent performance across diverse scenarios using data from a single source. We propose Discriminative Domain Reassembly and Soft-Fusion (DRSF), a training framework that leverages synthetic data to improve model generalization.
arXiv Detail & Related papers (2025-03-17T18:08:03Z) - Exploring User Retrieval Integration towards Large Language Models for Cross-Domain Sequential Recommendation [66.72195610471624]
Cross-Domain Sequential Recommendation aims to mine and transfer users' sequential preferences across different domains.
We propose a novel framework named URLLM, which aims to improve the CDSR performance by exploring the User Retrieval approach.
arXiv Detail & Related papers (2024-06-05T09:19:54Z) - Role Prompting Guided Domain Adaptation with General Capability Preserve for Large Language Models [55.51408151807268]
When tailored to specific domains, Large Language Models (LLMs) tend to experience catastrophic forgetting.
Crafting a versatile model for multiple domains simultaneously often results in a decline in overall performance.
We present the RolE Prompting Guided Multi-Domain Adaptation (REGA) strategy.
arXiv Detail & Related papers (2024-03-05T08:22:41Z) - DDGHM: Dual Dynamic Graph with Hybrid Metric Training for Cross-Domain Sequential Recommendation [15.366783212837515]
Sequential Recommendation (SR) characterizes evolving patterns of user behavior by modeling how users transition among items.
To solve this problem, we focus on Cross-Domain Sequential Recommendation (CDSR).
We propose DDGHM, a novel framework for the CDSR problem, which includes two main modules, dual dynamic graph modeling and hybrid metric training.
arXiv Detail & Related papers (2022-09-21T07:53:06Z) - A cross-domain recommender system using deep coupled autoencoders [77.86290991564829]
Two novel coupled autoencoder-based deep learning methods are proposed for cross-domain recommendation.
The first method aims to simultaneously learn a pair of autoencoders in order to reveal the intrinsic representations of the items in the source and target domains.
The second method is derived from a new joint regularized optimization problem, which employs two autoencoders to generate the user and item latent factors in a deep, non-linear manner.
arXiv Detail & Related papers (2021-12-08T15:14:26Z) - Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training for Road Segmentation of Remote Sensing Images [93.50240389540252]
Road segmentation from remote sensing images is a challenging task with a wide range of potential applications.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experiment results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-08-28T09:29:14Z)
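One of the entries above reports its gain in NDCG@5. For reference, in the next-item-prediction setting (a single relevant item) NDCG@k reduces to 1/log2(rank + 1) when the target is ranked within the top k, and 0 otherwise. This is the standard metric definition, not code from any listed paper:

```python
import math

def ndcg_at_k(ranked_items, target, k=5):
    """NDCG@k with one relevant item: 1 / log2(rank + 1) if the target
    appears in the top-k of the ranked list, else 0."""
    top_k = ranked_items[:k]
    if target not in top_k:
        return 0.0
    rank = top_k.index(target) + 1  # 1-based position in the ranking
    return 1.0 / math.log2(rank + 1)

print(ndcg_at_k(["a", "b", "c"], "b"))  # 1/log2(3) ≈ 0.6309
print(ndcg_at_k(["a", "b", "c"], "z"))  # 0.0 (target missed the top-5)
```

Because the single relevant item makes the ideal DCG equal 1, no separate normalization step is needed in this special case.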
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.