Bridge the Domains: Large Language Models Enhanced Cross-domain Sequential Recommendation
- URL: http://arxiv.org/abs/2504.18383v1
- Date: Fri, 25 Apr 2025 14:30:25 GMT
- Title: Bridge the Domains: Large Language Models Enhanced Cross-domain Sequential Recommendation
- Authors: Qidong Liu, Xiangyu Zhao, Yejing Wang, Zijian Zhang, Howard Zhong, Chong Chen, Xiang Li, Wei Huang, Feng Tian
- Abstract summary: Cross-domain Sequential Recommendation (CDSR) aims to extract user preferences from the user's historical interactions across various domains. Existing CDSR methods rely on users who have interactions in all domains to learn cross-domain item relationships. With powerful representation and reasoning abilities, Large Language Models (LLMs) are promising for addressing these two problems.
- Score: 30.116213884571803
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cross-domain Sequential Recommendation (CDSR) aims to extract user preferences from the user's historical interactions across various domains. Despite some progress in CDSR, two problems hinder further advancement: the overlap dilemma and transition complexity. The former means that existing CDSR methods rely heavily on users who have interactions in all domains to learn cross-domain item relationships, compromising practicality. The latter refers to the difficulty of learning complex transition patterns from mixed behavior sequences. With powerful representation and reasoning abilities, Large Language Models (LLMs) are promising for addressing these two problems by bridging items across domains and capturing the user's preferences from a semantic view. Therefore, we propose an LLM-enhanced Cross-domain Sequential Recommendation model (LLM4CDSR). To obtain semantic item relationships, we first propose an LLM-based unified representation module to represent items. Then, a trainable adapter with contrastive regularization is designed to adapt these representations to the CDSR task. Besides, a hierarchical LLM profiling module is designed to summarize users' cross-domain preferences. Finally, these two modules are integrated into the proposed tri-thread framework to derive recommendations. We have conducted extensive experiments on three public cross-domain datasets, validating the effectiveness of LLM4CDSR. The code has been released online.
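The adapter-with-contrastive-regularization idea in the abstract lends itself to a compact illustration. The sketch below is a minimal, hypothetical rendering of that component, not the released LLM4CDSR code: it assumes frozen LLM text embeddings for items from two domains and trains a small adapter with an InfoNCE-style contrastive loss so that co-interacted cross-domain items are pulled together. All names (`ItemAdapter`, `info_nce`, the embedding dimensions) are illustrative assumptions.

```python
# Minimal sketch (PyTorch), assuming frozen LLM text embeddings for items:
# a trainable adapter maps them into the recommendation space, and an
# InfoNCE-style contrastive loss regularizes the adapted embeddings so that
# paired cross-domain items stay close. Not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F


class ItemAdapter(nn.Module):
    """Trainable adapter over frozen LLM item embeddings."""

    def __init__(self, llm_dim: int = 1536, emb_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(llm_dim, 4 * emb_dim),
            nn.GELU(),
            nn.Linear(4 * emb_dim, emb_dim),
        )

    def forward(self, llm_emb: torch.Tensor) -> torch.Tensor:
        # L2-normalize so dot products act as cosine similarities.
        return F.normalize(self.net(llm_emb), dim=-1)


def info_nce(anchor: torch.Tensor, positive: torch.Tensor,
             temperature: float = 0.1) -> torch.Tensor:
    """Each anchor should match its own positive (e.g. a co-interacted item
    from the other domain) against all other items in the batch."""
    logits = anchor @ positive.t() / temperature          # (B, B) similarities
    labels = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, labels)


if __name__ == "__main__":
    # Stand-ins for frozen LLM embeddings of paired cross-domain items.
    llm_emb_a = torch.randn(32, 1536)   # items from domain A
    llm_emb_b = torch.randn(32, 1536)   # co-interacted items from domain B

    adapter = ItemAdapter()
    loss = info_nce(adapter(llm_emb_a), adapter(llm_emb_b))
    loss.backward()                      # only the adapter is trainable
    print(f"contrastive loss: {loss.item():.4f}")
```

In the full model, such adapted item embeddings would feed the sequential recommender, and a separate LLM prompting step would summarize each user's cross-domain history into a profile; both of those pieces are omitted from this sketch.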
Related papers
- Align-for-Fusion: Harmonizing Triple Preferences via Dual-oriented Diffusion for Cross-domain Sequential Recommendation [5.661192070842017]
Cross-domain sequential recommendation (CDSR) methods often follow an align-then-fusion paradigm. We propose HorizonRec, an align-for-fusion framework for CDSR that harmonizes triple preferences via dual-oriented diffusion models (DMs). Experiments on four CDSR datasets from two distinct platforms demonstrate the effectiveness and robustness of HorizonRec.
arXiv Detail & Related papers (2025-08-07T07:00:29Z)
- RCR-Router: Efficient Role-Aware Context Routing for Multi-Agent LLM Systems with Structured Memory [57.449129198822476]
RCR-Router is a role-aware context routing framework for multi-agent large language model (LLM) systems. It dynamically selects semantically relevant memory subsets for each agent based on its role and task stage. A lightweight scoring policy guides memory selection, and agent outputs are integrated into a shared memory store.
arXiv Detail & Related papers (2025-08-06T21:59:34Z)
- Leveraging Multimodal Data and Side Users for Diffusion Cross-Domain Recommendation [23.27301183474805]
Cross-domain recommendation (CDR) aims to address the persistent cold-start problem in Recommender Systems. We propose a model leveraging Multimodal data and Side users for diffusion Cross-domain recommendation (MuSiC). MuSiC achieves state-of-the-art performance, significantly outperforming all selected baselines.
arXiv Detail & Related papers (2025-07-05T10:57:29Z)
- LLM-Enhanced Multimodal Fusion for Cross-Domain Sequential Recommendation [19.654959889052638]
Cross-Domain Sequential Recommendation (CDSR) predicts user behavior by leveraging historical interactions across multiple domains. We propose LLM-Enhanced Multimodal Fusion for Cross-Domain Sequential Recommendation (LLM-EMF), a novel approach that enhances textual information with Large Language Model (LLM) knowledge.
arXiv Detail & Related papers (2025-06-22T09:53:21Z)
- LLM2Rec: Large Language Models Are Powerful Embedding Models for Sequential Recommendation [49.78419076215196]
Sequential recommendation aims to predict users' future interactions by modeling collaborative filtering (CF) signals from historical behaviors of similar users or items. Traditional sequential recommenders rely on ID-based embeddings, which capture CF signals through high-order co-occurrence patterns. Recent advances in large language models (LLMs) have motivated text-based recommendation approaches that derive item representations from textual descriptions. We argue that an ideal embedding model should seamlessly integrate CF signals with rich semantic representations to improve both in-domain and out-of-domain recommendation performance.
arXiv Detail & Related papers (2025-06-16T13:27:06Z)
- Joint Similarity Item Exploration and Overlapped User Guidance for Multi-Modal Cross-Domain Recommendation [27.00142195880019]
We propose Joint Similarity Item Exploration and Overlapped User Guidance (SIEOUG) for solving the Multi-Modal Cross-Domain Recommendation (MMCDR) problem. Our empirical study on the Amazon dataset with several different tasks demonstrates that SIEOUG significantly outperforms state-of-the-art models under the MMCDR setting.
arXiv Detail & Related papers (2025-02-22T03:57:43Z)
- LLM-based Bi-level Multi-interest Learning Framework for Sequential Recommendation [54.396000434574454]
We propose a novel multi-interest SR framework combining implicit behavioral and explicit semantic perspectives. It includes two modules: the Implicit Behavioral Interest Module and the Explicit Semantic Interest Module. Experiments on four real-world datasets validate the framework's effectiveness and practicality.
arXiv Detail & Related papers (2024-11-14T13:00:23Z)
- Exploring User Retrieval Integration towards Large Language Models for Cross-Domain Sequential Recommendation [66.72195610471624]
Cross-Domain Sequential Recommendation aims to mine and transfer users' sequential preferences across different domains.
We propose a novel framework named URLLM, which aims to improve the CDSR performance by exploring the User Retrieval approach.
arXiv Detail & Related papers (2024-06-05T09:19:54Z)
- Information Maximization via Variational Autoencoders for Cross-Domain Recommendation [26.099908029810756]
We introduce a new CDSR framework named Information Maximization Variational Autoencoder (IM-VAE).
Here, we suggest using a Pseudo-Sequence Generator to enhance the user's interaction history input for downstream fine-grained CDSR models.
To the best of our knowledge, this paper is the first CDSR work that considers the information disentanglement and denoising of pseudo-sequences in the open-world recommendation scenario.
arXiv Detail & Related papers (2024-05-31T09:07:03Z)
- Learning Partially Aligned Item Representation for Cross-Domain Sequential Recommendation [72.73379646418435]
Cross-domain sequential recommendation aims to uncover and transfer users' sequential preferences across domains.
Misaligned item representations can lead to sub-optimal sequential modeling and user representation alignment.
We propose a model-agnostic framework, Cross-domain item representation Alignment for Cross-Domain Sequential Recommendation (CA-CDSR).
arXiv Detail & Related papers (2024-05-21T03:25:32Z)
- Mixed Attention Network for Cross-domain Sequential Recommendation [63.983590953727386]
We propose a Mixed Attention Network (MAN) with local and global attention modules to extract the domain-specific and cross-domain information.
Experimental results on two real-world datasets demonstrate the superiority of our proposed model.
arXiv Detail & Related papers (2023-11-14T16:07:16Z)
- One Model for All: Large Language Models are Domain-Agnostic Recommendation Systems [43.79001185418127]
This paper introduces a framework that utilizes pre-trained large language models (LLMs) for domain-agnostic recommendation. Specifically, we mix a user's behaviors from multiple domains and item titles into a sentence, then use LLMs to generate user and item representations.
arXiv Detail & Related papers (2023-10-22T13:56:14Z)
- MISSRec: Pre-training and Transferring Multi-modal Interest-aware Sequence Representation for Recommendation [61.45986275328629]
We propose MISSRec, a multi-modal pre-training and transfer learning framework for sequential recommendation.
On the user side, we design a Transformer-based encoder-decoder model, where the contextual encoder learns to capture the sequence-level multi-modal user interests.
On the candidate item side, we adopt a dynamic fusion module to produce user-adaptive item representation.
arXiv Detail & Related papers (2023-08-22T04:06:56Z)
- DDGHM: Dual Dynamic Graph with Hybrid Metric Training for Cross-Domain Sequential Recommendation [15.366783212837515]
Sequential Recommendation (SR) characterizes evolving patterns of user behaviors by modeling how users transition among items.
To solve this problem, we focus on Cross-Domain Sequential Recommendation (CDSR).
We propose DDGHM, a novel framework for the CDSR problem, which includes two main modules, dual dynamic graph modeling and hybrid metric training.
arXiv Detail & Related papers (2022-09-21T07:53:06Z)
- A cross-domain recommender system using deep coupled autoencoders [77.86290991564829]
Two novel coupled autoencoder-based deep learning methods are proposed for cross-domain recommendation.
The first method aims to simultaneously learn a pair of autoencoders in order to reveal the intrinsic representations of the items in the source and target domains.
The second method is derived from a new joint regularized optimization problem, which employs two autoencoders to generate the user and item latent factors in a deep, non-linear manner.
arXiv Detail & Related papers (2021-12-08T15:14:26Z)
- Dual Attentive Sequential Learning for Cross-Domain Click-Through Rate Prediction [76.98616102965023]
Cross-domain recommender systems constitute a powerful method to tackle the cold-start and sparsity problems.
We propose a novel approach to cross-domain sequential recommendations based on the dual learning mechanism.
arXiv Detail & Related papers (2021-06-05T01:21:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.