WeaveRec: An LLM-Based Cross-Domain Sequential Recommendation Framework with Model Merging
- URL: http://arxiv.org/abs/2510.26546v1
- Date: Thu, 30 Oct 2025 14:37:15 GMT
- Title: WeaveRec: An LLM-Based Cross-Domain Sequential Recommendation Framework with Model Merging
- Authors: Min Hou, Xin Liu, Le Wu, Chenyi He, Hao Liu, Zhi Li, Xin Li, Si Wei,
- Abstract summary: We introduce WeaveRec, which cross-trains multiple LoRA modules with source and target domain data in a weaving fashion. We provide a theoretical guarantee that WeaveRec can reduce the upper bound of the expected error in the target domain.
- Score: 24.949880939628386
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cross-Domain Sequential Recommendation (CDSR) seeks to improve user preference modeling by transferring knowledge from multiple domains. Despite the progress made in CDSR, most existing methods rely on overlapping users or items to establish cross-domain correlations, a requirement that rarely holds in real-world settings. The advent of large language models (LLMs) and model-merging techniques appears to overcome this limitation by unifying multi-domain data without explicit overlaps. Yet, our empirical study shows that naively training an LLM on combined domains, or simply merging several domain-specific LLMs, often degrades performance relative to a model trained solely on the target domain. To address these challenges, we first experimentally investigate the causes of suboptimal performance in LLM-based cross-domain recommendation and model merging. Building on these insights, we introduce WeaveRec, which cross-trains multiple LoRA modules with source and target domain data in a weaving fashion and fuses them via model merging. WeaveRec extends to multi-source domain scenarios and, notably, introduces no additional inference-time cost in latency or memory. Furthermore, we provide a theoretical guarantee that WeaveRec reduces the upper bound of the expected error in the target domain. Extensive experiments on single-source, multi-source, and cross-platform cross-domain recommendation scenarios validate that WeaveRec effectively mitigates performance degradation and consistently outperforms baseline approaches in real-world recommendation tasks.
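The fusion step the abstract describes can be sketched in weight space. The snippet below is a minimal illustration assuming the fused model is a weighted average of the per-adapter LoRA weight deltas, which is one common merging scheme; the paper's exact merging operator and the `q_proj` parameter name are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def merge_lora_deltas(deltas, weights=None):
    """Fuse several LoRA weight deltas by weighted averaging.

    Each delta is a dict mapping parameter names to arrays of the
    same shape. A uniform average is used when no weights are given.
    """
    if weights is None:
        weights = [1.0 / len(deltas)] * len(deltas)
    merged = {}
    for name in deltas[0]:
        merged[name] = sum(w * d[name] for w, d in zip(weights, deltas))
    return merged

# Two toy "adapters", standing in for modules cross-trained on
# different source/target domain mixtures.
delta_a = {"q_proj": np.array([[1.0, 0.0], [0.0, 1.0]])}
delta_b = {"q_proj": np.array([[3.0, 2.0], [2.0, 3.0]])}

# Uniform average of the two deltas: [[2, 1], [1, 2]].
merged = merge_lora_deltas([delta_a, delta_b])
print(merged["q_proj"])
```

Because the merged delta has the same shape as each input adapter, it can be folded back into the base model's weights, which is consistent with the abstract's claim of no additional inference-time latency or memory.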
Related papers
- FeDecider: An LLM-Based Framework for Federated Cross-Domain Recommendation [75.50721642765994]
Large language model (LLM)-based recommendation models have demonstrated impressive performance. We propose FeDecider, an LLM-based framework for federated cross-domain recommendation. Extensive experiments across diverse datasets validate the effectiveness of the proposed FeDecider.
arXiv Detail & Related papers (2026-02-17T21:42:28Z)
- MergeRec: Model Merging for Data-Isolated Cross-Domain Sequential Recommendation [14.573099220558765]
Cross-domain sequential recommendation has emerged as a promising research direction to address this challenge. We propose a new framework, MergeRec, based on model merging under a new and realistic problem setting. MergeRec consistently achieves superior performance, with average improvements of up to 17.21% in Recall@10.
arXiv Detail & Related papers (2026-01-05T03:14:23Z) - LLM-EDT: Large Language Model Enhanced Cross-domain Sequential Recommendation with Dual-phase Training [53.539682966282534]
Cross-domain Sequential Recommendation (CDSR) has been proposed to enrich user-item interactions by incorporating information from various domains.<n>Despite current progress, the imbalance issue and transition issue hinder further development of CDSR.<n>We propose an LLMs Enhanced Cross-domain Sequential Recommendation with Dual-phase Training (LLM-EDT)
arXiv Detail & Related papers (2025-11-25T05:18:04Z) - DmC: Nearest Neighbor Guidance Diffusion Model for Offline Cross-domain Reinforcement Learning [11.290019540058625]
Cross-domain offline reinforcement learning (RL) seeks to enhance sample efficiency by utilizing additional offline source datasets.<n>DmC is a novel framework for cross-domain offline RL with limited target samples.
arXiv Detail & Related papers (2025-07-28T03:34:15Z) - Generative Multi-Target Cross-Domain Recommendation [48.54929268144516]
This paper introduces GMC, a generative paradigm-based approach for multi-target cross-domain recommendation.<n>The core idea of GMC is to leverage semantically quantized discrete item identifiers as a medium for integrating multi-domain knowledge.<n>Extensive experiments on five public datasets demonstrate the effectiveness of GMC.
arXiv Detail & Related papers (2025-07-17T07:44:05Z) - LLM-Enhanced Multimodal Fusion for Cross-Domain Sequential Recommendation [32.40055370439922]
Cross-Domain Sequential Recommendation (CDSR) predicts user behavior by leveraging historical interactions across multiple domains.<n>We propose LLM-Enhanced Multimodal Fusion for Cross-Domain Sequential Recommendation (LLM-EMF)<n>LLM-EMF is a novel and advanced approach that enhances textual information with Large Language Models (LLM) knowledge.
arXiv Detail & Related papers (2025-06-22T09:53:21Z) - Let Synthetic Data Shine: Domain Reassembly and Soft-Fusion for Single Domain Generalization [68.41367635546183]
Single Domain Generalization aims to train models with consistent performance across diverse scenarios using data from a single source.<n>We propose Discriminative Domain Reassembly and Soft-Fusion (DRSF), a training framework leveraging synthetic data to improve model generalization.
arXiv Detail & Related papers (2025-03-17T18:08:03Z) - Cross-Domain Recommendation Meets Large Language Models [3.1519384727993582]
Cross-domain recommendation (CDR) has emerged as a promising solution to the cold-start problem.<n>Existing CDR models rely on complex neural architectures, large datasets, and significant computational resources.<n>In this work, we leverage the reasoning capabilities of large language models (LLMs) and explore their performance in the CDR domain.
arXiv Detail & Related papers (2024-11-29T17:25:00Z) - Exploring Language Model Generalization in Low-Resource Extractive QA [57.14068405860034]
We investigate Extractive Question Answering (EQA) with Large Language Models (LLMs) under domain drift.<n>We devise a series of experiments to explain the performance gap empirically.
arXiv Detail & Related papers (2024-09-27T05:06:43Z) - Exploring User Retrieval Integration towards Large Language Models for Cross-Domain Sequential Recommendation [66.72195610471624]
Cross-Domain Sequential Recommendation aims to mine and transfer users' sequential preferences across different domains.
We propose a novel framework named URLLM, which aims to improve the CDSR performance by exploring the User Retrieval approach.
arXiv Detail & Related papers (2024-06-05T09:19:54Z) - SepRep-Net: Multi-source Free Domain Adaptation via Model Separation And Reparameterization [75.74369886582394]
We propose a novel framework called SepRep-Net to tackle multi-source free domain adaptation.
SepRep-Net reassembles multiple existing models into a unified network while maintaining separate pathways (Separation).
SepRep-Net is characterized by 1) effectiveness: competitive performance on the target domain, 2) efficiency: low computational costs, and 3) generalizability: maintaining more source knowledge than existing solutions.
arXiv Detail & Related papers (2024-02-13T06:35:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.