Gaussian Mixture Flow Matching with Domain Alignment for Multi-Domain Sequential Recommendation
- URL: http://arxiv.org/abs/2510.21021v1
- Date: Thu, 23 Oct 2025 22:11:26 GMT
- Title: Gaussian Mixture Flow Matching with Domain Alignment for Multi-Domain Sequential Recommendation
- Authors: Xiaoxin Ye, Chengkai Huang, Hongtao Huang, Lina Yao
- Abstract summary: We propose GMFlowRec, an efficient generative framework for MDSR that models domain-aware transition trajectories. Experiments on JD and Amazon datasets demonstrate that GMFlowRec achieves state-of-the-art performance with up to 44% improvement in NDCG@5.
- Score: 13.331414627413674
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Users increasingly interact with content across multiple domains, resulting in sequential behaviors marked by frequent and complex transitions. While Cross-Domain Sequential Recommendation (CDSR) models two-domain interactions, Multi-Domain Sequential Recommendation (MDSR) introduces significantly more domain transitions, compounded by challenges such as domain heterogeneity and imbalance. Existing approaches often overlook the intricacies of domain transitions, tend to overfit to dense domains while underfitting sparse ones, and struggle to scale effectively as the number of domains increases. We propose GMFlowRec, an efficient generative framework for MDSR that models domain-aware transition trajectories via Gaussian Mixture Flow Matching. GMFlowRec integrates: (1) a unified dual-masked Transformer to disentangle domain-invariant and domain-specific intents, (2) a Gaussian Mixture flow field to capture diverse behavioral patterns, and (3) a domain-aligned prior to support frequent and sparse transitions. Extensive experiments on JD and Amazon datasets demonstrate that GMFlowRec achieves state-of-the-art performance with up to 44% improvement in NDCG@5, while maintaining high efficiency via a single unified backbone, making it scalable for real-world multi-domain sequential recommendation.
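The headline metric in the abstract, NDCG@5, has a standard definition that is easy to make concrete. The sketch below computes NDCG@k with binary relevance; it is the textbook formula, not code from the paper, and the function name and arguments are illustrative:

```python
import math

def ndcg_at_k(ranked_items, relevant, k=5):
    """NDCG@k with binary relevance: the DCG of the top-k ranking
    divided by the DCG of an ideal ranking of the relevant items."""
    dcg = sum(
        1.0 / math.log2(rank + 2)  # rank is 0-based, so position 1 -> log2(2)
        for rank, item in enumerate(ranked_items[:k])
        if item in relevant
    )
    ideal_hits = min(len(relevant), k)
    idcg = sum(1.0 / math.log2(r + 2) for r in range(ideal_hits))
    return dcg / idcg if idcg > 0 else 0.0
```

For example, a ranking `["a", "b", "c", "d", "e"]` with relevant set `{"a", "c"}` scores about 0.92, since the second hit sits at rank 3 instead of rank 2.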
Related papers
- LLM-EDT: Large Language Model Enhanced Cross-domain Sequential Recommendation with Dual-phase Training [53.539682966282534]
Cross-domain Sequential Recommendation (CDSR) has been proposed to enrich user-item interactions by incorporating information from various domains. Despite current progress, the imbalance and transition issues hinder further development of CDSR. We propose an LLM-Enhanced Cross-domain Sequential Recommendation with Dual-phase Training (LLM-EDT) framework.
arXiv Detail & Related papers (2025-11-25T05:18:04Z) - A Soft-partitioned Semi-supervised Collaborative Transfer Learning Approach for Multi-Domain Recommendation [33.21794937808597]
We propose Soft-partitioned Semi-supervised Collaborative Transfer Learning (SSCTL) for multi-domain recommendation. SSCTL generates dynamic parameters to address the issue of dominant domains overwhelming training, shifting focus towards samples from non-dominant domains. Online tests yielded significant improvements across various domains, with increases in GMV ranging from 0.54% to 2.90% and enhancements in CTR ranging from 0.22% to 1.69%.
arXiv Detail & Related papers (2025-11-03T09:58:32Z) - Efficient Large-Scale Cross-Domain Sequential Recommendation with Dynamic State Representations [36.44078481424322]
We introduce a novel approach for scalable multi-domain recommendation systems by replacing full inter-domain attention with two innovative mechanisms. First, we propose novel positional embeddings that account for domain-transition specific information. Second, we introduce a dynamic state representation for each domain, which is stored and accessed during subsequent token predictions.
arXiv Detail & Related papers (2025-08-28T16:05:42Z) - LLM-RecG: A Semantic Bias-Aware Framework for Zero-Shot Sequential Recommendation [5.512301280728178]
Zero-shot cross-domain sequential recommendation (ZCDSR) enables predictions in unseen domains without additional training or fine-tuning. Recent advancements in large language models (LLMs) have significantly enhanced ZCDSR by facilitating cross-domain knowledge transfer. We propose a novel semantic bias-aware framework that improves cross-domain alignment at both the item and sequential levels.
arXiv Detail & Related papers (2025-01-31T15:43:21Z) - ABXI: Invariant Interest Adaptation for Task-Guided Cross-Domain Sequential Recommendation [6.234890828342688]
Cross-Domain Sequential Recommendation (CDSR) has recently gained attention for countering data sparsity by transferring knowledge across domains. One key challenge is to correctly extract the shared knowledge among these sequences and appropriately transfer it. We propose the A-B-Cross-to-Invariant Learning Recommender (ABXI) to address these challenges.
arXiv Detail & Related papers (2025-01-25T08:09:37Z) - Investigating the potential of Sparse Mixtures-of-Experts for multi-domain neural machine translation [59.41178047749177]
We focus on multi-domain Neural Machine Translation, with the goal of developing efficient models which can handle data from various domains seen during training and are robust to domains unseen during training.
We hypothesize that Sparse Mixture-of-Experts (SMoE) models are a good fit for this task, as they enable efficient model scaling.
We conduct a series of experiments to validate the utility of SMoE for the multi-domain scenario, and find that a straightforward width scaling of the Transformer is simpler, surprisingly more efficient in practice, and reaches the same performance level as SMoE.
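The SMoE routing that this paper compares against plain width scaling can be sketched as a token-wise top-k gate: each token is sent to its k highest-scoring experts, and the expert outputs are blended with renormalized gate weights. This is a generic toy illustration in NumPy, not the architecture from the paper; all names and shapes are assumptions:

```python
import numpy as np

def smoe_layer(x, w_gate, experts, top_k=2):
    """Token-wise top-k Sparse Mixture-of-Experts routing.

    x:       (tokens, dim) input activations
    w_gate:  (dim, n_experts) gating projection
    experts: list of callables mapping a (dim,) vector to a (dim,) vector
    """
    logits = x @ w_gate                        # (tokens, n_experts)
    top = np.argsort(-logits, axis=-1)[:, :top_k]
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        chosen = logits[t, top[t]]
        gates = np.exp(chosen - chosen.max())  # softmax over the chosen experts
        gates /= gates.sum()
        for g, e in zip(gates, top[t]):
            out[t] += g * experts[e](x[t])     # only top_k experts run per token
    return out
```

The point of the comparison in the abstract is that only `top_k` of the `n_experts` expert networks execute per token, so parameter count grows without a matching growth in per-token compute; width scaling instead grows `dim` and runs everything densely.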
arXiv Detail & Related papers (2024-07-01T09:45:22Z) - HAMUR: Hyper Adapter for Multi-Domain Recommendation [49.87140704564021]
We propose a novel model, Hyper Adapter for Multi-Domain Recommendation (HAMUR), which consists of two components. HAMUR implicitly captures shared information among domains and dynamically generates the parameters for the adapter.
arXiv Detail & Related papers (2023-09-12T13:34:33Z) - Exploiting Graph Structured Cross-Domain Representation for Multi-Domain Recommendation [71.45854187886088]
Multi-domain recommender systems benefit from cross-domain representation learning and positive knowledge transfer.
We use temporal intra- and inter-domain interactions as contextual information for our method called MAGRec.
We perform experiments on publicly available datasets in different scenarios where MAGRec consistently outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-02-12T19:51:32Z) - DDGHM: Dual Dynamic Graph with Hybrid Metric Training for Cross-Domain Sequential Recommendation [15.366783212837515]
Sequential Recommendation (SR) characterizes evolving patterns of user behaviors by modeling how users transit among items.
To solve this problem, we focus on Cross-Domain Sequential Recommendation (CDSR)
We propose DDGHM, a novel framework for the CDSR problem, which includes two main modules, dual dynamic graph modeling and hybrid metric training.
arXiv Detail & Related papers (2022-09-21T07:53:06Z) - Cross-Domain Grouping and Alignment for Domain Adaptive Semantic
Segmentation [74.3349233035632]
Existing techniques to adapt semantic segmentation networks across the source and target domains within deep convolutional neural networks (CNNs) do not consider an inter-class variation within the target domain itself or estimated category.
We introduce a learnable clustering module, and a novel domain adaptation framework called cross-domain grouping and alignment.
Our method consistently boosts the adaptation performance in semantic segmentation, outperforming the state-of-the-arts on various domain adaptation settings.
arXiv Detail & Related papers (2020-12-15T11:36:21Z) - Domain Conditioned Adaptation Network [90.63261870610211]
We propose a Domain Conditioned Adaptation Network (DCAN) to excite distinct convolutional channels with a domain conditioned channel attention mechanism.
This is the first work to explore the domain-wise convolutional channel activation for deep DA networks.
arXiv Detail & Related papers (2020-05-14T04:23:24Z) - MADAN: Multi-source Adversarial Domain Aggregation Network for Domain Adaptation [58.38749495295393]
Domain adaptation aims to learn a transferable model to bridge the domain shift between one labeled source domain and another sparsely labeled or unlabeled target domain.
Recent multi-source domain adaptation (MDA) methods do not consider the pixel-level alignment between sources and target.
We propose a novel MDA framework to address these challenges.
arXiv Detail & Related papers (2020-02-19T21:22:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.