Enhancing guidance for missing data in diffusion-based sequential recommendation
- URL: http://arxiv.org/abs/2601.15673v1
- Date: Thu, 22 Jan 2026 05:55:21 GMT
- Title: Enhancing guidance for missing data in diffusion-based sequential recommendation
- Authors: Qilong Yan, Yifei Xing, Dugang Liu, Jingpu Duan, Jian Yin,
- Abstract summary: We propose a novel Counterfactual Attention Regulation Diffusion model (CARD). CARD focuses on amplifying the signal from key interest-turning-point items while concurrently identifying and suppressing noise within the user sequence. Our method works well on real-world data without being computationally expensive.
- Score: 10.673207423895747
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Contemporary sequential recommendation methods are becoming more complex, shifting from classification to a diffusion-guided generative paradigm. However, the quality of guidance in the form of user information is often compromised by missing data in the observed sequences, leading to suboptimal generation quality. Existing methods address this by removing locally similar items, but overlook ``critical turning points'' in user interest, which are crucial for accurately predicting subsequent user intent. To address this, we propose a novel Counterfactual Attention Regulation Diffusion model (CARD), which focuses on amplifying the signal from key interest-turning-point items while concurrently identifying and suppressing noise within the user sequence. CARD consists of (1) a Dual-side Thompson Sampling method to identify sequences undergoing significant interest shift, and (2) a counterfactual attention mechanism for these sequences to quantify the importance of each item. In this manner, CARD provides the diffusion model with a high-quality guidance signal composed of dynamically re-weighted interaction vectors to enable effective generation. Experiments show our method works well on real-world data without being computationally expensive. Our code is available at https://github.com/yanqilong3321/CARD.
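The two-stage pipeline the abstract describes can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the Beta-posterior shift test, the `threshold` parameter, and the softmax re-weighting are all assumptions standing in for the paper's Dual-side Thompson Sampling and counterfactual attention components.

```python
import numpy as np

rng = np.random.default_rng(0)

def interest_shift_detected(left_clicks, right_clicks, n_samples=2000, threshold=0.95):
    """Dual-side Thompson-sampling-style test (illustrative): sample Beta
    posteriors over engagement rates on the two halves of a user sequence
    and flag an interest shift when one side dominates the other."""
    a_l, b_l = 1 + left_clicks.sum(), 1 + (1 - left_clicks).sum()
    a_r, b_r = 1 + right_clicks.sum(), 1 + (1 - right_clicks).sum()
    p_l = rng.beta(a_l, b_l, n_samples)
    p_r = rng.beta(a_r, b_r, n_samples)
    return np.mean(p_r > p_l) > threshold or np.mean(p_l > p_r) > threshold

def reweight_guidance(item_vecs, importance):
    """Re-weight interaction vectors by per-item importance scores (softmax)
    to form a single guidance vector for the diffusion model."""
    w = np.exp(importance - importance.max())
    w = w / w.sum()
    return (w[:, None] * item_vecs).sum(axis=0)

# Toy sequence: engagement collapses in the second half, so a shift is flagged
# and the re-weighted guidance emphasizes high-importance items.
flagged = interest_shift_detected(np.ones(10), np.zeros(10))
guidance = reweight_guidance(np.eye(3), np.array([0.0, 10.0, 0.0]))
```

Only sequences flagged by the first stage would be passed through the (here simplified) importance re-weighting, which matches the abstract's claim of keeping the method computationally cheap.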
Related papers
- FAIR: Focused Attention Is All You Need for Generative Recommendation [43.65370600297507]
We propose the first generative recommendation framework with focused attention, which enhances attention scores to relevant context while suppressing those to irrelevant ones. Specifically, we propose (1) a focused attention mechanism integrated into the standard Transformer, which learns two separate sets of Q and K attention weights and computes their difference as the final attention scores. We validate the effectiveness of FAIR on four public benchmarks, demonstrating its superior performance compared to existing methods.
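The focused-attention idea (two Q/K projections whose attention maps are subtracted) can be sketched as below. This is a minimal NumPy illustration under assumptions: the balancing scalar `lam` and the single-head, unmasked setting are not from the abstract.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def focused_attention(x, Wq1, Wk1, Wq2, Wk2, Wv, lam=0.5):
    """Illustrative focused attention: two separate sets of Q/K weights
    produce two attention maps, and their (scaled) difference is used as
    the final scores, so weight on irrelevant context can cancel out."""
    d = Wq1.shape[1]
    a1 = softmax((x @ Wq1) @ (x @ Wk1).T / np.sqrt(d))
    a2 = softmax((x @ Wq2) @ (x @ Wk2).T / np.sqrt(d))
    scores = a1 - lam * a2  # differential attention map
    return scores @ (x @ Wv)

# Toy usage on a length-5 sequence of 8-dim token embeddings.
gen = np.random.default_rng(1)
x = gen.standard_normal((5, 8))
Wq1, Wk1, Wq2, Wk2 = (gen.standard_normal((8, 4)) for _ in range(4))
Wv = gen.standard_normal((8, 6))
out = focused_attention(x, Wq1, Wk1, Wq2, Wk2, Wv)
```

Note that the differenced scores are no longer a proper probability distribution over positions; whether FAIR renormalizes them is not stated in the abstract.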
arXiv Detail & Related papers (2025-12-12T03:25:12Z)
- Continuous-time Discrete-space Diffusion Model for Recommendation [25.432419904462694]
CDRec is a novel Continuous-time Discrete-space Diffusion Recommendation framework. Experiments on real-world datasets demonstrate CDRec's superior performance in both recommendation accuracy and computational efficiency.
arXiv Detail & Related papers (2025-11-15T09:06:57Z) - Preference Trajectory Modeling via Flow Matching for Sequential Recommendation [50.077447974294586]
Sequential recommendation predicts each user's next item based on their historical interaction sequence. FlowRec is a simple yet effective sequential recommendation framework. We construct a personalized behavior-based prior distribution to replace Gaussian noise and learn a vector field to model user preference trajectories.
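The flow-matching recipe behind this kind of model can be sketched generically. This is a standard conditional flow-matching construction, not FlowRec's exact objective: the straight-line path, the constant velocity target, and the mean-plus-noise prior are all assumptions for illustration.

```python
import numpy as np

def flow_matching_pair(x0, x1, t):
    """Conditional flow-matching training pair: a point on the straight-line
    path from the prior sample x0 to the target item embedding x1, together
    with the constant target velocity x1 - x0 that the vector field regresses."""
    xt = (1 - t) * x0 + t * x1
    v_target = x1 - x0
    return xt, v_target

def personalized_prior(history_vecs, noise_scale=0.1, rng=None):
    """Behavior-based prior (assumed form): the mean of the user's past item
    embeddings plus small noise, replacing a standard Gaussian prior."""
    rng = rng if rng is not None else np.random.default_rng(0)
    mu = history_vecs.mean(axis=0)
    return mu + noise_scale * rng.standard_normal(mu.shape)

# Toy usage: sample a prior point from a 3-item history, then build one
# training pair at an intermediate time t.
x0 = personalized_prior(np.ones((3, 4)))
x1 = np.zeros(4)
xt, v = flow_matching_pair(x0, x1, t=0.5)
```

Starting the probability path from a behavior-based prior rather than pure noise is what lets the learned vector field traverse a shorter, user-specific trajectory.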
arXiv Detail & Related papers (2025-08-25T02:55:42Z) - Distinguished Quantized Guidance for Diffusion-based Sequence Recommendation [7.6572888950554905]
We propose Distinguished Quantized Guidance for Diffusion-based Sequence Recommendation (DiQDiff). DiQDiff aims to extract robust guidance that captures user interests and to generate distinguished items personalized to those interests within diffusion models (DMs). The superior recommendation performance of DiQDiff against leading approaches demonstrates its effectiveness in sequential recommendation tasks.
arXiv Detail & Related papers (2025-01-29T14:20:42Z) - Breaking Determinism: Fuzzy Modeling of Sequential Recommendation Using Discrete State Space Diffusion Model [66.91323540178739]
Sequential recommendation (SR) aims to predict items that users may be interested in based on their historical behavior.
We revisit SR from a novel information-theoretic perspective and find that sequential modeling methods fail to adequately capture the randomness and unpredictability of user behavior.
Inspired by fuzzy information processing theory, this paper introduces the fuzzy sets of interaction sequences to overcome the limitations and better capture the evolution of users' real interests.
arXiv Detail & Related papers (2024-10-31T14:52:01Z) - Dual Conditional Diffusion Models for Sequential Recommendation [63.82152785755723]
We propose Dual Conditional Diffusion Models for Sequential Recommendation (DCRec). DCRec integrates implicit and explicit information by embedding dual conditions into both the forward and reverse diffusion processes. This allows the model to retain valuable sequential and contextual information while leveraging explicit user-item interactions to guide the recommendation process.
arXiv Detail & Related papers (2024-10-29T11:51:06Z) - Long-Sequence Recommendation Models Need Decoupled Embeddings [49.410906935283585]
We identify and characterize a neglected deficiency in existing long-sequence recommendation models. A single set of embeddings struggles with learning both attention and representation, leading to interference between these two processes. We propose the Decoupled Attention and Representation Embeddings (DARE) model, where two distinct embedding tables are learned separately to fully decouple attention and representation.
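The decoupling can be sketched with two embedding tables feeding a target-attention pooling step. This is a minimal illustration of the stated idea, not DARE's architecture: the table sizes, the single-head dot-product scoring, and the pooling form are assumptions.

```python
import numpy as np

class DecoupledEmbeddings:
    """Sketch of decoupled attention/representation embeddings: one table
    supplies queries/keys for attention scoring, a separate table supplies
    the values pooled into the user representation."""

    def __init__(self, n_items, d_attn, d_repr, seed=0):
        rng = np.random.default_rng(seed)
        self.attn_table = rng.standard_normal((n_items, d_attn))
        self.repr_table = rng.standard_normal((n_items, d_repr))

    def aggregate(self, seq_ids, target_id):
        """Target attention over a behavior sequence: scores come from the
        attention table only; the pooled output comes from the representation
        table only, so the two objectives never share parameters."""
        q = self.attn_table[target_id]
        k = self.attn_table[seq_ids]
        s = k @ q / np.sqrt(k.shape[1])
        w = np.exp(s - s.max())
        w /= w.sum()
        return w @ self.repr_table[seq_ids]

# Toy usage: pool a 3-item history against a candidate target item.
emb = DecoupledEmbeddings(n_items=20, d_attn=4, d_repr=6)
user_vec = emb.aggregate([1, 2, 3], target_id=5)
```

With a single shared table, gradients from the scoring path and the representation path would collide in the same parameters; the second table removes that interference at the cost of extra memory.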
arXiv Detail & Related papers (2024-10-03T15:45:15Z) - Localized Gaussians as Self-Attention Weights for Point Clouds Correspondence [92.07601770031236]
We investigate semantically meaningful patterns in the attention heads of an encoder-only Transformer architecture. We find that fixing the attention weights not only accelerates the training process but also enhances the stability of the optimization.
arXiv Detail & Related papers (2024-09-20T07:41:47Z)
- Dual-Refinement: Joint Label and Feature Refinement for Unsupervised Domain Adaptive Person Re-Identification [51.98150752331922]
Unsupervised domain adaptive (UDA) person re-identification (re-ID) is a challenging task due to the lack of labels for the target domain data.
We propose a novel approach, called Dual-Refinement, that jointly refines pseudo labels at the off-line clustering phase and features at the on-line training phase.
Our method outperforms the state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2020-12-26T07:35:35Z)