Masked Diffusion Generative Recommendation
- URL: http://arxiv.org/abs/2601.19501v2
- Date: Thu, 29 Jan 2026 04:42:12 GMT
- Title: Masked Diffusion Generative Recommendation
- Authors: Lingyu Mu, Hao Deng, Haibo Xing, Jinxin Hu, Yu Zhang, Xiaoyi Zeng, Jing Zhang
- Abstract summary: Generative recommendation (GR) typically first quantizes continuous item embeddings into multi-level semantic IDs (SIDs). We propose MDGR, a Masked Diffusion Generative Recommendation framework that reshapes the GR pipeline from three perspectives: codebook, training, and inference.
- Score: 14.679550929790151
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative recommendation (GR) typically first quantizes continuous item embeddings into multi-level semantic IDs (SIDs), and then generates the next item via autoregressive decoding. Although existing methods are already competitive in terms of recommendation performance, directly inheriting the autoregressive decoding paradigm from language models still suffers from three key limitations: (1) autoregressive decoding struggles to jointly capture global dependencies among the multi-dimensional features associated with different positions of SID; (2) using a unified, fixed decoding path for the same item implicitly assumes that all users attend to item attributes in the same order; (3) autoregressive decoding is inefficient at inference time and struggles to meet real-time requirements. To tackle these challenges, we propose MDGR, a Masked Diffusion Generative Recommendation framework that reshapes the GR pipeline from three perspectives: codebook, training, and inference. (1) We adopt a parallel codebook to provide a structural foundation for diffusion-based GR. (2) During training, we adaptively construct masking supervision signals along both the temporal and sample dimensions. (3) During inference, we develop a warm-up-based two-stage parallel decoding strategy for efficient generation of SIDs. Extensive experiments on multiple public and industrial-scale datasets show that MDGR outperforms ten state-of-the-art baselines by up to 10.78%. Furthermore, by deploying MDGR on a large-scale online advertising platform, we achieve a 1.20% increase in revenue, demonstrating its practical value.
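The abstract's first step, quantizing continuous item embeddings into multi-level SIDs, can be illustrated with a minimal residual-quantization sketch. Everything below (the 2-D embeddings, the tiny hand-made codebooks, and the function names) is an illustrative assumption for the conventional coarse-to-fine SID scheme; it is not the paper's actual design, which adopts a parallel codebook instead.

```python
# Toy sketch of multi-level semantic ID (SID) assignment via residual
# quantization. Real systems learn high-dimensional codebooks (e.g.
# with an RQ-VAE); here everything is hand-made so the sketch runs.

def nearest(vec, codebook):
    """Index of the codebook entry closest to vec (squared L2 distance)."""
    def dist(code):
        return sum((a - b) ** 2 for a, b in zip(vec, code))
    return min(range(len(codebook)), key=lambda i: dist(codebook[i]))

def quantize(embedding, codebooks):
    """Greedily quantize an embedding into one code index per level.

    At each level, pick the nearest code, subtract it, and pass the
    residual to the next level, yielding a coarse-to-fine SID.
    """
    sid, residual = [], list(embedding)
    for codebook in codebooks:
        idx = nearest(residual, codebook)
        sid.append(idx)
        residual = [r - c for r, c in zip(residual, codebook[idx])]
    return sid

# Two tiny codebooks: level 0 is coarse, level 1 refines the residual.
codebooks = [
    [[0.0, 0.0], [1.0, 1.0]],   # level 0
    [[0.0, 0.1], [0.1, 0.0]],   # level 1
]
print(quantize([1.05, 0.95], codebooks))  # → [1, 1]
```

Because each level quantizes the residual of the previous one, the resulting digits are ordered coarse-to-fine, which is precisely the structure that makes a fixed left-to-right decoding path feel natural for autoregressive GR, and which the abstract's limitation (2) calls into question.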
Related papers
- DeepInterestGR: Mining Deep Multi-Interest Using Multi-Modal LLMs for Generative Recommendation [0.0]
DeepInterestGR introduces three key innovations to the generative recommendation framework. We leverage multi-LLM Interest Mining, Reward-Labeled Deep Interest, and Interest-Enhanced Item Discretization. Experiments on three Amazon Review benchmarks demonstrate that DeepInterestGR consistently outperforms state-of-the-art baselines.
arXiv Detail & Related papers (2026-02-21T17:03:06Z) - S$^2$GR: Stepwise Semantic-Guided Reasoning in Latent Space for Generative Recommendation [15.69884243417431]
Generative Recommendation (GR) has emerged as a transformative paradigm with its end-to-end generation advantages. Existing GR methods primarily focus on direct Semantic ID (SID) generation from interaction sequences. We propose stepwise semantic-guided reasoning in latent space (S$^2$GR), a novel reasoning-enhanced GR framework.
arXiv Detail & Related papers (2026-01-26T16:40:37Z) - Co-GRPO: Co-Optimized Group Relative Policy Optimization for Masked Diffusion Model [74.99242687133408]
Masked Diffusion Models (MDMs) have shown promising potential across vision, language, and cross-modal generation. We introduce Co-GRPO, which reformulates MDM generation as a unified Markov Decision Process (MDP) that jointly incorporates both the model and the inference schedule.
arXiv Detail & Related papers (2025-12-25T12:06:04Z) - Multi-Aspect Cross-modal Quantization for Generative Recommendation [27.92632297542123]
We propose Multi-Aspect Cross-modal quantization for generative Recommendation (MACRec). We first introduce cross-modal quantization during the ID learning process, which effectively reduces conflict rates. We also incorporate multi-aspect cross-modal alignments, including implicit and explicit alignments.
arXiv Detail & Related papers (2025-11-19T04:55:14Z) - LLaDA-Rec: Discrete Diffusion for Parallel Semantic ID Generation in Generative Recommendation [32.284624021041004]
We propose LLaDA-Rec, a discrete diffusion framework that reformulates recommendation as parallel semantic ID generation. Experiments on three real-world datasets show that LLaDA-Rec consistently outperforms both ID-based and state-of-the-art generative recommenders.
arXiv Detail & Related papers (2025-11-09T07:12:15Z) - DiffGRM: Diffusion-based Generative Recommendation Model [63.35379395455103]
Generative recommendation (GR) is an emerging paradigm that represents each item via a tokenizer as an n-digit semantic ID (SID). We propose DiffGRM, a diffusion-based GR model that replaces the autoregressive decoder with a masked discrete diffusion model (MDM). Experiments show consistent gains over strong generative and discriminative recommendation baselines on multiple datasets.
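To make the idea of replacing autoregressive decoding with masked discrete diffusion concrete, here is a toy sketch of confidence-based parallel unmasking over the positions of a single SID. The scoring function, names, and two-step schedule are assumptions for illustration; the actual models learn a denoiser conditioned on the user's interaction history.

```python
import math

MASK = -1  # placeholder token for a still-masked SID position

def toy_logits(sid, position, codebook_size):
    """Stand-in for a learned denoiser: one score per candidate code.

    A real model would condition on `sid` (the partially unmasked ID)
    and the user's history; this deterministic toy ignores `sid` so
    the sketch runs without a trained network.
    """
    return [math.sin(0.7 * c + 1.3 * position) for c in range(codebook_size)]

def parallel_masked_decode(num_levels=4, codebook_size=8, steps=2):
    """Decode all SID levels in a fixed number of refinement steps.

    Each step scores every masked position, then commits the most
    confident fraction, so the number of model calls scales with
    `steps` rather than with `num_levels` as in autoregressive decoding.
    """
    sid = [MASK] * num_levels
    per_step = math.ceil(num_levels / steps)
    for _ in range(steps):
        masked = [i for i, t in enumerate(sid) if t == MASK]
        if not masked:
            break
        scored = []  # (confidence, position, best_code) per masked slot
        for i in masked:
            logits = toy_logits(sid, i, codebook_size)
            best = max(range(codebook_size), key=lambda c: logits[c])
            scored.append((logits[best], i, best))
        scored.sort(reverse=True)  # unmask highest-confidence slots first
        for _conf, i, code in scored[:per_step]:
            sid[i] = code
    return sid

print(parallel_masked_decode())
```

With `steps=2` and four levels, two positions are committed per step; the order in which positions get unmasked is decided by model confidence rather than by a fixed left-to-right path, which is the key contrast with autoregressive SID generation.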
arXiv Detail & Related papers (2025-10-21T03:23:32Z) - ZeroGR: A Generalizable and Scalable Framework for Zero-Shot Generative Retrieval [125.19156877994612]
Generative retrieval (GR) reformulates information retrieval (IR) by framing it as the generation of document identifiers (docids). We propose ZeroGR, a zero-shot generative retrieval framework that leverages natural language instructions to extend GR across a wide range of IR tasks. Specifically, ZeroGR is composed of three key components: (i) an LM-based docid generator that unifies heterogeneous documents into semantically meaningful docids; (ii) an instruction-tuned query generator that generates diverse types of queries from natural language task descriptions to enhance
arXiv Detail & Related papers (2025-10-12T03:04:24Z) - Understanding Generative Recommendation with Semantic IDs from a Model-scaling View [57.471604518714535]
Generative Recommendation (GR) tries to unify rich item semantics and collaborative filtering signals. One popular modern approach is to use semantic IDs (SIDs) to represent items in an autoregressive user interaction sequence modeling setup. We show that SID-based GR exhibits significant bottlenecks when scaling up the model. We revisit another GR paradigm that directly uses large language models (LLMs) as recommenders.
arXiv Detail & Related papers (2025-09-29T21:24:17Z) - FORGE: Forming Semantic Identifiers for Generative Retrieval in Industrial Datasets [64.51403245281547]
FORGE is a benchmark for FOrming semantic identifieRs in Generative rEtrieval with industrial datasets. For real-world applications, FORGE introduces an offline pretraining schema that reduces online convergence by half.
arXiv Detail & Related papers (2025-09-25T08:44:22Z) - Accelerating Adaptive Retrieval Augmented Generation via Instruction-Driven Representation Reduction of Retrieval Overlaps [16.84310001807895]
This paper introduces a model-agnostic approach that can be applied to A-RAG methods. Specifically, we use cache access and parallel generation to speed up the prefilling and decoding stages, respectively.
arXiv Detail & Related papers (2025-05-19T05:39:38Z) - EAGER: Two-Stream Generative Recommender with Behavior-Semantic Collaboration [63.112790050749695]
We introduce EAGER, a novel generative recommendation framework that seamlessly integrates both behavioral and semantic information.
We validate the effectiveness of EAGER on four public benchmarks, demonstrating its superior performance compared to existing methods.
arXiv Detail & Related papers (2024-06-20T06:21:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.