Continuously Augmented Discrete Diffusion model for Categorical Generative Modeling
- URL: http://arxiv.org/abs/2510.01329v1
- Date: Wed, 01 Oct 2025 18:00:56 GMT
- Title: Continuously Augmented Discrete Diffusion model for Categorical Generative Modeling
- Authors: Huangjie Zheng, Shansan Gong, Ruixiang Zhang, Tianrong Chen, Jiatao Gu, Mingyuan Zhou, Navdeep Jaitly, Yizhe Zhang
- Abstract summary: Standard discrete diffusion models treat all unobserved states identically by mapping them to an absorbing [MASK] token. This creates an 'information void' where semantic information that could be inferred from unmasked tokens is lost between denoising steps. We introduce Continuously Augmented Discrete Diffusion, a framework that augments the discrete state space with a paired diffusion in a continuous latent space.
- Score: 87.34677262370924
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Standard discrete diffusion models treat all unobserved states identically by mapping them to an absorbing [MASK] token. This creates an 'information void' where semantic information that could be inferred from unmasked tokens is lost between denoising steps. We introduce Continuously Augmented Discrete Diffusion (CADD), a framework that augments the discrete state space with a paired diffusion in a continuous latent space. This yields graded, gradually corrupted states in which masked tokens are represented by noisy yet informative latent vectors rather than collapsed 'information voids'. At each reverse step, CADD may leverage the continuous latent as a semantic hint to guide discrete denoising. The design is clean and compatible with existing discrete diffusion training. At sampling time, the strength and choice of estimator for the continuous latent vector enables a controlled trade-off between mode-coverage (generating diverse outputs) and mode-seeking (generating contextually precise outputs) behaviors. Empirically, we demonstrate CADD improves generative quality over mask-based diffusion across text generation, image synthesis, and code modeling, with consistent gains on both qualitative and quantitative metrics against strong discrete baselines.
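The abstract describes two coupled chains: a discrete chain that absorbs tokens into [MASK], and a continuous chain that keeps a noisy latent of each token so masked positions are never pure voids. The following is a minimal sketch of such a forward corruption step; the vocabulary size, latent dimension, noise schedule, and function names are all illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (shapes and names are illustrative, not from the paper):
V, D, L = 8, 4, 6            # vocab size, latent dim, sequence length
MASK = V                     # absorbing [MASK] index
E = rng.normal(size=(V, D))  # token embedding table

def cadd_forward(tokens, t):
    """Hedged sketch of a CADD-style forward corruption at noise level t in [0, 1].

    Discrete chain: each token is absorbed into [MASK] with probability t.
    Continuous chain: every position keeps a Gaussian-corrupted copy of its
    clean embedding, so masked slots still carry a noisy semantic hint.
    """
    mask = rng.random(L) < t
    x_t = np.where(mask, MASK, tokens)  # graded discrete state
    z_t = np.sqrt(1 - t) * E[tokens] + np.sqrt(t) * rng.normal(size=(L, D))
    return x_t, z_t, mask

tokens = rng.integers(0, V, size=L)
x_t, z_t, mask = cadd_forward(tokens, t=0.7)
# Masked positions in x_t equal MASK, but z_t at those positions is a noisy
# (not zeroed) latent that a reverse step could condition on as a hint.
```

At sampling time, the abstract's mode-coverage vs. mode-seeking trade-off would correspond to how strongly the reverse step trusts an estimator of this latent; that estimator is not shown here.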
Related papers
- CoDAR: Continuous Diffusion Language Models are More Powerful Than You Think [17.27394520177311]
CoDAR is a two-stage framework that keeps diffusion entirely continuous in an embedding space while learning a strong, context-conditional discretizer. Experiments on LM1B and OpenWebText demonstrate that CoDAR substantially improves generation quality over latent diffusion.
arXiv Detail & Related papers (2026-03-03T03:05:15Z) - Rejection Mixing: Fast Semantic Propagation of Mask Tokens for Efficient DLLM Inference [58.189320101488725]
DLLMs promise fast non-autoregressive inference but suffer a severe quality-speed trade-off in parallel decoding. We address this by integrating continuous representations into the discrete decoding process, as they preserve rich inter-position dependencies. We propose ReMix, a framework that introduces a novel Continuous Mixing State as an intermediate between the initial masked state and the final decoded token state.
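The blurb only names the "Continuous Mixing State" as an intermediate between the masked and decoded states. One plausible reading, sketched below purely as an assumption (the paper's actual construction may differ), is a convex interpolation between the [MASK] embedding and the candidate token embedding:

```python
import numpy as np

rng = np.random.default_rng(2)
D = 4                          # latent dim (illustrative)
e_mask = rng.normal(size=D)    # embedding of [MASK] (illustrative)
e_tok = rng.normal(size=D)     # embedding of the eventual decoded token

def mixing_state(lam):
    """Hypothetical continuous mixing state: lam=0 is fully masked,
    lam=1 is the fully decoded token embedding."""
    return (1.0 - lam) * e_mask + lam * e_tok

halfway = mixing_state(0.5)  # an intermediate, partially committed state
```

Such an intermediate state would let later positions read partial semantic commitments from positions still being decoded in parallel.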
arXiv Detail & Related papers (2026-02-26T11:08:11Z) - Bridging the Discrete-Continuous Gap: Unified Multimodal Generation via Coupled Manifold Discrete Absorbing Diffusion [60.186310080523135]
The bifurcation of generative modeling into autoregressive approaches for discrete data (text) and diffusion approaches for continuous data (images) hinders the development of truly unified multimodal systems. We propose CoM-DAD, a novel probabilistic framework that reformulates multimodal generation as a hierarchical dual process. Our method demonstrates superior stability over standard masked modeling, establishing a new paradigm for scalable, unified text-image generation.
arXiv Detail & Related papers (2026-01-07T16:21:19Z) - CANDI: Hybrid Discrete-Continuous Diffusion Models [36.61898210733147]
We show how noise corrupts discrete data through two mechanisms: discrete identity corruption and continuous rank degradation. We propose CANDI, a hybrid framework that decouples discrete and continuous corruption. This unlocks the benefits of continuous diffusion for discrete spaces.
arXiv Detail & Related papers (2025-10-26T03:24:31Z) - Latent Discrete Diffusion Models [18.979326092796896]
We study discrete diffusion for language and other categorical data. We propose Latent Discrete Diffusion Models (LDDMs). We present two instantiations: (i) FUJI-LDDMs, which perform fully joint denoising of tokens and latents, and (ii) SEQ-LDDMs, which sequentially resolve the latent chain and then the discrete chain conditioned on it. For both variants we derive ELBO-style objectives and discuss design choices for learning latents that are informative yet amenable to diffusion modeling.
arXiv Detail & Related papers (2025-10-20T21:26:52Z) - Coevolutionary Continuous Discrete Diffusion: Make Your Diffusion Language Model a Latent Reasoner [66.86440230599656]
We argue that diffusion language models do not necessarily need to operate in the discrete space. In particular, we prove that continuous diffusion models have stronger expressivity than discrete diffusions and looped transformers. We propose Coevolutionary Continuous Discrete Diffusion (CCDD), which defines a joint multimodal diffusion process on the union of a continuous representation space and a discrete token space.
arXiv Detail & Related papers (2025-10-03T17:44:41Z) - Authentic Discrete Diffusion Model [72.31371542619121]
The Authentic Discrete Diffusion (ADD) framework redefines prior pseudo-discrete approaches. ADD reformulates the diffusion input by directly using float-encoded one-hot class data. Experiments demonstrate that ADD achieves superior performance on classification tasks compared to the baseline.
arXiv Detail & Related papers (2025-10-01T15:51:10Z) - Unifying Continuous and Discrete Text Diffusion with Non-simultaneous Diffusion Processes [9.29387855908007]
NeoDiff is a novel diffusion model that integrates the strengths of both discrete and continuous approaches. Our approach unifies the theories of discrete and continuous diffusion models, offering a more principled and effective framework for text generation.
arXiv Detail & Related papers (2025-05-28T09:28:52Z) - What is Adversarial Training for Diffusion Models? [4.71482540145286]
We show that adversarial training (AT) for diffusion models (DMs) fundamentally differs from AT for classifiers. AT is a way to enforce smoothness in the diffusion flow, improving robustness to outliers and corrupted data. We rigorously evaluate our approach on proof-of-concept datasets with known distributions in low- and high-dimensional spaces.
arXiv Detail & Related papers (2025-05-27T20:32:28Z) - Generalized Interpolating Discrete Diffusion [65.74168524007484]
Masked diffusion is a popular choice due to its simplicity and effectiveness. We introduce a new family of generalized interpolating discrete diffusion (GIDD) processes, which offer greater flexibility in the design of the noising process. Exploiting GIDD's flexibility, we explore a hybrid approach combining masking and uniform noise, leading to improved sample quality.
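The GIDD blurb mentions a hybrid of masking and uniform noise. A minimal sketch of such a hybrid forward corruption is below; the corruption probability, mixing weight, and function name are illustrative assumptions rather than the paper's actual schedule.

```python
import numpy as np

rng = np.random.default_rng(1)
V, L = 8, 6   # vocab size, sequence length (illustrative)
MASK = V      # absorbing [MASK] index

def hybrid_corrupt(tokens, t, p_uniform=0.2):
    """Hedged sketch of a masking + uniform-noise hybrid corruption.

    Each position is corrupted with total probability t; a corrupted
    position becomes a uniformly random vocabulary token with probability
    p_uniform, and the absorbing [MASK] otherwise.
    """
    corrupt = rng.random(L) < t
    to_uniform = corrupt & (rng.random(L) < p_uniform)
    x_t = tokens.copy()
    x_t[corrupt] = MASK
    x_t[to_uniform] = rng.integers(0, V, size=to_uniform.sum())
    return x_t

tokens = rng.integers(0, V, size=L)
x_t = hybrid_corrupt(tokens, t=0.6)
```

Compared with pure masking, the uniform-noise component forces the denoiser to also detect and correct wrong tokens, not just fill in masked ones.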
arXiv Detail & Related papers (2025-03-06T14:30:55Z) - Interleaved Gibbs Diffusion: Generating Discrete-Continuous Data with Implicit Constraints [30.624303845550575]
Interleaved Gibbs Diffusion (IGD) is a novel generative modeling framework for discrete-continuous data. IGD generalizes discrete-time Gibbs-sampling-type Markov chains to the case of discrete-continuous generation. It achieves state-of-the-art results without relying on domain-specific inductive biases.
arXiv Detail & Related papers (2025-02-19T05:51:24Z) - Continuous Speculative Decoding for Autoregressive Image Generation [27.308442169466975]
Continuous visual autoregressive (AR) models have demonstrated promising performance in image generation. Speculative decoding has effectively accelerated discrete autoregressive inference. This work addresses challenges arising from a low acceptance rate, an inconsistent output distribution, and a modified distribution without an analytic expression.
arXiv Detail & Related papers (2024-11-18T09:19:15Z) - G2D2: Gradient-Guided Discrete Diffusion for Inverse Problem Solving [83.56510119503267]
This paper presents a novel method for addressing linear inverse problems by leveraging generative models based on discrete diffusion as priors. We employ a star-shaped noise process to mitigate the drawbacks of traditional discrete diffusion models with absorbing states.
arXiv Detail & Related papers (2024-10-09T06:18:25Z) - A Cheaper and Better Diffusion Language Model with Soft-Masked Noise [62.719656543880596]
Masked-Diffuse LM is a novel diffusion model for language modeling, inspired by linguistic features of natural language.
Specifically, we design a linguistic-informed forward process which adds corruptions to the text through strategically soft-masking to better noise the textual data.
We demonstrate that our Masked-Diffuse LM can achieve better generation quality than the state-of-the-art diffusion models with better efficiency.
arXiv Detail & Related papers (2023-04-10T17:58:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.