Teach Diffusion Language Models to Learn from Their Own Mistakes
- URL: http://arxiv.org/abs/2601.06428v1
- Date: Sat, 10 Jan 2026 05:04:33 GMT
- Title: Teach Diffusion Language Models to Learn from Their Own Mistakes
- Authors: Liming Liu, Binxuan Huang, Xin Liu, Bing Yin, Tuo Zhao
- Abstract summary: Masked Diffusion Language Models (DLMs) achieve significant speed by generating multiple tokens in parallel. This parallel sampling, however, introduces strong dependency errors, and quality deteriorates rapidly as the generation step size grows. We propose Decoupled Self-Correction to maintain high-quality multi-token generation.
- Score: 45.68746718883178
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Masked Diffusion Language Models (DLMs) achieve significant speed by generating multiple tokens in parallel. However, this parallel sampling approach, especially when using fewer inference steps, introduces strong dependency errors and causes quality to deteriorate rapidly as the generation step size grows. As a result, reliable self-correction becomes essential for maintaining high-quality multi-token generation. To address this, we propose Decoupled Self-Correction (DSC), a novel two-stage methodology. DSC first fully optimizes the DLM's generative ability before freezing the model and training a specialized correction head. This decoupling preserves the model's peak SFT performance and ensures that the generated errors used for correction-head training are of higher quality. Additionally, we introduce Future-Context Augmentation (FCA) to maximize the correction head's accuracy. FCA generalizes the error training distribution by augmenting samples with ground-truth tokens, effectively training the head to utilize a richer, future-looking context. This mechanism enables reliable detection of the subtle errors made by the high-fidelity base model. Our DSC framework enables the model, at inference time, to jointly generate and revise tokens, thereby correcting errors introduced by multi-token generation and mitigating error accumulation across steps. Experiments on mathematical reasoning and code generation benchmarks demonstrate that our approach substantially reduces the quality degradation associated with larger generation steps, allowing DLMs to achieve both high generation speed and strong output fidelity.
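The abstract's two key ingredients, Future-Context Augmentation and per-token error supervision for the frozen-model correction head, can be illustrated with a minimal sketch. All function names, the mixing probability `p_truth`, and the toy token sequences below are hypothetical stand-ins, not the authors' implementation; only the idea (randomly swapping generated tokens for ground-truth tokens, then labeling remaining mismatches as errors) follows the paper's description.

```python
import random

def fca_augment(generated, ground_truth, p_truth=0.3, rng=None):
    """Future-Context Augmentation (illustrative sketch): with probability
    p_truth, replace a model-generated token with its ground-truth token,
    so the correction head trains on a richer, partially correct
    'future-looking' context."""
    rng = rng or random.Random(0)
    return [gt if rng.random() < p_truth else g
            for g, gt in zip(generated, ground_truth)]

def error_labels(context, ground_truth):
    """Per-token supervision for the correction head: 1 where the context
    token disagrees with the ground truth (an error to detect), else 0."""
    return [int(c != gt) for c, gt in zip(context, ground_truth)]

# Toy example: the model got one token wrong ("mat" vs. "the").
generated    = ["the", "cat", "sat", "on", "mat"]
ground_truth = ["the", "cat", "sat", "on", "the"]

ctx = fca_augment(generated, ground_truth, p_truth=0.5)
labels = error_labels(ctx, ground_truth)
```

Positions where the model already agrees with the ground truth are unaffected by FCA (swapping an identical token is a no-op), so their labels stay 0; only genuinely wrong tokens can be labeled 1, and only if FCA did not replace them.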
Related papers
- Discrete Stochastic Localization for Non-autoregressive Generation [17.56505846228918]
We show that training alone can substantially improve the step-efficiency of MDLM/ReMDM sampling. On OpenWebText, fine-tuning yields large MAUVE gains at low step budgets, surpassing the MDLM+ReMDM baseline. Analyses show improved self-correction and uncertainty calibration, making remasking markedly more compute-efficient.
arXiv Detail & Related papers (2026-02-18T04:05:40Z) - Learn from Your Mistakes: Self-Correcting Masked Diffusion Models [31.536464269884103]
Masked diffusion models (MDMs) have emerged as a promising alternative to autoregressive models. We propose a framework that trains a model to perform both unmasking and correction. We name our training and sampling method Progressive Self-Correction (ProSeCo) for its unique ability to iteratively refine an entire sequence.
arXiv Detail & Related papers (2026-02-12T05:17:31Z) - Training-Free Self-Correction for Multimodal Masked Diffusion Models [61.84305395626145]
We propose a training-free self-correction framework that exploits the inductive biases of pre-trained masked diffusion models. Our method significantly improves generation quality on text-to-image generation and multimodal understanding tasks with reduced sampling steps.
arXiv Detail & Related papers (2026-02-02T23:58:15Z) - Plug-and-Play Fidelity Optimization for Diffusion Transformer Acceleration via Cumulative Error Minimization [26.687056294842083]
Caching-based methods achieve training-free acceleration but suffer from considerable computational error. Existing methods typically incorporate error correction strategies such as pruning or prediction to mitigate it. We propose a novel fidelity-optimization plugin for existing error correction methods via cumulative error minimization, named CEM.
arXiv Detail & Related papers (2025-12-29T07:36:36Z) - Continuous Autoregressive Language Models [56.49239051750678]
We introduce Continuous Autoregressive Language Models (CALM). CALM uses a high-fidelity autoencoder to compress a chunk of K tokens into a single continuous vector. We develop a comprehensive likelihood-free framework that enables robust training, evaluation, and controllable sampling.
arXiv Detail & Related papers (2025-10-31T17:58:11Z) - From Denoising to Refining: A Corrective Framework for Vision-Language Diffusion Model [72.73512218682187]
We introduce ReDiff, a refining-enhanced diffusion framework that teaches the model to identify and correct its own errors. Our approach features a two-stage training process: first, we instill a foundational revision capability by training the model to revise synthetic errors; second, we implement a novel online self-correction loop. This mistake-driven learning endows the model with the crucial ability to revisit and refine its already generated output, effectively breaking the error cascade.
arXiv Detail & Related papers (2025-10-22T06:58:55Z) - Saber: An Efficient Sampling with Adaptive Acceleration and Backtracking Enhanced Remasking for Diffusion Language Model [98.35868970993232]
Diffusion language models (DLMs) are emerging as a powerful and promising alternative to the dominant autoregressive paradigm. We introduce efficient Sampling with Adaptive acceleration and Backtracking Enhanced Remasking (i.e., Saber) to achieve better inference speed and output quality in code generation.
arXiv Detail & Related papers (2025-10-20T23:38:12Z) - Hybrid Autoregressive-Diffusion Model for Real-Time Sign Language Production [0.0]
We develop a hybrid approach that combines autoregressive and diffusion models for Sign Language Production (SLP). To capture fine-grained body movements, we design a Multi-Scale Pose Representation module that separately extracts detailed features from distinct articulators. We introduce a Confidence-Aware Causal Attention mechanism that utilizes joint-level confidence scores to dynamically guide the pose generation process.
arXiv Detail & Related papers (2025-07-12T01:34:50Z) - DiffuCoder: Understanding and Improving Masked Diffusion Models for Code Generation [68.19756761027351]
Diffusion large language models (dLLMs) are compelling alternatives to autoregressive (AR) models. We investigate their denoising processes and reinforcement learning methods. Our work provides deeper insight into the machinery of dLLM generation and offers an effective, diffusion-native RL training framework.
arXiv Detail & Related papers (2025-06-25T17:35:47Z) - Fast Autoregressive Models for Continuous Latent Generation [49.079819389916764]
Autoregressive models have demonstrated remarkable success in sequential data generation, particularly in NLP. Recent work, the masked autoregressive model (MAR), bypasses quantization by modeling per-token distributions in continuous spaces using a diffusion head. We propose Fast AutoRegressive model (FAR), a novel framework that replaces MAR's diffusion head with a lightweight shortcut head.
arXiv Detail & Related papers (2025-04-24T13:57:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.