Stable-DiffCoder: Pushing the Frontier of Code Diffusion Large Language Model
- URL: http://arxiv.org/abs/2601.15892v2
- Date: Fri, 23 Jan 2026 06:11:02 GMT
- Title: Stable-DiffCoder: Pushing the Frontier of Code Diffusion Large Language Model
- Authors: Chenghao Fan, Wen Heng, Bo Li, Sichen Liu, Yuxuan Song, Jing Su, Xiaoye Qu, Kai Shen, Wei Wei,
- Abstract summary: Diffusion-based language models (DLLMs) offer non-sequential, block-wise generation and richer data reuse compared to autoregressive (AR) models. We introduce Stable-DiffCoder, a block diffusion code model that reuses the Seed-Coder architecture, data, and training pipeline.
- Score: 35.59660517313579
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Diffusion-based language models (DLLMs) offer non-sequential, block-wise generation and richer data reuse compared to autoregressive (AR) models, but existing code DLLMs still lag behind strong AR baselines under comparable budgets. We revisit this setting in a controlled study and introduce Stable-DiffCoder, a block diffusion code model that reuses the Seed-Coder architecture, data, and training pipeline. To enable efficient knowledge learning and stable training, we incorporate a block diffusion continual pretraining (CPT) stage enhanced by a tailored warmup and a block-wise clipped noise schedule. Under the same data and architecture, Stable-DiffCoder overall outperforms its AR counterpart on a broad suite of code benchmarks. Moreover, relying only on the CPT and supervised fine-tuning stages, Stable-DiffCoder achieves stronger performance than a wide range of ~8B AR models and DLLMs, demonstrating that diffusion-based training can improve code modeling quality beyond AR training alone. Finally, diffusion-based any-order modeling improves structured code modeling for editing and reasoning and, through data augmentation, benefits low-resource coding languages.
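The abstract credits training stability to a tailored warmup and a block-wise clipped noise schedule but gives no formula. A minimal PyTorch sketch of one plausible reading: each block draws its own mask ratio, clipped away from the extremes so that no block's denoising target is nearly clean or fully masked. The block size, clip range, and all names below are illustrative assumptions, not the paper's values.

```python
import torch

def blockwise_clipped_noise(input_ids: torch.Tensor,
                            mask_token_id: int,
                            block_size: int = 32,
                            t_min: float = 0.1,
                            t_max: float = 0.9):
    """Hypothetical sketch: mask each block independently with a clipped noise level.

    Every block of `block_size` tokens draws its own mask ratio t ~ U(0, 1),
    clipped to [t_min, t_max]; tokens inside the block are masked with
    probability t.
    """
    batch, seq_len = input_ids.shape
    num_blocks = (seq_len + block_size - 1) // block_size
    # One noise level per (sample, block), clipped to the allowed range.
    t = torch.rand(batch, num_blocks, device=input_ids.device).clamp(t_min, t_max)
    # Expand block-level noise levels to per-token mask probabilities.
    t_tok = t.repeat_interleave(block_size, dim=1)[:, :seq_len]
    mask = torch.rand(batch, seq_len, device=input_ids.device) < t_tok
    noisy = input_ids.masked_fill(mask, mask_token_id)
    return noisy, mask  # `mask` marks the positions the model must denoise
```

Clipping of this kind keeps every block's denoising loss informative, which is one way a noise schedule can stabilize diffusion CPT.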
Related papers
- Visual Disentangled Diffusion Autoencoders: Scalable Counterfactual Generation for Foundation Models [1.3535770763481902]
Foundation models, despite their robust zero-shot capabilities, remain vulnerable to spurious correlations and 'Clever Hans' strategies. We propose Visual Disentangled Diffusion Autoencoders (DiDAE), a novel framework integrating frozen foundation models with disentangled dictionary learning. DiDAE first edits foundation-model embeddings along interpretable directions of the learned disentangled dictionary and then decodes them via a diffusion autoencoder; a toy sketch of the editing step follows this entry.
arXiv Detail & Related papers (2026-01-29T15:25:37Z)
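The editing step described above can be pictured as a linear move along one dictionary direction in embedding space. A toy sketch; the dictionary, direction index, edit strength, and function name are hypothetical, and decoding the result back to pixels with the diffusion autoencoder is the part the paper itself contributes:

```python
import torch

def edit_embedding(z: torch.Tensor,          # frozen foundation-model embedding, (d,)
                   dictionary: torch.Tensor, # learned disentangled directions, (num_dirs, d)
                   k: int,                   # which interpretable direction to edit
                   alpha: float) -> torch.Tensor:
    """Shift an embedding along one disentangled direction.

    The edited embedding would then be decoded by a diffusion autoencoder
    to produce a counterfactual image.
    """
    direction = dictionary[k] / dictionary[k].norm()  # unit-norm direction
    return z + alpha * direction
```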
- DiffBench Meets DiffAgent: End-to-End LLM-Driven Diffusion Acceleration Code Generation [25.165655684862074]
We introduce a framework driven by large language models (LLMs) for automated acceleration code generation and evaluation. First, we present DiffBench, a comprehensive benchmark that implements a three-stage automated evaluation pipeline. Second, we propose DiffAgent, an agent that generates optimal acceleration strategies and code for arbitrary diffusion models.
arXiv Detail & Related papers (2026-01-06T16:55:55Z)
- DiRL: An Efficient Post-Training Framework for Diffusion Language Models [54.405206032785706]
Diffusion Language Models (dLLMs) have emerged as promising alternatives to Auto-Regressive (AR) models. Existing methods suffer from computational inefficiency and objective mismatches between training and inference. We introduce DiRL, an efficient post-training framework that tightly integrates FlexAttention-accelerated block-wise training with LMDeploy-optimized inference.
arXiv Detail & Related papers (2025-12-23T08:33:19Z)
- Fast-dLLM v2: Efficient Block-Diffusion LLM [64.38006546510337]
Fast-dLLM v2 is a block diffusion language model that adapts pretrained AR models into dLLMs for parallel text generation, a roughly 500x reduction in training data compared to full-attention diffusion LLMs such as Dream (580B tokens); a sketch of the block-wise attention pattern underlying block diffusion follows this entry.
arXiv Detail & Related papers (2025-09-30T14:40:18Z)
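Block diffusion models of this family typically attend bidirectionally within a block and causally across blocks, which is what lets earlier blocks be KV-cached while the current block is denoised in parallel. A minimal, illustrative mask construction (a generic block-diffusion pattern, not code from the paper):

```python
import torch

def block_diffusion_mask(seq_len: int, block_size: int) -> torch.Tensor:
    """Boolean attention mask: True = attention allowed.

    Tokens attend bidirectionally within their own block and causally to
    all earlier blocks, so past blocks can be KV-cached while the current
    block is denoised in parallel.
    """
    block_idx = torch.arange(seq_len) // block_size           # block id per position
    same_block = block_idx[:, None] == block_idx[None, :]     # within-block: bidirectional
    earlier_block = block_idx[None, :] < block_idx[:, None]   # across blocks: causal
    return same_block | earlier_block
```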
- Revolutionizing Reinforcement Learning Framework for Diffusion Large Language Models [49.911784762244814]
TraceRL is a trajectory-aware reinforcement learning framework for diffusion language models (DLMs). We derive a series of state-of-the-art diffusion language models, namely TraDo. TraDo-8B-Instruct achieves relative accuracy improvements of 6.1% over Qwen2.5-7B-Instruct and 51.3% over Llama3.1-8B-Instruct on mathematical reasoning benchmarks.
arXiv Detail & Related papers (2025-09-08T17:58:06Z)
- Dream-Coder 7B: An Open Diffusion Language Model for Code [99.14959222355988]
We present Dream-Coder 7B, an open-source discrete diffusion language model for code generation that exhibits emergent any-order generation capabilities. Unlike traditional autoregressive (AR) models that decode strictly left-to-right, Dream-Coder 7B adaptively determines its decoding strategy based on the coding task; a toy any-order decoding loop is sketched after this entry.
arXiv Detail & Related papers (2025-09-01T05:30:56Z)
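Any-order generation means the model chooses which masked position to commit next instead of scanning left to right. A toy greedy loop under an assumed interface, where `model` maps a 1-D token sequence to per-position logits and `mask_id` marks unfilled slots; Dream-Coder's actual decoding strategy is richer than this:

```python
import torch

@torch.no_grad()
def any_order_decode(model, ids: torch.Tensor, mask_id: int, steps: int):
    """Fill masked positions in model-chosen order, one per step.

    At each step the model scores every masked slot and commits the single
    prediction it is most confident about, so generation order adapts to
    the input rather than running left to right.
    """
    for _ in range(steps):
        masked = (ids == mask_id)
        if not masked.any():
            break
        logits = model(ids)                     # assumed shape: (seq_len, vocab)
        conf, tok = logits.softmax(-1).max(-1)  # per-position best token + confidence
        conf = conf.masked_fill(~masked, -1.0)  # only consider masked slots
        pos = conf.argmax()                     # most confident masked slot
        ids[pos] = tok[pos]
    return ids
```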
- DiffuCoder: Understanding and Improving Masked Diffusion Models for Code Generation [68.19756761027351]
Diffusion large language models (dLLMs) are compelling alternatives to autoregressive (AR) models. We investigate their denoising processes and reinforcement learning methods. Our work provides deeper insight into the machinery of dLLM generation and offers an effective, diffusion-native RL training framework.
arXiv Detail & Related papers (2025-06-25T17:35:47Z)
- Fast-dLLM: Training-free Acceleration of Diffusion LLM by Enabling KV Cache and Parallel Decoding [51.711605076319216]
Diffusion-based large language models (Diffusion LLMs) have shown promise for non-autoregressive text generation with parallel decoding capabilities. We introduce a novel block-wise approximate KV Cache mechanism tailored for bidirectional diffusion models, enabling cache reuse with negligible performance drop. We also propose a confidence-aware parallel decoding strategy that selectively decodes tokens exceeding a confidence threshold, mitigating dependency violations and maintaining generation quality; a minimal sketch of this strategy follows this entry.
arXiv Detail & Related papers (2025-05-28T17:39:15Z)
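The confidence-aware strategy can be sketched in a few lines: commit every masked position whose top prediction clears a threshold, and fall back to the single most confident slot when none does, so each forward pass makes progress. The threshold `tau` and the `model` interface below are assumptions:

```python
import torch

@torch.no_grad()
def confidence_parallel_step(model, ids: torch.Tensor, mask_id: int, tau: float = 0.9):
    """One decoding step: commit all masked tokens with confidence above tau.

    Decoding several confident tokens per forward pass is what makes the
    strategy parallel; the fallback guarantees progress every step.
    """
    masked = (ids == mask_id)
    conf, tok = model(ids).softmax(-1).max(-1)   # assumed logits shape: (seq_len, vocab)
    commit = masked & (conf > tau)
    if not commit.any():                         # fallback: best single masked slot
        pos = conf.masked_fill(~masked, -1.0).argmax()
        commit[pos] = True
    return torch.where(commit, tok, ids)
```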
- KDC-Diff: A Latent-Aware Diffusion Model with Knowledge Retention for Memory-Efficient Image Generation [2.0250638970950905]
KDC-Diff is a novel and scalable generative framework designed to significantly reduce computational overhead while maintaining high performance. Our model demonstrates strong performance across FID, CLIP, KID, and LPIPS metrics while achieving substantial reductions in parameter count, inference time, and FLOPs.
arXiv Detail & Related papers (2025-05-11T14:40:51Z)
- MoSE: Hierarchical Self-Distillation Enhances Early Layer Embeddings [2.1262605464247812]
Self-distillation is a principled approach to trading inference cost for accuracy across various code understanding tasks. Our architecture improves text-to-code and code-to-code search by targeting specific encoder layers as exit heads. We also release a new dataset, created through code translation, that extends text-to-code benchmarks with cross-language code-to-code pairs; a toy early-exit embedding sketch follows this entry.
arXiv Detail & Related papers (2025-03-04T21:08:17Z)
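The exit-head idea amounts to reading an embedding off an intermediate encoder layer rather than the last one, trading some accuracy for proportionally less compute. A toy sketch over a generic stack of encoder layers; the pooling, layer choice, and names are assumptions rather than MoSE's exact design:

```python
import torch

@torch.no_grad()
def early_exit_embedding(layers, x: torch.Tensor, exit_layer: int) -> torch.Tensor:
    """Mean-pool the hidden states of an intermediate layer as the embedding.

    Running only the first `exit_layer` layers of the encoder cuts inference
    cost; during training, a distillation loss would align this early
    embedding with the final layer's.
    """
    h = x                               # (seq_len, d) token embeddings
    for layer in layers[:exit_layer]:   # run only the first few encoder layers
        h = layer(h)
    return h.mean(dim=0)                # (d,) pooled sequence embedding
```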
- Adding Conditional Control to Diffusion Models with Reinforcement Learning [68.06591097066811]
Diffusion models are powerful generative models that allow for precise control over the characteristics of the generated samples. While diffusion models trained on large datasets have achieved success, there is often a need to introduce additional controls during downstream fine-tuning. This work presents a novel method based on reinforcement learning (RL) to add such controls using an offline dataset.
arXiv Detail & Related papers (2024-06-17T22:00:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.