Decoupled DMD: CFG Augmentation as the Spear, Distribution Matching as the Shield
- URL: http://arxiv.org/abs/2511.22677v1
- Date: Thu, 27 Nov 2025 18:24:28 GMT
- Title: Decoupled DMD: CFG Augmentation as the Spear, Distribution Matching as the Shield
- Authors: Dongyang Liu, Peng Gao, David Liu, Ruoyi Du, Zhen Li, Qilong Wu, Xin Jin, Sihan Cao, Shifeng Zhang, Hongsheng Li, Steven Hoi
- Abstract summary: Diffusion model distillation has emerged as a powerful technique for creating efficient few-step and single-step generators. We show that the primary driver of few-step distillation is not distribution matching, but a previously overlooked component we identify as CFG Augmentation (CA). We propose principled modifications to the distillation process, such as decoupling the noise schedules for the engine and the regularizer, leading to further performance gains.
- Score: 54.328202401611264
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Diffusion model distillation has emerged as a powerful technique for creating efficient few-step and single-step generators. Among these, Distribution Matching Distillation (DMD) and its variants stand out for their impressive performance, which is widely attributed to their core mechanism of matching the student's output distribution to that of a pre-trained teacher model. In this work, we challenge this conventional understanding. Through a rigorous decomposition of the DMD training objective, we reveal that in complex tasks like text-to-image generation, where classifier-free guidance (CFG) is typically required for desirable few-step performance, the primary driver of few-step distillation is not distribution matching, but a previously overlooked component we identify as CFG Augmentation (CA). We demonstrate that this term acts as the core "engine" of distillation, while the Distribution Matching (DM) term functions as a "regularizer" that ensures training stability and mitigates artifacts. We further validate this decoupling by demonstrating that while the DM term is a highly effective regularizer, it is not unique; simpler non-parametric constraints or GAN-based objectives can serve the same stabilizing function, albeit with different trade-offs. This division of labor motivates a more principled analysis of the properties of both terms, leading to a more systematic and in-depth understanding. This new understanding further enables us to propose principled modifications to the distillation process, such as decoupling the noise schedules for the engine and the regularizer, leading to further performance gains. Notably, our method has been adopted by the Z-Image ( https://github.com/Tongyi-MAI/Z-Image ) project to develop a top-tier 8-step image generation model, empirically validating the generalization and robustness of our findings.
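Since only the abstract is available here, the sketch below is a hedged, illustrative reading of the described decomposition rather than the paper's actual implementation: it shows how a DMD-with-CFG update could be split into a CA "engine" direction and a DM "regularizer" direction, each drawing noise from its own schedule. The helper `add_noise`, the uniform timestep sampling, the epsilon-prediction interfaces, and the sign and weighting choices are all assumptions introduced for illustration.

```python
# A minimal, self-contained sketch (not the authors' released code) of one way to
# read the decomposition described in the abstract: a DMD-style update split into
# a CFG Augmentation (CA) "engine" term and a Distribution Matching (DM)
# "regularizer" term, each with its own noise schedule. Function names, signatures,
# sign conventions, and per-term weights are illustrative assumptions.
import torch


def add_noise(x0: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
    """Toy variance-preserving noising for 4D image batches; t in [0, 1]."""
    alpha = torch.cos(t * torch.pi / 2).view(-1, 1, 1, 1)
    sigma = torch.sin(t * torch.pi / 2).view(-1, 1, 1, 1)
    return alpha * x0 + sigma * torch.randn_like(x0)


def decoupled_dmd_loss(student, teacher, fake_score, z, cond, cfg_scale=7.5):
    """teacher / fake_score are epsilon predictors (x, t, cond) -> eps;
    student maps (z, cond) -> image."""
    x0 = student(z, cond)  # few-step generator output

    # Decoupled noise schedules: the engine and the regularizer each sample
    # their own timestep (two independent uniform draws, as a placeholder).
    t_ca = torch.rand(x0.shape[0], device=x0.device)
    t_dm = torch.rand(x0.shape[0], device=x0.device)
    x_ca, x_dm = add_noise(x0, t_ca), add_noise(x0, t_dm)

    with torch.no_grad():
        # CA "engine": the teacher's classifier-free-guidance direction,
        # scaled by the guidance weight.
        eps_cond = teacher(x_ca, t_ca, cond)
        eps_uncond = teacher(x_ca, t_ca, None)
        ca_dir = cfg_scale * (eps_cond - eps_uncond)

        # DM "regularizer": the gap between the fake-score model (tracking the
        # student's distribution) and the teacher's conditional prediction.
        dm_dir = fake_score(x_dm, t_dm, cond) - teacher(x_dm, t_dm, cond)

    # Surrogate loss whose gradient pushes x0 along the two directions; the exact
    # signs and relative weighting would follow the DMD convention in practice.
    return (x0 * (ca_dir + dm_dir)).mean()
```

Consistent with the abstract, the two timestep draws (`t_ca`, `t_dm`) and the choice of regularizer (DM, non-parametric, or GAN-based) are treated here as independent design knobs rather than a single coupled objective.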
Related papers
- From Structure to Detail: Hierarchical Distillation for Efficient Diffusion Model [18.782919607372328]
Trajectory-based and distribution-based step distillation methods offer solutions. Trajectory-based methods preserve global structure but act as a "lossy compressor". We recast them into synergistic components within our novel Hierarchical Distillation framework.
arXiv Detail & Related papers (2025-11-12T03:12:06Z)
- Knowledge Distillation of Uncertainty using Deep Latent Factor Model [10.148306002388196]
We introduce a new method of distribution distillation called Gaussian distillation. It estimates the distribution of a teacher ensemble through a special Gaussian process called the deep latent factor model (DLF). By using multiple benchmark datasets, we demonstrate that the proposed Gaussian distillation outperforms existing baselines.
arXiv Detail & Related papers (2025-10-22T06:46:59Z)
- Sculpting Latent Spaces With MMD: Disentanglement With Programmable Priors [30.182736043604304]
We introduce the Programmable Prior Framework, a method built on the Maximum Mean Discrepancy (MMD). Our work provides a foundational tool for representation engineering, opening new avenues for model identifiability and causal reasoning.
arXiv Detail & Related papers (2025-10-13T21:26:01Z) - Large Scale Diffusion Distillation via Score-Regularized Continuous-Time Consistency [60.74505433956616]
continuous-time consistency model (sCM) is theoretically principled and empirically powerful for accelerating academic-scale diffusion.<n>We first develop a parallelism-compatible FlashAttention-2 JVP kernel, enabling sCM training on models with over 10 billion parameters and high-dimensional video tasks.<n>We propose the score-regularized continuous-time consistency model (rCM), which incorporates score distillation as a long-skip regularizer.
arXiv Detail & Related papers (2025-10-09T16:45:30Z)
- Adversarial Distribution Matching for Diffusion Distillation Towards Efficient Image and Video Synthesis [65.77083310980896]
We propose Adversarial Distribution Matching (ADM) to align latent predictions between real and fake score estimators for score distillation. Our proposed method achieves superior one-step performance on SDXL compared to DMD2 while consuming less GPU time. Additional experiments that apply multi-step ADM distillation on SD3-Medium, SD3.5-Large, and CogVideoX set a new benchmark towards efficient image and video synthesis.
arXiv Detail & Related papers (2025-07-24T16:45:05Z)
- Revisiting Diffusion Models: From Generative Pre-training to One-Step Generation [2.3359837623080613]
We show that diffusion training may be viewed as a form of generative pre-training. We create a one-step generation model by fine-tuning a pre-trained model with 85% of parameters frozen.
arXiv Detail & Related papers (2025-06-11T03:55:26Z)
- Adding Additional Control to One-Step Diffusion with Joint Distribution Matching [58.37264951734603]
JDM is a novel approach that minimizes the reverse KL divergence between image-condition joint distributions. By deriving a tractable upper bound, JDM decouples fidelity learning from condition learning. This asymmetric distillation scheme enables our one-step student to handle controls unknown to the teacher model.
arXiv Detail & Related papers (2025-03-09T15:06:50Z)
- Multi-Granularity Semantic Revision for Large Language Model Distillation [66.03746866578274]
We propose a multi-granularity semantic revision method for LLM distillation.
At the sequence level, we propose a sequence correction and re-generation strategy.
At the token level, we design a distribution adaptive clipping Kullback-Leibler loss as the distillation objective function.
At the span level, we leverage the span priors of a sequence to compute the probability correlations within spans, and constrain the teacher and student's probability correlations to be consistent.
arXiv Detail & Related papers (2024-07-14T03:51:49Z)
- Distilling Diffusion Models into Conditional GANs [90.76040478677609]
We distill a complex multistep diffusion model into a single-step conditional GAN student model.
For an efficient regression loss, we propose E-LatentLPIPS, a perceptual loss operating directly in the diffusion model's latent space.
We demonstrate that our one-step generator outperforms cutting-edge one-step diffusion distillation models.
arXiv Detail & Related papers (2024-05-09T17:59:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.