Universal Inverse Distillation for Matching Models with Real-Data Supervision (No GANs)
- URL: http://arxiv.org/abs/2509.22459v1
- Date: Fri, 26 Sep 2025 15:12:02 GMT
- Title: Universal Inverse Distillation for Matching Models with Real-Data Supervision (No GANs)
- Authors: Nikita Kornilov, David Li, Tikhon Mavrin, Aleksei Leonov, Nikita Gushchin, Evgeny Burnaev, Iaroslav Koshelev, Alexander Korotin
- Abstract summary: We present RealUID, a universal distillation framework for all matching models that seamlessly incorporates real data into the distillation procedure without GANs. Our RealUID approach offers a simple theoretical foundation that covers previous distillation methods for Flow Matching and Diffusion models, and also extends to their modifications, such as Bridge Matching and Stochastic Interpolants.
- Score: 63.681263056053666
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While achieving exceptional generative quality, modern diffusion, flow, and other matching models suffer from slow inference, as they require many steps of iterative generation. Recent distillation methods address this by training efficient one-step generators under the guidance of a pre-trained teacher model. However, these methods are often constrained to a single framework, e.g., only diffusion or only flow models. Furthermore, they are inherently data-free: to benefit from real data, they must resort to additional, complex adversarial training with an extra discriminator model. In this paper, we present RealUID, a universal distillation framework for all matching models that seamlessly incorporates real data into the distillation procedure without GANs. Our RealUID approach offers a simple theoretical foundation that covers previous distillation methods for Flow Matching and Diffusion models, and also extends to their modifications, such as Bridge Matching and Stochastic Interpolants.
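For context on the recipe described above, the following is a minimal, hypothetical sketch of teacher-guided one-step distillation: the student regresses onto samples produced by running the teacher's flow for many steps. All names and the `teacher(x, t)` velocity-field interface are assumptions for illustration; RealUID's actual objective, which folds real data into the loss without a discriminator, is not reproduced here.

```python
# Hedged sketch of teacher-guided one-step distillation (regression onto a
# multi-step teacher sample). Hypothetical interfaces: `teacher(x, t)` is
# assumed to return a flow-matching velocity field. NOT RealUID's objective.
import torch

@torch.no_grad()
def teacher_sample(teacher, z, n_steps=50):
    """Map noise z to data by Euler-integrating the teacher's flow ODE."""
    x, dt = z, 1.0 / n_steps
    for i in range(n_steps):
        t = torch.full((x.shape[0],), i * dt, device=x.device)
        x = x + dt * teacher(x, t)  # follow the predicted velocity
    return x

def distill_step(student, teacher, optimizer, batch_size=64, shape=(3, 32, 32)):
    z = torch.randn(batch_size, *shape)
    target = teacher_sample(teacher, z)         # expensive: many teacher steps
    loss = ((student(z) - target) ** 2).mean()  # cheap: one student step
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

A one-step student trained this way amortizes the teacher's entire sampling trajectory into a single forward pass, which is what makes inference fast.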
Related papers
- Score Distillation of Flow Matching Models [67.86066177182046]
We extend Score identity Distillation (SiD) to pretrained text-to-image flow-matching models.
SiD works out of the box across these models, in both data-free and data-aided settings.
This provides the first systematic evidence that score distillation applies broadly to text-to-image flow matching models.
arXiv Detail & Related papers (2025-09-29T17:45:48Z)
- Score-based Idempotent Distillation of Diffusion Models [0.9367224590861915]
Idempotent generative networks (IGNs) are a new line of generative models based on idempotent mapping to a target manifold.
In this work, we unite diffusion and IGNs by distilling idempotent models from diffusion model scores, called SIGN.
Our proposed method is highly stable and does not require adversarial losses.
We provide a theoretical analysis of our proposed score-based training methods and empirically show that IGNs can be effectively distilled from a pre-trained diffusion model.
arXiv Detail & Related papers (2025-09-25T19:36:10Z)
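As a rough illustration of the idempotence property this entry builds on, a toy objective might penalize the model for moving points that are already on its output manifold. This is only a hedged sketch with hypothetical names, not SIGN's score-based distillation loss:

```python
# Toy idempotence objective: outputs of f should be fixed points, f(f(z)) = f(z),
# so re-applying the model to its own samples is a no-op. Hypothetical sketch;
# SIGN's actual score-based training losses are not reproduced here.
import torch

def idempotence_loss(f, z):
    fz = f(z).detach()                 # first pass: a point on the model's manifold
    return ((f(fz) - fz) ** 2).mean()  # second pass should leave it unchanged
```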
"Mixture" models are capable of treating dimensional correlations while remaining scalable.<n>Loss functions enable the mixture models to distill such many-step conventional models into just a few steps by learning the dimensional correlations.<n>Results show the effectiveness of the proposed method in distilling pretrained discrete diffusion models across image and language domains.
arXiv Detail & Related papers (2024-10-11T10:53:03Z) - Derivative-Free Guidance in Continuous and Discrete Diffusion Models with Soft Value-Based Decoding [84.3224556294803]
Diffusion models excel at capturing the natural design spaces of images, molecules, DNA, RNA, and protein sequences.
We aim to optimize downstream reward functions while preserving the naturalness of these design spaces.
Our algorithm integrates soft value functions, which look ahead to how intermediate noisy states lead to high rewards in the future.
arXiv Detail & Related papers (2024-08-15T16:47:59Z)
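As a loose illustration of the look-ahead idea in the entry above, the sketch below proposes several candidate next states from the pretrained sampler at every denoising step and keeps the one a soft value function ranks highest, so no reward gradients are required. `denoise_step` and `value_fn` are hypothetical placeholders, not the paper's API.

```python
# Hedged sketch of derivative-free, value-guided decoding: propose candidates
# from the pretrained sampler, keep the highest-valued one per batch element.
# `denoise_step` (stochastic sampler step) and `value_fn` (soft value estimate
# of future reward) are assumed interfaces, not the paper's actual API.
import torch

def value_guided_sample(denoise_step, value_fn, x, n_steps=50, n_candidates=8):
    for step in reversed(range(n_steps)):
        t = torch.full((x.shape[0],), step, device=x.device)
        # Draw several stochastic proposals for the next (less noisy) state.
        candidates = torch.stack([denoise_step(x, t) for _ in range(n_candidates)])
        # Score each proposal's promise of future reward; shape (K, B).
        scores = torch.stack([value_fn(c, t) for c in candidates])
        best = scores.argmax(dim=0)  # best candidate index per batch element
        x = candidates[best, torch.arange(x.shape[0], device=x.device)]
    return x
```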
- BOOT: Data-free Distillation of Denoising Diffusion Models with Bootstrapping [64.54271680071373]
Diffusion models have demonstrated excellent potential for generating diverse images, but their iterative sampling is slow.
Knowledge distillation has recently been proposed as a remedy that can reduce the number of inference steps to one or a few.
We present BOOT, a novel technique that overcomes the limitations of prior approaches with an efficient data-free distillation algorithm.
arXiv Detail & Related papers (2023-06-08T20:30:55Z)
- Diff-Instruct: A Universal Approach for Transferring Knowledge From Pre-trained Diffusion Models [77.83923746319498]
We propose a framework called Diff-Instruct to instruct the training of arbitrary generative models.
We show that Diff-Instruct results in state-of-the-art single-step diffusion-based models.
Experiments on refining GAN models show that Diff-Instruct consistently improves their pre-trained generators.
arXiv Detail & Related papers (2023-05-29T04:22:57Z)
- Reflected Diffusion Models [93.26107023470979]
We present Reflected Diffusion Models, which reverse a reflected stochastic differential equation evolving on the support of the data.
Our approach learns the score function through a generalized score matching loss and extends key components of standard diffusion models.
arXiv Detail & Related papers (2023-04-10T17:54:38Z)