Overclocking Electrostatic Generative Models
- URL: http://arxiv.org/abs/2509.22454v1
- Date: Fri, 26 Sep 2025 15:07:23 GMT
- Title: Overclocking Electrostatic Generative Models
- Authors: Daniil Shlenskii, Alexander Korotin
- Abstract summary: PFGM++ operates in an extended data space with auxiliary dimensionality $D$, recovering the diffusion model framework as $D \to \infty$. We propose Inverse Poisson Flow Matching (IPFM), a novel distillation framework that accelerates electrostatic generative models across all values of $D$. IPFM produces distilled generators that achieve near-teacher or even superior sample quality using only a few function evaluations.
- Score: 59.271136356755996
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Electrostatic generative models such as PFGM++ have recently emerged as a powerful framework, achieving state-of-the-art performance in image synthesis. PFGM++ operates in an extended data space with auxiliary dimensionality $D$, recovering the diffusion model framework as $D\to\infty$, while yielding superior empirical results for finite $D$. Like diffusion models, PFGM++ relies on expensive ODE simulations to generate samples, making it computationally costly. To address this, we propose Inverse Poisson Flow Matching (IPFM), a novel distillation framework that accelerates electrostatic generative models across all values of $D$. Our IPFM reformulates distillation as an inverse problem: learning a generator whose induced electrostatic field matches that of the teacher. We derive a tractable training objective for this problem and show that, as $D \to \infty$, our IPFM closely recovers Score Identity Distillation (SiD), a recent method for distilling diffusion models. Empirically, our IPFM produces distilled generators that achieve near-teacher or even superior sample quality using only a few function evaluations. Moreover, we observe that distillation converges faster for finite $D$ than in the $D \to \infty$ (diffusion) limit, which is consistent with prior findings that finite-$D$ PFGM++ models exhibit more favorable optimization and sampling properties.
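As a rough illustration of the field-matching idea in the abstract: in PFGM++, the electrostatic field induced by a distribution $p$ at an augmented point $(\mathbf{x}, r)$ is proportional to $\mathbb{E}_{\mathbf{y} \sim p}\left[(\mathbf{x} - \mathbf{y},\, r)\,/\,\|(\mathbf{x} - \mathbf{y},\, r)\|^{N+D}\right]$, and IPFM seeks a generator whose induced field matches the teacher's. The sketch below is a minimal, hypothetical rendering of that idea, not the paper's actual objective: `teacher_field`, `generator`, and the naive Monte Carlo field estimate are all assumptions made for illustration.

```python
# Hypothetical sketch of the field-matching idea behind IPFM (invented names;
# not the authors' code). Assumes a black-box teacher `teacher_field(x, r)`
# approximating the PFGM++ field at augmented radius r, and a one-step
# generator `generator(z)` mapping latents to N-dimensional samples.
import torch

def induced_field(x, r, samples, N, D):
    """Naive Monte Carlo estimate (up to a constant) of the Poisson field
    induced by `samples`, treated as point charges in the (N+D)-dimensional
    augmented space, evaluated at the query point (x, r)."""
    diff = x.unsqueeze(0) - samples                 # (M, N): x - y_i
    dist_sq = diff.pow(2).sum(dim=1) + r ** 2       # squared augmented distance
    w = dist_sq.pow(-(N + D) / 2)                   # 1 / ||(x - y_i, r)||^{N+D}
    field_x = (w.unsqueeze(1) * diff).mean(dim=0)   # data-space component
    field_r = (w * r).mean()                        # auxiliary-radius component
    return torch.cat([field_x, field_r.view(1)])    # shape (N + 1,)

def field_matching_loss(generator, teacher_field, z, x_query, r, N, D):
    """Match the direction of the generator-induced field to the teacher's
    field at a query point. The paper derives a tractable objective; this
    only spells out the conceptual quantity being matched."""
    fake = generator(z)                              # (M, N) generator samples
    f = induced_field(x_query, r, fake, N, D)
    t = teacher_field(x_query, r)                    # assumed to return (N+1,)
    f = f / (f.norm() + 1e-12)                       # compare directions only
    t = t / (t.norm() + 1e-12)
    return (f - t).pow(2).sum()
```

Consistent with the abstract, this matching problem is posed for any finite $D$, and in the $D \to \infty$ limit the abstract states that IPFM closely recovers Score Identity Distillation (SiD), the diffusion-model analogue.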
Related papers
- Distillation of Discrete Diffusion by Exact Conditional Distribution Matching [9.460409527892345]
We propose a simple and principled distillation alternative based on *conditional distribution matching*. We exploit this structure to define distillation objectives that directly match conditional distributions between a pre-trained teacher and a low-NFE student.
arXiv Detail & Related papers (2025-12-15T00:16:10Z)
- Universal Inverse Distillation for Matching Models with Real-Data Supervision (No GANs) [63.681263056053666]
We present RealUID, a universal distillation framework for all matching models that seamlessly incorporates real data into the distillation procedure without GANs. Our RealUID approach offers a simple theoretical foundation that covers previous distillation methods for Flow Matching and Diffusion models, and is also extended to their modifications, such as Bridge Matching and Interpolants.
arXiv Detail & Related papers (2025-09-26T15:12:02Z)
- Di$\mathtt{[M]}$O: Distilling Masked Diffusion Models into One-step Generator [22.88494918435088]
Masked Diffusion Models (MDMs) have emerged as a powerful generative modeling technique. We propose Di$\mathtt{[M]}$O, a novel approach that distills masked diffusion models into a one-step generator. We show Di$\mathtt{[M]}$O's effectiveness on both class-conditional and text-conditional image generation.
arXiv Detail & Related papers (2025-03-19T17:36:54Z)
- Energy-Based Diffusion Language Models for Text Generation [126.23425882687195]
Energy-based Diffusion Language Model (EDLM) is an energy-based model operating at the full sequence level for each diffusion step. Our framework offers a 1.3$\times$ sampling speedup over existing diffusion models.
arXiv Detail & Related papers (2024-10-28T17:25:56Z)
- One-Step Diffusion Distillation through Score Implicit Matching [74.91234358410281]
We present Score Implicit Matching (SIM), a new approach to distilling pre-trained diffusion models into single-step generator models.
SIM shows strong empirical performance for one-step generators.
By applying SIM to a leading transformer-based diffusion model, we distill a single-step generator for text-to-image generation.
arXiv Detail & Related papers (2024-10-22T08:17:20Z)
- EM Distillation for One-step Diffusion Models [65.57766773137068]
We propose a maximum likelihood-based approach that distills a diffusion model to a one-step generator model with minimal loss of quality. We develop a reparametrized sampling scheme and a noise cancellation technique that together stabilize the distillation process.
arXiv Detail & Related papers (2024-05-27T05:55:22Z)
- Reducing Spatial Fitting Error in Distillation of Denoising Diffusion Models [13.364271265023953]
Knowledge distillation for diffusion models is an effective method to address their costly sampling by shortening the sampling process.
We attribute the resulting quality degradation to the spatial fitting error occurring in the training of both the teacher and student models.
SFERD utilizes attention guidance from the teacher model and a designed semantic gradient predictor to reduce the student's fitting error.
We achieve an FID of 5.31 on CIFAR-10 and 9.39 on ImageNet 64$\times$64 with only one step, outperforming existing diffusion methods.
arXiv Detail & Related papers (2023-11-07T09:19:28Z)
- Towards Faster Non-Asymptotic Convergence for Diffusion-Based Generative Models [49.81937966106691]
We develop a suite of non-asymptotic theory towards understanding the data generation process of diffusion models.
In contrast to prior works, our theory is developed based on an elementary yet versatile non-asymptotic approach.
arXiv Detail & Related papers (2023-06-15T16:30:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.