A Comprehensive Survey on Knowledge Distillation of Diffusion Models
- URL: http://arxiv.org/abs/2304.04262v1
- Date: Sun, 9 Apr 2023 15:49:28 GMT
- Title: A Comprehensive Survey on Knowledge Distillation of Diffusion Models
- Authors: Weijian Luo
- Abstract summary: Diffusion Models (DMs) utilize neural networks to specify score functions.
Our tutorial is intended for individuals with a basic understanding of generative models who wish to apply DM distillation or embark on a research project in this field.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Diffusion Models (DMs), also referred to as score-based diffusion models,
utilize neural networks to specify score functions. Unlike most other
probabilistic models, DMs directly model the score functions, which makes them
more flexible to parametrize and potentially highly expressive for
probabilistic modeling. DMs can learn fine-grained knowledge, i.e., marginal
score functions, of the underlying distribution. Therefore, a crucial research
direction is to explore how to distill the knowledge of DMs and fully utilize
their potential. Our objective is to provide a comprehensible overview of the
modern approaches for distilling DMs, starting with an introduction to DMs and
a discussion of the challenges involved in distilling them into neural vector
fields. We also provide an overview of the existing works on distilling DMs
into both stochastic and deterministic implicit generators. Finally, we review
the accelerated diffusion sampling algorithms as a training-free method for
distillation. Our tutorial is intended for individuals with a basic
understanding of generative models who wish to apply DM distillation or
embark on a research project in this field.
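As a toy illustration of the score-function view and of accelerated sampling as training-free distillation, the sketch below integrates the probability-flow ODE for 1-D Gaussian data, where the marginal score is known in closed form so no network is needed. The variance-exploding schedule sigma(t) = t, the constants, and all function names are illustrative assumptions, not the survey's method.

```python
import numpy as np

# Toy setup: data ~ N(mu, sigma0^2); VE diffusion x_t = x_0 + t * eps.
# The marginal p_t is N(mu, sigma0^2 + t^2), so the score is closed-form.
mu, sigma0, T = 2.0, 0.5, 10.0

def score(x, t):
    # grad_x log p_t(x) for p_t = N(mu, sigma0^2 + t^2)
    return -(x - mu) / (sigma0**2 + t**2)

def pf_ode_sample(n, steps, rng):
    """Euler integration of the probability-flow ODE
    dx/dt = -t * score(x, t), from t = T down to t = 0."""
    # In this toy we can draw x_T from the exact marginal; real DMs
    # approximate it with a data-independent prior such as N(0, T^2).
    x = rng.normal(mu, np.sqrt(sigma0**2 + T**2), size=n)
    ts = np.linspace(T, 0.0, steps + 1)
    for i in range(steps):
        t, dt = ts[i], ts[i + 1] - ts[i]  # dt < 0
        x = x + dt * (-t) * score(x, t)
    return x

rng = np.random.default_rng(0)
many = pf_ode_sample(100_000, 200, rng)  # many-step sampler
few = pf_ode_sample(100_000, 8, rng)     # few-step "accelerated" sampler
print(many.mean(), many.std())  # roughly mu = 2.0 and sigma0 = 0.5
print(few.mean(), few.std())    # plain Euler degrades sharply at 8 steps
```

The gap between the 200-step and 8-step results is the motivation for the solvers and distilled one-step generators the survey reviews: naive step-count reduction with a generic integrator loses sample quality.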
Related papers
- A Survey on Pre-Trained Diffusion Model Distillations [8.633764273043488]
Diffusion Models (DMs) have emerged as the dominant approach in Generative Artificial Intelligence (GenAI).
DMs are typically trained on massive datasets and usually require large storage.
Distillation methods on pre-trained DM have become widely adopted practices to develop smaller, more efficient models.
arXiv Detail & Related papers (2025-02-12T12:50:24Z)
- Diffusion Model from Scratch [0.0]
Diffusion generative models are currently the most popular generative models.
This paper aims to assist readers in building a foundational understanding of generative models by tracing the evolution from VAEs to DDPM.
arXiv Detail & Related papers (2024-12-14T13:05:05Z)
- Efficient Distribution Matching of Representations via Noise-Injected Deep InfoMax [73.03684002513218]
We enhance Deep InfoMax (DIM) to enable automatic matching of learned representations to a selected prior distribution.
We show that such modification allows for learning uniformly and normally distributed representations.
The results indicate a moderate trade-off between performance on the downstream tasks and the quality of distribution matching.
arXiv Detail & Related papers (2024-10-09T15:40:04Z)
- Pruning then Reweighting: Towards Data-Efficient Training of Diffusion Models [33.09663675904689]
We investigate efficient diffusion training from the perspective of dataset pruning.
Inspired by the principles of data-efficient training for generative models such as generative adversarial networks (GANs), we first extend the data selection scheme used in GANs to DM training.
To further improve the generation performance, we employ a class-wise reweighting approach.
arXiv Detail & Related papers (2024-09-27T20:21:19Z)
- Extracting Training Data from Unconditional Diffusion Models [76.85077961718875]
Diffusion probabilistic models (DPMs) are being employed as mainstream models for generative artificial intelligence (AI).
We aim to establish a theoretical understanding of memorization in DPMs with 1) a memorization metric for theoretical analysis, 2) an analysis of conditional memorization with informative and random labels, and 3) two better evaluation metrics for measuring memorization.
Based on the theoretical analysis, we propose a novel data extraction method called Surrogate condItional Data Extraction (SIDE) that leverages a model trained on generated data as a surrogate condition to extract training data directly from unconditional diffusion models.
arXiv Detail & Related papers (2024-06-18T16:20:12Z)
- Theoretical Insights for Diffusion Guidance: A Case Study for Gaussian Mixture Models [59.331993845831946]
Diffusion models benefit from instilling task-specific information into the score function to steer sample generation towards desired properties.
This paper provides the first theoretical study towards understanding the influence of guidance on diffusion models in the context of Gaussian mixture models.
arXiv Detail & Related papers (2024-03-03T23:15:48Z)
- Diffusion Model as Representation Learner [86.09969334071478]
Diffusion Probabilistic Models (DPMs) have recently demonstrated impressive results on various generative tasks.
We propose a novel knowledge transfer method that leverages the knowledge acquired by DPMs for recognition tasks.
arXiv Detail & Related papers (2023-08-21T00:38:39Z)
- Diff-Instruct: A Universal Approach for Transferring Knowledge From Pre-trained Diffusion Models [77.83923746319498]
We propose a framework called Diff-Instruct to instruct the training of arbitrary generative models.
We show that Diff-Instruct results in state-of-the-art single-step diffusion-based models.
Experiments on refining GAN models show that Diff-Instruct can consistently improve the pre-trained generators of GAN models.
arXiv Detail & Related papers (2023-05-29T04:22:57Z)
- A Survey on Generative Diffusion Model [75.93774014861978]
Diffusion models are an emerging class of deep generative models.
They have certain limitations, including a time-consuming iterative generation process and confinement to high-dimensional Euclidean space.
This survey presents a plethora of advanced techniques aimed at enhancing diffusion models.
arXiv Detail & Related papers (2022-09-06T16:56:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences.