On the Design Fundamentals of Diffusion Models: A Survey
- URL: http://arxiv.org/abs/2306.04542v3
- Date: Thu, 19 Oct 2023 12:28:12 GMT
- Title: On the Design Fundamentals of Diffusion Models: A Survey
- Authors: Ziyi Chang, George Alex Koulieris, Hubert P. H. Shum
- Abstract summary: We organize this review according to their three key components, namely the forward process, the reverse process, and the sampling procedure.
This allows us to provide a fine-grained perspective of diffusion models, benefiting future studies in the analysis of individual components, the applicability of design choices, and the implementation of diffusion models.
- Score: 9.183452635904278
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Diffusion models are generative models, which gradually add and remove noise
to learn the underlying distribution of training data for data generation. The
components of diffusion models have gained significant attention with many
design choices proposed. Existing reviews have primarily focused on
higher-level solutions, thereby covering less on the design fundamentals of
components. This study seeks to address this gap by providing a comprehensive
and coherent review on component-wise design choices in diffusion models.
Specifically, we organize this review according to their three key components,
namely the forward process, the reverse process, and the sampling procedure.
This allows us to provide a fine-grained perspective of diffusion models,
benefiting future studies in the analysis of individual components, the
applicability of design choices, and the implementation of diffusion models.
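To make the three components concrete, here is a minimal, self-contained sketch in the standard DDPM formulation (linear beta schedule, Gaussian forward kernel, ancestral sampling). It illustrates the component split the survey is organized around, not any specific design from the paper; `eps_model` stands in for a hypothetical trained noise-prediction network, and the schedule constants are common defaults rather than prescribed values.

```python
import numpy as np

# Common DDPM-style defaults (an assumption, not the survey's prescription).
T = 1000
betas = np.linspace(1e-4, 0.02, T)   # noise schedule: a forward-process design choice
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)      # abar_t = prod_{s<=t} alpha_s

def forward_process(x0, t, rng):
    """Forward process: corrupt clean data x0 into x_t in closed form,
    x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps, with eps ~ N(0, I)."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return xt, eps

def sample(eps_model, shape, rng):
    """Sampling procedure: ancestral sampling through the learned reverse
    process; eps_model(x, t) is a hypothetical trained network that
    predicts the noise contained in x at step t."""
    x = rng.standard_normal(shape)   # start the reverse process from pure noise
    for t in range(T - 1, -1, -1):
        eps_hat = eps_model(x, t)
        # Reverse-process mean under the epsilon parameterization.
        mean = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps_hat) / np.sqrt(alphas[t])
        noise = rng.standard_normal(shape) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise   # sigma_t^2 = beta_t, one common choice
    return x
```

Training the reverse process then amounts to regressing `eps_hat` onto the `eps` returned by `forward_process`; each piece above (the schedule, the forward kernel, the reverse parameterization, the sampler) is a component-level design choice of the kind the survey catalogs.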
Related papers
- What Makes a Good Diffusion Planner for Decision Making? [31.743124638746558]
We train and evaluate over 6,000 diffusion models, identifying critical components such as guided sampling, network architecture, action generation, and planning strategy.
We reveal that some design choices that run counter to common practice in prior work on diffusion planning actually lead to better performance.
arXiv Detail & Related papers (2025-03-01T15:31:14Z)
- GUD: Generation with Unified Diffusion [40.64742332352373]
Diffusion generative models transform noise into data by inverting a process that progressively adds noise to data samples.
We develop a unified framework for diffusion generative models with greatly enhanced design freedom.
arXiv Detail & Related papers (2024-10-03T16:51:14Z) - Alignment of Diffusion Models: Fundamentals, Challenges, and Future [28.64041196069495]
Diffusion models have emerged as the leading paradigm in generative modeling, excelling in various applications.
Despite their success, these models often misalign with human intentions, generating outputs that may not match text prompts or possess desired properties.
Inspired by the success of alignment in tuning large language models, recent studies have investigated aligning diffusion models with human expectations and preferences.
arXiv Detail & Related papers (2024-09-11T13:21:32Z) - Diffusion Models in Low-Level Vision: A Survey [82.77962165415153]
Diffusion model-based solutions have been widely acclaimed for their ability to produce samples of superior quality and diversity.
We present three generic diffusion modeling frameworks and explore their correlations with other deep generative models.
We summarize extended diffusion models applied in other tasks, including medical, remote sensing, and video scenarios.
arXiv Detail & Related papers (2024-06-17T01:49:27Z) - Guided Diffusion from Self-Supervised Diffusion Features [49.78673164423208]
Guidance serves as a key concept in diffusion models, yet its effectiveness is often limited by the need for extra data annotation or pretraining.
We propose a framework to extract guidance from, and specifically for, diffusion models.
arXiv Detail & Related papers (2023-12-14T11:19:11Z) - Diff-Instruct: A Universal Approach for Transferring Knowledge From
Pre-trained Diffusion Models [77.83923746319498]
We propose a framework called Diff-Instruct to instruct the training of arbitrary generative models.
We show that Diff-Instruct results in state-of-the-art single-step diffusion-based models.
Experiments on refining GAN models show that Diff-Instruct consistently improves their pre-trained generators.
arXiv Detail & Related papers (2023-05-29T04:22:57Z)
- Reduce, Reuse, Recycle: Compositional Generation with Energy-Based Diffusion Models and MCMC [102.64648158034568]
Diffusion models have quickly become the prevailing approach to generative modeling in many domains.
We propose an energy-based parameterization of diffusion models that enables the use of new compositional operators.
We find that the resulting MCMC-based samplers lead to notable improvements in compositional generation across a wide set of problems; a minimal sketch of the underlying score-composition idea appears after this list.
arXiv Detail & Related papers (2023-02-22T18:48:46Z)
- Diffusion Models in Vision: A Survey [80.82832715884597]
A diffusion model is a deep generative model that is based on two stages, a forward diffusion stage and a reverse diffusion stage.
Diffusion models are widely appreciated for the quality and diversity of the generated samples, despite their known computational burdens.
arXiv Detail & Related papers (2022-09-10T22:00:30Z)
- A Survey on Generative Diffusion Model [75.93774014861978]
Diffusion models are an emerging class of deep generative models.
They have certain limitations, including a time-consuming iterative generation process and confinement to high-dimensional Euclidean space.
This survey presents a plethora of advanced techniques aimed at enhancing diffusion models.
arXiv Detail & Related papers (2022-09-06T16:56:21Z)
- Diffusion Models: A Comprehensive Survey of Methods and Applications [10.557289965753437]
Diffusion models are a class of deep generative models that have shown impressive results on various tasks and rest on a dense theoretical foundation.
Recent studies have shown great enthusiasm for improving the performance of diffusion models.
arXiv Detail & Related papers (2022-09-02T02:59:10Z)
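As noted in the Reduce, Reuse, Recycle entry above, energy-based parameterizations make diffusion models composable. Below is a hedged sketch of the basic idea, assuming a classifier-free-guidance-style noise predictor `eps_model(x, t, c)` (a hypothetical interface): under a product-of-experts view, the score of a conjunction of conditions can be approximated by summing per-condition score offsets. The paper's MCMC correction steps, which refine this approximation, are omitted here.

```python
import numpy as np

def composed_eps(eps_model, x, t, conditions, weights=None):
    """Approximate the noise prediction for the conjunction of several
    conditions c_1..c_k by summing weighted conditional offsets:
        eps_uncond + sum_i w_i * (eps(x, t, c_i) - eps_uncond).
    This is the plain score-sum approximation; Reduce, Reuse, Recycle
    additionally runs MCMC steps to correct the composed samples."""
    if weights is None:
        weights = [1.0] * len(conditions)
    eps_uncond = eps_model(x, t, None)              # unconditional prediction
    out = np.array(eps_uncond, copy=True)
    for c, w in zip(conditions, weights):
        out += w * (eps_model(x, t, c) - eps_uncond)
    return out
```

Plugging `composed_eps` into an ancestral sampling loop like the one sketched after the abstract yields a basic compositional sampler; the paper's new operators correspond to different ways of combining the per-condition energies.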