Fast ODE-based Sampling for Diffusion Models in Around 5 Steps
- URL: http://arxiv.org/abs/2312.00094v3
- Date: Thu, 26 Sep 2024 23:14:27 GMT
- Title: Fast ODE-based Sampling for Diffusion Models in Around 5 Steps
- Authors: Zhenyu Zhou, Defang Chen, Can Wang, Chun Chen
- Abstract summary: We propose the Approximate MEan-Direction Solver (AMED-Solver), which eliminates truncation errors by directly learning the mean direction for fast sampling.
Our method can be easily used as a plugin to further improve existing ODE-based samplers.
- Score: 17.500594480727617
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sampling from diffusion models can be treated as solving the corresponding ordinary differential equations (ODEs), with the aim of obtaining an accurate solution with as few function evaluations (NFE) as possible. Recently, various fast samplers utilizing higher-order ODE solvers have emerged and achieved better performance than the initial first-order one. However, these numerical methods inherently introduce approximation errors, which significantly degrade sample quality at extremely small NFE (e.g., around 5). In contrast, based on the geometric observation that each sampling trajectory almost lies in a two-dimensional subspace embedded in the ambient space, we propose the Approximate MEan-Direction Solver (AMED-Solver), which eliminates truncation errors by directly learning the mean direction for fast diffusion sampling. Moreover, our method can easily be used as a plugin to further improve existing ODE-based samplers. Extensive experiments on image synthesis at resolutions ranging from 32 to 512 demonstrate the effectiveness of our method. With only 5 NFE, we achieve 6.61 FID on CIFAR-10, 10.74 FID on ImageNet 64$\times$64, and 13.20 FID on LSUN Bedroom. Our code is available at https://github.com/zju-pi/diff-sampler.
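The geometric idea behind AMED-Solver can be illustrated with a short sketch. The code below is a minimal, self-contained approximation (not the authors' implementation; see the repository above for that): the small network that AMED trains to predict the intermediate evaluation time is replaced here by a hypothetical geometric-mean heuristic, and the diffusion model by a placeholder noise predictor under a variance-exploding parameterization.

```python
# Minimal sketch of an AMED-style step (illustrative; not the official code).
import numpy as np

def eps(x, t):
    """Placeholder noise predictor; a real sampler would query the diffusion model here."""
    return x / max(t, 1e-8)

def predict_intermediate(t_cur, t_next):
    """Stand-in for AMED's small learned predictor of the intermediate time s;
    a geometric mean is used purely for illustration."""
    return (t_cur * t_next) ** 0.5

def amed_style_step(x, t_cur, t_next):
    """Take one large step from t_cur to t_next with a single extra evaluation at s.

    The paper's geometric observation is that each sampling trajectory nearly lies
    in a 2-D subspace, so the direction evaluated at a well-chosen intermediate time
    approximates the mean direction over the whole interval.
    """
    s = predict_intermediate(t_cur, t_next)
    x_s = x + (s - t_cur) * eps(x, t_cur)   # Euler step to the intermediate time s
    d_mean = eps(x_s, s)                    # direction at s stands in for the mean direction
    return x + (t_next - t_cur) * d_mean    # jump the full interval along it

# Usage on a coarse 5-interval schedule (two evaluations per interval).
ts = [80.0, 20.0, 5.0, 1.0, 0.2, 0.002]
x = np.random.randn(3, 32, 32) * ts[0]
for t_cur, t_next in zip(ts[:-1], ts[1:]):
    x = amed_style_step(x, t_cur, t_next)
```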
Related papers
- Simple and Fast Distillation of Diffusion Models [39.79747569096888]
We propose Simple and Fast Distillation (SFD) of diffusion models, which simplifies the paradigm used in existing methods.
SFD achieves 4.53 FID (NFE=2) on CIFAR-10 with only 0.64 hours of fine-tuning on a single NVIDIA A100 GPU.
arXiv Detail & Related papers (2024-09-29T12:13:06Z) - DC-Solver: Improving Predictor-Corrector Diffusion Sampler via Dynamic Compensation [68.55191764622525]
Diffusion probabilistic models (DPMs) have shown remarkable performance in visual synthesis but are computationally expensive due to the need for multiple evaluations during sampling.
Recent predictor-corrector diffusion samplers have significantly reduced the required number of evaluations, but inherently suffer from a misalignment issue.
We introduce a new fast DPM sampler called DC-Solver, which leverages dynamic compensation to mitigate the misalignment.
arXiv Detail & Related papers (2024-09-05T17:59:46Z) - Fast Samplers for Inverse Problems in Iterative Refinement Models [19.099632445326826]
We propose a plug-and-play framework for constructing efficient samplers for inverse problems.
Our method can generate high-quality samples in as few as 5 conditional sampling steps and outperforms competing baselines requiring 20-1000 steps.
arXiv Detail & Related papers (2024-05-27T21:50:16Z) - Accelerating Diffusion Sampling with Optimized Time Steps [69.21208434350567]
Diffusion probabilistic models (DPMs) have shown remarkable performance in high-resolution image synthesis.
However, their sampling efficiency still leaves much to be desired due to the typically large number of sampling steps.
Recent advancements in high-order numerical ODE solvers for DPMs have enabled the generation of high-quality images with much fewer sampling steps.
arXiv Detail & Related papers (2024-02-27T10:13:30Z) - Sampler Scheduler for Diffusion Models [0.0]
Diffusion models (DMs) achieve high-quality generative performance.
Currently, there is a trade-off among the samplers used for diffusion-based generative models.
We propose using different samplers (ODE/SDE) on different steps of the same sampling process; a minimal illustrative sketch of this idea appears after this list.
arXiv Detail & Related papers (2023-11-12T13:35:25Z) - DPM-Solver-v3: Improved Diffusion ODE Solver with Empirical Model Statistics [23.030972042695275]
Diffusion probabilistic models (DPMs) have exhibited excellent performance for high-fidelity image generation while suffering from inefficient sampling.
Recent works accelerate the sampling procedure by proposing fast ODE solvers that leverage the specific ODE form of DPMs.
We propose a novel formulation towards the optimal parameterization during sampling that minimizes the first-order discretization error.
arXiv Detail & Related papers (2023-10-20T04:23:12Z) - Restart Sampling for Improving Generative Processes [30.745245429072735]
ODE-based samplers are fast but plateau in performance, while SDE-based samplers deliver higher sample quality at the cost of increased sampling time.
We propose a novel sampling algorithm called Restart in order to better balance discretization errors and contraction.
arXiv Detail & Related papers (2023-06-26T17:48:25Z) - Parallel Sampling of Diffusion Models [76.3124029406809]
Diffusion models are powerful generative models but suffer from slow sampling.
We present ParaDiGMS, a novel method to accelerate the sampling of pretrained diffusion models by denoising multiple steps in parallel.
arXiv Detail & Related papers (2023-05-25T17:59:42Z) - Fast Sampling of Diffusion Models via Operator Learning [74.37531458470086]
We use neural operators, an efficient method for solving the probability flow ODE, to accelerate the sampling process of diffusion models.
Compared to other fast sampling methods that have a sequential nature, we are the first to propose a parallel decoding method.
We show our method achieves state-of-the-art FID of 3.78 for CIFAR-10 and 7.83 for ImageNet-64 in the one-model-evaluation setting.
arXiv Detail & Related papers (2022-11-24T07:30:27Z) - On Distillation of Guided Diffusion Models [94.95228078141626]
We propose an approach to distilling classifier-free guided diffusion models into models that are fast to sample from.
For standard diffusion models trained in pixel space, our approach is able to generate images visually comparable to those of the original model.
For diffusion models trained in latent space (e.g., Stable Diffusion), our approach is able to generate high-fidelity images using as few as 1 to 4 denoising steps.
arXiv Detail & Related papers (2022-10-06T18:03:56Z) - Pseudo Numerical Methods for Diffusion Models on Manifolds [77.40343577960712]
Denoising Diffusion Probabilistic Models (DDPMs) can generate high-quality samples such as image and audio samples.
DDPMs require hundreds to thousands of iterations to produce final samples.
We propose pseudo numerical methods for diffusion models (PNDMs).
PNDMs can generate higher-quality synthetic images with only 50 steps, compared with 1000-step DDIMs (20x speedup).
arXiv Detail & Related papers (2022-02-20T10:37:52Z)
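As noted in the Sampler Scheduler entry above, mixing samplers within a single trajectory is straightforward to illustrate. The sketch below is a hedged example under assumed step rules (an Euler/DDIM-like ODE update and an ancestral-style SDE update on a variance-exploding schedule) with a placeholder noise predictor; it is not that paper's algorithm.

```python
# Illustrative sketch of per-step sampler scheduling (ODE vs. SDE).
# The step rules, schedule, and placeholder predictor are assumptions for illustration only.
import numpy as np

def eps(x, t):
    """Placeholder noise predictor; a real sampler would query the diffusion model."""
    return x / max(t, 1e-8)

def ode_step(x, t_cur, t_next):
    """Deterministic Euler/DDIM-like update of the probability-flow ODE (VE schedule)."""
    return x + (t_next - t_cur) * eps(x, t_cur)

def sde_step(x, t_cur, t_next, rng):
    """Ancestral-style stochastic update: estimate x0, then re-noise to level t_next."""
    x0_hat = x - t_cur * eps(x, t_cur)
    return x0_hat + t_next * rng.standard_normal(x.shape)

def scheduled_sample(ts, shape, schedule, rng):
    """Run one trajectory, choosing an ODE or SDE update at each step via `schedule`."""
    x = rng.standard_normal(shape) * ts[0]
    for (t_cur, t_next), kind in zip(zip(ts[:-1], ts[1:]), schedule):
        x = sde_step(x, t_cur, t_next, rng) if kind == "sde" else ode_step(x, t_cur, t_next)
    return x

# Example: stochastic updates early (high noise levels), deterministic updates late.
rng = np.random.default_rng(0)
ts = [80.0, 20.0, 5.0, 1.0, 0.2, 0.002]
sample = scheduled_sample(ts, (3, 32, 32), ["sde", "sde", "ode", "ode", "ode"], rng)
```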