ReDi: Efficient Learning-Free Diffusion Inference via Trajectory Retrieval
- URL: http://arxiv.org/abs/2302.02285v2
- Date: Wed, 25 Oct 2023 17:24:18 GMT
- Title: ReDi: Efficient Learning-Free Diffusion Inference via Trajectory Retrieval
- Authors: Kexun Zhang, Xianjun Yang, William Yang Wang, Lei Li
- Abstract summary: ReDi is a learning-free, retrieval-based diffusion sampling framework.
We show that ReDi achieves a 2x inference speedup.
- Score: 68.7008281316644
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Diffusion models show promising generation capability for a variety of data.
Despite their high generation quality, the inference for diffusion models is
still time-consuming due to the numerous sampling iterations required. To
accelerate the inference, we propose ReDi, a simple yet learning-free
Retrieval-based Diffusion sampling framework. From a precomputed knowledge
base, ReDi retrieves a trajectory similar to the partially generated trajectory
at an early stage of generation, skips a large portion of intermediate steps,
and continues sampling from a later step in the retrieved trajectory. We
theoretically prove that the generation performance of ReDi is guaranteed. Our
experiments demonstrate that ReDi yields a 2x inference speedup. Furthermore,
ReDi generalizes well to zero-shot cross-domain image generation tasks such as
image stylization.
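The retrieve-and-skip procedure described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's released implementation: the knowledge base, trajectory shapes, step counts, and the stand-in `denoise_step` function are all hypothetical.

```python
import numpy as np

# Hypothetical knowledge base: N precomputed sampling trajectories,
# each a sequence of T latent states of dimension D.
rng = np.random.default_rng(0)
N, T, D = 100, 50, 8
knowledge_base = rng.normal(size=(N, T, D))

def denoise_step(x, t):
    """Stand-in for one reverse-diffusion step (a real sampler would
    invoke the diffusion model here)."""
    return 0.99 * x + 0.01 * np.tanh(x)

def redi_sample(x_T, k=5, skip_to=40):
    """Sketch of ReDi: run k early steps, retrieve the stored trajectory
    whose first k states are nearest, then resume sampling from a later
    step of that trajectory, skipping the steps in between."""
    # 1. Generate a partial trajectory with the expensive sampler.
    partial = [x_T]
    x = x_T
    for t in range(1, k):
        x = denoise_step(x, t)
        partial.append(x)
    partial = np.stack(partial)  # shape (k, D)

    # 2. Retrieve the nearest neighbor by distance over the first k states.
    dists = np.linalg.norm(knowledge_base[:, :k] - partial, axis=(1, 2))
    nearest = int(np.argmin(dists))

    # 3. Skip intermediate steps: jump to a later state of the retrieved
    #    trajectory and finish sampling from there.
    x = knowledge_base[nearest, skip_to]
    for t in range(skip_to + 1, T):
        x = denoise_step(x, t)
    return x, nearest

sample, idx = redi_sample(rng.normal(size=D))
print(sample.shape, idx)
```

With these illustrative numbers, the sampler calls `denoise_step` only (k - 1) + (T - skip_to - 1) = 13 times instead of 49, which is where the learning-free speedup comes from.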
Related papers
- Fast Sampling Through The Reuse Of Attention Maps In Diffusion Models [11.257468339231362]
Text-to-image diffusion models have demonstrated unprecedented capabilities for flexible and realistic image synthesis.
These models rely on a time-consuming sampling procedure, which has motivated attempts to reduce their latency.
Our approach seeks to reduce latency directly, without any retraining, fine-tuning, or knowledge distillation.
We empirically compare attention-map reuse strategies with few-step sampling procedures of comparable latency, finding that reuse generates images closer to those produced by the original high-latency diffusion model.
arXiv Detail & Related papers (2023-12-13T17:05:37Z)
- SinSR: Diffusion-Based Image Super-Resolution in a Single Step [119.18813219518042]
Super-resolution (SR) methods based on diffusion models exhibit promising results, but their practical application is hindered by the substantial number of required inference steps.
We propose a simple yet effective method for achieving single-step SR generation, named SinSR.
arXiv Detail & Related papers (2023-11-23T16:21:29Z)
- DiffuSeq-v2: Bridging Discrete and Continuous Text Spaces for Accelerated Seq2Seq Diffusion Models [58.450152413700586]
We introduce a soft absorbing state that facilitates the diffusion model in learning to reconstruct discrete mutations based on the underlying Gaussian space.
We employ state-of-the-art ODE solvers within the continuous space to expedite the sampling process.
Our proposed method effectively accelerates the training convergence by 4x and generates samples of similar quality 800x faster.
arXiv Detail & Related papers (2023-10-09T15:29:10Z)
- Reflected Diffusion Models [93.26107023470979]
We present Reflected Diffusion Models, which reverse a reflected differential equation evolving on the support of the data.
Our approach learns the score function through a generalized score matching loss and extends key components of standard diffusion models.
arXiv Detail & Related papers (2023-04-10T17:54:38Z)
- Exploring Continual Learning of Diffusion Models [24.061072903897664]
We evaluate the continual learning (CL) properties of diffusion models.
We provide insights into the dynamics of forgetting, which exhibit diverse behavior across diffusion timesteps.
arXiv Detail & Related papers (2023-03-27T15:52:14Z)
- Fast Sampling of Diffusion Models via Operator Learning [74.37531458470086]
We use neural operators, an efficient method to solve the probability flow differential equations, to accelerate the sampling process of diffusion models.
Compared to other fast sampling methods that have a sequential nature, we are the first to propose a parallel decoding method.
We show our method achieves state-of-the-art FID of 3.78 for CIFAR-10 and 7.83 for ImageNet-64 in the one-model-evaluation setting.
arXiv Detail & Related papers (2022-11-24T07:30:27Z)
- Towards performant and reliable undersampled MR reconstruction via diffusion model sampling [67.73698021297022]
DiffuseRecon is a novel diffusion model-based MR reconstruction method.
It guides the generation process based on the observed signals.
It does not require additional training on specific acceleration factors.
arXiv Detail & Related papers (2022-03-08T02:25:38Z)
- Come-Closer-Diffuse-Faster: Accelerating Conditional Diffusion Models for Inverse Problems through Stochastic Contraction [31.61199061999173]
Diffusion models have a critical downside: they are inherently slow to sample from, needing a few thousand iterative steps to generate images from pure Gaussian noise.
We show that starting from Gaussian noise is unnecessary. Instead, starting from a single forward diffusion with better initialization significantly reduces the number of sampling steps in the reverse conditional diffusion.
The new sampling strategy, dubbed ComeCloser-DiffuseFaster (CCDF), also reveals new insight into how existing feedforward neural-network approaches for inverse problems can be synergistically combined with diffusion models.
arXiv Detail & Related papers (2021-12-09T04:28:41Z)
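The CCDF idea above (start the reverse process from a forward-diffused rough reconstruction rather than pure noise) can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the noise schedule, `reverse_step` stand-in, and step counts are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(1)
D, T = 16, 1000
betas = np.linspace(1e-4, 0.02, T)      # a common linear noise schedule
alpha_bars = np.cumprod(1.0 - betas)

def reverse_step(x, t):
    """Stand-in for one conditional reverse-diffusion step; a real
    implementation would apply the score network plus data consistency."""
    return x / np.sqrt(1.0 - betas[t])

def ccdf_sample(x_init, t0):
    """Sketch of CCDF: instead of starting from pure noise at t = T,
    forward-diffuse a rough reconstruction to an intermediate t0
    (closed form), then run only t0 reverse steps."""
    noise = rng.normal(size=x_init.shape)
    x = np.sqrt(alpha_bars[t0]) * x_init + np.sqrt(1 - alpha_bars[t0]) * noise
    for t in range(t0, -1, -1):
        x = reverse_step(x, t)
    return x

rough = rng.normal(size=D)         # e.g. a feedforward network's output
out = ccdf_sample(rough, t0=100)   # ~100 reverse steps instead of 1000
print(out.shape)
```

The key design point is that the forward diffusion to `t0` has a closed form, so the better initialization costs a single step, while the reverse chain shrinks from T steps to t0.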
This list is automatically generated from the titles and abstracts of the papers in this site.