Exploring the Design Space of Diffusion Bridge Models via Stochasticity Control
- URL: http://arxiv.org/abs/2410.21553v1
- Date: Mon, 28 Oct 2024 21:30:59 GMT
- Title: Exploring the Design Space of Diffusion Bridge Models via Stochasticity Control
- Authors: Shaorong Zhang, Yuanbin Cheng, Xianghao Kong, Greg Ver Steeg
- Abstract summary: Diffusion bridge models facilitate image-to-image (I2I) translation by connecting two distributions.
Existing methods overlook the impact of the noise in the sampling SDEs, the transition kernel, and the base distribution on sampling efficiency, image quality, and diversity.
We propose a novel theoretical framework that extends the design space of diffusion bridges, and provides strategies to mitigate singularities during both training and sampling.
- Score: 17.464174698465918
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Diffusion bridge models effectively facilitate image-to-image (I2I) translation by connecting two distributions. However, existing methods overlook the impact of the noise in the sampling SDEs, the transition kernel, and the base distribution on sampling efficiency, image quality, and diversity. To address this gap, we propose the Stochasticity-controlled Diffusion Bridge (SDB), a novel theoretical framework that extends the design space of diffusion bridges and provides strategies to mitigate singularities during both training and sampling. By controlling stochasticity in the sampling SDEs, our sampler achieves speeds up to 5 times faster than the baseline, while also producing lower FID scores. After training, SDB sets new benchmarks in image quality and sampling efficiency by managing stochasticity within the transition kernel. Furthermore, introducing stochasticity into the base distribution significantly improves image diversity, as quantified by a newly introduced metric.
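The central mechanism described above, tuning how much noise the sampling SDE injects at each step, can be pictured with a minimal sketch. The snippet below is not the SDB algorithm itself: it assumes a hypothetical pretrained drift function `bridge_drift(x, t)` and shows a generic Euler-Maruyama bridge sampler with a stochasticity knob `lam` that interpolates between a deterministic, ODE-like update (`lam = 0`) and a fully stochastic SDE update (`lam = 1`).

```python
import numpy as np

def sample_bridge(x_source, bridge_drift, n_steps=20, lam=0.5, sigma=1.0, seed=0):
    """Hedged sketch of stochasticity-controlled bridge sampling (not the SDB sampler).

    x_source     -- source/corrupted image as a flat NumPy array (start of the bridge)
    bridge_drift -- hypothetical callable drift(x, t) standing in for the learned bridge drift
    lam          -- stochasticity level in [0, 1]; 0 is deterministic, 1 is a full SDE step
    sigma        -- base diffusion coefficient of the assumed SDE
    """
    rng = np.random.default_rng(seed)
    x = x_source.copy()
    ts = np.linspace(1.0, 0.0, n_steps + 1)   # integrate from t=1 (source) to t=0 (target)
    for t_cur, t_next in zip(ts[:-1], ts[1:]):
        dt = t_next - t_cur                    # negative: reverse-time step
        # Deterministic part; in a consistent sampler the drift would absorb the
        # correction for injecting less noise than the reference SDE.
        x = x + bridge_drift(x, t_cur) * dt
        # Stochastic part, scaled by the stochasticity knob.
        if lam > 0:
            x = x + lam * sigma * np.sqrt(-dt) * rng.standard_normal(x.shape)
    return x
```

In this reading, lowering `lam` trades sample diversity for fewer, more deterministic steps, which is one way to picture the speed/quality trade-off the abstract reports.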
Related papers
- Acoustic Waveform Inversion with Image-to-Image Schrödinger Bridges [0.0]
We introduce a conditional Image-to-Image Schrödinger Bridge (cI2SB) framework to generate high-resolution samples. Our experiments demonstrate that the proposed solution outperforms our reimplementation of a conditional diffusion model.
arXiv Detail & Related papers (2025-06-18T10:55:26Z) - PQD: Post-training Quantization for Efficient Diffusion Models [4.809939957401427]
We propose a novel post-training quantization method for diffusion models (PQD).
We show that our proposed method is able to directly quantize full-precision diffusion models into 8-bit or 4-bit models while maintaining comparable performance in a training-free manner.
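For readers unfamiliar with post-training quantization in general, the sketch below shows the simplest symmetric, per-tensor weight quantization in NumPy. It is a generic baseline for intuition only, not the PQD procedure; PQD's diffusion-specific calibration is not represented here.

```python
import numpy as np

def quantize_symmetric(w, n_bits=8):
    """Generic symmetric per-tensor post-training quantization (illustrative only)."""
    qmax = 2 ** (n_bits - 1) - 1                        # 127 for 8-bit, 7 for 4-bit
    scale = float(np.max(np.abs(w))) / qmax             # one scale per tensor
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

# Quantize a toy weight matrix and check the reconstruction error.
w = np.random.default_rng(0).normal(size=(256, 256)).astype(np.float32)
q, s = quantize_symmetric(w, n_bits=4)
print("mean squared quantization error:", float(np.mean((w - dequantize(q, s)) ** 2)))
```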
arXiv Detail & Related papers (2024-12-30T19:55:59Z) - DPBridge: Latent Diffusion Bridge for Dense Prediction [49.1574468325115]
We propose DPBridge, a generative framework that establishes direct mapping between input RGB images and dense signal maps based on a tractable bridge process.
Experiments show that DPBridge achieves competitive performance compared to both feed-forward and diffusion-based approaches.
arXiv Detail & Related papers (2024-12-29T15:50:34Z) - An Ordinary Differential Equation Sampler with Stochastic Start for Diffusion Bridge Models [13.00429687431982]
Diffusion bridge models initialize the generative process from corrupted images instead of pure Gaussian noise.
Existing diffusion bridge models often rely on Stochastic Differential Equation (SDE) samplers, which result in slower inference.
We propose a high-order ODE sampler with a stochastic start for diffusion bridge models.
Our method is fully compatible with pretrained diffusion bridge models and requires no additional training.
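The two ingredients named above, a stochastic start and a high-order ODE solver, can be sketched generically as follows. Here `bridge_velocity(x, t)` is a hypothetical stand-in for the probability-flow field of a pretrained bridge model, and the Heun (second-order) update is just one common high-order choice; neither is claimed to match the paper's actual solver.

```python
import numpy as np

def ode_sample_stochastic_start(y, bridge_velocity, n_steps=10, start_noise=0.1, seed=0):
    """Sketch: noise-perturbed start, then second-order (Heun) ODE integration.

    y               -- corrupted/source image as a flat NumPy array
    bridge_velocity -- hypothetical callable v(x, t) giving a probability-flow velocity
    start_noise     -- std of the Gaussian jitter applied before integration
    """
    rng = np.random.default_rng(seed)
    x = y + start_noise * rng.standard_normal(y.shape)   # stochastic start
    ts = np.linspace(1.0, 0.0, n_steps + 1)
    for t_cur, t_next in zip(ts[:-1], ts[1:]):
        dt = t_next - t_cur
        v1 = bridge_velocity(x, t_cur)                    # Euler predictor
        v2 = bridge_velocity(x + v1 * dt, t_next)         # corrector evaluation
        x = x + 0.5 * (v1 + v2) * dt                      # Heun (2nd-order) update
    return x
```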
arXiv Detail & Related papers (2024-12-28T03:32:26Z) - Zigzag Diffusion Sampling: Diffusion Models Can Self-Improve via Self-Reflection [28.82743020243849]
Existing text-to-image diffusion models often fail to maintain high image quality and high prompt-image alignment for challenging prompts.
We propose diffusion self-reflection that alternately performs denoising and inversion.
We derive Zigzag Diffusion Sampling (Z-Sampling), a novel self-reflection-based diffusion sampling method.
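The alternation of denoising and inversion can be sketched as a loop that revisits each noise interval: one denoising step down, an inversion step back up, and another denoising step down. The snippet below only shows that schedule; `denoise_step` and `invert_step` are hypothetical placeholders (e.g., a DDIM update and its inversion), not the Z-Sampling implementation, which additionally exploits the guidance gap between the two directions.

```python
def zigzag_sample(x, timesteps, denoise_step, invert_step, n_zigzags=1):
    """Schematic zigzag schedule: denoise, partially invert, denoise again.

    timesteps    -- decreasing noise levels, e.g. [1.0, 0.9, ..., 0.0]
    denoise_step -- hypothetical callable (x, t_from, t_to) -> sample at the lower level
    invert_step  -- hypothetical callable (x, t_from, t_to) -> sample pushed back up
    n_zigzags    -- extra down/up round trips per interval (the "self-reflection")
    """
    for t_hi, t_lo in zip(timesteps[:-1], timesteps[1:]):
        x = denoise_step(x, t_hi, t_lo)
        for _ in range(n_zigzags):
            x = invert_step(x, t_lo, t_hi)   # zig: go back to the higher noise level
            x = denoise_step(x, t_hi, t_lo)  # zag: denoise the interval again
    return x
```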
arXiv Detail & Related papers (2024-12-14T16:42:41Z) - Latent Schrodinger Bridge: Prompting Latent Diffusion for Fast Unpaired Image-to-Image Translation [58.19676004192321]
Diffusion models (DMs), which enable both image generation from noise and inversion from data, have inspired powerful unpaired image-to-image (I2I) translation algorithms.
We tackle this problem with Schrodinger Bridges (SBs), which are stochastic differential equations (SDEs) between distributions with minimal transport cost.
Inspired by this observation, we propose Latent Schrodinger Bridges (LSBs) that approximate the SB ODE via pre-trained Stable Diffusion.
We demonstrate that our algorithm successfully conducts competitive I2I translation in the unsupervised setting with only a fraction of the cost required by previous DM-based methods.
arXiv Detail & Related papers (2024-11-22T11:24:14Z) - Learned Reference-based Diffusion Sampling for multi-modal distributions [2.1383136715042417]
We introduce Learned Reference-based Diffusion Sampler (LRDS), a methodology specifically designed to leverage prior knowledge on the location of the target modes.
LRDS proceeds in two steps, first learning a reference diffusion model on samples located in high-density regions of the space.
We experimentally demonstrate that LRDS best exploits prior knowledge on the target distribution compared to competing algorithms on a variety of challenging distributions.
arXiv Detail & Related papers (2024-10-25T10:23:34Z) - Beta Sampling is All You Need: Efficient Image Generation Strategy for Diffusion Models using Stepwise Spectral Analysis [22.02829139522153]
We propose an efficient time step sampling method based on an image spectral analysis of the diffusion process.
Instead of the traditional uniform distribution-based time step sampling, we introduce a Beta distribution-like sampling technique.
Our hypothesis is that certain steps exhibit significant changes in image content, while others contribute minimally.
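One concrete way to read this is to place inference timesteps according to a Beta-shaped profile instead of uniformly, so that more steps fall where image content changes fastest. The sketch below uses Beta quantiles with illustrative shape parameters; the actual schedule and parameters in the paper may differ.

```python
import numpy as np
from scipy.stats import beta as beta_dist

def beta_timesteps(n_steps=20, a=2.0, b=2.0, t_min=0.0, t_max=1.0):
    """Place timesteps by Beta(a, b) quantiles instead of uniform spacing (illustrative)."""
    u = np.linspace(0.0, 1.0, n_steps + 2)[1:-1]        # skip the exact endpoints
    t = beta_dist.ppf(u, a, b)                          # concentrates steps near the Beta mode
    t = t_min + (t_max - t_min) * t
    return np.sort(t)[::-1]                             # decreasing schedule for sampling

print(beta_timesteps(n_steps=10))
```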
arXiv Detail & Related papers (2024-07-16T20:53:06Z) - Diffusion Bridge Implicit Models [25.213664260896103]
Denoising diffusion bridge models (DDBMs) are a powerful variant of diffusion models for interpolating between two arbitrary paired distributions.
We take the first step in fast sampling of DDBMs without extra training, motivated by the well-established recipes in diffusion models.
We induce a novel, simple, and insightful form of ordinary differential equation (ODE) which inspires high-order numerical solvers.
arXiv Detail & Related papers (2024-05-24T19:08:30Z) - Distilling Diffusion Models into Conditional GANs [90.76040478677609]
We distill a complex multistep diffusion model into a single-step conditional GAN student model.
For an efficient regression loss, we propose E-LatentLPIPS, a perceptual loss operating directly in the diffusion model's latent space.
We demonstrate that our one-step generator outperforms cutting-edge one-step diffusion distillation models.
arXiv Detail & Related papers (2024-05-09T17:59:40Z) - IMPUS: Image Morphing with Perceptually-Uniform Sampling Using Diffusion Models [24.382275473592046]
We present a diffusion-based image morphing approach with perceptually-uniform sampling (IMPUS).
IMPUS produces smooth, direct and realistic adaptations given an image pair.
arXiv Detail & Related papers (2023-11-12T10:03:32Z) - Denoising Diffusion Bridge Models [54.87947768074036]
Diffusion models are powerful generative models that map noise to data using stochastic processes.
For many applications such as image editing, the model input comes from a distribution that is not random noise.
In our work, we propose Denoising Diffusion Bridge Models (DDBMs).
arXiv Detail & Related papers (2023-09-29T03:24:24Z) - SDDM: Score-Decomposed Diffusion Models on Manifolds for Unpaired
Image-to-Image Translation [96.11061713135385]
This work presents a new score-decomposed diffusion model to explicitly optimize the tangled distributions during image generation.
We equalize the refinement parts of the score function and energy guidance, which permits multi-objective optimization on the manifold.
SDDM outperforms existing SBDM-based methods with much fewer diffusion steps on several I2I benchmarks.
arXiv Detail & Related papers (2023-08-04T06:21:57Z) - Contrast-augmented Diffusion Model with Fine-grained Sequence Alignment
for Markup-to-Image Generation [15.411325887412413]
This paper proposes a novel model named "Contrast-augmented Diffusion Model with Fine-grained Sequence Alignment" (FSA-CDM).
FSA-CDM introduces contrastive positive/negative samples into the diffusion model to boost performance for markup-to-image generation.
Experiments are conducted on four benchmark datasets from different domains.
arXiv Detail & Related papers (2023-08-02T13:43:03Z) - ResShift: Efficient Diffusion Model for Image Super-resolution by
Residual Shifting [70.83632337581034]
Diffusion-based image super-resolution (SR) methods are mainly limited by the low inference speed.
We propose a novel and efficient diffusion model for SR that significantly reduces the number of diffusion steps.
Our method constructs a Markov chain that transfers between the high-resolution image and the low-resolution image by shifting the residual.
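The residual-shifting chain can be read as a forward process whose mean moves from the high-resolution image toward the low-resolution one by adding a growing fraction of their residual, plus Gaussian noise. The one-step sketch below follows that reading with an illustrative shift level `eta_t` and noise scale `kappa`; the exact kernel and schedule in ResShift may differ.

```python
import numpy as np

def residual_shift_forward(x0_hr, y_lr, eta_t, kappa=1.0, seed=0):
    """Sketch of one residual-shifting forward step (illustrative, not the exact ResShift kernel).

    x0_hr -- high-resolution image (target of the reverse chain)
    y_lr  -- low-resolution image upsampled to the same shape (source)
    eta_t -- shift level in [0, 1]: 0 keeps x0_hr, 1 moves the mean fully onto y_lr
    kappa -- scale of the Gaussian noise added along the chain
    """
    rng = np.random.default_rng(seed)
    residual = y_lr - x0_hr                              # what separates the two images
    mean = x0_hr + eta_t * residual                      # mean shifted part-way toward y_lr
    return mean + kappa * np.sqrt(eta_t) * rng.standard_normal(x0_hr.shape)
```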
arXiv Detail & Related papers (2023-07-23T15:10:02Z) - Variance-Preserving-Based Interpolation Diffusion Models for Speech
Enhancement [53.2171981279647]
We present a framework that encapsulates both the variance-preserving (VP)- and variance-exploding (VE)-based diffusion methods.
To improve performance and ease model training, we analyze the common difficulties encountered in diffusion models.
We evaluate our model against several methods using a public benchmark to showcase the effectiveness of our approach.
arXiv Detail & Related papers (2023-06-14T14:22:22Z) - Protein Design with Guided Discrete Diffusion [67.06148688398677]
A popular approach to protein design is to combine a generative model with a discriminative model for conditional sampling.
We propose diffusioN Optimized Sampling (NOS), a guidance method for discrete diffusion models.
NOS makes it possible to perform design directly in sequence space, circumventing significant limitations of structure-based methods.
arXiv Detail & Related papers (2023-05-31T16:31:24Z) - Solving Diffusion ODEs with Optimal Boundary Conditions for Better Image Super-Resolution [82.50210340928173]
The randomness of diffusion models results in ineffectiveness and instability, making it challenging for users to guarantee the quality of SR results.
We propose a plug-and-play sampling method that has the potential to benefit a series of diffusion-based SR methods.
With fewer steps, the proposed method produces SR results of higher quality than those sampled with randomness by current methods from the same pre-trained diffusion-based SR model.
arXiv Detail & Related papers (2023-05-24T17:09:54Z) - Q-Diffusion: Quantizing Diffusion Models [52.978047249670276]
Post-training quantization (PTQ) is considered a go-to compression method for other tasks, but it does not transfer directly to diffusion models.
We propose a novel PTQ method specifically tailored towards the unique multi-timestep pipeline and model architecture of diffusion models.
We show that our proposed method is able to quantize full-precision unconditional diffusion models into 4-bit while maintaining comparable performance.
arXiv Detail & Related papers (2023-02-08T19:38:59Z) - SinDiffusion: Learning a Diffusion Model from a Single Natural Image [159.4285444680301]
We present SinDiffusion, leveraging denoising diffusion models to capture the internal distribution of patches from a single natural image.
It is based on two core designs. First, SinDiffusion is trained with a single model at a single scale instead of multiple models with progressive growing of scales.
Second, we identify that a patch-level receptive field of the diffusion network is crucial and effective for capturing the image's patch statistics.
arXiv Detail & Related papers (2022-11-22T18:00:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.