Bi-Noising Diffusion: Towards Conditional Diffusion Models with
Generative Restoration Priors
- URL: http://arxiv.org/abs/2212.07352v1
- Date: Wed, 14 Dec 2022 17:26:35 GMT
- Title: Bi-Noising Diffusion: Towards Conditional Diffusion Models with
Generative Restoration Priors
- Authors: Kangfu Mei, Nithin Gopalakrishnan Nair, Vishal M. Patel
- Abstract summary: We introduce a new method that brings predicted samples to the training data manifold using a pretrained unconditional diffusion model.
We perform comprehensive experiments to demonstrate the effectiveness of our approach on super-resolution, colorization, turbulence removal, and image-deraining tasks.
- Score: 64.24948495708337
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Conditional diffusion probabilistic models can model the distribution of
natural images and can generate diverse and realistic samples based on given
conditions. However, their results are often unrealistic, with observable
color shifts and texture artifacts. We believe this issue results from the
divergence between the probability distribution learned by the model and the
distribution of natural images, a divergence that the delicate conditions
gradually enlarge at each sampling timestep. To address this issue, we
introduce a new method that brings the predicted samples to the training data
manifold using a pretrained unconditional diffusion model. The unconditional
model acts as a regularizer and reduces the divergence introduced by the
conditional model at each sampling step. We perform comprehensive experiments
to demonstrate the effectiveness of our approach on super-resolution,
colorization, turbulence removal, and image-deraining tasks. The improvements
obtained by our method suggest that the priors can be incorporated as a general
plugin for improving conditional diffusion models.
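To make the mechanism concrete, below is a minimal sketch of what such a plug-in regularization step could look like, assuming epsilon-prediction networks and a standard DDPM noise schedule; every name (cond_model, uncond_model, alpha_bar) is illustrative, and the simplified transition at the end stands in for the paper's exact update.

    import torch

    def bi_noising_step(x_t, t, y, cond_model, uncond_model, alpha_bar):
        # One hypothetical sampling step: the conditional model proposes a
        # denoised estimate, then a pretrained unconditional prior re-predicts
        # it, pulling the estimate back toward the natural-image manifold.
        a_t = alpha_bar[t]                        # cumulative schedule, 1-D tensor
        eps_c = cond_model(x_t, t, y)             # conditional noise prediction
        x0 = (x_t - (1 - a_t).sqrt() * eps_c) / a_t.sqrt()

        # Re-noise the estimate to level t, then denoise it with the
        # unconditional model: the regularization the abstract describes.
        z = torch.randn_like(x0)
        x_renoised = a_t.sqrt() * x0 + (1 - a_t).sqrt() * z
        eps_u = uncond_model(x_renoised, t)
        x0 = (x_renoised - (1 - a_t).sqrt() * eps_u) / a_t.sqrt()

        # Simplified stochastic move to the previous noise level (stands in
        # for the standard DDPM posterior step).
        a_prev = alpha_bar[t - 1] if t > 0 else torch.ones_like(a_t)
        return a_prev.sqrt() * x0 + (1 - a_prev).sqrt() * torch.randn_like(x0)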
Related papers
- Provable Statistical Rates for Consistency Diffusion Models [87.28777947976573]
Despite their state-of-the-art performance, diffusion models are known for slow sample generation due to the large number of steps involved.
This paper contributes towards the first statistical theory for consistency models, formulating their training as a distribution discrepancy minimization problem.
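As general background on what a consistency model is (the standard definition, not this paper's contribution): it learns a single map f_\theta from any point of a probability-flow ODE trajectory back to the trajectory's origin, enabling one- or few-step generation. The defining self-consistency property is

    f_\theta(x_t, t) = f_\theta(x_{t'}, t') \quad \text{for all } t, t' \text{ on the same trajectory},

with the boundary condition f_\theta(x_\epsilon, \epsilon) = x_\epsilon.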
arXiv Detail & Related papers (2024-06-23T20:34:18Z)
- Membership Inference on Text-to-Image Diffusion Models via Conditional Likelihood Discrepancy [36.156856772794065]
We identify a conditional overfitting phenomenon in text-to-image diffusion models.
Our method significantly outperforms previous methods across various data and dataset scales.
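A hedged sketch of what a conditional likelihood discrepancy score could look like (hypothetical notation inferred from the title, not the paper's definition): membership is scored by how much better the model explains an image with its paired text than without it,

    \mathrm{CLD}(x, c) = \log p_\theta(x \mid c) - \log p_\theta(x),

with larger values suggesting the (image, caption) pair was in the training set.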
arXiv Detail & Related papers (2024-05-23T17:09:51Z)
- Unveil Conditional Diffusion Models with Classifier-free Guidance: A Sharp Statistical Theory [87.00653989457834]
Conditional diffusion models serve as the foundation of modern image synthesis and find extensive application in fields like computational biology and reinforcement learning.
Despite this empirical success, the theory of conditional diffusion models is largely missing.
This paper bridges the gap by presenting a sharp statistical theory of distribution estimation using conditional diffusion models.
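As background on the guidance scheme named in the title (standard classifier-free guidance, not this paper's new material): the sampler mixes conditional and unconditional noise predictions,

    \hat{\epsilon}_t = (1 + w)\,\epsilon_\theta(x_t, c) - w\,\epsilon_\theta(x_t, \varnothing),

where w \ge 0 is the guidance weight and \varnothing denotes the dropped condition; w = 0 recovers plain conditional sampling.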
arXiv Detail & Related papers (2024-03-18T17:08:24Z)
- Fair Sampling in Diffusion Models through Switching Mechanism [5.560136885815622]
We propose a fairness-aware sampling method, called attribute switching, for diffusion models.
We mathematically prove and experimentally demonstrate the effectiveness of the proposed method on two key aspects.
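A minimal sketch of what an attribute-switching sampler might look like, assuming a single switch point tau and a per-step sampler step_fn; all names and the switch rule itself are illustrative assumptions, not the paper's exact mechanism:

    def fair_sample(model, x_T, timesteps, attr_a, attr_b, tau, step_fn):
        # Condition on attr_a while t >= tau (coarse structure forms), then
        # switch to attr_b for the remaining low-noise steps (fine details).
        x = x_T
        for t in sorted(timesteps, reverse=True):
            attr = attr_a if t >= tau else attr_b  # hypothetical switch rule
            x = step_fn(model, x, t, attr)         # one reverse-diffusion step
        return x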
arXiv Detail & Related papers (2024-01-06T06:55:26Z)
- A Variational Perspective on Solving Inverse Problems with Diffusion Models [101.831766524264]
Inverse tasks can be formulated as inferring a posterior distribution over data.
This is however challenging in diffusion models since the nonlinear and iterative nature of the diffusion process renders the posterior intractable.
We propose a variational approach that by design seeks to approximate the true posterior distribution.
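Concretely, the posterior in question follows from Bayes' rule: for an observation y of clean data x (e.g. y = A(x) + n for a degradation operator A and noise n),

    p(x \mid y) \propto p(y \mid x)\, p(x),

where the diffusion model supplies the prior p(x); it is the nonlinear, iterative reverse process that makes exact posterior sampling intractable.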
arXiv Detail & Related papers (2023-05-07T23:00:47Z)
- Enhanced Controllability of Diffusion Models via Feature Disentanglement and Realism-Enhanced Sampling Methods [27.014858633903867]
We present a training framework for feature disentanglement of diffusion models (FDiff).
We further propose two sampling methods that boost the realism of our diffusion models and enhance controllability.
arXiv Detail & Related papers (2023-02-28T07:43:00Z)
- ShiftDDPMs: Exploring Conditional Diffusion Models by Shifting Diffusion Trajectories [144.03939123870416]
We propose a novel conditional diffusion model by introducing conditions into the forward process.
We use extra latent space to allocate an exclusive diffusion trajectory for each condition based on some shifting rules.
We formulate our method, which we call ShiftDDPMs, and provide a unified point of view on existing related methods.
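One common way to write such a shifted forward process (a generic sketch; the paper's particular shifting rules may differ) adds a condition-dependent offset to the marginal mean,

    q(x_t \mid x_0, c) = \mathcal{N}\big(x_t;\ \sqrt{\bar\alpha_t}\, x_0 + k_t\, E(c),\ (1 - \bar\alpha_t) I\big),

where E(c) embeds the condition into the extra latent space and the schedule k_t determines how far each condition's trajectory is shifted.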
arXiv Detail & Related papers (2023-02-05T12:48:21Z)
- SinDiffusion: Learning a Diffusion Model from a Single Natural Image [159.4285444680301]
We present SinDiffusion, which leverages denoising diffusion models to capture the internal distribution of patches from a single natural image.
It is based on two core designs. First, SinDiffusion is trained with a single model at a single scale instead of multiple models with progressive growing of scales.
Second, we identify that a patch-level receptive field of the diffusion network is crucial and effective for capturing the image's patch statistics.
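To make the receptive-field point concrete, a small helper (illustrative, not the paper's code) shows why a shallow, stride-free convolutional stack sees only patch-sized neighborhoods:

    def receptive_field(num_convs, kernel=3):
        # Receptive field of stacked stride-1 convolutions: it grows by
        # (kernel - 1) per layer, so a shallow stack stays patch-sized.
        return 1 + num_convs * (kernel - 1)

    print(receptive_field(8))  # 17: a 17x17 patch, far smaller than the image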
arXiv Detail & Related papers (2022-11-22T18:00:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.