Stable Target Field for Reduced Variance Score Estimation in Diffusion
Models
- URL: http://arxiv.org/abs/2302.00670v1
- Date: Wed, 1 Feb 2023 18:57:01 GMT
- Title: Stable Target Field for Reduced Variance Score Estimation in Diffusion
Models
- Authors: Yilun Xu, Shangyuan Tong, Tommi Jaakkola
- Abstract summary: Diffusion models generate samples by reversing a fixed forward diffusion process.
We argue that the variance of the denoising score-matching targets stems from the handling of intermediate noise scales.
We propose to remedy the problem by incorporating a reference batch which we use to calculate weighted conditional scores as more stable training targets.
- Score: 5.9115407007859755
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Diffusion models generate samples by reversing a fixed forward diffusion
process. Despite already providing impressive empirical results, these
diffusion models can be further improved by reducing the variance of
the training targets in their denoising score-matching objective. We argue that
the source of such variance lies in the handling of intermediate noise-variance
scales, where multiple modes in the data affect the direction of reverse paths.
We propose to remedy the problem by incorporating a reference batch which we
use to calculate weighted conditional scores as more stable training targets.
We show that the procedure indeed helps in the challenging intermediate regime
by reducing (the trace of) the covariance of training targets. The new stable
targets can be seen as trading bias for reduced variance, where the bias
vanishes with increasing reference batch size. Empirically, we show that the
new objective improves the image quality, stability, and training speed of
various popular diffusion models across datasets with both general ODE and SDE
solvers. When used in combination with EDM, our method yields a current SOTA
FID of 1.90 with 35 network evaluations on the unconditional CIFAR-10
generation task. The code is available at https://github.com/Newbeeer/stf
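For intuition, a minimal PyTorch sketch of the weighted conditional-score idea follows, assuming a Gaussian perturbation kernel x_t = x_0 + sigma * eps. The stable_target helper, its name, and its tensor shapes are illustrative assumptions for this summary, not the authors' implementation (see the repository above for that).

import torch

def stable_target(x_t, x0_ref, sigma):
    # x_t:    (B, D) noisy samples at noise level sigma
    # x0_ref: (R, D) reference batch of clean data; in the paper, the
    #         sample that actually generated each x_t is also included
    # Squared distances between each noisy sample and each reference point.
    d2 = torch.cdist(x_t, x0_ref).pow(2)                # (B, R)
    # Self-normalized Gaussian weights: an R-sample estimate of the
    # posterior p(x_0 | x_t) under the kernel N(x_t; x_0, sigma^2 I).
    w = torch.softmax(-d2 / (2.0 * sigma ** 2), dim=1)  # (B, R)
    # Weighted conditional scores: sum_i w_i * (x0_i - x_t) / sigma^2.
    x0_hat = w @ x0_ref                                 # (B, D)
    return (x0_hat - x_t) / sigma ** 2

As the reference batch size R grows, the softmax weights converge to the true posterior p(x_0 | x_t), so the bias of this estimator vanishes while the trace of its covariance stays well below that of the usual single-sample target -(x_t - x_0) / sigma^2; this is the bias-for-variance trade described in the abstract.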
Related papers
- Free Hunch: Denoiser Covariance Estimation for Diffusion Models Without Extra Costs [25.784316302130875]
Covariance information is available for free from training data and the curvature of the generative trajectory.
We integrate these sources of information using (i) a novel method to transfer covariance estimates across noise levels.
We validate the method on linear inverse problems, where it outperforms recent baselines.
arXiv Detail & Related papers (2024-10-15T00:23:09Z)
- Model Inversion Attacks Through Target-Specific Conditional Diffusion Models [54.69008212790426]
Model inversion attacks (MIAs) aim to reconstruct private images from a target classifier's training set, thereby raising privacy concerns in AI applications.
Previous GAN-based MIAs tend to suffer from inferior generative fidelity due to GAN's inherent flaws and biased optimization within latent space.
We propose Diffusion-based Model Inversion (Diff-MI) attacks to alleviate these issues.
arXiv Detail & Related papers (2024-07-16T06:38:49Z)
- Diffusion Models With Learned Adaptive Noise [12.530583016267768]
We propose MuLAN, a learned diffusion process that applies noise at different rates across an image.
MuLAN sets a new state-of-the-art in density estimation on CIFAR-10 and ImageNet.
arXiv Detail & Related papers (2023-12-20T18:00:16Z)
- Denoising Diffusion Bridge Models [54.87947768074036]
Diffusion models are powerful generative models that map noise to data using stochastic processes.
For many applications such as image editing, the model input comes from a distribution that is not random noise.
In our work, we propose Denoising Diffusion Bridge Models (DDBMs).
arXiv Detail & Related papers (2023-09-29T03:24:24Z)
- Improved Techniques for Maximum Likelihood Estimation for Diffusion ODEs [21.08236758778604]
We propose several improved techniques for maximum likelihood estimation for diffusion ODEs.
For training, we propose velocity parameterization and explore variance reduction techniques for faster convergence.
For evaluation, we propose a novel training-free truncated-normal dequantization to close the training-evaluation gap common in diffusion ODEs.
arXiv Detail & Related papers (2023-05-06T05:21:24Z)
- Reflected Diffusion Models [93.26107023470979]
We present Reflected Diffusion Models, which reverse a reflected stochastic differential equation evolving on the support of the data.
Our approach learns the score function through a generalized score matching loss and extends key components of standard diffusion models.
arXiv Detail & Related papers (2023-04-10T17:54:38Z)
- Consistent Diffusion Models: Mitigating Sampling Drift by Learning to be Consistent [97.64313409741614]
We propose to enforce a consistency property which states that predictions of the model on its own generated data are consistent across time.
We show that our novel training objective yields state-of-the-art results for conditional and unconditional generation on CIFAR-10 and baseline improvements on AFHQ and FFHQ.
arXiv Detail & Related papers (2023-02-17T18:45:04Z)
- Learning to Re-weight Examples with Optimal Transport for Imbalanced Classification [74.62203971625173]
Imbalanced data pose challenges for deep learning based classification models.
One of the most widely-used approaches for tackling imbalanced data is re-weighting.
We propose a novel re-weighting method based on optimal transport (OT) from a distributional point of view.
arXiv Detail & Related papers (2022-08-05T01:23:54Z)
- How Much is Enough? A Study on Diffusion Times in Score-based Generative Models [76.76860707897413]
Current best practice advocates for a large T to ensure that the forward dynamics brings the diffusion sufficiently close to a known and simple noise distribution.
We show how an auxiliary model can be used to bridge the gap between the ideal and the simulated forward dynamics, followed by a standard reverse diffusion process.
arXiv Detail & Related papers (2022-06-10T15:09:46Z)