DiffEnc: Variational Diffusion with a Learned Encoder
- URL: http://arxiv.org/abs/2310.19789v2
- Date: Thu, 8 Feb 2024 12:31:18 GMT
- Title: DiffEnc: Variational Diffusion with a Learned Encoder
- Authors: Beatrix M. G. Nielsen, Anders Christensen, Andrea Dittadi, Ole Winther
- Abstract summary: We introduce a data- and depth-dependent mean function in the diffusion process, which leads to a modified diffusion loss.
Our proposed framework, DiffEnc, achieves a statistically significant improvement in likelihood on CIFAR-10.
- Score: 14.045374947755922
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Diffusion models may be viewed as hierarchical variational autoencoders
(VAEs) with two improvements: parameter sharing for the conditional
distributions in the generative process and efficient computation of the loss
as independent terms over the hierarchy. We consider two changes to the
diffusion model that retain these advantages while adding flexibility to the
model. Firstly, we introduce a data- and depth-dependent mean function in the
diffusion process, which leads to a modified diffusion loss. Our proposed
framework, DiffEnc, achieves a statistically significant improvement in
likelihood on CIFAR-10. Secondly, we let the ratio of the noise variance of the
reverse encoder process and the generative process be a free weight parameter
rather than being fixed to 1. This leads to theoretical insights: for a
finite-depth hierarchy, the evidence lower bound (ELBO) can be used as an objective
for a weighted diffusion loss approach and for optimizing the noise schedule
specifically for inference. For the infinite-depth hierarchy, on the other
hand, the weight parameter has to be 1 to have a well-defined ELBO.
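The first change above can be made concrete with a short sketch. In a standard variational diffusion model the forward marginal is q(z_t | x) = N(alpha_t x, sigma_t^2 I); DiffEnc lets the mean depend on the data and the depth through a learned encoding x_t. The code below is a minimal illustration of that reading; the encoder architecture, the residual parameterization, and the cosine schedule are assumptions made for the sketch, not the paper's implementation.

```python
import torch
import torch.nn as nn

class TimeDependentEncoder(nn.Module):
    """Hypothetical stand-in for DiffEnc's learned encoder x_t = f(x, t).

    The paper uses an image encoder; here a small MLP conditioned on the
    diffusion time t illustrates the data- and depth-dependent mean.
    """
    def __init__(self, dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(), nn.Linear(hidden, dim)
        )

    def forward(self, x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Residual form: the encoder perturbs x rather than replacing it.
        return x + self.net(torch.cat([x, t[:, None]], dim=-1))

def forward_marginal(x, t, enc, alpha, sigma):
    """Sample z_t ~ N(alpha_t * x_t, sigma_t^2 I) for the modified process.

    In a plain variational diffusion model the mean is alpha(t) * x;
    DiffEnc's change is to use alpha(t) * enc(x, t) instead.
    """
    eps = torch.randn_like(x)
    z_t = alpha(t)[:, None] * enc(x, t) + sigma(t)[:, None] * eps
    return z_t, eps

# Toy usage with a variance-preserving cosine schedule (illustrative only).
enc = TimeDependentEncoder(dim=32)
x, t = torch.randn(8, 32), torch.rand(8)
z_t, eps = forward_marginal(
    x, t, enc,
    alpha=lambda s: torch.cos(0.5 * torch.pi * s),
    sigma=lambda s: torch.sin(0.5 * torch.pi * s),
)
```

The residual form is one plausible choice: at initialization the process then stays close to a standard VDM, and the encoder only has to learn a correction.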
Related papers
- Rectified Diffusion Guidance for Conditional Generation [62.00207951161297]
We revisit the theory behind CFG and rigorously confirm that the improper configuration of the combination coefficients (i.e., the widely used summing-to-one version) causes an expectation shift in the generative distribution.
We propose ReCFG with a relaxation on the guidance coefficients such that denoising with ReCFG strictly aligns with the diffusion theory.
That way, the rectified coefficients can be readily pre-computed by traversing the observed data, leaving the sampling speed barely affected.
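For context, the summing-to-one configuration referred to above is standard classifier-free guidance, which combines the conditional and unconditional noise predictions as (1 - w) * eps_uncond + w * eps_cond. A minimal sketch of the relaxation follows; the coefficient names are assumptions, and the paper's procedure for pre-computing the rectified coefficients from data is not reproduced here.

```python
import torch

def cfg_standard(eps_uncond: torch.Tensor, eps_cond: torch.Tensor, w: float) -> torch.Tensor:
    # Classic classifier-free guidance: the coefficients (1 - w) and w sum to one.
    return (1.0 - w) * eps_uncond + w * eps_cond

def cfg_relaxed(eps_uncond: torch.Tensor, eps_cond: torch.Tensor,
                gamma_uncond: float, gamma_cond: float) -> torch.Tensor:
    # ReCFG-style relaxation (sketch): two free coefficients, no longer
    # constrained to sum to one. In the paper they are rectified values
    # pre-computed from the observed data; here they are plain inputs.
    return gamma_uncond * eps_uncond + gamma_cond * eps_cond
```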
arXiv Detail & Related papers (2024-10-24T13:41:32Z)
- Understanding Reinforcement Learning-Based Fine-Tuning of Diffusion Models: A Tutorial and Review [63.31328039424469]
This tutorial provides a comprehensive survey of methods for fine-tuning diffusion models to optimize downstream reward functions.
We explain the application of various RL algorithms, including PPO, differentiable optimization, reward-weighted MLE, value-weighted sampling, and path consistency learning.
arXiv Detail & Related papers (2024-07-18T17:35:32Z)
- Distilling Diffusion Models into Conditional GANs [90.76040478677609]
We distill a complex multistep diffusion model into a single-step conditional GAN student model.
For an efficient regression loss, we propose E-LatentLPIPS, a perceptual loss operating directly in the diffusion model's latent space.
We demonstrate that our one-step generator outperforms cutting-edge one-step diffusion distillation models.
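The loss named here can be pictured with a generic sketch: an LPIPS-style feature distance computed on the diffusion latents themselves, so no decoding to pixel space is needed during distillation. Everything below (the feature network, the single-scale comparison, the class name) is a hypothetical stand-in, not the paper's E-LatentLPIPS.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentPerceptualLoss(nn.Module):
    """Hypothetical LPIPS-style loss computed directly on diffusion latents.

    Sketch only: a small conv feature extractor runs on the latents (e.g.
    4-channel Stable-Diffusion-style latents), and the loss is the distance
    between unit-normalized feature maps, as in LPIPS.
    """
    def __init__(self, in_ch: int = 4, width: int = 64, depth: int = 3):
        super().__init__()
        layers, ch = [], in_ch
        for _ in range(depth):
            layers += [nn.Conv2d(ch, width, 3, stride=2, padding=1), nn.SiLU()]
            ch = width
        self.features = nn.Sequential(*layers)

    def forward(self, student_latent: torch.Tensor, teacher_latent: torch.Tensor) -> torch.Tensor:
        # Compare unit-normalized features of the one-step student output
        # against the multi-step teacher output, without leaving latent space.
        f_s = F.normalize(self.features(student_latent), dim=1)
        f_t = F.normalize(self.features(teacher_latent), dim=1)
        return (f_s - f_t).pow(2).mean()
```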
arXiv Detail & Related papers (2024-05-09T17:59:40Z)
- Hierarchical Integration Diffusion Model for Realistic Image Deblurring [71.76410266003917]
Diffusion models (DMs) have been introduced to image deblurring and have exhibited promising performance.
We propose the Hierarchical Integration Diffusion Model (HI-Diff), for realistic image deblurring.
Experiments on synthetic and real-world blur datasets demonstrate that our HI-Diff outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-05-22T12:18:20Z)
- Structural Pruning for Diffusion Models [65.02607075556742]
We present Diff-Pruning, an efficient compression method tailored for learning lightweight diffusion models from pre-existing ones.
Our empirical assessment, undertaken across several datasets, highlights two primary benefits of our proposed method.
arXiv Detail & Related papers (2023-05-18T12:38:21Z)
- Diffusion-GAN: Training GANs with Diffusion [135.24433011977874]
Generative adversarial networks (GANs) are challenging to train stably.
We propose Diffusion-GAN, a novel GAN framework that leverages a forward diffusion chain to generate instance noise.
We show that Diffusion-GAN can produce more realistic images with higher stability and data efficiency than state-of-the-art GANs.
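The mechanism here is concrete enough for a small sketch: both real and generated images pass through the same forward-diffusion step before the discriminator sees them, so the diffusion chain supplies adaptive instance noise. The schedule and function names below are illustrative, not the paper's code.

```python
import torch

def diffuse(x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
    """One forward-diffusion marginal with a simple cosine schedule (sketch)."""
    alpha = torch.cos(0.5 * torch.pi * t)[:, None, None, None]
    sigma = torch.sin(0.5 * torch.pi * t)[:, None, None, None]
    return alpha * x + sigma * torch.randn_like(x)

def discriminator_inputs(real: torch.Tensor, fake: torch.Tensor, max_t: float = 0.5):
    # Core Diffusion-GAN idea (sketch): sample a diffusion time and apply the
    # same forward process to real and generated images, so the discriminator
    # only ever sees noisy versions of both. In the paper the noise level is
    # adapted during training; max_t is a fixed stand-in here.
    t = torch.rand(real.shape[0], device=real.device) * max_t
    return diffuse(real, t), diffuse(fake, t)
```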
arXiv Detail & Related papers (2022-06-05T20:45:01Z)
- Subspace Diffusion Generative Models [4.310834990284412]
Score-based models generate samples by mapping noise to data (and vice versa) via a high-dimensional diffusion process.
We restrict the diffusion via projections onto subspaces as the data distribution evolves toward noise.
Our framework is fully compatible with continuous-time diffusion and retains its flexible capabilities.
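A minimal sketch of the projection idea, using an arbitrary orthonormal basis U as a stand-in for the paper's structured subspaces (e.g. downsampled images); names and the cutoff value are assumptions.

```python
import torch

def project(z: torch.Tensor, U: torch.Tensor) -> torch.Tensor:
    """Orthogonal projection of flattened states z (batch, d) onto span(U),
    where U is (d, k) with orthonormal columns."""
    return (z @ U) @ U.T

def restricted_state(z: torch.Tensor, t: float, U: torch.Tensor,
                     t_cut: float = 0.6) -> torch.Tensor:
    # Past the cutoff time the diffusion runs inside the subspace: by then the
    # discarded directions are essentially pure noise, so little information is
    # lost and the score model can operate in lower dimension.
    return project(z, U) if t > t_cut else z

# Toy usage: a random 16-dimensional subspace of R^64.
U, _ = torch.linalg.qr(torch.randn(64, 16))
z = torch.randn(8, 64)
z_late = restricted_state(z, t=0.8, U=U)
```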
arXiv Detail & Related papers (2022-05-03T13:43:47Z)
- The Transitive Information Theory and its Application to Deep Generative Models [0.0]
A Variational Autoencoder (VAE) can be pushed in two opposite directions.
Existing methods reduce the issue to the rate-distortion trade-off between compression and reconstruction.
We develop a system that learns a hierarchy of disentangled representation together with a mechanism for recombining the learned representation for generalization.
arXiv Detail & Related papers (2022-03-09T22:35:02Z)
- A Variational Perspective on Diffusion-Based Generative Models and Score Matching [8.93483643820767]
We derive a variational framework for likelihood estimation for continuous-time generative diffusion.
We show that minimizing the score-matching loss is equivalent to maximizing a lower bound of the likelihood of the plug-in reverse SDE.
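As a reference point for that equivalence, a minimal denoising score-matching objective in noise-prediction form is sketched below; the cosine schedule, uniform time sampling, and unit weighting are illustrative choices, not the paper's derivation.

```python
import torch

def dsm_loss(eps_net, x: torch.Tensor) -> torch.Tensor:
    """Denoising score matching in noise-prediction form (sketch).

    x: (batch, d) flattened data; eps_net(z, t) predicts the noise added at
    time t. Minimizing this loss with a suitable weighting maximizes a lower
    bound on the likelihood of the plug-in reverse SDE, which is the
    equivalence the paper formalizes.
    """
    t = torch.rand(x.shape[0], device=x.device)
    alpha = torch.cos(0.5 * torch.pi * t)[:, None]
    sigma = torch.sin(0.5 * torch.pi * t)[:, None]
    eps = torch.randn_like(x)
    z = alpha * x + sigma * eps
    return (eps_net(z, t) - eps).pow(2).mean()
```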
arXiv Detail & Related papers (2021-06-05T05:50:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.