Denoising Diffusion Variational Inference: Diffusion Models as
Expressive Variational Posteriors
- URL: http://arxiv.org/abs/2401.02739v2
- Date: Mon, 19 Feb 2024 15:50:35 GMT
- Title: Denoising Diffusion Variational Inference: Diffusion Models as
Expressive Variational Posteriors
- Authors: Top Piriyakulkij, Yingheng Wang, Volodymyr Kuleshov
- Abstract summary: DDVI is an approximate inference algorithm for latent variable models that relies on diffusion models as flexible variational posteriors.
We use DDVI on a motivating task in biology -- inferring latent ancestry from human genomes -- and we find that it outperforms strong baselines on the Thousand Genomes dataset.
- Score: 12.380863420871071
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose denoising diffusion variational inference (DDVI), an approximate
inference algorithm for latent variable models which relies on diffusion models
as flexible variational posteriors. Specifically, our method introduces an
expressive class of approximate posteriors with auxiliary latent variables that
perform diffusion in latent space by reversing a user-specified noising
process. We fit these models by optimizing a lower bound on the marginal
likelihood inspired by the wake-sleep algorithm. Our method is easy to
implement (it fits a regularized extension of the ELBO), is compatible with
black-box variational inference, and outperforms alternative classes of
approximate posteriors based on normalizing flows or adversarial networks. It
increases the expressivity of flow-based methods via non-invertible deep
recurrent architectures and avoids the instability of adversarial methods. We
use DDVI on a motivating task in biology -- inferring latent ancestry from
human genomes -- and we find that it outperforms strong baselines on the
Thousand Genomes dataset.
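For context, the following is a minimal sketch of the standard auxiliary-variable lower bound that posteriors of this kind build on; it is a generic construction, not necessarily the paper's exact objective. Augmenting the variational posterior $q(\mathbf{z}_0 \mid \mathbf{x})$ with auxiliary diffusion latents $\mathbf{z}_{1:T}$ and a reverse model $r(\mathbf{z}_{1:T} \mid \mathbf{z}_0, \mathbf{x})$ gives

$$\log p(\mathbf{x}) \;\ge\; \mathbb{E}_{q(\mathbf{z}_{0:T} \mid \mathbf{x})}\!\left[ \log \frac{p(\mathbf{x}, \mathbf{z}_0)\, r(\mathbf{z}_{1:T} \mid \mathbf{z}_0, \mathbf{x})}{q(\mathbf{z}_{0:T} \mid \mathbf{x})} \right],$$

which collapses to the usual ELBO when the auxiliary latents are dropped; the abstract's "regularized extension of the ELBO" can be read as a bound of this form with additional regularization from the wake-sleep-style fitting.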
Related papers
- G2D2: Gradient-guided Discrete Diffusion for image inverse problem solving [55.185588994883226]
This paper presents a novel method for addressing linear inverse problems by leveraging image-generation models based on discrete diffusion as priors.
To the best of our knowledge, this is the first approach to use discrete diffusion model-based priors for solving image inverse problems.
arXiv Detail & Related papers (2024-10-09T06:18:25Z)
- Sparse Inducing Points in Deep Gaussian Processes: Enhancing Modeling with Denoising Diffusion Variational Inference [6.37512592611305]
In deep Gaussian processes (DGPs), a sparse set of integration locations called inducing points is selected to approximate the posterior distribution of the model.
Traditional variational inference approaches to posterior approximation often lead to significant bias.
We propose an alternative method called Denoising Diffusion Variational Inference (DDVI) that uses a denoising diffusion stochastic differential equation (SDE) to generate posterior samples of the inducing variables (a generic sketch of this kind of reverse-SDE sampling appears after this list).
arXiv Detail & Related papers (2024-07-24T06:39:58Z)
- Diffusion Prior-Based Amortized Variational Inference for Noisy Inverse Problems [12.482127049881026]
We propose a novel approach to solve inverse problems with a diffusion prior from an amortized variational inference perspective.
Our amortized inference learns a function that directly maps measurements to the implicit posterior distributions of corresponding clean data, enabling a single-step posterior sampling even for unseen measurements.
arXiv Detail & Related papers (2024-07-23T02:14:18Z)
- Amortizing intractable inference in diffusion models for vision, language, and control [89.65631572949702]
This paper studies amortized sampling of the posterior over data, $\mathbf{x} \sim p^{\rm post}(\mathbf{x}) \propto p(\mathbf{x})\, r(\mathbf{x})$, in a model that consists of a diffusion generative model prior $p(\mathbf{x})$ and a black-box constraint or function $r(\mathbf{x})$.
We prove the correctness of a data-free learning objective, relative trajectory balance, for training a diffusion model that samples from this posterior.
arXiv Detail & Related papers (2024-05-31T16:18:46Z)
- Variance-Preserving-Based Interpolation Diffusion Models for Speech Enhancement [53.2171981279647]
We present a framework that encapsulates both the variance-preserving (VP)- and variance-exploding (VE)-based diffusion methods.
To improve performance and ease model training, we analyze the common difficulties encountered in diffusion models.
We evaluate our model against several methods using a public benchmark to showcase the effectiveness of our approach.
arXiv Detail & Related papers (2023-06-14T14:22:22Z)
- A Geometric Perspective on Diffusion Models [57.27857591493788]
We inspect the ODE-based sampling of a popular variance-exploding SDE.
We establish a theoretical relationship between the optimal ODE-based sampling and the classic mean-shift (mode-seeking) algorithm.
arXiv Detail & Related papers (2023-05-31T15:33:16Z)
- A Variational Perspective on Solving Inverse Problems with Diffusion Models [101.831766524264]
Inverse tasks can be formulated as inferring a posterior distribution over data.
This is however challenging in diffusion models since the nonlinear and iterative nature of the diffusion process renders the posterior intractable.
We propose a variational approach that by design seeks to approximate the true posterior distribution.
arXiv Detail & Related papers (2023-05-07T23:00:47Z)
- Information-Theoretic Diffusion [18.356162596599436]
Denoising diffusion models have spurred significant gains in density modeling and image generation.
We introduce a new mathematical foundation for diffusion models inspired by classic results in information theory.
arXiv Detail & Related papers (2023-02-07T23:03:07Z)
- Variational Laplace Autoencoders [53.08170674326728]
Variational autoencoders employ an amortized inference model to approximate the posterior of latent variables.
We present a novel approach that addresses the limited posterior expressiveness of the fully-factorized Gaussian assumption.
We also present a general framework named Variational Laplace Autoencoders (VLAEs) for training deep generative models.
arXiv Detail & Related papers (2022-11-30T18:59:27Z)
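As noted in the DGP entry above, several of these methods draw posterior samples by simulating a reverse-time (denoising) SDE. Below is a minimal Python sketch of generic Euler-Maruyama reverse-SDE sampling with a constant variance-preserving schedule; the toy Gaussian score, function names, and parameter values are illustrative assumptions, not code from any paper listed here.

```python
import numpy as np

def toy_score(z, t):
    """Placeholder score function: the score of a standard Gaussian.
    In practice this would be a learned denoising network."""
    return -z

def reverse_sde_sample(score_fn, dim, n_steps=1000, t_max=1.0, beta=1.0, seed=0):
    """Integrate the VP reverse-time SDE
        dz = [-0.5*beta*z - beta*score(z, t)] dt + sqrt(beta) dW
    backwards from t_max to 0 with Euler-Maruyama, starting from noise."""
    rng = np.random.default_rng(seed)
    dt = t_max / n_steps
    z = rng.standard_normal(dim)  # sample from the reference (noise) distribution
    for i in range(n_steps):
        t = t_max - i * dt
        drift = -0.5 * beta * z - beta * score_fn(z, t)
        # backward Euler-Maruyama step: subtract the drift, add fresh noise
        z = z - drift * dt + np.sqrt(beta * dt) * rng.standard_normal(dim)
    return z

print(reverse_sde_sample(toy_score, dim=2))
```

With a learned latent-space score network in place of `toy_score`, the same loop yields approximate posterior samples, which is the role the reverse SDE plays in the diffusion-based inference methods above.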
This list is automatically generated from the titles and abstracts of the papers on this site.