Composing Normalizing Flows for Inverse Problems
- URL: http://arxiv.org/abs/2002.11743v3
- Date: Mon, 14 Jun 2021 18:00:48 GMT
- Title: Composing Normalizing Flows for Inverse Problems
- Authors: Jay Whang, Erik M. Lindgren, Alexandros G. Dimakis
- Abstract summary: We propose a framework for approximate inference that estimates the target conditional as a composition of two flow models.
Our method is evaluated on a variety of inverse problems and is shown to produce high-quality samples with uncertainty quantification.
- Score: 89.06155049265641
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given an inverse problem with a normalizing flow prior, we wish to estimate
the distribution of the underlying signal conditioned on the observations. We
approach this problem as a task of conditional inference on the pre-trained
unconditional flow model. We first establish that this is computationally hard
for a large class of flow models. Motivated by this, we propose a framework for
approximate inference that estimates the target conditional as a composition of
two flow models. This formulation leads to a stable variational inference
training procedure that avoids adversarial training. Our method is evaluated on
a variety of inverse problems and is shown to produce high-quality samples with
uncertainty quantification. We further demonstrate that our approach can be
amortized for zero-shot inference.
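As a rough illustration of the composed-flow idea (a minimal sketch under assumed components, not the authors' released code), the snippet below treats the pre-trained prior as a toy affine flow G and learns a second affine flow H in latent space so that x = G(H(u)), with u drawn from a standard Gaussian, approximates the posterior of a linear Gaussian inverse problem. The objective is a reparameterized KL between the induced distribution and the posterior; in this construction the Jacobian term of G cancels between the variational density and the prior, so only the likelihood, the base densities, and the Jacobian of H appear.

    # Minimal sketch: variational inference with a composition of two flows.
    # Assumptions: a toy affine bijection G stands in for a real pre-trained flow prior,
    # H is a trainable diagonal affine flow in latent space, and the forward model is
    # linear with Gaussian noise.
    import torch

    torch.manual_seed(0)
    d, m, sigma = 8, 4, 0.1
    A = torch.randn(m, d)                          # hypothetical measurement operator
    normal = torch.distributions.Normal(0.0, 1.0)

    # Frozen "pre-trained" prior flow G: x = z * exp(s_G) + b_G.
    s_G, b_G = 0.3 * torch.randn(d), torch.randn(d)
    G = lambda z: z * torch.exp(s_G) + b_G

    # Trainable latent flow H: z = u * exp(s_H) + b_H, so posterior samples are x = G(H(u)).
    s_H = torch.zeros(d, requires_grad=True)
    b_H = torch.zeros(d, requires_grad=True)

    x_true = G(torch.randn(d))
    y = A @ x_true + sigma * torch.randn(m)        # observed measurements

    opt = torch.optim.Adam([s_H, b_H], lr=1e-2)
    for step in range(2000):
        u = torch.randn(64, d)                     # base samples (reparameterization)
        z = u * torch.exp(s_H) + b_H               # H(u)
        x = G(z)                                   # approximate posterior samples
        # KL(q || p(. | y)) up to a constant; log|det J_G| cancels between q and the prior.
        log_q = normal.log_prob(u).sum(-1) - s_H.sum()
        log_prior = normal.log_prob(z).sum(-1)
        log_lik = -((y - x @ A.T) ** 2).sum(-1) / (2 * sigma ** 2)
        loss = (log_q - log_prior - log_lik).mean()
        opt.zero_grad(); loss.backward(); opt.step()

After training, fresh draws of u give approximate posterior samples x = G(H(u)), and the objective contains no adversarial component, which is the stability property the abstract highlights.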
Related papers
- Amortized Posterior Sampling with Diffusion Prior Distillation [55.03585818289934]
We propose a variational inference approach to sample from the posterior distribution for solving inverse problems.
We show that our method is applicable to standard signals in Euclidean space, as well as signals on manifolds.
arXiv Detail & Related papers (2024-07-25T09:53:12Z) - Diffusion Prior-Based Amortized Variational Inference for Noisy Inverse Problems [12.482127049881026]
We propose a novel approach to solve inverse problems with a diffusion prior from an amortized variational inference perspective.
Our amortized inference learns a function that directly maps measurements to the implicit posterior distributions of corresponding clean data, enabling a single-step posterior sampling even for unseen measurements.
arXiv Detail & Related papers (2024-07-23T02:14:18Z) - Deep conditional distribution learning via conditional Föllmer flow [3.227277661633986]
We introduce an ordinary differential equation (ODE)-based deep generative method for learning conditional distributions, named Conditional Föllmer Flow.
For effective implementation, we discretize the flow with Euler's method, estimating the velocity field nonparametrically with a deep neural network.
arXiv Detail & Related papers (2024-02-02T14:52:10Z) - A Variational Perspective on Solving Inverse Problems with Diffusion Models [101.831766524264]
Inverse tasks can be formulated as inferring a posterior distribution over data.
This is however challenging in diffusion models since the nonlinear and iterative nature of the diffusion process renders the posterior intractable.
We propose a variational approach that by design seeks to approximate the true posterior distribution.
arXiv Detail & Related papers (2023-05-07T23:00:47Z) - Deep Variational Inverse Scattering [18.598311270757527]
Inverse medium scattering solvers generally reconstruct a single solution without an associated measure of uncertainty.
Deep networks such as conditional normalizing flows can be used to sample posteriors in inverse problems.
We propose U-Flow, a Bayesian U-Net based on conditional normalizing flows, which generates high-quality posterior samples and estimates physically-meaningful uncertainty.
arXiv Detail & Related papers (2022-12-08T14:57:06Z) - The Implicit Delta Method [61.36121543728134]
In this paper, we propose an alternative, the implicit delta method, which works by infinitesimally regularizing the training loss of uncertainty.
We show that the change in the evaluation due to regularization is consistent for the variance of the evaluation estimator, even when the infinitesimal change is approximated by a finite difference.
arXiv Detail & Related papers (2022-11-11T19:34:17Z) - A One-step Approach to Covariate Shift Adaptation [82.01909503235385]
A default assumption in many machine learning scenarios is that the training and test samples are drawn from the same probability distribution.
We propose a novel one-step approach that jointly learns the predictive model and the associated weights in one optimization.
arXiv Detail & Related papers (2020-07-08T11:35:47Z) - Solving Inverse Problems with a Flow-based Noise Model [100.18560761392692]
We study image inverse problems with a normalizing flow prior.
Our formulation views the solution as the maximum a posteriori estimate of the image conditioned on the measurements.
We empirically validate the efficacy of our method on various inverse problems, including compressed sensing with quantized measurements and denoising with highly structured noise patterns (see the illustrative sketch after this list).
arXiv Detail & Related papers (2020-03-18T08:33:49Z)
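For the MAP formulation mentioned in the last entry ("Solving Inverse Problems with a Flow-based Noise Model"), a worked sketch under assumed components: with a flow prior p_G(x) and a linear Gaussian forward model, the estimate maximizes log p(y | x) + log p_G(x) by gradient ascent on x. The affine flow below is again a toy stand-in for a pre-trained model, and the measurement operator and noise level are hypothetical.

    # Sketch: gradient-based MAP estimation under a (toy) flow prior.
    import torch

    torch.manual_seed(0)
    d, m, sigma = 8, 4, 0.1
    A = torch.randn(m, d)                          # hypothetical measurement operator
    normal = torch.distributions.Normal(0.0, 1.0)
    s, b = 0.3 * torch.randn(d), torch.randn(d)    # toy prior flow: x = z * exp(s) + b

    def flow_log_prob(x):
        z = (x - b) * torch.exp(-s)                # invert the flow
        return normal.log_prob(z).sum() - s.sum()  # change-of-variables log density

    x_true = torch.randn(d) * torch.exp(s) + b
    y = A @ x_true + sigma * torch.randn(m)        # observations

    x = torch.zeros(d, requires_grad=True)
    opt = torch.optim.Adam([x], lr=1e-2)
    for step in range(3000):
        log_lik = -((y - A @ x) ** 2).sum() / (2 * sigma ** 2)
        loss = -(log_lik + flow_log_prob(x))       # negative log posterior, up to a constant
        opt.zero_grad(); loss.backward(); opt.step()
    # x now approximates argmax_x [ log p(y | x) + log p_G(x) ].

Unlike the composed-flow sketch above, this produces a single point estimate rather than posterior samples, which is the distinction the main abstract draws when motivating uncertainty quantification.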
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.