Conditional score-based diffusion models for Bayesian inference in
infinite dimensions
- URL: http://arxiv.org/abs/2305.19147v2
- Date: Fri, 27 Oct 2023 19:31:22 GMT
- Title: Conditional score-based diffusion models for Bayesian inference in
infinite dimensions
- Authors: Lorenzo Baldassari, Ali Siahkoohi, Josselin Garnier, Knut Solna,
Maarten V. de Hoop
- Abstract summary: We propose a theoretically grounded method for sampling from the posterior of infinite-dimensional inverse problems based on amortized conditional SDMs.
A significant part of our analysis is dedicated to demonstrating that extending infinite-dimensional SDMs to the conditional setting requires careful consideration.
- Score: 4.747324197963405
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Since their initial introduction, score-based diffusion models (SDMs) have
been successfully applied to solve a variety of linear inverse problems in
finite-dimensional vector spaces due to their ability to efficiently
approximate the posterior distribution. However, using SDMs for inverse
problems in infinite-dimensional function spaces has only been addressed
recently, primarily through methods that learn the unconditional score. While
this approach is advantageous for some inverse problems, it is mostly heuristic
and involves numerous computationally costly forward operator evaluations
during posterior sampling. To address these limitations, we propose a
theoretically grounded method for sampling from the posterior of
infinite-dimensional Bayesian linear inverse problems based on amortized
conditional SDMs. In particular, we prove that one of the most successful
approaches for estimating the conditional score in finite dimensions - the
conditional denoising estimator - can also be applied in infinite dimensions. A
significant part of our analysis is dedicated to demonstrating that extending
infinite-dimensional SDMs to the conditional setting requires careful
consideration, as the conditional score typically blows up for small times,
in contrast to the unconditional score. We conclude by presenting stylized and
large-scale numerical examples that validate our approach, offer additional
insights, and demonstrate that our method enables large-scale,
discretization-invariant Bayesian inference.
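
As a rough illustration of the approach the abstract describes, the following PyTorch sketch pairs a conditional denoising score-matching loss with amortized reverse-SDE posterior sampling. It is a finite-dimensional toy under assumed choices (a variance-preserving forward SDE, a hypothetical `score_net(x, y, t)` interface, and a small-time cutoff `t_min` reflecting the blow-up of the conditional score near t = 0); it is not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of the two ingredients above:
# training a conditional score network with the conditional denoising
# estimator, and amortized posterior sampling via the reverse-time SDE.
# The VP forward process, the interface score_net(x, y, t), and the
# cutoff t_min are illustrative assumptions, not taken from the paper.
import math
import torch

def vp_coeffs(t):
    """Mean scaling and noise std of the VP forward process
    x_t = exp(-t/2) * x_0 + sqrt(1 - exp(-t)) * eps."""
    return torch.exp(-0.5 * t), torch.sqrt(1.0 - torch.exp(-t))

def cde_loss(score_net, x0, y, t_min=1e-3, t_max=1.0):
    """Conditional denoising score-matching loss: regress
    s_theta(x_t, y, t) onto grad_{x_t} log p(x_t | x_0) = -eps / sigma_t.
    Sampling t >= t_min (an assumed cutoff) avoids the small-time
    region where the conditional score blows up."""
    b = x0.shape[0]
    t = t_min + (t_max - t_min) * torch.rand(b, device=x0.device)
    alpha, sigma = vp_coeffs(t)
    bshape = (b,) + (1,) * (x0.dim() - 1)
    alpha, sigma = alpha.view(bshape), sigma.view(bshape)
    eps = torch.randn_like(x0)
    xt = alpha * x0 + sigma * eps
    pred = score_net(xt, y, t)          # score network amortized in y
    # sigma^2-weighted residual keeps the objective bounded near t_min
    return ((sigma * pred + eps) ** 2).mean()

@torch.no_grad()
def sample_posterior(score_net, y, shape, n_steps=500, t_min=1e-3, t_max=1.0):
    """Euler-Maruyama discretization of the reverse-time VP SDE, run
    from t_max down to t_min. Conditioning on y is baked into the
    network, so no forward-operator evaluations are needed."""
    x = torch.randn(shape, device=y.device)
    dt = (t_max - t_min) / n_steps
    for t in torch.linspace(t_max, t_min, n_steps, device=y.device):
        tb = t.expand(shape[0])
        score = score_net(x, y, tb)
        drift = -0.5 * x - score        # reverse drift of dx = -x/2 dt + dW
        x = x - drift * dt + math.sqrt(dt) * torch.randn_like(x)
    return x
```

In the paper's infinite-dimensional setting, the white-noise draws above would roughly correspond to samples from a trace-class Gaussian measure on function space, and the score network would act on discretizations of functions, which is what underlies the discretization-invariant inference the abstract claims.
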
Related papers
- Total Uncertainty Quantification in Inverse PDE Solutions Obtained with Reduced-Order Deep Learning Surrogate Models [50.90868087591973]
We propose an approximate Bayesian method for quantifying the total uncertainty in inverse PDE solutions obtained with machine learning surrogate models.
We test the proposed framework by comparing it with the iterative ensemble smoother and deep ensembling methods for a non-linear diffusion equation.
arXiv Detail & Related papers (2024-08-20T19:06:02Z)
- Diffusion Prior-Based Amortized Variational Inference for Noisy Inverse Problems [12.482127049881026]
We propose a novel approach to solve inverse problems with a diffusion prior from an amortized variational inference perspective.
Our amortized inference learns a function that directly maps measurements to the implicit posterior distributions of the corresponding clean data, enabling single-step posterior sampling even for unseen measurements.
arXiv Detail & Related papers (2024-07-23T02:14:18Z)
- Taming Score-Based Diffusion Priors for Infinite-Dimensional Nonlinear Inverse Problems [4.42498215122234]
This work introduces a sampling method capable of solving Bayesian inverse problems in function space.
It does not assume the log-concavity of the likelihood, meaning that it is compatible with nonlinear inverse problems.
A novel convergence analysis is conducted, inspired by the fixed-point methods established for traditional regularization-by-denoising algorithms.
arXiv Detail & Related papers (2024-05-24T16:17:01Z)
- Improving Diffusion Models for Inverse Problems Using Optimal Posterior Covariance [52.093434664236014]
Recent diffusion models provide a promising zero-shot solution to noisy linear inverse problems without retraining for specific inverse problems.
Building on this, we propose to improve recent methods by using a more principled covariance determined by maximum likelihood estimation.
arXiv Detail & Related papers (2024-02-03T13:35:39Z)
- Sampling and estimation on manifolds using the Langevin diffusion [45.57801520690309]
Two estimators of linear functionals of $\mu_\phi$ based on the discretized Markov process are considered.
Error bounds are derived for sampling and estimation using a discretization of an intrinsically defined Langevin diffusion.
arXiv Detail & Related papers (2023-12-22T18:01:11Z)
- Semi-supervised Invertible DeepONets for Bayesian Inverse Problems [8.594140167290098]
DeepONets offer a powerful, data-driven tool for solving parametric PDEs by learning operators.
In this work, we employ physics-informed DeepONets in the context of high-dimensional, Bayesian inverse problems.
arXiv Detail & Related papers (2022-09-06T18:55:06Z)
- A Provably Efficient Model-Free Posterior Sampling Method for Episodic Reinforcement Learning [50.910152564914405]
Existing posterior sampling methods for reinforcement learning are limited by being model-based or by lacking worst-case theoretical guarantees beyond linear MDPs.
This paper proposes a new model-free formulation of posterior sampling that applies to more general episodic reinforcement learning problems with theoretical guarantees.
arXiv Detail & Related papers (2022-08-23T12:21:01Z)
- Improving Diffusion Models for Inverse Problems using Manifold Constraints [55.91148172752894]
We show that current solvers throw the sample path off the data manifold, and hence the error accumulates.
To address this, we propose an additional correction term inspired by the manifold constraint.
We show that our method is superior to the previous methods both theoretically and empirically.
arXiv Detail & Related papers (2022-06-02T09:06:10Z)
- A Dimensionality Reduction Method for Finding Least Favorable Priors with a Focus on Bregman Divergence [108.28566246421742]
This paper develops a dimensionality reduction method that allows us to move the optimization to a finite-dimensional setting with an explicit bound on the dimension.
In order to make progress on the problem, we restrict ourselves to Bayesian risks induced by a relatively large class of loss functions, namely Bregman divergences.
arXiv Detail & Related papers (2022-02-23T16:22:28Z)
- Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation [99.92568326314667]
We propose the amortized conditional normalized maximum likelihood (ACNML) method as a scalable general-purpose approach for uncertainty estimation.
Our algorithm builds on the conditional normalized maximum likelihood (CNML) coding scheme, which has minimax optimal properties according to the minimum description length principle.
We demonstrate that ACNML compares favorably to a number of prior techniques for uncertainty estimation in terms of calibration on out-of-distribution inputs.
arXiv Detail & Related papers (2020-11-05T08:04:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.