ODE-DPS: ODE-based Diffusion Posterior Sampling for Inverse Problems in Partial Differential Equation
- URL: http://arxiv.org/abs/2404.13496v1
- Date: Sun, 21 Apr 2024 00:57:13 GMT
- Title: ODE-DPS: ODE-based Diffusion Posterior Sampling for Inverse Problems in Partial Differential Equation
- Authors: Enze Jiang, Jishen Peng, Zheng Ma, Xiong-Bin Yan
- Abstract summary: We introduce a novel unsupervised inversion methodology tailored for solving inverse problems arising from PDEs.
Our approach operates within the Bayesian inversion framework, treating the task of solving the posterior distribution as a conditional generation process.
To enhance the accuracy of inversion results, we propose an ODE-based Diffusion inversion algorithm.
- Score: 1.8356973269166506
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years we have witnessed a growth in mathematics for deep learning, which has been used to solve inverse problems of partial differential equations (PDEs). However, most deep learning-based inversion methods either require paired data or necessitate retraining neural networks for modifications in the conditions of the inverse problem, significantly reducing the efficiency of inversion and limiting its applicability. To overcome this challenge, in this paper, leveraging the score-based generative diffusion model, we introduce a novel unsupervised inversion methodology tailored for solving inverse problems arising from PDEs. Our approach operates within the Bayesian inversion framework, treating the task of solving the posterior distribution as a conditional generation process achieved through solving a reverse-time stochastic differential equation. Furthermore, to enhance the accuracy of inversion results, we propose an ODE-based Diffusion Posterior Sampling inversion algorithm. The algorithm stems from the marginal probability density functions of two distinct forward generation processes that satisfy the same Fokker-Planck equation. Through a series of experiments involving various PDEs, we showcase the efficiency and robustness of our proposed method.
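The posterior-sampling idea in the abstract can be illustrated with a minimal one-dimensional sketch. It assumes a variance-preserving (VP) diffusion with a constant-beta schedule and a Gaussian data prior whose score is available in closed form, plus a scalar linear observation; the guidance weight `zeta` and all numeric choices are illustrative, not taken from the paper.

```python
import numpy as np

# Minimal 1-D sketch of ODE-based diffusion posterior sampling. Assumptions
# (not from the paper): a VP diffusion with constant beta, a Gaussian data
# prior N(mu0, s0^2) with closed-form score, and an observation y = A*x + noise.

BETA = 5.0

def prior_score(x, t, mu0=2.0, s0=1.0):
    # Score of the VP marginal p_t: x_t ~ N(sqrt(ab)*mu0, ab*s0^2 + 1 - ab),
    # where ab = alpha_bar(t) = exp(-BETA * t).
    ab = np.exp(-BETA * t)
    mean = np.sqrt(ab) * mu0
    var = ab * s0**2 + (1.0 - ab)
    return -(x - mean) / var

def ode_dps_sample(y, A=1.0, zeta=0.0, n_steps=500, seed=0):
    # Integrate the probability-flow ODE backwards from t=1 to t=0, adding a
    # data-consistency gradient step toward the measurement y at each step.
    rng = np.random.default_rng(seed)
    dt = 1.0 / n_steps
    x = rng.standard_normal()  # start from the tractable reference N(0, 1)
    for i in range(n_steps, 0, -1):
        t = i * dt
        drift = -0.5 * BETA * x - 0.5 * BETA * prior_score(x, t)
        x = x - drift * dt                     # reverse-time Euler step
        x = x + zeta * A * (y - A * x) * dt    # likelihood-gradient guidance
    return x
```

In practice the exact Gaussian score would be replaced by a trained score network, and `zeta` tuned per problem; with `zeta=0` the sampler simply reproduces the prior.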
Related papers
- Score-Based Variational Inference for Inverse Problems [19.848238197979157]
In applications where the posterior mean is preferred, multiple samples must be generated from the posterior, which is time-consuming.
We establish a framework termed reverse mean propagation (RMP) that targets the posterior mean directly.
We develop an algorithm that optimizes the reverse KL divergence with natural gradient descent using score functions and propagates the mean at each reverse step.
arXiv Detail & Related papers (2024-10-08T02:55:16Z) - Total Uncertainty Quantification in Inverse PDE Solutions Obtained with Reduced-Order Deep Learning Surrogate Models [50.90868087591973]
We propose an approximate Bayesian method for quantifying the total uncertainty in inverse PDE solutions obtained with machine learning surrogate models.
We test the proposed framework by comparing it with the iterative ensemble smoother and deep ensembling methods for a non-linear diffusion equation.
arXiv Detail & Related papers (2024-08-20T19:06:02Z) - Inverse Problems with Diffusion Models: A MAP Estimation Perspective [5.002087490888723]
In computer vision, several image restoration tasks such as inpainting, deblurring, and super-resolution can be formally modeled as inverse problems.
We propose a MAP estimation framework to model the reverse conditional generation process of a continuous time diffusion model.
We use our proposed framework to develop effective algorithms for image restoration.
arXiv Detail & Related papers (2024-07-27T15:41:13Z) - On the Trajectory Regularity of ODE-based Diffusion Sampling [79.17334230868693]
Diffusion-based generative models use differential equations to establish a smooth connection between a complex data distribution and a tractable prior distribution.
In this paper, we identify several intriguing trajectory properties in the ODE-based sampling process of diffusion models.
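The Fokker-Planck argument underpinning such ODE-based sampling can be checked numerically: the VP forward SDE and its probability-flow ODE induce the same marginal densities. For a Gaussian initial law the VP marginal is known in closed form, so transporting samples with the ODE should reproduce the analytic moments. The constant-beta schedule and all numbers below are illustrative assumptions.

```python
import numpy as np

# Sketch: the VP forward SDE and its probability-flow ODE share marginals
# (they solve the same Fokker-Planck equation). For a Gaussian initial law
# the marginal is closed-form, so we can verify the ODE transport numerically.

BETA = 4.0
MU0, S0 = 1.0, 0.5

def marginal(t):
    # Closed-form VP marginal: x_t ~ N(sqrt(ab)*MU0, ab*S0^2 + 1 - ab).
    ab = np.exp(-BETA * t)
    return np.sqrt(ab) * MU0, ab * S0**2 + (1.0 - ab)

def ode_forward(x0, n_steps=1000):
    # Euler integration of dx/dt = -BETA/2 * (x + score(x, t)) from t=0 to 1.
    dt = 1.0 / n_steps
    x = np.asarray(x0, dtype=float)
    for i in range(n_steps):
        mean, var = marginal(i * dt)
        score = -(x - mean) / var
        x = x + (-0.5 * BETA * (x + score)) * dt
    return x

rng = np.random.default_rng(1)
x1 = ode_forward(rng.normal(MU0, S0, size=20000))
m_true, v_true = marginal(1.0)
# The deterministic ODE transport matches the SDE's marginal moments at t=1.
```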
arXiv Detail & Related papers (2024-05-18T15:59:41Z) - Improving Diffusion Models for Inverse Problems Using Optimal Posterior Covariance [52.093434664236014]
Recent diffusion models provide a promising zero-shot solution to noisy linear inverse problems without retraining for specific inverse problems.
Inspired by this finding, we propose to improve recent methods by using more principled covariance determined by maximum likelihood estimation.
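The building block behind such principled-covariance guidance is the closed-form Gaussian posterior for a linear observation model, which the sketch below computes; the specific matrices are illustrative, not from the paper.

```python
import numpy as np

# Closed-form Gaussian posterior: for a prior x ~ N(mu, Sigma) and a linear
# observation y = A x + n with n ~ N(0, sigma2 * I), the posterior is Gaussian
# with the moments computed below.

def gaussian_posterior(mu, Sigma, A, y, sigma2):
    prior_prec = np.linalg.inv(Sigma)
    post_prec = prior_prec + A.T @ A / sigma2        # posterior precision
    post_cov = np.linalg.inv(post_prec)
    post_mean = post_cov @ (prior_prec @ mu + A.T @ y / sigma2)
    return post_mean, post_cov

mu = np.zeros(2)
Sigma = np.eye(2)
A = np.array([[1.0, 0.0]])   # observe only the first coordinate
y = np.array([2.0])
mean, cov = gaussian_posterior(mu, Sigma, A, y, sigma2=1.0)
# mean = [1, 0]: the observed coordinate moves toward y, the other keeps its
# prior mean; the covariance shrinks only along the observed direction.
```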
arXiv Detail & Related papers (2024-02-03T13:35:39Z) - An Unsupervised Deep Learning Approach for the Wave Equation Inverse Problem [12.676629870617337]
Full-waveform inversion (FWI) is a powerful geophysical imaging technique that infers high-resolution subsurface physical parameters.
Due to limitations in observation, limited shots or receivers, and random noise, conventional inversion methods are confronted with numerous challenges.
We provide an unsupervised learning approach aimed at accurately reconstructing physical velocity parameters.
arXiv Detail & Related papers (2023-11-08T08:39:33Z) - A Variational Perspective on Solving Inverse Problems with Diffusion Models [101.831766524264]
Inverse tasks can be formulated as inferring a posterior distribution over data.
This is however challenging in diffusion models since the nonlinear and iterative nature of the diffusion process renders the posterior intractable.
We propose a variational approach that by design seeks to approximate the true posterior distribution.
arXiv Detail & Related papers (2023-05-07T23:00:47Z) - Reflected Diffusion Models [93.26107023470979]
We present Reflected Diffusion Models, which reverse a reflected differential equation evolving on the support of the data.
Our approach learns the score function through a generalized score matching loss and extends key components of standard diffusion models.
arXiv Detail & Related papers (2023-04-10T17:54:38Z) - VI-DGP: A variational inference method with deep generative prior for solving high-dimensional inverse problems [0.7734726150561089]
We propose a novel approximation method for estimating the high-dimensional posterior distribution.
This approach leverages a deep generative model to learn a prior model capable of generating spatially-varying parameters.
The proposed method can be fully implemented in an automatic differentiation manner.
arXiv Detail & Related papers (2023-02-22T06:48:10Z) - Variational Laplace Autoencoders [53.08170674326728]
Variational autoencoders employ an amortized inference model to approximate the posterior of latent variables.
We present a novel approach that addresses the limited posterior expressiveness of fully-factorized Gaussian assumption.
We also present a general framework named Variational Laplace Autoencoders (VLAEs) for training deep generative models.
arXiv Detail & Related papers (2022-11-30T18:59:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.