ODE-DPS: ODE-based Diffusion Posterior Sampling for Inverse Problems in Partial Differential Equation
- URL: http://arxiv.org/abs/2404.13496v1
- Date: Sun, 21 Apr 2024 00:57:13 GMT
- Title: ODE-DPS: ODE-based Diffusion Posterior Sampling for Inverse Problems in Partial Differential Equation
- Authors: Enze Jiang, Jishen Peng, Zheng Ma, Xiong-Bin Yan
- Abstract summary: We introduce a novel unsupervised inversion methodology tailored for solving inverse problems arising from PDEs.
Our approach operates within the Bayesian inversion framework, treating the task of solving the posterior distribution as a conditional generation process.
To enhance the accuracy of inversion results, we propose an ODE-based Diffusion inversion algorithm.
- Score: 1.8356973269166506
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years we have witnessed a growth in mathematics for deep learning, which has been used to solve inverse problems of partial differential equations (PDEs). However, most deep learning-based inversion methods either require paired data or necessitate retraining neural networks for modifications in the conditions of the inverse problem, significantly reducing the efficiency of inversion and limiting its applicability. To overcome this challenge, in this paper, leveraging the score-based generative diffusion model, we introduce a novel unsupervised inversion methodology tailored for solving inverse problems arising from PDEs. Our approach operates within the Bayesian inversion framework, treating the task of solving the posterior distribution as a conditional generation process achieved through solving a reverse-time stochastic differential equation. Furthermore, to enhance the accuracy of inversion results, we propose an ODE-based Diffusion Posterior Sampling inversion algorithm. The algorithm stems from the marginal probability density functions of two distinct forward generation processes that satisfy the same Fokker-Planck equation. Through a series of experiments involving various PDEs, we showcase the efficiency and robustness of our proposed method.
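The Fokker-Planck argument referenced in the abstract is the standard score-SDE relationship; as a sketch in the usual notation (the generic formulation, not necessarily the paper's exact one), the forward process

```latex
\mathrm{d}x = f(x,t)\,\mathrm{d}t + g(t)\,\mathrm{d}w
```

admits both a reverse-time SDE

```latex
\mathrm{d}x = \left[f(x,t) - g(t)^2 \nabla_x \log p_t(x)\right]\mathrm{d}t + g(t)\,\mathrm{d}\bar{w}
```

and a deterministic probability-flow ODE

```latex
\frac{\mathrm{d}x}{\mathrm{d}t} = f(x,t) - \tfrac{1}{2}\, g(t)^2 \nabla_x \log p_t(x),
```

whose marginal densities $p_t$ coincide because both satisfy the same Fokker-Planck equation. For posterior sampling given data $y$, the unconditional score is replaced by the posterior score

```latex
\nabla_x \log p_t(x \mid y) = \nabla_x \log p_t(x) + \nabla_x \log p_t(y \mid x).
```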
Related papers
- On the Trajectory Regularity of ODE-based Diffusion Sampling [79.17334230868693]
Diffusion-based generative models use differential equations to establish a smooth connection between a complex data distribution and a tractable prior distribution.
In this paper, we identify several intriguing trajectory properties in the ODE-based sampling process of diffusion models.
arXiv Detail & Related papers (2024-05-18T15:59:41Z)
- Improving Diffusion Models for Inverse Problems Using Optimal Posterior Covariance [52.093434664236014]
Recent diffusion models provide a promising zero-shot solution to noisy linear inverse problems without retraining for specific inverse problems.
Inspired by this finding, we propose to improve recent methods by using more principled covariance determined by maximum likelihood estimation.
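The guidance idea shared by these diffusion posterior sampling methods can be illustrated on a linear-Gaussian toy problem, where the posterior score splits into a prior score plus a likelihood gradient. A minimal sketch (hypothetical toy setup of our own; all names and parameters are illustrative, not any paper's implementation):

```python
import numpy as np

# Toy sketch of score-guided posterior estimation for a linear inverse
# problem y = A x with a standard-normal prior on x. Illustrates the
# guidance idea (prior score + likelihood score), not any paper's code.

rng = np.random.default_rng(0)
d, m = 4, 2
A = rng.normal(size=(m, d))
x_true = rng.normal(size=d)
y = A @ x_true                           # measurements (noiseless for simplicity)

x = rng.normal(size=d)                   # initialize from the prior
step, noise_std = 0.05, 1.0
for _ in range(1000):
    prior_score = -x                     # score of N(0, I): grad log p(x) = -x
    likelihood_score = A.T @ (y - A @ x) / noise_std**2
    x = x + step * (prior_score + likelihood_score)   # ascend the log posterior

# x converges to the MAP point of the resulting Gaussian posterior
print(x)
```

With everything Gaussian the stationary point is the MAP estimate, i.e. the solution of `(I + A.T @ A) x = A.T @ y`; in a diffusion sampler the same likelihood gradient is added to a learned, time-dependent score network instead of the closed-form prior score.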
arXiv Detail & Related papers (2024-02-03T13:35:39Z)
- An Unsupervised Deep Learning Approach for the Wave Equation Inverse Problem [12.676629870617337]
Full-waveform inversion (FWI) is a powerful geophysical imaging technique that infers high-resolution subsurface physical parameters.
Due to limitations in observation, limited shots or receivers, and random noise, conventional inversion methods are confronted with numerous challenges.
We provide an unsupervised learning approach aimed at accurately reconstructing physical velocity parameters.
arXiv Detail & Related papers (2023-11-08T08:39:33Z)
- TSONN: Time-stepping-oriented neural network for solving partial differential equations [1.9061608251056779]
This work integrates time-stepping method with deep learning to solve PDE problems.
The convergence of model training is significantly improved by following the trajectory of the pseudo time-stepping process.
Our results show that the proposed method achieves stable training and correct results in many problems that standard PINNs fail to solve.
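The pseudo time-stepping idea — marching a steady problem toward its steady state rather than solving it directly — can be sketched on a toy finite-difference Poisson problem (our own illustration, unrelated to the TSONN code):

```python
import numpy as np

# Hypothetical illustration of pseudo time-stepping: to solve the steady
# equation u'' = f on (0, 1) with u(0) = u(1) = 0, march
#   u <- u + dtau * (u'' - f)
# until the residual vanishes. Toy finite differences, not TSONN itself.

n = 19                               # interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
f = -np.pi**2 * np.sin(np.pi * x)    # chosen so the steady solution is sin(pi x)

u = np.zeros(n)
dtau = 0.5 * h**2                    # explicit stability limit for the 1D Laplacian
for _ in range(5000):
    up = np.concatenate(([0.0], u, [0.0]))        # zero Dirichlet boundaries
    lap = (up[2:] + up[:-2] - 2.0 * u) / h**2     # second-difference Laplacian
    u = u + dtau * (lap - f)

print(np.max(np.abs(u - np.sin(np.pi * x))))      # small discretization error
```

The same trajectory-following idea is what the paper exploits during training: the pseudo-time iterates give the optimizer a stable path to the steady solution.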
arXiv Detail & Related papers (2023-10-25T09:19:40Z)
- An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
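The fixed-point formulation at the heart of the Deep Equilibrium framework can be sketched with a toy contraction map iterated to its equilibrium (hypothetical weights of our own, not the paper's learned regularizer):

```python
import numpy as np

# Hypothetical sketch of a deep-equilibrium-style layer: the output is the
# fixed point z* = f(z*, x) of a small map, found here by plain iteration.

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 3))
W *= 0.5 / np.linalg.norm(W, 2)      # spectral norm 0.5 => f is a contraction
b = rng.normal(size=3)

def f(z, x):
    # tanh is 1-Lipschitz, so ||W|| < 1 guarantees a unique fixed point
    return np.tanh(W @ z + x + b)

x = rng.normal(size=3)               # layer input
z = np.zeros(3)
for _ in range(100):
    z = f(z, x)

print(np.linalg.norm(z - f(z, x)))   # ~0: z is (numerically) the equilibrium
```

In practice DEQ models differentiate through the equilibrium via the implicit function theorem rather than unrolling the loop, which is what makes convergence guarantees for the iteration valuable.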
arXiv Detail & Related papers (2023-06-10T08:25:16Z)
- A Variational Perspective on Solving Inverse Problems with Diffusion Models [101.831766524264]
Inverse tasks can be formulated as inferring a posterior distribution over data.
This is however challenging in diffusion models since the nonlinear and iterative nature of the diffusion process renders the posterior intractable.
We propose a variational approach that by design seeks to approximate the true posterior distribution.
arXiv Detail & Related papers (2023-05-07T23:00:47Z)
- Reflected Diffusion Models [93.26107023470979]
We present Reflected Diffusion Models, which reverse a reflected differential equation evolving on the support of the data.
Our approach learns the score function through a generalized score matching loss and extends key components of standard diffusion models.
arXiv Detail & Related papers (2023-04-10T17:54:38Z)
- VI-DGP: A variational inference method with deep generative prior for solving high-dimensional inverse problems [0.7734726150561089]
We propose a novel approximation method for estimating the high-dimensional posterior distribution.
This approach leverages a deep generative model to learn a prior model capable of generating spatially-varying parameters.
The proposed method can be fully implemented in an automatic differentiation manner.
arXiv Detail & Related papers (2023-02-22T06:48:10Z)
- Variational Laplace Autoencoders [53.08170674326728]
Variational autoencoders employ an amortized inference model to approximate the posterior of latent variables.
We present a novel approach that addresses the limited posterior expressiveness of fully-factorized Gaussian assumption.
We also present a general framework named Variational Laplace Autoencoders (VLAEs) for training deep generative models.
arXiv Detail & Related papers (2022-11-30T18:59:27Z)
- Towards a machine learning pipeline in reduced order modelling for inverse problems: neural networks for boundary parametrization, dimensionality reduction and solution manifold approximation [0.0]
Inverse problems, especially in a partial differential equation context, require a huge computational load.
We apply a numerical pipeline that uses artificial neural networks to parametrize the boundary conditions of the problem at hand.
It yields a general framework capable of providing an ad-hoc parametrization of the inlet boundary and quickly converging to the optimal solution.
arXiv Detail & Related papers (2022-10-26T14:53:07Z) - A deep learning method for solving stochastic optimal control problems
driven by fully-coupled FBSDEs [0.2064612766965483]
In this paper, we focus on the numerical solution of high-dimensional optimal control problems driven by fully-coupled forward-backward stochastic differential equations (FBSDEs for short) through deep learning.
We first transform the problem into a Stackelberg differential game (leader-follower problem); a cross-optimization method (COCO method) is then developed in which the leader's cost functional and the follower's cost are optimized via deep neural networks.
As for the numerical results, we compute two examples of the investment-consumption problem solved through utility models, and the results of both examples demonstrate the effectiveness of our method.
arXiv Detail & Related papers (2022-04-12T13:31:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.