Bayesian Inversion with Neural Operator (BINO) for Modeling Subdiffusion: Forward and Inverse Problems
- URL: http://arxiv.org/abs/2211.11981v1
- Date: Tue, 22 Nov 2022 03:32:48 GMT
- Title: Bayesian Inversion with Neural Operator (BINO) for Modeling Subdiffusion: Forward and Inverse Problems
- Authors: Xiong-bin Yan and Zhi-Qin John Xu and Zheng Ma
- Abstract summary: We propose a Bayesian Inversion with Neural Operator (BINO) to overcome the difficulties of traditional numerical methods.
We employ a deep operator network to learn the solution operators for the fractional diffusion equations.
In addition, we integrate the deep operator network with a Bayesian inversion method for modeling problems governed by subdiffusion processes and solving inverse subdiffusion problems.
- Score: 6.114065706275863
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fractional diffusion equations have been an effective tool for modeling
anomalous diffusion in complicated systems. However, traditional numerical
methods incur high computational cost and storage requirements because of the
memory effect introduced by the convolution integral of the time-fractional
derivative. We propose a Bayesian Inversion with Neural Operator (BINO) to
overcome these difficulties as follows. We employ a deep
operator network to learn the solution operators for the fractional diffusion
equations, allowing us to swiftly and precisely solve a forward problem for
given inputs (including fractional order, diffusion coefficient, source terms,
etc.). In addition, we integrate the deep operator network with a Bayesian
inversion method for modeling problems governed by subdiffusion processes and
solving inverse subdiffusion problems, which significantly reduces time costs
without demanding excessive storage. Extensive numerical
experiments demonstrate that the operator learning method proposed in this work
can efficiently solve the forward problems and Bayesian inverse problems of the
subdiffusion equation.
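The memory effect mentioned in the abstract can be made concrete. The widely used L1 discretization of the Caputo derivative of order α ∈ (0, 1) evaluates, at every time step, a weighted sum over the entire solution history, so N steps cost O(N²) work and O(N) storage per degree of freedom. A minimal sketch for the scalar decay model D_t^α u = -λu (illustrative only; the toy problem and function names are our own, not the paper's solver):

```python
import numpy as np
from math import gamma

def l1_weights(alpha: float, n: int) -> np.ndarray:
    """L1 weights b_k = (k+1)^(1-alpha) - k^(1-alpha), with b_0 = 1."""
    k = np.arange(n)
    return (k + 1.0) ** (1.0 - alpha) - k ** (1.0 - alpha)

def solve_caputo_decay(alpha: float, u0: float, lam: float,
                       T: float, N: int) -> np.ndarray:
    """Solve the toy model D_t^alpha u = -lam * u, u(0) = u0, on [0, T]
    with N steps of the L1 scheme. Each step sums over the whole solution
    history -- the "memory effect": O(N^2) total work, O(N) storage."""
    dt = T / N
    c = gamma(2.0 - alpha) * dt ** alpha   # scale factor of the L1 scheme
    b = l1_weights(alpha, N)
    u = np.empty(N + 1)
    u[0] = u0
    for n in range(1, N + 1):
        # weighted sum of ALL past increments u_{n-k} - u_{n-k-1}, k = 1..n-1
        hist = np.dot(b[1:n], u[n-1:0:-1] - u[n-2::-1]) if n > 1 else 0.0
        u[n] = (u[n - 1] - hist) / (1.0 + lam * c)
    return u

u = solve_caputo_decay(alpha=0.5, u0=1.0, lam=1.0, T=1.0, N=200)
```

As α approaches 1 the weights b_k vanish for k ≥ 1 and the scheme reduces to backward Euler, i.e. ordinary diffusion without memory. The quadratic cost of the history sum is what a learned solution operator is meant to bypass.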
Related papers
- Total Uncertainty Quantification in Inverse PDE Solutions Obtained with Reduced-Order Deep Learning Surrogate Models [50.90868087591973]
We propose an approximate Bayesian method for quantifying the total uncertainty in inverse PDE solutions obtained with machine learning surrogate models.
We test the proposed framework by comparing it with the iterative ensemble smoother and deep ensembling methods for a non-linear diffusion equation.
arXiv Detail & Related papers (2024-08-20T19:06:02Z)
- ODE-DPS: ODE-based Diffusion Posterior Sampling for Inverse Problems in Partial Differential Equation [1.8356973269166506]
We introduce a novel unsupervised inversion methodology tailored for solving inverse problems arising from PDEs.
Our approach operates within the Bayesian inversion framework, treating the task of solving the posterior distribution as a conditional generation process.
To enhance the accuracy of inversion results, we propose an ODE-based Diffusion inversion algorithm.
arXiv Detail & Related papers (2024-04-21T00:57:13Z)
- Denoising Diffusion Restoration Tackles Forward and Inverse Problems for the Laplace Operator [3.8426297727671352]
This paper presents a novel approach to the forward and inverse solution of PDEs through the use of denoising diffusion restoration models (DDRM).
DDRMs were used in linear inverse problems to restore original clean signals by exploiting the singular value decomposition (SVD) of the linear operator.
Our results show that using denoising diffusion restoration significantly improves the estimation of the solution and parameters.
arXiv Detail & Related papers (2024-02-13T16:04:41Z)
- Prompt-tuning latent diffusion models for inverse problems [72.13952857287794]
We propose a new method for solving imaging inverse problems using text-to-image latent diffusion models as general priors.
Our method, called P2L, outperforms both image- and latent-diffusion model-based inverse problem solvers on a variety of tasks, such as super-resolution, deblurring, and inpainting.
arXiv Detail & Related papers (2023-10-02T11:31:48Z)
- Solving Inverse Problems with Latent Diffusion Models via Hard Data Consistency [7.671153315762146]
Training diffusion models in the pixel space is both data-intensive and computationally demanding.
Latent diffusion models, which operate in a much lower-dimensional space, offer a solution to these challenges.
We propose ReSample, an algorithm that can solve general inverse problems with pre-trained latent diffusion models.
arXiv Detail & Related papers (2023-07-16T18:42:01Z)
- A Variational Perspective on Solving Inverse Problems with Diffusion Models [101.831766524264]
Inverse tasks can be formulated as inferring a posterior distribution over data.
This is however challenging in diffusion models since the nonlinear and iterative nature of the diffusion process renders the posterior intractable.
We propose a variational approach that by design seeks to approximate the true posterior distribution.
arXiv Detail & Related papers (2023-05-07T23:00:47Z)
- Laplace-fPINNs: Laplace-based fractional physics-informed neural networks for solving forward and inverse problems of subdiffusion [6.114065706275863]
We propose an extension to PINNs called Laplace-fPINNs for solving the forward and inverse problems of fractional diffusion equations.
Our numerical results demonstrate that the Laplace-fPINNs method remains effective for both problems even in high dimensions.
arXiv Detail & Related papers (2023-04-03T11:55:39Z)
- GibbsDDRM: A Partially Collapsed Gibbs Sampler for Solving Blind Inverse Problems with Denoising Diffusion Restoration [64.8770356696056]
We propose GibbsDDRM, an extension of Denoising Diffusion Restoration Models (DDRM) to a blind setting in which the linear measurement operator is unknown.
The proposed method is problem-agnostic, meaning that a pre-trained diffusion model can be applied to various inverse problems without fine-tuning.
arXiv Detail & Related papers (2023-01-30T06:27:48Z)
- Fast Sampling of Diffusion Models via Operator Learning [74.37531458470086]
We use neural operators, an efficient method to solve the probability flow differential equations, to accelerate the sampling process of diffusion models.
Compared to other fast sampling methods that have a sequential nature, we are the first to propose a parallel decoding method.
We show our method achieves state-of-the-art FID of 3.78 for CIFAR-10 and 7.83 for ImageNet-64 in the one-model-evaluation setting.
arXiv Detail & Related papers (2022-11-24T07:30:27Z)
- Diffusion Posterior Sampling for General Noisy Inverse Problems [50.873313752797124]
We extend diffusion solvers to handle noisy (non)linear inverse problems via approximation of the posterior sampling.
Our method demonstrates that diffusion models can incorporate various measurement noise statistics.
arXiv Detail & Related papers (2022-09-29T11:12:27Z)
- Semi-supervised Invertible DeepONets for Bayesian Inverse Problems [8.594140167290098]
DeepONets offer a powerful, data-driven tool for solving parametric PDEs by learning operators.
In this work, we employ physics-informed DeepONets in the context of high-dimensional, Bayesian inverse problems.
arXiv Detail & Related papers (2022-09-06T18:55:06Z)
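Several entries above, like the main paper, rely on the DeepONet architecture, which approximates an operator G by pairing a branch network (encoding the input function sampled at m fixed sensors) with a trunk network (encoding a query location y), combined by a dot product: G(u)(y) ≈ ⟨branch(u), trunk(y)⟩. A minimal, untrained sketch (layer sizes and names are illustrative assumptions, not any of these papers' implementations):

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_params(sizes):
    """Random weights for a small MLP (illustration only, untrained)."""
    return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp(params, x):
    """Forward pass with tanh on all but the last layer."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

m, p = 50, 32                        # sensor count, latent dimension
branch = mlp_params([m, 64, p])      # encodes the input function u(x_1..x_m)
trunk = mlp_params([1, 64, p])       # encodes the query location y

def deeponet(u_sensors, y):
    """G(u)(y) ~ <branch(u), trunk(y)>: operator output at each query y."""
    b = mlp(branch, u_sensors)           # shape (p,)
    t = mlp(trunk, np.atleast_2d(y))     # shape (n_query, p)
    return t @ b                         # shape (n_query,)

u = np.sin(np.linspace(0, np.pi, m))    # example input function at the sensors
ys = np.linspace(0, 1, 5)[:, None]      # query points
out = deeponet(u, ys)                   # one scalar prediction per query point
```

In practice both networks are trained jointly on (input function, query point, output value) triples; once trained, the surrogate replaces the expensive PDE solver inside a Bayesian inversion loop, which is the speedup BINO exploits.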
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.