A deep-learning based Bayesian approach to seismic imaging and
uncertainty quantification
- URL: http://arxiv.org/abs/2001.04567v2
- Date: Wed, 15 Jan 2020 04:10:53 GMT
- Title: A deep-learning based Bayesian approach to seismic imaging and
uncertainty quantification
- Authors: Ali Siahkoohi, Gabrio Rizzuti, and Felix J. Herrmann
- Abstract summary: Uncertainty quantification is essential when dealing with ill-conditioned inverse problems.
It is often not possible to formulate a prior distribution that precisely encodes our prior knowledge about the unknown.
We propose to use the functional form of a randomly initialized convolutional neural network as an implicit structured prior.
- Score: 0.4588028371034407
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Uncertainty quantification is essential when dealing with ill-conditioned
inverse problems due to the inherent nonuniqueness of the solution. Bayesian
approaches allow us to determine how likely an estimation of the unknown
parameters is via formulating the posterior distribution. Unfortunately, it is
often not possible to formulate a prior distribution that precisely encodes our
prior knowledge about the unknown. Furthermore, adherence to handcrafted priors
may greatly bias the outcome of the Bayesian analysis. To address this issue,
we propose to use the functional form of a randomly initialized convolutional
neural network as an implicit structured prior, which is shown to promote
natural images and to exclude images with unnatural noise. In order to
incorporate the model uncertainty into the final estimate, we sample the
posterior distribution using stochastic gradient Langevin dynamics and perform
Bayesian model averaging on the obtained samples. Our synthetic numerical
experiment verifies that deep priors combined with Bayesian model averaging are
able to partially circumvent imaging artifacts and reduce the risk of
overfitting in the presence of extreme noise. Finally, we present pointwise
variance of the estimates as a measure of uncertainty, which coincides with
regions that are more difficult to image.
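To make the workflow above concrete, here is a minimal sketch (not the authors' released code) of sampling the posterior with stochastic gradient Langevin dynamics over the weights of a randomly initialized CNN used as a deep prior, followed by Bayesian model averaging and a pointwise-variance uncertainty map. A PyTorch-style setup is assumed, and `PriorCNN`, `forward_op`, `d_obs`, `z`, and all hyperparameters are illustrative placeholders.
```python
# Hypothetical sketch of the approach described above: SGLD over the weights
# of a randomly initialized CNN (deep prior), then Bayesian model averaging.
import torch

def sgld_deep_prior(PriorCNN, forward_op, d_obs, z, sigma=1.0,
                    step=1e-4, n_iter=5000, burn_in=2500, thin=50):
    g = PriorCNN()                       # randomly initialized CNN g(z; w)
    params = list(g.parameters())
    samples = []                         # posterior samples of the image
    for it in range(n_iter):
        g.zero_grad()
        x = g(z)                         # candidate image
        # negative log-likelihood (Gaussian noise) plus a Gaussian prior on w
        nll = ((forward_op(x) - d_obs) ** 2).sum() / (2 * sigma ** 2)
        nlp = 0.5 * sum((p ** 2).sum() for p in params)
        (nll + nlp).backward()
        with torch.no_grad():
            for p in params:             # SGLD: half gradient step + noise
                p -= 0.5 * step * p.grad
                p += step ** 0.5 * torch.randn_like(p)
        if it >= burn_in and it % thin == 0:
            samples.append(g(z).detach())
    stack = torch.stack(samples)
    # Bayesian model average and pointwise variance (uncertainty map)
    return stack.mean(0), stack.var(0)
```
The pointwise variance returned here plays the role of the uncertainty measure mentioned at the end of the abstract; the step size, burn-in length, and assumed noise level would all need tuning for a real survey.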
Related papers
- Calibrating Neural Simulation-Based Inference with Differentiable
Coverage Probability [50.44439018155837]
We propose to include a calibration term directly into the training objective of the neural model.
By introducing a relaxation of the classical formulation of calibration error we enable end-to-end backpropagation.
It is directly applicable to existing computational pipelines allowing reliable black-box posterior inference.
arXiv Detail & Related papers (2023-10-20T10:20:45Z)
- Score-Based Diffusion Models as Principled Priors for Inverse Imaging [46.19536250098105]
We propose turning score-based diffusion models into principled image priors.
We show how to sample from resulting posteriors by using this probability function for variational inference.
arXiv Detail & Related papers (2023-04-23T21:05:59Z)
- The Implicit Delta Method [61.36121543728134]
In this paper, we propose an alternative, the implicit delta method, which works by infinitesimally regularizing the training loss so that the uncertainty of a downstream evaluation can be quantified.
We show that the change in the evaluation due to regularization is consistent for the variance of the evaluation estimator, even when the infinitesimal change is approximated by a finite difference.
arXiv Detail & Related papers (2022-11-11T19:34:17Z)
- Posterior samples of source galaxies in strong gravitational lenses with
score-based priors [107.52670032376555]
We use a score-based model to encode the prior for the inference of undistorted images of background galaxies.
We show how the balance between the likelihood and the prior meets our expectations in an experiment with out-of-distribution data.
arXiv Detail & Related papers (2022-11-07T19:00:42Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural
Networks [151.03112356092575]
We show a principled way to measure the uncertainty of a classifier's predictions, based on the Nadaraya-Watson nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- Deep Bayesian inference for seismic imaging with tasks [0.6445605125467573]
We propose to use techniques from Bayesian inference and deep neural networks to translate uncertainty in seismic imaging to uncertainty in tasks performed on the image.
A systematic approach is proposed to translate uncertainty due to noise in the data to confidence intervals of automatically tracked horizons in the image.
arXiv Detail & Related papers (2021-10-10T15:25:44Z)
- Quantifying Sources of Uncertainty in Deep Learning-Based Image
Reconstruction [5.129343375966527]
We propose a scalable and efficient framework to simultaneously quantify aleatoric and epistemic uncertainties in learned iterative image reconstruction.
We show that our method exhibits competitive performance against conventional benchmarks for computed tomography with both sparse view and limited angle data.
arXiv Detail & Related papers (2020-11-17T04:12:52Z)
- Uncertainty Estimation in Medical Image Denoising with Bayesian Deep
Image Prior [2.0303656145222857]
Uncertainty in inverse medical imaging tasks with deep learning has received little attention.
Deep models trained on large data sets tend to hallucinate and create artifacts in the reconstructed output that are not present in the original data.
We use a randomly initialized convolutional network as the parameterization of the reconstructed image and perform gradient descent to match the observation, which is known as the deep image prior.
arXiv Detail & Related papers (2020-08-20T08:34:51Z)
- Uncertainty quantification in imaging and automatic horizon tracking: a
Bayesian deep-prior based approach [0.5156484100374059]
Uncertainty quantification (UQ) deals with a probabilistic description of the solution nonuniqueness and data noise sensitivity.
In this paper, we focus on how UQ trickles down to horizon tracking for the determination of stratigraphic models.
arXiv Detail & Related papers (2020-04-01T04:26:33Z)
- Bayesian Deep Learning and a Probabilistic Perspective of Generalization [56.69671152009899]
We show that deep ensembles provide an effective mechanism for approximate Bayesian marginalization.
We also propose a related approach that further improves the predictive distribution by marginalizing within basins of attraction.
arXiv Detail & Related papers (2020-02-20T15:13:27Z)
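For the last entry above, a generic illustration (the usual deep-ensemble recipe, not necessarily the paper's exact procedure) is to average the predictive distributions of several independently initialized and independently trained networks; `make_model`, `train_one`, `loader`, and `x_test` are hypothetical placeholders.
```python
# Hypothetical deep-ensemble sketch: approximate Bayesian marginalization by
# averaging the softmax outputs of K independently trained ensemble members.
import torch

def ensemble_predict(make_model, train_one, loader, x_test, K=5):
    probs = []
    for _ in range(K):
        model = make_model()             # fresh random initialization
        train_one(model, loader)         # standard training, e.g. SGD
        model.eval()
        with torch.no_grad():
            probs.append(torch.softmax(model(x_test), dim=-1))
    mean_p = torch.stack(probs).mean(0)  # approximate posterior predictive
    # predictive entropy as a simple per-example uncertainty score
    entropy = -(mean_p * mean_p.clamp_min(1e-12).log()).sum(-1)
    return mean_p, entropy
```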
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.