Deep Bayesian inference for seismic imaging with tasks
- URL: http://arxiv.org/abs/2110.04825v1
- Date: Sun, 10 Oct 2021 15:25:44 GMT
- Title: Deep Bayesian inference for seismic imaging with tasks
- Authors: Ali Siahkoohi and Gabrio Rizzuti and Felix J. Herrmann
- Abstract summary: We propose to use techniques from Bayesian inference and deep neural networks to translate uncertainty in seismic imaging to uncertainty in tasks performed on the image.
A systematic approach is proposed to translate uncertainty due to noise in the data to confidence intervals of automatically tracked horizons in the image.
- Score: 0.6445605125467573
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose to use techniques from Bayesian inference and deep neural networks
to translate uncertainty in seismic imaging to uncertainty in tasks performed
on the image, such as horizon tracking. Seismic imaging is an ill-posed inverse
problem because of unavoidable bandwidth and aperture limitations, and it is
further hampered by the presence of noise and linearization errors. Many
regularization methods, such as transform-domain sparsity promotion, have been
designed to deal with the adverse effects of these errors; however, these
methods run the risk of biasing the solution and do not provide information on
uncertainty in the image space or on how this uncertainty impacts certain tasks
performed on the image. A systematic approach is proposed to translate uncertainty due to
noise in the data to confidence intervals of automatically tracked horizons in
the image. The uncertainty is characterized by a convolutional neural network
(CNN) and, to assess this uncertainty, samples are drawn from the posterior
distribution of the CNN weights, which are used to parameterize the image. Compared to
traditional priors, it is argued in the literature that these CNNs introduce a
flexible inductive bias that is a surprisingly good fit for a diverse set of
imaging domains. The method of stochastic gradient Langevin dynamics is
employed to sample from the posterior distribution. This method is designed to
handle large-scale Bayesian inference problems with computationally expensive
forward operators, as in seismic imaging. Aside from offering a robust
alternative to the maximum a posteriori estimate, which is prone to overfitting,
access to these samples allows us to translate uncertainty in the image, due to
noise in the data, into uncertainty on the tracked horizons. For instance, it
admits estimates for the pointwise standard deviation on the image and for
confidence intervals on its automatically tracked horizons.
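For a concrete picture of the workflow the abstract describes, the sketch below (not the authors' implementation) shows how stochastic gradient Langevin dynamics (SGLD) over the weights of a CNN that reparameterizes the image yields approximate posterior image samples, and how those samples translate into a pointwise standard deviation and confidence intervals on an automatically tracked horizon. The stand-in forward operator `A`, noise level `sigma`, network architecture, and the toy argmax-based `track_horizon` picker are illustrative assumptions, not the paper's setup.

```python
# Minimal sketch: deep-prior parameterization + SGLD posterior sampling,
# then translation of image uncertainty into horizon (task) uncertainty.
import torch

torch.manual_seed(0)

# --- toy problem setup (illustrative stand-ins) -----------------------------
nz, nx = 64, 64                        # image grid: depth x lateral position
A = torch.randn(512, nz * nx) / 25.0   # stand-in for a linearized Born operator
x_true = torch.zeros(nz, nx)
x_true[20, :] = 1.0                    # one flat reflector (the "horizon")
sigma = 0.05
d_obs = A @ x_true.flatten() + sigma * torch.randn(A.shape[0])   # noisy data

# --- CNN that parameterizes the image (deep prior), x = g(z; w) -------------
g = torch.nn.Sequential(
    torch.nn.Conv2d(8, 16, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(16, 1, 3, padding=1),
)
z = torch.randn(1, 8, nz, nx)          # fixed latent input

def image(net):
    return net(z).squeeze()

def log_posterior(net):
    x = image(net).flatten()
    log_lik = -0.5 / sigma**2 * torch.sum((A @ x - d_obs) ** 2)
    log_prior = -0.5 * sum((p ** 2).sum() for p in net.parameters())  # Gaussian prior on w
    return log_lik + log_prior

# --- SGLD: gradient step on the log posterior plus scaled Gaussian noise ----
eps, n_steps, burn_in, thin = 1e-6, 2000, 1000, 50
samples = []
for k in range(n_steps):
    g.zero_grad()
    (-log_posterior(g)).backward()
    with torch.no_grad():
        for p in g.parameters():
            p += -0.5 * eps * p.grad + eps ** 0.5 * torch.randn_like(p)
    if k >= burn_in and k % thin == 0:
        samples.append(image(g).detach())

# --- translate image uncertainty into task (horizon) uncertainty ------------
stack = torch.stack(samples)                       # posterior image samples
pointwise_std = stack.std(dim=0)                   # per-pixel standard deviation

def track_horizon(img):
    # toy automatic tracker: depth of the strongest reflector in each trace
    return img.abs().argmax(dim=0).float()

horizons = torch.stack([track_horizon(s) for s in stack])
lo = torch.quantile(horizons, 0.025, dim=0)        # 95% confidence band on the
hi = torch.quantile(horizons, 0.975, dim=0)        # tracked horizon, per trace
print(pointwise_std.mean(), (hi - lo).mean())
```

In the paper itself the forward operator is the computationally expensive linearized Born modeling operator and the horizons come from an automatic horizon-tracking algorithm; the sketch only illustrates how posterior weight samples are turned into pointwise standard deviations and task-level confidence intervals.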
Related papers
- One step closer to unbiased aleatoric uncertainty estimation [71.55174353766289]
We propose a new estimation method by actively de-noising the observed data.
By conducting a broad range of experiments, we demonstrate that our proposed approach provides a much closer approximation to the actual data uncertainty than the standard method.
arXiv Detail & Related papers (2023-12-16T14:59:11Z) - Instant Uncertainty Calibration of NeRFs Using a Meta-calibrator [60.47106421809998]
We introduce the concept of a meta-calibrator that performs uncertainty calibration for NeRFs with a single forward pass.
We show that the meta-calibrator can generalize on unseen scenes and achieves well-calibrated and state-of-the-art uncertainty for NeRFs.
arXiv Detail & Related papers (2023-12-04T21:29:31Z) - Equivariant Bootstrapping for Uncertainty Quantification in Imaging
Inverse Problems [0.24475591916185502]
We present a new uncertainty quantification methodology based on an equivariant formulation of the parametric bootstrap algorithm.
The proposed methodology is general and can be easily applied with any image reconstruction technique.
We demonstrate the proposed approach with a series of numerical experiments and through comparisons with alternative uncertainty quantification strategies.
arXiv Detail & Related papers (2023-10-18T09:43:15Z) - Gradient-based Uncertainty for Monocular Depth Estimation [5.7575052885308455]
In monocular depth estimation, disturbances in the image context, like moving objects or reflecting materials, can easily lead to erroneous predictions.
We propose a post hoc uncertainty estimation approach for an already trained and thus fixed depth estimation model.
Our approach achieves state-of-the-art uncertainty estimation results on the KITTI and NYU Depth V2 benchmarks without the need to retrain the neural network.
arXiv Detail & Related papers (2022-08-03T12:21:02Z) - NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural
Networks [151.03112356092575]
We show a principled way to measure the uncertainty of predictions for a classifier based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
arXiv Detail & Related papers (2022-02-07T12:30:45Z) - Scene Uncertainty and the Wellington Posterior of Deterministic Image
Classifiers [68.9065881270224]
We introduce the Wellington Posterior, which is the distribution of outcomes that would have been obtained in response to data that could have been generated by the same scene.
We explore the use of data augmentation, dropout, ensembling, single-view reconstruction, and model linearization to compute a Wellington Posterior.
Additional methods include the use of conditional generative models such as generative adversarial networks, neural radiance fields, and conditional prior networks.
arXiv Detail & Related papers (2021-06-25T20:10:00Z) - Attribute-Guided Adversarial Training for Robustness to Natural
Perturbations [64.35805267250682]
We propose an adversarial training approach which learns to generate new samples so as to maximize exposure of the classifier to the attributes-space.
Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations.
arXiv Detail & Related papers (2020-12-03T10:17:30Z) - Quantifying Sources of Uncertainty in Deep Learning-Based Image
Reconstruction [5.129343375966527]
We propose a scalable and efficient framework to simultaneously quantify aleatoric and epistemic uncertainties in learned iterative image reconstruction.
We show that our method exhibits competitive performance against conventional benchmarks for computed tomography with both sparse view and limited angle data.
arXiv Detail & Related papers (2020-11-17T04:12:52Z) - Salvage Reusable Samples from Noisy Data for Robust Learning [70.48919625304]
We propose a reusable sample selection and correction approach, termed as CRSSC, for coping with label noise in training deep FG models with web images.
Our key idea is to additionally identify and correct reusable samples, and then leverage them together with clean examples to update the networks.
arXiv Detail & Related papers (2020-08-06T02:07:21Z) - Uncertainty quantification in imaging and automatic horizon tracking: a
Bayesian deep-prior based approach [0.5156484100374059]
Uncertainty quantification (UQ) deals with a probabilistic description of the solution nonuniqueness and data noise sensitivity.
In this paper, we focus on how UQ trickles down to horizon tracking for the determination of stratigraphic models.
arXiv Detail & Related papers (2020-04-01T04:26:33Z) - A deep-learning based Bayesian approach to seismic imaging and
uncertainty quantification [0.4588028371034407]
Uncertainty is essential when dealing with ill-conditioned inverse problems.
It is often not possible to formulate a prior distribution that precisely encodes our prior knowledge about the unknown.
We propose to use the functional form of a randomly initialized convolutional neural network as an implicit structured prior.
arXiv Detail & Related papers (2020-01-13T23:46:18Z)