Parameterizing uncertainty by deep invertible networks, an application to reservoir characterization
- URL: http://arxiv.org/abs/2004.07871v1
- Date: Thu, 16 Apr 2020 18:37:56 GMT
- Title: Parameterizing uncertainty by deep invertible networks, an application to reservoir characterization
- Authors: Gabrio Rizzuti, Ali Siahkoohi, Philipp A. Witte, and Felix J. Herrmann
- Abstract summary: Uncertainty quantification for full-waveform inversion provides a probabilistic characterization of the ill-conditioning of the problem.
We propose an approach characterized by training a deep network that "pushes forward" Gaussian random inputs into the model space as if they were sampled from the actual posterior distribution.
- Score: 0.9176056742068814
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Uncertainty quantification for full-waveform inversion provides a
probabilistic characterization of the ill-conditioning of the problem,
comprising the sensitivity of the solution with respect to the starting model
and data noise. This analysis allows one to assess the confidence in the candidate
solution and how it is reflected in the tasks that are typically performed
after imaging (e.g., stratigraphic segmentation following reservoir
characterization). Classically, uncertainty comes in the form of a probability
distribution formulated from Bayesian principles, from which we seek to obtain
samples. A popular solution involves Monte Carlo sampling. Here, we propose
instead an approach characterized by training a deep network that "pushes
forward" Gaussian random inputs into the model space (representing, for
example, density or velocity) as if they were sampled from the actual posterior
distribution. Such a network is designed to solve a variational optimization
problem based on the Kullback-Leibler divergence between the posterior and the
network output distributions. This work is fundamentally rooted in recent
developments for invertible networks. Special invertible architectures, besides
being computationally advantageous with respect to traditional networks, also
enable analytic computation of the output density function. Therefore, after
training, these networks can be readily used as a new prior for a related
inversion problem. This stands in stark contrast with Monte Carlo methods,
which only produce samples. We validate these ideas with an application to
angle-versus-ray parameter analysis for reservoir characterization.
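
The push-forward construction in the abstract can be made concrete in a few lines. Below is a minimal sketch, not the authors' implementation: a small RealNVP-style invertible network maps Gaussian inputs to model samples and is trained on the reverse Kullback-Leibler objective, which up to an additive constant equals E_z[ U(T(z)) - log|det dT/dz| ] for negative log posterior U. The linear forward operator `A`, data `d_obs`, and noise level `sigma` are hypothetical stand-ins for a seismic modeling setup.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """RealNVP-style coupling layer: invertible, with a cheap log-determinant."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * self.half))

    def forward(self, x, flip):
        x1, x2 = x.chunk(2, dim=1)            # split channels in half
        if flip:
            x1, x2 = x2, x1
        s, t = self.net(x1).chunk(2, dim=1)
        s = torch.tanh(s)                     # keep scales well-behaved
        y2 = x2 * torch.exp(s) + t            # transform one half, keep the other
        y = torch.cat([y2, x1] if flip else [x1, y2], dim=1)
        return y, s.sum(dim=1)                # log|det J| = sum of log-scales

class Flow(nn.Module):
    def __init__(self, dim, depth=6):
        super().__init__()
        self.layers = nn.ModuleList(AffineCoupling(dim) for _ in range(depth))

    def forward(self, z):
        x, logdet = z, z.new_zeros(z.shape[0])
        for i, layer in enumerate(self.layers):
            x, ld = layer(x, flip=bool(i % 2))
            logdet = logdet + ld
        return x, logdet

def neg_log_posterior(x):
    # Hypothetical stand-in: linear forward operator, Gaussian noise,
    # standard-normal prior; a real application would use a wave-equation
    # or AVP modeling operator here.
    r = x @ A.T - d_obs
    return 0.5 * (r ** 2).sum(dim=1) / sigma ** 2 + 0.5 * (x ** 2).sum(dim=1)

torch.manual_seed(0)
dim = 8
A, d_obs, sigma = torch.randn(4, dim), torch.randn(4), 0.1
flow = Flow(dim)
opt = torch.optim.Adam(flow.parameters(), lr=1e-3)
for step in range(2000):
    z = torch.randn(256, dim)                        # Gaussian random inputs
    x, logdet = flow(z)                              # pushed-forward samples
    loss = (neg_log_posterior(x) - logdet).mean()    # reverse-KL surrogate
    opt.zero_grad(); loss.backward(); opt.step()

samples, _ = flow(torch.randn(10000, dim))           # cheap posterior samples
```

Because each coupling layer is invertible with a triangular Jacobian, log|det dT/dz| is just the sum of predicted log-scales; this tractable density is what lets the trained network be reused as a prior in a subsequent inversion, as the abstract notes.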
Related papers
- A probabilistic, data-driven closure model for RANS simulations with aleatoric, model uncertainty [1.8416014644193066]
We propose a data-driven closure model for Reynolds-averaged Navier-Stokes (RANS) simulations that incorporates aleatoric model uncertainty.
A fully Bayesian formulation is proposed, combined with a sparsity-inducing prior in order to identify regions in the problem domain where the parametric closure is insufficient.
arXiv Detail & Related papers (2023-07-05T16:53:31Z)
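
As a minimal illustration of the sparsity-inducing prior mentioned in the entry above (my sketch, not the paper's model): a Laplace prior on the closure parameters adds an L1 penalty to the negative log posterior, driving parameters to zero wherever the parametric closure is not supported by the data. `residual_fn` is a hypothetical stand-in for the RANS closure residuals.

```python
import numpy as np

def neg_log_posterior(theta, residual_fn, lam=1.0, sigma=0.1):
    """Gaussian data misfit plus Laplace (L1) prior => sparse MAP estimates."""
    r = residual_fn(theta)                        # hypothetical closure residuals
    misfit = 0.5 * np.sum(r ** 2) / sigma ** 2    # -log likelihood (Gaussian noise)
    sparsity = lam * np.sum(np.abs(theta))        # -log Laplace prior
    return misfit + sparsity
```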
- Learning to solve Bayesian inverse problems: An amortized variational inference approach using Gaussian and Flow guides [0.0]
We develop a methodology that enables real-time inference by learning the Bayesian inverse map, i.e., the map from data to posteriors.
Our approach provides the posterior distribution for a given observation just at the cost of a forward pass of the neural network.
arXiv Detail & Related papers (2023-05-31T16:25:07Z)
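
A minimal sketch of the amortized idea from the entry above (assuming a Gaussian guide; not the authors' code): a network maps an observation directly to posterior parameters, so inference for a new observation costs one forward pass.

```python
import torch
import torch.nn as nn

class AmortizedGaussianGuide(nn.Module):
    def __init__(self, data_dim, model_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(data_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * model_dim))

    def forward(self, d):
        mu, log_var = self.net(d).chunk(2, dim=-1)
        return mu, log_var          # q(x | d) = N(mu(d), diag(exp(log_var(d))))

    def sample(self, d, n=1):
        mu, log_var = self(d)
        eps = torch.randn(n, *mu.shape)
        return mu + torch.exp(0.5 * log_var) * eps    # reparameterized draws
```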
- Probabilistic Verification of ReLU Neural Networks via Characteristic Functions [11.489187712465325]
We use ideas from probability theory in the frequency domain to provide probabilistic verification guarantees for ReLU neural networks.
We interpret a (deep) feedforward neural network as a discrete dynamical system over a finite horizon.
We obtain the corresponding cumulative distribution function of the output set, which can be used to check if the network is performing as expected.
arXiv Detail & Related papers (2022-12-03T05:53:57Z)
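
The entry above propagates characteristic functions analytically; as a self-contained stand-in, this sketch estimates the characteristic function φ of a scalar output from Monte Carlo samples and recovers the CDF by Gil-Pelaez inversion, F(x) = 1/2 - (1/π) ∫₀^∞ Im(e^{-itx} φ(t)) / t dt. The toy "network" is just a ReLU applied to Gaussian noise.

```python
import numpy as np

def cdf_via_char_fn(samples, x, t_max=100.0, n_t=2000):
    t = np.linspace(1e-6, t_max, n_t)                         # avoid t = 0
    phi = np.array([np.exp(1j * ti * samples).mean() for ti in t])
    integrand = np.imag(np.exp(-1j * t * x) * phi) / t
    return 0.5 - np.trapz(integrand, t) / np.pi               # Gil-Pelaez formula

rng = np.random.default_rng(0)
out = np.maximum(rng.normal(size=50_000), 0.0)   # toy "ReLU network" output
print(cdf_via_char_fn(out, 0.5))                 # ~ P(N(0,1) <= 0.5) ~ 0.69
```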
- Deep Learning for the Benes Filter [91.3755431537592]
We present a new numerical method based on the mesh-free neural network representation of the density of the solution of the Benes model.
We discuss the role of nonlinearity in the filtering model equations for the choice of the domain of the neural network.
arXiv Detail & Related papers (2022-03-09T14:08:38Z)
- Sampling-free Variational Inference for Neural Networks with Multiplicative Activation Noise [51.080620762639434]
We propose a more efficient parameterization of the posterior approximation for sampling-free variational inference.
Our approach yields competitive results for standard regression problems and scales well to large-scale image classification tasks.
arXiv Detail & Related papers (2021-03-15T16:16:18Z)
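
A minimal sketch of the sampling-free idea (my illustration under a mean-field independence assumption, not the paper's exact parameterization): instead of drawing noisy activations, propagate their first two moments in closed form. For multiplicative noise ε ~ N(1, α) on an activation h, E[hε] = E[h] and Var[hε] = (1 + α)Var[h] + αE[h]²; a linear layer then maps means by W and variances by W².

```python
import numpy as np

def multiplicative_noise_moments(mean, var, alpha=0.1):
    # eps ~ N(1, alpha), independent of h:
    #   E[h*eps] = E[h],  Var[h*eps] = (1 + alpha)*Var[h] + alpha*E[h]^2
    return mean, (1.0 + alpha) * var + alpha * mean ** 2

def linear_layer_moments(mean, var, W, b):
    # Assumes (approximately) independent inputs, so variances add.
    return W @ mean + b, (W ** 2) @ var

rng = np.random.default_rng(0)
W, b = rng.normal(size=(3, 5)), np.zeros(3)
m, v = rng.normal(size=5), np.full(5, 0.01)    # input mean and variance
m, v = multiplicative_noise_moments(m, v)
m, v = linear_layer_moments(m, v, W, b)        # one deterministic "noisy" layer
```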
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
They are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
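
A minimal sketch of the entropy-raising idea from the entry above (my reading; `logits_aug` are hypothetical predictions on prior-augmented inputs): on augmented points, a cross-entropy penalty pulls the predictive distribution toward the label prior, here taken uniform, raising predictive entropy exactly where the model has no evidence.

```python
import torch
import torch.nn.functional as F

def calibration_loss(logits_data, labels, logits_aug, lam=0.5):
    ce = F.cross_entropy(logits_data, labels)           # usual fit term
    log_q = F.log_softmax(logits_aug, dim=1)
    prior = torch.full_like(log_q, 1.0 / log_q.shape[1])
    entropy_term = -(prior * log_q).sum(dim=1).mean()   # CE(prior, q) on aug points
    return ce + lam * entropy_term
```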
- The Ridgelet Prior: A Covariance Function Approach to Prior Specification for Bayesian Neural Networks [4.307812758854161]
We construct a prior distribution for the parameters of a network that approximates the posited Gaussian process in the output space of the network.
This establishes the property that a Bayesian neural network can approximate any Gaussian process whose covariance function is sufficiently regular.
arXiv Detail & Related papers (2020-10-16T16:39:45Z)
- Uncertainty quantification in imaging and automatic horizon tracking: a Bayesian deep-prior based approach [0.5156484100374059]
Uncertainty quantification (UQ) deals with a probabilistic description of the solution nonuniqueness and data noise sensitivity.
In this paper, we focus on how UQ trickles down to horizon tracking for the determination of stratigraphic models.
arXiv Detail & Related papers (2020-04-01T04:26:33Z)
- Spatially Adaptive Inference with Stochastic Feature Sampling and Interpolation [72.40827239394565]
We propose to compute features only at sparsely sampled locations.
We then densely reconstruct the feature map with an efficient procedure.
The presented network is experimentally shown to save substantial computation while maintaining accuracy over a variety of computer vision tasks.
arXiv Detail & Related papers (2020-03-19T15:36:31Z)
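
A toy sketch of the sparse-compute idea from the entry above (illustration only; the paper learns where to sample and reconstructs with a trained module): evaluate an expensive per-pixel feature at 10% of locations, then fill in the rest by interpolation.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
H, W = 64, 64
expensive_feature = lambda y, x: np.sin(0.1 * y) * np.cos(0.1 * x)  # stand-in

n = (H * W) // 10                                   # compute only 10% of points
ys, xs = rng.integers(0, H, n), rng.integers(0, W, n)
vals = expensive_feature(ys, xs)

yy, xx = np.mgrid[0:H, 0:W]
dense = griddata((ys, xs), vals, (yy, xx), method='nearest')  # dense reconstruction
```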
- Efficiently Sampling Functions from Gaussian Process Posteriors [76.94808614373609]
We propose an easy-to-use and general-purpose approach for fast posterior sampling.
We demonstrate how decoupled sample paths accurately represent Gaussian process posteriors at a fraction of the usual cost.
arXiv Detail & Related papers (2020-02-21T14:03:16Z)
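
A minimal sketch of decoupled, pathwise sampling (Matheron's rule), the mechanism behind the entry above: draw a prior function via random Fourier features for an RBF kernel, then correct it with the data, f_post(·) = f_prior(·) + K(·, X)(K(X, X) + σ²I)⁻¹(y − f_prior(X) − ε). The kernel choice and toy data are my assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
ell, s = 0.5, 0.1                                  # lengthscale, noise std
X = rng.uniform(-3, 3, size=(20, 1))               # training inputs
y = np.sin(X[:, 0]) + s * rng.normal(size=20)      # noisy observations
Xs = np.linspace(-3, 3, 200)[:, None]              # test inputs

def k(A, B):                                       # RBF kernel
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

# One prior function draw via random Fourier features: f(x) = w . phi(x)
M = 500
omega = rng.normal(size=(M, 1)) / ell              # spectral frequencies of RBF
tau = rng.uniform(0, 2 * np.pi, M)
phi = lambda A: np.sqrt(2.0 / M) * np.cos(A @ omega.T + tau)
w = rng.normal(size=M)
f_prior = lambda A: phi(A) @ w

# Matheron update: condition the prior draw on the observations.
eps = s * rng.normal(size=20)
alpha = np.linalg.solve(k(X, X) + s ** 2 * np.eye(20), y - f_prior(X) - eps)
f_post = f_prior(Xs) + k(Xs, X) @ alpha            # one posterior sample path
```

Once the prior features and `alpha` are in hand, each extra sample path costs only a feature redraw and a matrix-vector product, which is where the speedup over naive posterior sampling comes from.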
- Bayesian Deep Learning and a Probabilistic Perspective of Generalization [56.69671152009899]
We show that deep ensembles provide an effective mechanism for approximate Bayesian marginalization.
We also propose a related approach that further improves the predictive distribution by marginalizing within basins of attraction.
arXiv Detail & Related papers (2020-02-20T15:13:27Z)
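
A minimal sketch of the ensemble-as-marginalization view from the entry above: the approximate Bayesian predictive averages each member's predictive distribution, p(y|x) ≈ (1/K) Σₖ p(y|x, θₖ). The `models` list stands for independently trained networks.

```python
import torch

def ensemble_predictive(models, x):
    probs = torch.stack([model(x).softmax(dim=-1) for model in models])
    return probs.mean(dim=0)    # marginalize the prediction over ensemble members
```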
This list is automatically generated from the titles and abstracts of the papers on this site.