Solving inverse problems using conditional invertible neural networks
- URL: http://arxiv.org/abs/2007.15849v2
- Date: Fri, 19 Feb 2021 02:08:55 GMT
- Title: Solving inverse problems using conditional invertible neural networks
- Authors: Govinda Anantha Padmanabha, Nicholas Zabaras
- Abstract summary: We develop a model that maps the given observations to the unknown input field in the form of a surrogate model.
This inverse surrogate model will then allow us to estimate the unknown input field for any given sparse and noisy output observations.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inverse modeling for computing a high-dimensional spatially-varying property
field from indirect sparse and noisy observations is a challenging problem.
This is due to the complexity of the physical system of interest, often
expressed in the form of multiscale PDEs, the high dimensionality of the
spatial property of interest, and the incomplete and noisy nature of the
observations. To address these
challenges, we develop a model that maps the given observations to the unknown
input field in the form of a surrogate model. This inverse surrogate model will
then allow us to estimate the unknown input field for any given sparse and
noisy output observations. Here, the inverse mapping is limited to a broad
prior distribution of the input field with which the surrogate model is
trained. In this work, we construct two- and three-dimensional inverse
surrogate models, each consisting of an invertible network and a conditional
neural network
trained in an end-to-end fashion with limited training data. The invertible
network is developed using a flow-based generative model. The developed inverse
surrogate model is then applied to an inversion task for a multiphase flow
problem where, given pressure and saturation observations, the aim is to
recover a high-dimensional non-Gaussian permeability field in which the two
facies have heterogeneous permeability and varying length-scales. For both the
two- and three-dimensional surrogate models, the predicted sample realizations
of the non-Gaussian permeability field are diverse with the predictive mean
being close to the ground truth even when the model is trained with limited
data.
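The conditional invertible mapping described in the abstract can be illustrated with a minimal sketch. The code below implements a single conditional affine coupling layer, the standard building block of flow-based generative models, in plain NumPy. All dimensions, the tiny conditioner network, and the random weights are hypothetical stand-ins, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions: 8-dim "input field" x, 4-dim observations y.
D, D_Y = 8, 4
H = 16  # hidden width of the tiny conditioner network

# Random weights for a one-hidden-layer conditioner (illustration only).
W1 = rng.normal(scale=0.1, size=(D // 2 + D_Y, H))
W2 = rng.normal(scale=0.1, size=(H, D))  # outputs log-scale and shift, D//2 each

def conditioner(x1, y):
    """Tiny MLP mapping (x1, y) -> (log-scale s, shift t)."""
    h = np.tanh(np.concatenate([x1, y]) @ W1)
    out = h @ W2
    return out[: D // 2], out[D // 2 :]

def coupling_forward(x, y):
    """Conditional affine coupling: transform x2 given (x1, y); x1 passes through."""
    x1, x2 = x[: D // 2], x[D // 2 :]
    s, t = conditioner(x1, y)
    return np.concatenate([x1, x2 * np.exp(s) + t])

def coupling_inverse(z, y):
    """Exact inverse: recover x2 using the same conditioner outputs."""
    z1, z2 = z[: D // 2], z[D // 2 :]
    s, t = conditioner(z1, y)  # z1 == x1, so s and t match the forward pass
    return np.concatenate([z1, (z2 - t) * np.exp(-s)])

x = rng.normal(size=D)    # stand-in for a (flattened) permeability field
y = rng.normal(size=D_Y)  # stand-in for sparse pressure/saturation observations
z = coupling_forward(x, y)
x_rec = coupling_inverse(z, y)
print(np.allclose(x, x_rec))  # the layer is invertible by construction
```

In the paper's setting, stacks of such layers (with permutations between them) form the flow; the round trip here only demonstrates exact invertibility given the conditioning observations.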
Related papers
- Latent diffusion models for parameterization and data assimilation of facies-based geomodels [0.0]
Diffusion models are trained to generate new geological realizations from input fields characterized by random noise.
Latent diffusion models are shown to provide realizations that are visually consistent with samples from geomodeling software.
arXiv Detail & Related papers (2024-06-21T01:32:03Z)
- Amortizing intractable inference in diffusion models for vision, language, and control [89.65631572949702]
This paper studies amortized sampling of the posterior over data, $\mathbf{x} \sim p^{\mathrm{post}}(\mathbf{x}) \propto p(\mathbf{x}) r(\mathbf{x})$, in a model that consists of a diffusion generative model prior $p(\mathbf{x})$ and a black-box constraint or function $r(\mathbf{x})$.
We prove the correctness of a data-free learning objective, relative trajectory balance, for training a diffusion model that samples from
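As a toy illustration of a posterior of the form $p(\mathbf{x}) r(\mathbf{x})$ (not the amortized method of that paper), posterior expectations can be estimated by reweighting prior samples. The one-dimensional Gaussian prior and constraint below are assumptions chosen so the answer is known in closed form.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy model: prior p(x) = N(0, 1), constraint r(x) = exp(-(x - 2)^2),
# i.e. an unnormalized N(2, 1/2) likelihood. The product posterior is then
# N(4/3, 1/3) in closed form, which the estimate below should approach.
xs = rng.normal(size=200_000)           # samples from the prior p(x)
w = np.exp(-(xs - 2.0) ** 2)            # unnormalized weights r(x)
post_mean = np.sum(w * xs) / np.sum(w)  # self-normalized importance estimate
print(post_mean)                        # close to the exact value 4/3
```

Amortized samplers replace this reweighting with a trained model, which matters in high dimensions where importance sampling degenerates.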
arXiv Detail & Related papers (2024-05-31T16:18:46Z)
- Geometric Neural Diffusion Processes [55.891428654434634]
We extend the framework of diffusion models to incorporate a series of geometric priors in infinite-dimensional modelling.
We show that with these conditions, the generative functional model admits the same symmetry.
arXiv Detail & Related papers (2023-07-11T16:51:38Z)
- VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space shows an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z)
- Score-based Diffusion Models in Function Space [140.792362459734]
Diffusion models have recently emerged as a powerful framework for generative modeling.
We introduce a mathematically rigorous framework called Denoising Diffusion Operators (DDOs) for training diffusion models in function space.
We show that the corresponding discretized algorithm generates accurate samples at a fixed cost independent of the data resolution.
arXiv Detail & Related papers (2023-02-14T23:50:53Z)
- Variational Laplace Autoencoders [53.08170674326728]
Variational autoencoders employ an amortized inference model to approximate the posterior of latent variables.
We present a novel approach that addresses the limited posterior expressiveness of the fully-factorized Gaussian assumption.
We also present a general framework named Variational Laplace Autoencoders (VLAEs) for training deep generative models.
arXiv Detail & Related papers (2022-11-30T18:59:27Z)
- Generative models and Bayesian inversion using Laplace approximation [0.3670422696827525]
Recently, inverse problems have been solved using generative models as highly informative priors.
We show that derived Bayes estimates are consistent, in contrast to the approach employing the low-dimensional manifold of the generative model.
arXiv Detail & Related papers (2022-03-15T10:05:43Z)
- Uncertainty quantification and inverse modeling for subsurface flow in 3D heterogeneous formations using a theory-guided convolutional encoder-decoder network [5.018057056965207]
We build surrogate models for dynamic 3D subsurface single-phase flow problems with multiple vertical producing wells.
The surrogate model provides efficient pressure estimation of the entire formation at any timestep.
The well production rate or bottom hole pressure can then be determined based on Peaceman's formula.
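Peaceman's well model relates the simulated grid-block pressure to a well rate through a well index. The sketch below is an illustration for a vertical well in a square, isotropic grid block; the function name and all numerical values are assumptions, not taken from that paper.

```python
import math

def peaceman_rate(k, h, mu, dx, rw, p_block, p_wf, skin=0.0):
    """Volumetric rate q [m^3/s] from block pressure p_block to wellbore p_wf.

    k  : permeability [m^2]      h   : block thickness [m]
    mu : viscosity [Pa.s]        dx  : grid-block size [m]
    rw : wellbore radius [m]     skin: dimensionless skin factor
    """
    r0 = 0.198 * dx  # Peaceman's equivalent radius for a square isotropic block
    wi = 2.0 * math.pi * k * h / (math.log(r0 / rw) + skin)  # well index
    return wi * (p_block - p_wf) / mu

# Illustrative numbers: ~100 mD permeability, 10 m thick block, 1 cP fluid,
# 50 m grid block, 5 MPa drawdown.
q = peaceman_rate(k=1e-13, h=10.0, mu=1e-3, dx=50.0, rw=0.1,
                  p_block=2.0e7, p_wf=1.5e7)
print(q)  # a few litres per second for these values
```

Conversely, a bottom-hole pressure for a prescribed rate follows by inverting the same relation: p_wf = p_block - q * mu / wi.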
arXiv Detail & Related papers (2021-11-14T10:11:46Z)
- Data-driven Uncertainty Quantification in Computational Human Head Models [0.6745502291821954]
Modern biofidelic head model simulations are associated with very high computational cost and high-dimensional inputs and outputs.
In this study, a two-stage, data-driven manifold learning-based framework is proposed for uncertainty quantification (UQ) of computational head models.
It is demonstrated that the surrogate models provide highly accurate approximations of the computational model while significantly reducing the computational cost.
arXiv Detail & Related papers (2021-10-29T05:42:31Z)
- Deep-learning-based coupled flow-geomechanics surrogate model for CO$_2$ sequestration [4.635171370680939]
The 3D recurrent R-U-Net model combines deep convolutional and recurrent neural networks to capture the spatial distribution and temporal evolution of saturation, pressure and surface displacement fields.
The surrogate model is trained to predict the 3D CO2 saturation and pressure fields in the storage aquifer, and 2D displacement maps at the Earth's surface.
arXiv Detail & Related papers (2021-05-04T07:34:15Z)
- Goal-directed Generation of Discrete Structures with Conditional Generative Models [85.51463588099556]
We introduce a novel approach to directly optimize a reinforcement learning objective, maximizing an expected reward.
We test our methodology on two tasks: generating molecules with user-defined properties and identifying short Python expressions which evaluate to a given target value.
arXiv Detail & Related papers (2020-10-05T20:03:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.