Physics-Informed Machine Learning Method for Large-Scale Data
Assimilation Problems
- URL: http://arxiv.org/abs/2108.00037v1
- Date: Fri, 30 Jul 2021 18:43:14 GMT
- Title: Physics-Informed Machine Learning Method for Large-Scale Data
Assimilation Problems
- Authors: Yu-Hong Yeung (1), David A. Barajas-Solano (1), Alexandre M.
Tartakovsky (1 and 2) ((1) Physical and Computational Sciences Directorate,
Pacific Northwest National Laboratory, (2) Department of Civil and
Environmental Engineering, University of Illinois Urbana-Champaign)
- Abstract summary: We extend the physics-informed conditional Karhunen-Loève expansion (PICKLE) method for modeling subsurface flow with unknown flux (Neumann) and varying head (Dirichlet) boundary conditions.
We demonstrate that the PICKLE method is comparable in accuracy with the standard maximum a posteriori (MAP) method, but is significantly faster than MAP for large-scale problems.
- Score: 48.7576911714538
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We develop a physics-informed machine learning approach for large-scale data
assimilation and parameter estimation and apply it for estimating
transmissivity and hydraulic head in the two-dimensional steady-state
subsurface flow model of the Hanford Site given synthetic measurements of said
variables. In our approach, we extend the physics-informed conditional
Karhunen-Lo\'{e}ve expansion (PICKLE) method for modeling subsurface flow with
unknown flux (Neumann) and varying head (Dirichlet) boundary conditions. We
demonstrate that the PICKLE method is comparable in accuracy with the standard
maximum a posteriori (MAP) method, but is significantly faster than MAP for
large-scale problems. Both methods use a mesh to discretize the computational
domain. In MAP, the parameters and states are discretized on the mesh;
therefore, the size of the MAP parameter estimation problem directly depends on
the mesh size. In PICKLE, the mesh is used to evaluate the residuals of the
governing equation, while the parameters and states are approximated by the
truncated conditional Karhunen-Lo\'{e}ve expansions with the number of
parameters controlled by the smoothness of the parameter and state fields, and
not by the mesh size. For the considered example, we demonstrate that the
computational cost of PICKLE increases near-linearly (as $N_{FV}^{1.15}$) with
the number of grid points $N_{FV}$, while that of MAP increases much faster, as
$N_{FV}^{3.28}$. We also show that once trained for one set of Dirichlet
boundary conditions (i.e., one river stage), the PICKLE method provides
accurate estimates of the hydraulic head for any value of the Dirichlet
boundary conditions (i.e., for any river stage).
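The abstract's key scalability claim rests on the truncated Karhunen-Loève expansion: the number of retained modes is set by the smoothness (correlation length) of the field, not by the mesh size. The following is a minimal illustrative sketch of that idea for a 1D Gaussian random field; the grid size, squared-exponential kernel, correlation length, and 99% variance truncation rule are assumptions for illustration, not the paper's PICKLE implementation.

```python
import numpy as np

n = 200                               # number of grid points (assumed)
x = np.linspace(0.0, 1.0, n)
ell = 0.2                             # correlation length (assumed)

# Squared-exponential covariance: C_ij = exp(-(x_i - x_j)^2 / (2 ell^2))
C = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / ell**2)

# Eigendecomposition; eigh returns eigenvalues in ascending order,
# so flip to descending.
lam, phi = np.linalg.eigh(C)
lam, phi = lam[::-1], phi[:, ::-1]

# Keep the fewest modes that capture 99% of the total variance.
frac = np.cumsum(lam) / np.sum(lam)
k = int(np.searchsorted(frac, 0.99) + 1)

# Sample one realization from the truncated expansion:
# field = sum_{i<k} sqrt(lam_i) * xi_i * phi_i, with xi_i ~ N(0, 1).
rng = np.random.default_rng(0)
xi = rng.standard_normal(k)
field = phi[:, :k] @ (np.sqrt(lam[:k]) * xi)

# The smoother the field (larger ell), the faster the eigenvalues
# decay and the fewer modes are needed -- far fewer than n.
print(k, field.shape)
```

Refining the mesh (larger n) leaves k essentially unchanged, which is the mechanism behind the parameter count being controlled by field smoothness rather than mesh size.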
Related papers
- Randomized Physics-Informed Machine Learning for Uncertainty
Quantification in High-Dimensional Inverse Problems [49.1574468325115]
We propose a physics-informed machine learning method for uncertainty quantification in high-dimensional inverse problems.
We show analytically and through comparison with Hamiltonian Monte Carlo that the rPICKLE posterior converges to the true posterior given by the Bayes rule.
arXiv Detail & Related papers (2023-12-11T07:33:16Z) - Conditional Karhunen-Lo\'{e}ve regression model with Basis Adaptation
for high-dimensional problems: uncertainty quantification and inverse
modeling [62.997667081978825]
We propose a methodology for improving the accuracy of surrogate models of the observable response of physical systems.
We apply the proposed methodology to constructing surrogate models via the Basis Adaptation (BA) method of the stationary hydraulic head response.
arXiv Detail & Related papers (2023-07-05T18:14:38Z) - Gaussian process regression and conditional Karhunen-Lo\'{e}ve models
for data assimilation in inverse problems [68.8204255655161]
We present a model inversion algorithm, CKLEMAP, for data assimilation and parameter estimation in partial differential equation models.
The CKLEMAP method provides better scalability compared to the standard MAP method.
arXiv Detail & Related papers (2023-01-26T18:14:12Z) - Adaptive deep density approximation for fractional Fokker-Planck
equations [6.066542157374599]
We present an explicit PDF model induced by a flow-based deep generative model, KRnet, which constructs a transport map from a simple distribution to the target distribution.
We consider two methods to approximate the fractional Laplacian.
Based on these two different ways for the approximation of the fractional Laplacian, we propose two models, MCNF and GRBFNF, to approximate stationary FPEs and time-dependent FPEs.
arXiv Detail & Related papers (2022-10-26T00:58:17Z) - Near-optimal estimation of smooth transport maps with kernel
sums-of-squares [81.02564078640275]
Under smoothness conditions, the squared Wasserstein distance between two distributions can be efficiently computed with appealing statistical error upper bounds.
The object of interest for applications such as generative modeling is the underlying optimal transport map.
We propose the first tractable algorithm for which the statistical $L^2$ error on the maps nearly matches the existing minimax lower-bounds for smooth map estimation.
arXiv Detail & Related papers (2021-12-03T13:45:36Z) - Variational encoder geostatistical analysis (VEGAS) with an application
to large scale riverine bathymetry [1.2093180801186911]
Estimation of riverbed profiles, also known as bathymetry, plays a vital role in many applications.
We propose a reduced-order model (ROM) based approach that utilizes a variational autoencoder (VAE), a type of deep neural network with a narrow layer in the middle.
We have tested our inversion approach on a one-mile reach of the Savannah River, GA, USA.
arXiv Detail & Related papers (2021-11-23T08:27:48Z) - Bayesian multiscale deep generative model for the solution of
high-dimensional inverse problems [0.0]
A novel multiscale Bayesian inference approach is introduced based on deep probabilistic generative models.
The method allows high-dimensional parameter estimation while exhibiting stability, efficiency and accuracy.
arXiv Detail & Related papers (2021-02-04T11:47:21Z) - A Near-Optimal Gradient Flow for Learning Neural Energy-Based Models [93.24030378630175]
We propose a novel numerical scheme to optimize the gradient flows for learning energy-based models (EBMs).
We derive a second-order Wasserstein gradient flow of the global relative entropy from Fokker-Planck equation.
Compared with existing schemes, Wasserstein gradient flow is a smoother and near-optimal numerical scheme to approximate real data densities.
arXiv Detail & Related papers (2019-10-31T02:26:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.