PINNs for the Solution of the Hyperbolic Buckley-Leverett Problem with a
Non-convex Flux Function
- URL: http://arxiv.org/abs/2112.14826v1
- Date: Wed, 29 Dec 2021 21:22:44 GMT
- Title: PINNs for the Solution of the Hyperbolic Buckley-Leverett Problem with a
Non-convex Flux Function
- Authors: Waleed Diab and Mohammed Al Kobaisi
- Abstract summary: The displacement of two immiscible fluids is a common problem in fluid flow in porous media.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The displacement of two immiscible fluids is a common problem in fluid flow
in porous media. Such a problem can be posed as a partial differential equation
(PDE) in what is commonly referred to as a Buckley-Leverett (B-L) problem. The
B-L problem is a non-linear hyperbolic conservation law that is known to be
notoriously difficult to solve using traditional numerical methods. Here, we
address the forward hyperbolic B-L problem with a non-convex flux function using
physics-informed neural networks (PINNs). The contributions of this paper are
twofold. First, we present a PINN approach to solve the hyperbolic B-L problem
by embedding the Oleinik entropy condition into the neural network residual. We
do not use a diffusion term (artificial viscosity) in the residual loss; instead,
we rely on the strong form of the PDE. Second, we use the Adam optimizer with a
residual-based adaptive refinement (RAR) algorithm to achieve an ultra-low loss
without weighting. Our solution method can accurately capture the shock-front
and produce an accurate overall solution. We report an L2 validation error of
2 × 10^-2 and an L2 loss of 1 × 10^-6. The proposed method does not require any
additional regularization or weighting of losses to obtain such an accurate
solution.
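To make the two contributions concrete, the sketch below shows one plausible PyTorch implementation of a strong-form PINN residual for S_t + f(S)_x = 0 with the non-convex fractional flow f(S) = S^2 / (S^2 + M (1 - S)^2), together with Adam training and residual-based adaptive refinement (RAR). It is an illustration under stated assumptions, not the authors' code: the Oleinik entropy condition is imposed here by replacing f with its concave hull (the Welge tangent line from S = 0 up to the tangent point S* = sqrt(M / (1 + M)), then f itself), and the mobility ratio M = 2, network size, learning rate, and refinement schedule are all hypothetical; initial- and boundary-condition loss terms are omitted for brevity.

```python
# Illustrative PINN for the forward Buckley-Leverett problem (PyTorch).
# Hypothetical setup: S_t + f(S)_x = 0 on (x, t) in [0, 1] x [0, 1].
import torch

M = 2.0                                 # assumed mobility ratio (illustrative only)
S_STAR = (M / (1.0 + M)) ** 0.5         # Welge tangent point: f'(S*) = f(S*) / S*

def frac_flow(s):
    """Non-convex Buckley-Leverett fractional flow f(S) = S^2 / (S^2 + M (1-S)^2)."""
    return s ** 2 / (s ** 2 + M * (1.0 - s) ** 2)

SLOPE = frac_flow(S_STAR) / S_STAR      # slope of the tangent line from the origin

def frac_flow_hull(s):
    """Concave hull of f: one Oleinik-consistent construction uses the tangent
    line below S* and f itself above it, ruling out entropy-violating shocks."""
    return torch.where(s < S_STAR, SLOPE * s, frac_flow(s))

net = torch.nn.Sequential(
    torch.nn.Linear(2, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 1), torch.nn.Sigmoid(),   # saturation constrained to [0, 1]
)

def residual(xt):
    """Strong-form residual S_t + f_hull(S)_x at collocation points xt = (x, t);
    no artificial-viscosity term is added."""
    xt = xt.requires_grad_(True)
    s = net(xt)
    f = frac_flow_hull(s)
    s_grad = torch.autograd.grad(s.sum(), xt, create_graph=True)[0]
    f_grad = torch.autograd.grad(f.sum(), xt, create_graph=True)[0]
    return s_grad[:, 1:2] + f_grad[:, 0:1]        # S_t + f_x  (column 0 = x, 1 = t)

def rar_refine(xt_train, n_pool=10_000, n_add=100):
    """Residual-based adaptive refinement: score a random pool of candidate
    points by |residual| and add the worst offenders to the training set."""
    pool = torch.rand(n_pool, 2)
    scores = residual(pool).abs().squeeze(1)
    worst = torch.topk(scores, n_add).indices
    return torch.cat([xt_train.detach(), pool[worst].detach()], dim=0)

# Adam training loop with periodic RAR (IC/BC loss terms omitted for brevity).
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
xt = torch.rand(1_000, 2)
for step in range(20_000):
    opt.zero_grad()
    loss = residual(xt).pow(2).mean()             # unweighted PDE loss
    loss.backward()
    opt.step()
    if (step + 1) % 2_000 == 0:
        xt = rar_refine(xt)
```

The unweighted PDE loss mirrors the abstract's claim that no loss weighting or artificial viscosity is needed, and the concave-hull flux is linear below S*, which removes the inflection point that would otherwise admit entropy-violating (non-Oleinik) shock solutions.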
Related papers
- Straightness of Rectified Flow: A Theoretical Insight into Wasserstein Convergence [54.580605276017096]
Rectified Flow (RF) aims to learn straight flow trajectories from noise to data using a sequence of convex optimization problems.
RF theoretically straightens the trajectory through successive rectifications, reducing the number of function evaluations (NFEs) while sampling.
We provide the first theoretical analysis of the Wasserstein distance between the sampling distribution of RF and the target distribution.
arXiv Detail & Related papers (2024-10-19T02:36:11Z)
- Total Uncertainty Quantification in Inverse PDE Solutions Obtained with Reduced-Order Deep Learning Surrogate Models [50.90868087591973]
We propose an approximate Bayesian method for quantifying the total uncertainty in inverse PDE solutions obtained with machine learning surrogate models.
We test the proposed framework by comparing it with the iterative ensemble smoother and deep ensembling methods for a non-linear diffusion equation.
arXiv Detail & Related papers (2024-08-20T19:06:02Z)
- Deep Backward and Galerkin Methods for the Finite State Master Equation [12.570464662548787]
This paper proposes and analyzes two neural network methods to solve the master equation for finite-state mean field games.
We prove two types of results: there exist neural networks that make the algorithms' loss functions arbitrarily small, and conversely, if the losses are small, then the neural networks are good approximations of the master equation's solution.
arXiv Detail & Related papers (2024-03-08T01:12:11Z)
- Physics-constrained convolutional neural networks for inverse problems in spatiotemporal partial differential equations [4.266376725904727]
We propose a physics-constrained convolutional neural network (PC-CNN) to solve two types of inverse problems in partial differential equations (PDEs).
In the first inverse problem, we are given data that is offset from the true solution by a spatially varying systematic bias.
In the second inverse problem, we are given partial information on the solution of the PDE.
We find that the PC-CNN correctly recovers the true solution for a variety of biases.
arXiv Detail & Related papers (2024-01-18T13:51:48Z)
- The Implicit Bias of Minima Stability in Multivariate Shallow ReLU Networks [53.95175206863992]
We study the type of solutions to which gradient descent converges when used to train a single hidden-layer multivariate ReLU network with the quadratic loss.
We prove that although shallow ReLU networks are universal approximators, stable shallow networks are not.
arXiv Detail & Related papers (2023-06-30T09:17:39Z)
- Physics-Informed Neural Network Method for Parabolic Differential Equations with Sharply Perturbed Initial Conditions [68.8204255655161]
We develop a physics-informed neural network (PINN) model for parabolic problems with a sharply perturbed initial condition.
Localized large gradients in the solution of the advection-dispersion equation (ADE) make the Latin hypercube sampling of the equation's residual (common in PINNs) highly inefficient.
We propose criteria for weights in the loss function that produce a more accurate PINN solution than those obtained with the weights selected via other methods.
arXiv Detail & Related papers (2022-08-18T05:00:24Z)
- Solving PDEs on Unknown Manifolds with Machine Learning [8.220217498103315]
This paper presents a mesh-free computational framework and machine learning theory for solving elliptic PDEs on unknown manifolds.
We show that the proposed NN solver can robustly generalize the PDE solution to new data points, with generalization errors almost identical to the training errors.
arXiv Detail & Related papers (2021-06-12T03:55:15Z)
- Overparameterization of deep ResNet: zero loss and mean-field analysis [19.45069138853531]
Finding parameters in a deep neural network (NN) that fit data is a non-convex optimization problem.
We show that a basic first-order optimization method (gradient descent) finds a global solution with perfect fit in many practical situations.
We give estimates of the depth and width needed to reduce the loss below a given threshold, with high probability.
arXiv Detail & Related papers (2021-05-30T02:46:09Z)
- dNNsolve: an efficient NN-based PDE solver [62.997667081978825]
We introduce dNNsolve, which makes use of dual neural networks to solve ODEs/PDEs.
We show that dNNsolve is capable of solving a broad range of ODEs/PDEs in 1, 2 and 3 spacetime dimensions.
arXiv Detail & Related papers (2021-03-15T19:14:41Z)
- Approximation Schemes for ReLU Regression [80.33702497406632]
We consider the fundamental problem of ReLU regression.
The goal is to output the best fitting ReLU with respect to square loss, given draws from some unknown distribution.
arXiv Detail & Related papers (2020-05-26T16:26:17Z)