Simulation and Prediction of Countercurrent Spontaneous Imbibition at
Early and Late Times Using Physics-Informed Neural Networks
- URL: http://arxiv.org/abs/2306.05554v3
- Date: Thu, 14 Sep 2023 19:47:53 GMT
- Title: Simulation and Prediction of Countercurrent Spontaneous Imbibition at
Early and Late Times Using Physics-Informed Neural Networks
- Authors: Jassem Abbasi, Pål Østebø Andersen
- Abstract summary: The application of Physics-Informed Neural Networks (PINNs) is investigated for the first time in solving the one-dimensional countercurrent spontaneous imbibition (COUCSI) problem.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The application of Physics-Informed Neural Networks (PINNs) is investigated
for the first time in solving the one-dimensional countercurrent spontaneous
imbibition (COUCSI) problem at both early and late times (i.e., before and after
the imbibition front meets the no-flow boundary). We introduce change-of-variables
as a technique for improving the performance of PINNs. We formulated the COUCSI
problem in three equivalent forms by changing the independent variables. The first
describes saturation as a function of normalized position X and time T; the second
as a function of X and Y=T^0.5; and the third solely as a function of Z=X/T^0.5
(valid only at early time). The PINN model was generated using a feed-forward
neural network and trained by minimizing a weighted loss function comprising the
physics-informed loss term and terms corresponding to the initial and boundary
conditions. All three formulations closely approximated the correct solutions,
with water saturation mean absolute errors of around 0.019 and 0.009 for the XT
and XY formulations, and 0.012 for the Z formulation at early time. The Z
formulation perfectly captured the self-similarity of the system at early time;
this was captured less well by the XT and XY formulations. The total variation of
saturation was preserved in the Z formulation, and was better preserved with the
XY formulation than with the XT formulation. Redefining the problem in terms of
physics-inspired variables reduced its non-linearity and enabled higher solution
accuracy, greater loss-landscape convexity, fewer required collocation points,
smaller networks, and more computationally efficient solutions.
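Below is a minimal, hedged PyTorch sketch of the setup described above, assuming the COUCSI problem takes the form of a nonlinear diffusion equation S_T = d/dX(D(S) dS/dX); the coefficient D(S), the network size, and all hyperparameters are illustrative placeholders, not the authors' configuration.

    import torch
    import torch.nn as nn

    class PINN(nn.Module):
        def __init__(self, n_in=2, width=32, depth=4):
            super().__init__()
            layers = [nn.Linear(n_in, width), nn.Tanh()]
            for _ in range(depth - 1):
                layers += [nn.Linear(width, width), nn.Tanh()]
            layers.append(nn.Linear(width, 1))
            self.net = nn.Sequential(*layers)

        def forward(self, inputs):
            return self.net(inputs)

    def D(S):  # hypothetical capillary-diffusion coefficient
        return S * (1.0 - S) + 1e-3

    def residual_XT(model, X, T):
        # Physics-informed residual S_T - d/dX(D(S) S_X) via autograd.
        X.requires_grad_(True); T.requires_grad_(True)
        S = model(torch.cat([X, T], dim=1))
        S_X = torch.autograd.grad(S, X, torch.ones_like(S), create_graph=True)[0]
        S_T = torch.autograd.grad(S, T, torch.ones_like(S), create_graph=True)[0]
        flux = D(S) * S_X
        flux_X = torch.autograd.grad(flux, X, torch.ones_like(flux),
                                     create_graph=True)[0]
        return S_T - flux_X

The weighted loss then combines w_pde*mean(residual**2) with initial- and boundary-condition misfit terms. For the XY formulation one feeds Y = T^0.5 instead of T; for the early-time Z formulation, a one-input network is evaluated at Z = X/T^0.5.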
Related papers
- A Mean-Field Analysis of Neural Stochastic Gradient Descent-Ascent for Functional Minimax Optimization [90.87444114491116]
This paper studies minimax optimization problems defined over infinite-dimensional function classes of overparametricized two-layer neural networks.
We address (i) the convergence of the gradient descent-ascent algorithm and (ii) the representation learning of the neural networks.
Results show that the feature representation induced by the neural networks is allowed to deviate from the initial one by a magnitude of $O(\alpha^{-1})$, measured in terms of the Wasserstein distance.
arXiv Detail & Related papers (2024-04-18T16:46:08Z)
- Polynomial-Time Solutions for ReLU Network Training: A Complexity Classification via Max-Cut and Zonotopes [70.52097560486683]
We prove that the hardness of approximation of ReLU networks not only mirrors the complexity of the Max-Cut problem but also, in certain special cases, exactly corresponds to it.
In particular, when $\epsilon \leq \sqrt{84/83} - 1 \approx 0.006$, we show that it is NP-hard to find an approximate global optimum of the ReLU network objective with relative error $\epsilon$ with respect to the objective value.
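As a quick arithmetic check of the stated threshold (ours, not from the paper): $84/83 \approx 1.01205$, $\sqrt{1.01205} \approx 1.00601$, so $\sqrt{84/83} - 1 \approx 0.006$.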
arXiv Detail & Related papers (2023-11-18T04:41:07Z)
- Enhancing Convergence Speed with Feature-Enforcing Physics-Informed Neural Networks: Utilizing Boundary Conditions as Prior Knowledge for Faster Convergence [0.0]
This study introduces an accelerated training method for vanilla Physics-Informed Neural Networks (PINNs).
It addresses three factors that imbalance the loss function: the initial weight state of the network, the ratio of domain to boundary points, and the loss weighting factor.
Incorporating the weights generated in the first training phase into the network structure is found to neutralize these imbalance factors.
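A hedged sketch of the two-phase idea as we read this summary (not necessarily the authors' exact procedure); model, bc_loss, ic_loss, and pde_loss are hypothetical helpers returning scalar tensors:

    import torch

    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(2000):          # phase 1: fit boundary/initial terms only
        loss = bc_loss(model) + ic_loss(model)
        opt.zero_grad(); loss.backward(); opt.step()
    for _ in range(20000):         # phase 2: keep those weights, add physics
        loss = pde_loss(model) + bc_loss(model) + ic_loss(model)
        opt.zero_grad(); loss.backward(); opt.step()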
arXiv Detail & Related papers (2023-08-17T09:10:07Z)
- NeuralStagger: Accelerating Physics-constrained Neural PDE Solver with Spatial-temporal Decomposition [67.46012350241969]
This paper proposes a general acceleration methodology called NeuralStagger.
It decomposes the original learning task into several coarser-resolution subtasks.
We demonstrate the successful application of NeuralStagger on 2D and 3D fluid dynamics simulations.
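An illustrative reading of the spatial decomposition (our assumption about the general idea, not NeuralStagger's exact scheme): a fine-resolution field is split into interleaved coarse fields that smaller networks can handle in parallel.

    import numpy as np

    u = np.random.rand(128, 128)   # fine-resolution 2D field
    # Four staggered coarse fields, each (64, 64), covering the full domain:
    coarse = [u[i::2, j::2] for i in (0, 1) for j in (0, 1)]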
arXiv Detail & Related papers (2023-02-20T19:36:52Z)
- Physics-Informed Neural Network Method for Parabolic Differential Equations with Sharply Perturbed Initial Conditions [68.8204255655161]
We develop a physics-informed neural network (PINN) model for parabolic problems with a sharply perturbed initial condition.
Localized large gradients in the advection-dispersion equation (ADE) solution make Latin hypercube sampling of the equation's residual (common in PINNs) highly inefficient.
We propose criteria for weights in the loss function that produce a more accurate PINN solution than those obtained with the weights selected via other methods.
arXiv Detail & Related papers (2022-08-18T05:00:24Z)
- Wave simulation in non-smooth media by PINN with quadratic neural network and PML condition [2.7651063843287718]
The recently proposed physics-informed neural network (PINN) has achieved successful applications in solving a wide range of partial differential equations (PDEs).
In this paper, we solve the acoustic and visco-acoustic scattered-field wave equations in the frequency domain with PINNs, instead of the full wave equation, to remove the source perturbation.
We show that PML and quadratic neurons improve the results as well as attenuation and discuss the reason for this improvement.
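One common parameterization of a quadratic neuron, shown here as an assumption for illustration (the paper's exact form may differ): the pre-activation multiplies two linear maps of the input and adds a third.

    import torch
    import torch.nn as nn

    class QuadraticLayer(nn.Module):
        def __init__(self, d_in, d_out):
            super().__init__()
            self.a = nn.Linear(d_in, d_out)
            self.b = nn.Linear(d_in, d_out)
            self.c = nn.Linear(d_in, d_out)

        def forward(self, x):
            # Quadratic pre-activation: elementwise product of two linear maps
            return torch.tanh(self.a(x) * self.b(x) + self.c(x))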
arXiv Detail & Related papers (2022-08-16T13:29:01Z)
- Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the computation graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
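A bare-bones message-passing update, illustrative of the general approach rather than the paper's architecture; msg_net and upd_net are hypothetical small MLPs.

    import torch

    def mp_step(h, src, dst, msg_net, upd_net):
        # h: (n, d) node states; src/dst: (e,) index tensors of edge endpoints.
        m = msg_net(torch.cat([h[src], h[dst]], dim=1))  # one message per edge
        agg = torch.zeros_like(h).index_add_(0, dst, m)  # sum at receiving nodes
        return upd_net(torch.cat([h, agg], dim=1))       # updated node states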
arXiv Detail & Related papers (2022-02-07T17:47:46Z)
- Robust Implicit Networks via Non-Euclidean Contractions [63.91638306025768]
Implicit neural networks show improved accuracy and a significant reduction in memory consumption, but they can suffer from ill-posedness and convergence instability.
This paper provides a new framework to design well-posed and robust implicit neural networks.
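A sketch of the implicit-layer idea (generic fixed-point form; the paper's non-Euclidean contraction criteria are not reproduced here): the output z solves z = tanh(W z + U x + b), approximated below by Picard iteration, which converges when the map is a contraction.

    import torch

    def implicit_layer(x, W, U, b, iters=50):
        # x: (n, d_in); W: (d, d); U: (d, d_in); b: (d,)
        z = torch.zeros(x.shape[0], W.shape[0])
        for _ in range(iters):
            z = torch.tanh(z @ W.T + x @ U.T + b)  # Picard iteration
        return z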
arXiv Detail & Related papers (2021-06-06T18:05:02Z)
- Physics-Informed Neural Network Method for Solving One-Dimensional Advection Equation Using PyTorch [0.0]
The PINN approach allows training neural networks while respecting the PDEs as a strong constraint in the optimization.
In standard small-scale circulation simulations, it is shown that the conventional approach incorporates a pseudo diffusive effect that is almost as large as the effect of the turbulent diffusion model.
Of all the schemes tested, only the PINNs approximation accurately predicted the outcome.
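A hedged sketch of the physics-informed residual for the 1D advection equation u_t + c u_x = 0 (the equation class named in the title; a constant velocity c and a generic model are assumed for illustration):

    import torch

    def advection_residual(model, x, t, c=1.0):
        x.requires_grad_(True); t.requires_grad_(True)
        u = model(torch.cat([x, t], dim=1))
        u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
        u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
        return u_t + c * u_x  # squared and averaged to form the PDE loss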
arXiv Detail & Related papers (2021-03-15T05:39:17Z)