Enriched Physics-informed Neural Networks for Dynamic
Poisson-Nernst-Planck Systems
- URL: http://arxiv.org/abs/2402.01768v1
- Date: Thu, 1 Feb 2024 02:57:07 GMT
- Title: Enriched Physics-informed Neural Networks for Dynamic
Poisson-Nernst-Planck Systems
- Authors: Xujia Huang, Fajie Wang, Benrong Zhang and Hanqing Liu
- Abstract summary: This paper proposes a meshless deep learning algorithm, enriched physics-informed neural networks (EPINNs), to solve dynamic Poisson-Nernst-Planck (PNP) equations.
EPINNs take traditional physics-informed neural networks as the foundational framework and add adaptive loss weights to balance the loss terms.
Numerical results indicate that the new method is more broadly applicable than traditional numerical methods for solving such coupled nonlinear systems.
- Score: 0.8192907805418583
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes a meshless deep learning algorithm, enriched
physics-informed neural networks (EPINNs), to solve dynamic
Poisson-Nernst-Planck (PNP) equations with strong coupling and nonlinear
characteristics. EPINNs take traditional physics-informed neural networks as
the foundational framework and add adaptive loss weights to balance the loss
terms, automatically assigning the weights by updating the corresponding
parameters at each iteration based on a maximum likelihood estimate. A
resampling strategy is employed in the EPINNs to accelerate the convergence of
the loss function. Meanwhile, GPU parallel computing is adopted to speed up
the solving process. Four examples are provided to demonstrate the validity
and effectiveness of the proposed method. Numerical results indicate that the
new method is more broadly applicable than traditional numerical methods for
solving such coupled nonlinear systems. More importantly, EPINNs are more
accurate, stable, and faster than traditional physics-informed neural
networks. This work provides a simple and high-performance numerical tool for
addressing PNP systems with arbitrary boundary shapes and boundary conditions.
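The adaptive loss weighting described above, with weights assigned at each iteration from a maximum likelihood estimate, matches the common uncertainty-based multi-task weighting in which each loss term carries a trainable log-variance. Below is a minimal PyTorch sketch under that assumption; the network shape, the stand-in loss terms, and all names are illustrative, not the authors' implementation.

```python
# Hypothetical sketch of maximum-likelihood-based adaptive loss weighting
# for a PINN, in the spirit of the EPINNs abstract above (illustrative
# names and shapes; not the authors' code).
import torch

class AdaptiveWeights(torch.nn.Module):
    """One trainable log-variance s_i per loss term; the weighted loss
    sum_i exp(-s_i) * L_i + s_i follows from a Gaussian likelihood."""
    def __init__(self, n_terms: int):
        super().__init__()
        self.log_vars = torch.nn.Parameter(torch.zeros(n_terms))

    def forward(self, losses):
        return sum(torch.exp(-s) * L + s
                   for s, L in zip(self.log_vars, losses))

# Toy network standing in for the PNP solution (ion concentrations + potential).
net = torch.nn.Sequential(
    torch.nn.Linear(3, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 3),
)
weights = AdaptiveWeights(n_terms=3)  # PDE residual, boundary, initial terms
opt = torch.optim.Adam(
    list(net.parameters()) + list(weights.parameters()), lr=1e-3)

for step in range(1000):
    # Placeholder loss terms; a real EPINN would compute the residuals of
    # the coupled Poisson-Nernst-Planck equations via autograd here.
    x = torch.rand(256, 3, requires_grad=True)  # (x, y, t) collocation points
    u = net(x)
    loss_pde = (u ** 2).mean()              # stand-in for the PNP residual loss
    loss_bc = u[:, :1].pow(2).mean()        # stand-in for the boundary loss
    loss_ic = u[:, 1:].pow(2).mean()        # stand-in for the initial-condition loss
    loss = weights([loss_pde, loss_bc, loss_ic])  # weights train with the net
    opt.zero_grad()
    loss.backward()
    opt.step()
```

On the same pattern, the resampling strategy mentioned in the abstract could periodically redraw the collocation points where the current PDE residual is largest.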
Related papers
- Stable Weight Updating: A Key to Reliable PDE Solutions Using Deep Learning [0.0]
This paper introduces novel residual-based architectures designed to enhance stability and accuracy in physics-informed neural networks (PINNs).
The architectures augment traditional neural networks by incorporating residual connections, which facilitate smoother weight updates and improve backpropagation efficiency.
The Squared Residual Network, in particular, exhibits robust performance, achieving enhanced stability and accuracy compared to conventional neural networks.
arXiv Detail & Related papers (2024-07-10T05:20:43Z)
- Neural Networks-based Random Vortex Methods for Modelling Incompressible Flows [0.0]
We introduce a novel Neural Networks-based approach for approximating solutions to the (2D) incompressible Navier--Stokes equations.
Our algorithm uses a Physics-informed Neural Network, that approximates the vorticity based on a loss function that uses a computationally efficient formulation of the Random Vortex dynamics.
arXiv Detail & Related papers (2024-05-22T14:36:23Z)
- Physics-Informed Neural Networks for Time-Domain Simulations: Accuracy, Computational Cost, and Flexibility [0.0]
Physics-Informed Neural Networks (PINNs) have emerged as a promising solution for drastically accelerating computations of non-linear dynamical systems.
This work investigates the applicability of these methods for power system dynamics, focusing on the dynamic response to load disturbances.
To facilitate a deeper understanding, this paper also presents a new regularisation of neural network (NN) training by introducing a gradient-based term in the loss function.
arXiv Detail & Related papers (2023-03-15T23:53:32Z)
- Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have proven effective in solving forward and inverse differential equation problems.
However, PINNs can become trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
- NeuralStagger: Accelerating Physics-constrained Neural PDE Solver with Spatial-temporal Decomposition [67.46012350241969]
This paper proposes a general acceleration methodology called NeuralStagger.
It decomposes the original learning task into several coarser-resolution subtasks.
We demonstrate the successful application of NeuralStagger on 2D and 3D fluid dynamics simulations.
arXiv Detail & Related papers (2023-02-20T19:36:52Z)
- Neural Basis Functions for Accelerating Solutions to High Mach Euler Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced order approximation to the PDE.
arXiv Detail & Related papers (2022-08-02T18:27:13Z)
- Self-Adaptive Physics-Informed Neural Networks using a Soft Attention Mechanism [1.6114012813668932]
Physics-Informed Neural Networks (PINNs) have emerged as a promising application of deep neural networks to the numerical solution of nonlinear partial differential equations (PDEs).
We propose a fundamentally new way to train PINNs adaptively, where the adaptation weights are fully trainable and applied to each training point individually (see the sketch after this list).
In numerical experiments with several linear and nonlinear benchmark problems, the SA-PINN outperformed other state-of-the-art PINN algorithms in L2 error.
arXiv Detail & Related papers (2020-09-07T04:07:52Z)
- Training End-to-End Analog Neural Networks with Equilibrium Propagation [64.0476282000118]
We introduce a principled method to train end-to-end analog neural networks by gradient descent.
We show mathematically that a class of analog neural networks (called nonlinear resistive networks) are energy-based models.
Our work can guide the development of a new generation of ultra-fast, compact and low-power neural networks supporting on-chip learning.
arXiv Detail & Related papers (2020-06-02T23:38:35Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective to design of future DeepSNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
- Physics-informed deep learning for incompressible laminar flows [13.084113582897965]
We propose a mixed-variable scheme of physics-informed neural networks (PINNs) for fluid dynamics.
A parametric study indicates that the mixed-variable scheme can improve the PINN trainability and the solution accuracy.
arXiv Detail & Related papers (2020-02-24T21:51:53Z)
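As a companion to the self-adaptive PINN entry above, here is a minimal sketch of per-point trainable adaptation weights trained by gradient ascent, assuming the soft-attention formulation that abstract describes; names and shapes are illustrative, not the SA-PINN reference code.

```python
# Hypothetical sketch of SA-PINN-style per-point adaptation weights
# (soft attention): the network minimizes the weighted residual while the
# weights are updated by gradient *ascent* on the same objective, so hard
# points receive growing emphasis. Illustrative only.
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)
x = torch.rand(512, 2)                      # fixed collocation points
lam = torch.nn.Parameter(torch.ones(512))   # one weight per training point

opt_net = torch.optim.Adam(net.parameters(), lr=1e-3)
opt_lam = torch.optim.Adam([lam], lr=5e-3, maximize=True)  # ascent on weights

for step in range(1000):
    residual = net(x).squeeze(-1)           # stand-in for a true PDE residual
    loss = (lam ** 2 * residual ** 2).mean()  # squaring keeps weights nonnegative
    opt_net.zero_grad()
    opt_lam.zero_grad()
    loss.backward()
    opt_net.step()   # decrease the loss w.r.t. network parameters
    opt_lam.step()   # increase the loss w.r.t. per-point weights
```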