Neuroevolution of Physics-Informed Neural Nets: Benchmark Problems and Comparative Results
- URL: http://arxiv.org/abs/2212.07624v3
- Date: Wed, 6 Dec 2023 08:22:39 GMT
- Title: Neuroevolution of Physics-Informed Neural Nets: Benchmark Problems and Comparative Results
- Authors: Nicholas Sung Wei Yong, Jian Cheng Wong, Pao-Hsiung Chiu, Abhishek Gupta, Chinchun Ooi, Yew-Soon Ong
- Abstract summary: Physics-informed neural networks (PINNs) are one of the key techniques at the forefront of recent advances.
PINNs' unique loss formulations lead to a high degree of complexity and ruggedness that may not be conducive to gradient descent.
Neuroevolution algorithms, with their superior global search capacity, may be a better choice for PINNs.
- Score: 25.12291688711645
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The potential of learned models for fundamental scientific research and
discovery is drawing increasing attention worldwide. Physics-informed neural
networks (PINNs), where the loss function directly embeds governing equations
of scientific phenomena, are one of the key techniques at the forefront of
recent advances. PINNs are typically trained using stochastic gradient descent
methods, akin to their deep learning counterparts. However, analysis in this
paper shows that PINNs' unique loss formulations lead to a high degree of
complexity and ruggedness that may not be conducive to gradient descent.
Unlike in standard deep learning, PINN training requires globally optimum
parameter values that satisfy physical laws as closely as possible. Spurious
local optima, indicative of erroneous physics, must be avoided. Hence,
neuroevolution algorithms, with their superior global search capacity, may be a
better choice for PINNs relative to gradient descent methods. Here, we propose
a set of five benchmark problems, with open-source codes, spanning diverse
physical phenomena for novel neuroevolution algorithm development. Using this,
we compare two neuroevolution algorithms against the commonly used stochastic
gradient descent, and our baseline results support the claim that
neuroevolution can surpass gradient descent, ensuring better physics compliance
in the predicted outputs. Furthermore, implementing neuroevolution with JAX
leads to orders of magnitude speedup relative to standard implementations.
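
To make the abstract concrete, here is a minimal JAX sketch of the two ideas it combines: a PINN loss whose residual term directly embeds a governing equation (a toy 1D Poisson problem is assumed here), and a neuroevolution-style fitness evaluation that scores an entire population of candidate networks in a single vmapped call, which is where the JAX speedup comes from. All names, the toy PDE, and the population setup are illustrative assumptions, not the paper's released benchmark code.

```python
import jax
import jax.numpy as jnp

def mlp(params, x):
    # Small fully connected network mapping scalar x to scalar u(x).
    h = jnp.atleast_1d(x)
    for w, b in params[:-1]:
        h = jnp.tanh(h @ w + b)
    w, b = params[-1]
    return (h @ w + b)[0]

def pinn_loss(params, xs_interior, xs_boundary):
    # Physics residual: embed the governing equation u''(x) + sin(pi*x) = 0
    # directly in the loss, which is the defining feature of a PINN.
    u = lambda x: mlp(params, x)
    u_xx = jax.vmap(jax.grad(jax.grad(u)))(xs_interior)
    residual = u_xx + jnp.sin(jnp.pi * xs_interior)
    bc = jax.vmap(u)(xs_boundary)  # Dirichlet conditions u(0) = u(1) = 0
    return jnp.mean(residual**2) + jnp.mean(bc**2)

def init_params(key, sizes=(1, 16, 16, 1)):
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (m, n)) * 0.5, jnp.zeros(n))
            for k, m, n in zip(keys, sizes[:-1], sizes[1:])]

key = jax.random.PRNGKey(0)
xs_in = jnp.linspace(0.0, 1.0, 64)
xs_bc = jnp.array([0.0, 1.0])

# Neuroevolution fitness evaluation: stack a population of parameter pytrees
# and score every candidate in one jitted, vmapped call.
population = [init_params(k) for k in jax.random.split(key, 32)]
stacked = jax.tree_util.tree_map(lambda *p: jnp.stack(p), *population)
fitness = jax.jit(jax.vmap(pinn_loss, in_axes=(0, None, None)))(
    stacked, xs_in, xs_bc)
print(fitness.shape)  # (32,) -- one physics-compliance score per candidate
```

A selection/mutation loop would then keep the lowest-loss candidates; lower loss means closer compliance with the governing equation.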
Related papers
- Improving PINNs By Algebraic Inclusion of Boundary and Initial Conditions [0.1874930567916036]
"AI for Science" aims to solve fundamental scientific problems using AI techniques.
In this work, we explore changing the model being trained from a plain neural network to a non-linear transformation of one.
This reduces the number of terms in the loss function relative to standard PINN losses (a sketch of one such construction follows below).
arXiv Detail & Related papers (2024-07-30T11:19:48Z)
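
One standard way to realize such a non-linear transformation is a hard-constraint ansatz that satisfies the boundary conditions by construction, so those loss terms vanish. The sketch below (reusing the mlp helper from the JAX sketch above) is an illustrative guess at the flavor of the construction, not the paper's exact transformation.

```python
def constrained_u(params, x, a=0.0, b=1.0):
    # Assumed Dirichlet problem on [0, 1] with u(0) = a, u(1) = b:
    #   u_hat(x) = (1 - x) * a + x * b + x * (1 - x) * N(x)
    # meets both boundary conditions for ANY network output N(x), so only
    # the PDE residual term remains in the PINN loss.
    # mlp: the network helper defined in the sketch above.
    return (1.0 - x) * a + x * b + x * (1.0 - x) * mlp(params, x)
```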
- Improved physics-informed neural network in mitigating gradient related failures [11.356695216531328]
Physics-informed neural networks (PINNs) integrate fundamental physical principles with advanced data-driven techniques.
PINNs face persistent challenges with stiffness in gradient flow, which limits their predictive capabilities.
This paper presents an improved PINN to mitigate gradient-related failures.
arXiv Detail & Related papers (2024-07-28T07:58:10Z)
- Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been effectively demonstrated in solving forward and inverse differential equation problems.
However, PINNs can become trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose employing the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process (a fixed-point sketch of the implicit update follows below).
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
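
As a rough illustration (a sketch under assumptions; the paper's actual ISGD scheme for PINNs may differ in detail), an implicit SGD step solves theta_next = theta - lr * grad L(theta_next) for theta_next, which can be approximated with a few fixed-point iterations:

```python
import jax

def implicit_sgd_step(loss_fn, theta, lr=1e-2, n_fixed_point=5):
    # Approximate the implicit update theta_next = theta - lr * g(theta_next)
    # by fixed-point iteration, starting from the current parameters.
    g = jax.grad(loss_fn)
    theta_next = theta
    for _ in range(n_fixed_point):
        theta_next = jax.tree_util.tree_map(
            lambda t, d: t - lr * d, theta, g(theta_next))
    return theta_next
```

Because the gradient is evaluated at the new point rather than the current one, the step behaves like a proximal update and tolerates stiffer gradient flows than explicit SGD.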
- Auto-PINN: Understanding and Optimizing Physics-Informed Neural Architecture [77.59766598165551]
Physics-informed neural networks (PINNs) are revolutionizing science and engineering practice by bringing the power of deep learning to bear on scientific computation.
Here, we propose Auto-PINN, which applies Neural Architecture Search (NAS) techniques to PINN design (a minimal search skeleton is sketched below).
A comprehensive set of pre-experiments using standard PDE benchmarks allows us to probe the structure-performance relationship in PINNs.
arXiv Detail & Related papers (2022-05-27T03:24:31Z)
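
A minimal skeleton of what NAS over PINN architectures can look like, assuming the helpers (init_params, pinn_loss, xs_in, xs_bc) from the first JAX sketch; the search space and scoring are illustrative placeholders, and a real search would train each candidate before scoring it:

```python
import jax

def random_architecture_search(key, n_trials=8):
    # Randomly sample (width, depth) candidates and keep the best scorer.
    best_arch, best_loss = None, float("inf")
    for trial_key in jax.random.split(key, n_trials):
        k_w, k_d, k_init = jax.random.split(trial_key, 3)
        width = int(jax.random.randint(k_w, (), 8, 64))
        depth = int(jax.random.randint(k_d, (), 1, 4))  # hidden layers
        sizes = (1,) + (width,) * depth + (1,)
        params = init_params(k_init, sizes)
        # Placeholder score: a real NAS loop would train params first.
        loss = float(pinn_loss(params, xs_in, xs_bc))
        if loss < best_loss:
            best_arch, best_loss = sizes, loss
    return best_arch, best_loss
```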
- PSO-PINN: Physics-Informed Neural Networks Trained with Particle Swarm Optimization [0.0]
We propose the use of a hybrid particle swarm optimization and gradient descent approach to train PINNs.
The resulting PSO-PINN algorithm mitigates the undesired behaviors of PINNs trained with standard gradient descent.
Experimental results show that PSO-PINN consistently outperforms a baseline PINN trained with Adam gradient descent (a sketch of such a hybrid update follows below).
arXiv Detail & Related papers (2022-02-04T02:21:31Z)
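
A minimal sketch of one way such a hybrid update could look: a standard particle-swarm velocity update augmented with a per-particle gradient term. The coefficients and structure are illustrative assumptions, not the paper's exact PSO-PINN algorithm.

```python
import jax

def pso_gd_step(loss_fn, pos, vel, best_pos, global_best, key,
                w=0.7, c1=1.5, c2=1.5, lr=1e-3):
    # pos, vel, best_pos: (n_particles, n_params) flattened parameters;
    # global_best: (n_params,) best parameter vector seen by the swarm.
    k1, k2 = jax.random.split(key)
    r1 = jax.random.uniform(k1, pos.shape)
    r2 = jax.random.uniform(k2, pos.shape)
    grads = jax.vmap(jax.grad(loss_fn))(pos)   # per-particle gradients
    vel = (w * vel
           + c1 * r1 * (best_pos - pos)        # cognitive pull
           + c2 * r2 * (global_best - pos)     # social pull
           - lr * grads)                       # gradient-descent pull
    return pos + vel, vel
```

The swarm terms explore globally while the gradient term refines locally, which is the mitigation mechanism the summary above alludes to.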
- Characterizing possible failure modes in physics-informed neural networks [55.83255669840384]
Recent work in scientific machine learning has developed so-called physics-informed neural network (PINN) models.
We demonstrate that, while existing PINN methodologies can learn good models for relatively trivial problems, they can easily fail to learn relevant physical phenomena even for simple PDEs.
We show that these possible failure modes are not due to the lack of expressivity in the NN architecture, but that the PINN's setup makes the loss landscape very hard to optimize.
arXiv Detail & Related papers (2021-09-02T16:06:45Z)
- Can Transfer Neuroevolution Tractably Solve Your Differential Equations? [22.714772862513826]
This paper introduces neuroevolution for solving differential equations.
Neuroevolution carries out a parallel exploration of diverse solutions with the goal of circumventing local optima.
A novel and computationally efficient transfer neuroevolution algorithm is proposed in this paper.
arXiv Detail & Related papers (2021-01-06T13:07:52Z)
- Optimizing Memory Placement using Evolutionary Graph Reinforcement Learning [56.83172249278467]
We introduce Evolutionary Graph Reinforcement Learning (EGRL), a method designed for large search spaces.
We train and validate our approach directly on the Intel NNP-I chip for inference.
We additionally achieve 28-78% speed-up compared to the native NNP-I compiler on all three workloads.
arXiv Detail & Related papers (2020-07-14T18:50:12Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power, event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity, and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)