Discovering Physics-Informed Neural Networks Model for Solving Partial Differential Equations through Evolutionary Computation
- URL: http://arxiv.org/abs/2405.11208v1
- Date: Sat, 18 May 2024 07:32:02 GMT
- Title: Discovering Physics-Informed Neural Networks Model for Solving Partial Differential Equations through Evolutionary Computation
- Authors: Bo Zhang, Chao Yang
- Abstract summary: This article proposes an evolutionary computation method aimed at discovering PINNs models with higher approximation accuracy and faster convergence rates.
In experiments, the performance of models found through Bayesian optimization, random search, and evolution is compared in solving the Klein-Gordon, Burgers, and Lamé equations.
- Score: 5.8407437499182935
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, research on solving partial differential equations (PDEs) with artificial neural networks has attracted considerable attention. In this research, the neural network models are usually designed based on human experience or trial and error. Despite the emergence of several model searching methods, these methods primarily concentrate on optimizing the hyperparameters of fully connected neural network models within the framework of physics-informed neural networks (PINNs), and the corresponding search spaces are relatively restricted, thereby limiting the exploration of superior models. This article proposes an evolutionary computation method aimed at discovering PINNs models with higher approximation accuracy and faster convergence rates. In addition to searching the number of layers and the number of neurons per hidden layer, the method concurrently explores the optimal shortcut connections between layers and novel parametric activation functions expressed as binary trees. During evolution, a dynamic population size and training epochs (DPSTE) strategy is adopted, which significantly increases the number of models explored and facilitates the discovery of models with fast convergence rates. In experiments, the performance of models searched through Bayesian optimization, random search, and evolution is compared in solving the Klein-Gordon, Burgers, and Lamé equations. The experimental results affirm that the models discovered by the proposed evolutionary computation method generally exhibit superior approximation accuracy and convergence rate, and these models also show commendable generalization with respect to the source term, initial and boundary conditions, equation coefficient, and computational domain. The corresponding code is available at https://github.com/MathBon/Discover-PINNs-Model.
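To make the described search space concrete, the following is a minimal, hypothetical Python sketch (not the authors' released code) of how a candidate PINNs model could be encoded and evolved under a DPSTE-style schedule. The class names, the mutation choices, and the shrinking-population/growing-epochs schedule are illustrative assumptions; `train_and_score` is a placeholder for an actual PINN trainer.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Genome:
    """Hypothetical encoding of one candidate PINNs model."""
    num_layers: int                                       # number of hidden layers
    neurons_per_layer: int                                # width of each hidden layer
    shortcuts: list = field(default_factory=list)         # pairs (i, j): add output of layer i to input of layer j
    activation_tree: tuple = ("tanh", ("mul", "a", "x"))  # parametric activation as a small expression tree

def train_and_score(genome: Genome, epochs: int) -> float:
    """Placeholder: a real run would build the PINN described by the genome,
    train it for `epochs` epochs on the PDE residual loss, and return the error.
    A random score keeps this sketch self-contained."""
    return random.random()

def mutate(g: Genome) -> Genome:
    """Randomly perturb one structural component of a candidate model."""
    child = Genome(g.num_layers, g.neurons_per_layer, list(g.shortcuts), g.activation_tree)
    choice = random.choice(["depth", "width", "shortcut"])
    if choice == "depth":
        child.num_layers = max(2, child.num_layers + random.choice([-1, 1]))
    elif choice == "width":
        child.neurons_per_layer = max(4, child.neurons_per_layer + random.choice([-8, 8]))
    else:
        i, j = sorted(random.sample(range(child.num_layers), 2))
        child.shortcuts.append((i, j))
    return child

def evolve(pop_size=20, generations=10, epochs=200):
    """One plausible DPSTE-style schedule (an assumption, not the paper's exact rule):
    start with many cheaply trained candidates, then shrink the population and
    lengthen training as evolution proceeds."""
    population = [Genome(4, 32) for _ in range(pop_size)]
    best = population[0]
    for _ in range(generations):
        ranked = sorted(population, key=lambda g: train_and_score(g, epochs))
        best = ranked[0]
        survivors = ranked[: max(2, len(ranked) // 2)]
        pop_size = max(4, pop_size - 2)      # dynamic population size
        epochs = int(epochs * 1.5)           # dynamic training epochs
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(max(0, pop_size - len(survivors)))]
    return best

if __name__ == "__main__":
    print(evolve())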
Related papers
- Chebyshev Spectral Neural Networks for Solving Partial Differential Equations [0.0]
The study uses a feedforward neural network model and error backpropagation principles, utilizing automatic differentiation (AD) to compute the loss function.
The numerical efficiency and accuracy of the CSNN model are investigated through testing on elliptic partial differential equations, and it is compared with the well-known Physics-Informed Neural Network(PINN) method.
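As a hedged illustration of the shared PINN machinery referenced here (assembling the loss with automatic differentiation), the sketch below builds a physics-informed loss for the 1D viscous Burgers equation with PyTorch autograd. The network architecture, the viscosity value, and the sample points are assumptions made for the example, not code from either paper.

```python
import torch

def pde_residual(net, x, t, nu=0.01):
    """Residual of u_t + u*u_x - nu*u_xx = 0 at collocation points (x, t)."""
    x = x.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    u = net(torch.cat([x, t], dim=1))
    ones = torch.ones_like(u)
    u_x = torch.autograd.grad(u, x, grad_outputs=ones, create_graph=True)[0]
    u_t = torch.autograd.grad(u, t, grad_outputs=ones, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, grad_outputs=torch.ones_like(u_x), create_graph=True)[0]
    return u_t + u * u_x - nu * u_xx

def pinn_loss(net, x_in, t_in, x_bc, t_bc, u_bc):
    """Mean-squared PDE residual plus mean-squared boundary/initial-condition error."""
    residual = pde_residual(net, x_in, t_in)
    u_pred = net(torch.cat([x_bc, t_bc], dim=1))
    return residual.pow(2).mean() + (u_pred - u_bc).pow(2).mean()

# Example usage with a plain fully connected network (column tensors of shape (N, 1))
net = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
x_in, t_in = torch.rand(256, 1), torch.rand(256, 1)
x_bc, t_bc = torch.rand(64, 1), torch.zeros(64, 1)
u_bc = -torch.sin(torch.pi * x_bc)   # e.g., Burgers initial condition u(x, 0) = -sin(pi x)
loss = pinn_loss(net, x_in, t_in, x_bc, t_bc, u_bc)
loss.backward()
```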
arXiv Detail & Related papers (2024-06-06T05:31:45Z) - An automatic selection of optimal recurrent neural network architecture for processes dynamics modelling purposes [0.0]
The research includes four original proposals for algorithms dedicated to neural network architecture search.
Algorithms have been based on well-known optimisation techniques such as evolutionary algorithms and gradient descent methods.
The research involved an extended validation study based on data generated from a mathematical model of the fast processes occurring in a pressurised water nuclear reactor.
arXiv Detail & Related papers (2023-09-25T11:06:35Z) - Analyzing Populations of Neural Networks via Dynamical Model Embedding [10.455447557943463]
A core challenge in the interpretation of deep neural networks is identifying commonalities between the underlying algorithms implemented by distinct networks trained for the same task.
Motivated by this problem, we introduce DYNAMO, an algorithm that constructs a low-dimensional manifold where each point corresponds to a neural network model, and two points are nearby if the corresponding neural networks enact similar high-level computational processes.
DYNAMO takes as input a collection of pre-trained neural networks and outputs a meta-model that emulates the dynamics of the hidden states as well as the outputs of any model in the collection.
arXiv Detail & Related papers (2023-02-27T19:00:05Z) - Acceleration techniques for optimization over trained neural network ensembles [1.0323063834827415]
We study optimization problems where the objective function is modeled through feedforward neural networks with rectified linear unit activation.
We present a mixed-integer linear program based on existing popular big-$M$ formulations for optimizing over a single neural network.
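For context, the big-$M$ encoding referenced here is a standard way to make a ReLU unit $y = \max(0, w^\top x + b)$ linear by adding a binary variable $z$; assuming a valid bound $M \ge |w^\top x + b|$ over the input domain, one common (illustrative) form is

$$
y \ge w^\top x + b,\qquad y \ge 0,\qquad y \le w^\top x + b + M(1-z),\qquad y \le Mz,\qquad z \in \{0,1\}.
$$

Setting $z = 1$ forces $y = w^\top x + b$ (the active case) and $z = 0$ forces $y = 0$, so a trained feedforward ReLU network can be embedded layer by layer into a mixed-integer linear program.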
arXiv Detail & Related papers (2021-12-13T20:50:54Z) - Closed-form Continuous-Depth Models [99.40335716948101]
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster.
arXiv Detail & Related papers (2021-06-25T22:08:51Z) - Sparse Flows: Pruning Continuous-depth Models [107.98191032466544]
We show that pruning improves generalization for neural ODEs in generative modeling.
We also show that pruning finds minimal and efficient neural ODE representations with up to 98% less parameters compared to the original network, without loss of accuracy.
arXiv Detail & Related papers (2021-06-24T01:40:17Z) - The Neural Coding Framework for Learning Generative Models [91.0357317238509]
We propose a novel neural generative model inspired by the theory of predictive processing in the brain.
In a similar way, artificial neurons in our generative model predict what neighboring neurons will do, and adjust their parameters based on how well the predictions matched reality.
arXiv Detail & Related papers (2020-12-07T01:20:38Z) - Stochastic analysis of heterogeneous porous material with modified neural architecture search (NAS) based physics-informed neural networks using transfer learning [0.0]
A modified neural architecture search (NAS) based physics-informed deep learning model is presented.
A three dimensional flow model is built to provide a benchmark to the simulation of groundwater flow in highly heterogeneous aquifers.
arXiv Detail & Related papers (2020-10-03T19:57:54Z) - Large-scale Neural Solvers for Partial Differential Equations [48.7576911714538]
Solving partial differential equations (PDE) is an indispensable part of many branches of science as many processes can be modelled in terms of PDEs.
Recent numerical solvers require manual discretization of the underlying equation as well as sophisticated, tailored code for distributed computing.
We examine the applicability of continuous, mesh-free neural solvers for partial differential equations, namely physics-informed neural networks (PINNs).
We discuss the accuracy of GatedPINN with respect to analytical solutions -- as well as state-of-the-art numerical solvers, such as spectral solvers.
arXiv Detail & Related papers (2020-09-08T13:26:51Z) - Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z) - Measuring Model Complexity of Neural Networks with Curve Activation Functions [100.98319505253797]
We propose the linear approximation neural network (LANN) to approximate a given deep model with a curve activation function.
We experimentally explore the training process of neural networks and detect overfitting.
We find that the $L1$ and $L2$ regularizations suppress the increase of model complexity.
arXiv Detail & Related papers (2020-06-16T07:38:06Z)