PMNN: Physical Model-driven Neural Network for solving time-fractional differential equations
- URL: http://arxiv.org/abs/2310.04788v1
- Date: Sat, 7 Oct 2023 12:43:32 GMT
- Title: PMNN: Physical Model-driven Neural Network for solving time-fractional differential equations
- Authors: Zhiying Ma, Jie Hou, Wenhao Zhu, Yaxin Peng and Ying Li
- Abstract summary: An innovative Physical Model-driven Neural Network (PMNN) method is proposed to solve time-fractional differential equations.
It effectively combines deep neural networks (DNNs) with an interpolation approximation of fractional derivatives.
- Score: 17.66402435033991
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, an innovative Physical Model-driven Neural Network (PMNN)
method is proposed to solve time-fractional differential equations. It
establishes a temporal iteration scheme based on physical model-driven neural
networks which effectively combines deep neural networks (DNNs) with
interpolation approximation of fractional derivatives. Specifically, once the
fractional differential operator is discretized, DNNs are employed as a bridge
to integrate interpolation approximation techniques with differential
equations. On the basis of this integration, we construct a neural-based
iteration scheme. Subsequently, by training DNNs to learn this temporal
iteration scheme, approximate solutions to the differential equations can be
obtained. The proposed method aims to preserve the intrinsic physical information within the equations as much as possible. It fully utilizes the
powerful fitting capability of neural networks while maintaining the efficiency
of the difference schemes for fractional differential equations. Moreover, we
validate the efficiency and accuracy of PMNN through several numerical
experiments.
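The abstract only says "interpolation approximation of fractional derivatives"; the standard choice for the Caputo derivative of order alpha in (0, 1) is the L1 scheme, assumed here. A minimal sketch of that discretization, which in PMNN the DNN is trained to satisfy:

```python
import numpy as np
from math import gamma

def caputo_l1(u, dt, alpha):
    """L1 (piecewise-linear interpolation) approximation of the Caputo
    derivative of order alpha in (0, 1), given samples u[n] = u(n*dt).
    Returns approximations at t_1, ..., t_N."""
    n_steps = len(u) - 1
    coef = dt ** (-alpha) / gamma(2.0 - alpha)
    k = np.arange(n_steps, dtype=float)
    b = (k + 1.0) ** (1.0 - alpha) - k ** (1.0 - alpha)  # L1 weights b_k
    du = np.diff(u)                                      # u_{j+1} - u_j
    out = np.empty(n_steps)
    for n in range(1, n_steps + 1):
        # D^alpha u(t_n) ~ coef * sum_{j=0}^{n-1} b_{n-1-j} (u_{j+1} - u_j)
        out[n - 1] = coef * np.dot(b[:n][::-1], du[:n])
    return out

# The L1 scheme is exact for linear functions: for u(t) = t,
# the Caputo derivative is t^(1-alpha) / Gamma(2-alpha).
alpha, dt = 0.5, 0.01
t = np.arange(0.0, 1.0 + dt / 2, dt)
approx = caputo_l1(t, dt, alpha)
exact = t[1:] ** (1.0 - alpha) / gamma(2.0 - alpha)
```

In PMNN the grid values u[n] would be replaced by DNN outputs evaluated at the time nodes, with the network trained so the resulting time-stepping scheme holds; the function above illustrates only the discretization half of that combination.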
Related papers
- Chebyshev Spectral Neural Networks for Solving Partial Differential Equations [0.0]
The study uses a feedforward neural network model and error backpropagation principles, utilizing automatic differentiation (AD) to compute the loss function.
The numerical efficiency and accuracy of the CSNN model are investigated through testing on elliptic partial differential equations, and it is compared with the well-known Physics-Informed Neural Network (PINN) method.
arXiv Detail & Related papers (2024-06-06T05:31:45Z)
- HNS: An Efficient Hermite Neural Solver for Solving Time-Fractional Partial Differential Equations [12.520882780496738]
We present the high-precision Hermite Neural Solver (HNS) for solving time-fractional partial differential equations.
The experimental results show that HNS has significantly improved accuracy and flexibility compared to existing L1-based methods.
arXiv Detail & Related papers (2023-10-07T12:44:47Z)
- Spectral-Bias and Kernel-Task Alignment in Physically Informed Neural Networks [4.604003661048267]
Physically informed neural networks (PINNs) are a promising emerging method for solving differential equations.
We propose a comprehensive theoretical framework that sheds light on this important problem.
We derive an integro-differential equation that governs PINN prediction in the large data-set limit.
arXiv Detail & Related papers (2023-07-12T18:00:02Z)
- Splitting physics-informed neural networks for inferring the dynamics of integer- and fractional-order neuron models [0.0]
We introduce a new approach for solving forward systems of differential equations using a combination of splitting methods and physics-informed neural networks (PINNs).
The proposed method, splitting PINN, effectively addresses the challenge of applying PINNs to forward dynamical systems.
arXiv Detail & Related papers (2023-04-26T00:11:00Z)
- Tunable Complexity Benchmarks for Evaluating Physics-Informed Neural Networks on Coupled Ordinary Differential Equations [64.78260098263489]
In this work, we assess the ability of physics-informed neural networks (PINNs) to solve increasingly complex coupled ordinary differential equations (ODEs).
We show that PINNs eventually fail to produce correct solutions to these benchmarks as their complexity increases.
We identify several reasons why this may be the case, including insufficient network capacity, poor conditioning of the ODEs, and high local curvature, as measured by the Laplacian of the PINN loss.
arXiv Detail & Related papers (2022-10-14T15:01:32Z)
- Neural Basis Functions for Accelerating Solutions to High Mach Euler Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced order approximation to the PDE.
arXiv Detail & Related papers (2022-08-02T18:27:13Z)
- Incorporating NODE with Pre-trained Neural Differential Operator for Learning Dynamics [73.77459272878025]
We propose to enhance the supervised signal in learning dynamics by pre-training a neural differential operator (NDO).
The NDO is pre-trained on a class of symbolic functions, learning the mapping from the trajectory samples of these functions to their derivatives.
We provide a theoretical guarantee that the output of the NDO can closely approximate the ground-truth derivatives by properly tuning the complexity of the function library.
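The NDO in the paper is a neural network; the simplest stand-in for "learn the trajectory-to-derivative mapping from a library of symbolic functions" is a linear operator fit by least squares. A hedged sketch of that idea (the monomial library, grid, and test function are illustrative choices, not from the paper):

```python
import numpy as np

# Sample a small library of symbolic functions (monomials here, an
# illustrative choice) on a time grid, with their exact derivatives.
t = np.linspace(0.0, 1.0, 11)
X = np.stack([t ** k for k in range(6)])                      # trajectories
Y = np.stack([k * t ** (k - 1) if k > 0 else np.zeros_like(t)
              for k in range(6)])                             # derivatives

# Learn the trajectory -> derivative mapping. The paper trains a neural
# network; a linear least-squares operator is the simplest instance.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)                     # (11, 11)

# The learned operator generalizes to an unseen function from the class.
u = 2.0 * t ** 3 - t + 0.5
du_pred = u @ W
du_exact = 6.0 * t ** 2 - 1.0
```

Because differentiation is linear and the test function lies in the span of the library, the fitted operator recovers its derivative on the grid; a neural NDO plays the same role for much richer function classes.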
arXiv Detail & Related papers (2021-06-08T08:04:47Z)
- Partial Differential Equations is All You Need for Generating Neural Architectures -- A Theory for Physical Artificial Intelligence Systems [40.20472268839781]
We generalize the reaction-diffusion equation from statistical physics, the Schrödinger equation from quantum mechanics, and the Helmholtz equation from paraxial optics.
We use the finite difference method to discretize the NPDE and find a numerical solution.
Basic building blocks of deep neural network architectures, including multi-layer perceptrons, convolutional neural networks, and recurrent neural networks, are generated.
arXiv Detail & Related papers (2021-03-10T00:05:46Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
- Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
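The "networks of linear first-order dynamical systems" can be sketched for a single unit: the LTC state equation dx/dt = -(1/tau + f) x + f A, where the gate f depends nonlinearly on the input, so the effective time constant varies ("liquid") while the state stays bounded. All parameter values below are illustrative assumptions, not from the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One liquid time-constant (LTC) unit, integrated with explicit Euler.
tau, A = 1.0, 0.8            # base time constant and bias level (made up)
w, b = 2.0, -1.0             # weights of the nonlinear gate f (made up)
dt, steps = 0.01, 2000
x, xs = 0.0, []
for n in range(steps):
    I = np.sin(0.01 * n)     # an arbitrary bounded input signal
    f = sigmoid(w * I + b)   # input-dependent gate, always in (0, 1)
    # dx/dt = -(1/tau + f) * x + f * A : the effective time constant
    # 1/(1/tau + f) varies with the input, and x remains in [0, A].
    x = x + dt * (-(1.0 / tau + f) * x + f * A)
    xs.append(x)
xs = np.array(xs)
```

With f in (0, 1) and a sufficiently small Euler step, the state provably stays in [0, A], which is the stable-and-bounded behavior the summary refers to.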
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.