HNS: An Efficient Hermite Neural Solver for Solving Time-Fractional
Partial Differential Equations
- URL: http://arxiv.org/abs/2310.04789v1
- Date: Sat, 7 Oct 2023 12:44:47 GMT
- Authors: Jie Hou, Zhiying Ma, Shihui Ying and Ying Li
- Abstract summary: We present the high-precision Hermite Neural Solver (HNS) for solving time-fractional partial differential equations.
The experimental results show that HNS has significantly improved accuracy and flexibility compared to existing L1-based methods.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural network solvers represent an innovative and promising approach for
tackling time-fractional partial differential equations by utilizing deep
learning techniques. L1 interpolation approximation serves as the standard
method for addressing time-fractional derivatives within neural network
solvers. However, we have discovered that neural network solvers based on L1
interpolation approximation are unable to fully exploit the benefits of neural
networks, and the accuracy of these models is constrained to interpolation
errors. In this paper, we present the high-precision Hermite Neural Solver
(HNS) for solving time-fractional partial differential equations. Specifically,
we first construct a high-order explicit approximation scheme for fractional
derivatives using Hermite interpolation techniques, and rigorously analyze its
approximation accuracy. Afterward, taking into account the infinitely
differentiable properties of deep neural networks, we integrate the high-order
Hermite interpolation explicit approximation scheme with deep neural networks
to propose the HNS. The experimental results show that HNS achieves higher
accuracy than methods based on the L1 scheme for both forward and inverse
problems, as well as in high-dimensional scenarios. This indicates that HNS has
significantly improved accuracy and flexibility compared to existing L1-based
methods, and has overcome the limitations of explicit finite difference
approximation methods that are often constrained to function value
interpolation. As a result, the HNS is not a simple combination of numerical
computing methods and neural networks, but rather realizes the complementary and
mutually reinforcing advantages of both approaches. The data and code can be
found at \url{https://github.com/hsbhc/HNS}.
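For context, the baseline L1 scheme that HNS improves upon discretizes the Caputo derivative of order α ∈ (0, 1) by piecewise-linear interpolation of u on a uniform time grid. A minimal sketch (the function and variable names are illustrative, not from the paper):

```python
import math

def l1_caputo(u_vals, dt, alpha):
    """L1 approximation of the Caputo derivative of order alpha in (0, 1)
    at t_n = n*dt, from samples u_vals = [u(t_0), ..., u(t_n)]:

        D^alpha u(t_n) ~ dt^(-alpha) / Gamma(2 - alpha)
                         * sum_k b_k * (u(t_{n-k}) - u(t_{n-k-1})),

    with weights b_k = (k+1)^(1-alpha) - k^(1-alpha).
    """
    n = len(u_vals) - 1
    coef = dt ** (-alpha) / math.gamma(2.0 - alpha)
    total = 0.0
    for k in range(n):
        b_k = (k + 1) ** (1.0 - alpha) - k ** (1.0 - alpha)
        total += b_k * (u_vals[n - k] - u_vals[n - k - 1])
    return coef * total
```

Because the scheme interpolates function values piecewise-linearly, it is exact for u(t) = t (whose Caputo derivative is t^(1-α)/Γ(2-α)) but only O(Δt^(2-α)) accurate in general; the Hermite scheme in HNS raises this order by also exploiting the network's exact derivatives.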
Related papers
- Chebyshev Spectral Neural Networks for Solving Partial Differential Equations [0.0]
The study uses a feedforward neural network model and error backpropagation principles, utilizing automatic differentiation (AD) to compute the loss function.
The numerical efficiency and accuracy of the CSNN model are investigated through testing on elliptic partial differential equations, and it is compared with the well-known Physics-Informed Neural Network (PINN) method.
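For context, Chebyshev feature construction of the kind a spectral network presumably builds on follows the classic three-term recurrence (a generic sketch, not the paper's code):

```python
def chebyshev_features(x, degree):
    """Evaluate T_0(x), ..., T_degree(x) on [-1, 1] via the three-term
    recurrence T_{n+1}(x) = 2*x*T_n(x) - T_{n-1}(x)."""
    t = [1.0, x]
    for _ in range(2, degree + 1):
        t.append(2.0 * x * t[-1] - t[-2])
    return t[: degree + 1]
```

Feeding these features into a small network (instead of raw coordinates) gives the solver a spectral inductive bias on bounded domains.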
arXiv Detail & Related papers (2024-06-06T05:31:45Z)
- PMNN: Physical Model-driven Neural Network for solving time-fractional differential equations [17.66402435033991]
An innovative Physical Model-driven Neural Network (PMNN) method is proposed to solve time-fractional differential equations.
It effectively combines deep neural networks (DNNs) with approximation of fractional derivatives.
arXiv Detail & Related papers (2023-10-07T12:43:32Z)
- A new approach to generalisation error of machine learning algorithms: Estimates and convergence [0.0]
We introduce a new approach to the estimation of the (generalisation) error and to convergence.
Our results include estimates of the error without any structural assumption on the neural networks.
arXiv Detail & Related papers (2023-06-23T20:57:31Z)
- Neural Basis Functions for Accelerating Solutions to High Mach Euler Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced order approximation to the PDE.
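The POD step above can be pictured with a minimal sketch (the SVD-based construction and all names here are generic illustrations, not taken from the paper):

```python
import numpy as np

def pod_basis(snapshots, r):
    """Extract the first r POD modes from a snapshot matrix whose
    columns are solution states (shape: n_dof x n_snapshots).
    The modes are the leading left singular vectors."""
    u, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return u[:, :r]

def reduced_approximation(basis, state):
    """Project a full-order state onto the POD subspace and lift back."""
    return basis @ (basis.T @ state)
```

Networks are then trained to predict the r POD coefficients rather than the full state, which is where the speedup comes from.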
arXiv Detail & Related papers (2022-08-02T18:27:13Z)
- Hierarchical Learning to Solve Partial Differential Equations Using Physics-Informed Neural Networks [2.0305676256390934]
We propose a hierarchical approach to improve the convergence rate and accuracy of the neural network solution to partial differential equations.
We validate the efficiency and robustness of the proposed hierarchical approach through a suite of linear and nonlinear partial differential equations.
arXiv Detail & Related papers (2021-12-02T13:53:42Z)
- Going Beyond Linear RL: Sample Efficient Neural Function Approximation [76.57464214864756]
We study function approximation with two-layer neural networks.
Our results significantly improve upon what can be attained with linear (or eluder dimension) methods.
arXiv Detail & Related papers (2021-07-14T03:03:56Z)
- Learning Frequency Domain Approximation for Binary Neural Networks [68.79904499480025]
We propose to estimate the gradient of sign function in the Fourier frequency domain using the combination of sine functions for training BNNs.
The experiments on several benchmark datasets and neural architectures illustrate that the binary network learned using our method achieves the state-of-the-art accuracy.
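The underlying idea — representing sign(x) as a square wave and differentiating a truncated sine series to get a smooth surrogate gradient — can be sketched as follows (an illustrative reconstruction, not the paper's exact formulation):

```python
import math

def sign_fourier(x, n_terms):
    """Truncated Fourier (sine) series of the square wave that equals
    sign(x) on (-1, 1): (4/pi) * sum_k sin((2k+1)*pi*x) / (2k+1)."""
    return (4.0 / math.pi) * sum(
        math.sin((2 * k + 1) * math.pi * x) / (2 * k + 1)
        for k in range(n_terms)
    )

def sign_fourier_grad(x, n_terms):
    """Term-by-term derivative of the truncated series: a smooth
    surrogate for the (zero almost everywhere) gradient of sign(x)."""
    return 4.0 * sum(
        math.cos((2 * k + 1) * math.pi * x) for k in range(n_terms)
    )
```

The surrogate gradient is large near x = 0 and oscillates away from it, so in practice the number of terms trades off fidelity to sign against gradient smoothness.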
arXiv Detail & Related papers (2021-03-01T08:25:26Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game in which both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
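As a generic illustration of the min-max training loop described above (a toy convex-concave saddle-point problem, not the paper's SEM estimator), simultaneous gradient descent-ascent looks like:

```python
def gda_saddle(lr=0.1, steps=300, x=1.0, y=1.0):
    """Simultaneous gradient descent-ascent on the convex-concave toy
    objective f(x, y) = x*y + 0.25*x**2 - 0.25*y**2, whose unique
    saddle point is (0, 0). The min player updates x, the max player y."""
    for _ in range(steps):
        gx = y + 0.5 * x  # df/dx, evaluated before either update
        gy = x - 0.5 * y  # df/dy
        x, y = x - lr * gx, y + lr * gy
    return x, y
```

The quadratic regularization terms are what make plain simultaneous updates converge here; on a purely bilinear objective the same iteration would spiral outward.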
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
- Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks [50.42141893913188]
We study distributed stochastic AUC maximization for deep neural networks trained across multiple machines.
Our algorithm provably requires far fewer communication rounds than naive distributed training.
Experiments on several datasets demonstrate its effectiveness and corroborate the theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.