Efficiently Solving High-Order and Nonlinear ODEs with Rational Fraction
Polynomial: the Ratio Net
- URL: http://arxiv.org/abs/2105.11309v2
- Date: Wed, 31 Jan 2024 14:39:49 GMT
- Title: Efficiently Solving High-Order and Nonlinear ODEs with Rational Fraction
Polynomial: the Ratio Net
- Authors: Chenxin Qin, Ruhao Liu, Maocai Li, Shengyuan Li, Yi Liu, and Chichun
Zhou
- Abstract summary: This study takes a different approach by introducing a new neural network architecture for constructing trial functions, known as the ratio net.
Empirical trials demonstrate that the proposed method is more efficient than existing approaches.
The ratio net holds promise for advancing the efficiency and effectiveness of solving differential equations.
- Score: 3.155317790896023
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advances in solving ordinary differential equations (ODEs) with neural
networks have been remarkable. Neural networks excel at serving as trial
functions and approximating solutions within functional spaces, aided by
gradient backpropagation algorithms. However, challenges remain in solving
complex ODEs, including high-order and nonlinear cases, emphasizing the need
for improved efficiency and effectiveness. Traditional methods have typically
relied on established knowledge integration to improve problem-solving
efficiency. In contrast, this study takes a different approach by introducing a
new neural network architecture for constructing trial functions, known as
the ratio net. This architecture draws inspiration from rational fraction
polynomial approximation functions, specifically the Padé approximant. Through
empirical trials, it is demonstrated that the proposed method exhibits higher
efficiency compared to existing approaches, including polynomial-based and
multilayer perceptron (MLP) neural network-based methods. The ratio net holds
promise for advancing the efficiency and effectiveness of solving differential
equations.
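To make the Padé-inspired construction concrete, here is a minimal sketch of a ratio-net-style trial function: the quotient of two polynomials with learnable coefficients, fit by collocation to the toy ODE u'(x) + u(x) = 0 with u(0) = 1 (exact solution e^{-x}). The degrees, loss weighting, and optimizer settings are assumptions for illustration, not the authors' implementation.

```python
import torch

torch.manual_seed(0)

def poly_features(x, degree):
    # [1, x, x^2, ..., x^degree], built by recurrence
    feats = [torch.ones_like(x)]
    for _ in range(degree):
        feats.append(feats[-1] * x)
    return torch.stack(feats, dim=-1)

class RatioNet(torch.nn.Module):
    """Trial function u(x) = P(x) / Q(x) with learnable coefficients."""

    def __init__(self, degree=4):
        super().__init__()
        self.degree = degree
        self.p = torch.nn.Parameter(0.1 * torch.randn(degree + 1))
        self.q = torch.nn.Parameter(0.1 * torch.randn(degree + 1))

    def forward(self, x):
        feats = poly_features(x, self.degree)
        num = feats @ self.p
        # The constant 1 mirrors the Pade normalization q_0 = 1; it fixes the
        # scale ambiguity of the quotient and keeps Q(x) away from zero early on.
        den = 1.0 + feats @ self.q
        return num / den

model = RatioNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
x = torch.linspace(0.0, 1.0, 64, requires_grad=True)

for step in range(2000):
    u = model(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    residual = du + u                          # ODE residual of u' + u = 0
    bc = (model(torch.zeros(1)) - 1.0) ** 2    # boundary condition u(0) = 1
    loss = (residual ** 2).mean() + bc.sum()
    opt.zero_grad(); loss.backward(); opt.step()

print(float(model(torch.ones(1))))  # should approach exp(-1) ~ 0.3679
```

The same collocation loop extends to higher-order or nonlinear ODEs by differentiating `u` repeatedly with `torch.autograd.grad` and changing the residual accordingly.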
Related papers
- Component-based Sketching for Deep ReLU Nets [55.404661149594375]
We develop a sketching scheme based on deep net components for various tasks.
We transform deep net training into a linear empirical risk minimization problem.
We show that the proposed component-based sketching provides almost optimal rates in approximating saturated functions.
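One way to read "training as linear empirical risk minimization" is sketched below; this is a loose illustration under my own assumptions, not the paper's construction: randomly sketched deep ReLU components are frozen as a feature map, so only the linear output layer is trained, which reduces to ridge regression in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)
n, width = 256, 128
x = rng.uniform(-1, 1, size=(n, 1))
y = np.sin(3 * x).ravel()  # target function to approximate

# Two fixed (untrained) ReLU layers play the role of the sketched components.
W1, b1 = rng.normal(size=(1, width)), rng.normal(size=width)
W2, b2 = rng.normal(size=(width, width)) / np.sqrt(width), rng.normal(size=width)
h = np.maximum(x @ W1 + b1, 0.0)
features = np.maximum(h @ W2 + b2, 0.0)

# Linear ERM: ridge regression over the frozen features, solved in closed form.
lam = 1e-6
w = np.linalg.solve(features.T @ features + lam * np.eye(width), features.T @ y)
print("train RMSE:", np.sqrt(np.mean((features @ w - y) ** 2)))
```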
arXiv Detail & Related papers (2024-09-21T15:30:43Z)
- Enhancing Convolutional Neural Networks with Higher-Order Numerical Difference Methods [6.26650196870495]
Convolutional Neural Networks (CNNs) have been able to assist humans in solving many real-world problems.
This paper proposes a stacking scheme based on the linear multi-step method to enhance the performance of CNNs.
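As a hedged sketch of how a linear multi-step method can dictate the stacking (the block design, coefficients, and toy sizes below are my own, not the paper's): replace the plain residual update x_{n+1} = x_n + f(x_n) with a two-step Adams-Bashforth-style update that reuses the previous block's output.

```python
import torch

class ConvBlock(torch.nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv = torch.nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        return torch.relu(self.conv(x))

class MultiStepStack(torch.nn.Module):
    """Stack CNN blocks as x_{n+1} = x_n + h*(3/2 f(x_n) - 1/2 f(x_{n-1}))."""

    def __init__(self, channels=8, depth=6, h=0.1):
        super().__init__()
        self.blocks = torch.nn.ModuleList(ConvBlock(channels) for _ in range(depth))
        self.h = h

    def forward(self, x):
        prev_f = None
        for block in self.blocks:
            f = block(x)
            if prev_f is None:
                x = x + self.h * f                         # bootstrap: Euler step
            else:
                x = x + self.h * (1.5 * f - 0.5 * prev_f)  # Adams-Bashforth 2
            prev_f = f
        return x

print(MultiStepStack()(torch.randn(2, 8, 16, 16)).shape)  # torch.Size([2, 8, 16, 16])
```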
arXiv Detail & Related papers (2024-09-08T05:13:58Z)
- Chebyshev Spectral Neural Networks for Solving Partial Differential Equations [0.0]
The study uses a feedforward neural network model and error backpropagation principles, utilizing automatic differentiation (AD) to compute the loss function.
The numerical efficiency and accuracy of the CSNN model are investigated through testing on elliptic partial differential equations, and it is compared with the well-known Physics-Informed Neural Network (PINN) method.
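A minimal sketch in this spirit (not the CSNN code; the problem, polynomial degree, and training settings are assumptions): a trial function expanded in Chebyshev polynomials, with the residual of a 1D Poisson problem u''(x) = -π² sin(πx), u(-1) = u(1) = 0, computed by automatic differentiation.

```python
import torch

def chebyshev_features(x, K=8):
    # Three-term recurrence T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x)
    T = [torch.ones_like(x), x]
    for _ in range(K - 1):
        T.append(2 * x * T[-1] - T[-2])
    return torch.stack(T, dim=-1)  # shape (..., K + 1)

coeffs = torch.zeros(9, requires_grad=True)  # one weight per T_0..T_8
opt = torch.optim.Adam([coeffs], lr=5e-3)
x = torch.linspace(-1.0, 1.0, 64, requires_grad=True)

for step in range(3000):
    u = chebyshev_features(x) @ coeffs
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    residual = d2u + torch.pi ** 2 * torch.sin(torch.pi * x)
    boundary = u[0] ** 2 + u[-1] ** 2          # enforce u(-1) = u(1) = 0
    loss = (residual ** 2).mean() + boundary
    opt.zero_grad(); loss.backward(); opt.step()
# The learned combination should now approximate sin(pi x) on [-1, 1].
```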
arXiv Detail & Related papers (2024-06-06T05:31:45Z)
- Accelerating Fractional PINNs using Operational Matrices of Derivative [0.24578723416255746]
This paper presents a novel operational matrix method to accelerate the training of fractional Physics-Informed Neural Networks (fPINNs).
Our approach involves a non-uniform discretization of the fractional Caputo operator, facilitating swift computation of fractional derivatives within Caputo-type fractional differential problems with $0 < \alpha < 1$.
The effectiveness of our proposed method is validated across diverse differential equations, including Delay Differential Equations (DDEs) and Systems of Differential Algebraic Equations (DAEs).
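As a hedged illustration of the operational-matrix idea on a uniform grid (the paper uses a non-uniform discretization, so this is a simplification): the classical L1 scheme writes the Caputo derivative of order 0 < α < 1 as a lower-triangular matrix D acting on the vector of function values, so D @ u approximates D^α u at the grid points.

```python
import numpy as np
from math import gamma

def caputo_l1_matrix(n, h, alpha):
    # L1 weights: b_k = (k+1)^{1-alpha} - k^{1-alpha}
    b = np.array([(k + 1) ** (1 - alpha) - k ** (1 - alpha) for k in range(n)])
    c = 1.0 / (gamma(2 - alpha) * h ** alpha)
    D = np.zeros((n + 1, n + 1))
    for i in range(1, n + 1):
        for k in range(i):
            # weight c * b_k on the increment u[i-k] - u[i-k-1]
            D[i, i - k] += c * b[k]
            D[i, i - k - 1] -= c * b[k]
    return D

alpha, n = 0.5, 200
t = np.linspace(0.0, 1.0, n + 1)
D = caputo_l1_matrix(n, t[1] - t[0], alpha)
u = t ** 2
exact = 2 * t ** (2 - alpha) / gamma(3 - alpha)  # known Caputo derivative of t^2
print(np.max(np.abs(D @ u - exact)[1:]))         # small discretization error
```

Because D is precomputed once, fractional derivatives inside a training loop become a single matrix-vector product, which is one intuition for the claimed speedup.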
arXiv Detail & Related papers (2024-01-25T11:00:19Z)
- An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
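A generic, minimal sketch of the fixed-point formulation behind Deep Equilibrium models (not the paper's deconvolution solver; the layer and stopping rule below are assumptions): the output is the equilibrium z* of a learned update z = f(z, x), found here by naive iteration.

```python
import torch

class EquilibriumLayer(torch.nn.Module):
    def __init__(self, dim=16):
        super().__init__()
        self.lin_z = torch.nn.Linear(dim, dim)
        self.lin_x = torch.nn.Linear(dim, dim)

    def f(self, z, x):
        # tanh keeps the update roughly contractive at initialization
        return torch.tanh(self.lin_z(z) + self.lin_x(x))

    def forward(self, x, iters=50, tol=1e-6):
        z = torch.zeros_like(x)
        for _ in range(iters):             # naive fixed-point iteration
            z_next = self.f(z, x)
            if (z_next - z).norm() < tol:  # stop once z has equilibrated
                return z_next
            z = z_next
        return z

print(EquilibriumLayer()(torch.randn(4, 16)).shape)  # torch.Size([4, 16])
```

In practice DEQ methods use faster root-finders (e.g., Anderson acceleration) and implicit differentiation rather than backpropagating through the loop.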
arXiv Detail & Related papers (2023-06-10T08:25:16Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been demonstrated to be effective in solving forward and inverse differential equation problems.
However, PINNs are prone to training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
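One common reading of an implicit (stochastic) gradient step is the proximal-point update θ_{k+1} = argmin_θ L(θ) + ||θ - θ_k||² / (2η). The sketch below approximates each implicit step with a few inner gradient iterations on a toy stand-in loss; it illustrates the update rule only, not a full PINN training run.

```python
import torch

theta = torch.zeros(2, requires_grad=True)

def loss_fn(p):
    # toy stand-in for a PINN residual loss
    return (p[0] - 1.0) ** 2 + 10.0 * (p[1] + 0.5) ** 2

eta = 0.1
for k in range(100):
    theta_k = theta.detach().clone()
    for _ in range(5):  # approximately solve the implicit (proximal) subproblem
        prox = loss_fn(theta) + ((theta - theta_k) ** 2).sum() / (2 * eta)
        g, = torch.autograd.grad(prox, theta)
        with torch.no_grad():
            theta -= 0.05 * g

print(theta)  # approaches the minimizer (1.0, -0.5)
```

The implicit step damps the effect of large or ill-conditioned gradients, which is one intuition for why it stabilizes training.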
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
- Neural Basis Functions for Accelerating Solutions to High Mach Euler Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced order approximation to the PDE.
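A hedged rendering of this reduced-order pipeline (my own minimal version; the branch-net shape and the random snapshot data are placeholders): a branch network maps PDE parameters μ to coefficients c(μ), and the field is reconstructed as u ≈ Σ_i c_i(μ) φ_i with fixed POD modes φ_i obtained from an SVD of snapshot data.

```python
import torch

n_x, n_snapshots, n_modes = 64, 100, 8
snapshots = torch.randn(n_x, n_snapshots)        # placeholder snapshot matrix
U, S, Vh = torch.linalg.svd(snapshots, full_matrices=False)
pod_basis = U[:, :n_modes]                       # POD modes phi_i, shape (n_x, r)

branch = torch.nn.Sequential(                    # mu -> coefficients c(mu)
    torch.nn.Linear(3, 32), torch.nn.Tanh(), torch.nn.Linear(32, n_modes)
)

mu = torch.randn(5, 3)                           # batch of PDE parameter vectors
coeffs = branch(mu)                              # shape (5, n_modes)
u_reduced = coeffs @ pod_basis.T                 # shape (5, n_x), reduced-order field
print(u_reduced.shape)
```

Training would regress `coeffs` against the projection of known solutions onto `pod_basis`, which is far cheaper than learning the full field directly.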
arXiv Detail & Related papers (2022-08-02T18:27:13Z)
- Connections between Numerical Algorithms for PDEs and Neural Networks [8.660429288575369]
We investigate numerous structural connections between numerical algorithms for partial differential equations (PDEs) and neural networks.
Our goal is to transfer the rich set of mathematical foundations from the world of PDEs to neural networks.
arXiv Detail & Related papers (2021-07-30T16:42:45Z)
- Inverse Problem of Nonlinear Schrödinger Equation as Learning of Convolutional Neural Network [5.676923179244324]
It is shown that one can obtain a relatively accurate estimate of the considered parameters using the proposed method.
It provides a natural framework for solving inverse problems of partial differential equations with deep learning.
arXiv Detail & Related papers (2021-07-19T02:54:37Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data in the structure that neural networks require.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-level graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
- Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks [50.42141893913188]
We study distributed stochastic AUC maximization for large-scale data with a deep neural network.
Our algorithm requires a much smaller number of communication rounds and still achieves linear speedup in theory.
Experiments on several datasets demonstrate the effectiveness of our algorithm and also confirm our theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z)