Taylor-Model Physics-Informed Neural Networks (PINNs) for Ordinary Differential Equations
- URL: http://arxiv.org/abs/2507.03860v1
- Date: Sat, 05 Jul 2025 02:03:36 GMT
- Title: Taylor-Model Physics-Informed Neural Networks (PINNs) for Ordinary Differential Equations
- Authors: Chandra Kanth Nagesh, Sriram Sankaranarayanan, Ramneet Kaur, Tuhin Sahai, Susmit Jha
- Abstract summary: We study the problem of learning neural network models for Ordinary Differential Equations (ODEs) with parametric uncertainties. Such neural network models capture the solution to the ODE over a given set of parameters, initial conditions, and range of times. We show how the use of these higher-order PINNs can improve accuracy on interesting but challenging ODE benchmarks.
- Score: 11.108683045232867
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the problem of learning neural network models for Ordinary Differential Equations (ODEs) with parametric uncertainties. Such neural network models capture the solution to the ODE over a given set of parameters, initial conditions, and range of times. Physics-Informed Neural Networks (PINNs) have emerged as a promising approach for learning such models that combine data-driven deep learning with symbolic physics models in a principled manner. However, the accuracy of PINNs degrades when they are used to solve an entire family of initial value problems characterized by varying parameters and initial conditions. In this paper, we combine symbolic differentiation and Taylor series methods to propose a class of higher-order models for capturing the solutions to ODEs. These models combine neural networks and symbolic terms: they use higher-order Lie derivatives and a Taylor series expansion obtained symbolically, with the remainder term modeled as a neural network. The key insight is that the remainder term can itself be modeled as a solution to a first-order ODE. We show how the use of these higher-order PINNs can improve accuracy on interesting but challenging ODE benchmarks. We also show that the resulting model can be quite useful for situations such as controlling uncertain physical systems modeled as ODEs.
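As a concrete illustration of the Taylor-series ingredient, the sketch below computes Taylor coefficients of the flow of a toy scalar ODE x' = -x^2 (a hypothetical example, not one of the paper's benchmarks) via the classical coefficient recurrence. The paper's models would add a neural-network remainder term on top of the truncated polynomial; here the remainder is simply omitted.

```python
# Taylor-series coefficients of the flow of the toy ODE x' = -x**2
# (hypothetical example; not one of the paper's benchmarks).
# Writing x(t) = sum_k a_k t^k and matching coefficients in x' = -x^2
# gives the recurrence (k+1) a_{k+1} = -sum_i a_i a_{k-i}  (Cauchy product).
def taylor_coeffs(x0, order):
    a = [x0]
    for k in range(order):
        conv = sum(a[i] * a[k - i] for i in range(k + 1))
        a.append(-conv / (k + 1))
    return a

def eval_poly(a, t):
    # Horner evaluation of the truncated Taylor polynomial; the paper's
    # higher-order PINNs would add a learned remainder term R(x0, t) * t**(p+1).
    acc = 0.0
    for c in reversed(a):
        acc = acc * t + c
    return acc

a = taylor_coeffs(1.0, 4)   # [1, -1, 1, -1, 1]: matches x(t) = 1/(1 + t)
print(eval_poly(a, 0.1))    # close to 1/1.1 = 0.90909...
```

For this ODE the exact solution is x(t) = x0 / (1 + x0 t), so the computed coefficients are the alternating geometric series, and the truncation error at small t is exactly what the neural remainder term is meant to absorb.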
Related papers
- Applications and Manipulations of Physics-Informed Neural Networks in Solving Differential Equations [0.0]
A Physics-Informed Neural Network (PINN) can solve both forward and inverse problems. PINNs inject prior analytical information about the data into the cost function to improve model performance outside the training-set boundaries. We will create PINNs with residuals of varying complexity, beginning with linear and quadratic models and then expanding to fit models for the heat equation and other complex differential equations.
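The residual-in-the-cost-function idea can be shown in a minimal sketch, with the neural network replaced by a one-parameter model u_a(t) = exp(a*t) for the ODE u' = -u, u(0) = 1 (a hypothetical toy, not this paper's setup): the loss is the mean squared physics residual at collocation points plus an initial-condition penalty.

```python
import math

# Toy PINN-style loss (illustration only): fit u_a(t) = exp(a*t) to the
# ODE u' = -u with u(0) = 1 by minimizing the residual u'_a + u_a at
# collocation points, plus an initial-condition term.
def pinn_loss(a, ts):
    res = sum((a * math.exp(a * t) + math.exp(a * t)) ** 2 for t in ts)
    ic = (math.exp(a * 0.0) - 1.0) ** 2   # u_a(0) - 1, always 0 for this model
    return res / len(ts) + ic

ts = [0.1 * k for k in range(11)]
# Crude 1-D grid search over the single parameter; a real PINN would use
# automatic differentiation and a gradient-based optimizer instead.
best = min((pinn_loss(a / 100.0, ts), a / 100.0) for a in range(-200, 201))
print(best[1])   # -1.0, recovering the true solution u(t) = exp(-t)
```

The residual is zero exactly when a = -1, so minimizing the physics loss recovers the analytic solution without any labeled data, which is the core mechanism the abstract describes.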
arXiv Detail & Related papers (2025-07-19T03:39:49Z)
- An efficient wavelet-based physics-informed neural networks for singularly perturbed problems [0.0]
Physics-informed neural networks (PINNs) are a class of deep learning models that utilize physics, in the form of differential equations, to address complex problems. We present a wavelet-based PINN model to tackle differential equations with rapid oscillations, steep gradients, or singular behavior. The proposed model significantly improves upon traditional PINNs, recently developed wavelet-based PINNs, and other state-of-the-art methods.
arXiv Detail & Related papers (2024-09-18T10:01:37Z)
- A Stable and Scalable Method for Solving Initial Value PDEs with Neural Networks [52.5899851000193]
We show that current methods based on this approach suffer from two key issues. First, following the ODE produces uncontrolled growth in the conditioning of the problem, ultimately leading to unacceptably large numerical errors. We develop an ODE-based IVP solver that prevents the network from becoming ill-conditioned and runs in time linear in the number of parameters.
arXiv Detail & Related papers (2023-04-28T17:28:18Z)
- Learning Neural Constitutive Laws From Motion Observations for Generalizable PDE Dynamics [97.38308257547186]
Many NN approaches learn an end-to-end model that implicitly models both the governing PDE and material models.
We argue that the governing PDEs are often well-known and should be explicitly enforced rather than learned.
We introduce a new framework termed "Neural Constitutive Laws" (NCLaw) which utilizes a network architecture that strictly guarantees standard priors.
arXiv Detail & Related papers (2023-04-27T17:42:24Z)
- Tunable Complexity Benchmarks for Evaluating Physics-Informed Neural Networks on Coupled Ordinary Differential Equations [64.78260098263489]
In this work, we assess the ability of physics-informed neural networks (PINNs) to solve increasingly complex coupled ordinary differential equations (ODEs).
We show that PINNs eventually fail to produce correct solutions to these benchmarks as their complexity increases.
We identify several reasons why this may be the case, including insufficient network capacity, poor conditioning of the ODEs, and high local curvature, as measured by the Laplacian of the PINN loss.
arXiv Detail & Related papers (2022-10-14T15:01:32Z)
- Stabilized Neural Ordinary Differential Equations for Long-Time Forecasting of Dynamical Systems [1.001737665513683]
We present a data-driven modeling method that accurately captures shocks and chaotic dynamics.
We learn the right-hand side (RHS) of an ODE by adding the outputs of two NNs together, where one learns a linear term and the other a nonlinear term.
Specifically, we implement this by training a sparse linear convolutional NN to learn the linear term and a dense fully-connected nonlinear NN to learn the nonlinear term.
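The linear-plus-nonlinear decomposition can be sketched structurally as follows (weights are random placeholders, not trained, and the stencil and layer sizes are assumptions for illustration): the ODE right-hand side is the pointwise sum of a sparse linear convolution and a dense nonlinear layer.

```python
import math, random

# Structural sketch of learning du/dt = L(u) + N(u), where L is a sparse
# linear convolution and N a dense nonlinear network (untrained placeholder
# weights; sizes and stencil are illustrative assumptions).
random.seed(0)

def linear_term(u, stencil=(1.0, -2.0, 1.0)):
    # Sparse linear convolution with zero padding (a diffusion-like stencil).
    n, r = len(u), len(stencil) // 2
    padded = [0.0] * r + list(u) + [0.0] * r
    return [sum(w * padded[i + j] for j, w in enumerate(stencil)) for i in range(n)]

W = [[random.gauss(0.0, 0.1) for _ in range(8)] for _ in range(8)]

def nonlinear_term(u):
    # Dense fully-connected layer with tanh, standing in for the nonlinear NN.
    return [math.tanh(sum(W[i][j] * u[j] for j in range(len(u)))) for i in range(len(u))]

def rhs(u):
    # The learned RHS is the pointwise sum of the two terms.
    return [l + n for l, n in zip(linear_term(u), nonlinear_term(u))]

u0 = [math.sin(2 * math.pi * k / 8) for k in range(8)]
print(len(rhs(u0)))  # 8: both terms map an 8-point state to an 8-point tendency
```

Separating the stiff linear part from the nonlinear part is what the paper credits with stabilizing long-time forecasts; the sketch only shows the architectural split, not the training.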
arXiv Detail & Related papers (2022-03-29T16:10:34Z)
- Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the computation graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
arXiv Detail & Related papers (2022-02-07T17:47:46Z)
- Characterizing possible failure modes in physics-informed neural networks [55.83255669840384]
Recent work in scientific machine learning has developed so-called physics-informed neural network (PINN) models.
We demonstrate that, while existing PINN methodologies can learn good models for relatively trivial problems, they can easily fail to learn relevant physical phenomena even for simple PDEs.
We show that these possible failure modes are not due to the lack of expressivity in the NN architecture, but that the PINN's setup makes the loss landscape very hard to optimize.
arXiv Detail & Related papers (2021-09-02T16:06:45Z)
- Meta-Solver for Neural Ordinary Differential Equations [77.8918415523446]
We investigate how variability in the space of solvers can improve the performance of neural ODEs.
We show that the right choice of solver parameterization can significantly affect neural ODE models in terms of robustness to adversarial attacks.
arXiv Detail & Related papers (2021-03-15T17:26:34Z)
- Artificial neural network as a universal model of nonlinear dynamical systems [0.0]
The map is built as an artificial neural network whose weights encode a modeled system.
We consider the Lorenz system, the Rössler system, and the Hindmarsh-Rose neuron.
High similarity is observed for visual images of attractors, power spectra, bifurcation diagrams, and Lyapunov exponents.
arXiv Detail & Related papers (2021-03-06T16:02:41Z)
- Parameter Estimation with Dense and Convolutional Neural Networks Applied to the FitzHugh-Nagumo ODE [0.0]
We present deep neural networks using dense and convolutional layers to solve an inverse problem: estimating the parameters of a FitzHugh-Nagumo model.
We demonstrate that deep neural networks have the potential to estimate parameters in dynamical models and processes, and that they can predict these parameters accurately within this framework.
arXiv Detail & Related papers (2020-12-12T01:20:42Z)
- Neural Ordinary Differential Equation based Recurrent Neural Network Model [0.7233897166339269]
Neural ordinary differential equations are a promising new member of the neural network family.
This paper explores the strength of the ordinary differential equation (ODE) through a new extension.
Two new ODE-based RNN models (GRU-ODE and LSTM-ODE) can compute the hidden state and cell state at any point in time using an ODE solver.
Experiments show that these new ODE based RNN models require less training time than Latent ODEs and conventional Neural ODEs.
arXiv Detail & Related papers (2020-05-20T01:02:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.