Optimized neural forms for solving ordinary differential equations
- URL: http://arxiv.org/abs/2404.19454v1
- Date: Tue, 30 Apr 2024 11:10:34 GMT
- Title: Optimized neural forms for solving ordinary differential equations
- Authors: Adam D. Kypriadis, Isaac E. Lagaris, Aristidis Likas, Konstantinos E. Parsopoulos
- Abstract summary: A critical issue in approximating solutions of ordinary differential equations using neural networks is the exact satisfaction of the boundary or initial conditions.
This paper presents a novel formalism for crafting optimized neural forms.
It also outlines a method for establishing an upper bound on the absolute deviation from the exact solution.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A critical issue in approximating solutions of ordinary differential equations using neural networks is the exact satisfaction of the boundary or initial conditions. For this purpose, neural forms have been introduced, i.e., functional expressions that depend on neural networks which, by design, satisfy the prescribed conditions exactly. Expanding upon prior progress, the present work contributes in three distinct aspects. First, it presents a novel formalism for crafting optimized neural forms. Second, it outlines a method for establishing an upper bound on the absolute deviation from the exact solution. Third, it introduces a technique for converting problems with Neumann or Robin conditions into equivalent problems with parametric Dirichlet conditions. The proposed optimized neural forms were numerically tested on a set of diverse problems, encompassing first-order and second-order ordinary differential equations, as well as first-order systems. Stiff and delay differential equations were also considered. The obtained solutions were compared against solutions obtained via Runge-Kutta methods and exact solutions wherever available. The reported results and analysis verify that in addition to the exact satisfaction of the boundary or initial conditions, optimized neural forms provide closed-form solutions of superior interpolation capability and controllable overall accuracy.
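The central idea of a neural form can be illustrated concretely: for an initial value problem u'(t) = f(t, u), u(t0) = u0, the trial expression u_trial(t) = u0 + (t - t0) * N(t; θ) satisfies the initial condition exactly for any network weights θ, so training only needs to reduce the equation residual. The sketch below is a minimal illustration of this construction, not the paper's optimized neural forms: the tiny network, the test ODE u' = -u with u(0) = 1, and the crude finite-difference training loop are all assumptions made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def net(t, w1, b1, w2):
    # Tiny one-hidden-layer network N(t; theta) with tanh activation.
    h = np.tanh(np.outer(t, w1) + b1)   # shape (len(t), hidden)
    return h @ w2                        # shape (len(t),)

def u_trial(t, params, u0=1.0):
    # Neural form for u(0) = u0: the factor t kills the network term at t = 0,
    # so the initial condition holds exactly regardless of the weights.
    w1, b1, w2 = params
    return u0 + t * net(t, w1, b1, w2)

def loss(params, t, eps=1e-4):
    # Mean squared residual of u' + u = 0, with u' by central differences.
    du = (u_trial(t + eps, params) - u_trial(t - eps, params)) / (2 * eps)
    return np.mean((du + u_trial(t, params)) ** 2)

hidden = 8
params = [rng.normal(0.0, 0.5, hidden) for _ in range(3)]  # w1, b1, w2
t = np.linspace(0.0, 2.0, 40)  # collocation points

# Illustrative finite-difference gradient descent on the residual loss.
for _ in range(200):
    grads = []
    for p in params:
        g = np.zeros_like(p)
        for j in range(p.size):
            old = p[j]
            p[j] = old + 1e-5
            lp = loss(params, t)
            p[j] = old - 1e-5
            lm = loss(params, t)
            p[j] = old
            g[j] = (lp - lm) / 2e-5
        grads.append(g)
    for p, g in zip(params, grads):
        p -= 0.02 * g

# The condition at t = 0 is exact by construction, before and after training.
print(float(u_trial(np.array([0.0]), params)[0]))  # 1.0
```

Note that no penalty term for the initial condition appears in the loss; this is the practical payoff of the neural-form construction, in contrast to standard PINN training where the condition is only satisfied approximately through a weighted loss term.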
Related papers
- High precision PINNs in unbounded domains: application to singularity formulation in PDEs [83.50980325611066]
We study the choices of neural network ansatz, sampling strategy, and optimization algorithm. For the 1D Burgers equation, our framework can lead to a solution with very high precision. For the 2D Boussinesq equation, we obtain a solution whose loss is $4$ digits smaller than that obtained in [wang2023asymptotic] with fewer training steps.
arXiv Detail & Related papers (2025-06-24T02:01:44Z) - Solving 2-D Helmholtz equation in the rectangular, circular, and elliptical domains using neural networks [0.0]
Physics-informed neural networks offer an alternative way to solve several differential equations that govern complicated physics.
Their success in predicting the acoustic field is limited by the vanishing-gradient problem that occurs when solving the Helmholtz equation.
The problem of solving the two-dimensional Helmholtz equation with prescribed boundary conditions is posed as an unconstrained optimization problem using the trial solution method.
A trial neural network that satisfies the given boundary conditions prior to the training process is constructed using the technique of transfinite interpolation and the theory of R-functions.
arXiv Detail & Related papers (2025-03-26T04:28:49Z) - Neuro-Symbolic AI for Analytical Solutions of Differential Equations [11.177091143370466]
We present an approach to find analytical solutions of differential equations using a neuro-symbolic AI framework.
This integration unifies numerical and symbolic differential equation solvers via a neuro-symbolic AI framework.
We show advantages over commercial solvers, symbolic methods, and approximate neural networks on a diverse set of problems.
arXiv Detail & Related papers (2025-02-03T16:06:56Z) - Exact and approximate error bounds for physics-informed neural networks [1.236974227340167]
We report important progress in calculating error bounds of physics-informed neural networks (PINNs) solutions of nonlinear first-order ODEs.
We give a general expression that describes the error of the solution that the PINN-based method provides for a nonlinear first-order ODE.
We propose a technique to calculate an approximate bound for the general case and an exact bound for a particular case.
arXiv Detail & Related papers (2024-11-21T05:15:28Z) - Transformed Physics-Informed Neural Networks for The Convection-Diffusion Equation [0.0]
Singularly perturbed problems have solutions with steep boundary layers that are hard to resolve numerically.
Traditional numerical methods, such as Finite Difference Methods, require a refined mesh to obtain stable and accurate solutions.
We consider the use of Physics-Informed Neural Networks (PINNs) to produce numerical solutions of singularly perturbed problems.
arXiv Detail & Related papers (2024-09-12T00:24:21Z) - A Deep Learning Framework for Solving Hyperbolic Partial Differential Equations: Part I [0.0]
This research focuses on the development of a physics informed deep learning framework to approximate solutions to nonlinear PDEs.
The framework naturally handles imposition of boundary conditions (Neumann/Dirichlet), entropy conditions, and regularity requirements.
arXiv Detail & Related papers (2023-07-09T08:27:17Z) - Comparison of Single- and Multi-Objective Optimization Quality for Evolutionary Equation Discovery [77.34726150561087]
Evolutionary differential equation discovery has proved to be a tool for obtaining equations with fewer a priori assumptions.
The proposed comparison approach is shown on classical model examples -- the Burgers equation, the wave equation, and the Korteweg-de Vries equation.
arXiv Detail & Related papers (2023-06-29T15:37:19Z) - An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
arXiv Detail & Related papers (2023-06-10T08:25:16Z) - A Stable and Scalable Method for Solving Initial Value PDEs with Neural Networks [52.5899851000193]
We develop an ODE based IVP solver which prevents the network from getting ill-conditioned and runs in time linear in the number of parameters.
We show that current methods based on this approach suffer from two key issues.
First, following the ODE produces an uncontrolled growth in the conditioning of the problem, ultimately leading to unacceptably large numerical errors.
arXiv Detail & Related papers (2023-04-28T17:28:18Z) - Symbolic Recovery of Differential Equations: The Identifiability Problem [52.158782751264205]
Symbolic recovery of differential equations is the ambitious attempt at automating the derivation of governing equations.
We provide both necessary and sufficient conditions for a function to uniquely determine the corresponding differential equation.
We then use our results to devise numerical algorithms aiming to determine whether a function solves a differential equation uniquely.
arXiv Detail & Related papers (2022-10-15T17:32:49Z) - Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
arXiv Detail & Related papers (2022-02-07T17:47:46Z) - Galerkin Neural Networks: A Framework for Approximating Variational Equations with Error Control [0.0]
We present a new approach to using neural networks to approximate the solutions of variational equations.
We use a sequence of finite-dimensional subspaces whose basis functions are realizations of a sequence of neural networks.
arXiv Detail & Related papers (2021-05-28T20:25:40Z) - Meta-Solver for Neural Ordinary Differential Equations [77.8918415523446]
We investigate how the variability in the solvers' space can improve the performance of neural ODEs.
We show that the right choice of solver parameterization can significantly affect neural ODEs models in terms of robustness to adversarial attacks.
arXiv Detail & Related papers (2021-03-15T17:26:34Z) - Solving Differential Equations Using Neural Network Solution Bundles [1.2891210250935146]
We propose that a neural network be used as a solution bundle: a collection of solutions to an ODE for various initial states and system parameters.
The solution bundle exhibits fast, parallelizable evaluation of the system state, facilitating the use of Bayesian inference for parameter estimation.
arXiv Detail & Related papers (2020-06-17T02:44:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.