Training Stiff Neural Ordinary Differential Equations with Implicit Single-Step Methods
- URL: http://arxiv.org/abs/2410.05592v1
- Date: Tue, 8 Oct 2024 01:08:17 GMT
- Title: Training Stiff Neural Ordinary Differential Equations with Implicit Single-Step Methods
- Authors: Colby Fronk, Linda Petzold
- Abstract summary: Stiff systems of ordinary differential equations (ODEs) are pervasive in many science and engineering fields.
Standard neural ODE approaches struggle to learn them.
This paper proposes an approach based on single-step implicit schemes to enable neural ODEs to handle stiffness.
- Score: 3.941173292703699
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Stiff systems of ordinary differential equations (ODEs) are pervasive in many science and engineering fields, yet standard neural ODE approaches struggle to learn them. This limitation is the main barrier to the widespread adoption of neural ODEs. In this paper, we propose an approach based on single-step implicit schemes to enable neural ODEs to handle stiffness and demonstrate that our implicit neural ODE method can learn stiff dynamics. This work addresses a key limitation in current neural ODE methods, paving the way for their use in a wider range of scientific problems.
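A minimal sketch of the core idea, not the authors' implementation: one backward-Euler step y_{n+1} = y_n + h*f(y_{n+1}), with the implicit equation solved by Newton iteration. The toy MLP vector field, its dimensions, and the finite-difference Jacobian are illustrative assumptions.

```python
import numpy as np

def fd_jacobian(f, y, eps=1e-6):
    """Finite-difference Jacobian of f at y (autograd would be used in practice)."""
    n, f0 = len(y), f(y)
    J = np.zeros((n, n))
    for i in range(n):
        yp = y.copy()
        yp[i] += eps
        J[:, i] = (f(yp) - f0) / eps
    return J

def backward_euler_step(y_n, h, f, tol=1e-10, max_iter=20):
    """One implicit (backward Euler) step: solve y = y_n + h*f(y) with Newton."""
    y = y_n.copy()                                   # initial Newton guess
    for _ in range(max_iter):
        r = y - y_n - h * f(y)                       # residual of the implicit equation
        if np.linalg.norm(r) < tol:
            break
        J = np.eye(len(y)) - h * fd_jacobian(f, y)   # Newton matrix I - h*df/dy
        y -= np.linalg.solve(J, r)
    return y

# Toy MLP standing in for the learned dynamics f_theta.
rng = np.random.default_rng(0)
W1, W2 = 0.5 * rng.normal(size=(16, 3)), 0.5 * rng.normal(size=(3, 16))
f = lambda y: W2 @ np.tanh(W1 @ y)
y1 = backward_euler_step(np.ones(3), h=0.1, f=f)
```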
Related papers
- Semi-Implicit Neural Ordinary Differential Equations [5.196303789025002]
We present a semi-implicit neural ODE approach that exploits the partitionable structure of the underlying dynamics.
Our technique leads to an implicit neural network with significant computational advantages over existing approaches.
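A minimal sketch of the partitioned idea, assuming a stiff linear part A handled implicitly and a nonstiff nonlinearity g handled explicitly (an IMEX-Euler step; the paper's exact scheme may differ):

```python
import numpy as np

def imex_euler_step(y_n, h, A, g):
    """IMEX Euler: treat the stiff linear term A @ y implicitly and the
    nonstiff term g(y) explicitly:
        y_{n+1} = y_n + h * (A @ y_{n+1} + g(y_n)),
    which costs one linear solve per step."""
    n = len(y_n)
    rhs = y_n + h * g(y_n)
    return np.linalg.solve(np.eye(n) - h * A, rhs)

# Example: fast linear decay (stiff) plus a mild nonlinear coupling.
A = np.diag([-1000.0, -1.0])                  # stiff linear part (assumed)
g = lambda y: np.array([0.0, y[0] * y[1]])    # nonstiff part (assumed)
y = np.array([1.0, 1.0])
for _ in range(10):
    y = imex_euler_step(y, h=0.01, A=A, g=g)
```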
arXiv Detail & Related papers (2024-12-15T20:21:02Z)
- Training Stiff Neural Ordinary Differential Equations with Explicit Exponential Integration Methods [3.941173292703699]
Stiff ordinary differential equations (ODEs) are common in many science and engineering fields.
Standard neural ODE approaches struggle to accurately learn stiff systems.
This paper expands on our earlier work by exploring explicit exponential integration methods.
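Exponential Euler is the simplest member of this family; a sketch using SciPy's matrix exponential, with a frozen Jacobian J assumed for illustration:

```python
import numpy as np
from scipy.linalg import expm

def exponential_euler_step(y_n, h, f, J):
    """One exponential Euler step:
        y_{n+1} = y_n + h * phi1(h*J) @ f(y_n),
    with phi1(z) = (e^z - 1)/z. For nonsingular h*J,
    phi1(h*J) = (h*J)^{-1} (expm(h*J) - I)."""
    hJ = h * J
    phi1 = np.linalg.solve(hJ, expm(hJ) - np.eye(len(y_n)))
    return y_n + h * (phi1 @ f(y_n))

# Stiff linear part plus a mild nonlinearity (illustrative system).
J = np.array([[-100.0, 0.0], [0.0, -2.0]])            # frozen Jacobian (assumed)
f = lambda y: J @ y + np.array([0.0, 0.01 * y[0] ** 2])
y = np.array([1.0, 1.0])
for _ in range(5):
    y = exponential_euler_step(y, h=0.1, f=f, J=J)
```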
arXiv Detail & Related papers (2024-12-02T06:40:08Z)
- Faster Training of Neural ODEs Using Gauß-Legendre Quadrature [68.9206193762751]
We propose an alternative way to speed up the training of neural ODEs.
We use Gauss-Legendre quadrature to solve integrals faster than ODE-based methods.
We also extend the idea to training SDEs using the Wong-Zakai theorem, by training a corresponding ODE and transferring the parameters.
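The idea rests on the integral form y(T) = y(0) + ∫₀ᵀ f(y(t), t) dt, which quadrature can evaluate without stepping a solver. A minimal sketch; the stand-in trajectory y_hat replaces whatever approximation the paper constructs:

```python
import numpy as np

def gauss_legendre_integral(f, a, b, order=8):
    """Approximate the integral of f over [a, b] with Gauss-Legendre
    quadrature, rescaling the standard nodes from [-1, 1] to [a, b]."""
    nodes, weights = np.polynomial.legendre.leggauss(order)
    t = 0.5 * (b - a) * nodes + 0.5 * (b + a)
    return 0.5 * (b - a) * np.sum(weights * f(t))

# y' = -y with an (assumed) approximate trajectory along which f is evaluated.
y_hat = lambda t: np.exp(-t)      # stand-in trajectory
f = lambda t: -y_hat(t)           # vector field evaluated along it
y0, T = 1.0, 2.0
yT = y0 + gauss_legendre_integral(f, 0.0, T)   # ~ exp(-2)
```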
arXiv Detail & Related papers (2023-08-21T11:31:15Z)
- Uncertainty and Structure in Neural Ordinary Differential Equations [28.12033356095061]
We show that basic and lightweight Bayesian deep learning techniques like the Laplace approximation can be applied to neural ODEs.
We explore how mechanistic knowledge and uncertainty quantification interact on two recently proposed neural ODE frameworks.
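A generic sketch of the Laplace approximation on a toy linear-Gaussian model, not tied to either neural ODE framework: fit a Gaussian at the trained weights whose covariance is the inverse curvature of the loss.

```python
import numpy as np

# Toy linear model y = X @ w with Gaussian noise; the MAP is the ridge solution.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=50)

prior_prec, noise_var = 1.0, 0.1 ** 2
H = X.T @ X / noise_var + prior_prec * np.eye(3)   # Hessian of the neg. log posterior
w_map = np.linalg.solve(H, X.T @ y / noise_var)    # MAP weights
# Laplace posterior: N(w_map, H^{-1}); sample weights for predictive uncertainty.
cov = np.linalg.inv(H)
samples = rng.multivariate_normal(w_map, cov, size=100)
pred_std = (X @ samples.T).std(axis=1)             # predictive spread per input
```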
arXiv Detail & Related papers (2023-05-22T17:50:42Z)
- Neural Laplace: Learning diverse classes of differential equations in the Laplace domain [86.52703093858631]
We propose a unified framework for learning diverse classes of differential equations (DEs) including all the aforementioned ones.
Instead of modelling the dynamics in the time domain, we model them in the Laplace domain, where history dependencies and discontinuities in time can be represented as summations of complex exponentials.
In the experiments, Neural Laplace shows superior performance in modelling and extrapolating the trajectories of diverse classes of DEs.
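The Laplace-domain representation amounts to writing a trajectory as a sum of complex exponentials; a tiny sketch with made-up poles and coefficients:

```python
import numpy as np

def trajectory_from_poles(t, poles, coeffs):
    """x(t) = sum_k c_k * exp(s_k * t): a trajectory built from
    Laplace-domain poles s_k, taking the real part."""
    return np.real(sum(c * np.exp(s * t) for s, c in zip(poles, coeffs)))

t = np.linspace(0.0, 10.0, 200)
poles = [-0.2 + 3.0j, -0.2 - 3.0j]   # conjugate pair: a damped oscillation
coeffs = [0.5, 0.5]
x = trajectory_from_poles(t, poles, coeffs)   # smooth over long horizons
```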
arXiv Detail & Related papers (2022-06-10T02:14:59Z)
- Stiff Neural Ordinary Differential Equations [0.0]
We first show the challenges of learning a neural ODE on the classical stiff system of Robertson's problem.
We then present successful demonstrations on stiff systems from Robertson's problem and an air pollution problem.
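For reference, Robertson's problem is the standard stiff benchmark; here is its right-hand side and a stiff-solver reference solution with SciPy (a conventional setup, not the paper's code):

```python
from scipy.integrate import solve_ivp

def robertson(t, y):
    """Robertson's stiff chemical kinetics: rate constants span
    nine orders of magnitude (0.04, 1e4, 3e7)."""
    y1, y2, y3 = y
    return [-0.04 * y1 + 1.0e4 * y2 * y3,
            0.04 * y1 - 1.0e4 * y2 * y3 - 3.0e7 * y2 ** 2,
            3.0e7 * y2 ** 2]

# The implicit Radau method handles the stiffness; explicit RK45 would crawl.
sol = solve_ivp(robertson, (0.0, 1.0e5), [1.0, 0.0, 0.0],
                method="Radau", rtol=1e-8, atol=1e-10)
```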
arXiv Detail & Related papers (2021-03-29T05:24:56Z)
- Meta-Solver for Neural Ordinary Differential Equations [77.8918415523446]
We investigate how variability in the space of solvers can improve the performance of neural ODEs.
We show that the right choice of solver parameterization can significantly affect neural ODE models in terms of robustness to adversarial attacks.
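One concrete example of a parameterized solver space is the classical one-parameter family of second-order Runge-Kutta methods; the sketch below is illustrative, not necessarily the paper's parameterization:

```python
import numpy as np

def rk2_family_step(y, h, f, alpha):
    """One step of the one-parameter family of 2nd-order RK methods:
    alpha=0.5 is the midpoint rule, alpha=1.0 is Heun's method.
    Varying alpha sweeps through a continuum of valid solvers."""
    k1 = f(y)
    k2 = f(y + alpha * h * k1)
    return y + h * ((1 - 1 / (2 * alpha)) * k1 + (1 / (2 * alpha)) * k2)

f = lambda y: -y                                       # toy dynamics
y_mid = rk2_family_step(np.array([1.0]), 0.1, f, alpha=0.5)
y_heun = rk2_family_step(np.array([1.0]), 0.1, f, alpha=1.0)
```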
arXiv Detail & Related papers (2021-03-15T17:26:34Z)
- Hypersolvers: Toward Fast Continuous-Depth Models [16.43439140464003]
We introduce hypersolvers, neural networks designed to solve ODEs with low overhead and theoretical guarantees on accuracy.
The synergistic combination of hypersolvers and Neural ODEs allows for cheap inference and unlocks a new frontier for practical application of continuous-depth models.
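A minimal sketch of the pattern: a cheap base step plus a learned higher-order correction, with an untrained random map standing in for the correction network g_theta:

```python
import numpy as np

def hypersolver_euler_step(y, h, f, g_theta):
    """Euler base step plus a learned O(h^2) correction:
        y_{n+1} = y_n + h * f(y_n) + h^2 * g_theta(y_n).
    g_theta is trained to absorb the local truncation error."""
    return y + h * f(y) + h ** 2 * g_theta(y)

# Stand-in "network": a fixed random linear map (untrained, for shape only).
rng = np.random.default_rng(0)
W = 0.01 * rng.normal(size=(2, 2))
g_theta = lambda y: W @ y
f = lambda y: np.array([y[1], -y[0]])   # harmonic oscillator
y = np.array([1.0, 0.0])
for _ in range(100):
    y = hypersolver_euler_step(y, h=0.05, f=f, g_theta=g_theta)
```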
arXiv Detail & Related papers (2020-07-19T06:31:31Z)
- STEER: Simple Temporal Regularization For Neural ODEs [80.80350769936383]
We propose a new regularization technique: randomly sampling the end time of the ODE during training.
The proposed regularization is simple to implement, has negligible overhead and is effective across a wide variety of tasks.
We show through experiments on normalizing flows, time series models and image recognition that the proposed regularization can significantly decrease training time and even improve performance over baseline models.
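The regularizer reduces to one line inside the training loop: draw the end time from an interval around the nominal T instead of fixing it. A sketch with a placeholder solver and loss; the half-width b is an assumed hyperparameter:

```python
import numpy as np

rng = np.random.default_rng(0)
T, b = 1.0, 0.5          # nominal end time and sampling half-width (assumed)

def train_step(y0, solve, loss):
    """One STEER-style iteration: integrate to a random end time
    rather than the fixed T, then evaluate the loss there."""
    t_end = rng.uniform(T - b, T + b)   # randomized end time
    y_end = solve(y0, t_end)            # any differentiable ODE solve
    return loss(y_end)

# Placeholder solve/loss so the stub runs.
solve = lambda y0, t: y0 * np.exp(-t)   # exact solution of y' = -y
loss = lambda y: float(np.sum(y ** 2))
val = train_step(np.array([1.0]), solve, loss)
```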
arXiv Detail & Related papers (2020-06-18T17:44:50Z)
- Towards Understanding Normalization in Neural ODEs [71.26657499537366]
We show that it is possible to achieve 93% accuracy on the CIFAR-10 classification task.
This is the highest reported accuracy among neural ODEs tested on this problem.
arXiv Detail & Related papers (2020-04-20T11:54:55Z)
- Interpolation Technique to Speed Up Gradients Propagation in Neural ODEs [71.26657499537366]
We propose a simple interpolation-based method for the efficient approximation of gradients in neural ODE models.
We compare it with the reverse dynamic method to train neural ODEs on classification, density estimation, and inference approximation tasks.
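A sketch of the general idea, with np.interp standing in for the paper's interpolation scheme: store the forward trajectory at a coarse grid, then interpolate states wherever the backward pass needs them instead of re-solving the dynamics:

```python
import numpy as np

# Forward pass: store the trajectory at a coarse grid of checkpoints.
t_grid = np.linspace(0.0, 1.0, 11)
y_grid = np.exp(-t_grid)                 # stand-in for solver output of y' = -y

def y_at(t):
    """The backward pass queries states at arbitrary times; interpolate
    the stored trajectory instead of integrating reverse dynamics."""
    return np.interp(t, t_grid, y_grid)

# e.g. the adjoint needs y at step times chosen on the fly:
needed = np.array([0.13, 0.57, 0.91])
states = y_at(needed)                    # cheap lookups, no second ODE solve
```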
arXiv Detail & Related papers (2020-03-11T13:15:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.