Proximal Implicit ODE Solvers for Accelerating Learning Neural ODEs
- URL: http://arxiv.org/abs/2204.08621v1
- Date: Tue, 19 Apr 2022 02:55:10 GMT
- Title: Proximal Implicit ODE Solvers for Accelerating Learning Neural ODEs
- Authors: Justin Baker and Hedi Xia and Yiwei Wang and Elena Cherkaev and Akil
Narayan and Long Chen and Jack Xin and Andrea L. Bertozzi and Stanley J.
Osher and Bao Wang
- Abstract summary: This paper considers learning neural ODEs using implicit ODE solvers of different orders leveraging proximal operators.
The proximal implicit solver guarantees superiority over explicit solvers in numerical stability and computational efficiency.
- Score: 16.516974867571175
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Learning neural ODEs often requires solving very stiff ODE systems, primarily
using explicit adaptive step size ODE solvers. These solvers are
computationally expensive, requiring the use of tiny step sizes for numerical
stability and accuracy guarantees. This paper considers learning neural ODEs
using implicit ODE solvers of different orders leveraging proximal operators.
The proximal implicit solver consists of inner-outer iterations: the inner
iterations approximate each implicit update step using a fast optimization
algorithm, and the outer iterations solve the ODE system over time. The
proximal implicit ODE solver guarantees superiority over explicit solvers in
numerical stability and computational efficiency. We validate the advantages of
proximal implicit solvers over existing popular neural ODE solvers on various
challenging benchmark tasks, including learning continuous-depth graph neural
networks and continuous normalizing flows.
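The abstract's inner-outer structure can be illustrated with a minimal sketch. This is not the paper's implementation: the function name, the damped fixed-point inner iteration, and the `n_inner`/`lr` parameters are all illustrative assumptions, chosen only to show how inner iterations can approximate each implicit (backward Euler) update inside an outer time-stepping loop.

```python
import numpy as np

def proximal_implicit_euler(f, y0, t0, t1, n_steps, n_inner=20, lr=0.1):
    """Sketch of an inner-outer implicit (backward) Euler solver.

    Outer loop: march the ODE y' = f(t, y) forward in time.
    Inner loop: approximately solve the implicit update
        y_next = y + h * f(t_next, y_next)
    with a damped fixed-point iteration on the residual
    (standing in for the paper's fast inner optimization).
    """
    h = (t1 - t0) / n_steps
    y = np.asarray(y0, dtype=float)
    t = t0
    for _ in range(n_steps):
        t_next = t + h
        z = y + h * f(t, y)  # explicit Euler predictor initializes the inner loop
        for _ in range(n_inner):
            residual = z - y - h * f(t_next, z)  # zero when implicit step is solved
            z = z - lr * residual
        y, t = z, t_next
    return y
```

On a stiff linear test problem such as y' = -50y, this step remains stable at step sizes where explicit Euler diverges, which is the stability advantage the abstract refers to.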
Related papers
- Faster Training of Neural ODEs Using Gauß-Legendre Quadrature [68.9206193762751]
We propose an alternative way to speed up the training of neural ODEs.
We use Gauss-Legendre quadrature to solve integrals faster than ODE-based methods.
We also extend the idea to training SDEs using the Wong-Zakai theorem, by training a corresponding ODE and transferring the parameters.
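For context, Gauss-Legendre quadrature itself can be sketched in a few lines. This is generic quadrature, not the paper's training scheme; the function name is an illustrative assumption, and only the standard `numpy.polynomial.legendre.leggauss` nodes and weights are used.

```python
import numpy as np

def gauss_legendre_integral(f, a, b, n=16):
    """Approximate the integral of f over [a, b] with n-point Gauss-Legendre quadrature."""
    nodes, weights = np.polynomial.legendre.leggauss(n)  # nodes/weights on [-1, 1]
    t = 0.5 * (b - a) * nodes + 0.5 * (b + a)            # affine map to [a, b]
    return 0.5 * (b - a) * np.sum(weights * f(t))
```

An n-point rule is exact for polynomials up to degree 2n-1, which is why a handful of function evaluations can replace many ODE-solver steps when the quantity of interest is an integral.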
arXiv Detail & Related papers (2023-08-21T11:31:15Z) - Eigen-informed NeuralODEs: Dealing with stability and convergence issues
of NeuralODEs [0.0]
We present a technique to add knowledge of ODE properties based on eigenvalues to the training objective of a NeuralODE.
We show that the presented training process is far more robust against local minima, instabilities, and sparse data samples, and improves training convergence and performance.
arXiv Detail & Related papers (2023-02-07T14:45:39Z) - Experimental study of Neural ODE training with adaptive solver for
dynamical systems modeling [72.84259710412293]
Some ODE solvers, called adaptive solvers, can adapt their evaluation strategy to the complexity of the problem at hand.
This paper describes a simple set of experiments to show why adaptive solvers cannot be seamlessly leveraged as a black-box for dynamical systems modelling.
arXiv Detail & Related papers (2022-11-13T17:48:04Z) - Neural Basis Functions for Accelerating Solutions to High Mach Euler
Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced order approximation to the PDE.
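The POD step the summary mentions can be sketched generically. This shows only the standard SVD-based extraction of a reduced-order basis from solution snapshots, not the paper's network regression or branch-network architecture; the function name and the snapshot layout are illustrative assumptions.

```python
import numpy as np

def pod_basis(snapshots, r):
    """Compute a rank-r Proper Orthogonal Decomposition (POD) basis.

    snapshots: (n_dof, n_snapshots) matrix, one solution state per column.
    Returns the leading r left-singular vectors (the POD modes),
    which span the best rank-r subspace for the snapshot data.
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r]
```

Networks regressed onto such a basis only need to predict r coefficients instead of the full n_dof state, which is where the speed-up comes from.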
arXiv Detail & Related papers (2022-08-02T18:27:13Z) - A memory-efficient neural ODE framework based on high-level adjoint
differentiation [4.063868707697316]
We present a new neural ODE framework, PNODE, based on high-level discrete algorithmic differentiation.
We show that PNODE achieves the highest memory efficiency when compared with other reverse-accurate methods.
arXiv Detail & Related papers (2022-06-02T20:46:26Z) - Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
arXiv Detail & Related papers (2022-02-07T17:47:46Z) - Neural Stochastic Dual Dynamic Programming [99.80617899593526]
We introduce a trainable neural model that learns to map problem instances to a piece-wise linear value function.
$\nu$-SDDP can significantly reduce problem solving cost without sacrificing solution quality.
arXiv Detail & Related papers (2021-12-01T22:55:23Z) - Learning ODEs via Diffeomorphisms for Fast and Robust Integration [40.52862415144424]
Differentiable solvers are central for learning Neural ODEs.
We propose an alternative approach to learning ODEs from data.
We observe speed improvements of up to two orders of magnitude when integrating the learned ODEs.
arXiv Detail & Related papers (2021-07-04T14:32:16Z) - Meta-Solver for Neural Ordinary Differential Equations [77.8918415523446]
We investigate how variability in the space of solvers can improve the performance of neural ODEs.
We show that the right choice of solver parameterization can significantly affect the robustness of neural ODE models to adversarial attacks.
arXiv Detail & Related papers (2021-03-15T17:26:34Z) - ResNet After All? Neural ODEs and Their Numerical Solution [28.954378025052925]
We show that trained Neural Ordinary Differential Equation models actually depend on the specific numerical method used during training.
We propose a method that monitors the behavior of the ODE solver during training to adapt its step size.
arXiv Detail & Related papers (2020-07-30T11:24:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.