Experimental study of Neural ODE training with adaptive solver for
dynamical systems modeling
- URL: http://arxiv.org/abs/2211.06972v1
- Date: Sun, 13 Nov 2022 17:48:04 GMT
- Title: Experimental study of Neural ODE training with adaptive solver for
dynamical systems modeling
- Authors: Alexandre Allauzen and Thiago Petrilli Maffei Dardis and Hannah Plath
- Abstract summary: Some ODE solvers, called adaptive, can adapt their evaluation strategy to the complexity of the problem at hand.
This paper describes a simple set of experiments showing why adaptive solvers cannot be seamlessly leveraged as a black box for dynamical systems modelling.
- Score: 72.84259710412293
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Neural Ordinary Differential Equations (ODEs) were recently introduced as a
new family of neural network models that relies on black-box ODE solvers for
inference and training. Some ODE solvers, called adaptive, can adapt their
evaluation strategy depending on the complexity of the problem at hand, opening
great perspectives in machine learning. However, this paper describes a simple
set of experiments to show why adaptive solvers cannot be seamlessly leveraged
as a black box for dynamical systems modelling. Taking the Lorenz'63 system
as a showcase, we show that a naive application of Fehlberg's method does
not yield the expected results. Moreover, a simple workaround is proposed that
assumes a tighter interaction between the solver and the training strategy. The
code is available on GitHub:
https://github.com/Allauzen/adaptive-step-size-neural-ode
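Below is a minimal, self-contained sketch of the setup the abstract describes: an MLP vector field trained to match a Lorenz'63 trajectory, with an adaptive solver inside the training loop. It is an illustration under assumptions, not the authors' code (see their repository above); torchdiffeq's dopri5 stands in for the Runge-Kutta-Fehlberg method, and the network size, tolerances, and horizon are arbitrary choices.

```python
# Hedged sketch: Neural ODE training on Lorenz'63 with an adaptive solver.
# Not the paper's implementation; dopri5 stands in for Fehlberg's method,
# and all hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn
from torchdiffeq import odeint  # pip install torchdiffeq


def lorenz63(t, xyz, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Classical Lorenz'63 vector field."""
    x, y, z = xyz[..., 0], xyz[..., 1], xyz[..., 2]
    return torch.stack(
        [sigma * (y - x), x * (rho - z) - y, x * y - beta * z], dim=-1
    )


class VectorField(nn.Module):
    """MLP approximation of the unknown dynamics."""

    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.Tanh(), nn.Linear(hidden, 3)
        )

    def forward(self, t, x):
        return self.net(x)


t = torch.linspace(0.0, 2.0, 201)
x0 = torch.tensor([1.0, 1.0, 1.0])
with torch.no_grad():
    # Reference trajectory from a fine fixed-step integration.
    target = odeint(lorenz63, x0, t, method="rk4")

func = VectorField()
opt = torch.optim.Adam(func.parameters(), lr=1e-3)
for step in range(1000):
    opt.zero_grad()
    # The adaptive solver, not a fixed grid, decides how many function
    # evaluations each forward pass costs; this is the paper's concern.
    pred = odeint(func, x0, t, method="dopri5", rtol=1e-5, atol=1e-7)
    loss = ((pred - target) ** 2).mean()
    loss.backward()
    opt.step()
```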
Related papers
- Bridging Logic and Learning: A Neural-Symbolic Approach for Enhanced Reasoning in Neural Models (ASPER) [0.13053649021965597]
This paper introduces an approach designed to improve the performance of neural models in learning reasoning tasks.
It achieves this by integrating Answer Set Programming solvers and domain-specific expertise.
The model shows a significant improvement in solving Sudoku puzzles using only 12 puzzles for training and testing.
arXiv Detail & Related papers (2023-12-18T19:06:00Z)
- On the balance between the training time and interpretability of neural ODE for time series modelling [77.34726150561087]
The paper shows that modern neural ODEs cannot be reduced to simpler models for time-series modelling applications.
The complexity of neural ODEs is comparable to, or exceeds, that of conventional time-series modelling tools.
We propose a new view on time-series modelling using combined neural networks and an ODE system approach.
arXiv Detail & Related papers (2022-06-07T13:49:40Z)
- Learning ODEs via Diffeomorphisms for Fast and Robust Integration [40.52862415144424]
Differentiable solvers are central to learning Neural ODEs.
We propose an alternative approach to learning ODEs from data.
We observe improvements of up to two orders of magnitude when integrating the learned ODEs.
arXiv Detail & Related papers (2021-07-04T14:32:16Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Meta-Solver for Neural Ordinary Differential Equations [77.8918415523446]
We investigate how variability in the space of solvers can improve the performance of neural ODEs.
We show that the right choice of solver parameterization can significantly affect neural ODE models in terms of robustness to adversarial attacks.
arXiv Detail & Related papers (2021-03-15T17:26:34Z)
- Artificial neural network as a universal model of nonlinear dynamical systems [0.0]
The map is built as an artificial neural network whose weights encode the modeled system; a toy sketch of this one-step-map idea follows this entry.
We consider the Lorenz system, the Rössler system and also the Hindmarsh-Rose neuron.
High similarity is observed for visual images of attractors, power spectra, bifurcation diagrams and Lyapunov exponents.
arXiv Detail & Related papers (2021-03-06T16:02:41Z)
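A hedged toy sketch of the one-step-map idea summarized above: a small network is fit to pairs (x_n, x_{n+1}) from a Lorenz trajectory and then iterated freely. The architecture, step size, and training budget are illustrative assumptions, not the paper's setup.

```python
# Hedged sketch: a neural network as a discrete-time map of Lorenz'63.
# All sizes and hyperparameters are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn


def lorenz_step(xyz, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One explicit Euler step of Lorenz'63, used only to build training pairs.
    x, y, z = xyz
    return xyz + dt * np.array(
        [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]
    )


# Build (x_n, x_{n+1}) pairs from a single trajectory.
traj = [np.array([1.0, 1.0, 1.0])]
for _ in range(5000):
    traj.append(lorenz_step(traj[-1]))
traj = torch.tensor(np.array(traj), dtype=torch.float32)
inputs, targets = traj[:-1], traj[1:]

# Small MLP as the learned one-step map.
net = nn.Sequential(nn.Linear(3, 64), nn.Tanh(), nn.Linear(64, 3))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    loss = ((net(inputs) - targets) ** 2).mean()
    loss.backward()
    opt.step()

# Iterate the learned map; the paper compares the resulting attractor,
# spectra, bifurcation diagrams and Lyapunov exponents with the original.
x = torch.tensor([1.0, 1.0, 1.0])
with torch.no_grad():
    rollout = [x]
    for _ in range(5000):
        x = net(x)
        rollout.append(x)
rollout = torch.stack(rollout)
```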
- ResNet After All? Neural ODEs and Their Numerical Solution [28.954378025052925]
We show that trained Neural Ordinary Differential Equation models actually depend on the specific numerical method used during training.
We propose a method that monitors the behavior of the ODE solver during training to adapt its step size; a sketch of such solver monitoring follows this entry.
arXiv Detail & Related papers (2020-07-30T11:24:05Z)
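A hedged illustration of solver monitoring during Neural ODE training: counting the dynamics function evaluations per solve is a common proxy for the adaptive solver's step-size behavior. The wrapper below is a widely used idiom, not the method proposed in the paper; torchdiffeq and the tolerances are assumptions.

```python
# Hedged sketch: watch the adaptive solver's cost during training.
# The NFE (number of function evaluations) counter is a common Neural ODE
# idiom, not the paper's specific monitoring method.
import torch
import torch.nn as nn
from torchdiffeq import odeint  # pip install torchdiffeq


class CountedField(nn.Module):
    """Wraps a dynamics network and counts how often the solver calls it."""

    def __init__(self, net):
        super().__init__()
        self.net = net
        self.nfe = 0

    def forward(self, t, x):
        self.nfe += 1
        return self.net(x)


field = CountedField(
    nn.Sequential(nn.Linear(3, 64), nn.Tanh(), nn.Linear(64, 3))
)
t = torch.linspace(0.0, 1.0, 11)
x0 = torch.randn(3)

field.nfe = 0
_ = odeint(field, x0, t, method="dopri5", rtol=1e-4, atol=1e-6)
print("function evaluations this solve:", field.nfe)
# In a training loop, logging field.nfe per iteration reveals when the
# adaptive solver starts shrinking its steps on the learned dynamics.
```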
- Neural Control Variates [71.42768823631918]
We show that a set of neural networks can address the challenge of finding a good approximation of the integrand.
We derive a theoretically optimal, variance-minimizing loss function, and propose an alternative, composite loss for stable online training in practice.
Specifically, we show that the learned light-field approximation is of sufficient quality for high-order bounces, allowing us to omit the error correction and thereby dramatically reduce the noise at the cost of negligible visible bias. A toy illustration of the control-variate idea follows this entry.
arXiv Detail & Related papers (2020-06-02T11:17:55Z)
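To ground the entry above, here is a hedged toy sketch of the classical control-variates idea the paper builds on: a cheap approximation g with a known integral absorbs most of the variance, and Monte Carlo estimates only the residual. The neural version learns g; this illustration fixes g by hand, and the integrand and sample count are arbitrary assumptions.

```python
# Hedged sketch of the classical control-variates idea: a cheap g with a
# known integral G absorbs most of the variance, and Monte Carlo only
# estimates the residual f - g. The paper learns g with a neural network;
# this toy fixes g by hand.
import numpy as np

rng = np.random.default_rng(0)

f = lambda x: np.exp(x)  # integrand on [0, 1]; true integral = e - 1
g = lambda x: 1.0 + x    # control variate ~ f; known integral G = 1.5
G = 1.5

x = rng.uniform(0.0, 1.0, 10_000)
plain = f(x).mean()            # vanilla Monte Carlo estimate
cv = G + (f(x) - g(x)).mean()  # control-variate estimate

print(f"plain MC:        {plain:.6f}")
print(f"control variate: {cv:.6f}")
print(f"true value:      {np.e - 1:.6f}")
# The residual f - g has far smaller variance than f, so the CV estimator
# converges with far fewer samples.
```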
- Stochasticity in Neural ODEs: An Empirical Study [68.8204255655161]
Regularization of neural networks (e.g. dropout) is a widespread technique in deep learning that allows for better generalization.
We show that data augmentation during training improves the performance of both deterministic and stochastic versions of the same model.
However, the improvements obtained by the data augmentation completely eliminate the empirical regularization gains, making the performance difference between neural ODE and neural SDE negligible.
arXiv Detail & Related papers (2020-02-22T22:12:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.