Learning Neural Event Functions for Ordinary Differential Equations
- URL: http://arxiv.org/abs/2011.03902v4
- Date: Wed, 27 Oct 2021 17:16:56 GMT
- Title: Learning Neural Event Functions for Ordinary Differential Equations
- Authors: Ricky T. Q. Chen, Brandon Amos, Maximilian Nickel
- Abstract summary: We extend Neural ODEs to implicitly defined termination criteria modeled by neural event functions.
We propose simulation-based training of point processes with applications in discrete control.
- Score: 31.474420819149724
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The existing Neural ODE formulation relies on an explicit knowledge of the
termination time. We extend Neural ODEs to implicitly defined termination
criteria modeled by neural event functions, which can be chained together and
differentiated through. Neural Event ODEs are capable of modeling discrete and
instantaneous changes in a continuous-time system, without prior knowledge of
when these changes should occur or how many such changes should exist. We test
our approach in modeling hybrid discrete- and continuous-time systems such as
switching dynamical systems and collision in multi-body systems, and we propose
simulation-based training of point processes with applications in discrete
control.
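To make the event mechanism concrete, below is a minimal sketch of an event-terminated ODE solve in the style of a bouncing ball, written against the odeint_event interface of the torchdiffeq library (maintained by the first author). The argument names and the shape of the returned solution are assumptions from our recollection of that library, not a verbatim reproduction of the paper's code, and should be checked against the library's documentation.

```python
# Minimal sketch: an ODE solve that terminates when an event function
# crosses zero, assuming torchdiffeq's odeint_event interface.
import torch
import torch.nn as nn
from torchdiffeq import odeint, odeint_event

class BallDynamics(nn.Module):
    """Free fall under gravity; state y = (height, velocity)."""
    def forward(self, t, y):
        height, velocity = y
        return torch.stack([velocity, torch.full_like(velocity, -9.8)])

def event_fn(t, y):
    # The event fires when this scalar crosses zero, i.e. when the ball
    # reaches the ground. In a Neural Event ODE this function is itself
    # a neural network of (t, y), learned without knowing the event time.
    return y[0]

y0 = torch.tensor([10.0, 0.0])  # 10 m up, initially at rest
t0 = torch.tensor(0.0)

# event_t is the implicitly defined termination time; gradients flow
# through it, so a parameterized event_fn can be trained end to end.
event_t, solution = odeint_event(
    BallDynamics(), y0, t0,
    event_fn=event_fn,
    atol=1e-8, rtol=1e-8,
    odeint_interface=odeint,
)
y_event = solution[-1]  # state at the event time
```

Chaining events as described in the abstract then amounts to applying an instantaneous update to y_event (e.g. flipping the velocity for a bounce) and restarting the solve from event_t.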
Related papers
- Individualized Dosing Dynamics via Neural Eigen Decomposition [51.62933814971523]
We introduce the Neural Eigen Stochastic Differential Equation algorithm (NESDE).
NESDE provides individualized modeling, tunable generalization to new treatment policies, and fast, continuous, closed-form prediction.
We demonstrate the robustness of NESDE in both synthetic and real medical problems, and use the learned dynamics to publish simulated medical gym environments.
arXiv Detail & Related papers (2023-06-24T17:01:51Z) - Anamnesic Neural Differential Equations with Orthogonal Polynomial Projections [6.345523830122166]
We propose PolyODE, a formulation that enforces long-range memory and preserves a global representation of the underlying dynamical system.
Our construction is backed by favourable theoretical guarantees and we demonstrate that it outperforms previous works in the reconstruction of past and future data.
arXiv Detail & Related papers (2023-03-03T10:49:09Z) - Neural Laplace: Learning diverse classes of differential equations in the Laplace domain [86.52703093858631]
We propose a unified framework for learning diverse classes of differential equations (DEs) including all the aforementioned ones.
Instead of modelling the dynamics in the time domain, we model it in the Laplace domain, where the history-dependencies and discontinuities in time can be represented as summations of complex exponentials.
In the experiments, Neural Laplace shows superior performance in modelling and extrapolating the trajectories of diverse classes of DEs.
arXiv Detail & Related papers (2022-06-10T02:14:59Z) - Capturing Actionable Dynamics with Structured Latent Ordinary Differential Equations [68.62843292346813]
We propose a structured latent ODE model that captures system input variations within its latent representation.
Building on a static variable specification, our model learns factors of variation for each input to the system, thus separating the effects of the system inputs in the latent space.
arXiv Detail & Related papers (2022-02-25T20:00:56Z) - Neural Hybrid Automata: Learning Dynamics with Multiple Modes and Stochastic Transitions [36.81150424798492]
We introduce Neural Hybrid Automata (NHAs), a recipe for learning stochastic hybrid system (SHS) dynamics without a priori knowledge of the number of modes and inter-modal transition dynamics.
NHAs provide a systematic inference method based on normalizing flows, neural differential equations and self-supervision.
We showcase NHAs on several tasks, including mode recovery and flow learning in systems with transitions, and end-to-end learning of hierarchical robot controllers.
arXiv Detail & Related papers (2021-06-08T08:04:39Z) - Neural ODE Processes [64.10282200111983]
We introduce Neural ODE Processes (NDPs), a new class of processes determined by a distribution over Neural ODEs.
We show that our model can successfully capture the dynamics of low-dimensional systems from just a few data-points.
arXiv Detail & Related papers (2021-03-23T09:32:06Z) - Bayesian Neural Ordinary Differential Equations [0.9422623204346027]
We demonstrate the successful integration of Neural ODEs with Bayesian inference frameworks.
We achieve a posterior sample accuracy of 98.5% on the test ensemble of 10,000 images.
This gives a scientific machine learning tool for probabilistic estimation of uncertainties.
arXiv Detail & Related papers (2020-12-14T04:05:26Z) - STEER: Simple Temporal Regularization For Neural ODEs [80.80350769936383]
We propose a new regularization technique: randomly sampling the end time of the ODE during training (see the sketch after this list).
The proposed regularization is simple to implement, has negligible overhead and is effective across a wide variety of tasks.
We show through experiments on normalizing flows, time series models and image recognition that the proposed regularization can significantly decrease training time and even improve performance over baseline models.
arXiv Detail & Related papers (2020-06-18T17:44:50Z) - Learning Continuous-Time Dynamics by Stochastic Differential Networks [32.63114111531396]
We propose a flexible continuous-time recurrent neural network named Variational Stochastic Differential Network (VSDN).
VSDN embeds the complicated dynamics of sporadic time series via neural Stochastic Differential Equations (SDEs).
We show that VSDNs outperform state-of-the-art continuous-time deep learning models and achieve remarkable performance on prediction and interpolation tasks for sporadic time series.
arXiv Detail & Related papers (2020-06-11T01:40:34Z) - Stochasticity in Neural ODEs: An Empirical Study [68.8204255655161]
Regularization of neural networks (e.g. dropout) is a widespread technique in deep learning that allows for better generalization.
We show that data augmentation during training improves the performance of both the deterministic and stochastic versions of the same model.
However, the improvement obtained by data augmentation completely eliminates the empirical gains of stochastic regularization, making the performance difference between neural ODEs and neural SDEs negligible.
arXiv Detail & Related papers (2020-02-22T22:12:56Z)
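The STEER entry above is simple enough to sketch. The following is a hypothetical illustration of its core idea, perturbing the ODE end time uniformly at random at each training step, written against torchdiffeq's odeint; the names (sample_end_time, training_step, b, loss_fn) and the exact perturbation form are our illustration, not the paper's precise recipe.

```python
# Hypothetical sketch of STEER-style temporal regularization: instead of
# always integrating to a fixed end time t1, sample the end time uniformly
# around t1 at each training step. All names here are illustrative.
import torch
from torchdiffeq import odeint

def sample_end_time(t1: float, b: float) -> float:
    # t1' ~ Uniform(t1 - b, t1 + b), with b > 0 small enough that t1' > t0.
    return t1 + (2.0 * torch.rand(()).item() - 1.0) * b

def training_step(func, y0, t0, t1, b, loss_fn, optimizer):
    t = torch.tensor([t0, sample_end_time(t1, b)])
    y_end = odeint(func, y0, t)[-1]  # state at the randomly sampled end time
    loss = loss_fn(y_end)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```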