Neural Ordinary Differential Equation based Recurrent Neural Network Model
- URL: http://arxiv.org/abs/2005.09807v1
- Date: Wed, 20 May 2020 01:02:29 GMT
- Title: Neural Ordinary Differential Equation based Recurrent Neural Network Model
- Authors: Mansura Habiba, Barak A. Pearlmutter
- Abstract summary: Neural differential equations are a promising new member of the neural network family.
This paper explores the strength of the ordinary differential equation (ODE) with a new extension.
Two new ODE-based RNN models (GRU-ODE and LSTM-ODE) can compute the hidden state and cell state at any point in time using an ODE solver.
Experiments show that these new ODE-based RNN models require less training time than Latent ODEs and conventional Neural ODEs.
- Score: 0.7233897166339269
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural differential equations are a promising new member of the neural
network family. They show the potential of differential equations for time
series data analysis. In this paper, the strength of the ordinary differential
equation (ODE) is explored with a new extension. The main goal of this work is
to answer the following questions: (i) can ODEs be used to redefine existing
neural network models? (ii) can Neural ODEs solve the irregular-sampling-rate
challenge that continuous time series, with their varying length and dynamic
nature, pose for existing neural network models? (iii) how can the training and
evaluation time of existing Neural ODE systems be reduced? This work leverages
the mathematical foundation of ODEs to redesign traditional RNNs such as the
Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU). The main
contribution of this paper is the design of two new ODE-based RNN models
(GRU-ODE and LSTM-ODE) which can compute the hidden state and cell state at any
point in time using an ODE solver, greatly reducing the computational overhead
of maintaining those states. The performance of these two new models in
learning continuous time series with irregular sampling rates is then
evaluated. Experiments show that the new ODE-based RNN models require less
training time than Latent ODEs and conventional Neural ODEs, reach higher
accuracy more quickly, and have a simpler network design than previous neural
ODE systems.
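To make the mechanism concrete, below is a minimal sketch of the ODE-based RNN idea described in the abstract: between observations the hidden state evolves continuously under learned dynamics dh/dt = f(h), integrated by an ODE solver, and at each observation a standard GRU update is applied. This is an illustration under assumptions, not the authors' implementation: the dynamics function, the fixed-step Euler solver, and all names (GRUODECell, gru_ode_forward) are hypothetical, and a trained model would fit the parameters by backpropagation rather than using random weights.

```python
# Hedged sketch of a GRU-ODE-style cell: continuous hidden-state evolution
# between observations plus a discrete GRU update at each observation.
# All names and the specific dynamics are illustrative assumptions.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUODECell:
    """Evolves a hidden state continuously between observations with an ODE
    solver, then applies a standard GRU update when an observation arrives."""

    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        h, d = hidden_size, input_size
        # GRU gate parameters (update z, reset r, candidate n).
        self.Wz = rng.normal(0, 0.1, (h, d + h)); self.bz = np.zeros(h)
        self.Wr = rng.normal(0, 0.1, (h, d + h)); self.br = np.zeros(h)
        self.Wn = rng.normal(0, 0.1, (h, d + h)); self.bn = np.zeros(h)
        # Parameters of the assumed continuous dynamics dh/dt = tanh(A h + b).
        self.A = rng.normal(0, 0.1, (h, h)); self.b = np.zeros(h)

    def dynamics(self, h):
        # Continuous-time drift of the hidden state between observations.
        return np.tanh(self.A @ h + self.b)

    def evolve(self, h, dt, n_steps=10):
        # Fixed-step Euler integration standing in for a generic ODE solver.
        step = dt / n_steps
        for _ in range(n_steps):
            h = h + step * self.dynamics(h)
        return h

    def update(self, h, x):
        # Discrete GRU update applied at an observation x.
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh + self.bz)
        r = sigmoid(self.Wr @ xh + self.br)
        n = np.tanh(self.Wn @ np.concatenate([x, r * h]) + self.bn)
        return (1 - z) * n + z * h

def gru_ode_forward(cell, times, xs, h0):
    """Run over an irregularly sampled series: evolve h across each time gap
    with the ODE solver, then apply the GRU update at the observation."""
    h, t_prev, states = h0, times[0], []
    for t, x in zip(times, xs):
        h = cell.evolve(h, t - t_prev)   # continuous evolution over the gap
        h = cell.update(h, x)            # discrete jump at the observation
        states.append(h); t_prev = t
    return states

# Toy usage on an irregularly sampled 2-dimensional series.
cell = GRUODECell(input_size=2, hidden_size=8)
times = [0.0, 0.3, 1.1, 1.2]                      # irregular sampling times
rng = np.random.default_rng(1)
xs = [rng.normal(size=2) for _ in range(4)]
hs = gru_ode_forward(cell, times, xs, np.zeros(8))
print(len(hs), hs[-1].shape)                      # 4 states, each shape (8,)
```

Because the solver can evolve h over any gap t_i - t_{i-1}, the hidden state is defined at arbitrary query times, which is what lets models of this kind handle irregularly sampled series without imputation.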
Related papers
- Neural Chronos ODE: Unveiling Temporal Patterns and Forecasting Future and Past Trends in Time Series Data [0.0]
Experimental results demonstrate that Neural CODE outperforms Neural ODE in learning the dynamics of a spiral forward and backward in time.
We compare the performance of CODE-RNN/-GRU/-LSTM and CODE-BiRNN/-BiGRU/-BiLSTM against ODE-RNN/-GRU/-LSTM on three real-life time series data tasks.
arXiv Detail & Related papers (2023-07-03T13:54:50Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key to the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Continuous time recurrent neural networks: overview and application to forecasting blood glucose in the intensive care unit [56.801856519460465]
Continuous-time autoregressive recurrent neural networks (CTRNNs) are deep learning models that account for irregular observations.
We demonstrate the application of these models to probabilistic forecasting of blood glucose in a critical care setting.
arXiv Detail & Related papers (2023-04-14T09:39:06Z)
- Neural Generalized Ordinary Differential Equations with Layer-varying Parameters [1.3691539554014036]
We show that the layer-varying Neural-GODE is more flexible and general than the standard Neural-ODE.
The Neural-GODE enjoys computational and memory benefits while performing comparably to ResNets in prediction accuracy.
arXiv Detail & Related papers (2022-09-21T20:02:28Z)
- Neural Laplace: Learning diverse classes of differential equations in the Laplace domain [86.52703093858631]
We propose a unified framework for learning diverse classes of differential equations (DEs), including all the aforementioned ones.
Instead of modelling the dynamics in the time domain, we model them in the Laplace domain, where history dependencies and discontinuities in time can be represented as summations of complex exponentials.
In the experiments, Neural Laplace shows superior performance in modelling and extrapolating the trajectories of diverse classes of DEs.
arXiv Detail & Related papers (2022-06-10T02:14:59Z)
- On the balance between the training time and interpretability of neural ODE for time series modelling [77.34726150561087]
The paper shows that modern neural ODEs cannot be reduced to simpler models for time-series modelling applications.
The complexity of neural ODEs is comparable to, or exceeds, that of conventional time-series modelling tools.
We propose a new view on time-series modelling that combines neural networks with an ODE system approach.
arXiv Detail & Related papers (2022-06-07T13:49:40Z)
- Piecewise-constant Neural ODEs [41.116259317376475]
We make a piecewise-constant approximation to Neural ODEs to mitigate these issues.
Our model can be integrated exactly via Euler integration and can generate autoregressive samples in 3-20 times fewer steps than comparable RNN and ODE-RNN models (a brief sketch of the exact Euler step appears after this list).
arXiv Detail & Related papers (2021-06-11T21:46:55Z)
- Accelerating Neural ODEs Using Model Order Reduction [0.0]
We show that mathematical model order reduction methods can be used for compressing and accelerating Neural ODEs.
We implement our novel compression method by developing Neural ODEs that integrate the necessary subspace projections and operations as layers of the neural network.
arXiv Detail & Related papers (2021-05-28T19:27:09Z)
- Neural ODE Processes [64.10282200111983]
We introduce Neural ODE Processes (NDPs), a new class of processes determined by a distribution over Neural ODEs.
We show that our model can successfully capture the dynamics of low-dimensional systems from just a few data points.
arXiv Detail & Related papers (2021-03-23T09:32:06Z)
- Time Dependence in Non-Autonomous Neural ODEs [74.78386661760662]
We propose a novel family of Neural ODEs with time-varying weights.
We outperform previous Neural ODE variants in both speed and representational capacity.
arXiv Detail & Related papers (2020-05-05T01:41:46Z)
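As flagged in the Piecewise-constant Neural ODEs entry above, dynamics held constant on each interval can be integrated exactly with a single Euler step per segment: if dh/dt = c_k on [t_k, t_{k+1}], then h(t_{k+1}) = h(t_k) + (t_{k+1} - t_k) c_k with no discretization error. The sketch below illustrates only this integration property; the function and variable names are ours, not the paper's, and a real model would produce the per-segment constants from a learned network.

```python
# Hedged sketch: exact integration of piecewise-constant dynamics.
# On each segment the derivative is a fixed vector, so one Euler step
# per segment recovers the trajectory exactly (illustrative only).
import numpy as np

def integrate_piecewise_constant(h0, breakpoints, constants):
    """breakpoints: increasing times t_0..t_K; constants[k] is the vector
    derivative held on [t_k, t_{k+1}). Returns h at every breakpoint."""
    h = np.asarray(h0, dtype=float)
    out = [h]
    for k in range(len(constants)):
        dt = breakpoints[k + 1] - breakpoints[k]
        h = h + dt * np.asarray(constants[k])  # one exact Euler step
        out.append(h)
    return out

# Toy usage: three segments, 2-dimensional state.
hs = integrate_piecewise_constant(
    h0=[0.0, 1.0],
    breakpoints=[0.0, 0.5, 1.5, 2.0],
    constants=[[1.0, 0.0], [0.0, -1.0], [2.0, 2.0]],
)
print(hs[-1])  # state after all three segments: [1.5, 1.0]
```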
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.