Neural Chronos ODE: Unveiling Temporal Patterns and Forecasting Future and Past Trends in Time Series Data
- URL: http://arxiv.org/abs/2307.01023v1
- Date: Mon, 3 Jul 2023 13:54:50 GMT
- Title: Neural Chronos ODE: Unveiling Temporal Patterns and Forecasting Future and Past Trends in Time Series Data
- Authors: C. Coelho, M. Fernanda P. Costa and L.L. Ferrás
- Abstract summary: Experimental results demonstrate that Neural CODE outperforms Neural ODE in learning the dynamics of a spiral forward and backward in time.
We compare the performance of CODE-RNN/-GRU/-LSTM and CODE-BiRNN/-BiGRU/-BiLSTM against ODE-RNN/-GRU/-LSTM on three real-life time series data tasks.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This work introduces Neural Chronos Ordinary Differential Equations (Neural
CODE), a deep neural network architecture that fits continuous-time ODE
dynamics to predict the chronology of a system both forward and backward in
time. To train the model, we solve the ODE both as an initial value problem and
as a final value problem, similarly to Neural ODEs. We also explore two
approaches to combining Neural CODE with Recurrent Neural Networks: replacing
Neural ODE with Neural CODE (CODE-RNN), and additionally incorporating a
bidirectional RNN for full information flow in both time directions
(CODE-BiRNN), together with variants using other update cells, namely GRU and
LSTM: CODE-GRU, CODE-BiGRU, CODE-LSTM, CODE-BiLSTM.
Experimental results demonstrate that Neural CODE outperforms Neural ODE in
learning the dynamics of a spiral forward and backward in time, even with
sparser data. We also compare the performance of CODE-RNN/-GRU/-LSTM and
CODE-BiRNN/-BiGRU/-BiLSTM against ODE-RNN/-GRU/-LSTM on three real-life time
series data tasks: imputation of missing data for lower and higher dimensional
data, and forward and backward extrapolation with shorter and longer time
horizons. Our findings show that the proposed architectures converge faster,
with CODE-BiRNN/-BiGRU/-BiLSTM consistently outperforming the other
architectures on all tasks.
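As a rough illustration of the initial-value/final-value training described above, the following is a minimal sketch, assuming a PyTorch setup with the torchdiffeq odeint solver; the names ODEFunc and chronos_loss, the loss form, and the training details are illustrative assumptions, not the paper's actual code.

```python
import torch
from torchdiffeq import odeint  # assumes the torchdiffeq package is available


class ODEFunc(torch.nn.Module):
    """Learned vector field f_theta(t, y), shared by both time directions."""

    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim, hidden),
            torch.nn.Tanh(),
            torch.nn.Linear(hidden, dim),
        )

    def forward(self, t, y):
        return self.net(y)


def chronos_loss(func, y_obs, t):
    """Fit one trajectory as an initial value problem (forward from y(t_0))
    and as a final value problem (backward from y(t_N)).

    y_obs: (T, dim) observations at increasing times t: (T,).
    """
    y_fwd = odeint(func, y_obs[0], t)           # IVP: integrate forward in time
    y_bwd = odeint(func, y_obs[-1], t.flip(0))  # FVP: integrate in reversed time
    return (torch.mean((y_fwd - y_obs) ** 2)
            + torch.mean((y_bwd - y_obs.flip(0)) ** 2))


# Usage: one gradient step on a 2-D spiral trajectory.
t = torch.linspace(0.0, 1.0, 50)
y_obs = torch.exp(-t).unsqueeze(-1) * torch.stack(
    [torch.cos(6 * t), torch.sin(6 * t)], dim=-1)
func = ODEFunc(dim=2)
opt = torch.optim.Adam(func.parameters(), lr=1e-3)
opt.zero_grad()
chronos_loss(func, y_obs, t).backward()
opt.step()
```

Sharing one vector field across the IVP and the FVP is what lets the fitted dynamics be queried both forward and backward in time.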
Related papers
- Enhancing Continuous Time Series Modelling with a Latent ODE-LSTM Approach
Continuous Time Series (CTS) are found in many applications.
CTS with irregular sampling rates are difficult to model with standard Recurrent Neural Networks (RNNs).
arXiv Detail & Related papers (2023-07-11T09:01:49Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - Neural Differential Recurrent Neural Network with Adaptive Time Steps [11.999568208578799]
We propose an RNN-based model, called RNN-ODE-Adap, that uses a neural ODE to represent the time development of the hidden states.
We adaptively select time steps based on the steepness of changes in the data over time, so as to train the model more efficiently on "spike-like" time series (a rough sketch of this step-selection idea appears after this list).
arXiv Detail & Related papers (2023-06-02T16:46:47Z) - Continuous time recurrent neural networks: overview and application to
forecasting blood glucose in the intensive care unit [56.801856519460465]
Continuous time autoregressive recurrent neural networks (CTRNNs) are deep learning models that account for irregular observations.
We demonstrate the application of these models to probabilistic forecasting of blood glucose in a critical care setting.
arXiv Detail & Related papers (2023-04-14T09:39:06Z) - Neural Generalized Ordinary Differential Equations with Layer-varying
Parameters [1.3691539554014036]
We show that the layer-varying Neural-GODE is more flexible and general than the standard Neural-ODE.
The Neural-GODE retains the computational and memory benefits of the standard Neural-ODE while performing comparably to ResNets in prediction accuracy.
arXiv Detail & Related papers (2022-09-21T20:02:28Z) - On the balance between the training time and interpretability of neural
ODE for time series modelling [77.34726150561087]
The paper shows that modern neural ODE cannot be reduced to simpler models for time-series modelling applications.
The complexity of neural ODE is comparable to, or exceeds, that of conventional time-series modelling tools.
We propose a new view on time-series modelling using combined neural networks and an ODE system approach.
arXiv Detail & Related papers (2022-06-07T13:49:40Z) - Training High-Performance Low-Latency Spiking Neural Networks by
Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
Efficiently training SNNs is challenging due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which achieves high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z) - CARRNN: A Continuous Autoregressive Recurrent Neural Network for Deep
Representation Learning from Sporadic Temporal Data [1.8352113484137622]
In this paper, a novel deep learning-based model is developed for modeling multiple temporal features in sporadic data.
The proposed model, called CARRNN, uses a generalized discrete-time autoregressive model that is trainable end-to-end using neural networks modulated by time lags.
It is applied to multivariate time-series regression tasks using data provided for Alzheimer's disease progression modeling and intensive care unit (ICU) mortality rate prediction.
arXiv Detail & Related papers (2021-04-08T12:43:44Z) - Neural ODE Processes [64.10282200111983]
We introduce Neural ODE Processes (NDPs), a new class of processes determined by a distribution over Neural ODEs.
We show that our model can successfully capture the dynamics of low-dimensional systems from just a few data-points.
arXiv Detail & Related papers (2021-03-23T09:32:06Z) - Neural Ordinary Differential Equation based Recurrent Neural Network
Model [0.7233897166339269]
Neural ordinary differential equations are a promising new member of the neural network family.
This paper explores the strength of ordinary differential equations (ODEs) through a new extension.
Two new ODE-based RNN models (GRU-ODE and LSTM-ODE) can compute the hidden state and cell state at any point in time using an ODE solver (a sketch of this continuous hidden-state mechanism appears after this list).
Experiments show that these new ODE-based RNN models require less training time than Latent ODEs and conventional Neural ODEs.
arXiv Detail & Related papers (2020-05-20T01:02:29Z) - Time Dependence in Non-Autonomous Neural ODEs [74.78386661760662]
We propose a novel family of Neural ODEs with time-varying weights.
We outperform previous Neural ODE variants in both speed and representational capacity (a generic illustration of time-dependent vector fields appears after this list).
arXiv Detail & Related papers (2020-05-05T01:41:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.