Delay-SDE-net: A deep learning approach for time series modelling with
memory and uncertainty estimates
- URL: http://arxiv.org/abs/2303.08587v1
- Date: Tue, 14 Mar 2023 14:31:38 GMT
- Title: Delay-SDE-net: A deep learning approach for time series modelling with
memory and uncertainty estimates
- Authors: Mari Dahl Eggen and Alise Danielle Midtfjord
- Abstract summary: This paper presents the Delay-SDE-net, a neural network
model based on stochastic delay differential equations (SDDEs).
The use of SDDEs with multiple delays as the modelling framework makes it a
suitable model for time series with memory effects.
We derive the theoretical error of the Delay-SDE-net and analyze the convergence rate numerically.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modelling time series accurately is important within a wide range
of fields. As the world is generally too complex to be modelled exactly, it is
often meaningful to assess the probability that a dynamical system is in a
specific state. This paper presents the Delay-SDE-net, a neural network model
based on
stochastic delay differential equations (SDDEs). The use of SDDEs with multiple
delays as modelling framework makes it a suitable model for time series with
memory effects, as it includes memory through previous states of the system.
The stochastic part of the Delay-SDE-net provides a basis for estimating
uncertainty in modelling, and is split into two neural networks to account for
aleatoric and epistemic uncertainty. The uncertainty is provided instantly,
making the model suitable for applications where time is scarce. We derive the
theoretical error of the Delay-SDE-net and analyze the convergence rate
numerically. In comparisons with similar models, the Delay-SDE-net
consistently performs best, both in predicting time series values and in
estimating uncertainties.
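To make the modelling idea concrete, here is a minimal PyTorch sketch of an
Euler-Maruyama step for an SDDE with neural drift and diffusion. All names and
layer sizes are invented for illustration; this is not the authors' code. The
drift sees the current state together with its delayed values (the memory),
and two separate positive-output networks supply the aleatoric and epistemic
variance terms:

```python
# Minimal sketch of a Delay-SDE-net-style step (illustrative, not the authors' code).
import torch
import torch.nn as nn

class DelaySDESketch(nn.Module):
    def __init__(self, dim: int, n_delays: int, hidden: int = 64):
        super().__init__()
        in_dim = dim * (1 + n_delays)  # current state concatenated with delayed states
        def mlp(out_positive: bool) -> nn.Sequential:
            layers = [nn.Linear(in_dim, hidden), nn.Tanh(), nn.Linear(hidden, dim)]
            if out_positive:
                layers.append(nn.Softplus())  # variances must be positive
            return nn.Sequential(*layers)
        self.drift = mlp(out_positive=False)
        self.var_aleatoric = mlp(out_positive=True)  # noise inherent in the system
        self.var_epistemic = mlp(out_positive=True)  # model/data uncertainty

    def step(self, x_t: torch.Tensor, x_delays: list, dt: float):
        """One Euler-Maruyama step; returns the next state and its predictive std."""
        z = torch.cat([x_t, *x_delays], dim=-1)
        sigma = torch.sqrt(self.var_aleatoric(z) + self.var_epistemic(z))
        x_next = x_t + self.drift(z) * dt + sigma * (dt ** 0.5) * torch.randn_like(x_t)
        return x_next, sigma  # sigma is available immediately, without sampling

model = DelaySDESketch(dim=2, n_delays=2)
x_next, sigma = model.step(torch.randn(1, 2),
                           [torch.randn(1, 2), torch.randn(1, 2)], dt=0.1)
```

Summing the two variances into one noise scale is merely the simplest
combination for a single noise source; the point of splitting the networks in
the paper is to account for the two uncertainty sources individually.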
Related papers
- Trajectory Flow Matching with Applications to Clinical Time Series Modeling [77.58277281319253]
Trajectory Flow Matching (TFM) trains a Neural SDE in a simulation-free manner, bypassing backpropagation through the dynamics.
We demonstrate improved performance on three clinical time series datasets in terms of absolute performance and uncertainty prediction.
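For context on the simulation-free idea, below is a textbook conditional flow
matching objective in PyTorch: the vector field is fitted by plain regression
onto interpolation velocities, so no solver appears in the training loop. This
is a generic sketch, not TFM's clinical-time-series variant, and all names are
placeholders:

```python
import torch
import torch.nn as nn

# Generic flow matching loss: regress a vector field onto straight-line
# interpolant velocities; no ODE/SDE solver appears in the training loop.
def flow_matching_loss(v_net: nn.Module, x0: torch.Tensor, x1: torch.Tensor):
    t = torch.rand(x0.shape[0], 1)       # random time in [0, 1]
    x_t = (1 - t) * x0 + t * x1          # linear interpolant between endpoints
    target = x1 - x0                     # its constant velocity
    return ((v_net(torch.cat([x_t, t], -1)) - target) ** 2).mean()

v_net = nn.Sequential(nn.Linear(3, 32), nn.Tanh(), nn.Linear(32, 2))
loss = flow_matching_loss(v_net, torch.randn(8, 2), torch.randn(8, 2))
loss.backward()  # gradients flow without backpropagating through dynamics
```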
arXiv Detail & Related papers (2024-10-28T15:54:50Z)
- Towards Flexible Time-to-event Modeling: Optimizing Neural Networks via Rank Regression [17.684526928033065]
We introduce the Deep AFT Rank-regression model for Time-to-event prediction (DART)
This model uses an objective function based on Gehan's rank statistic, which is efficient and reliable for representation learning.
The proposed method is a semiparametric approach to AFT modeling that does not impose distributional assumptions on the survival time.
arXiv Detail & Related papers (2023-07-16T13:58:28Z)
- Neural Differential Recurrent Neural Network with Adaptive Time Steps [11.999568208578799]
We propose an RNN-based model, called RNN-ODE-Adap, that uses a neural ODE to represent the time development of the hidden states.
We adaptively select time steps based on the steepness of changes in the data over time, so as to train the model more efficiently on "spike-like" time series.
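A heavily simplified version of steepness-based step selection might look as
follows; this is an illustrative heuristic with invented names and thresholds,
not the algorithm from the paper:

```python
import numpy as np

def adaptive_steps(t: np.ndarray, x: np.ndarray, tol: float) -> np.ndarray:
    """Return indices to keep: dense where the series is steep, sparse where flat."""
    keep = [0]
    for i in range(1, len(x)):
        # Keep a point once the series has moved more than `tol` since the last kept one.
        if abs(x[i] - x[keep[-1]]) >= tol:
            keep.append(i)
    if keep[-1] != len(x) - 1:
        keep.append(len(x) - 1)  # always keep the endpoint
    return np.array(keep)

t = np.linspace(0, 10, 500)
x = np.exp(-((t - 5) ** 2) * 20)      # a "spike-like" signal
idx = adaptive_steps(t, x, tol=0.05)  # selected steps cluster around the spike
```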
arXiv Detail & Related papers (2023-06-02T16:46:47Z)
- Continuous time recurrent neural networks: overview and application to forecasting blood glucose in the intensive care unit [56.801856519460465]
Continuous time autoregressive recurrent neural networks (CTRNNs) are deep learning models that account for irregular observations.
We demonstrate the application of these models to probabilistic forecasting of blood glucose in a critical care setting.
arXiv Detail & Related papers (2023-04-14T09:39:06Z)
- EgPDE-Net: Building Continuous Neural Networks for Time Series Prediction with Exogenous Variables [22.145726318053526]
Inter-series correlation and time dependence among variables are rarely considered in existing continuous-time methods.
We propose a continuous-time model for arbitrary-step prediction to learn an unknown PDE system.
arXiv Detail & Related papers (2022-08-03T08:34:31Z)
- Fractional SDE-Net: Generation of Time Series Data with Long-term Memory [10.267057557137665]
We propose the fSDE-Net: a neural fractional Stochastic Differential Equation Network.
We derive the solver of fSDE-Net and theoretically analyze the existence and uniqueness of the solution.
Our experiments demonstrate that the fSDE-Net model can replicate distributional properties well.
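The long-memory ingredient can be illustrated by swapping i.i.d. Brownian
increments for fractional Gaussian noise. The Cholesky-based sampler below is
a standard textbook construction, not the solver derived in the paper:

```python
import numpy as np

def fgn(n: int, hurst: float, rng: np.random.Generator) -> np.ndarray:
    """Sample n fractional Gaussian noise increments (unit steps, given Hurst index)."""
    k = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :]).astype(float)
    # Autocovariance of fGn; hurst > 0.5 gives positively correlated, long-memory noise.
    cov = 0.5 * ((k + 1) ** (2 * hurst)
                 + np.abs(k - 1) ** (2 * hurst)
                 - 2 * k ** (2 * hurst))
    return np.linalg.cholesky(cov) @ rng.standard_normal(n)

rng = np.random.default_rng(0)
dW = fgn(200, hurst=0.8, rng=rng)  # long-memory driving noise for an SDE-type model
path = np.cumsum(dW)               # a fractional-Brownian-motion-like path
```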
arXiv Detail & Related papers (2022-01-16T05:37:02Z)
- Closed-form Continuous-Depth Models [99.40335716948101]
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster.
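The speed argument is easiest to see on a toy linear ODE, where a closed form
replaces many solver steps with a single expression. This illustrates the
motivation only; it is not the CfC architecture itself:

```python
import numpy as np

# dx/dt = -a * (x - b): a numerical solver needs many small steps,
# while the exact solution evaluates at any t in one expression.
a, b, x0, t = 2.0, 1.0, 0.0, 3.0

x_exact = b + (x0 - b) * np.exp(-a * t)  # closed form: one evaluation

x_euler, n_steps = x0, 300               # Euler solver: 300 evaluations
for _ in range(n_steps):
    x_euler += (-a * (x_euler - b)) * (t / n_steps)

print(abs(x_exact - x_euler))            # small error, at 300x the cost
```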
arXiv Detail & Related papers (2021-06-25T22:08:51Z)
- ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked Models [56.21470608621633]
We propose a time estimation framework to decouple the architectural search from the target hardware.
The proposed methodology extracts a set of models from micro-kernel and multi-layer benchmarks and generates a stacked model for mapping and network execution time estimation.
We compare the estimation accuracy and fidelity of the generated mixed models, of statistical models combined with the roofline model, and of a refined roofline model.
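In spirit (with invented toy numbers and names, not ANNETTE's actual models),
a stacked estimator could fit a per-layer latency model from benchmark
measurements and sum the per-layer estimates for a whole network:

```python
import numpy as np

# Fit per-layer latency ~ alpha * MACs + beta * bytes_moved + offset from
# (toy) benchmark data, then estimate a whole network by summing its layers.
bench_features = np.array([[1e6, 4e5], [5e6, 1e6], [2e7, 3e6]])  # [MACs, bytes]
bench_latency_ms = np.array([0.8, 2.9, 11.5])                    # invented numbers

X = np.hstack([bench_features, np.ones((3, 1))])  # add an offset column
coef, *_ = np.linalg.lstsq(X, bench_latency_ms, rcond=None)

def estimate_network(layers: np.ndarray) -> float:
    """Sum the fitted per-layer model over all layers of a network."""
    X = np.hstack([layers, np.ones((len(layers), 1))])
    return float((X @ coef).sum())

net_layers = np.array([[1e6, 4e5], [5e6, 1e6]])   # two layers of a toy network
print(estimate_network(net_layers))
```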
arXiv Detail & Related papers (2021-05-07T11:39:05Z)
- Neural ODE Processes [64.10282200111983]
We introduce Neural ODE Processes (NDPs), a new class of processes determined by a distribution over Neural ODEs.
We show that our model can successfully capture the dynamics of low-dimensional systems from just a few data-points.
arXiv Detail & Related papers (2021-03-23T09:32:06Z)
- Anomaly Detection of Time Series with Smoothness-Inducing Sequential Variational Auto-Encoder [59.69303945834122]
We present a Smoothness-Inducing Sequential Variational Auto-Encoder (SISVAE) model for robust estimation and anomaly detection of time series.
Our model parameterizes mean and variance for each time-stamp with flexible neural networks.
We show the effectiveness of our model on both synthetic datasets and public real-world benchmarks.
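One simple way to realize a smoothness-inducing objective, sketched below as a
guess at the flavour rather than the paper's exact formulation, is a Gaussian
negative log-likelihood per time-stamp plus a penalty on jumps between
neighbouring means:

```python
import torch

def smooth_gaussian_loss(mu: torch.Tensor, log_var: torch.Tensor,
                         x: torch.Tensor, lam: float) -> torch.Tensor:
    """Gaussian NLL per time-stamp plus a smoothness penalty on adjacent means."""
    nll = 0.5 * (log_var + (x - mu) ** 2 / log_var.exp()).mean()
    smooth = ((mu[:, 1:] - mu[:, :-1]) ** 2).mean()  # discourage abrupt jumps
    return nll + lam * smooth

mu, log_var = torch.zeros(4, 50), torch.zeros(4, 50)  # (batch, time) toy outputs
loss = smooth_gaussian_loss(mu, log_var, torch.randn(4, 50), lam=0.1)
```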
arXiv Detail & Related papers (2021-02-02T06:15:15Z)