Neural Rough Differential Equations for Long Time Series
- URL: http://arxiv.org/abs/2009.08295v4
- Date: Mon, 21 Jun 2021 12:04:06 GMT
- Title: Neural Rough Differential Equations for Long Time Series
- Authors: James Morrill and Cristopher Salvi and Patrick Kidger and James Foster
and Terry Lyons
- Abstract summary: We use rough path theory to extend the formulation of Neural CDEs.
Instead of directly embedding into path space, we represent the input signal over small time intervals through its log-signature.
This is the approach used for solving rough differential equations (RDEs).
- Score: 19.004296236396947
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural controlled differential equations (CDEs) are the continuous-time
analogue of recurrent neural networks, as Neural ODEs are to residual networks,
and offer a memory-efficient continuous-time way to model functions of
potentially irregular time series. Existing methods for computing the forward
pass of a Neural CDE involve embedding the incoming time series into path
space, often via interpolation, and using evaluations of this path to drive the
hidden state. Here, we use rough path theory to extend this formulation.
Instead of directly embedding into path space, we represent the input signal
over small time intervals through its \textit{log-signature}, which is a
collection of statistics describing how the signal drives a CDE. This is the
approach used for
solving \textit{rough differential equations} (RDEs), and correspondingly we
describe our main contribution as the introduction of Neural RDEs. This
extension has a purpose: by generalising the Neural CDE approach to a broader
class of driving signals, we demonstrate particular advantages for tackling
long time series. In this regime, we demonstrate efficacy on problems of length
up to 17k observations and observe significant training speed-ups, improvements
in model performance, and reduced memory requirements compared to existing
approaches.
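To make the log-signature idea above concrete, here is a minimal, self-contained sketch, not the authors' released implementation: the input path is summarised over short intervals by a depth-2 log-signature (total increments plus Lévy areas), and those summaries drive a learned hidden-state update via a single explicit Euler step of the log-ODE method. All class and function names, interval lengths, and dimensions below are illustrative assumptions.

```python
# Hypothetical sketch of a Neural RDE style update driven by depth-2 log-signatures.
import torch
import torch.nn as nn


def depth2_logsignature(x):
    """Depth-2 log-signature of a piecewise-linear path x of shape (steps, d):
    the total increment (d terms) plus the Levy areas (d*(d-1)/2 terms)."""
    d = x.shape[1]
    increments = x[1:] - x[:-1]                      # segment increments
    total = increments.sum(dim=0)                    # level-1 term
    sig2 = torch.zeros(d, d)                         # level-2 signature
    cum = torch.zeros(d)
    for delta in increments:                         # Chen's identity, segment by segment
        sig2 += torch.outer(cum, delta) + 0.5 * torch.outer(delta, delta)
        cum += delta
    area = 0.5 * (sig2 - sig2.T)                     # antisymmetric (Levy area) part
    iu = torch.triu_indices(d, d, offset=1)
    return torch.cat([total, area[iu[0], iu[1]]])


class NeuralRDECellSketch(nn.Module):
    """One crudely Euler-discretised hidden-state update driven by a log-signature."""

    def __init__(self, hidden_dim, logsig_dim):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.logsig_dim = logsig_dim
        self.f = nn.Sequential(
            nn.Linear(hidden_dim, 64), nn.Tanh(),
            nn.Linear(64, hidden_dim * logsig_dim),
        )

    def forward(self, z, logsig):
        vf = self.f(z).view(self.hidden_dim, self.logsig_dim)
        return z + vf @ logsig                       # z_{k+1} = z_k + f(z_k) * logsig_k


# Usage: a 3-channel series of 1000 points, split into intervals of 50 points.
x = torch.randn(1000, 3).cumsum(dim=0)
d = x.shape[1]
logsig_dim = d + d * (d - 1) // 2
cell = NeuralRDECellSketch(hidden_dim=16, logsig_dim=logsig_dim)
z = torch.zeros(16)
for start in range(0, x.shape[0] - 1, 50):
    chunk = x[start:start + 51]                      # share endpoints between intervals
    z = cell(z, depth2_logsignature(chunk))
```

Because the hidden state only needs the log-signature of each interval rather than every observation, long series can be processed with far fewer solver steps, which is the source of the speed and memory gains reported above.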
Related papers
- Log Neural Controlled Differential Equations: The Lie Brackets Make a Difference [22.224853384201595]
Neural CDEs (NCDEs) treat time series data as observations from a control path.
We introduce Log-NCDEs, a novel, effective, and efficient method for training NCDEs.
arXiv Detail & Related papers (2024-02-28T17:40:05Z)
- Faster Training of Neural ODEs Using Gauß-Legendre Quadrature [68.9206193762751]
We propose an alternative way to speed up the training of neural ODEs.
We use Gauss-Legendre quadrature to solve integrals faster than ODE-based methods.
We also extend the idea to training SDEs using the Wong-Zakai theorem, by training a corresponding ODE and transferring the parameters.
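For reference, a minimal numpy sketch of Gauss-Legendre quadrature itself, the generic numerical rule named above; it does not reproduce that paper's training procedure, and the integrand and node count are arbitrary illustrative choices.

```python
# Gauss-Legendre quadrature: nodes/weights on [-1, 1] rescaled to a general interval.
import numpy as np

def gauss_legendre(f, a, b, n=16):
    """Approximate the integral of f over [a, b] with an n-point Gauss-Legendre rule."""
    nodes, weights = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
    x = 0.5 * (b - a) * nodes + 0.5 * (b + a)              # rescale nodes to [a, b]
    return 0.5 * (b - a) * np.sum(weights * f(x))

# Example: integrate exp(-t^2) over [0, 2]; a 16-point rule is essentially exact here.
print(gauss_legendre(lambda t: np.exp(-t**2), 0.0, 2.0))
```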
arXiv Detail & Related papers (2023-08-21T11:31:15Z)
- Neural Delay Differential Equations: System Reconstruction and Image Classification [14.59919398960571]
We propose a new class of continuous-depth neural networks with delay, named Neural Delay Differential Equations (NDDEs).
Compared to NODEs, NDDEs have a stronger capacity of nonlinear representations.
We achieve lower loss and higher accuracy not only on synthetically produced data but also on CIFAR10, a well-known image dataset.
arXiv Detail & Related papers (2023-04-11T16:09:28Z)
- Learning the Delay Using Neural Delay Differential Equations [0.5505013339790825]
We develop a continuous-time neural network approach based on Delay Differential Equations (DDEs).
Our model uses the adjoint sensitivity method to learn the model parameters and delay directly from data.
We conclude our discussion with potential future directions and applications.
arXiv Detail & Related papers (2023-04-03T19:50:36Z)
- Neural Basis Functions for Accelerating Solutions to High Mach Euler Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced order approximation to the PDE.
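As a hedged sketch of the standard construction named above, the snippet below extracts a reduced-order POD basis from a snapshot matrix via the SVD; the snapshot data and energy threshold are illustrative assumptions, and the paper's regression of neural networks onto this basis is not reproduced.

```python
# Extracting a reduced-order POD basis from solution snapshots via the SVD.
import numpy as np

snapshots = np.random.rand(4096, 200)          # columns = PDE solution snapshots (placeholder data)
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)

# Keep the leading modes capturing 99.9% of the energy (random data compresses poorly;
# real PDE snapshots typically need far fewer modes).
r = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.999) + 1
pod_basis = U[:, :r]                           # reduced-order basis, shape (4096, r)
coeffs = pod_basis.T @ snapshots               # reduced coordinates of each snapshot
reconstruction = pod_basis @ coeffs            # rank-r approximation of the snapshots
```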
arXiv Detail & Related papers (2022-08-02T18:27:13Z)
- EXIT: Extrapolation and Interpolation-based Neural Controlled Differential Equations for Time-series Classification and Forecasting [19.37382379378985]
Neural controlled differential equations (NCDEs) are considered a breakthrough in deep learning.
In this work, we enhance NCDEs by redesigning their core part, i.e., generating a continuous path from a discrete time-series input.
Our NCDE design can use both the interpolated and the extrapolated information for downstream machine learning tasks.
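As a generic illustration of turning a discrete time series into a continuous path that can be queried between and beyond the observation times (not EXIT's exact construction), one can fit a spline that also extrapolates; the observation times and values below are made up.

```python
# Building a continuous control path from irregular discrete observations.
import numpy as np
from scipy.interpolate import CubicSpline

t_obs = np.array([0.0, 1.0, 2.5, 3.0, 4.2])                 # irregular observation times
x_obs = np.sin(t_obs) + 0.1 * np.random.randn(len(t_obs))   # noisy observations (illustrative)

path = CubicSpline(t_obs, x_obs)             # continuous path X(t); extrapolates beyond t_obs by default

t_query = np.linspace(0.0, 5.0, 200)         # includes t > 4.2, i.e. the extrapolated region
values = path(t_query)                       # X(t) at interpolated and extrapolated times
derivs = path(t_query, 1)                    # dX/dt, the quantity that drives dz = f(z) dX in an NCDE
```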
arXiv Detail & Related papers (2022-04-19T09:37:36Z)
- Time Series Forecasting with Ensembled Stochastic Differential Equations Driven by Lévy Noise [2.3076895420652965]
We use a collection of SDEs equipped with neural networks to predict the long-term trend of noisy time series.
Our contributions are twofold. First, we use the phase space reconstruction method to extract the intrinsic dimension of the time series data.
Second, we explore SDEs driven by $\alpha$-stable Lévy motion to model the time series data and solve the problem through neural network approximation.
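A small sketch of the two generic ingredients named in this entry, under illustrative parameter choices (delay, embedding dimension, stability index): a Takens-style delay (phase-space) embedding and increments of an $\alpha$-stable Lévy process. This is not the paper's model.

```python
# (1) time-delay (phase-space) embedding; (2) alpha-stable Levy increments.
import numpy as np
from scipy.stats import levy_stable

def delay_embed(series, dim, tau):
    """Takens-style delay embedding: rows are [x_t, x_{t+tau}, ..., x_{t+(dim-1)tau}]."""
    n = len(series) - (dim - 1) * tau
    return np.stack([series[i * tau : i * tau + n] for i in range(dim)], axis=1)

x = np.sin(np.linspace(0, 50, 2000)) + 0.05 * np.random.randn(2000)
embedded = delay_embed(x, dim=3, tau=8)              # reconstructed phase-space trajectory

# Increments of an alpha-stable Levy process (heavier-tailed than Brownian motion).
alpha, dt = 1.7, 0.01
increments = levy_stable.rvs(alpha, beta=0.0, scale=dt ** (1.0 / alpha), size=1000)
```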
arXiv Detail & Related papers (2021-11-25T16:49:01Z)
- Neural ODE Processes [64.10282200111983]
We introduce Neural ODE Processes (NDPs), a new class of processes determined by a distribution over Neural ODEs.
We show that our model can successfully capture the dynamics of low-dimensional systems from just a few data-points.
arXiv Detail & Related papers (2021-03-23T09:32:06Z)
- Short-Term Memory Optimization in Recurrent Neural Networks by Autoencoder-based Initialization [79.42778415729475]
We explore an alternative solution based on explicit memorization using linear autoencoders for sequences.
We show how such pretraining can better support solving hard classification tasks with long sequences.
We show that the proposed approach achieves a much lower reconstruction error for long sequences and a better gradient propagation during the finetuning phase.
arXiv Detail & Related papers (2020-11-05T14:57:16Z)
- Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
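A heavily simplified, hypothetical sketch in the spirit described above: a linear first-order system whose effective time constant is modulated by a learned, input-dependent gate, stepped with explicit Euler. It is not the paper's exact cell; all dimensions, the base time constant, and the step size are assumptions.

```python
# Sketch of a liquid-time-constant style recurrent cell.
import torch
import torch.nn as nn

class LTCStyleCellSketch(nn.Module):
    def __init__(self, input_dim, hidden_dim, tau=1.0):
        super().__init__()
        self.tau = tau
        self.gate = nn.Linear(input_dim + hidden_dim, hidden_dim)   # input-dependent gate f(x, I)
        self.A = nn.Parameter(torch.zeros(hidden_dim))              # learned target state

    def forward(self, x, inp, dt=0.05):
        f = torch.sigmoid(self.gate(torch.cat([inp, x], dim=-1)))
        dxdt = -(1.0 / self.tau + f) * x + f * self.A               # leak rate modulated by the gate
        return x + dt * dxdt                                        # explicit Euler step

cell = LTCStyleCellSketch(input_dim=4, hidden_dim=32)
x = torch.zeros(32)
for inp in torch.randn(100, 4):                                     # roll the cell over an input sequence
    x = cell(x, inp)
```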
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
- Time Dependence in Non-Autonomous Neural ODEs [74.78386661760662]
We propose a novel family of Neural ODEs with time-varying weights.
We outperform previous Neural ODE variants in both speed and representational capacity.
arXiv Detail & Related papers (2020-05-05T01:41:46Z)