Liquid Time-constant Networks
- URL: http://arxiv.org/abs/2006.04439v4
- Date: Mon, 14 Dec 2020 22:23:52 GMT
- Title: Liquid Time-constant Networks
- Authors: Ramin Hasani, Mathias Lechner, Alexander Amini, Daniela Rus, Radu Grosu
- Abstract summary: We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems modulated via nonlinear interlinked gates.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities,
we construct networks of linear first-order dynamical systems modulated via
nonlinear interlinked gates. The resulting models represent dynamical systems
with varying (i.e., liquid) time-constants coupled to their hidden state, with
outputs being computed by numerical differential equation solvers. These neural
networks exhibit stable and bounded behavior, yield superior expressivity
within the family of neural ordinary differential equations, and give rise to
improved performance on time-series prediction tasks. To demonstrate these
properties, we first take a theoretical approach to find bounds over their
dynamics and compute their expressive power by the trajectory length measure in
latent trajectory space. We then conduct a series of time-series prediction
experiments to manifest the approximation capability of Liquid Time-Constant
Networks (LTCs) compared to classical and modern RNNs. Code and data are
available at https://github.com/raminmh/liquid_time_constant_networks
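Concretely, the LTC state follows dx/dt = -[1/tau + f(x, I, theta)] x + f(x, I, theta) A, so the effective time constant 1/(1/tau + f(x, I, theta)) is "liquid": it depends on the input and hidden state through the gate f. Below is a minimal NumPy sketch of one fused semi-implicit Euler step for such a cell, together with the trajectory-length measure mentioned in the abstract; the sigmoid gate, parameter names (W, U, b, tau, A), and shapes are illustrative assumptions, not the reference implementation in the linked repository.
```python
import numpy as np

def ltc_fused_step(x, I, dt, tau, A, W, U, b):
    """One fused (semi-implicit Euler) step of a liquid time-constant cell.

    Integrates dx/dt = -(1/tau + f) * x + f * A with gate
    f = sigmoid(W x + U I + b); the state x is treated implicitly in
    the decay term. Names and shapes are illustrative assumptions.
    """
    f = 1.0 / (1.0 + np.exp(-(W @ x + U @ I + b)))  # nonlinear gate in (0, 1)
    return (x + dt * f * A) / (1.0 + dt * (1.0 / tau + f))

# Hypothetical usage: unroll a 16-unit cell on a sine input and compute
# the trajectory length (sum of consecutive latent-state distances),
# the expressivity measure referenced in the abstract.
rng = np.random.default_rng(0)
n, m, dt = 16, 1, 0.1
W = 0.1 * rng.normal(size=(n, n))
U = rng.normal(size=(n, m))
b = np.zeros(n)
tau = np.ones(n)
A = rng.normal(size=n)

x, states = np.zeros(n), []
for t in range(200):
    x = ltc_fused_step(x, np.array([np.sin(0.3 * t)]), dt, tau, A, W, U, b)
    states.append(x)
traj = np.stack(states)
traj_length = np.sum(np.linalg.norm(np.diff(traj, axis=0), axis=1))
```
Because x enters the decay term implicitly, the denominator exceeds one for any dt > 0, which is one way to see the stable, bounded behavior claimed in the abstract.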
Related papers
- How neural networks learn to classify chaotic time series
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN (large-kernel convolutional neural network) models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Brain-Inspired Spiking Neural Network for Online Unsupervised Time Series Prediction
We present a novel Continuous Learning-based Unsupervised Recurrent Spiking Neural Network Model (CLURSNN).
CLURSNN makes online predictions by reconstructing the underlying dynamical system using Random Delay Embedding; a sketch of the idea appears after this list.
We show that the proposed online time series prediction methodology outperforms state-of-the-art DNN models when predicting an evolving Lorenz63 dynamical system.
arXiv Detail & Related papers (2023-04-10T16:18:37Z)
- Semi-supervised Learning of Partial Differential Operators and Dynamical Flows
We present a novel method that combines a hyper-network solver with a Fourier Neural Operator architecture.
We test our method on various time evolution PDEs, including nonlinear fluid flows in one, two, and three spatial dimensions.
The results show that the new method improves learning accuracy at the supervised time point and can interpolate the solutions to any intermediate time.
arXiv Detail & Related papers (2022-07-28T19:59:14Z)
- Neural ODE Processes
We introduce Neural ODE Processes (NDPs), a new class of processes determined by a distribution over Neural ODEs.
We show that our model can successfully capture the dynamics of low-dimensional systems from just a few data-points.
arXiv Detail & Related papers (2021-03-23T09:32:06Z)
- Accelerating Simulation of Stiff Nonlinear Systems using Continuous-Time Echo State Networks
We present a data-driven method for generating surrogates of nonlinear ordinary differential equations with dynamics at widely separated timescales.
We empirically demonstrate near-constant time performance using our CTESNs on a physically motivated scalable model of a heating system.
arXiv Detail & Related papers (2020-10-07T17:40:06Z)
- Hierarchical Deep Learning of Multiscale Differential Equation Time-Steppers
We develop a hierarchy of deep neural network time-steppers to approximate the flow map of the dynamical system over a disparate range of time-scales.
The resulting model is purely data-driven and leverages features of the multiscale dynamics.
We benchmark our algorithm against state-of-the-art methods, such as LSTM, reservoir computing, and clockwork RNN.
arXiv Detail & Related papers (2020-08-22T07:16:53Z)
- Continuous-in-Depth Neural Networks
We first show that ResNets fail to be meaningful dynamical integrators in this richer sense.
We then demonstrate that neural network models can learn to represent continuous dynamical systems.
We introduce ContinuousNet as a continuous-in-depth generalization of ResNet architectures.
arXiv Detail & Related papers (2020-08-05T22:54:09Z)
- Learning Continuous-Time Dynamics by Stochastic Differential Networks
We propose a flexible continuous-time recurrent neural network named Variational Stochastic Differential Networks (VSDN).
VSDN embeds the complicated dynamics of sporadic time series using neural Stochastic Differential Equations (SDEs).
We show that VSDNs outperform state-of-the-art continuous-time deep learning models and achieve remarkable performance on prediction and interpolation tasks for sporadic time series.
arXiv Detail & Related papers (2020-06-11T01:40:34Z)
- Time Dependence in Non-Autonomous Neural ODEs
We propose a novel family of Neural ODEs with time-varying weights.
We outperform previous Neural ODE variants in both speed and representational capacity.
arXiv Detail & Related papers (2020-05-05T01:41:46Z)
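To make the delay-embedding idea in the CLURSNN summary above concrete: a Takens-style delay embedding lifts a scalar observable into vectors of lagged samples, from which the underlying attractor can be reconstructed. The sketch below draws the lags at random to mirror the "Random Delay Embedding" named there; the function name, lag distribution, and shapes are illustrative assumptions rather than that paper's implementation.
```python
import numpy as np

def random_delay_embed(series, dim, max_lag, seed=None):
    """Embed a 1-D series into R^dim using randomly drawn delays.

    Each output row is (s[t + l_1], ..., s[t + l_dim]) for sorted
    random lags l_i in [0, max_lag]. An illustrative stand-in for the
    Random Delay Embedding used by CLURSNN, not that paper's code.
    """
    rng = np.random.default_rng(seed)
    lags = np.sort(rng.choice(max_lag + 1, size=dim, replace=False))
    n = len(series) - lags[-1]  # number of fully observable windows
    points = np.stack([series[l : l + n] for l in lags], axis=1)
    return points, lags

# Hypothetical usage: reconstruct a 3-D state trajectory from a noisy sine.
t = np.linspace(0, 40, 2000)
s = np.sin(t) + 0.05 * np.random.default_rng(1).normal(size=t.size)
points, lags = random_delay_embed(s, dim=3, max_lag=50, seed=0)
```
Feeding such reconstructed states to a predictor is the generic recipe; how CLURSNN couples this reconstruction to an online spiking network is specific to that paper.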