Euler State Networks: Non-dissipative Reservoir Computing
- URL: http://arxiv.org/abs/2203.09382v3
- Date: Fri, 24 Mar 2023 15:18:21 GMT
- Title: Euler State Networks: Non-dissipative Reservoir Computing
- Authors: Claudio Gallicchio
- Abstract summary: We propose a novel Reservoir Computing (RC) model, called the Euler State Network (EuSN).
Our mathematical analysis shows that the resulting model is biased towards a unitary effective spectral radius and zero local Lyapunov exponents, intrinsically operating near the edge of stability.
Results on time-series classification benchmarks indicate that EuSN is able to match (or even exceed) the accuracy of trainable Recurrent Neural Networks.
- Score: 3.55810827129032
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Inspired by the numerical solution of ordinary differential equations, in
this paper we propose a novel Reservoir Computing (RC) model, called the Euler
State Network (EuSN). The presented approach makes use of forward Euler
discretization and antisymmetric recurrent matrices to design reservoir
dynamics that are both stable and non-dissipative by construction.
Our mathematical analysis shows that the resulting model is biased towards a
unitary effective spectral radius and zero local Lyapunov exponents,
intrinsically operating near the edge of stability. Experiments on long-term
memory tasks show the clear superiority of the proposed approach over standard
RC models in problems requiring effective propagation of input information over
multiple time-steps. Furthermore, results on time-series classification
benchmarks indicate that EuSN is able to match (or even exceed) the accuracy of
trainable Recurrent Neural Networks, while retaining the training efficiency of
the RC family, resulting in up to $\approx$ 490-fold savings in computation
time and $\approx$ 1750-fold savings in energy consumption.
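For concreteness, here is a minimal NumPy sketch of the reservoir update the abstract describes: a forward Euler step driven by an antisymmetric recurrent matrix. The update rule follows the paper's construction; the sizes, initialization ranges, and hyperparameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_in = 100, 1      # reservoir and input sizes (illustrative assumptions)
eps, gamma = 0.01, 0.001    # Euler step size and small diffusion coefficient

# Untrained random weights, as usual in Reservoir Computing
W = rng.uniform(-1.0, 1.0, (n_units, n_units))
W_in = rng.uniform(-1.0, 1.0, (n_units, n_in))
b = rng.uniform(-1.0, 1.0, n_units)

# (W - W^T) is antisymmetric, so its eigenvalues are purely imaginary and the
# underlying continuous-time dynamics are non-dissipative; subtracting gamma*I
# adds a small diffusion that stabilizes the forward Euler discretization.
A = (W - W.T) - gamma * np.eye(n_units)

def eusn_step(x, u):
    """One reservoir update: x <- x + eps * tanh(A x + W_in u + b)."""
    return x + eps * np.tanh(A @ x + W_in @ u + b)

x = np.zeros(n_units)
for u in rng.normal(size=(200, n_in)):  # toy input sequence
    x = eusn_step(x, u)
```

As in standard RC, only a linear readout on the collected states would be trained, which is where the reported efficiency gains come from.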
Related papers
- Deterministic Reservoir Computing for Chaotic Time Series Prediction [5.261277318790788]
We propose deterministic alternatives to the higher-dimensional mapping used therein: TCRC-LM and TCRC-CM.
To further enhance predictive capability in time-series forecasting, we propose using the Lobachevsky function as a non-linear activation function.
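The Lobachevsky function is the classical special function $\Lambda(\theta) = -\int_0^\theta \ln|2\sin t|\,dt$; as a rough illustration (not the paper's implementation), it can be evaluated by quadrature and applied element-wise as an activation:

```python
import numpy as np
from scipy.integrate import quad

def lobachevsky(theta):
    """Lambda(theta) = -int_0^theta ln|2 sin t| dt; the log singularity at 0 is integrable."""
    val, _ = quad(lambda t: -np.log(np.abs(2.0 * np.sin(t))), 0.0, theta)
    return val

# Element-wise activation over a state vector (np.vectorize is fine at sketch scale)
lobachevsky_act = np.vectorize(lobachevsky)
print(lobachevsky_act(np.linspace(-2.0, 2.0, 5)))  # odd, pi-periodic, peak at pi/6
```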
arXiv Detail & Related papers (2025-01-26T17:46:40Z)
- Online Variational Sequential Monte Carlo [49.97673761305336]
We build upon the variational sequential Monte Carlo (VSMC) method, which provides computationally efficient and accurate model parameter estimation and Bayesian latent-state inference.
Online VSMC can perform both parameter estimation and particle proposal adaptation efficiently and entirely on-the-fly.
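As background, sequential Monte Carlo approximates the latent-state posterior with weighted particles; below is a minimal bootstrap particle filter on a toy linear-Gaussian model (a generic building block only, not the paper's online VSMC algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)
T, N = 50, 1000                 # time steps and particle count (assumptions)
a, q, r = 0.9, 0.5, 1.0         # x_t = a*x_{t-1} + N(0,q);  y_t = x_t + N(0,r)

# Simulate a trajectory and observations from the model
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = a * x_true[t - 1] + rng.normal(0.0, np.sqrt(q))
y = x_true + rng.normal(0.0, np.sqrt(r), T)

particles = rng.normal(0.0, 1.0, N)
for t in range(T):
    particles = a * particles + rng.normal(0.0, np.sqrt(q), N)  # propagate
    logw = -0.5 * (y[t] - particles) ** 2 / r                   # weight by likelihood
    w = np.exp(logw - logw.max()); w /= w.sum()
    particles = rng.choice(particles, size=N, p=w)              # resample

print("filtered mean:", particles.mean(), "true state:", x_true[-1])
```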
arXiv Detail & Related papers (2023-12-19T21:45:38Z)
- Equation Discovery with Bayesian Spike-and-Slab Priors and Efficient Kernels [57.46832672991433]
We propose a novel equation discovery method based on Kernel learning and BAyesian Spike-and-Slab priors (KBASS).
We use kernel regression to estimate the target function, which is flexible, expressive, and more robust to data sparsity and noise.
We develop an expectation-propagation expectation-maximization algorithm for efficient posterior inference and function estimation.
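To make the kernel-regression ingredient concrete, here is a minimal kernel ridge regressor with an RBF kernel; the spike-and-slab prior and the EP-EM inference that distinguish KBASS are not reproduced in this sketch:

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=0.5):
    """Gaussian (RBF) kernel matrix for 1-D inputs."""
    return np.exp(-0.5 * (X1[:, None] - X2[None, :]) ** 2 / lengthscale**2)

rng = np.random.default_rng(2)
X = np.sort(rng.uniform(-3.0, 3.0, 30))        # sparse, noisy training data
y = np.sin(X) + 0.1 * rng.normal(size=30)

lam = 1e-2                                     # ridge regularizer
alpha = np.linalg.solve(rbf_kernel(X, X) + lam * np.eye(len(X)), y)

X_test = np.linspace(-3.0, 3.0, 200)
f_hat = rbf_kernel(X_test, X) @ alpha          # smooth estimate of the target function
```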
arXiv Detail & Related papers (2023-10-09T03:55:09Z)
- An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
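The fixed-point calculation at the heart of a Deep Equilibrium model solves z* = f(z*, x) by iterating the layer to convergence; here is a toy sketch with a hand-made contractive map standing in for the learned regularizer network:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8
W = rng.normal(size=(n, n))
W *= 0.5 / np.linalg.norm(W, 2)   # spectral norm 0.5 => f is a contraction in z

def f(z, x):
    """Toy equilibrium layer; in a DEQ this is the learned iteration map."""
    return np.tanh(W @ z + x)

def deq_forward(x, tol=1e-10, max_iter=1000):
    """Iterate z <- f(z, x) until the fixed point z* = f(z*, x) is reached."""
    z = np.zeros_like(x)
    for _ in range(max_iter):
        z_new = f(z, x)
        if np.linalg.norm(z_new - z) < tol:
            return z_new
        z = z_new
    return z

x = rng.normal(size=n)
z_star = deq_forward(x)
print("equilibrium residual:", np.linalg.norm(z_star - f(z_star, x)))  # ~0
```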
arXiv Detail & Related papers (2023-06-10T08:25:16Z)
- Gated Recurrent Neural Networks with Weighted Time-Delay Feedback [59.125047512495456]
We introduce a novel gated recurrent unit (GRU) with a weighted time-delay feedback mechanism.
We show that $\tau$-GRU can converge faster and generalize better than state-of-the-art recurrent units and gated recurrent architectures.
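The paper's exact gating equations are not reproduced in this summary; the following is only a simplified sketch of the general idea of feeding a weighted, time-delayed hidden state back into a recurrent update (all sizes and the weight alpha are assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
n_h, n_in, tau = 16, 4, 3                 # hidden size, input size, delay (assumptions)
W = rng.normal(scale=0.3, size=(n_h, n_h))
U = rng.normal(scale=0.3, size=(n_h, n_in))
alpha = 0.5                               # weight on the delayed state (assumption)

history = [np.zeros(n_h)] * (tau + 1)     # buffer of past hidden states
for x_t in rng.normal(size=(100, n_in)):
    h_prev, h_delayed = history[-1], history[-1 - tau]
    # The update sees both h_{t-1} and a weighted copy of h_{t-tau}
    h_t = np.tanh(W @ (h_prev + alpha * h_delayed) + U @ x_t)
    history.append(h_t)
```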
arXiv Detail & Related papers (2022-12-01T02:26:34Z)
- Composite FORCE learning of chaotic echo state networks for time-series prediction [7.650966670809372]
This paper proposes a composite FORCE learning method to train ESNs whose initial activity is spontaneously chaotic.
Numerical results show that it significantly improves learning and prediction performance compared with existing methods.
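FORCE learning itself is the classic recursive least-squares (RLS) rule of Sussillo and Abbott, which updates the readout weights while the network runs; here is a minimal single-output version of that ingredient (the composite variant and the chaotic ESN details are left to the paper):

```python
import numpy as np

rng = np.random.default_rng(5)
N = 200                        # reservoir size (assumption)
w = np.zeros(N)                # linear readout, trained online
P = np.eye(N)                  # running inverse-correlation estimate (alpha = 1)

def force_step(r, f_target):
    """One RLS/FORCE update of the readout for reservoir state r and target f(t)."""
    global w, P
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)    # gain vector
    e = w @ r - f_target       # error before the update
    w = w - e * k
    P = P - np.outer(k, Pr)
    return w @ r

# This driver only exercises the update; in real FORCE training, r would be the
# state of the chaotic reservoir running with its own output fed back.
for t in range(500):
    r = np.tanh(rng.normal(size=N))
    force_step(r, np.sin(0.05 * t))
```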
arXiv Detail & Related papers (2022-07-06T03:44:09Z)
- Unsupervised Reservoir Computing for Solving Ordinary Differential Equations [1.6371837018687636]
We develop unsupervised reservoir computing (RC), an echo-state recurrent neural network capable of discovering approximate solutions that satisfy ordinary differential equations (ODEs).
We use Bayesian optimization to efficiently discover optimal sets in a high-dimensional hyperparameter space and numerically show that one set is robust and can be used to solve an ODE for different initial conditions and time ranges.
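A toy version of the idea, under simplifying assumptions: drive a random reservoir with time, write the candidate solution as a linear readout of the states, and fit the readout by least squares on the ODE residual plus the initial condition (here for y' = -y, y(0) = 1; the paper's physics-informed setup and its Bayesian optimization are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(6)
N, T, dt = 200, 400, 0.01                  # reservoir size, steps, time step (assumptions)
W = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))
w_in = rng.normal(size=N)

ts = dt * np.arange(1, T + 1)
H = np.zeros((T, N)); h = np.zeros(N)
for k, t in enumerate(ts):                 # drive the reservoir with time itself
    h = np.tanh(W @ h + w_in * t)
    H[k] = h
dH = np.gradient(H, dt, axis=0)            # finite-difference state derivatives

# y(t) = H(t) @ w.  Residual rows enforce y' + y = 0; one heavily weighted
# row pins the solution at the first collocation point.
A = np.vstack([dH + H, 100.0 * H[0]])
b = np.concatenate([np.zeros(T), [100.0 * np.exp(-ts[0])]])
w, *_ = np.linalg.lstsq(A, b, rcond=None)

y_hat = H @ w
print("max |y_hat - exp(-t)|:", np.abs(y_hat - np.exp(-ts)).max())
```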
arXiv Detail & Related papers (2021-08-25T18:16:42Z)
- Hierarchical Deep Learning of Multiscale Differential Equation Time-Steppers [5.6385744392820465]
We develop a hierarchy of deep neural network time-steppers to approximate the flow map of the dynamical system over a disparate range of time-scales.
The resulting model is purely data-driven and leverages features of the multiscale dynamics.
We benchmark our algorithm against state-of-the-art methods, such as LSTM, reservoir computing, and clockwork RNN.
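The underlying idea is that the flow map over a long horizon factorizes into a composition of short-horizon maps at different time scales; a toy check with exact flow maps of dx/dt = -x standing in for the trained neural steppers:

```python
import numpy as np

def make_stepper(dt):
    """Exact flow map of dx/dt = -x over dt; in the paper each stepper is a trained network."""
    return lambda x: np.exp(-dt) * x

F_coarse, F_fine = make_stepper(0.5), make_stepper(0.01)

# Reach t = 1.07 hierarchically: two coarse steps, then seven fine steps
x = 1.0
for _ in range(2):
    x = F_coarse(x)
for _ in range(7):
    x = F_fine(x)
print(x, "vs exact", np.exp(-1.07))   # the composition matches the exact flow
```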
arXiv Detail & Related papers (2020-08-22T07:16:53Z)
- Lipschitz Recurrent Neural Networks [100.72827570987992]
We show that our Lipschitz recurrent unit is more robust to input and parameter perturbations than other continuous-time RNNs.
Our experiments demonstrate that the Lipschitz RNN can outperform existing recurrent units on a range of benchmark tasks.
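A sketch in the spirit of the paper's construction: the recurrent matrices are built from a convex combination of symmetric and skew-symmetric parts with a negative shift, which is what yields the well-behaved (Lipschitz) dynamics; the constants and the plain forward-Euler discretization below are assumptions, not the paper's exact recipe.

```python
import numpy as np

rng = np.random.default_rng(7)
n, m = 32, 4
beta, gamma, dt = 0.75, 0.8, 0.1   # illustrative hyperparameters (assumptions)

def parameterize(M):
    """Convex symmetric/skew-symmetric split with a -gamma*I shift."""
    sym, skew = 0.5 * (M + M.T), 0.5 * (M - M.T)
    return (1.0 - beta) * sym + beta * skew - gamma * np.eye(n)

A = parameterize(rng.normal(size=(n, n)))
W = parameterize(rng.normal(size=(n, n)))
U = rng.normal(scale=0.3, size=(n, m))

def step(h, x):
    """Forward-Euler step of the continuous-time unit h' = A h + tanh(W h + U x)."""
    return h + dt * (A @ h + np.tanh(W @ h + U @ x))

h = np.zeros(n)
for x in rng.normal(size=(100, m)):
    h = step(h, x)
```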
arXiv Detail & Related papers (2020-06-22T08:44:52Z)
- Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
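A rough sketch of a liquid time-constant style update: a bounded nonlinearity modulates each unit's effective time constant, and a semi-implicit ("fused") step keeps the state stable and bounded; the sigmoid gate, sizes, and constants here are assumptions, not the paper's exact cell.

```python
import numpy as np

rng = np.random.default_rng(8)
n, m, dt = 16, 3, 0.05
tau = np.ones(n)                          # base per-unit time constants (assumption)
A_bias = rng.normal(size=n)               # target values the state is pulled toward
Wx = rng.normal(scale=0.3, size=(n, n))
Wu = rng.normal(scale=0.3, size=(n, m))

def f(x, u):
    """Bounded gate that modulates the effective time constant of each unit."""
    return 1.0 / (1.0 + np.exp(-(Wx @ x + Wu @ u)))

def fused_step(x, u):
    """Semi-implicit update of x' = -(1/tau + f) x + f * A_bias."""
    fx = f(x, u)
    return (x + dt * fx * A_bias) / (1.0 + dt * (1.0 / tau + fx))

x = np.zeros(n)
for u in rng.normal(size=(200, m)):
    x = fused_step(x, u)
```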
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.