Discrete-time signatures and randomness in reservoir computing
- URL: http://arxiv.org/abs/2010.14615v1
- Date: Thu, 17 Sep 2020 10:55:59 GMT
- Title: Discrete-time signatures and randomness in reservoir computing
- Authors: Christa Cuchiero, Lukas Gonon, Lyudmila Grigoryeva, Juan-Pablo Ortega,
and Josef Teichmann
- Abstract summary: Reservoir computing refers to the possibility of approximating input/output systems with randomly chosen recurrent neural systems and a trained linear readout layer.
Light is shed on this phenomenon by constructing what is called strongly universal reservoir systems.
Explicit expressions for the probability distributions needed in the generation of the projected reservoir system are stated and bounds for the committed approximation error are provided.
- Score: 8.579665234755478
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A new explanation of the geometric nature of the reservoir computing phenomenon
is presented. Reservoir computing is understood in the literature as the
possibility of approximating input/output systems with randomly chosen
recurrent neural systems and a trained linear readout layer. Light is shed on
this phenomenon by constructing what is called strongly universal reservoir
systems as random projections of a family of state-space systems that generate
Volterra series expansions. This procedure yields a state-affine reservoir
system with randomly generated coefficients in a dimension that is
logarithmically reduced with respect to the original system. This reservoir
system is able to approximate any element in the class of fading memory filters
just by training a different linear readout for each filter. Explicit
expressions for the probability distributions needed in the generation of the
projected reservoir system are stated and bounds for the committed
approximation error are provided.
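
To make the construction concrete, below is a minimal numpy sketch of a state-affine reservoir with randomly generated coefficients and a ridge-trained linear readout. It illustrates the general idea only: the degree-one input dependence, the contraction scaling, and the toy target are assumptions of this sketch, not the paper's explicit distributions or error bounds.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 50, 2000  # reservoir dimension, length of the input series

# Random state-affine reservoir x_{t+1} = (A0 + u_t*A1) x_t + (b0 + u_t*b1);
# scaling both matrices keeps ||A0 + u*A1|| < 1 for |u| <= 1 (fading memory).
A0 = rng.normal(size=(n, n))
A1 = rng.normal(size=(n, n))
A0 *= 0.4 / np.linalg.norm(A0, 2)
A1 *= 0.4 / np.linalg.norm(A1, 2)
b0, b1 = rng.normal(size=n), rng.normal(size=n)

u = rng.uniform(-1, 1, size=T)   # input signal
y = u * np.roll(u, 1)            # toy fading-memory target: u_t * u_{t-1}

# Run the fixed random reservoir; nothing in this loop is ever trained.
X = np.zeros((T, n))
x = np.zeros(n)
for t in range(T):
    x = (A0 + u[t] * A1) @ x + (b0 + u[t] * b1)
    X[t] = x

# The only trained component: a ridge-regression linear readout.
lam = 1e-6
w = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)
print("train MSE:", np.mean((X @ w - y) ** 2))
```

Approximating a different fading memory filter amounts to re-solving only the final linear system for a new target, with the random reservoir left untouched.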
Related papers
- On the dimension of pullback attractors in recurrent neural networks [0.0]
Recently, it has been conjectured that reservoir computers, a particular class of RNNs, trained on observations of a dynamical system can be interpreted as embeddings.
In this work, we use a nonautonomous dynamical systems approach to establish an upper bound for the fractal dimension of the subset of reservoir state space approximated during the training and prediction phases.
arXiv Detail & Related papers (2025-01-20T09:38:30Z)
- Stochastic Reservoir Computers [0.0]
In reservoir computing, the number of distinct states of the entire reservoir computer can potentially scale exponentially with the size of the reservoir hardware.
While shot noise is a limiting factor in the performance of reservoir computing, we show significantly improved performance compared to a reservoir computer with similar hardware in cases where the effects of noise are small.
arXiv Detail & Related papers (2024-05-20T21:26:00Z)
- Universality of reservoir systems with recurrent neural networks [2.812750563066397]
A reservoir system approximates a set of functions just by adjusting its linear readout while the reservoir is fixed.
We will show what we call uniform strong universality of a family of RNN reservoir systems for a certain class of functions to be approximated.
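
The "fixed reservoir, adjustable readout" notion can be illustrated with a hedged echo-state-style sketch (not the paper's RNN construction): a single random recurrent network serves several target functionals, each with its own ridge readout.

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 100, 3000

# One fixed random recurrent network; its weights are never trained.
W = rng.normal(size=(n, n))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()   # echo-state-style scaling
w_in = rng.normal(size=n)

u = rng.uniform(-1, 1, size=T)
X = np.zeros((T, n))
x = np.zeros(n)
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])
    X[t] = x

def readout(target, lam=1e-6):
    """Fit only a linear map from reservoir states to the target."""
    return np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ target)

# Two different functionals of the input, two readouts, one reservoir.
for y in (np.roll(u, 2), u * np.roll(u, 1)):
    w = readout(y)
    print("MSE:", np.mean((X @ w - y) ** 2))
```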
arXiv Detail & Related papers (2024-03-04T09:59:11Z)
- Reservoir computing with logistic map [0.0]
We demonstrate here a method for performing temporal and nontemporal tasks by constructing virtual nodes that constitute a reservoir in reservoir computing.
We predict three nonlinear systems, namely Lorenz, Rossler, and Hindmarsh-Rose, for temporal tasks and a seventh-order polynomial for nontemporal tasks with great accuracy.
Remarkably, the logistic map performs well and predicts close to the actual or target values.
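
One plausible reading of the virtual-node construction is sketched below; the input-injection rule, node count, and target are illustrative assumptions of ours, not the authors' exact scheme. Successive iterates of a chaotic logistic map, driven by the input, serve as the reservoir's virtual nodes, with a trained linear readout on top.

```python
import numpy as np

rng = np.random.default_rng(2)
r, n_nodes, T = 3.91, 20, 2000   # chaotic logistic-map parameter, virtual nodes

u = rng.uniform(0, 1, size=T)
X = np.zeros((T, n_nodes))
for t in range(T):
    x = 0.3 * u[t] + 0.5          # inject the input into the map's state
    for j in range(n_nodes):      # successive iterates act as virtual nodes
        x = r * x * (1.0 - x)
        X[t, j] = x

# Ridge-regression linear readout on the virtual-node states.
y = np.sin(2.0 * u)               # simple nontemporal target
lam = 1e-6
w = np.linalg.solve(X.T @ X + lam * np.eye(n_nodes), X.T @ y)
print("MSE:", np.mean((X @ w - y) ** 2))
```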
arXiv Detail & Related papers (2024-01-17T09:22:15Z)
- Iterative Sketching for Secure Coded Regression [66.53950020718021]
We propose methods for speeding up distributed linear regression.
Specifically, we randomly rotate the basis of the system of equations and then subsample blocks, to simultaneously secure the information and reduce the dimension of the regression problem.
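
A toy version of this rotate-then-subsample idea, assuming a dense QR-based rotation and uniform block sampling; the paper's method uses structured sketches with coding-theoretic security guarantees that this sketch does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(3)
m, d, block = 1024, 20, 64          # rows, features, block size

A = rng.normal(size=(m, d))
x_true = rng.normal(size=d)
b = A @ x_true + 0.01 * rng.normal(size=m)

# Randomly rotate the basis: an orthonormal matrix mixes the rows, so no
# retained block reveals individual equations (the security aspect).
Q, _ = np.linalg.qr(rng.normal(size=(m, m)))
A_rot, b_rot = Q @ A, Q @ b

# Subsample blocks of the rotated system to reduce the problem dimension.
n_blocks = m // block
keep = rng.choice(n_blocks, size=n_blocks // 4, replace=False)
rows = np.concatenate([np.arange(k * block, (k + 1) * block) for k in keep])

x_hat, *_ = np.linalg.lstsq(A_rot[rows], b_rot[rows], rcond=None)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```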
arXiv Detail & Related papers (2023-08-08T11:10:42Z)
- Machine learning in and out of equilibrium [58.88325379746631]
Our study uses a Fokker-Planck approach, adapted from statistical physics, to explore these parallels.
We focus in particular on the stationary state of the system in the long-time limit, which in conventional SGD is out of equilibrium.
We propose a new variation of stochastic gradient Langevin dynamics (SGLD) that harnesses without-replacement minibatching.
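
A minimal sketch of an SGLD-type recursion with a without-replacement (shuffled-epoch) minibatch schedule on a linear-regression loss; the paper's specific variant and its equilibrium analysis are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)
N, d = 1000, 5
X = rng.normal(size=(N, d))
theta_true = rng.normal(size=d)
y = X @ theta_true + 0.1 * rng.normal(size=N)

def grad(theta, idx):
    """Minibatch gradient of the least-squares loss, rescaled to full data."""
    Xb, yb = X[idx], y[idx]
    return (N / len(idx)) * Xb.T @ (Xb @ theta - yb)

theta, eta, batch = np.zeros(d), 1e-4, 50
for epoch in range(200):
    perm = rng.permutation(N)                 # without-replacement pass
    for s in range(0, N, batch):
        idx = perm[s:s + batch]
        # Langevin step: gradient descent plus sqrt(2*eta) Gaussian noise.
        theta = theta - eta * grad(theta, idx) \
                + np.sqrt(2 * eta) * rng.normal(size=d)
print("estimate error:", np.linalg.norm(theta - theta_true))
```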
arXiv Detail & Related papers (2023-06-06T09:12:49Z)
- Bayesian Renormalization [68.8204255655161]
We present a fully information theoretic approach to renormalization inspired by Bayesian statistical inference.
The main insight of Bayesian Renormalization is that the Fisher metric defines a correlation length that plays the role of an emergent RG scale.
We provide insight into how the Bayesian Renormalization scheme relates to existing methods for data compression and data generation.
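
For reference, the Fisher information metric on a parametric family p(x|θ) is the textbook quantity below; its identification with a correlation length and an emergent RG scale is the paper's claim, not something the formula alone implies.

```latex
g_{ij}(\theta) = \mathbb{E}_{x \sim p(\cdot \mid \theta)}\!\left[
  \frac{\partial \log p(x \mid \theta)}{\partial \theta^{i}}\,
  \frac{\partial \log p(x \mid \theta)}{\partial \theta^{j}}
\right]
```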
arXiv Detail & Related papers (2023-05-17T18:00:28Z)
- Optimization of a Hydrodynamic Computational Reservoir through Evolution [58.720142291102135]
We interface with a model of a hydrodynamic system, under development by a startup, as a computational reservoir.
We optimized the readout times and how inputs are mapped to the wave amplitude or frequency using an evolutionary search algorithm.
Applying evolutionary methods to this reservoir system substantially improved separability on an XNOR task, in comparison to implementations with hand-selected parameters.
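
A generic (1+λ) evolutionary loop of the kind described, with a stand-in quadratic fitness in place of the hydrodynamic simulator; the parameter names and ranges are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def fitness(params):
    """Stand-in for the reservoir's XNOR separability score; the real
    objective would query the hydrodynamic simulator."""
    readout_time, input_gain = params
    return -(readout_time - 1.7) ** 2 - (input_gain - 0.4) ** 2

# (1+lambda) evolution strategy over readout time and input-mapping gain.
parent = np.array([1.0, 1.0])
sigma, n_children = 0.3, 8
for gen in range(100):
    children = parent + sigma * rng.normal(size=(n_children, 2))
    scores = np.array([fitness(c) for c in children])
    if scores.max() > fitness(parent):
        parent = children[scores.argmax()]
print("best parameters:", parent)
```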
arXiv Detail & Related papers (2023-04-20T19:15:02Z)
- Infinite-dimensional reservoir computing [9.152759278163954]
Reservoir computing approximation and generalization bounds are proved for a new concept class of input/output systems.
The results in the paper yield a fully implementable recurrent neural network-based learning algorithm with provable convergence guarantees.
arXiv Detail & Related papers (2023-04-02T08:59:12Z)
- Dealing with Collinearity in Large-Scale Linear System Identification Using Gaussian Regression [3.04585143845864]
We consider estimation of networks consisting of several interconnected dynamic systems.
We develop a strategy cast in a Bayesian regularization framework where any impulse response is seen as the realization of a zero-mean Gaussian process.
We design a novel Markov chain Monte Carlo scheme able to reconstruct the impulse responses posterior by efficiently dealing with collinearity.
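
The Gaussian-prior backbone of such a strategy can be sketched in a few lines, assuming an FIR model and a stable-spline-type kernel; the paper's actual contribution, the MCMC scheme for handling collinearity, is not reproduced here, since in the well-conditioned case the posterior mean below has a closed form.

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(6)
T, n = 300, 50                       # data length, impulse-response length

g_true = 0.8 ** np.arange(n) * rng.normal(size=n)   # decaying true response
u = rng.normal(size=T)
Phi = toeplitz(u, np.zeros(n))       # regressor matrix of lagged inputs
y = Phi @ g_true + 0.1 * rng.normal(size=T)

# Zero-mean Gaussian prior with kernel K[s,t] = lam^max(s,t), which encodes
# smooth, exponentially decaying impulse responses.
lam = 0.9
s, t = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
K = lam ** np.maximum(s, t)

# Posterior mean of g under Gaussian measurement noise (closed form).
sigma2 = 0.01
g_hat = K @ Phi.T @ np.linalg.solve(Phi @ K @ Phi.T + sigma2 * np.eye(T), y)
print("fit error:", np.linalg.norm(g_hat - g_true) / np.linalg.norm(g_true))
```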
arXiv Detail & Related papers (2023-02-21T19:35:47Z)
- The Separation Capacity of Random Neural Networks [78.25060223808936]
We show that a sufficiently large two-layer ReLU network with standard Gaussian weights and uniformly distributed biases can solve this separation problem with high probability.
We quantify the relevant structure of the data in terms of a novel notion of mutual complexity.
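
A hedged numerical illustration of the statement (the data set and widths are our own choices, not the paper's): random Gaussian first-layer weights and uniform biases produce ReLU features in which a purely linear readout separates two classes that are not linearly separable in input space.

```python
import numpy as np

rng = np.random.default_rng(7)
d, width, n_pts = 2, 500, 200

# Two concentric circles: not separable by any linear map on the raw inputs.
theta = rng.uniform(0, 2 * np.pi, size=n_pts)
labels = rng.integers(0, 2, size=n_pts)
radii = np.where(labels == 1, 1.0, 2.0)
pts = np.stack([radii * np.cos(theta), radii * np.sin(theta)], axis=1)

# Random first layer: standard Gaussian weights, uniformly distributed biases.
W = rng.normal(size=(width, d))
b = rng.uniform(-3, 3, size=width)
H = np.maximum(W @ pts.T + b[:, None], 0.0).T    # random ReLU features

# Only a linear readout on the random features is fitted (least squares).
w, *_ = np.linalg.lstsq(H, 2.0 * labels - 1.0, rcond=None)
acc = np.mean(np.sign(H @ w) == 2.0 * labels - 1.0)
print("separation accuracy on random features:", acc)
```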
arXiv Detail & Related papers (2021-07-31T10:25:26Z)
- Linear embedding of nonlinear dynamical systems and prospects for efficient quantum algorithms [74.17312533172291]
We describe a method for mapping any finite nonlinear dynamical system to an infinite linear dynamical system (embedding).
We then explore an approach for approximating the resulting infinite linear system with finite linear systems (truncation).
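
A minimal instance of such a Carleman-style embedding for a scalar ODE (the example equation and truncation order are our choices, not the paper's): with y_k = x^k, the dynamics x' = -x + x^2 become the infinite linear system y_k' = -k y_k + k y_{k+1}, truncated here at a finite order.

```python
import numpy as np

# Carleman-style embedding of x' = -x + x^2: with y_k = x^k one gets
# y_k' = k x^{k-1} x' = -k*y_k + k*y_{k+1}, an infinite linear system.
N = 12                                 # truncation order
A = np.zeros((N, N))
for k in range(1, N + 1):
    A[k - 1, k - 1] = -k
    if k < N:
        A[k - 1, k] = k                # coupling to the next monomial

x0, dt, steps = 0.5, 0.01, 500
y = x0 ** np.arange(1, N + 1)          # initial monomial vector
x = x0
for _ in range(steps):
    y = y + dt * (A @ y)               # truncated linear dynamics (Euler)
    x = x + dt * (-x + x * x)          # direct nonlinear dynamics (Euler)
print("truncation error at T=5:", abs(y[0] - x))
```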
arXiv Detail & Related papers (2020-12-12T00:01:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.