Universality of reservoir systems with recurrent neural networks
- URL: http://arxiv.org/abs/2403.01900v2
- Date: Mon, 07 Apr 2025 03:54:31 GMT
- Title: Universality of reservoir systems with recurrent neural networks
- Authors: Hiroki Yasumoto, Toshiyuki Tanaka
- Abstract summary: We show what we call uniform strong universality of RNN reservoir systems for a certain class of dynamical systems. We construct an RNN reservoir system via parallel concatenation that has an upper bound of approximation error independent of each target in the class.
- Score: 2.380927607570675
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The approximation capability of reservoir systems whose reservoir is a recurrent neural network (RNN) is discussed. We show what we call uniform strong universality of RNN reservoir systems for a certain class of dynamical systems. This means that, given an approximation error to be achieved, one can construct an RNN reservoir system that approximates each target dynamical system in the class just by adjusting its linear readout. To show the universality, we construct an RNN reservoir system via parallel concatenation that has an upper bound of approximation error independent of each target in the class.
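To make the setting concrete, here is a minimal numpy sketch of an RNN reservoir system with parallel concatenation: several fixed random tanh reservoirs run on the same input, their states are concatenated, and only a linear readout is trained (here by ridge regression). The sizes, scalings, toy target, and regularization strength are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_in, n_res, spectral_scale=0.9):
    """Fixed (non-trainable) RNN reservoir: x_t = tanh(W x_{t-1} + V u_t)."""
    W = rng.normal(size=(n_res, n_res))
    W *= spectral_scale / max(abs(np.linalg.eigvals(W)))  # echo-state-style stability scaling
    V = rng.normal(size=(n_res, n_in))
    return W, V

def run_reservoir(W, V, inputs):
    x = np.zeros(W.shape[0])
    states = []
    for u in inputs:
        x = np.tanh(W @ x + V @ u)
        states.append(x)
    return np.array(states)

# Parallel concatenation: run several independent reservoirs on the same
# input and stack their states; only the linear readout below is trained.
T, n_in = 500, 1
inputs = rng.uniform(-1, 1, size=(T, n_in))
targets = np.sin(np.cumsum(inputs, axis=0))        # toy target dynamical system

reservoirs = [make_reservoir(n_in, 50) for _ in range(3)]
X = np.hstack([run_reservoir(W, V, inputs) for W, V in reservoirs])

# Ridge-regression readout (the only trained component).
lam = 1e-6
W_out = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ targets)
print("train MSE:", np.mean((X @ W_out - targets) ** 2))
```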
Related papers
- Deep Recurrent Stochastic Configuration Networks for Modelling Nonlinear Dynamic Systems [3.8719670789415925]
This paper proposes a novel deep reservoir computing framework, termed deep recurrent stochastic configuration network (DeepRSCN).
DeepRSCNs are incrementally constructed, with all reservoir nodes directly linked to the final output.
Given a set of training samples, DeepRSCNs can quickly generate learning representations, which consist of random basis functions with cascaded input readout weights.
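As a loose illustration of incremental construction with a direct readout (not the DeepRSCN algorithm itself, which configures nodes under a supervisory mechanism), one can grow cascaded blocks of random basis functions and refit a least-squares readout over all nodes after each addition:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_features(X, n_new):
    """One block of random basis functions over the current input."""
    W = rng.uniform(-1, 1, size=(X.shape[1], n_new))
    b = rng.uniform(-1, 1, size=n_new)
    return np.tanh(X @ W + b)

X = rng.normal(size=(200, 3))                 # training inputs
y = np.sin(X).sum(axis=1, keepdims=True)      # toy target

features = []
H = X
for _ in range(5):                            # incrementally grow the network
    H = random_features(H, 20)                # cascade: new block feeds on the previous one
    features.append(H)
    F = np.hstack(features)                   # every node links directly to the output
    beta, *_ = np.linalg.lstsq(F, y, rcond=None)
    print("nodes:", F.shape[1], "MSE:", np.mean((F @ beta - y) ** 2))
```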
arXiv Detail & Related papers (2024-10-28T10:33:15Z) - Simple Cycle Reservoirs are Universal [0.358439716487063]
Reservoir models form a subclass of recurrent neural networks with fixed non-trainable input and dynamic coupling weights.
We show that they are capable of universal approximation of any unrestricted linear reservoir system.
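A simple cycle reservoir constrains the recurrent weights to a single directed cycle with one shared weight and uses input weights of a single magnitude with fixed signs. A sketch of that recipe, with illustrative values:

```python
import numpy as np

n, r, v = 100, 0.9, 0.5                       # reservoir size, cycle weight, input weight

# Recurrent matrix: a single directed cycle with one shared weight r.
W = np.zeros((n, n))
for i in range(n):
    W[(i + 1) % n, i] = r

# Input weights: fixed magnitude v with a deterministic +/- sign pattern.
signs = np.where(np.arange(n) % 2 == 0, 1.0, -1.0)
V = (v * signs).reshape(n, 1)

def step(x, u):
    """One reservoir update; only a linear readout on x would be trained."""
    return np.tanh(W @ x + V @ np.atleast_1d(u))
```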
arXiv Detail & Related papers (2023-08-21T15:35:59Z) - Infinite-dimensional reservoir computing [9.152759278163954]
Reservoir computing approximation and generalization bounds are proved for a new concept class of input/output systems.
The results in the paper yield a fully implementable recurrent neural network-based learning algorithm with provable convergence guarantees.
arXiv Detail & Related papers (2023-04-02T08:59:12Z) - ConCerNet: A Contrastive Learning Based Framework for Automated Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of DNN-based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
arXiv Detail & Related papers (2023-02-11T21:07:30Z) - Reservoir Computing Using Complex Systems [0.0]
Reservoir Computing is a machine learning framework for utilising physical systems for computation.
We show how a single node reservoir can be employed for computation and explore the available options to improve the computational capability of the physical reservoirs.
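One standard route to single-node computation is time multiplexing: a masked input drives one nonlinear node with delayed feedback, and the node's responses at "virtual nodes" along the delay period form the reservoir state. A rough numpy sketch of this idea (a simplification, not any specific physical model):

```python
import numpy as np

rng = np.random.default_rng(2)

N_virtual = 50                                 # virtual nodes along the delay line
mask = rng.choice([-1.0, 1.0], size=N_virtual) # fixed input mask

def single_node_reservoir(inputs, eta=0.5, gamma=0.05):
    """Single nonlinear node with delayed feedback; the virtual-node
    responses in each delay period form the reservoir state."""
    delay_line = np.zeros(N_virtual)
    states = []
    for u in inputs:
        for k in range(N_virtual):
            # node nonlinearity driven by masked input plus delayed feedback
            delay_line[k] = np.tanh(eta * delay_line[k] + gamma * mask[k] * u)
        states.append(delay_line.copy())
    return np.array(states)                    # feed this to a trained linear readout
```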
arXiv Detail & Related papers (2022-12-17T00:25:56Z) - Hierarchical Spherical CNNs with Lifting-based Adaptive Wavelets for Pooling and Unpooling [101.72318949104627]
We propose a novel framework of hierarchical convolutional neural networks (HS-CNNs) with a lifting structure to learn adaptive spherical wavelets for pooling and unpooling.
LiftHS-CNN ensures a more efficient hierarchical feature learning for both image- and pixel-level tasks.
arXiv Detail & Related papers (2022-05-31T07:23:42Z) - Recurrent Neural Networks for Dynamical Systems: Applications to Ordinary Differential Equations, Collective Motion, and Hydrological Modeling [0.20999222360659606]
We train and test RNNs separately on each task to demonstrate their broad applicability in reconstructing and forecasting the dynamics of dynamical systems.
We analyze the performance of RNNs on three tasks: reconstruction of correct Lorenz solutions for a system with an error formulation, reconstruction of corrupted collective-motion trajectories, and forecasting of streamflow time series possessing spikes.
arXiv Detail & Related papers (2022-02-14T20:34:49Z) - Graph-adaptive Rectified Linear Unit for Graph Neural Networks [64.92221119723048]
Graph Neural Networks (GNNs) have achieved remarkable success by extending traditional convolution to learning on non-Euclidean data.
We propose the Graph-adaptive Rectified Linear Unit (GReLU), a new parametric activation function that incorporates neighborhood information in a novel and efficient way.
We conduct comprehensive experiments to show that our plug-and-play GReLU method is efficient and effective given different GNN backbones and various downstream tasks.
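The abstract does not give GReLU's functional form; purely as a loose illustration of a parametric, neighborhood-aware activation (a hypothetical stand-in, not the paper's GReLU), one could modulate each node's ReLU by a learned function of aggregated neighbor features:

```python
import numpy as np

def neighborhood_relu(H, A, w):
    """Illustrative node-adaptive ReLU (NOT the paper's GReLU):
    each node's activation slope depends on its aggregated neighbors.
    H: node features (N, d), A: adjacency (N, N), w: learnable (d,)."""
    agg = A @ H / np.maximum(A.sum(axis=1, keepdims=True), 1)  # mean over neighbors
    slope = 1 / (1 + np.exp(-(agg @ w)))                       # per-node slope in (0, 1)
    return slope[:, None] * np.maximum(H, 0)
```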
arXiv Detail & Related papers (2022-02-13T10:54:59Z) - Learning Autonomy in Management of Wireless Random Networks [102.02142856863563]
This paper presents a machine learning strategy that tackles a distributed optimization task in a wireless network with an arbitrary number of randomly interconnected nodes.
We develop a flexible deep neural network formalism termed distributed message-passing neural network (DMPNN) with forward and backward computations independent of the network topology.
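Topology independence here typically means the same shared functions are applied at every node and edge, so one trained model runs on any random interconnection. A generic message-passing sketch in that spirit (the shared weight matrices are illustrative assumptions, not the paper's DMPNN):

```python
import numpy as np

def message_passing_step(H, A, W_msg, W_upd):
    """One round of message passing with weights shared across all
    nodes/edges, so the forward pass works for any adjacency A."""
    M = A @ np.tanh(H @ W_msg)            # aggregate messages from neighbors
    return np.tanh(H @ W_upd + M)         # update node states

rng = np.random.default_rng(3)
d = 8
W_msg, W_upd = rng.normal(size=(d, d)), rng.normal(size=(d, d))
for n in (5, 12):                         # same weights, different random topologies
    A = (rng.uniform(size=(n, n)) < 0.3).astype(float)
    H = rng.normal(size=(n, d))
    H = message_passing_step(H, A, W_msg, W_upd)
    print(n, H.shape)
```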
arXiv Detail & Related papers (2021-06-15T09:03:28Z) - A novel Deep Neural Network architecture for non-linear system identification [78.69776924618505]
We present a novel Deep Neural Network (DNN) architecture for non-linear system identification.
Inspired by fading memory systems, we introduce inductive bias (on the architecture) and regularization (on the loss function).
This architecture allows for automatic complexity selection based solely on available data.
arXiv Detail & Related papers (2021-06-06T10:06:07Z) - Metric Entropy Limits on Recurrent Neural Network Learning of Linear Dynamical Systems [0.0]
We show that RNNs can optimally learn, or identify in system-theory parlance, stable LTI systems.
For LTI systems whose input-output relation is characterized through a difference equation, this means that RNNs can learn the difference equation from input-output traces in a metric-entropy optimal manner.
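Concretely, such input-output traces come from simulating a stable difference equation; the coefficients below are arbitrary examples of the kind of LTI system an RNN would be trained to identify:

```python
import numpy as np

rng = np.random.default_rng(4)

# Stable LTI system: y_t = a1*y_{t-1} + a2*y_{t-2} + b0*u_t + b1*u_{t-1}
a1, a2, b0, b1 = 0.5, -0.2, 1.0, 0.3   # roots of z^2 - a1*z - a2 lie inside the unit circle

def simulate(u):
    y = np.zeros(len(u))
    for t in range(len(u)):
        y[t] = b0 * u[t]
        if t >= 1:
            y[t] += a1 * y[t - 1] + b1 * u[t - 1]
        if t >= 2:
            y[t] += a2 * y[t - 2]
    return y

u = rng.normal(size=1000)   # input trace
y = simulate(u)             # output trace: the pair (u, y) is what the RNN learns from
```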
arXiv Detail & Related papers (2021-05-06T10:12:30Z) - PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive Learning [109.84770951839289]
We present PredRNN, a new recurrent network for learning visual dynamics from historical context.
We show that our approach obtains highly competitive results on three standard datasets.
arXiv Detail & Related papers (2021-03-17T08:28:30Z) - An Infinite-Feature Extension for Bayesian ReLU Nets That Fixes Their Asymptotic Overconfidence [65.24701908364383]
A Bayesian treatment can mitigate overconfidence in ReLU nets around the training data.
But far away from the training data, ReLU Bayesian neural networks (BNNs) can still underestimate uncertainty and thus be overconfident.
We show that it can be applied post-hoc to any pre-trained ReLU BNN at a low cost.
arXiv Detail & Related papers (2020-10-06T13:32:18Z) - Coupled Oscillatory Recurrent Neural Network (coRNN): An accurate and (gradient) stable architecture for learning long time dependencies [15.2292571922932]
We propose a novel architecture for recurrent neural networks.
Our proposed RNN is based on a time-discretization of a system of second-order ordinary differential equations.
Experiments show that the proposed RNN is comparable in performance to the state of the art on a variety of benchmarks.
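A sketch of the kind of time-discretization described, treating the hidden state y and its velocity z as an explicit-Euler discretization of a second-order ODE (the parameter values here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)
d, m = 16, 2                                # hidden size, input size
W, Wz, V = (rng.normal(size=s) * 0.1 for s in [(d, d), (d, d), (d, m)])
b = np.zeros(d)
dt, gamma, eps = 0.1, 1.0, 1.0              # step size, frequency, damping

def cornn_step(y, z, u):
    """Discretization of y'' = tanh(W y + Wz y' + V u + b) - gamma*y - eps*y'."""
    z = z + dt * (np.tanh(W @ y + Wz @ z + V @ u + b) - gamma * y - eps * z)
    y = y + dt * z
    return y, z

y = z = np.zeros(d)
for u in rng.normal(size=(100, m)):
    y, z = cornn_step(y, z, u)
```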
arXiv Detail & Related papers (2020-10-02T12:35:04Z) - Discrete-time signatures and randomness in reservoir computing [8.579665234755478]
Reservoir computing refers to the possibility of approximating input/output systems with randomly chosen recurrent neural systems and a trained linear readout layer.
Light is shed on this phenomenon by constructing what are called strongly universal reservoir systems.
Explicit expressions for the probability distributions needed in the generation of the projected reservoir system are stated and bounds for the committed approximation error are provided.
arXiv Detail & Related papers (2020-09-17T10:55:59Z) - Modeling from Features: a Mean-field Framework for Over-parameterized
Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z) - Provably Efficient Neural Estimation of Structural Equation Model: An
Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
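A generic gradient descent-ascent loop for such a min-max game looks like the following; this sketches the training scheme only, with a toy linear objective standing in for the paper's operator-equation formulation:

```python
import numpy as np

rng = np.random.default_rng(6)

# Tiny linear "networks" for the two players; f minimizes, g maximizes.
theta_f = rng.normal(size=3)
theta_g = rng.normal(size=3)
X = rng.normal(size=(256, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=256)

lr = 0.01
for _ in range(200):
    resid = y - X @ theta_f                    # structural residual
    test = X @ theta_g                         # adversarial test function
    # objective: mean(test * resid) - 0.5 * mean(test**2)
    grad_f = -X.T @ test / len(y)              # descend in f
    grad_g = X.T @ resid / len(y) - X.T @ (X @ theta_g) / len(y)  # ascend in g
    theta_f -= lr * grad_f
    theta_g += lr * grad_g
```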
arXiv Detail & Related papers (2020-07-02T17:55:47Z) - Lipschitz Recurrent Neural Networks [100.72827570987992]
We show that our Lipschitz recurrent unit is more robust with respect to input and parameter perturbations as compared to other continuous-time RNNs.
Our experiments demonstrate that the Lipschitz RNN can outperform existing recurrent units on a range of benchmark tasks.
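The published Lipschitz RNN takes the continuous-time form x' = A x + tanh(W x + U u + b), with A and W parameterized through weighted symmetric and skew-symmetric parts; treat the construction below as an assumed reading of that recipe, not a faithful reimplementation:

```python
import numpy as np

rng = np.random.default_rng(7)
d, m = 16, 2

def constrained(M, beta=0.75, gamma=0.001):
    """Assumed form: A = (1-beta)*(M + M^T) + beta*(M - M^T) - gamma*I."""
    return (1 - beta) * (M + M.T) + beta * (M - M.T) - gamma * np.eye(len(M))

A = constrained(rng.normal(size=(d, d)) / np.sqrt(d))
W = constrained(rng.normal(size=(d, d)) / np.sqrt(d))
U = rng.normal(size=(d, m)) / np.sqrt(m)
b = np.zeros(d)

def step(x, u, dt=0.1):
    """Explicit Euler step of x' = A x + tanh(W x + U u + b)."""
    return x + dt * (A @ x + np.tanh(W @ x + U @ u + b))
```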
arXiv Detail & Related papers (2020-06-22T08:44:52Z)