Simple Cycle Reservoirs are Universal
- URL: http://arxiv.org/abs/2308.10793v2
- Date: Tue, 4 Jun 2024 19:11:46 GMT
- Title: Simple Cycle Reservoirs are Universal
- Authors: Boyu Li, Robert Simon Fong, Peter Tiňo
- Abstract summary: Reservoir models form a subclass of recurrent neural networks with fixed non-trainable input and dynamic coupling weights.
We show that simple cycle reservoirs (SCRs) are capable of universal approximation of any unrestricted linear reservoir system.
- Score: 0.358439716487063
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reservoir computation models form a subclass of recurrent neural networks with fixed non-trainable input and dynamic coupling weights. Only the static readout from the state space (reservoir) is trainable, thus avoiding the known problems with propagation of gradient information backwards through time. Reservoir models have been successfully applied in a variety of tasks and were shown to be universal approximators of time-invariant fading memory dynamic filters under various settings. Simple cycle reservoirs (SCR) have been suggested as a severely restricted reservoir architecture, with equal-weight ring connectivity of the reservoir units and input-to-reservoir weights of binary nature with the same absolute value. Such architectures are well suited for hardware implementations without performance degradation in many practical tasks. In this contribution, we rigorously study the expressive power of SCR in the complex domain and show that they are capable of universal approximation of any unrestricted linear reservoir system (with continuous readout) and hence any time-invariant fading memory filter over uniformly bounded input streams.
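To make the restricted architecture concrete, here is a minimal real-valued SCR sketch in Python/NumPy: equal-weight ring coupling, input weights of one fixed absolute value with fixed signs, and a ridge-regression readout as the only trained component. The reservoir size, tanh nonlinearity, weight values, and sign pattern are illustrative assumptions, not the paper's construction (whose universality argument concerns linear reservoirs with continuous readouts in the complex domain).

```python
import numpy as np

def make_scr(N, r=0.9, v=0.5, seed=0):
    """Simple cycle reservoir: ring coupling of equal weight r, binary input
    weights of the same absolute value v (sign pattern fixed, not trained)."""
    W = np.zeros((N, N))
    for i in range(N):
        W[(i + 1) % N, i] = r            # unit i feeds unit (i+1) mod N
    rng = np.random.default_rng(seed)     # illustrative sign pattern
    w_in = v * np.where(rng.random(N) < 0.5, -1.0, 1.0)
    return W, w_in

def run_reservoir(W, w_in, u):
    """Drive the reservoir with a scalar input stream; tanh is one common
    choice of state nonlinearity (an illustrative assumption here)."""
    x, states = np.zeros(W.shape[0]), []
    for u_t in u:
        x = np.tanh(W @ x + w_in * u_t)
        states.append(x.copy())
    return np.array(states)

def train_readout(states, targets, ridge=1e-6):
    """Only the static linear readout is trained, here by ridge regression."""
    A = states.T @ states + ridge * np.eye(states.shape[1])
    return np.linalg.solve(A, states.T @ targets)

# Usage: one-step-ahead prediction of a toy signal.
u = np.sin(0.1 * np.arange(1000))
W, w_in = make_scr(N=100)
X = run_reservoir(W, w_in, u[:-1])
w_out = train_readout(X, u[1:])
prediction = X @ w_out
```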
Related papers
- Universality of Real Minimal Complexity Reservoir [0.358439716487063]
Reservoir Computing (RC) models are distinguished by their fixed, non-trainable input layer and dynamically coupled reservoir.
Simple Cycle Reservoirs (SCR) represent a specialized class of RC models with a highly constrained reservoir architecture.
SCRs operating in the real domain are universal approximators of time-invariant dynamic filters with fading memory.
arXiv Detail & Related papers (2024-08-15T10:44:33Z) - A Universal Framework for Quantum Dissipation: Minimally Extended State Space and Exact Time-Local Dynamics [5.221249829454763]
The dynamics of open quantum systems are formulated in a minimally extended state space.
A time-local evolution equation is derived in a mixed Liouville-Fock space.
arXiv Detail & Related papers (2023-07-31T15:57:10Z) - Accumulative reservoir construction: Bridging continuously relaxed and periodically refreshed extended reservoirs [0.0]
We introduce an accumulative reservoir construction that employs a series of partial refreshes of the extended reservoirs.
This provides a unified framework for both continuous (Lindblad) relaxation and a recently introduced periodic-refresh approach.
We show how the range of behavior impacts errors and the computational cost, including within tensor networks.
arXiv Detail & Related papers (2022-10-10T17:59:58Z) - Effective Invertible Arbitrary Image Rescaling [77.46732646918936]
Invertible Neural Networks (INN) are able to increase upscaling accuracy significantly by optimizing the downscaling and upscaling cycle jointly.
In this work, a simple and effective invertible arbitrary rescaling network (IARN) is proposed to achieve arbitrary image rescaling by training only one model.
It is shown to achieve a state-of-the-art (SOTA) performance in bidirectional arbitrary rescaling without compromising perceptual quality in LR outputs.
arXiv Detail & Related papers (2022-09-26T22:22:30Z) - Evolutionary Echo State Network: evolving reservoirs in the Fourier space [1.7658686315825685]
The Echo State Network (ESN) is a class of recurrent neural network with a large number of hidden-hidden weights (in the so-called reservoir).
We propose a new computational model of the ESN type that represents the reservoir weights in the Fourier space and fine-tunes these weights by applying genetic algorithms in the frequency domain.
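As a rough, hedged sketch of the mechanism summarized above: the reservoir matrix is encoded by a small set of frequency-domain coefficients, decoded by an inverse FFT and rescaled for echo-state stability, and the coefficients are tuned by a toy genetic algorithm against one-step prediction error. The basis placement, fitness task, and GA operators are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def decode_reservoir(coeffs, N):
    """Decode an N x N reservoir from a small frequency-domain genome: place
    K x K complex coefficients at the lowest frequencies, inverse-FFT, and
    rescale to spectral radius 0.9 (illustrative choice)."""
    K = int(np.sqrt(len(coeffs) // 2))
    spec = np.zeros((N, N), dtype=complex)
    spec[:K, :K] = (coeffs[:K * K] + 1j * coeffs[K * K:]).reshape(K, K)
    W = np.real(np.fft.ifft2(spec))
    radius = max(abs(np.linalg.eigvals(W)))
    return 0.9 * W / (radius + 1e-12)

def fitness(coeffs, N, u, y):
    """Negative one-step prediction error of an ESN whose reservoir is decoded
    from `coeffs`; the linear readout is fit by least squares."""
    W, w_in = decode_reservoir(coeffs, N), 0.5 * np.ones(N)
    x, states = np.zeros(N), []
    for u_t in u:
        x = np.tanh(W @ x + w_in * u_t)
        states.append(x.copy())
    S = np.array(states)
    w_out, *_ = np.linalg.lstsq(S, y, rcond=None)
    return -np.mean((S @ w_out - y) ** 2)

# Toy genetic algorithm: mutate the frequency-domain genome, keep the elite.
rng = np.random.default_rng(0)
N, K = 50, 4
u = np.sin(0.2 * np.arange(400)); y = np.roll(u, -1)
pop = [rng.normal(size=2 * K * K) for _ in range(20)]
for generation in range(10):
    ranked = sorted(pop, key=lambda c: fitness(c, N, u, y), reverse=True)
    elite = ranked[:5]
    pop = elite + [e + 0.1 * rng.normal(size=e.shape) for e in elite for _ in range(3)]
```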
arXiv Detail & Related papers (2022-06-10T08:59:40Z) - Neural Implicit Flow: a mesh-agnostic dimensionality reduction paradigm of spatio-temporal data [4.996878640124385]
We propose a general framework called Neural Implicit Flow (NIF) that enables a mesh-agnostic, low-rank representation of large-scale, parametric, spatio-temporal data.
NIF consists of two modified multilayer perceptrons: (i) ShapeNet, which isolates and represents the spatial complexity, and (ii) ParameterNet, which accounts for any other input measurements, including parametric dependencies, time, and sensor measurements.
We demonstrate the utility of NIF for parametric surrogate modeling, enabling the interpretable representation and compression of complex spatio-temporal dynamics, efficient many-spatio-temporal generalization, and improved performance for sparse…
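A hedged sketch of the two-network split described above: ParameterNet maps the non-spatial inputs (here just time) to the weights of a small ShapeNet, which maps a spatial coordinate to the field value. The linear hypernetwork and layer sizes below are placeholders for the paper's trained MLPs.

```python
import numpy as np

H = 16  # ShapeNet hidden width (illustrative)

def shapenet(x, theta):
    """Tiny MLP over a scalar spatial coordinate; its weights are supplied
    externally by ParameterNet rather than stored in the network itself."""
    w1, b1, w2 = theta[:H], theta[H:2 * H], theta[2 * H:]
    return np.tanh(w1 * x + b1) @ w2

def parameternet(t, A, b):
    """Placeholder hypernetwork (a linear map standing in for a trained MLP):
    maps time/parameters to the 3*H weights of ShapeNet."""
    return A @ np.array([t, 1.0]) + b

rng = np.random.default_rng(0)
A, b = rng.normal(size=(3 * H, 2)), rng.normal(size=3 * H)
theta = parameternet(t=0.3, A=A, b=b)
u_value = shapenet(x=0.7, theta=theta)  # field value u(x=0.7, t=0.3)
```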
arXiv Detail & Related papers (2022-04-07T05:02:58Z) - Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the computation graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
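As a rough illustration of the message-passing viewpoint (not the paper's trained architecture), the sketch below performs one update on a periodic 1-D grid: messages are computed from pairs of neighbouring node features, sum-aggregated, and used in a residual node update. The tiny parameter vectors stand in for learned MLPs and are not trained here.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
u = np.sin(2 * np.pi * np.arange(n) / n)              # node features (solution values)
# Periodic 1-D grid graph: each node exchanges messages with both neighbours.
edges = [(i, (i + 1) % n) for i in range(n)] + [((i + 1) % n, i) for i in range(n)]

theta_msg = 0.1 * rng.normal(size=2)                  # stand-in for a message MLP
theta_upd = 0.1 * rng.normal(size=2)                  # stand-in for an update MLP

def message(u_i, u_j, theta):
    """Message from node j to node i, computed from the two endpoint features."""
    return np.tanh(theta[0] * u_i + theta[1] * (u_j - u_i))

aggregated = np.zeros(n)
for i, j in edges:
    aggregated[i] += message(u[i], u[j], theta_msg)   # sum-aggregate incoming messages
u_next = u + theta_upd[0] * u + theta_upd[1] * aggregated  # residual node update
```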
arXiv Detail & Related papers (2022-02-07T17:47:46Z) - Compressing Deep ODE-Nets using Basis Function Expansions [105.05435207079759]
We consider formulations of the weights as continuous-depth functions using linear combinations of basis functions.
This perspective allows us to compress the weights through a change of basis, without retraining, while maintaining near state-of-the-art performance.
In turn, both inference time and the memory footprint are reduced, enabling quick and rigorous adaptation between computational environments.
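A minimal sketch of the basis-expansion view summarized above: the depth-dependent weight W(t) is a linear combination of K basis functions, and compression through a change of basis, without retraining, is illustrated by refitting sampled weights in a smaller basis via least squares. The polynomial basis and tensor shapes are illustrative assumptions.

```python
import numpy as np

def basis(t, K):
    """Simple polynomial basis phi_k(t) = t**k over depth t in [0, 1]."""
    return np.array([t ** k for k in range(K)])

def weight_at_depth(C, t):
    """W(t) = sum_k phi_k(t) * C_k, with coefficients C of shape (K, d_out, d_in)."""
    return np.tensordot(basis(t, C.shape[0]), C, axes=1)

def change_of_basis(C, K_small, ts=np.linspace(0.0, 1.0, 64)):
    """Refit the continuous-depth weights in a smaller basis by least squares,
    sampling W(t) on a grid of depths; the network itself is not retrained."""
    Phi_big = np.stack([basis(t, C.shape[0]) for t in ts])   # (T, K)
    Phi_small = np.stack([basis(t, K_small) for t in ts])    # (T, K_small)
    W_samples = np.tensordot(Phi_big, C, axes=1)             # (T, d_out, d_in)
    C_small, *_ = np.linalg.lstsq(Phi_small, W_samples.reshape(len(ts), -1), rcond=None)
    return C_small.reshape(K_small, *C.shape[1:])

# Usage: compress a K=8 expansion of a 16x16 weight down to K=3 coefficients.
rng = np.random.default_rng(0)
C = rng.normal(size=(8, 16, 16))
C_small = change_of_basis(C, K_small=3)
W_mid, W_mid_small = weight_at_depth(C, 0.5), weight_at_depth(C_small, 0.5)
```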
arXiv Detail & Related papers (2021-06-21T03:04:51Z) - Robust Implicit Networks via Non-Euclidean Contractions [63.91638306025768]
Implicit neural networks show improved accuracy and a significant reduction in memory consumption.
However, they can suffer from ill-posedness and convergence instability.
This paper provides a new framework to design well-posed and robust implicit neural networks.
arXiv Detail & Related papers (2021-06-06T18:05:02Z) - Manifold Regularized Dynamic Network Pruning [102.24146031250034]
This paper proposes a new paradigm that dynamically removes redundant filters by embedding the manifold information of all instances into the space of pruned networks.
The effectiveness of the proposed method is verified on several benchmarks, which shows better performance in terms of both accuracy and computational cost.
arXiv Detail & Related papers (2021-03-10T03:59:03Z) - Self Normalizing Flows [65.73510214694987]
We propose a flexible framework for training normalizing flows by replacing expensive terms in the gradient by learned approximate inverses at each layer.
This reduces the computational complexity of each layer's exact update from $\mathcal{O}(D^3)$ to $\mathcal{O}(D^2)$.
We show experimentally that such models are remarkably stable and optimize to similar data likelihood values as their exact gradient counterparts.
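A hedged sketch of the trick for a single linear flow layer: the exact maximum-likelihood gradient contains the $\mathcal{O}(D^3)$ term $W^{-\top}$ (from $\log|\det W|$); learning an approximate inverse $R \approx W^{-1}$ alongside $W$ and substituting $R^\top$ keeps each update at $\mathcal{O}(D^2)$. The toy objective, learning rates, and plain SGD below are illustrative, not the paper's training setup.

```python
import numpy as np

rng = np.random.default_rng(0)
D, lr = 32, 1e-3
W = np.eye(D) + 0.01 * rng.normal(size=(D, D))   # forward weight of the layer
R = np.eye(D) + 0.01 * rng.normal(size=(D, D))   # learned approximate inverse

for step in range(2000):
    x = rng.normal(size=D)
    z = W @ x                                    # forward pass: z = W x
    # Exact gradient of 0.5*||z||^2 - log|det W| w.r.t. W is z x^T - W^{-T};
    # the O(D^3) inverse-transpose is replaced by the learned R^T (O(D^2)).
    W -= lr * (np.outer(z, x) - R.T)
    # Train R with a reconstruction penalty ||R z - x||^2, also O(D^2) per step.
    R -= lr * np.outer(R @ z - x, z)
```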
arXiv Detail & Related papers (2020-11-14T09:51:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.