ParalESN: Enabling parallel information processing in Reservoir Computing
- URL: http://arxiv.org/abs/2601.22296v1
- Date: Thu, 29 Jan 2026 20:18:27 GMT
- Title: ParalESN: Enabling parallel information processing in Reservoir Computing
- Authors: Matteo Pinna, Giacomo Lagomarsini, Andrea Ceni, Claudio Gallicchio
- Abstract summary: Reservoir Computing has established itself as an efficient paradigm for temporal processing. This work introduces the Parallel Echo State Network (ParalESN) to address its scalability limitations. ParalESN enables the construction of high-dimensional and efficient reservoirs based on diagonal linear recurrence in the complex space.
- Score: 10.601079644990504
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reservoir Computing (RC) has established itself as an efficient paradigm for temporal processing. However, its scalability remains severely constrained by (i) the necessity of processing temporal data sequentially and (ii) the prohibitive memory footprint of high-dimensional reservoirs. In this work, we revisit RC through the lens of structured operators and state space modeling to address these limitations, introducing the Parallel Echo State Network (ParalESN). ParalESN enables the construction of high-dimensional and efficient reservoirs based on diagonal linear recurrence in the complex space, allowing parallel processing of temporal data. We provide a theoretical analysis demonstrating that ParalESN preserves the Echo State Property and the universality guarantees of traditional Echo State Networks, while admitting an equivalent representation of arbitrary linear reservoirs in the complex diagonal form. Empirically, ParalESN matches the predictive accuracy of traditional RC on time series benchmarks, while delivering substantial computational savings. On 1-D pixel-level classification tasks, ParalESN achieves competitive accuracy with fully trainable neural networks while reducing computational costs and energy consumption by orders of magnitude. Overall, ParalESN offers a promising, scalable, and principled pathway for integrating RC within the deep learning landscape.
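To make the core mechanism concrete, here is a minimal NumPy sketch (not the authors' code) of a diagonal complex linear recurrence computed two ways: step by step, and via an associative parallel-prefix scan. The names `lam` and `W_in` and the Hillis-Steele scan variant are illustrative assumptions; the point is that the diagonal affine update composes associatively, which is what makes parallel processing of the sequence possible.

```python
import numpy as np

def reservoir_sequential(lam, W_in, X):
    """Diagonal complex recurrence h_t = lam * h_{t-1} + W_in @ x_t, step by step."""
    T, N = X.shape[0], lam.shape[0]
    H = np.zeros((T, N), dtype=complex)
    h = np.zeros(N, dtype=complex)
    for t in range(T):
        h = lam * h + W_in @ X[t]
        H[t] = h
    return H

def reservoir_parallel(lam, W_in, X):
    """Same states via an associative scan over affine maps h -> a*h + b.

    Composing (a1, b1) then (a2, b2) gives (a2*a1, a2*b1 + b2), which is
    associative, so all T states follow in O(log T) combine rounds
    (Hillis-Steele scan; every update inside a round is elementwise,
    hence parallelizable).
    """
    A = np.tile(lam.astype(complex), (X.shape[0], 1))  # per-step decay a_t
    B = (X @ W_in.T).astype(complex)                   # per-step input b_t
    step = 1
    while step < A.shape[0]:
        A_new, B_new = A.copy(), B.copy()
        A_new[step:] = A[step:] * A[:-step]
        B_new[step:] = A[step:] * B[:-step] + B[step:]
        A, B = A_new, B_new
        step *= 2
    return B  # with h_0 = 0, the accumulated offsets are the states

# Consistency check on random data.
rng = np.random.default_rng(0)
N, T, D = 8, 64, 3
lam = 0.9 * np.exp(1j * rng.uniform(0, 2 * np.pi, N))  # |lam| < 1: fading memory
W_in = rng.standard_normal((N, D))
X = rng.standard_normal((T, D))
assert np.allclose(reservoir_sequential(lam, W_in, X), reservoir_parallel(lam, W_in, X))
```

The sequential and scanned states agree exactly, which is the property that lets long sequences be processed in a logarithmic number of parallel rounds rather than one step per time index.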
Related papers
- ANCRe: Adaptive Neural Connection Reassignment for Efficient Depth Scaling [57.91760520589592]
Scaling network depth has been a central driver behind the success of modern foundation models. This paper revisits the default mechanism for deepening neural networks, namely residual connections. We introduce adaptive neural connection reassignment (ANCRe), a principled and lightweight framework that parameterizes and learns residual connectivities from the data.
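As a rough illustration of learnable residual connectivity in general (not ANCRe's actual parameterization, which the abstract does not spell out), one can mix a block's output with its skip path through trainable coefficients:

```python
import numpy as np

def gated_residual_block(x, W, alpha, beta):
    """Generic learnable-residual mixing: y = alpha * f(x) + beta * x.

    Classic residual connections fix alpha = beta = 1; treating them as
    trainable parameters lets the model reassign how strongly each depth
    level contributes to the signal path.
    """
    f_x = np.tanh(W @ x)  # a stand-in block transformation
    return alpha * f_x + beta * x
```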
arXiv Detail & Related papers (2026-02-09T18:54:18Z)
- Sequential Reservoir Computing for Efficient High-Dimensional Spatiotemporal Forecasting [1.5313142881179707]
Reservoir Computing (RC) mitigates the training challenges of recurrent networks by replacing backpropagation with fixed recurrent dynamics and a simple readout optimization. We introduce a Sequential Reservoir Computing (Sequential RC) architecture that decomposes a large reservoir into a series of smaller, interconnected layers.
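A minimal sketch of the chained-reservoir idea, assuming each layer is a standard leaky ESN driven by the states of the previous layer (the layer sizes, leak rate, and the choice to concatenate all layers' states for the readout are illustrative assumptions, not the paper's exact design):

```python
import numpy as np

def sequential_reservoirs(Ws, W_ins, X, leak=1.0):
    """Chain of small reservoirs: layer k is driven by layer k-1's states.

    Ws[k] is the fixed recurrent matrix of layer k; W_ins[k] maps that
    layer's input (the raw series for k = 0, previous states otherwise).
    """
    inputs = X
    all_states = []
    for W, W_in in zip(Ws, W_ins):
        h = np.zeros(W.shape[0])
        states = np.zeros((inputs.shape[0], W.shape[0]))
        for t in range(inputs.shape[0]):
            h = (1 - leak) * h + leak * np.tanh(W @ h + W_in @ inputs[t])
            states[t] = h
        all_states.append(states)
        inputs = states  # feed this layer's states to the next layer
    return np.concatenate(all_states, axis=1)  # readout sees every layer
```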
arXiv Detail & Related papers (2026-01-01T02:24:56Z)
- Recurrent Stochastic Configuration Networks with Incremental Blocks [0.0]
Recurrent stochastic configuration networks (RSCNs) have shown promise in modelling nonlinear dynamic systems with order uncertainty.
This paper develops the original RSCNs with block increments, termed block RSCNs (BRSCNs).
BRSCNs can simultaneously add multiple reservoir nodes (subreservoirs) during construction.
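A generic sketch of block-wise reservoir growth (not the paper's exact construction, which also prescribes how the new weights are chosen): appending a subreservoir as a new diagonal block preserves the old dynamics while optionally coupling the new nodes to the old states.

```python
import numpy as np

def add_subreservoir(W, W_new_block, coupling_scale=0.0, rng=None):
    """Grow a reservoir by appending a block of nodes (a subreservoir).

    The result is block-structured: the old dynamics are untouched, the
    new block runs alongside them, and an optional random coupling lets
    the new nodes read the old states.
    """
    rng = rng or np.random.default_rng()
    n_old, n_new = W.shape[0], W_new_block.shape[0]
    W_grown = np.zeros((n_old + n_new, n_old + n_new))
    W_grown[:n_old, :n_old] = W
    W_grown[n_old:, n_old:] = W_new_block
    W_grown[n_old:, :n_old] = coupling_scale * rng.standard_normal((n_new, n_old))
    return W_grown
```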
arXiv Detail & Related papers (2024-11-18T05:58:47Z)
- Universality of Real Minimal Complexity Reservoir [0.358439716487063]
Reservoir Computing (RC) models are distinguished by their fixed, non-trainable input layer and dynamically coupled reservoir.
Simple Cycle Reservoirs (SCR) represent a specialized class of RC models with a highly constrained reservoir architecture.
SCRs operating in the real domain are universal approximators of time-invariant dynamic filters with fading memory.
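For reference, an SCR's recurrent matrix is just a ring with one shared weight; a minimal construction:

```python
import numpy as np

def simple_cycle_reservoir(n, r=0.9):
    """Simple Cycle Reservoir weight matrix: a single shared weight r on a ring.

    Node i feeds node (i + 1) mod n. The eigenvalues are r times the n-th
    roots of unity, so the spectral radius equals |r|; choosing |r| < 1
    keeps the reservoir in the fading-memory regime.
    """
    W = np.zeros((n, n))
    for i in range(n):
        W[(i + 1) % n, i] = r
    return W
```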
arXiv Detail & Related papers (2024-08-15T10:44:33Z)
- TCCT-Net: Two-Stream Network Architecture for Fast and Efficient Engagement Estimation via Behavioral Feature Signals [58.865901821451295]
We present a novel two-stream feature fusion "Tensor-Convolution and Convolution-Transformer Network" (TCCT-Net) architecture.
To better learn the meaningful patterns in the temporal-spatial domain, we design a "CT" stream that integrates a hybrid convolutional-transformer.
In parallel, to efficiently extract rich patterns from the temporal-frequency domain, we introduce a "TC" stream that uses Continuous Wavelet Transform (CWT) to represent information in a 2D tensor form.
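A minimal sketch of that temporal-frequency representation using PyWavelets (the Morlet wavelet and the scale range here are illustrative choices, not the paper's configuration):

```python
import numpy as np
import pywt

# Turn a 1-D behavioral signal into a 2-D time-frequency tensor with the
# Continuous Wavelet Transform, so a 2-D convolutional stream can consume it.
signal = np.random.default_rng(0).standard_normal(256)
scales = np.arange(1, 65)
coeffs, freqs = pywt.cwt(signal, scales, "morl")
tensor_2d = np.abs(coeffs)  # shape (64, 256): scales x time
```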
arXiv Detail & Related papers (2024-04-15T06:01:48Z)
- NAF: Neural Attenuation Fields for Sparse-View CBCT Reconstruction [79.13750275141139]
This paper proposes a novel and fast self-supervised solution for sparse-view CBCT reconstruction.
The desired attenuation coefficients are represented as a continuous function of 3D spatial coordinates, parameterized by a fully-connected deep neural network.
A learning-based encoder entailing hash coding is adopted to help the network capture high-frequency details.
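A stripped-down sketch of hash-based coordinate encoding (nearest-corner lookup only; real hash-grid encoders also interpolate between the cell's corners, and the level count and table sizes below are illustrative assumptions):

```python
import numpy as np

def hash_encode(xyz, tables, base_res=16, growth=1.5):
    """Multiresolution hash encoding of a 3-D point (nearest-corner variant).

    Each level hashes the point's grid cell into a small learned feature
    table; the concatenated features would feed the attenuation MLP.
    """
    primes = np.array([1, 2654435761, 805459861], dtype=np.uint64)
    feats = []
    for level, table in enumerate(tables):
        res = int(base_res * growth ** level)
        corner = np.floor(xyz * res).astype(np.uint64)
        idx = int(np.bitwise_xor.reduce(corner * primes) % np.uint64(len(table)))
        feats.append(table[idx])
    return np.concatenate(feats)

# Illustrative usage: 4 levels, 2**14 entries of 2 features each.
rng = np.random.default_rng(0)
tables = [rng.standard_normal((2 ** 14, 2)) for _ in range(4)]
features = hash_encode(np.array([0.3, 0.7, 0.1]), tables)  # shape (8,)
```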
arXiv Detail & Related papers (2022-09-29T04:06:00Z)
- Parallel Spatio-Temporal Attention-Based TCN for Multivariate Time Series Prediction [4.211344046281808]
A recurrent neural network with attention, used to help extend the prediction window, is the current state of the art for this task.
We argue that their vanishing gradients, short memories, and serial architecture make RNNs fundamentally unsuited to long-horizon forecasting with complex data.
We propose a framework called PSTA-TCN that combines a parallel spatio-temporal attention mechanism, which extracts dynamic internal correlations, with stacked TCN backbones.
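For context, the TCN building block is a causal dilated convolution; a minimal sketch (generic, not PSTA-TCN's full stack):

```python
import numpy as np

def causal_dilated_conv(x, kernel, dilation):
    """One causal dilated convolution, the building block of a TCN layer.

    Output at time t depends only on x[t], x[t-d], x[t-2d], ..., so stacking
    layers with dilations 1, 2, 4, ... covers a long horizon in few layers,
    and, unlike an RNN, every time step is computed independently (in parallel).
    """
    k = len(kernel)
    pad = (k - 1) * dilation
    x_padded = np.concatenate([np.zeros(pad), x])  # left-pad: no future leakage
    return np.array([
        sum(kernel[j] * x_padded[t + pad - j * dilation] for j in range(k))
        for t in range(len(x))
    ])
```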
arXiv Detail & Related papers (2022-03-02T09:27:56Z)
- Deep Cellular Recurrent Network for Efficient Analysis of Time-Series Data with Spatial Information [52.635997570873194]
This work proposes a novel deep cellular recurrent neural network (DCRNN) architecture to process complex multi-dimensional time series data with spatial information.
The proposed architecture achieves state-of-the-art performance while using substantially fewer trainable parameters than comparable methods in the literature.
arXiv Detail & Related papers (2021-01-12T20:08:18Z)
- A Convergence Theory Towards Practical Over-parameterized Deep Neural Networks [56.084798078072396]
We take a step towards closing the gap between theory and practice by significantly improving the known theoretical bounds on both the network width and the convergence time.
We show that convergence to a global minimum is guaranteed for networks whose width is quadratic in the sample size and linear in the depth, within a number of iterations logarithmic in both.
Our analysis and convergence bounds are derived via the construction of a surrogate network with fixed activation patterns that can be transformed at any time to an equivalent ReLU network of a reasonable size.
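Read symbolically, the guarantee is roughly the following (a paraphrase of the abstract's wording, not the paper's exact theorem; m denotes the width, n the sample size, L the depth, with constants and logarithmic factors suppressed):

```latex
m \,=\, \Omega\!\left(n^{2} L\right)
\quad\Longrightarrow\quad
\text{convergence to a global minimum within } O\!\left(\log (nL)\right) \text{ iterations.}
```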
arXiv Detail & Related papers (2021-01-12T00:40:45Z)
- Exploiting Multiple Timescales in Hierarchical Echo State Networks [0.0]
Echo state networks (ESNs) are a powerful form of reservoir computing that only require training of linear output weights.
Here we explore the timescales in hierarchical ESNs, where the reservoir is partitioned into two smaller, linked reservoirs with distinct properties.
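A minimal sketch of one update of such a hierarchy, assuming standard leaky-integrator ESN dynamics with a different leak rate per level (the coupling matrix and leak values are illustrative assumptions):

```python
import numpy as np

def hierarchical_esn_step(h1, h2, x, W1, W2, W12, W_in, a1=0.9, a2=0.1):
    """One step of a two-level hierarchical ESN with distinct timescales.

    Leak rates a1 >> a2 give the first reservoir fast dynamics and the
    second slow ones; W12 feeds the fast states into the slow reservoir.
    """
    h1 = (1 - a1) * h1 + a1 * np.tanh(W1 @ h1 + W_in @ x)
    h2 = (1 - a2) * h2 + a2 * np.tanh(W2 @ h2 + W12 @ h1)
    return h1, h2
```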
arXiv Detail & Related papers (2021-01-11T22:33:17Z)
- Accelerating Feedforward Computation via Parallel Nonlinear Equation Solving [106.63673243937492]
Feedforward computation, such as evaluating a neural network or sampling from an autoregressive model, is ubiquitous in machine learning.
We frame the task of feedforward computation as solving a system of nonlinear equations. We then propose to find the solution using a Jacobi or Gauss-Seidel fixed-point method, as well as hybrid methods of both.
Our method is guaranteed to give exactly the same values as the original feedforward computation with a reduced (or equal) number of parallelizable iterations, and hence reduced time given sufficient parallel computing power.
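A minimal sketch of the Jacobi variant for a plain layer chain (the toy network and shapes are illustrative; the paper handles general feedforward computations): every layer is updated simultaneously from the previous sweep's values, and after at most depth-many sweeps the result matches sequential evaluation exactly.

```python
import numpy as np

def jacobi_feedforward(layers, x0, n_iters):
    """Evaluate a feedforward chain h_i = f_i(h_{i-1}) by Jacobi iteration.

    Each sweep updates all layers in parallel from the previous sweep;
    after at most len(layers) sweeps this equals the sequential result,
    and it can converge in far fewer parallelizable sweeps.
    """
    h = [np.zeros_like(x0) for _ in layers]       # arbitrary initial guess
    for _ in range(n_iters):
        prev = [x0] + h[:-1]
        h = [f(p) for f, p in zip(layers, prev)]  # independent updates
    return h[-1]

# Sanity check: with n_iters = depth, Jacobi equals sequential evaluation.
rng = np.random.default_rng(0)
Ws = [rng.standard_normal((4, 4)) * 0.5 for _ in range(5)]
layers = [lambda v, W=W: np.tanh(W @ v) for W in Ws]
x0 = rng.standard_normal(4)
seq = x0
for f in layers:
    seq = f(seq)
assert np.allclose(jacobi_feedforward(layers, x0, n_iters=5), seq)
```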
arXiv Detail & Related papers (2020-02-10T10:11:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.