Sequential Reservoir Computing for Efficient High-Dimensional Spatiotemporal Forecasting
- URL: http://arxiv.org/abs/2601.00172v1
- Date: Thu, 01 Jan 2026 02:24:56 GMT
- Title: Sequential Reservoir Computing for Efficient High-Dimensional Spatiotemporal Forecasting
- Authors: Ata Akbari Asanjan, Filip Wudarski, Daniel O'Connor, Shaun Geaney, Elena Strbac, P. Aaron Lott, Davide Venturelli
- Abstract summary: Reservoir Computing (RC) mitigates these challenges by replacing backpropagation with fixed recurrent layers and a convex readout optimization. We introduce a Sequential Reservoir Computing (Sequential RC) architecture that decomposes a large reservoir into a series of smaller, interconnected reservoirs.
- Score: 1.5313142881179707
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Forecasting high-dimensional spatiotemporal systems remains computationally challenging for recurrent neural networks (RNNs) and long short-term memory (LSTM) models due to gradient-based training and memory bottlenecks. Reservoir Computing (RC) mitigates these challenges by replacing backpropagation with fixed recurrent layers and a convex readout optimization, yet conventional RC architectures still scale poorly with input dimensionality. We introduce a Sequential Reservoir Computing (Sequential RC) architecture that decomposes a large reservoir into a series of smaller, interconnected reservoirs. This design reduces memory and computational costs while preserving long-term temporal dependencies. Using both low-dimensional chaotic systems (Lorenz63) and high-dimensional physical simulations (2D vorticity and shallow-water equations), Sequential RC achieves 15-25% longer valid forecast horizons, 20-30% better error and similarity metrics (lower RMSE, higher SSIM), and up to three orders of magnitude lower training cost compared to LSTM and standard RNN baselines. The results demonstrate that Sequential RC maintains the simplicity and efficiency of conventional RC while achieving superior scalability for high-dimensional dynamical systems. This approach provides a practical path toward real-time, energy-efficient forecasting in scientific and engineering applications.
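The abstract specifies the chained-reservoir design but not the exact coupling or readout construction, so the following is only a minimal sketch of the idea in NumPy: a chain of small leaky echo-state layers, each driven by the previous layer's state, with one ridge-regression readout over the concatenated states. Class and parameter names (SequentialRC, leak, ridge, the layer sizes) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_layer(n_in, n_res, spectral_radius=0.9, input_scale=0.5):
    """One small reservoir: fixed random input and recurrent weights."""
    W_in = input_scale * rng.uniform(-1.0, 1.0, (n_res, n_in))
    W = rng.uniform(-1.0, 1.0, (n_res, n_res))
    # Rescale so the spectral radius stays below 1 (fading memory).
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
    return W_in, W

class SequentialRC:
    """Chain of small reservoirs; layer k is driven by layer k-1's state."""

    def __init__(self, n_in, layer_sizes, leak=0.3, ridge=1e-6):
        self.leak, self.ridge = leak, ridge
        drives = [n_in] + list(layer_sizes[:-1])
        self.layers = [make_layer(d, r) for d, r in zip(drives, layer_sizes)]
        self.states = [np.zeros(r) for r in layer_sizes]
        self.W_out = None

    def _step(self, u):
        drive = u
        for k, (W_in, W) in enumerate(self.layers):
            x = self.states[k]
            x_new = np.tanh(W_in @ drive + W @ x)
            self.states[k] = (1.0 - self.leak) * x + self.leak * x_new
            drive = self.states[k]          # feed this state to the next layer
        return np.concatenate(self.states)  # the readout sees every layer

    def fit(self, U, Y, washout=100):
        """Convex readout: ridge regression on collected states, no BPTT."""
        X = np.stack([self._step(u) for u in U])[washout:]
        A = X.T @ X + self.ridge * np.eye(X.shape[1])
        self.W_out = np.linalg.solve(A, X.T @ Y[washout:]).T

    def predict(self, u):
        return self.W_out @ self._step(u)
```

Concatenating every layer's state keeps training a single convex least-squares fit while each recurrent matrix stays small, so reservoir memory scales with the sum of squared layer sizes rather than the square of their sum; for a Lorenz63-scale problem one might try SequentialRC(3, [200, 200, 200]) on one-step-ahead targets.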
Related papers
- ParalESN: Enabling parallel information processing in Reservoir Computing [10.601079644990504]
Reservoir Computing has established itself as an efficient paradigm for temporal processing. This work introduces the Parallel Echo State Network (ParalESN) to address limitations of that paradigm: ParalESN enables the construction of high-dimensional and efficient reservoirs based on diagonal linear recurrence in the complex space.
arXiv Detail & Related papers (2026-01-29T20:18:27Z) - The Curious Case of In-Training Compression of State Space Models [49.819321766705514]
State Space Models (SSMs) tackle long sequence modeling tasks efficiently, offering both parallelizable training and fast inference. A key design challenge is striking the right balance between maximizing expressivity and limiting the computational burden. The proposed approach, CompreSSM, applies to Linear Time-Invariant SSMs such as Linear Recurrent Units, but is also extendable to selective models.
arXiv Detail & Related papers (2025-10-03T09:02:33Z) - CR-Net: Scaling Parameter-Efficient Training with Cross-Layer Low-Rank Structure [8.92064131103945]
The Cross-layer Low-Rank residual Network (CR-Net) is a framework inspired by the discovery that inter-layer activation residuals possess low-rank properties. CR-Net consistently outperforms state-of-the-art low-rank frameworks while requiring fewer computational resources and less memory.
arXiv Detail & Related papers (2025-09-23T13:43:02Z) - Structuring Multiple Simple Cycle Reservoirs with Particle Swarm Optimization [4.452666723220885]
Reservoir Computing (RC) is a time-efficient computational paradigm derived from Recurrent Neural Networks (RNNs). This paper introduces Multiple Simple Cycle Reservoirs (MSCRs), a multi-reservoir framework that extends Echo State Networks (ESNs). Optimizing MSCRs with Particle Swarm Optimization (PSO) outperforms existing multi-reservoir models, achieving competitive predictive performance with a lower-dimensional state space.
arXiv Detail & Related papers (2025-04-06T12:25:40Z) - Fast Training of Recurrent Neural Networks with Stationary State Feedbacks [48.22082789438538]
Recurrent neural networks (RNNs) have recently demonstrated strong performance and faster inference than Transformers. This work proposes a method that replaces backpropagation through time (BPTT) with a fixed gradient feedback mechanism.
arXiv Detail & Related papers (2025-03-29T14:45:52Z) - SimBa: Simplicity Bias for Scaling Up Parameters in Deep Reinforcement Learning [49.83621156017321]
SimBa is an architecture designed to scale up parameters in deep reinforcement learning (RL) by injecting a simplicity bias. Scaling up parameters with SimBa consistently improves the sample efficiency of various deep RL algorithms, including off-policy, on-policy, and unsupervised methods.
arXiv Detail & Related papers (2024-10-13T07:20:53Z) - Universality of Real Minimal Complexity Reservoir [0.358439716487063]
Reservoir Computing (RC) models are distinguished by their fixed, non-trainable input layer and dynamically coupled reservoir.
Simple Cycle Reservoirs (SCR) represent a specialized class of RC models with a highly constrained reservoir architecture.
SCRs operating in the real domain are universal approximators of time-invariant dynamic filters with fading memory (the first sketch after this list illustrates the SCR construction).
arXiv Detail & Related papers (2024-08-15T10:44:33Z) - Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution [91.3781512926942]
Image super-resolution (SR) has witnessed extensive neural network designs, from CNNs to transformer architectures.
This work investigates the potential of network pruning for super-resolution to take advantage of off-the-shelf network designs and reduce the underlying computational overhead.
We propose a novel Iterative Soft Shrinkage-Percentage (ISS-P) method that optimizes the sparse structure of a randomly initialized network at each iteration and tweaks unimportant weights on-the-fly by a small amount proportional to the magnitude scale.
arXiv Detail & Related papers (2023-03-16T21:06:13Z) - Stabilizing Equilibrium Models by Jacobian Regularization [151.78151873928027]
Deep equilibrium networks (DEQs) are a new class of models that eschews traditional depth in favor of finding the fixed point of a single nonlinear layer.
We propose a regularization scheme for DEQ models that explicitly regularizes the Jacobian of the fixed-point update equations to stabilize the learning of equilibrium models.
We show that this regularization adds only minimal computational cost, significantly stabilizes the fixed-point convergence in both forward and backward passes, and scales well to high-dimensional, realistic domains.
arXiv Detail & Related papers (2021-06-28T00:14:11Z) - Deep Cellular Recurrent Network for Efficient Analysis of Time-Series Data with Spatial Information [52.635997570873194]
This work proposes a novel deep cellular recurrent neural network (DCRNN) architecture to process complex multi-dimensional time series data with spatial information.
The proposed architecture achieves state-of-the-art performance while using substantially fewer trainable parameters than comparable methods in the literature.
arXiv Detail & Related papers (2021-01-12T20:08:18Z) - Model-Size Reduction for Reservoir Computing by Concatenating Internal States Through Time [2.6872737601772956]
Reservoir computing (RC) is a machine learning algorithm that can learn complex time series from data very rapidly.
To deploy RC on edge devices, it is important to reduce the computational resources it requires.
We propose methods that reduce the size of the reservoir by feeding its past or drifting states into the output layer at the current time step (the second sketch after this list illustrates this).
arXiv Detail & Related papers (2020-06-11T06:11:03Z)
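The Simple Cycle Reservoir construction referenced above (Universality of Real Minimal Complexity Reservoir) is concrete enough to sketch: in Rodan and Tiňo's classic SCR, all recurrent weights lie on a single ring and share one value, and all input weights share one magnitude. The sketch below uses random input-weight signs for brevity, whereas the original construction uses a deterministic aperiodic sign pattern, and the paper's real-domain variant may differ in detail.

```python
import numpy as np

def simple_cycle_reservoir(n_res, n_in, r=0.9, v=0.5, seed=0):
    """Fixed weights of a Simple Cycle Reservoir (SCR).

    Recurrent matrix: one ring, unit i-1 feeds unit i, every weight equals r
    (|r| < 1 gives fading memory). Input weights all have magnitude v; only
    their signs vary (random here; deterministic in the original SCR).
    """
    W = np.zeros((n_res, n_res))
    W[np.arange(1, n_res), np.arange(n_res - 1)] = r  # ring connections
    W[0, n_res - 1] = r                               # close the cycle
    signs = np.sign(np.random.default_rng(seed).uniform(-1, 1, (n_res, n_in)))
    return W, v * signs

def run_reservoir(W, W_in, inputs):
    """Drive the reservoir and collect states for a linear readout."""
    x = np.zeros(W.shape[0])
    states = []
    for u in inputs:
        x = np.tanh(W @ x + W_in @ u)
        states.append(x)
    return np.stack(states)
```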
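The final entry's model-size reduction is also easy to illustrate: instead of enlarging the reservoir, the readout at time t sees the current state concatenated with earlier states, e.g. [x_t, x_{t-5}, x_{t-10}]. The delay values below are hypothetical, and the paper's exact concatenation and drifting-state schemes may differ.

```python
import numpy as np

def delayed_readout_features(states, delays=(0, 5, 10)):
    """Concatenate current and past reservoir states for the readout.

    states: array of shape (T, n_res) collected from a driven reservoir.
    Returns rows [x_t, x_{t-5}, x_{t-10}] for every t at which all delayed
    states exist, so a small reservoir yields a wide readout feature.
    """
    T = states.shape[0]
    start = max(delays)
    cols = [states[start - d : T - d] for d in delays]
    return np.hstack(cols)  # shape (T - max(delays), n_res * len(delays))
```

This multiplies the readout's feature dimension by the number of delays while the recurrent part, which dominates memory and compute, stays small.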