Structuring Multiple Simple Cycle Reservoirs with Particle Swarm Optimization
- URL: http://arxiv.org/abs/2504.05347v2
- Date: Mon, 21 Apr 2025 07:10:54 GMT
- Title: Structuring Multiple Simple Cycle Reservoirs with Particle Swarm Optimization
- Authors: Ziqiang Li, Robert Simon Fong, Kantaro Fujiwara, Kazuyuki Aihara, Gouhei Tanaka
- Abstract summary: Reservoir Computing (RC) is a time-efficient computational paradigm derived from Recurrent Neural Networks (RNNs). This paper introduces Multiple Simple Cycle Reservoirs (MSCRs), a multi-reservoir framework that extends Echo State Networks (ESNs). We demonstrate that optimizing MSCR using Particle Swarm Optimization (PSO) outperforms existing multi-reservoir models, achieving competitive predictive performance with a lower-dimensional state space.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Reservoir Computing (RC) is a time-efficient computational paradigm derived from Recurrent Neural Networks (RNNs). The Simple Cycle Reservoir (SCR) is an RC model that stands out for its minimalistic design, offering extremely low construction complexity and proven capability of universally approximating time-invariant causal fading memory filters, even in the linear dynamics regime. This paper introduces Multiple Simple Cycle Reservoirs (MSCRs), a multi-reservoir framework that extends Echo State Networks (ESNs) by replacing a single large reservoir with multiple interconnected SCRs. We demonstrate that optimizing MSCR using Particle Swarm Optimization (PSO) outperforms existing multi-reservoir models, achieving competitive predictive performance with a lower-dimensional state space. By modeling interconnections as a weighted Directed Acyclic Graph (DAG), our approach enables flexible, task-specific network topology adaptation. Numerical simulations on three benchmark time-series prediction tasks confirm these advantages over rival algorithms. These findings highlight the potential of MSCR-PSO as a promising framework for optimizing multi-reservoir systems, providing a foundation for further advancements and applications of interconnected SCRs for developing efficient AI devices.
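To make the SCR building block of this framework concrete, the following is a minimal sketch of a single Simple Cycle Reservoir in the standard formulation: a ring-shaped reservoir matrix with one uniform cycle weight and input weights of constant magnitude with varying signs. All names and parameter values (`n_units`, `r`, `v`, the sign choice via a seeded RNG) are illustrative assumptions, not taken from the paper, and the MSCR interconnection and PSO optimization layers are omitted.

```python
import numpy as np

def make_scr(n_units, r=0.9, v=0.5, seed=0):
    """Build a Simple Cycle Reservoir: a directed cycle with uniform weight r
    and input weights of fixed magnitude v with pseudo-random signs.
    Parameter values here are illustrative, not from the paper."""
    W = np.zeros((n_units, n_units))
    for i in range(n_units):
        W[(i + 1) % n_units, i] = r  # edge i -> i+1 along the cycle
    rng = np.random.default_rng(seed)
    w_in = v * rng.choice([-1.0, 1.0], size=n_units)
    return W, w_in

def run_scr(W, w_in, inputs):
    """Drive the reservoir with a scalar input sequence and collect states.
    A tanh nonlinearity is used here; the paper also treats the linear regime."""
    x = np.zeros(W.shape[0])
    states = []
    for u in inputs:
        x = np.tanh(W @ x + w_in * u)
        states.append(x.copy())
    return np.array(states)

W, w_in = make_scr(n_units=50)
u = np.sin(np.linspace(0, 8 * np.pi, 200))
states = run_scr(W, w_in, u)
print(states.shape)  # (200, 50)
```

In an MSCR-style setup, several such reservoirs would be interconnected according to a weighted DAG and only the linear readout trained; the cycle weight `r` below 1 keeps the spectral radius at `r`, which is the usual lever for the echo state property.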
Related papers
- Recurrent Stochastic Configuration Networks with Incremental Blocks [0.0]
Recurrent stochastic configuration networks (RSCNs) have shown promise in modelling nonlinear dynamic systems with order uncertainty.
This paper extends the original RSCNs with block increments, termed block RSCNs (BRSCNs).
BRSCNs can simultaneously add multiple reservoir nodes (subreservoirs) during the construction.
arXiv Detail & Related papers (2024-11-18T05:58:47Z) - SimBa: Simplicity Bias for Scaling Up Parameters in Deep Reinforcement Learning [49.83621156017321]
SimBa is an architecture designed to scale up parameters in deep RL by injecting a simplicity bias.
By scaling up parameters with SimBa, the sample efficiency of various deep RL algorithms-including off-policy, on-policy, and unsupervised methods-is consistently improved.
arXiv Detail & Related papers (2024-10-13T07:20:53Z) - Universality of Real Minimal Complexity Reservoir [0.358439716487063]
Reservoir Computing (RC) models are distinguished by their fixed, non-trainable input layer and dynamically coupled reservoir.
Simple Cycle Reservoirs (SCR) represent a specialized class of RC models with a highly constrained reservoir architecture.
SCRs operating in real domain are universal approximators of time-invariant dynamic filters with fading memory.
arXiv Detail & Related papers (2024-08-15T10:44:33Z) - TCCT-Net: Two-Stream Network Architecture for Fast and Efficient Engagement Estimation via Behavioral Feature Signals [58.865901821451295]
We present a novel two-stream feature fusion "Tensor-Convolution and Convolution-Transformer Network" (TCCT-Net) architecture.
To better learn the meaningful patterns in the temporal-spatial domain, we design a "CT" stream that integrates a hybrid convolutional-transformer.
In parallel, to efficiently extract rich patterns from the temporal-frequency domain, we introduce a "TC" stream that uses Continuous Wavelet Transform (CWT) to represent information in a 2D tensor form.
arXiv Detail & Related papers (2024-04-15T06:01:48Z) - MsDC-DEQ-Net: Deep Equilibrium Model (DEQ) with Multi-scale Dilated Convolution for Image Compressive Sensing (CS) [0.0]
Compressive sensing (CS) is a technique that enables the recovery of sparse signals using fewer measurements than traditional sampling methods.
We develop an interpretable and concise neural network model for reconstructing natural images using CS.
The model, called MsDC-DEQ-Net, exhibits competitive performance compared to state-of-the-art network-based methods.
arXiv Detail & Related papers (2024-01-05T16:25:58Z) - Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution [91.3781512926942]
Image super-resolution (SR) has witnessed extensive neural network designs from CNN to transformer architectures.
This work investigates the potential of network pruning for super-resolution iteration to take advantage of off-the-shelf network designs and reduce the underlying computational overhead.
We propose a novel Iterative Soft Shrinkage-Percentage (ISS-P) method that optimizes the sparse structure of a randomly initialized network at each iteration and tweaks unimportant weights on-the-fly by a small amount proportional to their magnitude scale.
arXiv Detail & Related papers (2023-03-16T21:06:13Z) - Robust Deep Compressive Sensing with Recurrent-Residual Structural Constraints [0.0]
Existing deep compressive sensing (CS) methods either ignore adaptive online optimization or depend on costly iterative reconstruction.
This work explores a novel image CS framework with a recurrent-residual structural constraint, termed R$^2$CS-NET.
As the first deep CS framework to efficiently bridge adaptive online optimization and deep learning, the R$^2$CS-NET integrates the robustness of online optimization with the efficiency and nonlinear capacity of deep learning methods.
arXiv Detail & Related papers (2022-07-15T05:56:13Z) - Residual Multiplicative Filter Networks for Multiscale Reconstruction [24.962697695403037]
We introduce a new coordinate network architecture and training scheme that enables coarse-to-fine optimization with fine-grained control over the frequency support of learned reconstructions.
We demonstrate how these modifications enable multiscale optimization for coarse-to-fine fitting to natural images.
We then evaluate our model on synthetically generated datasets for the problem of single-particle cryo-EM reconstruction.
arXiv Detail & Related papers (2022-06-01T20:16:28Z) - TMS: A Temporal Multi-scale Backbone Design for Speaker Embedding [60.292702363839716]
Current SOTA backbone networks for speaker embedding are designed to aggregate multi-scale features from an utterance with multi-branch network architectures for speaker representation.
We propose an effective temporal multi-scale (TMS) model where multi-scale branches could be efficiently designed in a speaker embedding network almost without increasing computational costs.
arXiv Detail & Related papers (2022-03-17T05:49:35Z) - Efficient Micro-Structured Weight Unification and Pruning for Neural Network Compression [56.83861738731913]
Deep Neural Network (DNN) models are essential for practical applications, especially for resource limited devices.
Previous unstructured or structured weight pruning methods can hardly deliver real inference acceleration.
We propose a generalized weight unification framework at a hardware-compatible micro-structured level to achieve a high degree of compression and acceleration.
arXiv Detail & Related papers (2021-06-15T17:22:59Z) - Learning Frequency-aware Dynamic Network for Efficient Super-Resolution [56.98668484450857]
This paper explores a novel frequency-aware dynamic network for dividing the input into multiple parts according to its coefficients in the discrete cosine transform (DCT) domain.
In practice, the high-frequency part will be processed using expensive operations and the lower-frequency part is assigned with cheap operations to relieve the computation burden.
Experiments conducted on benchmark SISR models and datasets show that the frequency-aware dynamic network can be employed for various SISR neural architectures.
arXiv Detail & Related papers (2021-03-15T12:54:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.