Learning Reservoir Dynamics with Temporal Self-Modulation
- URL: http://arxiv.org/abs/2301.09235v1
- Date: Mon, 23 Jan 2023 00:44:05 GMT
- Title: Learning Reservoir Dynamics with Temporal Self-Modulation
- Authors: Yusuke Sakemi, Sou Nobukawa, Toshitaka Matsuki, Takashi Morie,
Kazuyuki Aihara
- Abstract summary: Reservoir computing (RC) can efficiently process time-series data by transferring the input signal to randomly connected recurrent neural networks (RNNs).
However, the learning performance of RC is inferior to that of other state-of-the-art RNN models.
We propose self-modulated RC (SM-RC), which extends RC by adding a self-modulation mechanism.
- Score: 3.2548794659022393
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reservoir computing (RC) can efficiently process time-series data by
transferring the input signal to randomly connected recurrent neural networks
(RNNs), which are referred to as a reservoir. The high-dimensional
representation of time-series data in the reservoir significantly simplifies
subsequent learning tasks. Although this simple architecture allows fast
learning and facile physical implementation, the learning performance is
inferior to that of other state-of-the-art RNN models. In this paper, to
improve the learning ability of RC, we propose self-modulated RC (SM-RC), which
extends RC by adding a self-modulation mechanism. The self-modulation mechanism
is realized with two gating variables: an input gate and a reservoir gate. The
input gate modulates the input signal, and the reservoir gate modulates the
dynamical properties of the reservoir. We demonstrated that SM-RC can perform
attention tasks where input information is retained or discarded depending on
the input signal. We also found that a chaotic state emerged as a result of
learning in SM-RC. This indicates that self-modulation mechanisms provide RC
with qualitatively different information-processing capabilities. Furthermore,
SM-RC outperformed RC in NARMA and Lorenz model tasks. In particular, SM-RC
achieved a higher prediction accuracy than RC with a reservoir 10 times larger
in the Lorenz model tasks. Because the SM-RC architecture requires only two
additional gates, it is as amenable to physical implementation as RC, providing
a new direction for realizing edge AI.
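Since the abstract specifies the mechanism only at a high level, the following is a minimal NumPy sketch of one self-modulated reservoir update, assuming state-dependent sigmoid gates. The gate parameterization and the names `w_gin` / `w_gres` are illustrative assumptions, not the paper's exact design; the sketch only shows how an input gate can rescale the input signal while a reservoir gate rescales the recurrent term, and hence the effective spectral radius.

```python
import numpy as np

# Minimal sketch of an SM-RC-style update, assuming sigmoid gates computed
# from the current reservoir state; the gate parameterization and the names
# w_gin / w_gres are illustrative, not the authors' exact formulation.

rng = np.random.default_rng(0)

N = 100                                              # reservoir size
W_in = rng.normal(0.0, 0.5, size=N)                  # fixed input weights (scalar input)
W = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))   # fixed recurrent weights
w_gin = rng.normal(0.0, 0.1, size=N)                 # trainable input-gate weights (hypothetical)
w_gres = rng.normal(0.0, 0.1, size=N)                # trainable reservoir-gate weights (hypothetical)


def step(x, u):
    """One self-modulated reservoir update.

    The input gate g_in rescales the input signal (retaining or discarding
    it), and the reservoir gate g_res rescales the recurrent term, i.e. the
    reservoir's effective spectral radius; gate values above 1 can push the
    dynamics toward the chaotic regime described in the abstract.
    """
    g_in = 2.0 / (1.0 + np.exp(-(w_gin @ x)))        # input gate in (0, 2)
    g_res = 2.0 / (1.0 + np.exp(-(w_gres @ x)))      # reservoir gate in (0, 2)
    return np.tanh(g_res * (W @ x) + g_in * u * W_in)


# Drive the reservoir and collect states as features for a linear readout
# (in the paper's setting this would be a NARMA or Lorenz time series).
x = np.zeros(N)
states = []
for u in np.sin(0.1 * np.arange(200)):
    x = step(x, u)
    states.append(x)
states = np.asarray(states)                          # shape (200, N)
```

In plain RC only a linear readout on `states` would be trained; SM-RC additionally trains the two gate weight vectors, which is the only architectural overhead the abstract refers to.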
Related papers
- Universality of Real Minimal Complexity Reservoir [0.358439716487063]
Reservoir Computing (RC) models are distinguished by their fixed, non-trainable input layer and dynamically coupled reservoir.
Simple Cycle Reservoirs (SCR) represent a specialized class of RC models with a highly constrained reservoir architecture.
SCRs operating in the real domain are universal approximators of time-invariant dynamic filters with fading memory.
arXiv Detail & Related papers (2024-08-15T10:44:33Z)
- Analysis and Fully Memristor-based Reservoir Computing for Temporal Data Classification [0.6291443816903801]
Reservoir computing (RC) offers a neuromorphic framework that is particularly effective for processing temporal signals.
A key component in RC hardware is the ability to generate dynamic reservoir states.
This study demonstrates the capability of memristor-based RC systems to handle novel temporal tasks.
arXiv Detail & Related papers (2024-03-04T08:22:29Z)
- Sparse Modular Activation for Efficient Sequence Modeling [94.11125833685583]
Recent models combining Linear State Space Models with self-attention mechanisms have demonstrated impressive results across a range of sequence modeling tasks.
Current approaches apply attention modules statically and uniformly to all elements in the input sequences, leading to sub-optimal quality-efficiency trade-offs.
We introduce Sparse Modular Activation (SMA), a general mechanism enabling neural networks to sparsely activate sub-modules for sequence elements in a differentiable manner.
arXiv Detail & Related papers (2023-06-19T23:10:02Z)
- Systematic Architectural Design of Scale Transformed Attention Condenser DNNs via Multi-Scale Class Representational Response Similarity Analysis [93.0013343535411]
We propose a novel type of analysis called Multi-Scale Class Representational Response Similarity Analysis (ClassRepSim).
We show that adding STAC modules to ResNet-style architectures can result in up to a 1.6% increase in top-1 accuracy.
Results from ClassRepSim analysis can be used to select an effective parameterization of the STAC module, resulting in competitive performance.
arXiv Detail & Related papers (2023-06-16T18:29:26Z)
- Deep Learning-Based Rate-Splitting Multiple Access for Reconfigurable Intelligent Surface-Aided Tera-Hertz Massive MIMO [56.022764337221325]
Reconfigurable intelligent surface (RIS) can significantly enhance the service coverage of Tera-Hertz massive multiple-input multiple-output (MIMO) communication systems.
However, obtaining accurate high-dimensional channel state information (CSI) with limited pilot and feedback signaling overhead is challenging.
This paper proposes a deep learning (DL)-based rate-splitting multiple access scheme for RIS-aided Tera-Hertz multi-user multiple access systems.
arXiv Detail & Related papers (2022-09-18T03:07:37Z)
- Heterogeneous Reservoir Computing Models for Persian Speech Recognition [0.0]
Reservoir computing (RC) models have been proven inexpensive to train, have vastly fewer parameters, and are compatible with emergent hardware technologies.
We propose heterogeneous single and multi-layer ESNs to create non-linear transformations of the inputs that capture temporal context at different scales.
arXiv Detail & Related papers (2022-05-25T09:15:15Z)
- Model-based Deep Learning Receiver Design for Rate-Splitting Multiple Access [65.21117658030235]
This work proposes a novel design for a practical RSMA receiver based on model-based deep learning (MBDL) methods.
The MBDL receiver is evaluated in terms of uncoded Symbol Error Rate (SER), throughput performance through Link-Level Simulations (LLS), and average training overhead.
Results reveal that the MBDL receiver outperforms the SIC receiver with imperfect CSIR by a significant margin.
arXiv Detail & Related papers (2022-05-02T12:23:55Z)
- Harnessing Tensor Structures -- Multi-Mode Reservoir Computing and Its Application in Massive MIMO [39.46260351352041]
We introduce a new neural network (NN) structure, multi-mode reservoir computing (Multi-Mode RC).
The Multi-Mode RC-based learning framework can efficiently and effectively combat practical constraints of wireless systems.
arXiv Detail & Related papers (2021-01-25T20:30:22Z)
- Incremental Training of a Recurrent Neural Network Exploiting a Multi-Scale Dynamic Memory [79.42778415729475]
We propose a novel incrementally trained recurrent architecture explicitly targeting multi-scale learning.
We show how to extend the architecture of a simple RNN by separating its hidden state into different modules.
We discuss a training algorithm where new modules are iteratively added to the model to learn progressively longer dependencies.
arXiv Detail & Related papers (2020-06-29T08:35:49Z)
- RCNet: Incorporating Structural Information into Deep RNN for MIMO-OFDM Symbol Detection with Limited Training [26.12840500767443]
We introduce the Time-Frequency RC to take advantage of the structural information inherent in OFDM signals.
We show that RCNet can offer a faster learning convergence and as much as 20% gain in bit error rate over a shallow RC structure.
arXiv Detail & Related papers (2020-03-15T21:06:40Z)
- Refined Gate: A Simple and Effective Gating Mechanism for Recurrent Units [68.30422112784355]
We propose a new gating mechanism within general gated recurrent neural networks to handle this issue.
The proposed gates directly short connect the extracted input features to the outputs of vanilla gates.
We verify the proposed gating mechanism on three popular types of gated RNNs including LSTM, GRU and MGU.
arXiv Detail & Related papers (2020-02-26T07:51:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.