Model-Size Reduction for Reservoir Computing by Concatenating Internal
States Through Time
- URL: http://arxiv.org/abs/2006.06218v1
- Date: Thu, 11 Jun 2020 06:11:03 GMT
- Title: Model-Size Reduction for Reservoir Computing by Concatenating Internal
States Through Time
- Authors: Yusuke Sakemi, Kai Morino, Timothée Leleu, Kazuyuki Aihara
- Abstract summary: Reservoir computing (RC) is a machine learning algorithm that can learn complex time series from data very rapidly.
To implement RC in edge computing, it is highly important to reduce the amount of computational resources that RC requires.
We propose methods that reduce the size of the reservoir by inputting the past or drifting states of the reservoir to the output layer at the current time step.
- Score: 2.6872737601772956
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reservoir computing (RC) is a machine learning algorithm that can learn
complex time series from data very rapidly based on the use of high-dimensional
dynamical systems, such as random networks of neurons, called "reservoirs." To
implement RC in edge computing, it is highly important to reduce the amount of
computational resources that RC requires. In this study, we propose methods
that reduce the size of the reservoir by inputting the past or drifting states
of the reservoir to the output layer at the current time step. These proposed
methods are analyzed based on information processing capacity, which is a
performance measure of RC proposed by Dambre et al. (2012). In addition, we
evaluate the effectiveness of the proposed methods on time-series prediction
tasks: the generalized Hénon map and NARMA. On these tasks, we found that the
proposed methods were able to reduce the size of the reservoir up to one tenth
without a substantial increase in regression error. Because the applications of
the proposed methods are not limited to a specific network structure of the
reservoir, the proposed methods could further improve the energy efficiency of
RC-based systems, such as FPGAs and photonic systems.
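The core idea in the abstract, concatenating past reservoir states into the readout so that a smaller reservoir retains enough memory, can be illustrated with a minimal echo state network on the NARMA10 benchmark. This is an illustrative sketch, not the authors' implementation: the reservoir size, delay count, and ridge parameter are arbitrary choices, and the NARMA10 recurrence follows its standard definition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Standard NARMA10 benchmark series driven by input u.
def narma10(u):
    y = np.zeros_like(u)
    for t in range(9, len(u) - 1):
        y[t + 1] = (0.3 * y[t]
                    + 0.05 * y[t] * np.sum(y[t - 9:t + 1])
                    + 1.5 * u[t - 9] * u[t]
                    + 0.1)
    return y

T, N, k = 2000, 40, 4          # series length, (reduced) reservoir size, delays
u = rng.uniform(0, 0.5, T)
y = narma10(u)

# Random reservoir, rescaled to spectral radius 0.9 (echo state property).
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
w_in = rng.uniform(-0.5, 0.5, N)

# Drive the reservoir and record its internal states.
X = np.zeros((T, N))
for t in range(1, T):
    X[t] = np.tanh(W @ X[t - 1] + w_in * u[t])

# Readout sees [x(t), x(t-1), ..., x(t-k+1)] concatenated: N*k features
# from an N-neuron reservoir, instead of enlarging the reservoir itself.
Z = np.hstack([np.roll(X, d, axis=0) for d in range(k)])

# Discard a washout period (also removes np.roll's wrap-around rows),
# then fit a ridge-regression readout to predict y one step ahead.
washout = 200
Zw, yw = Z[washout:-1], y[washout + 1:]
lam = 1e-6
w_out = np.linalg.solve(Zw.T @ Zw + lam * np.eye(N * k), Zw.T @ yw)
nrmse = np.sqrt(np.mean((Zw @ w_out - yw) ** 2)) / np.std(yw)
```

Setting `k = 1` recovers a plain echo state network readout; increasing `k` widens the readout's effective memory at no cost to the reservoir's size, which is the trade-off the paper analyzes via information processing capacity.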
Related papers
- Oscillations enhance time-series prediction in reservoir computing with feedback [3.3686252536891454]
Reservoir computing is a machine learning framework used for modeling the brain.
It is difficult to accurately reproduce the long-term target time series because the reservoir system becomes unstable.
This study proposes oscillation-driven reservoir computing (ODRC) with feedback.
arXiv Detail & Related papers (2024-06-05T02:30:29Z)
- Hybridizing Traditional and Next-Generation Reservoir Computing to Accurately and Efficiently Forecast Dynamical Systems [0.0]
Reservoir computers (RCs) are powerful machine learning architectures for time series prediction.
Next generation reservoir computers (NGRCs) have been introduced, offering distinct advantages over RCs.
Here, we introduce a hybrid RC-NGRC approach for time series forecasting of dynamical systems.
arXiv Detail & Related papers (2024-03-04T17:35:17Z)
- Analysis and Fully Memristor-based Reservoir Computing for Temporal Data Classification [0.6291443816903801]
Reservoir computing (RC) offers a neuromorphic framework that is particularly effective for processing signals.
A key component of RC hardware is the ability to generate dynamic reservoir states.
This study illuminates the adeptness of memristor-based RC systems in managing novel temporal challenges.
arXiv Detail & Related papers (2024-03-04T08:22:29Z)
- A Systematic Exploration of Reservoir Computing for Forecasting Complex Spatiotemporal Dynamics [0.0]
A reservoir computer (RC) is a type of recurrent neural network that has demonstrated success in predicting intrinsically chaotic dynamical systems.
We explore the architecture and design choices for a "best in class" RC for a number of characteristic dynamical systems.
We show the application of these choices in scaling up to larger models using localization.
arXiv Detail & Related papers (2022-01-21T22:31:12Z)
- Convolutional generative adversarial imputation networks for spatio-temporal missing data in storm surge simulations [86.5302150777089]
Generative Adversarial Imputation Nets (GAIN) and GAN-based techniques have attracted attention as unsupervised machine learning methods.
We name our proposed method Convolutional Generative Adversarial Imputation Nets (Conv-GAIN).
arXiv Detail & Related papers (2021-11-03T03:50:48Z)
- StreaMRAK a Streaming Multi-Resolution Adaptive Kernel Algorithm [60.61943386819384]
Existing implementations of kernel ridge regression (KRR) require that all the data is stored in main memory.
We propose StreaMRAK - a streaming version of KRR.
We present a showcase study on two synthetic problems and the prediction of the trajectory of a double pendulum.
arXiv Detail & Related papers (2021-08-23T21:03:09Z)
- Efficient Micro-Structured Weight Unification and Pruning for Neural Network Compression [56.83861738731913]
Deep Neural Network (DNN) models are essential for practical applications, especially on resource-limited devices.
Previous unstructured or structured weight pruning methods can hardly truly accelerate inference.
We propose a generalized weight unification framework at a hardware compatible micro-structured level to achieve high amount of compression and acceleration.
arXiv Detail & Related papers (2021-06-15T17:22:59Z)
- Hierarchical Architectures in Reservoir Computing Systems [0.0]
Reservoir computing (RC) offers efficient temporal data processing with a low training cost.
We investigate the influence of the hierarchical reservoir structure on the properties of the reservoir and the performance of the RC system.
arXiv Detail & Related papers (2021-05-14T16:11:35Z)
- Deep Cellular Recurrent Network for Efficient Analysis of Time-Series Data with Spatial Information [52.635997570873194]
This work proposes a novel deep cellular recurrent neural network (DCRNN) architecture to process complex multi-dimensional time series data with spatial information.
The proposed architecture achieves state-of-the-art performance while using substantially fewer trainable parameters than comparable methods in the literature.
arXiv Detail & Related papers (2021-01-12T20:08:18Z)
- Temporal Attention-Augmented Graph Convolutional Network for Efficient Skeleton-Based Human Action Recognition [97.14064057840089]
Graph convolutional networks (GCNs) have been very successful in modeling non-Euclidean data structures.
Most GCN-based action recognition methods use deep feed-forward networks with high computational complexity to process all skeletons in an action.
We propose a temporal attention module (TAM) for increasing the efficiency in skeleton-based action recognition.
arXiv Detail & Related papers (2020-10-23T08:01:55Z)
- Large-Scale Gradient-Free Deep Learning with Recursive Local Representation Alignment [84.57874289554839]
Training deep neural networks on large-scale datasets requires significant hardware resources.
Backpropagation, the workhorse for training these networks, is an inherently sequential process that is difficult to parallelize.
We propose a neuro-biologically-plausible alternative to backprop that can be used to train deep networks.
arXiv Detail & Related papers (2020-02-10T16:20:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.