Sparsity in Reservoir Computing Neural Networks
- URL: http://arxiv.org/abs/2006.02957v1
- Date: Thu, 4 Jun 2020 15:38:17 GMT
- Title: Sparsity in Reservoir Computing Neural Networks
- Authors: Claudio Gallicchio
- Abstract summary: Reservoir Computing (RC) is a strategy for designing Recurrent Neural Networks characterized by strikingly efficient training.
In this paper, we empirically investigate the role of sparsity in RC network design under the perspective of the richness of the developed temporal representations.
- Score: 3.55810827129032
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reservoir Computing (RC) is a well-known strategy for designing Recurrent Neural Networks characterized by strikingly efficient training. The crucial aspect of RC is to properly instantiate the hidden recurrent layer that serves as a dynamical memory for the system. In this respect, the common recipe is to create a pool of randomly and sparsely connected recurrent neurons. While sparsity in the design of RC systems has been debated in the literature, it is nowadays understood mainly as a way to enhance computational efficiency by exploiting sparse matrix operations. In this paper, we empirically investigate the role of sparsity in RC network design from the perspective of the richness of the developed temporal representations. We analyze sparsity both in the recurrent connections and in the connections from the input to the reservoir. Our results indicate that sparsity, in particular in the input-reservoir connections, plays a major role in developing internal temporal representations that have a longer short-term memory of past inputs and a higher dimensionality.
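To make the setup concrete, here is a minimal NumPy sketch of an Echo State Network reservoir in which both the input-reservoir and the recurrent weight matrices are instantiated randomly and sparsely. The network size, connection densities, and spectral radius below are illustrative assumptions, not the experimental settings of the paper.

```python
# Minimal sketch of an Echo State Network with sparse input and recurrent weights.
# Density, size, and spectral radius values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sparse_matrix(rows, cols, density, scale, rng):
    """Random matrix in [-scale, scale] with roughly `density` fraction of nonzeros."""
    weights = rng.uniform(-scale, scale, size=(rows, cols))
    mask = rng.random((rows, cols)) < density
    return weights * mask

n_units, n_in = 500, 1
W_in = sparse_matrix(n_units, n_in, density=0.1, scale=1.0, rng=rng)      # sparse input-reservoir weights
W_hat = sparse_matrix(n_units, n_units, density=0.1, scale=1.0, rng=rng)  # sparse recurrent weights
# Rescale the recurrent weights to spectral radius 0.9 (a common echo state heuristic).
W = W_hat * (0.9 / np.max(np.abs(np.linalg.eigvals(W_hat))))

def run_reservoir(u):
    """Drive the reservoir with the 1-D input sequence u and collect the states."""
    x = np.zeros(n_units)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
        states.append(x.copy())
    return np.array(states)

states = run_reservoir(rng.uniform(-1, 1, size=200))
print(states.shape)  # (200, 500): one reservoir state per time step
```

Sweeping the density of W_in and W while measuring, for instance, how well past inputs can be linearly decoded from the collected states would reproduce the kind of short-term-memory analysis the abstract refers to.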
Related papers
- Analysis and Fully Memristor-based Reservoir Computing for Temporal Data Classification [0.6291443816903801]
Reservoir computing (RC) offers a neuromorphic framework that is particularly effective for processing temporal signals.
A key component in RC hardware is the ability to generate dynamic reservoir states.
This study demonstrates the ability of memristor-based RC systems to handle novel temporal tasks.
arXiv Detail & Related papers (2024-03-04T08:22:29Z)
- Leveraging Low-Rank and Sparse Recurrent Connectivity for Robust Closed-Loop Control [63.310780486820796]
We show how a parameterization of recurrent connectivity influences robustness in closed-loop settings.
We find that closed-form continuous-time neural networks (CfCs) with fewer parameters can outperform their full-rank, fully-connected counterparts.
arXiv Detail & Related papers (2023-10-05T21:44:18Z)
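The exact parameterization used in the paper above is not reproduced here; the following is a hedged sketch of one common way to combine a low-rank term with a sparse term in a recurrent weight matrix, with the rank, density, and scaling chosen purely for illustration.

```python
# Hedged sketch: a recurrent weight matrix written as a low-rank term plus a sparse term.
# Rank, density, and scaling are illustrative assumptions, not the cited paper's construction.
import numpy as np

rng = np.random.default_rng(1)
n, rank, density = 128, 4, 0.05

U = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, rank))
V = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, rank))
S = rng.normal(0.0, 1.0 / np.sqrt(density * n), size=(n, n)) * (rng.random((n, n)) < density)

W_rec = U @ V.T + S  # low-rank structure plus sparse random connectivity
print(np.linalg.matrix_rank(U @ V.T), np.count_nonzero(S) / S.size)
```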
- Embedding Theory of Reservoir Computing and Reducing Reservoir Network Using Time Delays [6.543793376734818]
Reservoir computing (RC) is developing rapidly due to its exceptional efficacy and high performance in the reconstruction and/or prediction of complex physical systems.
Here, we rigorously prove that RC is essentially a high-dimensional embedding of the original input nonlinear dynamical system.
We significantly reduce the network size of RC for reconstructing and predicting some representative physical systems and, more surprisingly, show that a single-neuron reservoir with time delays is sometimes sufficient for these tasks.
arXiv Detail & Related papers (2023-03-16T02:25:51Z)
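As a hedged illustration of the single-neuron idea mentioned in the entry above, the sketch below implements a generic time-delay reservoir: one nonlinear unit with delayed feedback, whose delay line serves as the high-dimensional feature vector. The delay length and gains are assumptions, not values from the paper.

```python
# Hedged sketch of a single-neuron reservoir with time-delayed feedback.
# Delay length, gains, and nonlinearity are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
delay, feedback_gain, input_gain = 50, 0.8, 1.0

def run_delay_reservoir(u):
    buffer = np.zeros(delay)          # circular buffer of past neuron states
    states = []
    for t, u_t in enumerate(u):
        delayed = buffer[t % delay]   # neuron state from `delay` steps ago
        x = np.tanh(feedback_gain * delayed + input_gain * u_t)
        buffer[t % delay] = x
        states.append(buffer.copy())  # snapshot of the delay line = feature vector
    return np.array(states)

features = run_delay_reservoir(rng.uniform(-1, 1, size=300))
print(features.shape)  # (300, 50)
```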
- Brain-like combination of feedforward and recurrent network components achieves prototype extraction and robust pattern recognition [0.0]
Associative memory has been a prominent candidate for the computation performed by the massively recurrent neocortical networks.
We combine a recurrent attractor network with a feedforward network that learns distributed representations using an unsupervised Hebbian-Bayesian learning rule.
We demonstrate that the recurrent attractor component implements associative memory when trained on the feedforward-driven internal (hidden) representations.
arXiv Detail & Related papers (2022-06-30T06:03:11Z)
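For intuition only, here is a classic Hopfield-style attractor memory trained with a plain Hebbian outer-product rule on binary patterns. It is a generic stand-in for a recurrent attractor component and does not implement the Hebbian-Bayesian (BCPNN) rule used in the cited paper.

```python
# Hedged sketch: Hopfield-style associative memory with a plain Hebbian learning rule.
# This is a generic illustration, not the cited paper's Hebbian-Bayesian model.
import numpy as np

rng = np.random.default_rng(3)
n_units, n_patterns = 200, 10
patterns = rng.choice([-1.0, 1.0], size=(n_patterns, n_units))

# Hebbian learning: accumulate outer products of the stored patterns.
W = (patterns.T @ patterns) / n_units
np.fill_diagonal(W, 0.0)

def recall(probe, steps=20):
    """Iterate the attractor dynamics from a noisy probe until it settles."""
    x = probe.copy()
    for _ in range(steps):
        x = np.sign(W @ x)
        x[x == 0] = 1.0
    return x

noisy = patterns[0] * np.where(rng.random(n_units) < 0.2, -1.0, 1.0)  # flip ~20% of the bits
print(np.mean(recall(noisy) == patterns[0]))  # fraction of bits correctly recovered
```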
- Cross-Frequency Coupling Increases Memory Capacity in Oscillatory Neural Networks [69.42260428921436]
Cross-frequency coupling (CFC) is associated with information integration across populations of neurons.
We construct a model of CFC which predicts a computational role for observed $\theta$-$\gamma$ oscillatory circuits in the hippocampus and cortex.
We show that the presence of CFC increases the memory capacity of a population of neurons connected by plastic synapses.
arXiv Detail & Related papers (2022-04-05T17:13:36Z)
- Recurrence-in-Recurrence Networks for Video Deblurring [58.49075799159015]
State-of-the-art video deblurring methods often adopt recurrent neural networks to model the temporal dependency between the frames.
In this paper, we propose a recurrence-in-recurrence network architecture to cope with the limitations of short-range memory.
arXiv Detail & Related papers (2022-03-12T11:58:13Z)
- Hierarchical Architectures in Reservoir Computing Systems [0.0]
Reservoir computing (RC) offers efficient temporal data processing with a low training cost.
We investigate the influence of the hierarchical reservoir structure on the properties of the reservoir and the performance of the RC system.
arXiv Detail & Related papers (2021-05-14T16:11:35Z)
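A hedged sketch of the general idea of a hierarchical reservoir follows: each layer is a small random reservoir driven by the states of the layer below. The layer sizes and scalings are illustrative and are not taken from the cited study.

```python
# Hedged sketch of a hierarchical (stacked) reservoir. Sizes and scalings are illustrative.
import numpy as np

rng = np.random.default_rng(4)

def make_layer(n_in, n_units, rho=0.9):
    """Random reservoir layer with recurrent weights rescaled to spectral radius rho."""
    W_in = rng.uniform(-1, 1, size=(n_units, n_in))
    W = rng.uniform(-1, 1, size=(n_units, n_units))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))
    return W_in, W

def run_layer(W_in, W, inputs):
    x = np.zeros(W.shape[0])
    out = []
    for u_t in inputs:
        x = np.tanh(W_in @ u_t + W @ x)
        out.append(x.copy())
    return np.array(out)

layer_sizes = [100, 100, 100]
states = rng.uniform(-1, 1, size=(200, 1))   # external input sequence (200 steps, 1 channel)
for size in layer_sizes:
    W_in, W = make_layer(states.shape[1], size)
    states = run_layer(W_in, W, states)      # each layer is driven by the layer below
print(states.shape)  # (200, 100): states of the topmost reservoir layer
```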
- Temporal Memory Relation Network for Workflow Recognition from Surgical Video [53.20825496640025]
We propose a novel end-to-end temporal memory relation network (TMNet) for relating long-range and multi-scale temporal patterns.
We have extensively validated our approach on two benchmark surgical video datasets.
arXiv Detail & Related papers (2021-03-30T13:20:26Z)
- PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive Learning [109.84770951839289]
We present PredRNN, a new recurrent network for learning visual dynamics from historical context.
We show that our approach obtains highly competitive results on three standard datasets.
arXiv Detail & Related papers (2021-03-17T08:28:30Z)
- Depth Enables Long-Term Memory for Recurrent Neural Networks [0.0]
We introduce a measure of the network's ability to support information flow across time, referred to as the Start-End separation rank.
We prove that deep recurrent networks support Start-End separation ranks which are higher than those supported by their shallow counterparts.
arXiv Detail & Related papers (2020-03-23T10:29:14Z)
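For reference, the standard notion of separation rank with respect to a partition of the input time steps into a "start" set $S$ and an "end" set $E$ can be written as below; the paper's precise Start-End separation rank may differ in detail.

```latex
% Hedged sketch: standard separation rank of a function f with respect to a
% partition of its inputs into a "start" set S and an "end" set E.
\[
\operatorname{sep}_{(S,E)}(f) \;=\;
\min \Big\{ R \in \mathbb{N} \;:\;
  f(\mathbf{x}_S, \mathbf{x}_E) \;=\; \sum_{r=1}^{R} g_r(\mathbf{x}_S)\, h_r(\mathbf{x}_E) \Big\}.
\]
% Intuitively, a higher separation rank means the modeled function entangles early
% and late inputs more strongly, i.e. it supports longer-term memory.
```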
- Large-Scale Gradient-Free Deep Learning with Recursive Local Representation Alignment [84.57874289554839]
Training deep neural networks on large-scale datasets requires significant hardware resources.
Backpropagation, the workhorse for training these networks, is an inherently sequential process that is difficult to parallelize.
We propose a neuro-biologically-plausible alternative to backprop that can be used to train deep networks.
arXiv Detail & Related papers (2020-02-10T16:20:02Z)
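As a loosely related illustration of this family of biologically motivated training schemes, the sketch below implements feedback alignment, in which errors are propagated through fixed random matrices rather than the transposed forward weights. It is not the paper's recursive local representation alignment algorithm, only a minimal example of training without standard backprop.

```python
# Hedged sketch: feedback alignment on a tiny two-layer network and a single sample.
# Errors flow backwards through a fixed random matrix B2 instead of W2.T.
# This is NOT the cited paper's recursive local representation alignment algorithm.
import numpy as np

rng = np.random.default_rng(5)
n_in, n_hidden, n_out, lr = 8, 32, 2, 0.05

W1 = rng.normal(0.0, 0.5, size=(n_hidden, n_in))
W2 = rng.normal(0.0, 0.5, size=(n_out, n_hidden))
B2 = rng.normal(0.0, 0.5, size=(n_hidden, n_out))  # fixed random feedback matrix

x = rng.normal(size=n_in)
y = np.array([1.0, 0.0])

for _ in range(200):
    h = np.tanh(W1 @ x)
    y_hat = W2 @ h
    e = y_hat - y                          # output error
    delta_h = (B2 @ e) * (1.0 - h ** 2)    # hidden "error" via random feedback, not W2.T
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(delta_h, x)

print(np.round(W2 @ np.tanh(W1 @ x), 3))  # should move close to the target [1, 0]
```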
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.