Hierarchical Architectures in Reservoir Computing Systems
- URL: http://arxiv.org/abs/2105.06923v1
- Date: Fri, 14 May 2021 16:11:35 GMT
- Title: Hierarchical Architectures in Reservoir Computing Systems
- Authors: John Moon, Wei D. Lu (University of Michigan)
- Abstract summary: Reservoir computing (RC) offers efficient temporal data processing with a low training cost.
We investigate the influence of the hierarchical reservoir structure on the properties of the reservoir and the performance of the RC system.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reservoir computing (RC) offers efficient temporal data processing with a low
training cost by separating recurrent neural networks into a fixed network with
recurrent connections and a trainable linear network. The quality of the fixed
network, called the reservoir, is the most important factor that determines the
performance of the RC system. In this paper, we investigate the influence of
the hierarchical reservoir structure on the properties of the reservoir and the
performance of the RC system. Analogous to deep neural networks, stacking
sub-reservoirs in series is an efficient way to enhance the nonlinearity of
data transformation to high-dimensional space and expand the diversity of
temporal information captured by the reservoir. These deep reservoir systems
offer better performance when compared to simply increasing the size of the
reservoir or the number of sub-reservoirs. Low-frequency components are mainly
captured by the sub-reservoirs in the later stages of the deep reservoir
structure, similar to the observation that more abstract information is
extracted by the later layers of deep neural networks. When the total size of
the reservoir is fixed, the tradeoff between the number of sub-reservoirs and
the size of each sub-reservoir needs to be carefully considered, because the
ability of individual sub-reservoirs degrades at small sizes. The improved
performance of the deep reservoir structure alleviates the difficulty of
implementing RC systems in hardware.
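The architecture described above has a natural minimal realization: each sub-reservoir is a fixed random recurrent network, the sub-reservoirs are stacked in series so that each layer is driven by the states of the previous one, and only a linear readout over the concatenated states is trained. The sketch below illustrates this under standard echo-state assumptions (leaky-integrator tanh units, ridge-regression readout); the class and parameter names (SubReservoir, DeepReservoir, leak, spectral_radius) are illustrative choices, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

class SubReservoir:
    """One fixed random recurrent layer (echo-state style); its weights are never trained."""
    def __init__(self, n_in, n_units, spectral_radius=0.9, leak=0.3, input_scale=1.0):
        self.W_in = input_scale * rng.uniform(-1.0, 1.0, (n_units, n_in))
        W = rng.uniform(-0.5, 0.5, (n_units, n_units))
        W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))  # control the spectral radius
        self.W, self.leak = W, leak
        self.state = np.zeros(n_units)

    def step(self, u):
        pre = self.W_in @ u + self.W @ self.state
        self.state = (1.0 - self.leak) * self.state + self.leak * np.tanh(pre)
        return self.state

class DeepReservoir:
    """Sub-reservoirs stacked in series: layer k is driven by the state of layer k-1."""
    def __init__(self, n_in, sizes):
        self.layers, prev = [], n_in
        for n in sizes:
            self.layers.append(SubReservoir(prev, n))
            prev = n

    def run(self, inputs):
        """Return, for every time step, the concatenated states of all sub-reservoirs."""
        feats = []
        for u in inputs:
            x = np.atleast_1d(u)
            layer_states = []
            for layer in self.layers:
                x = layer.step(x)
                layer_states.append(x)
            feats.append(np.concatenate(layer_states))
        return np.array(feats)

def train_readout(states, targets, ridge=1e-6):
    """The only trainable part of the RC system: a linear map fitted by ridge regression."""
    X = np.hstack([states, np.ones((len(states), 1))])  # append a bias column
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ targets)

# Toy usage: one-step-ahead prediction of a noisy sine wave.
t = np.linspace(0, 40 * np.pi, 4000)
u = np.sin(t) + 0.05 * rng.standard_normal(t.size)
reservoir = DeepReservoir(n_in=1, sizes=[100, 100, 100])   # three serial sub-reservoirs
states = reservoir.run(u[:-1])
W_out = train_readout(states[200:], u[1:][200:])            # discard an initial washout
pred = np.hstack([states, np.ones((len(states), 1))]) @ W_out
print("one-step training MSE:", float(np.mean((pred[200:] - u[1:][200:]) ** 2)))
```

With the total number of units held fixed, varying sizes between one large reservoir (e.g. [300]) and several serial sub-reservoirs (e.g. [100, 100, 100]) is one way to probe the depth-versus-size tradeoff discussed in the abstract.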
Related papers
- Chaotic attractor reconstruction using small reservoirs - the influence of topology [0.0]
Reservoir computing has been shown to be an effective method of forecasting chaotic dynamics.
We show that a reservoir of uncoupled nodes more reliably produces long-term time-series predictions.
arXiv Detail & Related papers (2024-02-23T09:43:52Z)
- A Lightweight Recurrent Learning Network for Sustainable Compressed Sensing [27.964167481909588]
We propose a lightweight but effective deep neural network based on recurrent learning to achieve a sustainable CS system.
Our proposed model can achieve a better reconstruction quality than existing state-of-the-art CS algorithms.
arXiv Detail & Related papers (2023-04-23T14:54:15Z)
- RC-Net: A Convolutional Neural Network for Retinal Vessel Segmentation [3.0846824529023387]
We present RC-Net, a fully convolutional network, where the number of filters per layer is optimized to reduce feature overlapping and complexity.
In our experiments, RC-Net is quite competitive, outperforming alternative vessel segmentation methods with two or even three orders of magnitude fewer trainable parameters.
arXiv Detail & Related papers (2021-12-21T10:24:01Z)
- Deep Cellular Recurrent Network for Efficient Analysis of Time-Series Data with Spatial Information [52.635997570873194]
This work proposes a novel deep cellular recurrent neural network (DCRNN) architecture to process complex multi-dimensional time series data with spatial information.
The proposed architecture achieves state-of-the-art performance while using substantially fewer trainable parameters than comparable methods in the literature.
arXiv Detail & Related papers (2021-01-12T20:08:18Z)
- Sequential Hierarchical Learning with Distribution Transformation for Image Super-Resolution [83.70890515772456]
We build a sequential hierarchical learning super-resolution network (SHSR) for effective image SR.
We consider the inter-scale correlations of features, and devise a sequential multi-scale block (SMB) to progressively explore the hierarchical information.
Experimental results show that SHSR achieves superior quantitative performance and visual quality compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-07-19T01:35:53Z)
- Model-Size Reduction for Reservoir Computing by Concatenating Internal States Through Time [2.6872737601772956]
Reservoir computing (RC) is a machine learning algorithm that can learn complex time series from data very rapidly.
To implement RC in edge computing, it is highly important to reduce the amount of computational resources that RC requires.
We propose methods that reduce the size of the reservoir by inputting the past or drifting states of the reservoir to the output layer at the current time step.
arXiv Detail & Related papers (2020-06-11T06:11:03Z)
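The idea summarized in the entry above, feeding past reservoir states into the readout so that a smaller reservoir can stand in for a larger one, can be sketched roughly as follows; this is an illustrative reconstruction, not the authors' code, and the delays parameter name is an assumption.

```python
import numpy as np

def concatenate_past_states(states, delays=(0, 1, 2)):
    """Build readout features by stacking the current reservoir state with
    copies of the state from `delays` steps in the past (edge-padded)."""
    states = np.asarray(states)                   # shape (T, n_units)
    T = states.shape[0]
    cols = []
    for d in delays:
        if d == 0:
            cols.append(states)
        else:
            pad = np.repeat(states[:1], d, axis=0)         # repeat the first state d times
            cols.append(np.vstack([pad, states[:T - d]]))  # states delayed by d steps
    return np.hstack(cols)                        # shape (T, n_units * len(delays))

# A 50-unit reservoir with three delay taps exposes 150 features to the linear
# readout, roughly comparable to a 150-unit reservoir but with fewer recurrent
# units to realize in hardware or on an edge device.
```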
- Sparsity in Reservoir Computing Neural Networks [3.55810827129032]
Reservoir Computing (RC) is a strategy for designing Recurrent Neural Networks characterized by strikingly efficient training.
In this paper, we empirically investigate the role of sparsity in RC network design from the perspective of the richness of the developed temporal representations.
arXiv Detail & Related papers (2020-06-04T15:38:17Z)
- When Residual Learning Meets Dense Aggregation: Rethinking the Aggregation of Deep Neural Networks [57.0502745301132]
We propose Micro-Dense Nets, a novel architecture with global residual learning and local micro-dense aggregations.
Our micro-dense block can be integrated with neural architecture search based models to boost their performance.
arXiv Detail & Related papers (2020-04-19T08:34:52Z)
- Deep Adaptive Inference Networks for Single Image Super-Resolution [72.7304455761067]
Single image super-resolution (SISR) has witnessed tremendous progress in recent years owing to the deployment of deep convolutional neural networks (CNNs).
In this paper, we take a step forward to address this issue by leveraging adaptive inference networks for deep SISR (AdaDSR).
Our AdaDSR involves an SISR model as backbone and a lightweight adapter module which takes image features and resource constraint as input and predicts a map of local network depth.
arXiv Detail & Related papers (2020-04-08T10:08:20Z)
- Large-Scale Gradient-Free Deep Learning with Recursive Local Representation Alignment [84.57874289554839]
Training deep neural networks on large-scale datasets requires significant hardware resources.
Backpropagation, the workhorse for training these networks, is an inherently sequential process that is difficult to parallelize.
We propose a neuro-biologically-plausible alternative to backprop that can be used to train deep networks.
arXiv Detail & Related papers (2020-02-10T16:20:02Z)
- Dense Residual Network: Enhancing Global Dense Feature Flow for Character Recognition [75.4027660840568]
This paper explores how to enhance the local and global dense feature flow by fully exploiting hierarchical features from all the convolutional layers.
Technically, we propose an efficient and effective CNN framework, i.e., Fast Dense Residual Network (FDRN) for text recognition.
arXiv Detail & Related papers (2020-01-23T06:55:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.