PyRCN: Exploration and Application of ESNs
- URL: http://arxiv.org/abs/2103.04807v1
- Date: Mon, 8 Mar 2021 15:00:48 GMT
- Title: PyRCN: Exploration and Application of ESNs
- Authors: Peter Steiner (1), Azarakhsh Jalalvand (2), Simon Stone (1), Peter
Birkholz (2) ((1) Institute for Acoustics and Speech Communication,
Technische Universität Dresden, Dresden, Germany, (2) IDLab, Ghent
University - imec, Ghent, Belgium)
- Abstract summary: Echo State Networks (ESNs) are capable of solving temporal tasks, but with a substantially easier training paradigm based on linear regression.
This paper aims to facilitate the understanding of ESNs in theory and practice.
The paper introduces the Python toolbox PyRCN for developing, training and analyzing ESNs on arbitrarily large datasets.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As members of the Recurrent Neural Network family, and similar to
Long Short-Term Memory (LSTM) cells, Echo State Networks (ESNs) are capable of
solving temporal tasks, but with a substantially easier training paradigm based
on linear regression. However, optimizing hyper-parameters and efficiently
implementing the training process can be overwhelming for first-time users of
ESNs. This paper aims to facilitate the understanding of
ESNs in theory and practice. Treating ESNs as non-linear filters, we explain
the effect of the hyper-parameters using familiar concepts such as impulse
responses. Furthermore, the paper introduces the Python toolbox PyRCN (Python
Reservoir Computing Network) for developing, training and analyzing ESNs on
arbitrarily large datasets. The tool is based on widely-used scientific
packages, such as NumPy and SciPy, and offers an interface to scikit-learn.
Example code and results for classification and regression tasks are provided.
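To make the abstract's core ideas concrete, the following is a minimal from-scratch sketch of an ESN, not PyRCN's actual API: a fixed random reservoir acts as a non-linear filter whose impulse response is governed by the spectral radius, and only the linear readout is trained, here with scikit-learn's ridge regression. All names and numbers (run_reservoir, the leak rate, the toy delay task) are illustrative assumptions rather than code from the paper.

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(42)
n_reservoir, n_inputs = 200, 1

# Fixed random input and recurrent weights. The recurrent matrix is rescaled
# to a spectral radius of 0.9; this hyper-parameter controls how quickly an
# impulse fades from the reservoir (the "impulse response" view).
W_in = rng.uniform(-1.0, 1.0, size=(n_reservoir, n_inputs))
W = rng.uniform(-0.5, 0.5, size=(n_reservoir, n_reservoir))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def run_reservoir(u, leak=0.3):
    # Drive the reservoir with an input sequence u of shape (T, n_inputs)
    # and collect the leaky-integrated tanh states.
    x = np.zeros(n_reservoir)
    states = np.empty((len(u), n_reservoir))
    for t, u_t in enumerate(u):
        x = (1.0 - leak) * x + leak * np.tanh(W_in @ u_t + W @ x)
        states[t] = x
    return states

# Toy regression task: reproduce the input signal delayed by five steps.
T = 1000
u = rng.uniform(-0.5, 0.5, size=(T, n_inputs))
y = np.roll(u[:, 0], 5)

states = run_reservoir(u)
readout = Ridge(alpha=1e-3)            # training reduces to linear regression
readout.fit(states[100:], y[100:])     # discard a short washout period
print("train R^2:", readout.score(states[100:], y[100:]))

# Impulse response: a single unit impulse decays at a rate set by the
# spectral radius and the leak rate.
impulse = np.zeros((50, n_inputs))
impulse[0] = 1.0
decay = np.linalg.norm(run_reservoir(impulse), axis=1)

According to the abstract, PyRCN wraps this same workflow behind a scikit-learn interface built on NumPy and SciPy, so hyper-parameters such as the spectral radius can be tuned with standard scikit-learn model-selection tools.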
Related papers
- SparseProp: Efficient Event-Based Simulation and Training of Sparse Recurrent Spiking Neural Networks [4.532517021515834]
Spiking Neural Networks (SNNs) are biologically-inspired models that are capable of processing information in streams of action potentials.
We introduce SparseProp, a novel event-based algorithm for simulating and training sparse SNNs.
arXiv Detail & Related papers (2023-12-28T18:48:10Z)
- cito: An R package for training neural networks using torch [0.0]
'cito' is a user-friendly R package for deep learning (DL) applications.
It allows specifying DNNs in the familiar formula syntax used by many R packages.
'cito' includes many user-friendly functions for model plotting and analysis.
arXiv Detail & Related papers (2023-03-16T18:54:20Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- Analytic Learning of Convolutional Neural Network For Pattern Recognition [20.916630175697065]
Training convolutional neural networks (CNNs) with back-propagation (BP) is time-consuming and resource-intensive.
We propose an analytic convolutional neural network learning method (ACnnL).
ACnnL builds a closed-form solution similar to its counterpart, but differs in its regularization constraints.
arXiv Detail & Related papers (2022-02-14T06:32:21Z)
- Contextual HyperNetworks for Novel Feature Adaptation [43.49619456740745]
A Contextual HyperNetwork (CHN) generates parameters for extending the base model to a new feature.
At prediction time, the CHN requires only a single forward pass through a neural network, yielding a significant speed-up.
We show that this system obtains improved few-shot learning performance for novel features over existing imputation and meta-learning baselines.
arXiv Detail & Related papers (2021-04-12T23:19:49Z)
- Optimizing Memory Placement using Evolutionary Graph Reinforcement Learning [56.83172249278467]
We introduce Evolutionary Graph Reinforcement Learning (EGRL), a method designed for large search spaces.
We train and validate our approach directly on the Intel NNP-I chip for inference.
We additionally achieve 28-78% speed-up compared to the native NNP-I compiler on all three workloads.
arXiv Detail & Related papers (2020-07-14T18:50:12Z)
- RIFLE: Backpropagation in Depth for Deep Transfer Learning through Re-Initializing the Fully-connected LayEr [60.07531696857743]
Fine-tuning a deep convolutional neural network (CNN) from a pre-trained model helps transfer knowledge learned from larger datasets to the target task.
We propose RIFLE, a strategy that deepens backpropagation in transfer learning settings by periodically re-initializing the fully-connected layer (a minimal sketch of this idea follows after this list).
RIFLE brings meaningful updates to the weights of deep CNN layers and improves low-level feature learning.
arXiv Detail & Related papers (2020-07-07T11:27:43Z)
- Large-Scale Gradient-Free Deep Learning with Recursive Local Representation Alignment [84.57874289554839]
Training deep neural networks on large-scale datasets requires significant hardware resources.
Backpropagation, the workhorse for training these networks, is an inherently sequential process that is difficult to parallelize.
We propose a neuro-biologically-plausible alternative to backprop that can be used to train deep networks.
arXiv Detail & Related papers (2020-02-10T16:20:02Z)
- MSE-Optimal Neural Network Initialization via Layer Fusion [68.72356718879428]
Deep neural networks achieve state-of-the-art performance for a range of classification and inference tasks.
The use of gradient-based optimization combined with the non-convexity of the underlying problem renders learning sensitive to initialization.
We propose fusing neighboring layers of deeper networks that are trained with random variables.
arXiv Detail & Related papers (2020-01-28T18:25:15Z)
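As referenced in the RIFLE entry above, here is a minimal, hypothetical sketch of periodically re-initializing the fully-connected layer during fine-tuning. It is only an illustration of the idea stated in that paper's title and summary, written in PyTorch; every name (finetune_with_rifle, reinit_every, the optimizer settings) is an assumption, not code from that paper.

import torch
import torch.nn as nn

def finetune_with_rifle(model: nn.Module, fc: nn.Linear, loader,
                        epochs=30, reinit_every=10, lr=1e-3):
    # Fine-tune a pre-trained feature extractor `model` with readout `fc`,
    # re-initializing only the fully-connected layer every few epochs.
    opt = torch.optim.SGD(list(model.parameters()) + list(fc.parameters()),
                          lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for epoch in range(epochs):
        if epoch > 0 and epoch % reinit_every == 0:
            fc.reset_parameters()        # re-initialize the FC layer only
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(fc(model(x)), y)
            loss.backward()
            opt.step()

Re-initializing the readout forces it to be relearned, which, per the summary above, pushes meaningful gradient updates into the deeper CNN layers during transfer learning.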
This list is automatically generated from the titles and abstracts of the papers on this site.