Improving the Performance of Echo State Networks Through Feedback
- URL: http://arxiv.org/abs/2312.15141v1
- Date: Sat, 23 Dec 2023 02:34:50 GMT
- Title: Improving the Performance of Echo State Networks Through Feedback
- Authors: Peter J. Ehlers, Hendra I. Nurdin, Daniel Soh
- Abstract summary: Reservoir computing, using nonlinear dynamical systems, offers a cost-effective alternative to neural networks.
A potential drawback of ESNs is that the fixed reservoir may not offer the complexity needed for specific problems.
In this paper, we demonstrate that by feeding some component of the reservoir state back into the network through the input, we can drastically improve upon the performance of a given ESN.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reservoir computing, using nonlinear dynamical systems, offers a
cost-effective alternative to neural networks for complex tasks involving
processing of sequential data, time series modeling, and system identification.
Echo state networks (ESNs), a type of reservoir computer, mirror neural
networks but simplify training. They apply fixed, random linear transformations
to the internal state, followed by a nonlinear activation. This process, guided by
input signals and linear regression, adapts the system to match target
characteristics, reducing computational demands. A potential drawback of ESNs
is that the fixed reservoir may not offer the complexity needed for specific
problems. While directly altering (training) the internal ESN would reintroduce
the computational burden, an indirect modification can be achieved by
redirecting some output as input. This feedback can influence the internal
reservoir state, yielding ESNs with enhanced complexity suitable for broader
challenges. In this paper, we demonstrate that by feeding some component of the
reservoir state back into the network through the input, we can drastically
improve upon the performance of a given ESN. We rigorously prove that, for any
given ESN, feedback will almost always improve the accuracy of the output. For
a set of three tasks, each representing different problem classes, we find that
with feedback the average error measures are reduced by $30\%-60\%$.
Remarkably, feedback provides at least an equivalent performance boost to
doubling the initial number of computational nodes, a computationally expensive
and technologically challenging alternative. These results demonstrate the
broad applicability and substantial usefulness of this feedback scheme.
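To make the scheme concrete, below is a minimal NumPy sketch of an ESN in which a random linear projection of the previous reservoir state is fed back through the input channel. The feedback vector `w_fb`, the toy task, and all sizes are illustrative assumptions; the paper optimizes the feedback rather than drawing it at random, so this shows the plumbing of the architecture, not the full method.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, washout = 100, 2000, 200          # reservoir size, sequence length, discarded steps

# Fixed random reservoir: internal weights A (rescaled for the echo state property),
# input weights C, bias b, and a feedback vector (random here; optimized in the paper).
A = rng.normal(size=(N, N)); A *= 0.9 / max(abs(np.linalg.eigvals(A)))
C = rng.normal(size=N)
b = rng.normal(size=N) * 0.1
w_fb = rng.normal(size=N) / np.sqrt(N)

def run(u, feedback=True):
    """Drive the reservoir with input u; optionally add state feedback to the input."""
    X = np.zeros((len(u), N))
    x = np.zeros(N)
    for t in range(len(u)):
        u_eff = u[t] + (w_fb @ x if feedback else 0.0)   # feedback enters through the input
        x = np.tanh(A @ x + C * u_eff + b)
        X[t] = x
    return X

# Toy task: predict a nonlinear moving average of the input.
u = rng.uniform(-0.5, 0.5, size=T)
y = np.array([u[max(t - 3, 0):t + 1].sum() ** 2 for t in range(T)])

for fb in (False, True):
    X = run(u, feedback=fb)
    Xw, yw = X[washout:], y[washout:]
    # Linear readout trained by ridge regression, as in standard ESN practice.
    W_out = np.linalg.solve(Xw.T @ Xw + 1e-6 * np.eye(N), Xw.T @ yw)
    nmse = np.mean((Xw @ W_out - yw) ** 2) / np.var(yw)
    print(f"feedback={fb}: NMSE = {nmse:.4f}")
```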
Related papers
- Parallel Spiking Unit for Efficient Training of Spiking Neural Networks [8.912926151352888]
Spiking Neural Networks (SNNs) are a promising direction for artificial intelligence, but they are hampered by an inherent sequential computational dependency.
This paper introduces the innovative Parallel Spiking Unit (PSU) and its two derivatives, the Input-aware PSU (IPSU) and Reset-aware PSU (RPSU).
These variants skillfully decouple the leaky integration and firing mechanisms in spiking neurons while probabilistically managing the reset process.
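The summary above hints that the sequential bottleneck comes from coupling leaky integration to firing and reset. As a speculative sketch (not the paper's actual PSU): once the reset is removed, or handled separately as the RPSU does probabilistically, leaky integration is a linear filter, so all timesteps can be computed in one matrix product and firing becomes an elementwise threshold.

```python
import numpy as np

T, beta, theta = 8, 0.9, 1.0                           # timesteps, leak factor, threshold
I = np.random.default_rng(1).uniform(0, 0.5, size=T)   # input currents

# Sequential LIF without reset: V[t] = beta * V[t-1] + I[t].
V_seq = np.zeros(T)
for t in range(T):
    V_seq[t] = (beta * V_seq[t - 1] if t else 0.0) + I[t]

# Parallel form: without reset, V = L @ I where L[t, s] = beta**(t - s) for s <= t,
# so integration is one matrix product and firing decouples into a threshold.
t_idx = np.arange(T)
L = np.where(t_idx[:, None] >= t_idx[None, :],
             beta ** (t_idx[:, None] - t_idx[None, :]), 0.0)
V_par = L @ I
spikes = (V_par >= theta).astype(float)                # elementwise firing

assert np.allclose(V_seq, V_par)                       # the two computations agree
print("spike train:", spikes)
```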
arXiv Detail & Related papers (2024-02-01T09:36:26Z)
- A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs).
MEMTL outperforms benchmark methods in both inference accuracy and mean squared error without requiring additional training data.
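A generic PyTorch sketch of the shared-backbone, multi-head pattern described above; the layer sizes and the simple averaging ensemble are assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn

class MultiHeadEnsemble(nn.Module):
    """A shared backbone feeding several prediction heads; outputs are ensembled."""
    def __init__(self, in_dim=16, hidden=64, out_dim=4, n_heads=3):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.heads = nn.ModuleList(nn.Linear(hidden, out_dim) for _ in range(n_heads))

    def forward(self, x):
        z = self.backbone(x)                     # one forward pass shared by all heads
        preds = torch.stack([h(z) for h in self.heads])
        return preds.mean(dim=0)                 # simple averaging ensemble

model = MultiHeadEnsemble()
print(model(torch.randn(8, 16)).shape)           # torch.Size([8, 4])
```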
arXiv Detail & Related papers (2023-09-02T11:01:16Z)
- Incrementally-Computable Neural Networks: Efficient Inference for Dynamic Inputs [75.40636935415601]
Deep learning often faces the challenge of efficiently processing dynamic inputs, such as sensor data or user inputs.
We take an incremental computing approach, looking to reuse calculations as the inputs change.
We apply this approach to the transformers architecture, creating an efficient incremental inference algorithm with complexity proportional to the fraction of modified inputs.
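For purely position-wise computations, the caching idea is straightforward, as the minimal sketch below shows; attention layers mix positions and need the more careful treatment the paper develops. The layer and sizes here are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(32, 32))                    # a position-wise (token-wise) layer

def layer(x):                                    # applied independently to each position
    return np.maximum(x @ W, 0.0)

tokens = rng.normal(size=(128, 32))
cache = layer(tokens)                            # full pass once, then cache outputs

# The input changes at a few positions only.
changed = [5, 40, 41]
tokens[changed] += rng.normal(size=(len(changed), 32))

cache[changed] = layer(tokens[changed])          # recompute only the modified fraction
assert np.allclose(cache, layer(tokens))         # matches a full recomputation
```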
arXiv Detail & Related papers (2023-07-27T16:30:27Z)
- Investigation of Proper Orthogonal Decomposition for Echo State Networks [3.645570876484422]
Echo State Networks (ESNs) are a type of recurrent neural network that yields promising results in representing time series and nonlinear dynamical systems.
A large number of states not only makes each time-step computation more costly but may also pose robustness issues.
One way to circumvent this complexity issue is through model order reduction strategies such as the Proper Orthogonal Decomposition (POD) and its variants (POD-DEIM).
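A minimal sketch of POD applied to reservoir states, under assumed sizes and a random reservoir: collect state snapshots, take an SVD, and project onto the leading modes (the DEIM refinement for the nonlinearity is omitted).

```python
import numpy as np

rng = np.random.default_rng(3)
N, T, k = 300, 1000, 20                      # full state size, snapshots, reduced size

A = rng.normal(size=(N, N)); A *= 0.9 / max(abs(np.linalg.eigvals(A)))
C = rng.normal(size=N)
u = rng.uniform(-0.5, 0.5, size=T)

# Collect snapshots of the full reservoir state.
X = np.zeros((T, N)); x = np.zeros(N)
for t in range(T):
    x = np.tanh(A @ x + C * u[t])
    X[t] = x

# POD: the leading left singular vectors of the snapshot matrix give the modes.
U, S, _ = np.linalg.svd(X.T, full_matrices=False)
Phi = U[:, :k]                               # N x k orthonormal basis

energy = (S[:k] ** 2).sum() / (S ** 2).sum()
print(f"{k} modes capture {energy:.1%} of snapshot energy")

r = X @ Phi                                  # T x k reduced coordinates; a reduced model
                                             # evolves r and lifts back with Phi when needed
```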
arXiv Detail & Related papers (2022-11-30T17:23:25Z)
- CorrectNet: Robustness Enhancement of Analog In-Memory Computing for Neural Networks by Error Suppression and Compensation [4.570841222958966]
We propose a framework to enhance the robustness of neural networks under variations and noise.
We show that the inference accuracy of neural networks, which can drop to as low as 1.69% under variations and noise, can be recovered to close to its original level.
arXiv Detail & Related papers (2022-11-27T19:13:33Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Desire Backpropagation: A Lightweight Training Algorithm for Multi-Layer Spiking Neural Networks based on Spike-Timing-Dependent Plasticity [13.384228628766236]
Spiking neural networks (SNNs) are a viable alternative to conventional artificial neural networks.
We present desire backpropagation, a method to derive the desired spike activity of all neurons, including the hidden ones.
We trained three-layer networks to classify MNIST and Fashion-MNIST images, reaching accuracies of 98.41% and 87.56%, respectively.
arXiv Detail & Related papers (2022-11-10T08:32:13Z)
- Learning in Feedback-driven Recurrent Spiking Neural Networks using full-FORCE Training [4.124948554183487]
We propose a supervised training procedure for recurrent spiking neural networks (RSNNs) in which a second network is introduced only during training.
The proposed training procedure consists of generating targets for both recurrent and readout layers.
We demonstrate the improved performance and noise robustness of the proposed full-FORCE training procedure on modeling eight dynamical systems.
arXiv Detail & Related papers (2022-05-26T19:01:19Z)
- Edge Rewiring Goes Neural: Boosting Network Resilience via Policy Gradient [62.660451283548724]
ResiNet is a reinforcement learning framework to discover resilient network topologies against various disasters and attacks.
We show that ResiNet achieves near-optimal resilience gains on multiple graphs while balancing utility, outperforming existing approaches by a large margin.
arXiv Detail & Related papers (2021-10-18T06:14:28Z)
- Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
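The underlying trick, familiar from equilibrium models, is to differentiate through the fixed point itself rather than through the iterations that reached it. A small NumPy sketch for a stand-in equilibrium model z* = tanh(W z* + U x), not the paper's SNN, using the implicit function theorem:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
W = rng.normal(size=(n, n)) * 0.1            # small weights so the iteration contracts
Ux = rng.normal(size=n)                      # input drive, treated as constant here

def f(z):
    return np.tanh(W @ z + Ux)

# Forward: iterate to the equilibrium z* = f(z*).
z = np.zeros(n)
for _ in range(200):
    z = f(z)

# Backward: for a loss L(z*) with g = dL/dz*, the implicit function theorem gives
# dL/dW = ((1 - z*^2) * v) z*^T, where v solves (I - J)^T v = g and J = diag(1 - z*^2) W.
g = 2 * z                                    # example loss L = ||z*||^2
J = (1 - z ** 2)[:, None] * W
v = np.linalg.solve((np.eye(n) - J).T, g)
dL_dW = np.outer((1 - z ** 2) * v, z)

# Check one entry against a finite difference through re-solving the fixed point.
eps, (i, j) = 1e-6, (1, 2)
W[i, j] += eps
z2 = np.zeros(n)
for _ in range(200):
    z2 = np.tanh(W @ z2 + Ux)
print(dL_dW[i, j], (z2 @ z2 - z @ z) / eps)  # these should agree closely
```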
arXiv Detail & Related papers (2021-09-29T07:46:54Z)
- Learning to Solve the AC-OPF using Sensitivity-Informed Deep Neural Networks [52.32646357164739]
We propose a deep neural network (DNN) to approximate solutions of the AC optimal power flow (AC-OPF) problem.
The proposed sensitivity-informed DNN (SIDNN) is compatible with a broad range of OPF schemes.
It can be seamlessly integrated into other learning-to-OPF schemes.
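A hedged PyTorch sketch of the sensitivity-informed idea: regularize the network's input-output Jacobian toward known solution sensitivities. The model, data, and the weight `lam` are stand-ins; in practice the targets would come from an OPF solver.

```python
import torch
import torch.nn as nn
from torch.autograd.functional import jacobian

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(6, 32), nn.Tanh(), nn.Linear(32, 3))

x = torch.randn(6)          # a load profile (stand-in)
y = torch.randn(3)          # the OPF solution for x (stand-in)
S = torch.randn(3, 6)       # dy/dx sensitivities from the solver (stand-in)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 0.1                   # weight on the sensitivity term (assumed)
for step in range(100):
    opt.zero_grad()
    fit = ((model(x) - y) ** 2).mean()             # fit the solution itself
    J = jacobian(model, x, create_graph=True)      # 3 x 6 network Jacobian
    sens = ((J - S) ** 2).mean()                   # match the known sensitivities
    loss = fit + lam * sens
    loss.backward()
    opt.step()
```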
arXiv Detail & Related papers (2021-03-27T00:45:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.